\section{Introduction} \label{sec:Intro}
The close interdependency between the physical subsystem (power grid) and its control subsystem (Supervisory Control and Data Acquisition - SCADA or Wide-Area Monitoring Protection and Control - WAMPAC) in modern power grids makes them vulnerable to simultaneous cyber-physical attacks that target both subsystems. Such attacks can cause devastating consequences, e.g., the attack on Ukraine's power grid left 225,000 people without power for days~\cite{UkraineAttack},
{as the attacker simultaneously opened circuit breakers (physical attack) while keeping the system operators unaware by jamming the phone lines and launching the KillDisk server-wiping malware (cyber attack).}
The main challenge in dealing with cyber-physical attacks is accurately identifying the damaged grid elements (e.g., failed transmission lines), because the cyber attack deprives the control center of measurements (e.g., breaker status) from the attacked area. Efforts on countering such attacks have therefore focused on estimating the grid state inside the attacked area using power flow models and measurements from outside that area.
Specifically, assuming the post-attack power injections to be known, \cite{Soltan18TCNS} developed methods to estimate the grid state under cyber-physical attacks using the \emph{direct-current (DC) power flow model}, and \cite{Soltan17PES} developed similar methods using the \emph{alternating-current (AC) power flow model}. Recently, \cite{yudi20SmartGridComm} further extended such methods to handle unknown post-attack power injections within the attacked area by proposing a linear programming (LP) based algorithm that can correctly identify all the failed links (i.e., transmission lines) under certain conditions. These conditions, however, involve the ground truth link states {(i.e., whether a link has truly failed or not)} within the attacked area and are thus not verifiable in practice.
In this work, we advance the work in \cite{yudi20SmartGridComm} by developing conditions and algorithms to verify the correctness of link states estimated by the LP-based algorithm proposed therein.
{Besides providing more confidence in the estimated link states, such algorithms can also facilitate post-attack recovery planning, which schedules repairs based on the failure localization results under resource constraints. As no current algorithm can guarantee $100\%$ localization accuracy and false alarms are costly,
it is highly desirable to verify the correctness of the failure localization results before scheduling repairs. The role of failure localization verification is shown in red in Fig.~\ref{fig:role_verification}, where the set $\hat{F}$ contains the estimated failed links. One application of the proposed method is to guide crew dispatch during line repair/restoration.
}
\begin{figure}[tb]
\centering
\includegraphics[width=.9\linewidth]{figures/Fig_role_of_verification_use.pdf}
\vspace{-.5em}
\caption{The role of failure localization verification.
} \label{fig:role_verification}
\vspace{-1em}
\end{figure}
\subsection{Related Work}
State estimation is of fundamental importance for the supervisory control of the power grid \cite{huang2012state}. In particular, link status identification, i.e., failed link localization, is critical for post-attack failure assessment and recovery planning. To detect links failed by physical attacks, early works~\cite{tate2008line, tate2009double} formulated the problem as a mixed-integer program, which does not scale to multi-link failures. Later works tackled the problem by formulating it as a sparse recovery problem over an overcomplete representation~\cite{Zhu12TPS, chen2014efficient}, which was then relaxed into an LP for computational efficiency, or by applying machine learning techniques~\cite{garcia2015line, zhao2019learning}.
Localizing failed links is more difficult under joint cyber-physical attacks. For cyber attacks that block sensor data to the control center as considered in this work, \cite{Soltan18TCNS} proposed an LP-based algorithm and graph-theoretic conditions for perfect failure localization under the DC power flow model. In \cite{soltan2018react}, a heuristic algorithm was proposed to handle cyber attacks that distort sensor data or inject stealthy data. Moreover, \cite{soltan2018expose} modified the algorithm and the theoretical guarantees in \cite{Soltan18TCNS} according to the AC power flow model. However, the above works were all based on the assumption that the power grid remained connected after the failures, which may not be true under multi-link failures~\cite{yudi20SmartGridComm}. Recently, \cite{yudi20SmartGridComm} eliminated this assumption by developing an LP-based algorithm that can jointly estimate the link states and the load shedding values within the attacked area. However, despite the empirical success of this algorithm, there is no existing method for verifying the correctness of its estimates.
{Another line of related works is fault localization, e.g., \cite{Adu01TPD,salim2009extended,Codino17TEC,saha2009fault} and references therein. These works differ from our work in the sense that they (i) target naturally-occurring faults which exhibit signatures not necessarily present during malicious attacks, (ii) mostly focus on finding the exact location of faults along a line as opposed to localizing the failed lines, and (iii) do not traditionally consider the lack of information due to cyber attacks. }
\looseness=-1
\subsection{Summary of Contributions}
We aim at estimating the states (failed/operational) of links (transmission lines) in a smart grid under a joint cyber-physical attack, where the cyber attack blocks sensor data from the attacked area and the physical attack disconnects certain links, possibly disconnecting the grid. Our contributions are as follows: \begin{enumerate}
\item
{We provide conditions and a corresponding algorithm to verify the correctness of failure localization results (the states of links) using only observable information. Compared to previous recovery conditions in~\cite{yudi20SmartGridComm,Huang20arXiv} that cannot be tested during operation, the proposed algorithm requires no information about the ground truth link states and is thus applicable after cyber-physical attacks.}\looseness=-1
\item We provide a further theoretical condition for verifying the states of potentially more links based on observable information and the link states that are already verified by the above algorithm,
as well as the corresponding verification algorithm.
\item We show that our conditions and algorithms can be easily adapted to incorporate the knowledge of connectivity if the post-attack grid is known to remain connected as assumed in most existing works.
\item Our evaluations on
{the Polish grid and the IEEE 300-bus system}
show that the proposed algorithms can verify {$60$--$80\%$} of failed links and {$40$--$50\%$} of operational links in general, and these numbers increase to {$80$--$95\%$ and $70$--$90\%$} if the post-attack grid is known to remain connected, which provides valuable information for prioritizing repairs during recovery.
\end{enumerate}
\textbf{Roadmap.} Section~\ref{sec:Problem Formulation} formulates our problem. Section~\ref{sec:Localizing Failed Links} recaps our previously proposed algorithm \cite{yudi20SmartGridComm} and its properties. In Section~\ref{sec: verification_cond}, theoretical conditions and two algorithms are developed to verify the correctness of the estimated link states. Section~\ref{sec:Performance Evaluation} evaluates the proposed algorithms on real grid topologies. Finally, Section~\ref{sec:Conclusion} concludes the paper.
\section{Problem Formulation}\label{sec:Problem Formulation}
\subsection{Power Grid Model}
We adopt the DC power flow model.
The power grid is modeled as a connected undirected graph $G=(V,E)$, where $V$ denotes the set of nodes (buses) and $E$ the set of links (transmission lines). Each link $e=(s,t)$ is associated with a \emph{reactance} $r_{st}$ ($r_{st} = r_{ts}$) and a state $\in \{\mbox{``operational''}, \mbox{``failed''}\}$ (assumed to be operational before the attack). Let $\bm{\Gamma}:= \text{diag}\{\frac{1}{r_e}\}_{e\in E}$. Each node $v$ is associated with a phase angle $\theta_v$ and an active power injection $p_v$, which are coupled by the DC power flow equation:
\begin{align}\label{eq:B theta = p}
\boldsymbol{B} \boldsymbol{\theta} = \boldsymbol{p},
\end{align}
where $\boldsymbol{\theta}:=(\theta_v)_{v\in V}$, $\boldsymbol{p}:=(p_v)_{v\in V}$, and $\boldsymbol{B}:=(b_{uv})_{u,v\in V} \in \mathbb{R}^{|V|\times|V|}$ is the \emph{admittance matrix}, defined as:
\begin{align}
b_{uv} &=\left\{\begin{array}{ll}
0 & \mbox{if }u\neq v, (u,v)\not\in E,\\
-1/r_{uv} & \mbox{if } u\neq v, (u,v)\in E,\\
-\sum_{w\in V\setminus \{u\}}b_{uw} &\mbox{if }u=v.
\end{array}\right.
\end{align}
Given an arbitrary orientation of the links, the topology of $G$ can also be represented by the \emph{incidence matrix} $\boldsymbol{D}\in \{-1,0,1\}^{|V|\times|E|}$, {where the entry for $u\in V$ and $e\in E$ is\looseness=-1
\begin{align}
D_{u,e} &= \left\{\begin{array}{ll}
1 & \mbox{if link }e\mbox{ comes out of node }u,\\
-1 & \mbox{if link }e\mbox{ goes into node }u,\\
0 & \mbox{otherwise.}
\end{array}\right.
\end{align}
}
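To make these definitions concrete, the following minimal Python sketch (illustrative only; a toy 3-node ring with made-up reactances and injections) constructs $\boldsymbol{B}$ from $\boldsymbol{D}$ and $\bm{\Gamma}$ via the identity $\boldsymbol{B} = \boldsymbol{D}\bm{\Gamma}\boldsymbol{D}^T$ and solves \eqref{eq:B theta = p} after grounding one reference angle, since $\boldsymbol{B}$ is singular:
\begin{verbatim}
import numpy as np

# Toy 3-node ring with links (0,1), (1,2), (0,2); values illustrative.
D = np.array([[ 1,  0,  1],
              [-1,  1,  0],
              [ 0, -1, -1]], dtype=float)  # incidence matrix
r = np.array([0.1, 0.2, 0.1])              # link reactances
Gamma = np.diag(1.0 / r)

B = D @ Gamma @ D.T                        # admittance matrix

p = np.array([1.0, -0.4, -0.6])            # injections (sum to zero)

# B has rank |V|-1 for a connected grid; fix node 0 as the angle
# reference and solve the reduced system for the remaining angles.
theta = np.zeros(3)
theta[1:] = np.linalg.solve(B[1:, 1:], p[1:])
\end{verbatim}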
We assume that each node is equipped with a phasor measurement unit (PMU) that measures its phase angle, and remote terminal units (RTUs) that measure the active power injection, as well as the (breaker) states and the power flows of its incident links. These reports are sent to the control center, where the PMU measurements are communicated over a secure WAMPAC network~{\cite{WASA}}, and the RTU measurements over a more vulnerable SCADA network.
\subsection{Attack Model}
\begin{figure}[tb]
\centering
\includegraphics[width=.6\linewidth]{figures/Fig1.pdf}
\vspace{-1em}
\caption{A cyber-physical attack that blocks information from the attacked area $H$ while disconnecting certain links within $H$.
} \label{fig:cyber_physical_attack_hyd}
\vspace{-.5em}
\end{figure}
As illustrated in Fig.~\ref{fig:cyber_physical_attack_hyd}, a joint cyber-physical attack on an area $H=(V_H,E_H)$ (a subgraph induced by a set of nodes $V_H\subseteq V$) consists of: (i) a cyber attack that blocks reports from the nodes in $V_H$, and (ii) a physical attack that disconnects a set $F\subseteq E_H$ of links within $H$, where $E_H$ is the set of links with both endpoints in $V_H$.
In contrast to previous works \cite{Soltan18TCNS, Zhu12TPS, chen2014efficient}, we consider that the grid may be decomposed into islands after the attack, which leads to possible changes in $\boldsymbol{p}$. Let $\boldsymbol{\Delta} = (\Delta_v)_{v\in V} := \boldsymbol{p}-\boldsymbol{p}'$ denote the change in active power injections, where $\boldsymbol{p}'$ denotes the active power injections after the attack.
Define
\begin{align}\label{eq:tilde{D}}
\tilde{\boldsymbol{D}} := \boldsymbol{D} \bm{\Gamma} \text{diag}\{\boldsymbol{D}^T \boldsymbol{\theta}'\},
\end{align}
where $\boldsymbol{\theta}'$ denotes post-attack phase angles.
{For link $e=(u,v)$, $\bm{\tilde{D}}_{u,e} =-\bm{\tilde{D}}_{v,e} = \frac{\theta_u'-\theta_v'}{r_{uv}}$ denotes
the post-attack power flow on $e$ if it is operational. If link $e$ fails after attack, then $\bm{\tilde{D}}_{u,e}$ represents the ``hypothetical power flow''.}
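In code, continuing the sketch above with hypothetical post-attack angles (again illustrative), $\tilde{\boldsymbol{D}}$ follows directly from \eqref{eq:tilde{D}}:
\begin{verbatim}
theta_p = np.array([0.0, -0.05, -0.08])  # post-attack angles (toy values)
flows = Gamma @ (D.T @ theta_p)          # (hypothetical) flow per link
D_tilde = D @ np.diag(flows)             # D * Gamma * diag(D^T theta')
# Entry (u, e) is the post-attack flow out of node u on link e,
# hypothetical if e has failed.
\end{verbatim}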
\subsection{Failure Localization Problem}
\begin{table}[tb]
\footnotesize
\renewcommand{\arraystretch}{1.3}
\caption{Notations} \label{tab:notation}
\vspace{-.5em}
\centering
\begin{tabular}{c|l}
\hline
Notation & Description \\
\hline
$G=(V,E)$ & power grid \\
\hline
$H$, $\bar{H}$ & attacked/unattacked area \\
\hline
$F$ & set of failed links \\
\hline
$\boldsymbol{B}$ & admittance matrix \\
\hline
$\boldsymbol{D}$ & incidence matrix \\
\hline
$\boldsymbol{\theta}$ & vector of phase angles \\
\hline
$\boldsymbol{p}$ & vector of active power injections \\
\hline
$\boldsymbol{\Delta}$ & vector of changes in active power injections
\\
\hline
$\bm{x}$ & vector of failure indicators \\
\hline
\end{tabular}
\end{table}
\normalsize
\textbf{Notation.} The main notations are summarized in Table~\ref{tab:notation}.
Moreover, given a subgraph $X$ of $G$, $V_X$ and $E_X$ denote the subsets of nodes/links in $X$, and $\boldsymbol{x}_X$ denotes the subvector of a vector $\boldsymbol{x}$ containing elements corresponding to $X$. Similarly, given two subgraphs $X$ and $Y$ of ${G}$, $\boldsymbol{A}_{X|Y}$ denotes the submatrix of a matrix $\boldsymbol{A}$ containing rows corresponding to $X$ and columns corresponding to $Y$. We use $[A, B]$ to denote the horizontal concatenation of matrices $A, B$ and $\bm{I}_{n}$ to denote the $n\times n$ identity matrix.
We use $\boldsymbol{D}_H\in \{-1,0,1\}^{|V_H|\times|E_H|}$ and $\tilde{\boldsymbol{D}}_H\in \mathbb{R}^{|V_H|\times|E_H|}$ to denote the submatrices of $\boldsymbol{D}$ and $\tilde{\boldsymbol{D}}$ for the attacked area $H$.
For each variable $x$, we use $x'$ to denote its value after the attack. {We follow the convention that $|x|$ indicates the absolute value if $x$ is a scalar and $|A|$ denotes the cardinality if $A$ is a set.}
\textbf{Goal.}
Our goal is to localize the failed links $F$ within the attacked area, based on knowledge before the attack and measurements from the unattacked area $\bar{H}$ after the attack.
In contrast to \cite{yudi20SmartGridComm}, we aim at obtaining estimates with \emph{verifiable correctness}. \looseness=-1
{\textbf{Assumptions.} Our analysis and solution are based on the following assumptions:\\
1. \emph{DC power flow model:} This is an approximation of the AC power flow model by neglecting resistive losses and assuming a uniform voltage magnitude. Due to its computational efficiency, DC power flow model has been widely used for analyzing link failures in large power grids~\cite{tate2008line,tate2009double,Zhu12TPS,chen2014efficient, Soltan18TCNS,zhao2019learning,soltan2018react}. We leave the extension to the AC power flow model to future work. \\
2. \emph{Availability of phase angles:} We assume that the phase angle at every bus is available before/after the attack. Before-attack observability from PMU measurements is consistent with the goal of PMU deployment, at least in North American bulk transmission systems~\cite{PMUdeployment}. Under the North American SynchroPhasor Initiative (NASPI) \cite{dagle2010north}, the number of PMUs is steadily growing, and some utilities have already achieved full observability in their networks, e.g., Dominion Power has piloted the PMU-based linear state estimator \cite{jones2013three,jones2014methodology}. These trends indicate that it is just a matter of time that complete observability through PMUs is achieved.
The post-attack observability can be achieved by securing PMU measurements through the stronger cyber security requirements of WAMPAC~\cite{WAMPACsecurity}, or through inference when $B_{\bar{H}|H}$ has a full column rank~\cite{yudi20SmartGridComm}. \\
3. \emph{$\theta_s' \ne \theta_t'$ for each link $(s,t)\in E_H$:} This assumption simply means that we only focus on the states of links in $H$ that will carry power flow if not failed, as the states of links carrying no flow have no impact and thus cannot be identified~\cite{Soltan18TCNS, yudi20SmartGridComm}.
}
\section{Estimating Link States}\label{sec:Localizing Failed Links}
To our knowledge, the only algorithm for estimating link states (and hence localizing failed links) under a cyber-physical attack that can disconnect the grid is the \emph{Failed Link Detection (FLD)} algorithm proposed in \cite{yudi20SmartGridComm}. FLD has exhibited high accuracy in detecting failed links with few false alarms~\cite{yudi20SmartGridComm}. Our goal is to develop algorithms that verify the output of FLD. In this section, we briefly recap FLD and its existing (unverifiable) recovery conditions for completeness. \looseness=-1
\subsection{Existing Algorithm}
Let $\boldsymbol{x}_H \in \{0,1\}^{|E_H|}$ be an indicator vector such that $x_e=1$ if $e\in F$ and $x_e=0$ if $e \in E_H\setminus F$. It has been shown in \cite{yudi20SmartGridComm} that any feasible pair $(\boldsymbol{x}_H, \boldsymbol{\Delta}_H)$ must satisfy
\begin{align}
& \boldsymbol{\Delta}_H = \boldsymbol{B}_{H|G}(\boldsymbol{\theta}-\boldsymbol{\theta}') + \boldsymbol{D}_{H}\bm{\Gamma}_H\text{diag}\{\boldsymbol{D}_{G|H}^T \boldsymbol{\theta}'\}\vx_H, \label{eq:pf_constraint}\\
& p_v \ge {\Delta_v} \ge 0, ~~~\forall v \in \left\{ {u\: |u \in V_H, p_u > 0} \right\}, \label{eq:const_valid_start}\\
& p_v \le {\Delta_v} \le 0,~~~\forall v \in \left\{ {u\: |u \in V_H, p_u \le 0} \right\}, \label{eq:const_valid_load}
\end{align}
FLD formulates the problem of failure localization as an LP:
\begin{subequations}\label{eq1:L1_binary_load}
\begin{alignat}{2}
(\text{P1}) \quad &\min_{\boldsymbol{x}_H, \boldsymbol{\Delta}_H} \Arrowvert \boldsymbol{x}_H \Arrowvert_1 &\\
\mbox{s.t.} \quad
&\eqref{eq:pf_constraint}, \eqref{eq:const_valid_start}, \eqref{eq:const_valid_load},&\\
&{\rm{ }}{\bf{0}} \le {{\boldsymbol{x}}_H} \le {\bf{1}}.&
\end{alignat}
\end{subequations}
Problem~(P1) is the convex relaxation of a sparse-recovery-based formulation. After solving (P1) in polynomial time, FLD estimates the set of failed links as
\begin{align}\label{eq:F_Hat}
\hat{F}=\{e:x_e \geq \eta\},
\end{align}
where $\eta\in (0, 1)$ is a threshold for rounding the fractional solution of $\boldsymbol{x}_H$ to an integral solution ($\eta=0.5$ in this paper).
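As a concrete illustration, the following sketch (assuming NumPy/SciPy and precomputed inputs; all names are ours, not from \cite{yudi20SmartGridComm}) solves (P1) with an off-the-shelf LP solver and applies the rounding in \eqref{eq:F_Hat}:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def fld(B_HG, D_tilde_H, theta, theta_p, p_H, eta=0.5):
    """Minimal sketch of FLD (P1); inputs assumed precomputed.

    B_HG:      rows of B for V_H, all columns   (|V_H| x |V|)
    D_tilde_H: rows/columns of D_tilde for H    (|V_H| x |E_H|)
    p_H:       pre-attack injections within V_H
    Returns the index set of estimated failed links.
    """
    n_v, n_e = D_tilde_H.shape
    # Decision variables z = [x_H (n_e entries), Delta_H (n_v entries)].
    c = np.concatenate([np.ones(n_e), np.zeros(n_v)])  # min ||x_H||_1
    # Power-flow constraint:
    #   Delta_H - D_tilde_H x_H = B_{H|G} (theta - theta')
    A_eq = np.hstack([-D_tilde_H, np.eye(n_v)])
    b_eq = B_HG @ (theta - theta_p)
    bounds = [(0.0, 1.0)] * n_e                        # 0 <= x_H <= 1
    for pv in p_H:       # injection-change bounds per node type
        bounds.append((0.0, pv) if pv > 0 else (pv, 0.0))
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x_H = res.x[:n_e]
    return {e for e in range(n_e) if x_H[e] >= eta}    # rounding step
\end{verbatim}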
\subsection{Existing Recovery Conditions}\label{subsec:Existing Recovery Conditions}
FLD is known to recover the link states correctly under the following conditions~\cite{Huang20arXiv} (which improved the conditions in \cite{yudi20SmartGridComm}), where $\boldsymbol{x}^*_H$ and $\boldsymbol{\Delta}^*_H$ denote the true values of $\boldsymbol{x}_H$ and $\boldsymbol{\Delta}_H$. \looseness=-1
\subsubsection{Implicit Conditions}
Denote $V_{L} \subseteq V_H$ as the set containing nodes with $p_v\le 0$, and $V_G := V_H\setminus V_L$ as the remaining nodes in $V_H$ (with $p_v>0$). Accordingly, $\boldsymbol{\Delta}_i$ and $\boldsymbol{p}_i$ ($i=L, G$) denote the subvectors of $\boldsymbol{\Delta}_H$ and $\boldsymbol{p}_H$, respectively, corresponding to $V_i$, and $\tilde{\boldsymbol{D}}_i$ denotes the submatrix of $\tilde{\boldsymbol{D}}_H$ containing the rows corresponding to $V_i$. Given a set $Q_m := F\setminus \hat{F}$ of failed links that are missed and a set $Q_f := \hat{F}\setminus F$ of operational links that are falsely detected,
define $\bm{W}_m \in \{0,1\}^{|Q_m|\times |E_H|}$ as a binary matrix where $(W_m)_{i,j} = 1$ indicates the $i$-th missed link to be $e_j$, and define $\bm{W}_f \in \{0,1\}^{|Q_f|\times |E_H|}$ similarly such that $(W_f)_{i,k} = 1$ if the $i$-th false-alarmed link is $e_k$. Based on these notions, define\looseness=-1
\begin{subequations}\label{eq:inter_sub}
\begin{alignat}{2}
\bm{A}_D^T &:= [\tilde{\boldsymbol{D}}^T_{L}, -\tilde{\boldsymbol{D}}^T_{L}, -\tilde{\boldsymbol{D}}^T_{G}, \tilde{\boldsymbol{D}}^T_{G}] \in \mathbb{R}^{|E_H|\times 2|V_H|}, \\
\bm{A}_x^T &:= [-\bm{I}_{|E_H|}, \bm{I}_{|E_H|}]\in \mathbb{R}^{|E_H|\times 2|E_H|},\\
\bm{W}^T &:= [\bm{W}^T_m, -\bm{W}^T_f]\in \mathbb{R}^{|E_H|\times (|Q_m|+|Q_f|)}, \label{eq:bmW}\\
\bm{g}_D^T&:=[-(\boldsymbol{\Delta}_{L}^*)^T, (-\boldsymbol{p}_{L}')^T, (\boldsymbol{\Delta}_{G}^*)^T, (\boldsymbol{p}_{G}')^T],\\
\bm{g}_x^T &:= [(\boldsymbol{x}^*_H)^T, \bm{1}^T-(\boldsymbol{x}^*_H)^T]\in \mathbb{R}^{1\times 2|E_H|},\\
\bm{g}_w^T &:= [(\eta-1)\bm{1}^T, -\eta\bm{1}^T]\in \mathbb{R}^{1\times (|Q_m|+|Q_f|)}.
\end{alignat}
\end{subequations}
Then, the correctness of FLD is guaranteed as follows.
\begin{lemma}[\cite{Huang20arXiv}]\label{lem:ground_alter_gale}
A link $e\in F$ cannot be missed ($e\notin F \setminus \hat{F}$) by FLD if for any $Q_m$ containing $e$, there is a solution $\bm{z}\ge \bm{0}$ to \looseness=-1
\begin{subequations}\label{eq:alter_gale}
\begin{alignat}{2}
[\bm{A}_D^T, \bm{A}_x^T, \bm{W}^T, \bm{1}]\bm{z} = \bm{0},\label{eq:alter_gale_eq} \\
[\bm{g}_D^T, \bm{g}_x^T, \bm{g}_w^T, \bm{0}]\bm{z} < 0.\label{eq:alter_gale_ineq}
\end{alignat}
\end{subequations}
Similarly, a link $e\in E_H \setminus F$ cannot be falsely detected as failed if
for any $Q_f$ with $e \in Q_f$,
there is a solution $\bm{z}\ge\bm{0}$ to \eqref{eq:alter_gale}.\looseness=-1
\end{lemma}
\subsubsection{Explicit Conditions}
Besides Lemma~\ref{lem:ground_alter_gale}, \cite{Huang20arXiv} also provided more explicit conditions in terms of post-attack power flows and power injections. The following definitions will be needed to present this result.
Let $\bm{z}_D\in \mathbb{R}^{2|V_{H}|}, \bm{z}_x\in \mathbb{R}^{2|E_{H}|}$, $\bm{z}_w \in \mathbb{R}^{|Q_m|+|Q_f|}$ and $z_*\in \mathbb{R}$ denote subvectors of $\bm{z}$ corresponding to $\bm{A}_D^T, \bm{A}_x^T, \bm{W}^T$, and $\bm{1}$ in \eqref{eq:alter_gale_eq}. Denote $\tilde{\boldsymbol{D}}_{u}$ as the row in $\tilde{\boldsymbol{D}}$ corresponding to node $u$, and $\tilde{D}_{u,e}$ as the entry in $\tilde{\boldsymbol{D}}_{u}$ corresponding to link $e$. Denote $z_{D,u}$ as the entry in $\bm{z}_D$ corresponding to $\tilde{\boldsymbol{D}}_{u}$ in $\bm{A}_D$ and $z_{D,-u}$ as the entry corresponding to $-\tilde{\boldsymbol{D}}_{u}$ in $\bm{A}_D$.
Define $g_{D,u}$ and $g_{D,-u}$ as the entries in $\bm{g}_D$ corresponding to $z_{D,u}$ and $z_{D,-u}$, respectively, i.e.,\looseness=-1
\begin{align}
&g_{D,u}\hspace{-.25em} := \hspace{-.25em}\left\{\hspace{-.25em}\begin{array}{ll}
-\Delta_u^* & \hspace{-.5em}\mbox{if } p_u \hspace{-.25em}\le\hspace{-.25em} 0,\\
p_u' & \hspace{-.5em}\mbox{if } p_u\hspace{-.25em}>\hspace{-.25em} 0,
\end{array}\right.
&\hspace{-1em} g_{D,-u} \hspace{-.25em}:=\hspace{-.25em} \left\{\hspace{-.25em}\begin{array}{ll}
-p_u' & \hspace{-.5em}\mbox{if } p_u \hspace{-.25em}\le\hspace{-.25em} 0,\\
\Delta_u^* & \hspace{-.5em}\mbox{if } p_u \hspace{-.25em}>\hspace{-.25em} 0.
\end{array}\right.
\end{align}
Moreover, if link $e$ is the $i^{th}$ link in $Q_m$, then $z_{w,m,e}$ is used to denote the entry in $\bm{z}_{w}$ that corresponds to the $i^{th}$ column of $\bm{W}_m^T$; $z_{w,f,e}$ is defined similarly if $e\in Q_f$. For each link $e$, we denote $z_{x-,e}$ as the entry in $\bm{z}_x$ corresponding to $x^*_e$ in $\bm{g}_x$ and $z_{x+,e}$ as the entry corresponding to $(1-x^*_e)$ in $\bm{g}_x$.\looseness=-1
Referring to a set of nodes $U \subseteq V_H$ that induces a connected subgraph before the attack as a \emph{hyper-node},
\cite{Huang20arXiv} established recovery conditions based on the following attributes of hyper-nodes. Define $E_U$ as the set of links in $H$ with exactly one endpoint in $U$, i.e., $E_U := \{e|e=(s,t)\in E_H, s\in U, t\notin U\}$. If $E_U\cap F \neq \emptyset$, define:
\begin{subequations}\label{eq:properties of hyper-node}
\begin{align}
\tilde{D}_{U,e} &:= \sum_{u\in U}\tilde{D}_{u,e},\\
S_U &:= \{ e\in E_U\setminus F|\: \exists l \in E_U \cap F, \tilde{D}_{U,l}\tilde{D}_{U,e} > 0\}, \\
f_{U,g} &:= \hspace{-.25em}
\begin{cases}
{\sum_{u\in U}{g_{D,u}}\mbox{\ \ \ if } {\exists l \in {E_U} \cap F, {\tilde D}_{U,l} < 0} },\\
\sum_{u\in U}{g_{D, - u}} \mbox{\ otherwise.}\\
\end{cases}\label{eq:def_fug}
\end{align}
\end{subequations}
{An illustrative example of a hyper-node is $U=\{u_1, u_2, u_3\}$ in Fig.~\ref{fig:eg_hyper_node}, where $E_U=\{l_2, l_4, l_6, l_7\}$.} If $E_U\cap F = \emptyset$, we define:
\begin{align}
f_{U,g} :=
\begin{cases}
\sum_{u\in U}{g_{D,u}} &\mbox{if } \exists l \in E_U \setminus F,\ \tilde{D}_{U,l} > 0,\\
\sum_{u\in U}{g_{D,-u}} &\mbox{otherwise.}
\end{cases}
\end{align}
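Since $S_U$ and $f_{U,g}$ reference the ground-truth failure set $F$, they are analysis-time quantities. For concreteness, the sketch below (plain dictionaries and illustrative names) computes the attributes above when $F$ is known, as in the analysis:
\begin{verbatim}
def hyper_node_attrs(U, E_U, F, D_tilde, g_D, g_D_neg):
    """Attributes of a hyper-node U per the definitions above
    (ground-truth F assumed known, as in the analysis)."""
    dU = {e: sum(D_tilde[u][e] for u in U) for e in E_U}
    S_U = {e for e in E_U if e not in F and
           any(dU[l] * dU[e] > 0 for l in E_U if l in F)}
    failed = [l for l in E_U if l in F]
    if failed:
        use_gD = any(dU[l] < 0 for l in failed)
    else:  # E_U contains no failed link
        use_gD = any(dU[e] > 0 for e in E_U)
    f_Ug = sum((g_D[u] if use_gD else g_D_neg[u]) for u in U)
    return dU, S_U, f_Ug
\end{verbatim}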
\begin{theorem}[\cite{Huang20arXiv}]\label{lem:no_Miss_hyper_node}
A failed link $l \in F$ will be detected by FLD, i.e., $l\in \hat{F}$, if there exists at least one hyper-node (say $U$) such that $l\in E_U$, for which the following conditions hold:\looseness=-1
\begin{enumerate}
\item $\forall e, l\in E_U\cap F$, $\tilde{D}_{U,e}\tilde{D}_{U,l} > 0$,
\item $S_U = \emptyset$, and
\item $f_{U,g} + (\eta-1)|\tilde{D}_{U,l}|<0$.
\end{enumerate}
\end{theorem}
\begin{theorem}[\cite{Huang20arXiv}]\label{lem:no_fa_hyper_direc}
An operational link $l \in E_H\setminus F$ will not be detected as failed by FLD, i.e., $l\notin \hat{F}$, if there exists at least one hyper-node (say $U$) such that $l\in E_U$, for which the following conditions hold:\looseness=0
\begin{enumerate}
\item
$\forall l, l'\in E_U\setminus F,\: \tilde{D}_{U,l}\tilde{D}_{U,l'} > 0$,
\item $S_U = \emptyset$ if $E_U\cap F \ne \emptyset$, and
\item $f_{U,g}-\eta|\tilde{D}_{U,l}|<0$.
\end{enumerate}
\end{theorem}
While useful for performance analysis, the above conditions cannot be directly applied to verify whether the estimated state of a link is correct or not as the ground truth $F$ is unknown.
\section{Verifying Estimated Link States}\label{sec: verification_cond}
We will show that in some cases, we can guarantee the correctness of estimated link states based on observable information. Our idea is to (1) derive stronger recovery conditions that can be tested without knowledge of the ground truth {link states}, and then (2) extend these conditions to test more links based on the link states verified in step~(1).
Our results are based on the assumption that the grid follows the \emph{proportional load shedding/generation reduction policy}, where (i) either the load or the generation (but not both) will be reduced upon the formation of an island, and (ii) if nodes $u$ and $v$ are in the same island and of the same type (both load or generator), then $p_u'/p_u = p_v'/p_v$. This policy models the common practice in adjusting load/generation due to islanding~\cite{pal2006robust,lu2016under}. Under this policy, it is known that the post-attack power injections can be recovered under the following condition.
\begin{lemma}[\cite{yudi20SmartGridComm}]\label{lem:recover delta}
Let $N(v;\bar{H})$ denote the set of all the nodes in $\bar{H}$ that are connected to node $v$ via links in $E\setminus E_H$.
Then under the proportional load shedding/generation reduction policy, $\Delta_v$ for $v\in V_H$ can be recovered unless $N(v;\bar{H})=\emptyset$ or every $u\in N(v;\bar{H})$ is of a different type from $v$ with $\Delta_u=0$.
\end{lemma}
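The recovery itself is simple under the proportional policy. The sketch below reflects our reading of Lemma~\ref{lem:recover delta} (data structures are illustrative): a same-type neighbor $u\in N(v;\bar{H})$ lies in the same island as $v$ and shares its reduction ratio, while a different-type neighbor with $\Delta_u\neq 0$ implies $\Delta_v = 0$, since only one type is reduced per island.
\begin{verbatim}
def recover_delta(v, nbrs_H_bar, p, delta):
    """Sketch of the recovery behind Lemma 2 (names illustrative).

    nbrs_H_bar: N(v; H_bar), neighbors of v reachable via links
                outside E_H (hence in v's post-attack island)
    p, delta:   known pre-attack injections / known changes
    Returns Delta_v, or None if v is unrecoverable.
    """
    same_type = lambda u, w: (p[u] > 0) == (p[w] > 0)
    for u in nbrs_H_bar:
        if same_type(u, v) and p[u] != 0:
            return p[v] * delta[u] / p[u]  # proportional reduction
        if delta[u] != 0:
            return 0.0  # other type was reduced, v's type was not
    return None  # N(v) empty, or all opposite-type with delta == 0
\end{verbatim}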
Define $U_B$ as the set of nodes such that $\forall u\in U_B$, $\Delta_u$ can be recovered through Lemma~\ref{lem:recover delta}.
Our key observation is that for any hyper-node $U$, $\tilde{D}_{U,l}$ for any $l\in E_U$ can be computed with the knowledge of $\boldsymbol{\theta}'$, and $f_{U,g}$ can be upper-bounded by
\begin{align}
\hat{f}_{U,g} := \sum_{u\in U \cap U_B} f_{u,g} + \sum_{u\in U\setminus U_B} |p_u|,
\end{align}
where $f_{u,g}$ is defined in \eqref{eq:def_fug} for $U=\{u\}$. Since $f_{u,g}$ is known for nodes in $U_B$ and $p_u$ (power injection at $u$ before attack) is also known, $\hat{f}_{U,g}$ is computable. We now show how to use this information to verify the estimated link states based on Lemma~\ref{lem:ground_alter_gale} and Theorems~\ref{lem:no_Miss_hyper_node}--\ref{lem:no_fa_hyper_direc}.
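Computing the bound is then a one-liner; a sketch (with the per-node values $f_{u,g}$ tabulated for nodes in $U_B$, e.g., in a dictionary \texttt{f\_ug}):
\begin{verbatim}
def f_hat_Ug(U, U_B, f_ug, p):
    """Upper bound on f_{U,g}: exact per-node terms where Delta_u
    is known (u in U_B), |p_u| elsewhere (a valid bound since each
    per-node term lies in [0, |p_u|])."""
    return (sum(f_ug[u] for u in U if u in U_B)
            + sum(abs(p[u]) for u in U if u not in U_B))
\end{verbatim}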
\subsection{Verification without Knowledge of Ground Truth}
We first tackle the links whose states can be verified without any knowledge of the ground truth {link states}.
\subsubsection{Verifiable Conditions}\label{subsubsec:Verifiable Condition, no ground truth}
The basic idea is to rule out the alternative link state by constructing \emph{counterexamples}: if the estimated state of a link were incorrect, the conditions in Section~\ref{subsec:Existing Recovery Conditions} would have forced FLD to output a different estimate, a contradiction.
\begin{figure}[tb]
\vspace{-.5em}
\centering
\includegraphics[width=.6\linewidth]{figures/eg_hyper_node.pdf}
\vspace{-1em}
\caption{An example of hyper-node (arrow denotes the direction of a power flow over an operational link or a hypothetical power flow over a failed link). } \label{fig:eg_hyper_node}
\vspace{-1em}
\end{figure}
\emph{Links in $1$-edge cuts:}
If link $e = (u_1,u_2)$ forms a \emph{cut} of $H$, i.e., $(V_H, E_H\setminus \{e\})$ contains more connected components than $H$, then by breadth-first search (BFS) starting from $u_1$ and $u_2$ respectively without traversing $e$, we can construct two hyper-nodes $U_1$ and $U_2$ such that $E_{U_1} = E_{U_2} = \{e\}$ and thus $S_{U_1}=S_{U_2}=\emptyset$. {For example, in Fig.~\ref{fig:eg_hyper_node}, link $e:=l_6$ is a 1-edge cut, and thus $U_1:=\{u_4,u_5\}$ and $U_2:=V_H\setminus U_1$ satisfy this condition. }
Then the following verifiable conditions are directly implied by Theorems~\ref{lem:no_Miss_hyper_node}--\ref{lem:no_fa_hyper_direc}:
\begin{corollary}\label{coro:verifiable condition of 1-edge cut}
If $e\in \hat{F}$ and $\min\{ \hat{f}_{U_1,g}, \hat{f}_{U_2,g} \} - \eta|\tilde{D}_{U_1,e}| <0$, then we can verify $e\in F$.
If $e\in E_H\setminus \hat{F}$ and $\min\{ \hat{f}_{U_1,g}, \hat{f}_{U_2,g} \} + (\eta-1)|\tilde{D}_{U_1,e}| <0$, then we can verify $e\in E_H\setminus F$.
\end{corollary}
\begin{proof}
If $e\in \hat{F}$ and $\min\{ \hat{f}_{U_1,g}, \hat{f}_{U_2,g} \} - \eta|\tilde{D}_{U_1,e}| <0$, then $e$ must have failed, since otherwise $e$ would have been estimated as operational according to Theorem~\ref{lem:no_fa_hyper_direc}. Similarly, if $e\in E_H\setminus \hat{F}$ and $\min\{ \hat{f}_{U_1,g}, \hat{f}_{U_2,g} \} + (\eta-1)|\tilde{D}_{U_1,e}| <0$, then $e$ must be operational, since otherwise $e$ would have been estimated as failed according to Theorem~\ref{lem:no_Miss_hyper_node}.
Note that, as our verification is based on contradiction, $\hat{f}_{U_i,g}$ should be computed as if $e\in E_H\setminus F$ when verifying $e\in \hat{F}$, and as if $e\in F$ when verifying $e\in E_H\setminus \hat{F}$.
\end{proof}
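A minimal sketch of this test (assuming adjacency lists for $H$ and a callable computing $\hat{f}_{U,g}$; both are our illustrative interfaces) is given below; note that $|\tilde{D}_{U_1,e}| = |\tilde{D}_{U_2,e}|$, since $\tilde{D}_{U_1,e} = -\tilde{D}_{U_2,e}$ for a bridge $e$:
\begin{verbatim}
from collections import deque

def verify_bridge(e, adj_H, abs_dU_e, f_hat, in_F_hat, eta=0.5):
    """Corollary 1 test for a 1-edge cut e = (u1, u2) of H.

    adj_H:    adjacency lists of H
    abs_dU_e: |D_tilde_{U1,e}|
    f_hat:    callable U -> f_hat_{U,g}
    in_F_hat: True if FLD estimated e as failed
    Returns True if the estimated state of e is verified.
    """
    u1, u2 = e
    def side(src):  # BFS from src without traversing e
        seen, q = {src}, deque([src])
        while q:
            w = q.popleft()
            for nxt in adj_H[w]:
                if {w, nxt} == {u1, u2} or nxt in seen:
                    continue
                seen.add(nxt)
                q.append(nxt)
        return seen
    bound = min(f_hat(side(u1)), f_hat(side(u2)))
    if in_F_hat:
        return bound - eta * abs_dU_e < 0      # verifies e in F
    return bound + (eta - 1.0) * abs_dU_e < 0  # verifies e operational
\end{verbatim}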
\emph{Links in $2$-edge cuts:}
If links $e_1, e_2\in E_H$ together form a cut of $H$ but {each individual link does} not,
then by BFS starting from the endpoints of $e_1$ (or $e_2$) without traversing $e_1$ or $e_2$, we can construct two hyper-nodes $U_1, U_2$ such that $E_{U_1} = E_{U_2} = \{e_1, e_2\}$. {For example, as $e_1:=l_4$ and $e_2:=l_7$ form a $2$-edge cut of $H$ in Fig.~\ref{fig:eg_hyper_node}, $U_1:=\{u_6,u_7\}$ and $U_2:=V_H\setminus U_1$ satisfy this condition. Moreover, any pair of links in a cycle $C$ form a 2-edge cut if they are not in any other cycle in $H$, e.g., any pair of links in the cycle $\{l_1, l_3, l_5\}$ satisfy this condition.} Based on this observation, we provide the following conditions for verifying the states of such links.
\begin{theorem}\label{lem:certify_Ef_0}
Consider a hyper-node $U$ with $E_U = \{e_1,e_2\}$ and $e_1, e_2 \in E_H\setminus \hat{F}$. If $\tilde{D}_{U,e_1}\tilde{D}_{U,e_2} < 0$, then $e_1, e_2$ are guaranteed to both belong to $E_H\setminus F$ if
\begin{enumerate}
\item $\hat{f}_{U,g} + (\eta-1) \min\{|\tilde{D}_{U,e_1}|, |\tilde{D}_{U,e_2}|\} < 0$, and
\item $\eta < 1- \min\{ \frac{\hat{f}_{U,g} + |\tilde{D}_{U,e_1}|}{|\tilde{D}_{U,e_2}|}, \frac{\hat{f}_{U,g} + |\tilde{D}_{U,e_2}|}{|\tilde{D}_{U,e_1}|} \} $.
\end{enumerate}
If $\tilde{D}_{U,e_1}\tilde{D}_{U,e_2} > 0$, then we can verify:
\begin{enumerate}
\item $e_1\in E_H\setminus F$ if $(1-\eta)|\tilde{D}_{U,e_1}| > \hat{f}_{U,g} + |\tilde{D}_{U,e_2}|$,
\item $e_2 \in E_H\setminus F$ if $(1-\eta)|\tilde{D}_{U,e_2}| > \hat{f}_{U,g} + |\tilde{D}_{U,e_1}|$.
\end{enumerate}
\end{theorem}
\begin{proof}
We first prove the case that $\tilde{D}_{U,e_1}\tilde{D}_{U,e_2} < 0$. Given $e_1, e_2 \in E_H\setminus \hat{F}$ where $\hat{F}$ is returned by FLD, there are 3 possible forms of mistakes when the ground truth {failed link set $F$} is unknown, and we will prove the impossibility of each of them. If $e_1\in F, e_2\in E_H\setminus F$, Theorem~\ref{lem:no_Miss_hyper_node} guarantees that $e_1\notin Q_m$ due to condition~1), which {introduces a} contradiction. Similarly, $e_2\in F, e_1\in E_H\setminus F$ is also impossible. If $e_1, e_2 \in Q_m$, assume without loss of generality that $\eta <1- \frac{\hat{f}_{U,g} + |\tilde{D}_{U,e_1}|}{|\tilde{D}_{U,e_2}|}$. Then, we construct the following $\bm{z}$: $\forall u\in U$, $z_{D,u} = 1$ if $\tilde{D}_{U,e_2} < 0$ or $z_{D,-u} = 1$ if $\tilde{D}_{U,e_2} > 0$, $z_{w,m,e_2} = |\tilde{D}_{U,e_2}|$, $z_{x-,e_1} = |\tilde{D}_{U,e_1}|$, and other entries of $\bm{z}$ as 0. Then, \eqref{eq:alter_gale_eq} holds trivially and \eqref{eq:alter_gale_ineq} holds since it can be expanded as $\hat{f}_{U,g} + (\eta-1)|\tilde{D}_{U,e_2}| + |\tilde{D}_{U,e_1}| < 0$ due to condition~2). According to Lemma~\ref{lem:ground_alter_gale}, {it is impossible to have} $e_1, e_2 \in Q_m$, which verifies {that} $e_1, e_2 \in E_H\setminus F$.
Next, with $\tilde{D}_{U,e_1}\tilde{D}_{U,e_2} > 0$, we show how to verify $e_1$. If $e_1\in Q_m$, regardless of the true {state} of $e_2$, we construct the following $\bm{z}$ for Lemma~\ref{lem:ground_alter_gale}: $\forall u\in U$, $z_{D,u} = 1$ if $\tilde{D}_{U,e_1} < 0$ or $z_{D,-u} = 1$ if $\tilde{D}_{U,e_1} > 0$, $z_{w,m,e_1} = |\tilde{D}_{U,e_1}|$, $z_{x+,e_2} = |\tilde{D}_{U,e_2}|$, and other entries of $\bm{z}$ as 0. Then \eqref{eq:alter_gale} holds due to condition~1), which contradicts {the} assumption that $e_1\in Q_m$. The verification condition for $e_2$ can be derived similarly.
\end{proof}
\begin{theorem}\label{lem:certify_Ef_1}
Consider a hyper-node $U$ with $E_U = \{e_1,e_2\}$ and $e_1 \in \hat{F},e_2\in E_H\setminus \hat{F}$. If $\tilde{D}_{U,e_1}\tilde{D}_{U,e_2} > 0$, then the states of $e_1, e_2$ are guaranteed to be correctly identified if
\begin{enumerate}
\item $\hat{f}_{U,g} - \eta|\tilde{D}_{U,e_1}| < 0$, $\hat{f}_{U,g} + (\eta-1) |\tilde{D}_{U,e_2}| < 0$, and
\item either $\eta > \frac{\hat{f}_{U,g} + |\tilde{D}_{U,e_2}|}{|\tilde{D}_{U,e_1}|}$ or $\eta < 1-\frac{\hat{f}_{U,g} + |\tilde{D}_{U,e_1}|}{|\tilde{D}_{U,e_2}|}$.
\end{enumerate}
If $\tilde{D}_{U,e_1}\tilde{D}_{U,e_2} < 0$, then we can verify:
\begin{enumerate}
\item $e_1\in F$ if $\eta|\tilde{D}_{U,e_1}| > \hat{f}_{U,g} + |\tilde{D}_{U,e_2}|$,
\item $e_2 \in E_H\setminus F$ if $(1-\eta)|\tilde{D}_{U,e_2}| >\hat{f}_{U,g} + |\tilde{D}_{U,e_1}|$.
\end{enumerate}
\end{theorem}
\begin{proof}
We first prove the impossibility of each possible mistake if $\tilde{D}_{U,e_1}\tilde{D}_{U,e_2} > 0$. First, we rule out the possibility that $e_1 \in Q_f$, $e_2 \in E_H\setminus F$ according to Theorem~\ref{lem:no_fa_hyper_direc} and condition~1). Similarly, according to Theorem~\ref{lem:no_Miss_hyper_node} and condition~1), $e_1\in F$ while $e_2\in Q_m$ is also impossible. Next, we prove the impossibility of $e_1\in Q_f, e_2\in Q_m$ by constructing a solution $\bm{z}$ to \eqref{eq:alter_gale}. Specifically, if $\eta > \frac{\hat{f}_{U,g} + |\tilde{D}_{U,e_2}|}{|\tilde{D}_{U,e_1}|}$, then $\forall u\in U$, we set $z_{D,u} = 1$ if $\tilde{D}_{U,e_1} > 0$ or $z_{D,-u} = 1$ if $\tilde{D}_{U,e_1} < 0$, $z_{w,f,e_1} = |\tilde{D}_{U,e_1}|$, $z_{x-,e_2} = |\tilde{D}_{U,e_2}|$, and other entries of $\bm{z}$ as 0. If $\eta < 1-\frac{\hat{f}_{U,g} + |\tilde{D}_{U,e_1}|}{|\tilde{D}_{U,e_2}|}$, then $\forall u\in U$, we set $z_{D,u} = 1$ if $\tilde{D}_{U,e_2} < 0$ or $z_{D,-u} = 1$ if $\tilde{D}_{U,e_2} > 0$, $z_{w,m,e_2} = |\tilde{D}_{U,e_2}|$, $z_{x+,e_1} = |\tilde{D}_{U,e_1}|$, and other entries of $\bm{z}$ as 0. It is easy to check that \eqref{eq:alter_gale} is satisfied under both constructions above, which {rules} out the possibility of $e_1\in Q_f, e_2\in Q_m$ according to Lemma~\ref{lem:ground_alter_gale}; thus $e_1 \in F, e_2\in E_H\setminus F$ is guaranteed.
Next, we prove the verification condition for $e_1\notin Q_f$ if $\tilde{D}_{U,e_1}\tilde{D}_{U,e_2} < 0$. We prove by constructing a solution $\bm{z}$ as follows regardless of the status of $e_2$: $\forall u\in U$, if $\tilde{D}_{U,e_1} < 0$, we set $z_{D,-u} = 1$; otherwise, we set $z_{D,u} = 1$. Then, we set $z_{w,f,e_1} = |\tilde{D}_{U,e_1}|$, $z_{x+,e_2} = |\tilde{D}_{U,e_2}|$, and other entries of $\bm{z}$ as 0. Then, \eqref{eq:alter_gale_eq} holds for sure and \eqref{eq:alter_gale_ineq} holds since it can be expanded as $\hat{f}_{U,g} -\eta|\tilde{D}_{U,e_1}| + |\tilde{D}_{U,e_2}| < 0$ due to condition 1), which rules out the possibility of $e_1\in Q_f$ according to Lemma~\ref{lem:ground_alter_gale} and thus verifies that $e_1\in F$. The verification condition for $e_2\notin Q_m$ can be proved similarly.
\end{proof}
\begin{theorem}\label{lem:certify_Ef_2}
Consider a hyper-node $U$ with $E_U = \{e_1,e_2\}$ and $e_1, e_2 \in \hat{F}$. Then, we can verify:
\begin{enumerate}
\item $e_1\in F$ if $\eta|\tilde{D}_{U,e_1}| > \hat{f}_{U,g} + |\tilde{D}_{U,e_2}|$,
\item $e_2 \in F$ if $\eta|\tilde{D}_{U,e_2}| > \hat{f}_{U,g} + |\tilde{D}_{U,e_1}|$.
\end{enumerate}
\end{theorem}
\begin{proof}
We only prove the verification condition for $e_1\in F$ since the condition for $e_2$ can be proved similarly. We prove by contradiction, constructing a solution to \eqref{eq:alter_gale} under the assumption that $e_1\in Q_f$. Specifically, with condition~1), we can always construct {a} $\bm{z}$ for \eqref{eq:alter_gale} as follows regardless of the status of $e_2$: $\forall u\in U$, $z_{D,u} = 1$ if $\tilde{D}_{U,e_1} > 0$ or $z_{D,-u} = 1$ if $\tilde{D}_{U,e_1} < 0$, and $z_{w,f,e_1} = |\tilde{D}_{U,e_1}|$. In addition, if $\tilde{D}_{U,e_1}\tilde{D}_{U,e_2}>0$, we set $z_{x-,e_2} = |\tilde{D}_{U,e_2}|$; otherwise, we set $z_{x+,e_2} = |\tilde{D}_{U,e_2}|$. Finally, other entries of $\bm{z}$ are set as 0. It is easy to check that \eqref{eq:alter_gale_eq} is satisfied, and \eqref{eq:alter_gale_ineq} holds since it can be expanded as $[\bm{g}_D^T, \bm{g}_x^T, \bm{g}_w^T, \bm{0}]\bm{z} \le \hat{f}_{U,g} + |\tilde{D}_{U,e_2}|-\eta |\tilde{D}_{U,e_1}| < 0$, where {the} last inequality holds due to condition~1). Thus, {we must have $e_1\notin Q_f$} according to Lemma~\ref{lem:ground_alter_gale}.
\end{proof}
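The test in Theorem~\ref{lem:certify_Ef_2} reduces to two scalar comparisons once $\tilde{D}_{U,e_1}$, $\tilde{D}_{U,e_2}$, and $\hat{f}_{U,g}$ are available; a sketch (interfaces as in the previous listings):
\begin{verbatim}
def verify_two_in_F_hat(dU_e1, dU_e2, f_hat_U, eta=0.5):
    """Test of the theorem above for e1, e2 in F_hat with
    E_U = {e1, e2}; dU_e1, dU_e2 are signed D_tilde_{U,e} values.
    Returns (e1 verified in F, e2 verified in F)."""
    a1, a2 = abs(dU_e1), abs(dU_e2)
    return (eta * a1 > f_hat_U + a2,
            eta * a2 > f_hat_U + a1)
\end{verbatim}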
\vspace{-0.6em}
\emph{Remark:} While in theory such verifiable conditions can also be derived for links in larger cuts, the number of cases will grow exponentially. We also find $1$--$2$-edge cuts to cover the majority of links in practice (see Fig.~\ref{fig:verifytopology_H40}).
\subsubsection{Verification Algorithm}
Based on Corollary~\ref{coro:verifiable condition of 1-edge cut} and Theorems~\ref{lem:certify_Ef_0}--\ref{lem:certify_Ef_2}, we develop Algorithm~\ref{alg: Verification_Alg} for verifying the link states estimated by FLD, which can be applied to links in $1$--$2$-edge cuts.
Here, $E_a$ denotes the set of all the links in $1$-edge cuts of $H$, while $\mathcal{E}_c$ denotes the set of $2$-edge cuts. In the algorithm, links in $E_a$ are tested before links in $\mathcal{E}_c$ since it is easier to extend the knowledge of $U_B$ based on the test results for $E_a$.
As for the complexity, {we} first note that the time complexity of each iteration is $\mathcal{O}(|E_H|+|V_H|)$ due to BFS. Then, it takes $\mathcal{O}(|E_H|)$ iterations to verify $E_a$ and $\mathcal{O}(|E_H|^2)$ iterations for $\mathcal{E}_c$, which results in a total complexity of $\mathcal{O}(|E_H|^2(|E_H|+|V_H|))$.
\begin{algorithm}\label{alg: Verification_Alg}
\SetAlgoLined
\SetKwFunction{Fmain}{FailEdgeDetection}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\KwIn{$\tilde{\boldsymbol{D}}, \boldsymbol{p}, \boldsymbol{\Delta}_{\bar{H}}, U_B, \eta, E_a, \mathcal{E}_c, \hat{F}$
}
\KwOut{$E_v$}
$E_v\leftarrow \emptyset$\tcc*{verifiable links}
\ForEach{$e = (u_1,u_2) \in E_a$}{
Construct hyper-nodes $U_1$ and $U_2$ such that $E_{U_1} = E_{U_2} = \{e\}$\;
\eIf{$e\in \hat{F}$}{
Add $e$ to $E_v$ if $\min\{ \hat{f}_{U_1,g}, \hat{f}_{U_2,g} \} - \eta|\tilde{D}_{U_1,e}| <0$\;
}
{
Add $e$ to $E_v$ if $\min\{ \hat{f}_{U_1,g}, \hat{f}_{U_2,g} \} + (\eta-1)|\tilde{D}_{U_1,e}| <0$\;
}
\If{$e$ is verified to be in $E_H \setminus F$}{
Add $u_i$ to $U_B$ if $\Delta_{u_i}$ ($i=1,2$) can be recovered through Lemma~\ref{lem:recover delta}\;
}
}
\ForEach{$\{e_1, e_2\}\in \mathcal{E}_c$}{
Construct hyper-nodes $U_1$ and $U_2$ such that $E_{U_1} = E_{U_2} = \{e_1, e_2\}$\;
Test the satisfaction of Theorem~\ref{lem:certify_Ef_0}, \ref{lem:certify_Ef_1}, or \ref{lem:certify_Ef_2} for $U_1$ and $U_2$, respectively\;
Add $e_i$ ($i=1,2$) to $E_v$ if it is verified\;
}
\caption{Verification without Ground Truth}
\end{algorithm}
\subsection{Verification with Partial Knowledge of Ground Truth}
Algorithm~\ref{alg: Verification_Alg} assumes no knowledge of the ground truth {link states}, even after the states of some links have been verified. However, links that cannot be verified by Algorithm~\ref{alg: Verification_Alg} may become verifiable after obtaining partial knowledge of the ground truth (i.e., the link set $E_v$ verified by Algorithm~\ref{alg: Verification_Alg}). In addition, links in larger cuts are not tested by Algorithm~\ref{alg: Verification_Alg}. To address these issues, we propose a follow-up step designed to verify the states of the links in $E_H\setminus E_v$.
\subsubsection{Verifiable Conditions}
The idea for verifying the correctness of $e\in \hat{F}$ (or $e\in E_H\setminus \hat{F}$) is to construct a solution to \eqref{eq:alter_gale} as if $e\in E_H\setminus F$ (or $e\in F$). Specifically, it can be shown that for a link $e\in \hat{F}$, if there exists $\bm{z}\geq \bm{0}$ for \eqref{eq:alter_gale} where $\bm{W}$ is constructed for $Q_f = \{e\}$ and $Q_m=\emptyset$, then $e$ is guaranteed to have failed since otherwise it must have been estimated to be operational. The challenge is the unknown $\bm{g}_D$, $\bm{g}_x$, and $\bm{g}_w$ due to unknown $F$ and $\boldsymbol{\Delta}_H^*$. To tackle this challenge, we approximate these parameters by their worst possible values
(in terms of satisfying \eqref{eq:alter_gale}), which leads to the following result:
\begin{theorem}\label{theo: verify_by_LP}
Given a set $E_v$ of links with known states, we define $\hat{\bm{g}}_D \in \mathbb{R}^{2|V_H|}$ and $\hat{\bm{g}}_x\in \mathbb{R}^{2|E_H|}$ as follows:
\begin{align}
\hat{g}_{D,u} &=
\begin{cases}
{g_{D,u}}, \mbox{ if $u\in U_B$,} \\
\left| {{p_u}} \right|, \mbox{ otherwise, }
\end{cases}
~~\hat{g}_{x,e} &=
\begin{cases}
{g_{x,e}}, \mbox{ if $e\in E_v$, }\\
1, \mbox{ otherwise, }
\end{cases}\nonumber
\end{align}
and define $\hat{g}_{D,-u}$ and $\hat{g}_{x,-e}$ similarly. Then, a link $l\in \hat{F}$ is verified to have failed if there exists a solution $\bm{z} \ge \bm{0}$ to
\begin{subequations}\label{eq:computable_gale}
\begin{alignat}{2}
[\bm{A}_D^T, \bm{A}_x^T, \bm{w}^T, \bm{1}]\bm{z} = \bm{0},\label{eq: computable_gale_eq} \\
[\hat{\bm{g}}_D^T, \hat{\bm{g}}_x^T, g_w, \bm{0}]\bm{z} < 0,\label{eq: computable_gale_ineq}
\end{alignat}
\end{subequations}
where $\bm{w} \in \{0,1\}^{|E_H|}$ is defined to be $\bm{W}_f$ with $Q_f = \{l\}$, and $g_w := -\eta$. Similarly, a link $e \in E_H \setminus \hat{F}$ is verified to be operational if $\exists \bm{z} \ge \bm{0}$ that satisfies \eqref{eq:computable_gale}, where $\bm{w}\in \{0,1\}^{|E_H|}$ is defined to be $\bm{W}_m$ with $Q_m = \{e\}$, and $g_w := \eta -1$.
\end{theorem}
\begin{proof}
We only prove for the case that $l \in \hat{F}$ since the case that $e\in E_H\setminus \hat{F}$ is similar. {First note that if $\exists \bm{z}_0\geq \bm{0}$ that satisfies \eqref{eq:alter_gale} for $\bm{W}$ constructed according to $Q_f = \{l\}$ and $Q_m = \emptyset$, then for any $\bm{W}$ corresponding to $Q_f$ that contains $l$, we can always construct a non-negative solution to \eqref{eq:alter_gale} based on $\bm{z}_0$ by setting $z_{w,f,e'} = 0, \forall e'\in Q_f\setminus \{l\}$.} Thus, according to Lemma~\ref{lem:ground_alter_gale}, $l$ {can be verified} as $l \in F$ if $\exists \bm{z}\ge \bm{0}$ for \eqref{eq:alter_gale} where $\bm{W}$ is constructed for $Q_f = \{l\}$ and $Q_m = \emptyset$, since otherwise $l$ must have been estimated to be operational.
Thus, we only need to prove that any solution to \eqref{eq:computable_gale} is a solution to \eqref{eq:alter_gale} when $Q_f=\{l\}$ and $Q_m=\emptyset$.
To this end, let $\bar{\bm{z}} \ge \bm{0}$ be a feasible solution to \eqref{eq:computable_gale}.
First, \eqref{eq:alter_gale_eq} holds since it is the same as \eqref{eq: computable_gale_eq} in this case. As for \eqref{eq:alter_gale_ineq}, we have
\begin{align}
[\bm{g}_D^T, \bm{g}_x^T, g_w, \bm{0}]\bar{\bm{z}} \le [\hat{\bm{g}}_D^T, \hat{\bm{g}}_x^T, g_w, \bm{0}]\bar{\bm{z}} < 0,
\end{align}
where the first inequality holds since $\bm{0}\le [\bm{g}_D^T, \bm{g}_x^T] \le [\hat{\bm{g}}_D^T, \hat{\bm{g}}_x^T]$ (element-wise inequality), while the second inequality holds since $\bar{\bm{z}}$ satisfies \eqref{eq:computable_gale}. Therefore, $\bar{\bm{z}}$ is also a feasible solution to \eqref{eq:alter_gale}, which verifies that $l \in F$.
\end{proof}
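Checking the premise of Theorem~\ref{theo: verify_by_LP} amounts to a feasibility test: does some $\bm{z}\ge\bm{0}$ satisfy \eqref{eq:computable_gale}? Since the system is homogeneous in $\bm{z}$, any solution can be rescaled into the box $[0,1]^n$, so it suffices to minimize the left-hand side of \eqref{eq: computable_gale_ineq} subject to \eqref{eq: computable_gale_eq} and the box constraint, and test whether the optimum is strictly negative. A sketch (SciPy; matrix names are ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def exists_certificate(M, g_hat, tol=1e-9):
    """Check whether some z >= 0 satisfies M z = 0 and
    g_hat^T z < 0, where M stacks [A_D^T, A_x^T, w^T, 1] and
    g_hat is the cost vector [g_hat_D, g_hat_x, g_w, 0]."""
    n = M.shape[1]
    res = linprog(g_hat, A_eq=M, b_eq=np.zeros(M.shape[0]),
                  bounds=[(0.0, 1.0)] * n)
    # Strict inequality is tested up to a numerical tolerance.
    return res.status == 0 and res.fun < -tol
\end{verbatim}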
\subsubsection{Verification Algorithm}
All the elements in \eqref{eq:computable_gale} are known, and thus the existence of a solution can be checked by solving an LP. Based on this result,
we propose Algorithm~\ref{alg: verify_LP} for verifying the estimated states of the remaining links, which iteratively updates $E_v$. {Each iteration of Algorithm~\ref{alg: verify_LP} involves solving $O(|E_H|)$ LPs, each of which has a time complexity that is polynomial\footnote{The exact order of the polynomial depends on the specific algorithm used to solve the LP~\cite{terlaky2013interior}. } in the number of decision variables ($|E_H|$) and the number of constraints ($|V_H|+|E_H|$)~\cite{terlaky2013interior}. Since Algorithm~\ref{alg: verify_LP} has at most $|E_H|$ iterations, the total time complexity of Algorithm~\ref{alg: verify_LP} is polynomial in $|E_H|$ and $|V_H|$.}
\begin{algorithm}\label{alg: verify_LP}
\SetAlgoLined
\SetKwFunction{Fmain}{FailEdgeDetection}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\KwIn{$\tilde{\boldsymbol{D}}, \boldsymbol{p}, \boldsymbol{\Delta}_{\bar{H}}, U_B, \eta, E_H, E_v, \hat{F}, \hat{\bm{g}}_D, \hat{\bm{g}}_x$}
\While{$E_H\setminus E_v \ne \emptyset$}{
$\bar{E}_v \leftarrow E_v$;\\
\ForEach{$e\in E_H\setminus E_v$}{
\If{$\exists \bm{z}\ge \bm{0}$ {satisfying} \eqref{eq:computable_gale} for $e$}{
$\bar{E}_v \leftarrow \bar{E}_v \cup \{e\}$;\\
Update $\hat{\bm{g}}_x$;
}
}
\eIf{$|\bar{E}_v| > |E_v|$}{
$E_v \leftarrow \bar{E}_v$;
}{
break\;
}
}
\caption{Verification with Partial Ground Truth}
\end{algorithm}
\subsection{Special Case of Connected Post-attack Grid}\label{sec: special_connected}
In this section, we study the special case in which the grid is known to stay connected after the attack, as assumed in most existing works \cite{Soltan18TCNS, Zhu12TPS, chen2014efficient}. In this case, FLD is modified by replacing constraints \eqref{eq:const_valid_start} and \eqref{eq:const_valid_load} with $\vDelta_H = \bm{0}$ (implied by the connectivity of the post-attack grid). Next, we demonstrate how Algorithms~\ref{alg: Verification_Alg}--\ref{alg: verify_LP} change in this case. To this end, we study the effect of $\vDelta_H = \bm{0}$ on Lemma~\ref{lem:ground_alter_gale}. Note that according to \cite{Huang20arXiv}, any pair $(\vDelta_H, \vx_H)$ satisfying \eqref{eq:pf_constraint} can be represented by $\vc\in \mathbb{R}^{|E_H|}$ as
\begin{align}
\vDelta_H = \vDelta_H^* + \tilde{\vD}_H\vc, \quad \vx_H = \vx_H^* + \vI_{|E_H|}\vc.
\end{align}
Thus, we have $\tilde{\vD}_H\vc = \bm{0}$ due to $\vDelta_H = \vDelta_H^* = \bm{0}$, which is equivalent to requiring $\tilde{\vD}_H\vc \le \bm{0}$ and $-\tilde{\vD}_H\vc \le \bm{0}$.
Accordingly, $\vA_D$ and $\vg_D$ in \eqref{eq:alter_gale}, which used to model
\eqref{eq:const_valid_start} and \eqref{eq:const_valid_load},
now become $\vA_D^T := [\tilde{\vD}_H^T, -\tilde{\vD}_H^T], \vg_D := \bm{0}$.
The direct implication of $\vg_D = \bm{0}$ is that $f_{U,g} = \sum_{u\in U} f_{u,g} = 0, \forall U\subseteq V_H$. That is, Theorems~\ref{lem:certify_Ef_0}--\ref{lem:certify_Ef_2} still hold for the modified FLD except that $\hat{f}_{U,g} = 0$, which implies the following result:
\begin{corollary}\label{lem: No_mistake_1_cut}
If it is known that the post-attack grid $G' = (V,E\setminus F)$ is connected, then the state of any link that forms a 1-edge cut of $H$ will be identified correctly by a variation of FLD that replaces the constraints \eqref{eq:const_valid_start} and \eqref{eq:const_valid_load} by $\vDelta_H = \bm{0}$.
\end{corollary}
\begin{proof}
As in the proof of Corollary~\ref{coro:verifiable condition of 1-edge cut}, for any link $e= (u_1, u_2)\in\hat{F}$ forming a cut of $H$, we can verify that $e\in F$ if $\min\{ {f}_{U_1,g}, {f}_{U_2,g} \} - \eta|\tilde{D}_{U_1,e}| <0$ (otherwise, $e$ must have been estimated as operational by Theorem~\ref{lem:no_fa_hyper_direc}). Since ${f}_{U_i,g} = 0$ ($i=1,2$) if the grid remains connected after the attack and $|\tilde{D}_{U_1,e}|>0$ by Assumption~3, $e\in F$ can always be verified. A similar argument applies to any link $l\in E_H\setminus \hat{F}$.
\end{proof}
By Corollary~\ref{lem: No_mistake_1_cut}, the verification of the link states in $E_a$
can be skipped
if the post-attack grid is known to stay connected.
\section{Performance Evaluation}\label{sec:Performance Evaluation}
We {first} test our solutions on the Polish power grid (``Polish system - winter 1999-2000 peak'') \cite{zimmerman2019matpower} with $2383$ nodes and $2886$ links, where parallel links are combined into one link. We generate the attacked area $H$ by randomly choosing one node as a starting point and performing a breadth-first search to obtain $H$ with a predetermined $|V_H|$. We then randomly choose $|F|$ links within $H$ to fail.
{The generated $H$ consists of buses topologically close to each other, which will intuitively share communication links in connecting to the control center and can thus be blocked together once a cyber attack jams some of these links. Note, however, that our solution does not depend on this specific way of forming $H$.}
The phase angles of each island without any generator or load are set to $0$, and the rest are computed according to \eqref{eq:B theta = p}. %
For each setting of $|V_H|$ and $|F|$, we generate $300$ different $H$'s and $70$ different $F$'s per $H$. Each evaluated metric is shown via the mean and the $25^{\mbox{\small th}}$/$75^{\mbox{\small th}}$ percentile (indicated by the error bars). The threshold $\eta$ is set as $0.5$.\looseness=-1
We first evaluate the fraction of verifiable links in $E_a$ (links in 1-edge cuts) and $E_c$ (links in 2-edge cuts, i.e., $E_c:=\bigcup_{s\in \mathcal{E}_c}s$), as shown in Fig.~\ref{fig:verifytopology_H40}. For each generated case (combination of $H$ and $F$), denote $E_{a,v}:=E_a\cap E_v$ and ${E}_{c,v}:={E}_{c}\cap E_v$. Then in Fig.~\ref{fig:verifytopology_H40}(a), we evaluate the fractions of testable and verifiable links in $E_a$ (${E}_{c}$) for failed links, i.e., $\frac{|E_{a}\cap F|}{|F|}$ ($\frac{|{E}_{c}\cap F|}{|F|}$) and $\frac{|E_{a,v}\cap F|}{|F|}$ ($\frac{|{E}_{c,v}\cap F|}{|F|}$).
The evaluation for operational links is conducted similarly in Fig.~\ref{fig:verifytopology_H40}(b). As can be seen, (\romannumeral1) the fractions of testable and verifiable links both stay almost constant with varying $|F|$, which demonstrates the robustness of Algorithm~\ref{alg: Verification_Alg}; (\romannumeral2) among the testable links ($E_a\cup {E}_c$), most of the failed links are verifiable, but only half of the operational links are verifiable; (\romannumeral3) compared to links in ${E}_{c}$, links in $E_a$ have a higher chance of being verifiable, which
indicates that it is easier to recover the states of the critical links in the attacked area (that form 1-edge cuts).
\begin{figure}
\begin{minipage}{.495\linewidth}\label{mini:topology_Ef}
\centerline{
\includegraphics[width=1\columnwidth]{figures/Fig_verify_topology_EF_H40.pdf}}
\vspace{-.05em}
\centerline{\small (a) Fraction of failed links}
\end{minipage}\hfill
\begin{minipage}{.495\linewidth}\label{mini:topology_E2}
\centerline{
\includegraphics[width=1\columnwidth]{figures/Fig_verify_topology_E2_H40.pdf}}
\vspace{-.05em}
\centerline{\small (b) Fraction of operational links}
\end{minipage}
\caption{Fraction of testable/verifiable links in Polish system
($|V_H|=40$).}
\label{fig:verifytopology_H40}
\vspace{-1em}
\end{figure}
\begin{table}[tb]
\footnotesize
\renewcommand{\arraystretch}{1.3}
\caption{Percentage of cases that Algorithm~\ref{alg: verify_LP} verifies additional links in Polish system} \label{tab:frac_alg2_effective}
\centering
\begin{tabular}{c|c|c|c|c}
\hline
Type of links & $|F| = 3$& $|F| = 6$& $|F| = 9$& $|F| = 12$ \\
\hline
Failed Links & $18.86\%$ & $31.94\%$ & $45.69\%$ & $54.42\%$ \\
\hline
Operational Links & $81.13\%$ & $84.24\%$ & $85.41\%$ & $85.69\%$ \\
\hline
All Links & $83.75\%$ & $88.23\%$ & $91.02\%$ & $91.48\%$ \\
\hline
\end{tabular}
\end{table}
Next, we evaluate two metrics to study the value of Algorithm~\ref{alg: verify_LP}. The first is the fraction of links verified by Algorithm~\ref{alg: verify_LP} but not by Algorithm~\ref{alg: Verification_Alg}, shown as `Verifiable - Alg.~2' in Fig.~\ref{fig:verifytopology_H40}. The second is the {percentage of cases} in which Algorithm~\ref{alg: verify_LP} can verify additional links, given in Table~\ref{tab:frac_alg2_effective} for different $|F|$. We observe that Algorithm~\ref{alg: verify_LP} can usually verify more links based on the results of Algorithm~\ref{alg: Verification_Alg}, although the number of additionally verified links is not large.
\begin{figure}
\begin{minipage}{.495\linewidth}\label{subfig:Verify_EF_H40}
\centerline{
\includegraphics[width=1\columnwidth]{figures/Fig_Verify_EF_H40.pdf}}
\centerline{\small (a) Fraction of failed links.}
\end{minipage}\hfill
\begin{minipage}{.495\linewidth}\label{subfig:Verify_E2_H40}
\centerline{
\includegraphics[width=1\columnwidth]{figures/Fig_Verify_E2_H40.pdf}}
\centerline{\small (b) Fraction of operational links.}
\end{minipage}
\caption{Comparison between verifiable links, theoretically guaranteed links, and actually correctly identified links in Polish system ($|V_H|=40$).}
\label{fig:verify_guarantee_general}
\vspace{-1em}
\end{figure}
Then, we compare the fraction of \emph{verifiable links} with unknown ground truth {of $F$} to the fraction of links whose states are \emph{guaranteed} to be correctly estimated by FLD based on the ground truth $F$ according to Lemma~\ref{lem:ground_alter_gale} (`Guaranteed') and the \emph{actual} fraction of links whose states are correctly estimated by FLD (`Experiment Results'), as shown in Fig.~\ref{fig:verify_guarantee_general}. We see that most of the failed links are verifiable, while only half of the operational links are verifiable. This indicates that most (more than $90\%$) of the unverifiable links are operational. To understand this phenomenon, we observe in experiments that many operational links carry small post-attack power flows, which makes the conditions in Theorem~\ref{lem:certify_Ef_0}-\ref{lem:certify_Ef_2} hard to satisfy. By contrast, the values of the hypothetical power flows on failed links are usually large. {Nevertheless, the fraction of links whose states are correctly identified by FLD is much higher: out of all the failed links, over $80\%$ will be estimated as failed and verified as such, while another $15\%$ will be estimated as failed but not verified; out of all the operational links, over $50\%$ will be estimated and verified as operational, while the rest will also be estimated as operational but not verified.}
\looseness=0
Finally, for the special case that the post-attack grid stays connected, we study the benefits of knowing the connectivity and the corresponding modification in Section~\ref{sec: special_connected}, as shown in Fig.~\ref{fig:verifyGuaranteeConnected_H40} and Table~\ref{tab:frac_connected_grid}. Specifically, `X-agnostic' denotes the performance of `X' without knowing the connectivity, while `X-known' denotes the counterpart that adopts the modification in Section~\ref{sec: special_connected}. The meaning of `X' is the same as in Fig.~\ref{fig:verify_guarantee_general}. In Table~\ref{tab:frac_connected_grid}, we evaluate the percentage of randomly generated cases ($H$ and $F$) in which the post-attack grid $G'$ remains connected. We observe that (\romannumeral1) the knowledge of connectivity can help verify more than $10\%$ additional failed links and $30\%$ additional operational links; (\romannumeral2) when $|F|$ is small (e.g., $|F|\le 3$), $G'$ remains connected in the majority of the cases.
These results indicate the value of the knowledge of connectivity.
\begin{table}[tb]
\footnotesize
\renewcommand{\arraystretch}{1.3}
\caption{Percentage of cases of connected post-attack Polish system ($|V_H| = 40$)} \label{tab:frac_connected_grid}
\centering
\begin{tabular}{c|c|c|c}
\hline
$|F| = 3$& $|F| = 6$& $|F| = 9$& $|F| = 12$ \\
\hline
57.12$\%$& 26.33$\%$& 11.87$\%$ & 5.04$\%$ \\
\hline
\end{tabular}
\end{table}
{To validate our observations, we further evaluate our solutions on the IEEE 300-bus system extracted from MATPOWER \cite{zimmerman2019matpower}, as shown in Fig.~\ref{fig:IEEE_verify_guarantee_general}--\ref{fig:IEEE_verify_guarantee_connected} and Table~\ref{tab:IEEE_frac_connected_grid}. The configuration of these experiments is the same as before, except that $|V_H| = 20$ due to the smaller scale of the test system. Compared with Fig.~\ref{fig:verify_guarantee_general}--\ref{fig:verifyGuaranteeConnected_H40} and Table~\ref{tab:frac_connected_grid}, all the results from the IEEE 300-bus system are qualitatively similar to those from the Polish system, and hence validate the generality of our previous observations. }
\begin{figure}
\begin{minipage}{.495\linewidth}\label{mini:guarantee_ef}
\centerline{
\includegraphics[width=1\columnwidth]{figures/Fig_verify_guarantee_EF_H40.pdf}}
\centerline{\small (a) Fraction of failed links.}
\end{minipage}\hfill
\begin{minipage}{.495\linewidth}
\centerline{
\includegraphics[width=1\columnwidth]{figures/Fig_verify_guarantee_E2_H40.pdf}}
\centerline{\small (b) Fraction of operational links.}
\end{minipage}
\caption{Performance comparison for connected post-attack Polish system ($|V_H|=40$).}
\label{fig:verifyGuaranteeConnected_H40}
\vspace{-1em}
\end{figure}
\begin{figure}
\begin{minipage}{.495\linewidth}\label{subfig:IEEE_Verify_EF_H40}
\centerline{
\includegraphics[width=1\columnwidth]{figures/Fig_IEEE_Verify_EF_H40.pdf}}
\centerline{\small (a) Fraction of failed links.}
\end{minipage}\hfill
\begin{minipage}{.495\linewidth}\label{subfig:IEEE_Verify_E2_H40}
\centerline{
\includegraphics[width=1\columnwidth]{figures/Fig_IEEE_Verify_E2_H40.pdf}}
\centerline{\small (b) Fraction of operational links.}
\end{minipage}
{\caption{Comparison between verifiable links, theoretically guaranteed links, and actually correctly identified links in IEEE 300-bus system ($|V_H|=20$).}
\label{fig:IEEE_verify_guarantee_general}}
\end{figure}
\begin{figure}
\begin{minipage}{.495\linewidth}\label{subfig:IEEE_VerifyConn_EF_H40}
\centerline{
\includegraphics[width=1\columnwidth]{figures/Fig_IEEE_VerifyConn_EF_H40.pdf}}
\centerline{\small (a) Fraction of failed links.}
\end{minipage}\hfill
\begin{minipage}{.495\linewidth}\label{subfig:IEEE_VerifyConn_E2_H40}
\centerline{
\includegraphics[width=1\columnwidth]{figures/Fig_IEEE_VerifyConn_E2_H40.pdf}}
\centerline{\small (b) Fraction of operational links.}
\end{minipage}
{\caption{Performance comparison for connected post-attack IEEE 300-bus system ($|V_H|=20$). }
\label{fig:IEEE_verify_guarantee_connected}}
\end{figure}
\begin{table}[tb]
\footnotesize
\renewcommand{\arraystretch}{1.3}
{\caption{Percentage of cases of connected post-attack IEEE 300-bus system ($|V_H|=20$)} \label{tab:IEEE_frac_connected_grid}}
\centering
\begin{tabular}{c|c|c|c}
\hline
$|F| = 2$& $|F| = 4$& $|F| = 6$& $|F| = 8$ \\
\hline
73.73$\%$& 51.10$\%$& 32.89$\%$ & 18.54$\%$ \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}\label{sec:Conclusion}
We considered the problem of localizing failed links in a smart grid under a cyber-physical attack that blocks sensor data from the attacked area and disconnects an unknown subset of links within this area, possibly disconnecting the grid. Building on top of a recently proposed failure detection algorithm (FLD) that has shown empirical success, we focused on verifying the correctness of the estimated link states, by developing theoretical conditions that can be checked based on observable information, together with polynomial-time algorithms that use these conditions to verify link states. Our evaluations on the Polish power grid and the IEEE 300-bus system showed that the proposed algorithms are highly successful in verifying the states of truly failed links. Compared to the previous solutions {(including \cite{Huang20arXiv})} for link state estimation that label links with binary states (failed/operational) without guaranteed correctness, our solution labels links with ternary states (failed/operational/unverifiable), where the states of verifiable links are identified with guaranteed correctness. This, together with the observation that most of the unverifiable links are operational, provides valuable information for planning repairs during the recovery process.
\bibliographystyle{IEEEtran}
\section{Introduction}
In recent years, the spectrum of the Dirac operator on hypersurfaces of $\Spin$
manifolds has been intensively studied. Indeed, many extrinsic upper bounds have been obtained
(see \cite{Am1, Am2, AF, An, Ba, Bm} and references therein) and, more recently,
extrinsic lower bounds for the hypersurface Dirac operator have been established in \cite{HMZ1, HMZ2, HMZ02, HZ1, HZ2, Z}.
From these spectral estimates and their limiting cases, much topological and
geometric information about the hypersurface is derived.\\\\
In \cite{HMZ1}, O. Hijazi, S. Montiel and X. Zhang investigated the spectral
properties of the Dirac operator on a compact manifold with boundary for the Atiyah-Patodi-Singer type boundary
condition (or shortly APS-boundary condition) corresponding to the spectral resolution of the classical Dirac operator
of the boundary hypersurface. They proved that, on the compact boundary $\Sigma = \partial M$ of a compact
Riemannian $\Spin$ manifold $(M^{n+1}, g)$ of nonnegative
scalar curvature $\s^M$, the first nonnegative eigenvalue
of the Dirac operator on the boundary satisfies
\begin{eqnarray}\label{hmz}
\lambda_1 \geq \frac n2 \inf_\Sigma H,
\end{eqnarray}
where the mean curvature of the boundary $H$ is calculated with respect to the
inner normal and assumed to be nonnegative. Equality holds in \eqref{hmz} if and only if $H$ is
constant and every eigenspinor associated with the eigenvalue
$\lambda_1$ is the restriction to $\Sigma$ of a parallel spinor field on $M$
(and hence $M$ is Ricci-flat). As an application of the limiting case, they gave an elementary $\Spin$ proof of
the famous Alexandrov theorem: {\it The only closed embedded hypersurface in $\R^{n+1}$ of constant mean
curvature is the sphere of dimension $n$.}\\\\
Furthermore, Inequality \eqref{hmz} not only gives an extrinsic lower bound on the
first nonnegative eigenvalue but can
also be seen as an obstruction to positive scalar curvature of the interior
given only in terms of a neighbourhood of the boundary. More precisely, let a
neighbourhood of the boundary $\Sigma$ be equipped with a metric of
nonnegative scalar curvature such that the boundary has nonnegative mean curvature. If the lowest positive
eigenvalue of the Dirac operator on the boundary is smaller than
$\frac{n}{2}\inf_{\Sigma} H$, then the metric cannot be extended to all of
$M$ such that the scalar curvature remains nonnegative.\\
In this paper, we extend the lower bound \eqref{hmz} to noncompact boundaries of Riemannian
$\Spinc$ manifolds under suitable geometric assumptions, see Theorem \ref{main}. When shifting from the compact case to the noncompact case, several obstacles
arise. Moreover, when shifting from classical $\Spin$ geometry to $\Spinc$ geometry, the situation is more general
since the spectrum of the Dirac operator will not only depend on the geometry of the manifold but also on the connection of
the auxiliary line bundle associated with the fixed $\Spinc$ structure.\\
When we consider a Riemannian $\Spin$ or $\Spinc$ manifold with noncompact boundary, the main technical difference to the compact case
is that we cannot
restrict all our computations to smooth spinors. For compact manifolds, this is
possible by using the spectral decomposition of $L^2$ by an eigenbasis. For
complete manifolds, eigenspinors do not have to exist, and even if they do, in
general they do not form an orthonormal basis of $L^2$ since continuous
spectrum can occur. Additionally, the proof of Inequality \eqref{hmz} in the closed case uses
the existence of a solution of a boundary value problem posed under the APS-boundary condition. While
the idea
of APS-boundary conditions can be carried over to noncompact boundaries
by using the spectral theorem, it is not clear to us whether they define an actual boundary condition,
see Example \ref{ex_bd_cond}.\\
In order to circumvent all these problems, a large part of the paper is devoted to generalizing the theory
of boundary value problems to noncompact boundaries, see Section \ref{boundary_values}. We stick
to the part of the theory that gives existence of solutions of such boundary value problems, cf. Remark \ref{comparebb}.
For complete manifolds with closed boundary, the theory of boundary value problems was developed in \cite{baer_ballmann_11} by Ch.~B\"ar and W.~Ballmann. They did not restrict to the classical Dirac operator but generalized the traditional
theory of elliptic boundary value problems to Dirac-type operators. Additionally, they proved a decomposition
theorem for the essential spectrum, a general version of Gromov and Lawson's relative index
theorem and a generalization of the cobordism theorem.\\\\
In Section \ref{boundary_values}, we will classify boundary conditions for a Riemannian $\Spinc$ manifold $(M^{n+1}, g)$ with
noncompact boundary $\Sigma := \partial M$ and of bounded geometry, see Definition \ref{bdd_geo}. Indeed, we prove there that
the trace map (or restriction map) $R: \phi\mapsto \phi|_\Sigma$, where $\phi$ is a compactly supported smooth spinor on
$M$, can be extended to a bounded operator
$$R: \dom\, D_{\mathrm{max}} \to H_{-\frac{1}{2}} (\Sigma, \SS_M|_\Sigma).$$ Here $\dom\, D_\text{max}$ is the maximal
domain of the Dirac operator on $M$, $\SS_M|_\Sigma$ is the restriction of the $\Spinc$ bundle $\SS_M$ to $\Sigma$
and for $H_{-\frac{1}{2}} (\Sigma, \SS_M|_\Sigma)$ see Definition \ref{-12-Sobolev}. The map $R$ is not surjective. But in Theorem \ref{workaround_image}, we show that
there is an extension map $\tilde{\mathcal{E}}$ -- a right inverse to the restriction map $R: \Gamma_c^\infty(M, \SS_M) \to \Gamma_c^\infty(\Sigma, \SS_M|_\Sigma)$ -- such that $\tilde{\mathcal{E}}R$ is a bounded linear operator
from $\dom\, D_\text{max}$ to itself. The definition of $\tilde{\mathcal{E}}$ uses the extension map for closed boundaries introduced by B\"ar and Ballmann in \cite{baer_ballmann_11} as local building blocks. This will allow us to
equip $R(\dom\ D_\mathrm{max})$ with a norm $\Vert.\Vert_{\check{R}}$ that turns it into a Hilbert space. With these ingredients, we can then classify the closed extensions of the Dirac operator $D_{cc}$ acting on smooth compactly
supported spinors on $M$: for every closed extension $D$ of $D_{cc}$, the set $B:=R(\dom\, D)\subset H_{-\frac{1}{2}} (\Sigma, \SS_M|_\Sigma)$ is closed in $( R(\dom\, D_{\mathrm{max}}), \Vert.\Vert_{\check{R}})$. Conversely, every closed linear subspace $B\subset (R(\dom\, D_{\mathrm{max}}),\Vert.\Vert_{\check{R}})$ gives the domain $\dom\, D_B$ of a closed extension.
Such subspaces $B$ are called boundary conditions.\\\\
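For illustration, and in analogy with the closed case treated in \cite{baer_ballmann_11}, the two extreme choices recover the standard extensions: $B=R(\dom\, D_{\mathrm{max}})$ gives back $D_{\mathrm{max}}$ itself, while $B=\{0\}$ yields the closed extension with domain $\{\phi\in \dom\, D_{\mathrm{max}}\ |\ R\phi = 0\}$, which for closed boundaries is known to coincide with the minimal extension.\\\\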
Then, we generalize the existence result for boundary value problems to our noncompact setting. For this, we need the
notion of {\it $B$-coercivity at infinity}, see Definition \ref{coer}. This notion generalizes the
notion of {\it coercivity at infinity} for closed boundaries as used in \cite{baer_ballmann_11}, where this assumption is also needed when characterizing the Fredholmness of the Dirac operator. The {\it
$B$-coercivity at infinity} condition will in
general depend on the boundary condition $B$ and under some additional assumptions, it coincides with the
{\it coercivity at infinity} condition used in \cite{baer_ballmann_11}.
\begin{theorem}\label{intro-bvp} Let $M$ be a Riemannian $\Spinc$ manifold with boundary $\Sigma$. Let $(M,\Sigma)$ and the auxiliary line bundle $L$ over $M$ be of bounded geometry, cp. Definitions~\ref{bdd_geo} and~\ref{bdd_geo2}. Let $B\subset R(\dom\, D_{\rm max})$ be a boundary condition, and let the Dirac operator
$$D_B\colon \mathrm{dom} D_B\subset L^2(M, \SS_M) \to L^2(M, \SS_M)$$
be $B$-coercive at infinity. Let $P_B$ be a projection from $R(\dom\, D_\mathrm{max})$ to $B$.
Then, for all $\psi\in L^2(M, \SS_M)$ and $\tilde{\rho}\in \dom\, D_\mathrm{max}$
where $\psi-D\tilde{\rho}\in (\ker (D_{B})^*)^\perp$ the boundary value problem
$$\left\{
\begin{array}{rlr}
D\phi &=\psi & \text{on}\ M,\\
(\Id - P_{B})R\phi&= (\Id - P_{B})R\tilde{\rho}&\text{on}\ \Sigma,
\end{array}
\right.
$$
has a unique solution $\phi\in \dom\, D_\mathrm{max}$, up to elements of the kernel $\ker D_B$.
\end{theorem}
Note that here a projection just means a linear operator that, restricted to $B$, acts as the identity.
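As a simple illustration: taking $\tilde{\rho}=0$ in Theorem \ref{intro-bvp}, the boundary condition reads $(\Id-P_B)R\phi = 0$, i.e. $R\phi\in\ker(\Id-P_B)$. If, in addition, $P_B$ maps $R(\dom\, D_\mathrm{max})$ into $B$, then $\ker(\Id-P_B)=B$ and one solves $D\phi=\psi$ under the homogeneous condition $R\phi\in B$.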
Theorem \ref{intro-bvp} will be one of the main ingredients to generalize Inequality \eqref{hmz} to our noncompact setting. As boundary condition $B$ we will not take the APS-boundary condition as in the closed case but another one: $B_\pm$, cf. Section \ref{boundcond_B+-}. For closed boundaries, the $B_\pm$ boundary condition
was introduced in \cite{HMZ02} to prove a conformal version of \eqref{hmz}.
Using Theorem~\ref{intro-bvp} for the boundary condition $B_\pm$ and the $\Spinc$ Reilly
inequality on possibly open boundary domains, we obtain
\begin{theorem}\label{main}
Let $(M^{n+1},g)$ be a complete Riemannian $\Spinc$ manifold with boundary $\Sigma$ and $L$ be the auxiliary line bundle associated with the $\Spinc$ structure. Assume that $(M,\Sigma)$ and $L$ are of
bounded geometry. Moreover, we assume that
$\Sigma$ has nonnegative
mean curvature $H$ with respect to the inner unit normal field of $\Sigma$,
the Dirac operator $D$ is $(B_+)$- or $(B_-)$-coercive at infinity and that $ \s^M
+2\i\Omega\cdot$ is a nonnegative operator where $\i\Omega$ denotes the curvature $2$-form of $L$. Then, the
infimum $\lambda_1$ of the nonnegative part of the spectrum of the Dirac operator on $\Sigma$ satisfies
\begin{eqnarray*}
\lambda_1\geq \frac{n}{2} \inf_{\Sigma} H.
\end{eqnarray*}
If $\lambda_1 \geq 0$ is an eigenvalue, equality holds if and only if $H$ is constant and any eigenspinor corresponding
to $\lambda_1$ is the restriction of a parallel $\Spinc$ spinor $\phi$ on $M$.
\end{theorem}
The paper is structured as follows: In Section \ref{prelim}, we give all the preliminaries as e.g.
the $\Spinc$ Dirac operator and the assumption on the bounded geometry. In Section \ref{sec_trace} we review the trace and extension theorem for Sobolev spaces on manifolds of bounded geometry and appropriate noncompact boundary, the spectral decomposition of the Dirac operator on the boundary and analyze an extension map for the maximal domain of the Dirac operator.
The theory of boundary values will be generalized to
our noncompact setting in Section \ref{boundary_values}. The special boundary condition $B_\pm$ needed to prove
the desired inequality is examined in Section \ref{boundcond_B+-}. In Section \ref{section_coer}, we study the coercivity
condition for the Dirac operator. Then, we review the spinorial Reilly inequality and prove
the main inequality in Section \ref{proofmain}.
\section{Notations and preliminaries}\label{prelim}
In this section, we briefly review
some basic facts about $\Spinc$ geometry. Then, we give the necessary
preliminaries on Sobolev spaces on manifolds with boundary, the Trace
Theorem and its implications, some basics of spectral theory, and we recall the closed
range theorem.\\
\paragraph{\bf The $\Spinc$ Dirac operator.} Let $(M^{n+1}, g)$ be an $(n+1)$-dimensional
Riemannian $\Spinc$ manifold with boundary. On such a manifold we have a
Hermitian complex
vector bundle $\SS_M$ endowed with a natural scalar product $\langle., .\rangle$ and with a connection
$\nabla$ which parallelizes the metric.
Moreover, the bundle $\SS_M$, called the $\Spinc$ bundle, is endowed with a Clifford
multiplication denoted by ``$\cdot$'',
$\cdot : TM \longrightarrow \mathrm{End}_\mC (\SS_M)$, such that at every
point $x\in M$, ``$\cdot$'' defines
an irreducible
representation of the corresponding Clifford algebra. Hence, the complex rank of
$\SS_M$ is $2^{[\frac{n+1}{2}]}$. Given a $\Spinc$ structure on $(M^{n+1}, g)$, one can prove that the
determinant line bundle $\mathrm{det}\ \SS_M$ has a root of index $2^{[\frac
{n+1}{2}]-1}$, see \cite[Section 2.5]{6}. We denote
by $L$ this root
line bundle over $M$ and call it the auxiliary line bundle associated with the
$\Spinc$ structure.
Locally, a $\Spin$ structure always exists. We denote by $\SS_M^{'}$ the (possibly globally non-existent)
spinor bundle. Moreover, the square root of the
auxiliary line bundle $L$ always exists locally. But, $\SS_M = \SS_M^{'} \otimes L^{\frac 12}$, see
\cite[Appendix D]{6} and
\cite{nakadthese}. This essentially means that, while the spinor bundle and
$L^{\frac 12}$ may not exist globally, their tensor product (the $\Spinc$ bundle) is defined globally. Thus,
the connection $\nabla$ on $\SS_M$ is the twisted connection of the one on the spinor bundle (coming
from the Levi-Civita connection) and a fixed connection on $L$.\\
We denote by $\Gamma_c^\infty(M, \SS_M)$ the set of all compactly supported smooth spinors on $M$. Note that such spinors may have nonzero boundary values if
$\partial M \neq \emptyset$.
The set of smooth spinors that are compactly supported in the interior of $M$ is
denoted by $\Gamma_{cc}^\infty(M, \SS_M)$. For abbreviation, we set $L^2=L^2(M)=L^2(M, \SS_M)$ and $L^2(\Sigma)=L^2(\Sigma, \SS_M|_\Sigma)$ and
analogously for other function spaces. Moreover, $(.,.)$ shall always denote the $L^2$-scalar product on $M$
and $(.,.)_\Sigma$ the one on $\Sigma$.
With these ingredients, we may define the Dirac operator $D$ acting on the space of smooth sections of $\SS_M$ -- denoted by $\Gamma^\infty(M, \SS_M)$ -- by the composition of the metric connection and the Clifford
multiplication. In local coordinates this reads as
$$D =\sum_{j=1}^{n+1} e_j \cdot \nabla_{e_j}$$
where $\{e_j\}_{j=1,\cdots, n+1}$ is a local orthonormal basis of $TM$. It is a first-order elliptic
operator satisfying, for all smooth spinors $\varphi,\psi$ on $M$ at least one of which is compactly supported,
\begin{align}\label{L2-structure_mod_boundary}
(D\psi, \varphi)-(\psi, D\varphi)=-\int_{{\partial M}} \langle \nu\cdot\psi|_{\partial M},
\varphi|_{\partial M}\rangle ds,
\end{align}
where $(., .)$ is the $L^2$-scalar product given by $(\phi, \psi)=\int_M \langle
\phi, \psi\rangle dv$, $\partial M$ is the boundary of $M$, $|_{ \partial M}$ denotes the restriction to the boundary, $\nu$ the inner unit
normal
vector of the embedding $\partial M \hookrightarrow M$, and $dv$ (resp. $ds$) is
the Riemannian volume form of $M$
(resp. of $\partial M$). Hence, if $\partial M = \emptyset$, the Dirac operator
is formally self-adjoint with respect to the $L^2$-scalar product.\\
An important tool when examining the Dirac operator on $\Spinc$ manifolds is the
Schr\"{o}dinger-Lichnerowicz formula:
\begin{eqnarray}
D^2 = \nabla^*\nabla + \frac 14 \s^M\; \mathrm{Id}_{\Gamma (\SS_M)}+
\frac{\i}{2}\Omega\cdot,
\label{sl}
\end{eqnarray}
where $\nabla^*$ is the adjoint of $\nabla$ with respect to the $L^2$-scalar
product,
$\i\Omega$ is the curvature of the auxiliary line bundle $L$ associated with a
fixed connection ($\Omega$ is a real $2$-form
on $M$) and $\Omega\cdot$ is the extension of the Clifford multiplication to
differential forms.
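For later reference we note a standard consequence: integrating \eqref{sl} against $\phi$ on a manifold without boundary (valid for all compactly supported smooth spinors $\phi$, using \eqref{L2-structure_mod_boundary}) yields
\begin{eqnarray*}
\Vert D\phi\Vert^2_{L^2} = \Vert \nabla\phi\Vert^2_{L^2} + \frac 14 \int_M \langle (\s^M + 2\i\Omega\cdot)\phi, \phi\rangle\, dv,
\end{eqnarray*}
which already indicates why the nonnegativity of the operator $\s^M + 2\i\Omega\cdot$ enters Theorem \ref{main}; the corresponding identity with boundary terms is \eqref{Lich} below.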
\begin{example}
(i) A $\Spin$ structure can be
seen as a $\Spinc$ structure with trivial auxiliary
line bundle $L$ and trivial connection (and so $\i\Omega =0$).\\
(ii) Every almost complex manifold $(M^{2m = n+1}, g, J)$
of complex dimension $m$ has a canonical $\Spinc$ structure. In fact, the complexified cotangent bundle
$T^*M\otimes \mathbb{C} = \Lambda^{1,0} M \oplus \Lambda^{0,1}M$
decomposes into the $\pm \i$-eigenbundles of the complex linear extension of the complex structure $J$.
Thus, the spinor bundle of the canonical $\Spinc$ structure is given by $$\SS_M = \Lambda^{0,*} M =\oplus_{r=0}^m \Lambda^{0,r}M,$$
where $\Lambda^{0,r}M = \Lambda^r(\Lambda^{0,1}M)$ is the bundle of $r$-forms of type $(0, 1)$.
The auxiliary line bundle of this canonical $\Spinc$ structure is given by
$L = (K_M)^{-1}= \Lambda^m (\Lambda^{0,1}M)$, where $K_M$ is the canonical bundle of $M$ \cite{6, 19, HMU1, nakadthese}.
Let $\alpha$ be the K\"{a}hler form defined by the complex structure $J$, i.e. $\alpha (X, Y)= g(X, JY)$ for
all vector fields $X,Y\in \Gamma(TM).$ The auxiliary line bundle $L= (K_M)^{-1}$ has a canonical holomorphic
connection induced from the Levi-Civita connection whose curvature form is given by $\i\Omega= \i\rho$, where $\rho$
is the Ricci $2$-form given by $\rho(X, Y) = \mathrm{Ric} (X, JY)$. Here $\mathrm{Ric}$ denotes the Ricci tensor of $M$. For any other $\Spinc$ structure on $M^{2m}$, the spinorial bundle can be written as \cite{6, HMU1}:
$$\SS_M = \Lambda^{0,*}M\otimes\mathcal L,$$
where $\mathcal L^2 = K_M\otimes L$ and $L$ is the auxiliary bundle associated with this $\Spinc$
structure. In this case, the $2$-form $\alpha$ can be considered as an endomorphism of $\SS_M$ via
Clifford multiplication and we have the well-known orthogonal splitting $\SS_M = \oplus_{r=0}^{m}\SS_M^r,$
where $\SS_M^r$ denotes the eigensubbundle corresponding
to the eigenvalue $\i (m-2r)$ of $\alpha$, with complex rank $\binom{m}{r}$. The bundle $\SS_M^r$ corresponds
to $\Lambda^{0, r}M\otimes\mathcal L$. For the canonical $\Spinc$ structure, the subbundle $\SS_M^0$ is trivial.
Hence, when $M$ is a K\"{a}hler manifold, this $\Spinc$ structure admits parallel spinors (constant functions)
lying in $\SS_M^0$ \cite{19}. Of course, we can define another $\Spinc$ structure for which the spinor bundle is
given by
$\Lambda^{*, 0} M =\oplus_{r=0}^m \Lambda^r (T_{1, 0}^* M)$ and the auxiliary line bundle by $K_M$.
This $\Spinc$ structure is called the anti-canonical $\Spinc$ structure.\\
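As a concrete instance of the above: on the flat K\"{a}hler manifold $M=\mathbb{C}^m$ we have $\mathrm{Ric}=0$, hence $\rho=0$ and the auxiliary line bundle of the canonical $\Spinc$ structure carries a flat connection, $\i\Omega=\i\rho=0$; in particular, the constant functions in $\SS_M^0 = \Lambda^{0,0}M$ are parallel spinors.\\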
\end{example}
Any $\Spinc$ structure on $(M^{n+1}, g)$ induces a $\Spinc$ structure on its
boundary $\Sigma = \partial M$ and we have
$$\left\{
\begin{array}{l}
{\SS_M}{|_\Sigma} \ \ \ \ \simeq \SS_{\Sigma}\ \text{\ \ \ if\ $n$ is even,} \\
{\SS^+_M}{|_\Sigma}\ \ \ \ \simeq\SS_{\Sigma} \ \text{\ \ \ if\ $n$ is odd.}
\end{array}
\right.
$$
We recall that if $n$ is odd, the spinor bundle $\SS_M$ splits into
$$\SS_M = {\SS^+_M} \oplus {\SS^-_M},$$
by the action of the complex volume element. Moreover, Clifford multiplication
with a vector field $X$ tangent to $\Sigma$ is given by
$$X\bullet\phi = (X\cdot \nu\cdot \psi){|_\Sigma},$$
where $\psi \in \Gamma^\infty(M, \SS_M)$ (or $\psi \in \Gamma^\infty(\SS^+_M)$ if $n$ is odd),
$\phi$ is the restriction of $\psi$
to $\Sigma$, ``$\bullet$'' is the Clifford multiplication on $\Sigma$ and ``$\cdot$'' the one on $M$.
When $n$ is odd we also get ${\SS^-_M} \simeq
\SS_{\Sigma}$. In this case, the Clifford multiplication by a
vector field $X$ tangent to $\Sigma$
is given by $X\bullet\phi = - (X\cdot\nu\cdot\psi){|_\Sigma}$ and hence we
have
${\SS_M}{|_\Sigma} \simeq \SS_{\Sigma} \oplus\SS_{\Sigma}$.
Moreover, the corresponding auxiliary line bundle $L^\Sigma$ on $\Sigma$ is the restriction to $\Sigma$ of
the auxiliary line bundle $L$
and $\i\Omega^\Sigma = {\i\Omega}{|_\Sigma}$.
We denote by $\nabla^\Sigma$ the spinorial Levi-Civita connection on
$\SS_{\Sigma}$.
For all smooth vector fields $X\in \Gamma^\infty(T\Sigma)$ and for every smooth spinor field $\psi \in
\Gamma^\infty(M, \SS_M)$, we consider $\phi= \psi{|_\Sigma}$
and we have the following $\Spinc$ Gauss formula \cite{HMU1, nakadthese,
2ana}:
\begin{equation*}
(\nabla_X\psi){|_\Sigma} = \nabla^{\Sigma}_X \phi + \frac 12 II(X)\bullet\phi,
\end{equation*}
where $II$ denotes the Weingarten map with respect to $\nu$. Moreover, let $D$
and $D^\Sigma$ be the Dirac operators
on $M$ and $\Sigma$. After denoting any smooth spinor and its
restriction to $\Sigma$ by the same symbol, we have on $\Sigma$ (see \cite{HMU1, 2ana, nakadthese}) that
\begin{eqnarray}
\widetilde{D}^{\Sigma} \phi = \frac{n}{2}H\phi -\nu\cdot D\phi-\nabla_{\nu}\phi ,
\label{diracgauss}
\end{eqnarray}
\begin{eqnarray}
\widetilde{D}^{\Sigma}(\nu\cdot\varphi) = -\nu\cdot\widetilde{D}^{\Sigma} \varphi,
\label{d1}
\end{eqnarray}
where $H = \frac 1n \mathrm{tr}(II)$ denotes the mean curvature and
$\widetilde{D}^{\Sigma} = D^\Sigma$ if $n$ is even and $\widetilde{D}^{\Sigma}=D^\Sigma
\oplus(-D^\Sigma)$ if $n$ is odd. Note that $\sigma(\widetilde{D}^{\Sigma})=\{\pm\lambda\ |\ \lambda\in\sigma(D^\Sigma)\}$
where $\sigma(A)$ denotes the spectrum of an operator $A$.\\
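In particular, the spectrum of $\widetilde{D}^{\Sigma}$ is symmetric with respect to $0$. For eigenvalues this is a one-line consequence of \eqref{d1}: if $\widetilde{D}^{\Sigma}\varphi = \lambda\varphi$, then
$$\widetilde{D}^{\Sigma}(\nu\cdot\varphi) = -\nu\cdot\widetilde{D}^{\Sigma}\varphi = -\lambda\, \nu\cdot\varphi,$$
so Clifford multiplication by $\nu$ interchanges the eigenspaces for $\lambda$ and $-\lambda$; the statement for the whole spectrum follows analogously via the spectral theorem.\\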
\paragraph{\bf Bounded geometry.} In this paragraph, we recall the definition of manifolds of bounded geometry.
\begin{definition}\label{bdd_geo}\cite[Definition 2.2]{Schick01}
Let $(M^{n+1},g)$ be a complete Riemannian manifold with boundary $\Sigma$. We say
that $(M,\Sigma)$ is of bounded geometry if the following conditions are fulfilled:
\begin{itemize}
\item[(i)] The curvature tensor of $M$ and all its covariant derivatives are bounded.
\item[(ii)] The injectivity radius of $\Sigma$ is positive.
\item[(iii)] There is a collar around $\Sigma$, i.e: There is $r_\partial>0$ such that the geodesic collar
\[F: U_\Sigma=[0,r_\partial)\times \Sigma \to M,\ (t,x)\mapsto \exp_x(t\nu)\]
is a diffeomorphism onto its image where $\nu$ is the inner unit normal field on $\Sigma$. We equip $U_\Sigma$ with the induced metric and will identify $U_\Sigma$ with its image.
\item[(iv)] There exists $\epsilon>0$ such that the injectivity radius at each point $x\in M\setminus U_\Sigma$ is greater than or equal to $\epsilon$.
\item[(v)] The mean curvature of $\Sigma$ and all its covariant derivatives are bounded.
\end{itemize}
\end{definition}
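Standard examples, which can be checked directly from the definition, include all compact manifolds with boundary and the Euclidean half-space $\R^{n+1}_+$ with boundary $\R^n$.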
\begin{definition}\label{bdd_geo2}(cp. \cite[A.1.1]{Shubin} together with \cite[Theorem B]{Eich})
Let $E$ be a hermitian vector bundle over $M$ where $(M,\Sigma)$ is of bounded geometry. Then $E$ is said to be of bounded geometry
if its curvature and all its covariant derivatives are bounded.
\end{definition}
\begin{remark}
\begin{enumerate}
\item Note that the above definition contains the usual definition of a manifold of bounded
geometry without boundary. Moreover, if $(M,g)$ is of bounded geometry,
then $(\Sigma, g|_\Sigma)$ is also of bounded geometry \cite[Corollary 2.24]{Schick01}.
\item For the spinor bundle $\SS_M^{'}$ associated with a $\Spin$ structure, the bounded geometry follows
automatically from the bounded geometry of $M$, \cite[Section 3.1.3]{Ammha}. For a $\Spinc$ manifold the
situation is more general since the $\Spinc$ bundle $\SS_M$ depends not only
on the geometry of the underlying manifold but also on the geometry of the auxiliary line bundle $L$. But,
$\SS_M = \SS_M^{'}\otimes L^\frac{1}{2}$, where $\SS_M^{'}$
is the locally defined spinor bundle, $L^{\frac{1}{2}}$ is locally defined too and $\SS_M$ is globally defined.
Thus, the assumption that $L$ is of bounded geometry assures that $\SS_M$ is also of bounded geometry.\end{enumerate}
\end{remark}
\framebox{
{\bf Assumption for the rest of the paper:} $(M,\Sigma)$ and $L$ are of bounded geometry.}\\
\paragraph{\bf The Sobolev space $H_1$ on manifolds with boundary.}
We define the $H_1 = H_1 (M, \SS_M)$-norm on $\Gamma_{c}^\infty(M, \SS_M)$ by
\[\Vert \phi\Vert_{H_1(M, \SS_M)}^2 = \Vert \phi\Vert_{L^2(M, \SS_M)}^2 + \Vert
\nabla\phi\Vert_{L^2(M, \SS_M)}^2.\]
Finally, we define $H_1=H_1 (M, \SS_M)$ as the closure of $\Gamma_c^\infty(M,
\SS_M)$ with respect to the $H_1$-norm defined above.
Using the Lichnerowicz formula \eqref{sl},
the Gau\ss\ theorem $(\nabla^*\nabla \phi, \phi)= \Vert
\nabla\phi\Vert^2_{L^2}+\int_{\Sigma} \langle \nabla_\nu\phi,\phi\rangle ds$, \eqref{L2-structure_mod_boundary} and
\eqref{diracgauss}, we obtain another description of the $H_1$-norm: For all $\phi\in \Gamma_c^\infty(M, \SS_M)$, we have \begin{eqnarray}\label{Lich}
\Vert \phi\Vert_{H_1}^2 = \Vert \phi\Vert_{L^2}^2 + \Vert
D\phi\Vert_{L^2}^2-\int_{M} \frac{\s^M}{4}|\phi|^2 dv - \int_{M} \frac{\i}{2}
\<\Omega\cdot\phi, \phi \> dv +\int_{{\Sigma}} \langle \phi|_\Sigma, D^W (\phi|_\Sigma) \rangle ds,
\end{eqnarray}
where $D^W = \widetilde{D}^{\Sigma} -\frac n2 H$ is the so-called Dirac-Witten operator. Note that due to the local
expression of $D$, the fact that Clifford multiplication by a unit vector field is an isometry, and the pointwise Cauchy--Schwarz inequality $\left(\sum_{i=1}^{n+1} a_i\right)^2\leq (n+1)\sum_{i=1}^{n+1} a_i^2$ with $a_i=|\nabla_{e_i}\phi|$, we always have
\begin{equation}\Vert D\phi\Vert_{L^2}^2 \leq \int_M \left( \sum_{i=1}^{n+1} |\nabla_{e_i} \phi|\right)^2 dv\leq (n+1)\Vert \nabla\phi\Vert_{L^2}^2,
\label{equ_H1D_easydir}\end{equation}
for all $\phi\in H_1(M, \SS_M)$.\\
\paragraph{\bf Spectral theory.} Most of the following can be found in \cite{baer}. In this paragraph, we shortly
review the spectral theory of the Dirac operator $D\colon H_1(N, \SS_N)\subset L^2(N,
\SS_N) \to L^2(N, \SS_N)$ on a complete Riemannian $\Spinc$ manifold $N$ without boundary.
Note that we assume that $N$ is of bounded geometry, and hence the graph norm of $D$, $\Vert.\Vert_D$, and the $H_1$-norm are equivalent. Then $D$ is self-adjoint and the spectrum is real. A real number $\lambda$ is an eigenvalue of $D$ if there exists a nonzero
spinor $\varphi \in H_1$ with $D\varphi = \lambda \varphi$. Then $\varphi$ is
called an eigenspinor to the eigenvalue $\lambda$. Standard local elliptic regularity
theory gives that an eigenspinor is always smooth. The set of all eigenvalues is
denoted by $\sigma_p(D)$ -- the point spectrum. If $N$ is closed, the Dirac operator has a pure point spectrum.
But on open manifolds, the spectrum might have a continuous part. In general,
the spectrum -- denoted by $\sigma(D)$ --
is composed of the point, the continuous and the residual
spectrum. In case of a self-adjoint operator -- as we have -- there is no
residual spectrum. Often another decomposition of the spectrum is used -- the one into discrete
spectrum
$\sigma_d(D)$ and essential spectrum $\sigma_{ess}(D)$.
A real number $\lambda$ lies in the essential spectrum of $D$ if there
exists a sequence
of smooth compactly supported spinors $\varphi_i$ with $\Vert \varphi_i\Vert_{L^2}=1$ such that the $\varphi_i$ converge weakly to zero and
$$
\Vert (D - \lambda )\varphi_i \Vert_{ L^2} \longrightarrow 0.$$
The essential spectrum contains amongst other elements all eigenvalues of infinite multiplicity. In
contrast, the
discrete spectrum $\sigma_{d}(D) := \sigma_{p}(D) \smallsetminus
\sigma_{ess}(D)$ consists of all eigenvalues of finite multiplicity.\\
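A standard example illustrating these notions (stated without proof): for the classical Dirac operator on Euclidean $\R^n$ one has $\sigma(D)=\sigma_{ess}(D)=\R$ and $\sigma_p(D)=\emptyset$, i.e. there are no $L^2$-eigenspinors at all. This illustrates why, in the noncompact setting, one cannot argue via an $L^2$-eigenbasis as in the closed case.\\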
\paragraph{\bf Closed Range Theorem.} Next, we want to recall briefly (a part of) the Closed Range Theorem for later use.
\begin{thm}\label{closed_range_theorem}\cite[p.205]{Yo} Let $T: X\to Y$ be a
closed linear operator between Banach spaces $X,Y$. Then the range $\ran(T)$ of
$T$ is closed in $Y$ if and only if $\ran (T)=\ker (T^*)^\perp$ where $T^*$ is
the adjoint operator of $T$ and $\ker (T^*)$ is the kernel of $T^*$.
\end{thm}
A linear operator $T: X\to Y$ between Banach spaces is called Fredholm if its
kernel is finite dimensional and its image has
finite codimension.
\section{Trace theorems and extensions}\label{sec_trace}
We consider the restriction operator
\begin{eqnarray*}
R: \Gamma_c^\infty(M, \SS_{M}) &\to&
\Gamma_c^\infty(\Sigma, \SS_{M}|_\Sigma)\\
\phi&\mapsto& \phi|_{\Sigma}.
\end{eqnarray*}
When it is clear from the
context that $R\phi$ is considered instead of $\phi$, we will sometimes abbreviate
and simply write $\phi$.
The first part of this section is devoted to showing how the restriction operator $R$ extends to a bounded linear operator between the Sobolev spaces $H_1(M, \SS_M)$ and $H_{\frac{1}{2}} (\Sigma, \SS_M|_\Sigma)$. This result is known as the Trace Theorem and is very classical for $\R^n_+$ and for compact manifolds with boundary. After reviewing the Euclidean result and the basic definitions, we briefly explain how it extends to manifolds $(M,\Sigma)$ of bounded geometry. In particular, the restriction operator will have a bounded linear right inverse -- the so-called extension operator $\mathcal{E}$.
For more details on the definition of bounded geometry on manifolds with boundary see \cite{Schick01}. For the equivalence of all those different definitions of Sobolev-norms involved here and the corresponding theorems for submanifolds (not necessarily hypersurfaces) see \cite{conny}.
For our purpose, Sobolev spaces will not be sufficient later on, as the maximal domain of the Dirac operator is bigger than $H_1(M, \SS_M)$. The rest of this section is devoted to defining an extension operator $\tilde{\mathcal{E}}$ such that $\tilde{\mathcal{E}}R: \Gamma^\infty_{c}(M, \SS_M)\to \Gamma^\infty_{c}(M, \SS_M)$ extends to a bounded operator w.r.t. the graph norm of $D$. For the definition of $\tilde{\mathcal{E}}$ we will use the special extension map introduced by B\"ar and Ballmann in \cite{baer_ballmann_11} for closed boundaries.
\subsection{Trace and Extension for Sobolev spaces}
\paragraph{\bf Trace Theorem for functions on $\mathbb{R}_{+}^{n+1}=\{ (x_0,x_1,\ldots, x_n)\in \mathbb{R}^{n+1}\ |\ x_0\geq 0\}$.}
We identify the boundary of $\R_+^{n+1}$ with $\R^n$. First we repeat the definition of the Sobolev spaces $H_s(\R^n, \mC^r)$:
\begin{definition}\label{-12-Sobolev-euclidean}\cite[Definition 3.1]{taylor_81}
Let $s\in \mathbb{R}$. The $H_s:=H_{s}^2$-norm of a compactly supported function
$f:\mathbb{R}^n \mapsto \mathbb{C}^r$ is defined as
\[ \Vert f\Vert_{H_s(\mathbb{R}^n, \mathbb{C}^r)}^2:= \int_{\mathbb{R}^n} \left|\hat{f}(\xi)\right|^2 (1+|\xi|^2)^s d \xi\] where
$\hat{f}(\xi):=(2\pi)^{-\frac{n}{2}} \int_{\mathbb{R}^n} e^{-ix\cdot \xi} f(x)
dx$ denotes the Fourier transform of $f$. The space $H_s(\mathbb{R}^n, \mathbb{C}^r)$ is then defined as the completion of
$\Gamma_c^\infty(\mathbb{R}^n, \mathbb{C}^r)$, the space of smooth compactly supported functions on $\mathbb{R}^n$ with values in $\mathbb{C}^r$, with respect to the $H_s$-norm.
\end{definition}
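For instance, for $s=1$, Plancherel's theorem together with $\widehat{\partial_j f}(\xi)=\i\xi_j\hat{f}(\xi)$ gives
\[ \Vert f\Vert_{H_1(\mathbb{R}^n, \mathbb{C}^r)}^2= \int_{\mathbb{R}^n}\left|\hat{f}(\xi)\right|^2 (1+|\xi|^2)\, d \xi = \Vert f\Vert_{L^2}^2+\Vert \nabla f\Vert_{L^2}^2,\]
in accordance with the $H_1$-norm introduced in Section \ref{prelim}.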
The spaces $H_s(\R^{n+1}_+, \mathbb{C}^r)$ are defined analogously.
\begin{thm}\, \cite[p.138, Remark 1]{TF},\cite[Theorem I.3.4]{taylor_81}, \cite[Theorem 7.34 and 7.36]{RenRog} Let $s>\frac{1}{2}$. The restriction map for complex-valued smooth functions $R: \Gamma_c^\infty(\R^{n+1}_+)\to \Gamma_c^\infty(\R^{n})$, $f\mapsto f|_{\R^n}$, extends to a bounded linear operator from $H_s(\R^{n+1}_+)$ to $H_{s-\frac{1}{2}} (\R^{n})$. Moreover, there is an extension operator $\mathcal{E}: H_{s-\frac{1}{2}} (\R^{n})\to H_s(\R^{n+1}_+)$ that is a bounded linear operator and a right inverse to $R$.
\end{thm}
The generalization of this theorem to vector-valued Sobolev spaces follows immediately by the following: Let $f=(f_1, \ldots, f_r): \mathbb{R}^n\to
\mathbb{C}^r$. Then the norms $\Vert f\Vert_{H_{s}(\mathbb{R}^n, \mathbb{C}^r)}$
and $\sum_{i=1}^r \Vert f_i\Vert_{H_{s}(\mathbb{R}^n, \mathbb{C})}$ are
equivalent.\\
\paragraph{\bf Trace Theorem on manifolds of bounded geometry.}
From now on, let $M$ be a Riemannian manifold possibly with boundary and of
bounded geometry, as in Definition \ref{bdd_geo}. Moreover, let $E$ be a
hermitian vector bundle over $M$. We assume that $E$ is also of bounded
geometry, see Definition \ref{bdd_geo2}.
In order to obtain a trace theorem for sections in $E$ we need coordinates of the manifold that are adapted to the structure of the boundary. Those will be Fermi coordinates together with an adapted synchronous trivialization of $E$. This will allow us to use the trace theorem on $\R^n_+$ on the individual charts to obtain the trace theorem on $(M,\Sigma)$.
In the following, we restrict to trace theorems for Sobolev spaces over $L^2$; for more general versions, such as Sobolev spaces over $L^p$ or Triebel-Lizorkin spaces, see \cite{conny}.
Before we define Sobolev spaces for sections of $E$, we introduce Fermi coordinates adapted to the boundary and a corresponding synchronous trivialization of the vector bundle:
\begin{definition}\cite[Definition 4.3 and Lemma 4.4]{conny},\label{def_synch}\cite[Definition 2.3]{Schick01}
Let $(M^n,\Sigma)$ be of bounded geometry, see Definition \ref{bdd_geo} and the notions defined therein.
Let $r=\min\{ \frac{1}{2} r_{\Sigma}, \frac{1}{4}r_M, \frac{1}{2}r_\partial\}$ where $r_{\Sigma}$ is the injectivity radius of ${\Sigma}$ and $r_M$ the one of $M$. Let $p^{\Sigma}_\alpha\in {\Sigma}$ and $p_\beta\in M$ be points such that
\begin{itemize}
\item the metric balls $B_{r}^{\Sigma}(p^{\Sigma}_\alpha)$ in ${\Sigma}$ (i.e. w.r.t. the metric $g|_{\Sigma}$) give a uniformly locally finite cover of ${\Sigma}$
\item the metric balls $B_{r}(p_\beta)$ in $M$ cover $M\setminus U_r(\Sigma)$ where $U_r(\Sigma):=F([0,r)\times \Sigma)$ and those balls are uniformly locally finite on all of $M$.
\end{itemize}
Let $(U_\gamma)_{\gamma}$ be a locally finite covering of $M$ where each $U_\gamma$ is of the form $B_{r}(p_\beta)$ or $U^{\Sigma}_{p_\alpha^{\Sigma}}=F([0,2r) \times B^{\Sigma}_{2r} (p_\alpha^{\Sigma}))$. Coordinates on $U_\gamma$ are chosen to be geodesic normal coordinates around $p_\beta$ in case $U_\gamma=B_{r}(p_\beta)$. Otherwise coordinates are given by Fermi coordinates \[ \kappa_\alpha: [0, 2r) \times B_{2r}(0)\subset \mathbb{R}^{n} \to U^{\Sigma}_{p_{\alpha}^{\Sigma}},\quad (t,x)\mapsto \exp_{\exp^{\Sigma}_{p_{\alpha}^{\Sigma}}(x)} (t\nu)\] where $\nu$ is the inner normal field of $\Sigma$ and $\exp^{\Sigma}$ is the exponential map on ${\Sigma}$ w.r.t. the induced metric.
We call such coordinates $(U_\gamma, \kappa_\gamma)_\gamma$ Fermi coordinates for $(M,\Sigma)$. If $U_\gamma=B_r(p_\beta)$, $E|_{U_\gamma}$ is trivialized via parallel transport along radial geodesics and we
identify $E|_{U_\gamma}$ with the trivial $\mathbb{C}^r$-bundle over $U_\gamma$.
Otherwise, $E|_{U_\gamma}$ is trivialized via parallel transport along radial geodesics of the boundary and along the normal direction. The obtained trivialization is denoted by $(\xi_\gamma)_\gamma$.
\end{definition}
In the case of manifolds without boundary, the definition of $\xi_\gamma$ in Definition \ref{def_synch} is the usual definition
of synchronous trivialization as found in
\cite[Section 3.1.3]{Ammha}. Note that by construction the restriction of a synchronous trivialization of $E$ over a manifold $M$ to its boundary $\Sigma$ gives
a synchronous trivialization of $E|_{\Sigma}$.
\begin{lem}\label{part_un}\cite[Lemma 4.8]{conny} There is a partition of unity $h_\gamma$ subordinated to the Fermi coordinates introduced above fulfilling: For all $k\in \mN$ there is $c_k>0$ such that for all $\gamma$ and all multi-indices $\mathfrak{a}=(\mathfrak{a}_1,\ldots, \mathfrak{a}_n)$ with $|\mathfrak{a}|\leq k$
\[|D^\mathfrak{a} (h_\gamma\circ \kappa_\gamma)|\leq c_k.\]
Here, $D^\mathfrak{a}= \frac{\partial^{\mathfrak{a}_1}}{(\partial x_1)^{\mathfrak{a}_1}}\cdots \frac{\partial^{\mathfrak{a}_n}}{(\partial x_n)^{\mathfrak{a}_n}}$ where $x_i$ are the coordinates.
\end{lem}
Now we have all the ingredients to define Sobolev spaces on $E$ via local pullback to vector valued functions over $\mathbb{R}^n$:
\begin{definition}\label{-12-Sobolev}\cite[Definition 5.9]{conny}
Let $s\in \mathbb{R}$. Let $(U_\gamma)_{\gamma}$ be a covering of $M$ together with a synchronous trivialization $\xi_\gamma$ of $E$ as
defined above.
Moreover, let the covering be locally finite, and let $h_\gamma$ be a partition of
unity subordinated to $U_\gamma$ as in Lemma \ref{part_un}.
Then
\[ \Vert \phi \Vert_{H_s(M,E)}^2:=\sum_{\gamma} \Vert \xi_{\gamma*}(h_\gamma
\phi)\Vert_{H_s(\mathbb{R}_+^n,\mathbb{C}^r)}^2.\]
\end{definition}
Note that up to equivalence the $H_s$-norm does not depend on the choices of $(U_\gamma, h_\gamma, \xi_\gamma)$, cp. \cite[Theorem 4.9, 5.11 and Lemma 5.13]{conny}.
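As a plausibility check: if $\phi$ is supported in a single chart $U_{\gamma_0}$, then only the finitely many $\gamma$ with $U_\gamma\cap U_{\gamma_0}\neq\emptyset$ contribute to the sum, and by the quoted independence results $\Vert \phi\Vert_{H_s(M,E)}$ is equivalent (with constants independent of $\gamma_0$) to $\Vert \xi_{\gamma_0 *}\phi\Vert_{H_s(\mathbb{R}_+^n,\mathbb{C}^r)}$.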
\begin{remark}\label{rem_app}
\begin{itemize}
\item[(i)] For $s\in \mathbb{N}$ the definition of $H_s(M,E)$ from above
is equivalent to the usual definition given by
\[ \Vert \phi \Vert_{H_s(M,E)}:=\sum_{i=0}^s \Vert \underbrace{\nabla^E\cdots
\nabla^E}_{i\ \mathrm{times}} \phi\Vert_{L^2(M,E)},\]
cp. \cite{Schick01}, \cite[Theorem 5.7]{conny}.
\item[(ii)] For $s\leq t$ we have $\Vert \phi\Vert_{H_s(M,E)} \leq \Vert \phi\Vert_{H_t(M,E)}$. That is seen for $M=\mathbb{R}^n_+$
immediately using $(1+|\xi|^2)^s\leq (1+|\xi|^2)^t$. For general $M$, one just lifts this result by using a partition of unity and a synchronous
trivialization.
\item[(iii)] Let $D^\Sigma: \Gamma_c^\infty (\Sigma, \SS_{\Sigma}) \to
\Gamma_c^\infty (\Sigma, \SS_{\Sigma})$ be the Dirac operator on $\Sigma$. For any
$s\in \mathbb{R}$, there is a unique closed extension of $D^\Sigma$ mapping $H_{s}(\Sigma,
\SS_\Sigma)$ to $H_{s-1}(\Sigma, \SS_\Sigma)$.
\end{itemize}
\end{remark}
\begin{theorem}\label{trace_theorem}
Let $M^{n}$ be a Riemannian manifold with boundary $\Sigma$. Assume that $(M, \Sigma)$
is of bounded geometry and that $E$ is a hermitian vector bundle over $M$ that
is also of bounded geometry. Then, for all $s\in\mathbb{R}$ with $s> \frac{1}{2}$ the operator $R:
\Gamma_c^\infty(M, E) \to \Gamma_c^\infty(\Sigma, E|_\Sigma)$ with $\phi\mapsto \phi|_\Sigma$
extends to a bounded linear operator from $H_{s}(M, E)$ to $H_{s-\frac{1}{2}}(\Sigma,
E|_\Sigma)$. Moreover, there is a bounded right inverse $\mathcal{E}: H_{s-\frac{1}{2}}(\Sigma, E|_\Sigma)\to H_s(M,E)$ of the trace map $R: H_s(M,E)\to
H_{s-\frac{1}{2}}(\Sigma, E|_\Sigma)$. In particular, $\mathcal{E} (\Gamma_c^\infty(\Sigma, E|_\Sigma)) \subset \Gamma_c^\infty(M, E)$.
\end{theorem}
\begin{proof} This theorem is a special case of \cite[Theorem 5.14]{conny}. We shortly sketch the basic idea:
We choose a covering $U_\gamma$ together with a synchronous trivialization $\xi_\gamma$ of $E$ and
a subordinated partition of unity $h_\gamma$ as in Definition \ref{def_synch} and Lemma \ref{part_un}. The restrictions
$U_\gamma\cap \Sigma$ then cover $\Sigma$.
Let $\phi\in H_s(M,E)$. Then, for all $\gamma$ we have
$\xi_{\gamma*}(h_\gamma \phi)\in H_s(\mathbb{R}_+^n, \mathbb{C}^r)$. Thus, there
exists a $C>0$ with $\Vert R(\xi_{\gamma*}(h_\gamma
\phi))\Vert_{H_{s-\frac{1}{2}}(\mathbb{R}^{n-1}, \mathbb{C}^r)}\leq C \Vert
\xi_{\gamma*}(h_\gamma \phi)\Vert_{H_{s}(\mathbb{R}_+^n, \mathbb{C}^r)}$.
With
$R(\xi_{\gamma*}(h_\gamma \phi))=\xi_{\gamma*}(h_\gamma R\phi)$
we get after summing up that $\Vert R\phi\Vert_{H_{s-\frac{1}{2}}(\Sigma, E|_\Sigma)}\leq C \Vert
\phi\Vert_{H_{s}(M,E)}$ since $\xi_\gamma$ restricted to the boundary is still a synchronous trivialization
for $E|_\Sigma$.
The rest is proven analogously to the Euclidean Trace Theorem, using the original Euclidean version of the extension map $\mathcal{E}: H_{s-\frac{1}{2}}(\R^{n-1})\to H_s(\R^n_+)$. The last inclusion follows immediately from $\mathcal{E} (\Gamma_c^\infty(\R^{n-1})) \subset \Gamma_c^\infty(\R^n_+)$. \end{proof}
The last theorem immediately gives
\begin{corollary}\label{H1_ER}
The map $\mathcal{E}R: \Gamma_c^\infty(M, E)\to \Gamma_c^\infty(M,E)$ extends to a bounded linear map $\mathcal{E}R: H_s(M, E)\to H_s(M,E)$ for all $s>\frac{1}{2}$.
\end{corollary}
\begin{lemma}\label{pairing-Sobolev} The $L^2$-product $(\phi, \psi)_\Sigma= \int_\Sigma \langle \phi,\psi\rangle ds$ for $\phi, \psi\in \Gamma_c^\infty (\Sigma, E|_\Sigma)$ extends to a perfect pairing
$H_s(\Sigma,E|_\Sigma) \times H_{-s}(\Sigma,E|_\Sigma)\to \mathbb{C} $ for all $s\in \mathbb{R}$.
\end{lemma}
\begin{proof}
This is also proven in the same way as above -- by lifting the corresponding result from the Euclidean case \cite[Section I.3]{taylor_81}.
\end{proof}
The Trace Theorem now allows us to extend the domain of validity of Equalities \eqref{Lich} and \eqref{L2-structure_mod_boundary}.
\begin{lem}\label{extequ} For all $\phi,\psi\in H_1(M, \SS_M)$, Equalities \eqref{Lich} and \eqref{L2-structure_mod_boundary} hold.
\end{lem}
\begin{proof} The proof is a more or less straightforward application of the Trace Theorem \ref{trace_theorem} and the corresponding equalities on $\Gamma_c^\infty (M,\SS_M)$. Indeed, let $\phi_i$ be a sequence in $\Gamma_c^\infty(M, \SS_M)$ with $\phi_i\to \phi$ in
$H_1(M, \SS_M)$. The Trace Theorem \ref{trace_theorem} gives $R\phi_i\to R\phi$ in
$H_{\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)$ and, hence, $\widetilde{D}^{\Sigma} R\phi_i\to \widetilde{D}^{\Sigma}R\phi$ in
$H_{-\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)$, cf. Remark \ref{rem_app}.iii. Clearly,
$\Vert \phi_i-\phi\Vert_{H_1}\to 0$ and with \eqref{equ_H1D_easydir}, this implies
$\Vert \phi_i-\phi\Vert_{D}\to 0$. Moreover, the bounded geometry of $(M, \Sigma)$
implies
$$\int_{M}
{\s^M}|\phi_i|^2 dv \to \int_{M} {\s^M}|\phi|^2 dv,\
\int_{\Sigma} H|\phi_i|^2 ds \to \int_{\Sigma} H|\phi|^2 ds, \text{\ and}$$
$$\left|
\int_{M} \<\Omega\cdot\phi_i, \phi_i \> dv - \int_{M} \<\Omega\cdot\phi, \phi \>
dv\right| \leq \left( \Vert \phi_i-\phi\Vert_{L^2}\Vert \phi\Vert_{L^2}+\Vert \phi_i\Vert_{L^2}\Vert \phi_i-\phi\Vert_{L^2}\right)\sup_M |\Omega|\to
0.$$
Note that due to the bounded geometry of $L$, $\sup_M |\Omega|$ is finite. It remains to consider the term $\int_{\Sigma} \langle R\phi,
\widetilde{D}^{\Sigma}R\phi\rangle ds$. First we note that due to the pairing in Lemma
\ref{pairing-Sobolev}, the Trace Theorem \ref{trace_theorem}, and $\widetilde{D}^{\Sigma}: H_{\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)
\to H_{-\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)$, this expression is finite for
all $\phi\in H_{1}(M, \SS_M)$. Abbreviating $R\phi$ by $\phi$, we have
\begin{align*}
| (\widetilde{D}^{\Sigma}\phi_i,\phi_i)_\Sigma - (\widetilde{D}^{\Sigma}\phi,\phi)_\Sigma| & \leq
| (\widetilde{D}^{\Sigma}\phi_i, \phi - \phi_i)_\Sigma| +
|(\widetilde{D}^{\Sigma}\phi-\widetilde{D}^{\Sigma}\phi_i,\phi)_\Sigma|\\
& \leq \Vert \widetilde{D}^{\Sigma}\phi_i\Vert_{H_{-\frac{1}{2}}} \Vert \phi -
\phi_i\Vert_{H_{\frac{1}{2}}} + \Vert
\widetilde{D}^{\Sigma}\phi-\widetilde{D}^{\Sigma}\phi_i\Vert_{H_{-\frac{1}{2}}} \Vert \phi
\Vert_{H_{\frac{1}{2}}},
\end{align*}
which gives the convergence of the last term. This proves Equality \eqref{Lich} for all $\phi\in H_1(M, \SS_M)$.
Now, let $\phi_i,\psi_j$ be sequences in $\Gamma_c^\infty(M, \SS_M)$ with $\phi_i\to \phi$ and $\psi_j\to \psi$ in
$H_1(M, \SS_M)$. Then,
\begin{align*} |(D\psi_j,\phi_i)-(D\psi,\phi)|&\leq |(D\psi_j,\phi_i)-(D\psi_j,\phi)|+|(D\psi_j,\phi)-(D\psi,\phi)|\\
&\leq \Vert D\psi_j\Vert_{L^2} \Vert\phi_i-\phi\Vert_{L^2}+\Vert D(\psi_j-\psi)\Vert_{L^2} \Vert\phi\Vert_{L^2}.
\end{align*}
Using \eqref{equ_H1D_easydir} and that $\phi_i$ and $\psi_j$ are uniformly bounded in $H_1$, we get for a certain constant $C>0$ that \[ |(D\psi_j,\phi_i)-(D\psi,\phi)|\leq C\Vert \phi_i-\phi\Vert_{L^2}+ C\Vert \psi_j-\psi\Vert_{H_1}\to 0.\]
Analogously, one obtains $(\psi_j, D\phi_i)\to (\psi, D\phi)$. Moreover, using again the Trace Theorem \ref{trace_theorem},
we get
\begin{align*} \left| \int_{\Sigma} \langle \nu\cdot R\psi_j, R\phi_i\rangle - \langle \nu\cdot R\psi_j, R\phi\rangle ds\right|&\leq \Vert R\psi_j\Vert_{L^2(\Sigma)} \Vert R(\phi_i-\phi)\Vert_{L^2(\Sigma)}\\
&\leq C \Vert \psi_j\Vert_{H_1} \Vert \phi_i-\phi\Vert_{H_1}\to 0.
\end{align*}
In the same way, $ \left| \int_{\Sigma} \langle \nu\cdot R\psi_j, R\phi\rangle -
\langle \nu\cdot R\psi, R\phi\rangle ds\right|\to 0$. Hence, \[ \left| \int_{\Sigma} \langle \nu\cdot R\psi_j, R\phi_i\rangle - \langle \nu\cdot R\psi, R\phi\rangle ds\right|\to 0.\]
This proves Equality \eqref{L2-structure_mod_boundary} for all $\phi, \psi\in H_1(M, \SS_M)$.
\end{proof}
\subsection{Extension and the graph norm}
\paragraph{\bf Spectral decomposition of the boundary}
Let $(M,\Sigma)$ be of bounded geometry. Then, $(\Sigma, g|_\Sigma)$ is complete and, thus, the Dirac operator $D^\Sigma$ on $\SS_\Sigma$, and thus also $\widetilde{D}^{\Sigma}$ on $\SS_M|_\Sigma$, is self-adjoint.
Let $\{E_I\}_{I\subset \mathbb{R}}$ be the projection-valued measure
associated with the self-adjoint operator $$\widetilde{D}^{\Sigma}: H_1(\Sigma,
\SS_M|_\Sigma) \subset L^2(\Sigma, \SS_M|_\Sigma)\to L^2(\Sigma,
\SS_M|_\Sigma).$$
We define for a connected (not necessarily bounded) interval $I\subset \R$ the spectral projection
\begin{align*}
\pi_{I}: L^2(\Sigma, \SS_M|_{\Sigma})\to L^2(\Sigma,
\SS_M|_{\Sigma}),\ \phi\mapsto E_{I}\phi
\end{align*}
and the spaces
\[\Gamma^{\rm APS}_{I}=\{ \phi\in L^2(\Sigma, \SS_M|_{\Sigma})\ |\
\phi=\pi_{I} \phi \}.\]
Next we will show that for every $s\in \mathbb{R}$ the spectral projections extend to bounded linear maps from $H_s(\Sigma, \SS_M|_\Sigma)$ to itself: Firstly, we note that the spectral projections commute with $\widetilde{D}^{\Sigma}$. Moreover, since $(\Sigma, g|_\Sigma)$ has bounded geometry, the norm $\left(\Vert \phi\Vert_{L^2}^2+ \Vert (\widetilde{D}^{\Sigma})^k \phi\Vert_{L^2}^2\right)^{\frac{1}{2}}$ and the $H_k$-norm are equivalent on $\Gamma_c^\infty(\Sigma, \SS_M|_\Sigma)$ for $k\in \mathbb{N}_0$, cp. \cite[Lemma 3.1.6]{Ammha}.
Hence, $\pi_I$ restricts to a bounded linear map from $H_k(\Sigma, \SS_M|_\Sigma)$ to itself for $k\in \mathbb{N}_0$. Let now $k$ be a negative integer, $\phi\in \Gamma_c^\infty(\Sigma, \SS_M|_\Sigma)$ and $\psi\in H_{-k}(\Sigma, \SS_M|_\Sigma)$. Using that $\pi_I$ is symmetric w.r.t. $L^2$-product on $(\Sigma, \SS_M|_\Sigma)$ and Lemma \ref{pairing-Sobolev} we get
\begin{align*}
|(\pi_I \phi, \psi)_\Sigma|=|( \phi, \pi_I \psi)_\Sigma|\leq C\Vert \phi\Vert_{H_{k}(\Sigma)}\Vert \pi_I \psi\Vert_{H_{-k}(\Sigma)}\leq C'\Vert \phi\Vert_{H_{k}(\Sigma)}\Vert \psi\Vert_{H_{-k}(\Sigma)}.
\end{align*}
Thus, $\pi_I$ extends to a bounded linear map from $H_k(\Sigma, \SS_M|_\Sigma)$ to itself for all integers $k$. Then, by the Riesz--Thorin Interpolation Theorem, we get that $\pi_I: H_s(\Sigma, \SS_M|_\Sigma)\to H_s(\Sigma, \SS_M|_\Sigma)$ is bounded for all $s\in \mathbb{R}$.
We abbreviate $\pi_{>}=\pi_{(0,\infty)}$ and $\pi_{\leq}=\pi_{(-\infty, 0]}$. As in \cite[Section 5]{baer_ballmann_11}, we define for $\phi\in \Gamma_c^\infty(\Sigma, \SS_M|_\Sigma)$
\begin{align*}
\Vert \phi\Vert_{\check{H}}^2&= \Vert \pi_{\leq }\phi\Vert_{H_\frac{1}{2}(\Sigma)}^2 + \Vert \pi_{>}\phi\Vert_{H_{-\frac{1}{2}}(\Sigma)}^2\ \text{\ and\ }
\Vert \phi\Vert_{\hat{H}}^2= \Vert \pi_{\leq }\phi\Vert_{H_{-\frac{1}{2}}(\Sigma)}^2 + \Vert \pi_{>}\phi\Vert_{H_{\frac{1}{2}}(\Sigma)}^2
\end{align*}
and the spaces \begin{align}\label{H_spaces}
\check{H}:=\overline{\Gamma_c^\infty(\Sigma, \SS_M|_\Sigma)}^{\Vert.\Vert_{\check{H}}}\
\text{\ and\ }\ \hat{H}:=\overline{\Gamma_c^\infty(\Sigma, \SS_M|_\Sigma)}^{\Vert.\Vert_{\hat{H}}}.\end{align}
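If the $H_{\pm\frac{1}{2}}$-norms are realized via the operator $(1+(\widetilde{D}^{\Sigma})^2)^{\pm\frac{1}{4}}$, as in \cite[Section 5]{baer_ballmann_11} for closed boundaries (for a noncompact boundary of bounded geometry this is only an equivalent norm), these norms have a transparent description in terms of the spectral measure:
\begin{align*}
\Vert \phi\Vert_{\check{H}}^2= \int_{-\infty}^{0} (1+\lambda^2)^{\frac{1}{2}}\, d\Vert E_{(-\infty,\lambda]}\phi\Vert_{L^2}^2 + \int_{0}^{\infty} (1+\lambda^2)^{-\frac{1}{2}}\, d\Vert E_{(-\infty,\lambda]}\phi\Vert_{L^2}^2,
\end{align*}
and analogously for $\Vert \phi\Vert_{\hat{H}}^2$ with the exponents $\pm\frac{1}{2}$ interchanged.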
\paragraph{\bf Local description of the graph norm on $(M,\Sigma)$.}
Let $(M,g)$ be a manifold with boundary $\Sigma$. Let $(U_\gamma, \kappa_\gamma, \xi_\gamma, h_\gamma)_\gamma$ be Fermi coordinates on $(M,g)$ together with a synchronous trivialization $\xi_\gamma$ and a partition of unity $h_\gamma$.
\begin{lemma} \label{lem_equiv_cutoff}
On $\Gamma^\infty_c(M, \SS_M)$ the norms $\Vert.\Vert_D$ and $\left( \sum_\gamma \Vert h_\gamma .\Vert_D^2\right)^\frac{1}{2}$ are equivalent.
\end{lemma}
\begin{proof} All the constants $c_i$ involved here are positive.
Let $\phi\in \Gamma_c^\infty(M,\SS_M)$. Since the cover $U_\gamma$ is uniformly locally finite, the norms $\Vert.\Vert_{L^2}$ and $\left(\sum_\gamma \Vert h_\gamma . \Vert_{L^2}^2\right)^{\frac{1}{2}}$ are equivalent.
Thus,
\begin{align*}
\Vert D\phi\Vert_{L^2}^2& \leq c_1 \sum_\gamma \Vert h_\gamma D\phi\Vert_{L^2}^2=c_1 \sum_\gamma \Vert D(h_\gamma \phi)-\nabla h_\gamma\cdot \phi\Vert_{L^2}^2\\
&\leq c_2 \sum_\gamma (\Vert D(h_\gamma \phi)\Vert_{L^2}^2 + \Vert \nabla h_\gamma\cdot \phi\Vert_{L^2}^2)\leq c_3 \sum_\gamma (\Vert D(h_\gamma \phi)\Vert_{L^2}^2 + \Vert \phi|_{U_\gamma}\Vert_{L^2}^2)\\
&\leq c_3\sum_\gamma \Vert D(h_\gamma \phi)\Vert_{L^2}^2 + c_4 \Vert \phi\Vert_{L^2}^2
\end{align*}
where the end of the second line follows by Lemma \ref{part_un}, and the last inequality follows since the cover $U_\gamma$ is uniformly locally finite. Hence, $\Vert \phi\Vert_D^2\leq c_5 \sum_\gamma \Vert h_\gamma \phi\Vert_D^2$.
Conversely we get analogously
\begin{align*}
\sum_\gamma \Vert D(h_\gamma \phi)\Vert_{L^2}^2&= \sum_\gamma \Vert h_\gamma D \phi+\nabla h_\gamma\cdot \phi\Vert_{L^2}^2\leq c_6 \Vert \phi\Vert_{D}^2.
\end{align*} \end{proof}
\begin{lemma}\label{local_inv}
Let $(\Sigma, g|_\Sigma)$ be a manifold of bounded geometry. Then, there is an $\epsilon>0$ smaller than the injectivity radius of $\Sigma$ and a constant $c>0$ such that for all $x\in \Sigma$ and $\phi\in \Gamma^\infty_c(B_\epsilon(x)\subset \Sigma, \SS_\Sigma)$ we have $\Vert D^\Sigma\phi\Vert_{L^2}>c\Vert \phi\Vert_{L^2}$.
\end{lemma}
\begin{proof} Let $\exp^{\Sigma}_x: B_\epsilon(0)\subset \R^n\to B_\epsilon(x)\subset \Sigma$ be the exponential map. Set $\tilde{g}:= (\exp^{\Sigma}_x)^* g|_{B_{\epsilon}(x)}$. We will compare the Dirac operator $D^{\tilde{g}}$ with $D^E$, \cite[Proposition 3.2]{AGHM}:
\[ D^{\tilde{g}}\phi = D^E\phi + \sum_{ij} (b_i^j-\delta_i^j)\partial_i \cdot \nabla_{\partial_j}\phi +\frac{1}{4}\sum_{ijk} \tilde{\Gamma}_{ij}^k e_i\cdot e_j\cdot e_k\cdot \phi\]
where $\phi$ is a smooth spinor over $ B_{\epsilon}(0)$, $\partial_i$ and $e_i$ form an orthonormal basis w.r.t. the Euclidean metric and $\tilde{g}$, respectively. Moreover, $e_i=b_i^j\partial_j$, $\nabla$ is the Levi-Civita connection w.r.t. the Euclidean metric, and $\tilde{\Gamma}_{ij}^k$ are the Christoffel symbols w.r.t. the metric $\tilde{g}$.
By \cite[(11)-(13) and below]{AGHM} $|b_i^j-\delta_i^j|\leq C r^2$ and $|\tilde{\Gamma}_{ij}^k|\leq Cr$ where $r$ is the Euclidean distance to the origin and $C$ can be chosen to only depend on the global curvature bounds of $g$.
Moreover, note that there is a positive constant $C$ also depending only on the global curvature bounds of $g$ such that $C^{-1}\leq f\leq C$ where ${\rm dvol}_{\tilde{g}}= f{\rm dvol}_{g_E}$.
Thus, for $\epsilon$ small enough we can estimate for all smooth spinors $\phi$ compactly supported in $B_\epsilon(0)$
that
\begin{align*}
\frac{\Vert D^{\tilde{g}} \phi\Vert_{L^2(\tilde{g})}^2 }{\Vert \phi\Vert_{L^2(\tilde{g})}^2}
&\geq C_1 \frac{\Vert D^E \phi\Vert_{L^2(g_E)}^2}{\Vert \phi\Vert_{L^2(g_E)}^2} - C_2\epsilon^2 \frac{\Vert \nabla \phi\Vert_{L^2(g_E)}^2}{\Vert \phi\Vert_{L^2(g_E)}^2} - C_3\epsilon\\
&\geq C_4 \frac{\Vert D^E \phi\Vert_{L^2(g_E)}^2}{\Vert \phi\Vert_{L^2(g_E)}^2} - C_5\epsilon
\end{align*}
where the last step uses the equivalence of the graph norm and the $H_1$-norm.
Let $A>0$ be such that $\Vert D^E \psi\Vert_{L^2(g_E)}^2 \geq A \Vert \psi\Vert_{L^2(g_E)}^2$ for all smooth spinors $\psi$ compactly supported in $B_{\epsilon}(0)$. Then one can always choose $\epsilon$ small enough that $C_4A-C_5\epsilon\geq 2^{-1}C_4A=:c$.
Thus, the same is true for $D^g$ on $B_\epsilon(x)\subset \Sigma$.
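Note that a constant $A$ as above exists and can even be chosen proportional to $\epsilon^{-2}$: on flat space the Lichnerowicz formula gives $\Vert D^E\psi\Vert_{L^2(g_E)}^2=\Vert \nabla\psi\Vert_{L^2(g_E)}^2$ for all smooth spinors $\psi$ compactly supported in $B_\epsilon(0)$, and applying the Poincar\'e inequality componentwise yields
\[ \Vert D^E \psi\Vert_{L^2(g_E)}^2 \geq \frac{C_n}{\epsilon^2}\, \Vert \psi\Vert_{L^2(g_E)}^2\]
with a constant $C_n>0$ depending only on the dimension.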
\end{proof}
Let $(\hat{M}, \hat{N}=\partial \hat{M})$ be a manifold of bounded geometry with closed boundary.
Let $\mathcal{E}_{\rm BB}$ be an extension map as defined in \cite[(43)]{baer_ballmann_11}.
Let $D$ and $D^{\hat{N}}$ be the Dirac operators on $\hat{M}$ and $\hat{N}$, respectively. By \cite[Lemma 6.1, 6.2, (41) and below]{baer_ballmann_11} we have for all $\phi\in \Gamma_c^\infty (\hat{M}, \SS_{\hat{M}})$
\begin{align}\Vert \mathcal{E}_{\rm BB}R\phi\Vert_D \leq C \Vert \phi\Vert_D\label{BB_Thm}.\end{align} Note that $C$ can be chosen to depend only on curvature bounds of $(\hat{M}, \hat{N})$ including mean curvature, the injectivity radii of $\hat{M}$ and $\hat{N}$, respectively, and the spectral gap of $D^{\hat{N}}$.\\
We now come back to our pair $(M, \Sigma)$: Let $\epsilon, c>0$ be constants such that Lemma \ref{local_inv} is fulfilled.
Let $(U_\gamma, \kappa_\gamma, \xi_\gamma, h_\gamma)$ be Fermi coordinates together with a subordinated partition of unity such that there are $x_\gamma\in \Sigma$ with $U_\gamma\cap \Sigma \subset B_{\epsilon} (x_\gamma)$. Let $\hat{U}_\gamma$ be a manifold with closed boundary $\hat{U}'_\gamma:=\partial \hat{U}_\gamma$ such that $\tilde{U}_\gamma :=U_\gamma\cup (\cup_{\alpha; U_\alpha\cap U_\gamma\neq \varnothing} U_\alpha)$ can be isometrically embedded in $\hat{U}_\gamma$ with $\tilde{U}_\gamma\cap \Sigma\subset \hat{U}'_\gamma$, such that the Dirac operator on $\hat{U}'_\gamma$ has a spectral gap containing $[-c, c]$, and such that the curvature, the mean curvature of the boundary and the injectivity radii are still uniformly bounded in $\gamma$.
Define the map $\tilde{\mathcal{E}}: \Gamma_c^\infty(\Sigma, \SS_M|_\Sigma)\to \Gamma_c^\infty(M, \SS_M)$ by
\[ \tilde{\mathcal{E}}\psi= \sum_{\gamma,\alpha;\ U_\gamma'\neq \varnothing, U_\gamma\cap U_\alpha\neq \varnothing} h_\alpha \mathcal{E}_{\rm BB} (h_\gamma|_{\Sigma} \psi) \]
where $h_\gamma|_{\Sigma} \psi$ is understood to be a spinor on $U_\gamma\cap \Sigma \subset \hat{U}'_\gamma$ and then $\mathcal{E}_{\rm BB} (h_\gamma|_{\Sigma} \psi)$ is a spinor on $\hat{U}_\gamma$. The only reason why $\sum_\alpha h_\alpha$ appears in the definition is to assure that each summand can be seen to live on $M$ and that $R\tilde{\mathcal{E}}=\Id$. Note that just using $h_\gamma$ in front of $\mathcal{E}_{\rm BB}$ would be enough for the first requirement but not for the second.
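Indeed, for $\psi\in \Gamma_c^\infty(\Sigma, \SS_M|_\Sigma)$ a short computation using $R\mathcal{E}_{\rm BB}=\Id$ gives
\[ R\tilde{\mathcal{E}}\psi= \sum_{\gamma;\ U'_\gamma\neq \varnothing} \Big(\sum_{\alpha;\ U_\alpha\cap U_\gamma\neq \varnothing} h_\alpha|_\Sigma\Big)\, h_\gamma|_{\Sigma}\, \psi =\sum_{\gamma;\ U'_\gamma\neq \varnothing} h_\gamma|_{\Sigma}\, \psi=\psi,\]
where we used that on $\supp\, h_\gamma$ all $h_\alpha$ with $U_\alpha\cap U_\gamma= \varnothing$ vanish, so the inner sum equals $1$ there.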
\begin{proposition}\label{workaround_image} Using the notations from above,
there is a positive constant $C$ such that for all $\phi\in\Gamma_c^\infty(M, \SS_M)$
\begin{align*}
\Vert \tilde{\mathcal{E}}R\phi\Vert_D\leq C\Vert\phi\Vert_D.
\end{align*}
\end{proposition}
\begin{proof} We abbreviate $h_\gamma':=h_\gamma|_\Sigma$. Using (in this order) the definition of $\tilde{\mathcal E}$, Lemma \ref{lem_equiv_cutoff}, the uniform local finiteness of the cover $U_\gamma$, \eqref{BB_Thm}, and again Lemma \ref{lem_equiv_cutoff}, we estimate
\begin{align*}
\Vert \tilde{\mathcal E}R\phi\Vert_D^2\leq& C_1 \left\Vert \sum_{\gamma, U'_\gamma\neq \varnothing}\mathcal{E}_{\rm BB} R(h_\gamma \phi)\right\Vert_D^2\leq C_2\sum_{\gamma, U'_\gamma\neq \varnothing} \Vert \mathcal{E}_{\rm BB} R(h_\gamma \phi)\Vert_D^2\\
\leq&
C_3\sum_{\gamma, U'_\gamma\neq \varnothing} \Vert h_\gamma \phi\Vert_{D}^2\leq C \Vert \phi\Vert_D^2.
\end{align*}
\end{proof}
\section{Boundary value problems}\label{boundary_values}
The general theory of boundary value problems for elliptic differential operators of order one on complete
manifolds with closed boundary can be found in \cite{baer_ballmann_11}. The aim of this section is to generalize a part of this theory to noncompact boundaries on manifolds
of bounded geometry. We restrict to the part that gives existence of solutions of boundary value problems as in
Theorem \ref{main}. The property needed to assure
a solution to such a problem is the closedness of the range. For that, we introduce a type of coercivity condition which in general depends on the boundary condition (this is not the case for closed boundaries).
Moreover, we restrict to the classical $\Spinc$ Dirac operator.\\
In the first part, we first give some generalities on domains of the Dirac
operator and introduce a coercivity condition that implies closed range of the Dirac operator. Then,
we extend the trace map $R$ to the whole maximal domain of the Dirac operator
and give some examples and properties of boundary conditions. In particular, we will introduce two boundary
conditions $B_\pm$ which will be used to prove Theorem \ref{main} in Section \ref{proofmain}.
At the end, we give an existence result for boundary value problems in our context.\\
\paragraph{\bf General domains and closed range.} Let $D$ be the Dirac operator acting on $\Gamma_{cc}^\infty(M, \SS_M)$ on a manifold $M$ with boundary $\Sigma$. If we want to emphasize that $D$ acts on the domain $\Gamma_{cc}^\infty(M, \SS_M)$, we write $D_{cc}$ for short. We denote the graph norm of $D$ by
\[ \Vert \phi\Vert_D^2=\Vert \phi\Vert_{L^2}^2+\Vert D \phi\Vert_{L^2}^2.\]
By $D_{\mathrm{max}}:=(D_{cc})^*$ we denote the maximal extension of $D$. Here, $A^*$ denotes the adjoint operator of $A$ in the sense of functional analysis. Note that $H_1(M, \SS_M)\subset \dom\, D_{\mathrm{max}}$ and
$$\dom\, D_{\mathrm{max}}=\{\phi\in L^2(M, \SS_M)\ |\ \exists \tilde{\phi}\in L^2(M, \SS_M) \forall \psi\in \Gamma_{cc}^\infty(M, \SS_M): (\tilde{\phi}, \psi)=(\phi, D\psi)\},$$
and together
with $\Vert. \Vert_D$, the space $\dom\, D_\mathrm{max}$ is a Hilbert space. Moreover, we denote by $D_{\mathrm{min}}:= (D_{cc})^{**} =\overline{D_{cc}}^{\Vert.\Vert_D}$ the minimal extension of $D$. Here, $\overline{A}^{\Vert .\Vert_D}$ denotes the closure of the set $A$ w.r.t. the graph norm. Any closed linear subset of $\mathrm{dom}\, D_{\mathrm{max}}$ between $\mathrm{dom}\, D_{\mathrm{min}}$ and
$\mathrm{dom}\, D_{\mathrm{max}}$ gives the domain of a closed extension of $D\colon \Gamma_{cc}^\infty (M, \SS_M) \to \Gamma_{cc}^\infty (M, \SS_M)$. Before examining those domains let us extend the trace map to $\dom\, D_{\rm max}$: \\
{\bf Extension of the trace map.} The Trace Theorem \ref{trace_theorem} extends the trace map
\begin{eqnarray*}
R: \Gamma_c^\infty(M,\SS_M) &\to& \Gamma_c^\infty(\Sigma,\SS_M|_\Sigma)\\
\phi&\mapsto& \phi|_\Sigma
\end{eqnarray*}
to a bounded map $R:H_1(M,\SS_M)\to H_{\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)$.
Here, we will extend $R$ further to $\dom\, D_{\mathrm{max}}$. This will generalize the corresponding result \cite[Theorem 6.7.ii]{baer_ballmann_11} for closed boundaries to noncompact boundaries. Moreover, we give some auxiliary lemmata which are found in \cite{baer_ballmann_11} for closed boundaries. Some of the proofs and the order of obtaining them will be slightly different since we do not use (and cannot use, cf. Example \ref{ex_bd_cond}.iv) the projection to the negative spectrum. Note that in this part we could use an arbitrary extension map as given by Theorem \ref{trace_theorem} and are not restricted to the explicit one defined via the eigenvalue decomposition of $\widetilde{D}^{\Sigma}$ on closed boundaries used in \cite{baer_ballmann_11}.
\begin{lem}\label{dense_in_graph_norm}
The space $\Gamma_c^\infty(M,\SS_M)$ is dense in $\dom\, D_{\mathrm{max}}$ w.r.t. the graph norm.
\end{lem}
\begin{proof} For a closed boundary, this is done in \cite[Theorem 6.7.i]{baer_ballmann_11}. We use a different proof here.
Let $\phi \in \dom\, D_{\mathrm{max}}$. Let $K_i$ be a compact exhaustion of $M$ that comes together with smooth cut-off functions $\eta_i:M \to [0,1]$ such that $\eta_i=1$ on $K_i$, $\eta_i=0$ on $M\setminus K_{i+1}$ and $\max |d\eta_i|\leq \frac{2}{i}$. Then,
$\phi_i=\eta_i\phi$ are compactly supported sections in $\dom\, D_{\max}$ fulfilling
\begin{eqnarray*}
\Vert \phi_i-\phi\Vert_D^2 &=& \Vert \phi_i-\phi\Vert_{L^2}^2 + \Vert D\phi_i-D\phi\Vert_{L^2}^2\\ &\leq&
\Vert (1-\eta_i)\phi\Vert_{L^2}^2 + \left( \Vert (1-\eta_i) D\phi\Vert_{L^2} +\frac{2}{i} \Vert \phi\Vert_{L^2}
\right)^2\to 0.
\end{eqnarray*}
Each $\phi_i$ has now compact support in $K_{i+1}$. Thus, there is a
sequence $\phi_{ij}\in \Gamma_c^\infty (K_{i+1}, \SS_M)$ with $\phi_{ij}\to\phi_i$ in the graph
norm on $K_{i+1}$. Choose $j=j(i)\geq i$ such that $\Vert \phi_{ij}-\phi_i\Vert_D\to 0$ as $i\to \infty$. Then, $\Vert \phi_{ij}-\phi\Vert_D \leq \Vert \phi_{ij}-\phi_i\Vert_D + \Vert \phi_i- \phi\Vert_D \to 0$, too. Moreover,
\begin{eqnarray*}
\Vert \eta_j\phi_{ij}-\phi_{ij}\Vert_D^2 &\leq& \Vert (1-\eta_j)\phi_{ij}\Vert_{L^2}^2 + (\Vert (1-\eta_j)D \phi_{ij}
\Vert_{L^2} + \Vert d\eta_j\cdot \phi_{ij}\Vert_{L^2} )^2
\\ &\leq& ( \Vert \phi_{ij}-\phi_i\Vert_{L^2} + \Vert (1-\eta_j)\eta_i\phi\Vert_{L^2})^2 +
\Big( \Vert D(\phi_{ij}-\phi_i)\Vert_{L^2}\\
&&+\Vert(1-\eta_j)(\eta_i D\phi + d\eta_i \cdot \phi)\Vert_{L^2} + \frac{2}{j} \Vert \phi_{ij}-\phi_i\Vert_{L^2}
+ \frac{2}{j} \Vert \phi\Vert_{L^2}\Big)^2 \to 0
\end{eqnarray*}
for $i\to \infty$. Thus, we have a sequence $\hat{\phi}_i:=\eta_{j(i)}\phi_{ij(i)}\in \Gamma_c^ \infty(M, \SS_M)$ such that $\hat{\phi}_i\to \phi$ in the graph norm as $i\to \infty$.
\end{proof}
Note that the proof of Lemma \ref{dense_in_graph_norm} only uses the completeness of $M$ and not the bounded geometry.
\begin{thm}\label{extended-trace}
The trace map $R: \Gamma_{c}^\infty(M, \SS_M) \to \Gamma_{c}^\infty(\Sigma, \SS_M|_\Sigma)$ can be extended to a bounded operator
$$R: \dom\, D_{\mathrm{max}} \to H_{-\frac{1}{2}} (\Sigma, \SS_M|_\Sigma).$$
\end{thm}
\begin{proof} Let $\phi\in \Gamma_c^\infty (M, \SS_M)$ and $\psi\in H_{\frac{1}{2}} (\Sigma, \SS_M|_\Sigma)$. Then by Theorem \ref{trace_theorem}, the spinor
$\mathcal{E}\psi\in H_1(M, \SS_M)$. Thus, we can use Lemma \ref{extequ}, \eqref{equ_H1D_easydir}, and Theorem \ref{trace_theorem}
to obtain
\begin{align*}
|( \phi|_\Sigma, \nu\cdot \psi)_\Sigma|=&|(D\phi, \mathcal{E} (\nu\cdot\psi))-(\phi, D\mathcal{E}(\nu\cdot\psi))|\\
\leq &\Vert D\phi\Vert_{L^2}\Vert \mathcal{E}(\nu\cdot\psi)\Vert_{L^2}+ \Vert \phi\Vert_{L^2}\Vert D \mathcal{E}(\nu\cdot\psi)\Vert_{L^2}\\
\leq &2\Vert \phi\Vert_{D}\Vert \mathcal{E}(\nu\cdot\psi)\Vert_{D} \leq C \Vert \phi\Vert_{D}\Vert \mathcal{E}(\nu\cdot\psi)\Vert_{H_1}
\leq C' \Vert \phi\Vert_{D}\Vert \nu\cdot\psi\Vert_{H_{\frac{1}{2}}(\Sigma)}.
\end{align*}
Together with Lemma \ref{pairing-Sobolev}, this implies
\[ \Vert \phi|_\Sigma\Vert_{H_{-\frac{1}{2}}(\Sigma)}\leq C' \Vert \phi\Vert_{D}.\]
Since $\Gamma_c^\infty (M, \SS_M)$ is dense in $\dom\, D_{\mathrm{max}}$ w.r.t. the graph norm, cf. Lemma \ref{dense_in_graph_norm}, the claim follows.
\end{proof}
\begin{remark}
Note that $R$ is not surjective here. For closed boundaries the image was specified in
\cite[Theorems 1.7 and 6.7.ii]{baer_ballmann_11}. For noncompact
boundaries the image will be further considered in Lemma \ref{R_banach} and below.
\end{remark}
\begin{lem}\label{againextequ} Equality \eqref{L2-structure_mod_boundary} holds for all
$\phi\in \dom\, D_{\mathrm{max}}$ and $\psi\in H_1(M,\SS_M)$.
\end{lem}
\begin{proof}
The proof proceeds as the one of Lemma \ref{extequ}, starting with $\psi_j, \phi_i\in \Gamma_c^\infty(M, \SS_M)$ where $\psi_j\to \psi$ in $H_1$ and $\phi_i\to \phi$ in the graph norm of $D$, and using the (extended) Trace Theorem \ref{extended-trace}. The only difference is seen in the estimate of the boundary integrals, which now read e.g.
\begin{align*} \left| \int_{\Sigma} \langle \nu\cdot R\psi_j, R\phi_i - R\phi\rangle ds\right|&\leq \Vert R\psi_j\Vert_{H_{\frac{1}{2}}(\Sigma)} \Vert R(\phi_i-\phi)\Vert_{H_{-\frac{1}{2}}(\Sigma)}
\leq C\Vert \psi_j\Vert_{H_1} \Vert \phi_i-\phi\Vert_{D}\to 0
\end{align*}
where the last inequality uses both versions of the Trace Theorem \ref{trace_theorem} and \ref{extended-trace}.
\end{proof}
The next Lemma gives a full description of $\dom\, D_{\mathrm{min}}$:
\begin{lem}\label{norm_equ_0} The $H_1$-norm and the graph norm $\Vert . \Vert_D$ are equivalent on
$$\{\phi\in \dom\, D_\mathrm{max}\ |\ R\phi=0\}.$$ In particular,
\begin{eqnarray*}
\dom\, D_{\mathrm{min}}=\overline{\Gamma_{cc}^\infty(M, \SS_M)}^{\Vert.\Vert_D}=\overline{\Gamma_{cc}^\infty(M, \SS_M)}^{\Vert.\Vert_{H_1}}&=&\{\phi\in \dom\, D_{\mathrm{max}}\ |\ R\phi=0\}\\ &=&\{\phi\in H_1(M, \SS_M)\ |\ R\phi=0\}.
\end{eqnarray*}
\end{lem}
\begin{proof} Firstly, we show the equivalence on $\{\psi\in \Gamma_{c}^\infty(M, \SS_M)\ |\ R\psi=0\}$: Let $\phi$ be in this set. Then by \eqref{Lich} we have
\[ \Vert \phi\Vert_{H_1}^2=\Vert \phi\Vert_{L^2}^2+\Vert D\phi\Vert_{L^2}^2 -\int_{M} \frac{\s^M}{4}|\phi|^2 dv - \int_{M} \frac {\i}{2}
\<\Omega\cdot\phi, \phi \> dv\leq C \Vert \phi\Vert_D^2, \]
where we used that $M$ and $L$ are of bounded geometry and, hence, $|\s^M|$ and $|\Omega|$ are uniformly bounded on all of $M$. The reverse inequality was seen in \eqref{equ_H1D_easydir}.
From the definition of $\dom\, D_{\mathrm{min}}$ and the equivalence of the norms from above, we already have $\dom\, D_{\mathrm{min}}=\overline{\Gamma_{cc}^\infty}^{\Vert.\Vert_D}=\overline{\Gamma_{cc}^\infty}^{\Vert.\Vert_{H_1}}$. From the Trace Theorem \ref{extended-trace}, we get
$$\overline{\Gamma_{cc}^\infty}^{\Vert.\Vert_D}\subset \{\phi\in \dom\, D_{\mathrm{max}}\ |\ R\phi=0\}.$$ Next we want to show that $D\colon \{\phi\in \dom\, D_{\mathrm{max}}\ |\ R\phi=0\} \to L^2(M, \SS_M)$ already
equals $D_{\mathrm{min}}$. First we note that by the Trace Theorem \ref{extended-trace}, $D$ is a closed extension of $D_{cc}$.
Hence, it suffices to show that $D^*=D_{\mathrm{max}}$. By definition, we have
\[\dom\, D^*=\{ \theta \in L^2(M,\SS_M)\ |\ \exists \chi\in L^2(M, \SS_M)\, \forall \psi\in \dom\, D_{\mathrm{max}}, R\psi=0: (\theta, D\psi)=(\chi,\psi)\}.\]
Let $\theta \in \dom\, D_{\mathrm{max}}$. By Lemma \ref{dense_in_graph_norm}, there exists
a sequence $\theta_i\in \Gamma_c^\infty(M,\SS_M)$ with $\theta_i\to \theta$ in the graph norm. Hence, for all
$\psi\in \dom\, D_{\mathrm{max}}$ with $R\psi=0$ we have $(\theta, D\psi)=\lim_{i\to \infty}(\theta_i,D\psi)$. Then by Lemma
\ref{againextequ} and $R\psi=0$,
we obtain $$(\theta, D\psi)=\lim_{i\to \infty}(D\theta_i,\psi)=(D\theta,\psi)$$
which implies that $\theta\in \dom\, D^*$. Thus, $D^*=D_{\mathrm{max}}$ and $D=D_{\mathrm{min}}$. Together with
\[\dom\, D_{\mathrm{min}}=\overline{\Gamma_{cc}^\infty}^{\Vert.\Vert_{H_1}}\subset \{\phi\in H_1(M,\SS_M)\ |\ R\phi=0\}\subset \{\phi\in \dom\, D_{\mathrm{max}}\ |\ R\phi=0\}=\dom\, D_{\mathrm{min}},\] the rest of the Lemma follows.
\end{proof}
Now we can describe $H_1(M, \SS_M)$ inside $\dom\, D_{\mathrm{max}}$ in terms of the trace map.
\begin{lem}\label{H1_dommax}
We have $H_1(M, \SS_M)=\{\phi\in \dom\, D_{\mathrm{max}}\ |\ R\phi\in H_{\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)\}$.
\end{lem}
\begin{proof}
The inclusion '$\subset$' is clear from the Trace Theorem \ref{trace_theorem}. It remains to prove '$\supset$':
Let $\phi\in \dom\, D_{\mathrm{max}}$ with $R\phi\in H_\frac{1}{2} (\Sigma, \SS_M|_\Sigma)$. Then Theorem \ref{trace_theorem}
implies that
$\psi:=\mathcal{E}R\phi\in H_1 (M, \SS_M)$. Thus, $\phi-\psi\in \dom\, D_{\mathrm{max}}$ and $R(\phi-\psi)=0$. But
due to Lemma \ref{norm_equ_0}, $\phi-\psi\in H_1(M, \SS_M)$ and, hence, $\phi\in H_1(M, \SS_M)$.
\end{proof}
In Proposition \ref{workaround_image} we have shown that there is a linear map $\tilde{\mathcal E}$ such that $\tilde{\mathcal{E}}R: \Gamma_c^\infty(M,\SS_M)\to \Gamma_c^\infty(M,\SS_M)$ fulfills for all $\phi\in \Gamma_c^\infty(M, \SS_M)$
\begin{align} \Vert \tilde{\mathcal{E}} R\phi\Vert_D^2\leq C\Vert \phi\Vert_D^2.\label{super_ext}\end{align}
Thus, $\tilde{\mathcal{E}}R$ extends uniquely to a bounded linear map
\begin{align}
\tilde{\mathcal{E}}R: \dom\, D_{\rm max} \to \dom\, D_{\rm max} \label{ER_ext}.
\end{align}
Note that $\tilde{\mathcal{E}}|_{H_\frac{1}{2}}$ is an extension map in the sense of Theorem~\ref{trace_theorem} as can be seen in the following:
Let $\psi \in H_\frac{1}{2} (\Sigma, \SS_M|_\Sigma)$. By Lemma \ref{H1_dommax} there is a $\phi \in H_1(M, \SS_M)$ with $R\phi=\psi$. Thus, since $R(\tilde{\mathcal{E}}\psi-\phi)=\psi-\psi=0$, Lemma \ref{norm_equ_0} gives $\tilde{\mathcal{E}}\psi-\phi\in \dom\, D_{\rm min}\subset H_1(M, \SS_M)$. In particular, $\tilde{\mathcal{E}}|_{H_{\frac{1}{2}}}: H_\frac{1}{2} (\Sigma, \SS_M|_\Sigma)\to H_1(M, \SS_M)$.
From now on, we choose any extension map $\mathcal{E}$ fulfilling \eqref{super_ext}. Obviously, all those maps lead to equivalent norms $\Vert \mathcal{E}R.\Vert_D$.
\begin{conjecture}
Every extension map in the sense of Theorem \ref{trace_theorem} fulfills \eqref{super_ext} with an appropriate constant $C$.
\end{conjecture}
On $R(\dom\, D_{\mathrm{max}})$, we set $$\Vert \psi\Vert_{\check{R}} := \Vert \mathcal{E}R\phi\Vert_D $$
where $R\phi=\psi$. By Proposition \ref{workaround_image} and \eqref{ER_ext}, this is well defined.
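More explicitly, if $R\phi=R\phi'$, then $\phi-\phi'\in \dom\, D_{\mathrm{min}}$ by Lemma \ref{norm_equ_0}. Approximating $\phi-\phi'$ by $\chi_k\in \Gamma_{cc}^\infty(M,\SS_M)$ in the graph norm, we get from \eqref{ER_ext} that
\[ \Vert \mathcal{E}R\phi-\mathcal{E}R\phi'\Vert_D=\Vert \mathcal{E}R(\phi-\phi')\Vert_D=\lim_{k\to\infty} \Vert \mathcal{E}R\chi_k\Vert_D=0\]
since $R\chi_k=0$.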
\begin{lemma}\label{R_banach} The space $\check{R}:=(R(\dom\, D_{\mathrm{max}}), \Vert. \Vert_{\check{R}})$ is a Hilbert space with $\check{R}= \overline{\Gamma_c^\infty(\Sigma, \SS_M|_\Sigma)}^{\Vert. \Vert_{\check{R}}}$.
\end{lemma}
\begin{proof} From the definition of $\Vert. \Vert_{\check{R}}$, the linearity of the maps $\mathcal{E}$ and $R$, and the fact that $(\dom\, D_{\mathrm{max}}, \Vert. \Vert_D)$ is a Hilbert space, we get immediately that
$\Vert .\Vert_{\check{R}}$ is a norm on $R(\dom\, D_{\mathrm{max}})$. Moreover, $\Vert.\Vert_{\check{R}}$ comes from a scalar product $(\phi,\psi)_{\check{R}}:=(\mathcal{E}\phi,\mathcal{E}\psi)_D=(\mathcal{E}\phi,\mathcal{E}\psi) + (D\mathcal{E}\phi,D\mathcal{E}\psi)$. In order to show that $\check{R}$ is a Hilbert space it remains to show completeness: For that we consider a Cauchy
sequence $\psi_i$ in $\check{R}$. Then, there is a sequence $\phi_i\in \dom\, D_\mathrm{max}$ with
$R\phi_i=\psi_i$. With the definition of the $\check{R}$-norm,
we get that $\mathcal{E}R\phi_i$ is a Cauchy sequence in $(\dom\, D_{\mathrm{max}}, \Vert.\Vert_D)$ and,
hence, there is a $\phi\in \dom\, D_{\mathrm{max}}$ with $\mathcal{E}R\phi_i\to \phi$ w.r.t. the graph norm.
By Proposition \ref{workaround_image}, we get
$$\Vert\mathcal{E}R(\phi_i-\phi)\Vert_D=\Vert\mathcal{E}R(\mathcal{E}R\phi_i-\phi)\Vert_D\leq C\Vert \mathcal{E}R\phi_i -\phi\Vert_D\to 0.$$ Thus, $\mathcal{E}R\phi=\phi$ and $\Vert \psi_i-R\phi\Vert_{\check{R}}= \Vert \mathcal{E} (R\phi_i-R\phi)\Vert_D\to 0$. Hence, $\psi_i\to R\phi$ in the $\check{R}$-norm.
Clearly, $\overline{\Gamma_c^\infty(\Sigma, \SS_M|_\Sigma)}^{\Vert. \Vert_{\check{R}}}\subset R(\dom\, D_{\rm max})$. Let now $\psi\in R(\dom\, D_{\rm max})$. Then, there is a $\phi \in \dom\, D_{\rm max}$ with $R\phi=\psi$. By Lemma \ref{dense_in_graph_norm} there is a sequence $\phi_i\in \Gamma_c^\infty(M, \SS_M)$ with $\Vert \phi_i-\phi\Vert_D\to 0$ as $i\to \infty$. Thus, by Proposition \ref{workaround_image} the sequence $\psi_i:= R\phi_i\in \Gamma_c^\infty(\Sigma, \SS_M|_\Sigma)$ converges to $\psi$ in the $\check{R}$-norm.
\end{proof}
\begin{remark}\hfill\label{rem_norms}\\
\textbf{(i)} The proof of Proposition \ref{workaround_image} together with \cite[Lemma 6.1]{baer_ballmann_11} implies
\[\Vert \tilde{\mathcal E} R\phi\Vert_D^2\leq C' \sum_{\gamma, \hat{U}'_\gamma\neq \varnothing} \Vert R(h_\gamma \phi)\Vert_{\check{H}(\hat{U}'_\gamma)}^2 =: C' \Vert R\phi\Vert^2_{\check{H}_\gamma}.\] On the other hand, by \cite[Lemma 6.2, (41) and below]{baer_ballmann_11} $\Vert R(h_\gamma \phi)\Vert_{\check{H}(\hat{U}'_\gamma)}^2\leq C \Vert h_\gamma \phi\Vert_D^2$ where $C$ again only depends on the curvature bounds of $(M,\Sigma)$ and the spectral gap $c$ on $\hat{U}'_\gamma$. Thus, together with Lemma \ref{lem_equiv_cutoff} the norms $\Vert .\Vert_{\check{R}}$ and $\Vert.\Vert_{\check{H}_\gamma}$ are equivalent.\\
\textbf{(ii)} Using $(i)$ and \cite[Lemma 6.3]{baer_ballmann_11} we see
\begin{align*}
\Vert \tilde{\mathcal E} (\nu\cdot R\phi)\Vert_D^2\leq C' \sum_{\gamma, U'_\gamma\neq \varnothing} \Vert \nu \cdot R(h_\gamma \phi)\Vert_{\check{H}(\hat{U}'_\gamma)}^2 = C' \sum_{\gamma, U'_\gamma\neq \varnothing} \Vert R(h_\gamma \phi)\Vert_{\hat{H}(\hat{U}'_\gamma)}^2=: C'\, \Vert R\phi\Vert_{\hat{H}_\gamma}^2.
\end{align*}
Together with \cite[Lemma 6.1]{baer_ballmann_11} we obtain for all $\phi\in \Gamma_c^\infty(M,\SS_M)$
\begin{align*}
\Vert \tilde{\mathcal E} (\nu\cdot R\phi)\Vert_D^2\leq C \Vert \phi\Vert_D^2
\end{align*}
and, thus, $\Vert \psi\Vert_{\hat{R}} := \Vert \mathcal{E}(\nu\cdot R\phi)\Vert_D$ also gives rise to a norm on $R(\dom\, D_{\mathrm{max}})$. Moreover, the analogous statement of Lemma \ref{R_banach} holds for $\hat{R}:=(R(\dom\, D_{\mathrm{max}}), \Vert. \Vert_{\hat{R}})$, and we have $\Vert \psi\Vert_{\check{R}}=\Vert \nu\cdot \psi\Vert_{\hat{R}}$.
In particular, we get as in (i) that the norms $\Vert \tilde{\mathcal E} (\nu\cdot .)\Vert_D$ and $\Vert .\Vert_{\hat{H}_\gamma}$ are equivalent.
\end{remark}
\begin{remark}
Note that by Theorem \ref{extended-trace} and Lemma \ref{H1_dommax} \[H_{\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)\subset (R(\dom\, D_{\mathrm{max}}), \Vert.\Vert_{\check{R}\, (\text{resp.\ } \hat{R})}) \subset H_{-\frac{1}{2}}(\Sigma, \SS_M|_\Sigma).\]
\end{remark}
Moreover, the perfect pairing of $\hat{H}_\gamma$ and $\check{H}_\gamma$, induced by the pairing of $H_{\frac{1}{2}}$ and $H_{-\frac{1}{2}}$, gives immediately
\begin{lemma}\label{pair_R}
The $L^2$-product on $\Gamma_c^\infty(\Sigma, \SS_M|_\Sigma)$ extends uniquely to a perfect pairing $\check{R}\times \hat{R} \to \mC$.
\end{lemma}
Up to now we have seen that the $\check{R}$-norm is equivalent to the norm $\Vert.\Vert_{\check{H}_\gamma}$, cp. Remark \ref{rem_norms}.i, where the latter norm depends on an appropriate trivialization of the manifold near the boundary, see before Proposition \ref{workaround_image}. But we also expect that, as in the closed case, there is a 'more intrinsic' equivalent norm:
\begin{conjecture}
The $\check{R}$-norm on $R(\dom\, D_{\rm max})$ is equivalent to the $\check{H}$-norm as defined in \eqref{H_spaces}. Moreover, $\check{H}=R(\dom\, D_{\rm max})$ as vector spaces.
\end{conjecture}
{\bf Boundary conditions.}
In this part, we show that each closed extension of $D_{cc}$ can be realized by a closed linear subset of $\check{R}$, and we give some examples.
\begin{lem}\label{Bmax}
Let $D$ be a closed extension of $D_{cc}$ with $B:=R(\dom\, D)\subset H_{-\frac{1}{2}} (\Sigma, \SS_M|_\Sigma)$. Then,
its domain $\dom\, D$ equals $\dom\, D_B\colon=\{ \phi\in \dom\, D_{\mathrm{max}}\ |\ R\phi\in B\}$, and $B$ is a closed linear subset of $\check{ R}$. Conversely, for every closed linear subset $B\subset \check{ R}$ the operator $D_B\colon \dom\, D_B \to L^2(M,\SS_M)$ is a closed extension of $D_{cc}$.
\end{lem}
Due to this Lemma, a closed subspace $B$ of $\check{ R}$ is called {\it boundary condition}.
\begin{proof}
Let $D$ be a closed extension of $D_{cc}$ with domain $\dom\, D$ and $B:= R(\dom\, D)$. Clearly, $\dom\, D\subset \dom\, D_B$.
We have to show that also the converse is true: Let $\phi \in \dom\, D_B$. Then, there exists
$\psi \in \dom\, D$ with $R \phi= R\psi$. By Lemma \ref{norm_equ_0},
$\phi-\psi\in \dom\, D_{\mathrm{min}}\subset \dom\, D$ and, hence, $\phi\in \dom\, D$. This implies that $\dom\, D=\dom\, D_B$.
Moreover, from \eqref{ER_ext} and the definition of the $\check{R}$-norm the maps $R: \dom\, D_{\mathrm{max}} \to \check{ R}$ and $\mathcal{E}: \check{ R}\to \dom\, D_{\mathrm{max}}$ are continuous. Hence, if $\dom\, D$ is closed in $\dom\, D_{\mathrm{max}}$, the set $B=\mathcal{E}^{-1}(\dom\, D)$ is closed in $R(\dom\, D_{\mathrm{max}})$. Conversely, if $B$ is closed in $\check{ R}$,
$\dom\, D=R^{-1}(B)$ is closed in $\dom\, D_{\mathrm{max}}$.
\end{proof}
\begin{lem}\label{equiv_H1_D}
Let $B$ be a boundary condition such that $B\subset H_{\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)$. Then, the $H_1$-norm and the graph norm $\Vert.\Vert_D$ are equivalent on $\dom\, D_B$.
\end{lem}
\begin{proof}
Since $B$ is a boundary condition, $\dom\, D_B$ is closed in $(\dom\, D_{\mathrm{max}}, \Vert. \Vert_D)$. Moreover, by $B\subset H_{\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)$, Lemma \ref{H1_dommax} and \eqref{equ_H1D_easydir}, $\dom\, D_B$ is closed in $(H_1(M, \SS_M), \Vert.\Vert_{H_1})$. Thus, $(\dom\, D_B, \Vert.\Vert_D)$ and $(\dom\, D_B, \Vert.\Vert_{H_1})$ are both Hilbert spaces. By \eqref{equ_H1D_easydir} the identity map $\Id\colon (\dom\, D_B, \Vert.\Vert_{H_1})\to (\dom\, D_B, \Vert.\Vert_D)$ is a bijective bounded linear map. From the bounded inverse theorem we know that also the inverse is bounded. Hence, the $H_1$- and the graph norm are equivalent on $\dom\, D_B$.
\end{proof}
\begin{remark}\label{comparebb}
The definition of $\dom\, D_B$ in \cite[Section 7]{baer_ballmann_11} uses $H_1^D\colon=\overline{\Gamma_c^\infty(M,\SS_M)}^{\Vert.\Vert_{H_1^D}}$ instead of $H_1$ where the $H_1^D$-norm
is given by $$\Vert \phi\Vert_{H_1^D}^2=\Vert \chi \phi\Vert_{H_1}^2 + \Vert \phi\Vert_{L^2}^2 + \Vert D\phi\Vert_{L^2}^2.$$
Here $\chi$ denotes an appropriate cut-off function such that $\chi\phi$ only lives on a small collar of the boundary. Since we work with the classical Dirac operator on $\Spinc$ manifolds and assume $(M,\Sigma)$ and $L$ to be of
bounded geometry, the $H_1$- and the $H_1^D$-norm coincide. Ch. B\"ar and W. Ballmann consider a more general situation where it suffices that
$M$ is only complete but not necessarily of bounded geometry. Then the $H_1^D$-norm is needed. We could also switch
to this more general setup when dropping conditions (i) and (iii) in Definition \ref{bdd_geo} while still
assuming that $(\Sigma, g|_\Sigma)$ is of bounded geometry and that the curvature tensor and its derivatives
are bounded on $U_\Sigma$. For that situation, we would also obtain Theorem \ref{main}. But
in order to simplify
notation we stick to the bounded geometry of $(M,\Sigma)$.
\end{remark}
\begin{example}\label{ex_bd_cond}
\begin{itemize}
\item[(i)] {\bf Minimal and maximal extension.} $B=\{0\}$ gives the minimal extension $D_{B=0}=D_{\mathrm{min}}$, cf. Lemma \ref{norm_equ_0}. The maximal extension is obtained with $B=R(\dom\, D_\mathrm{max})$.
\item[(ii)] $D_{B=H_{\frac{1}{2}}}\colon H_1(M,\SS_M)\to L^2(M,\SS_M)$ is an extension of $D_{cc}$ but not closed (if the boundary is nonempty): Since $\Gamma_c^\infty(M,\SS_M)\subset H_1$ and $\Gamma_c^\infty(M,\SS_M)$ is dense in $\dom\, D_{\mathrm{max}}$, the closure of ${D}_{B=H_{\frac{1}{2}}}$ is $D_{\mathrm{max}}$.
\item[(iii)] \cite[Section 6]{HMZ02}
Let $P_\pm: L^2 (\Sigma, \SS_M|_\Sigma) \to L^2 (\Sigma, \SS_M|_\Sigma), \ \phi\mapsto \frac{1}{2}(\phi \pm \i \nu\cdot \phi)$ and
\[D_\pm\colon \dom\, D_\pm:=\{\phi\in \dom\, D_{\mathrm{max}}\ |\ P_\pm R\phi=0\}\to L^2(M,\SS_M).\]
In Section \ref{boundcond_B+-}, we will show that
$D_\pm$ is a closed extension and that $D_\pm=D_{B_\pm}$ where
\[B_\pm=\{\phi\in H_\frac{1}{2}(\Sigma,\SS_M|_\Sigma)\ | \ P_\pm\phi=0\}. \]
Each $\phi$ decomposes uniquely into $\phi=P_+ \phi +P_-\phi$, and if $\phi\in H_\frac{1}{2}(\Sigma, \SS_M|_\Sigma)$, then
$P_\pm\phi\in H_\frac{1}{2}(\Sigma, \SS_M|_\Sigma)$, too. This assures that the $B_\pm$'s are strictly larger than the trivial boundary condition $B=\{0\}$. More properties of this boundary condition can be found in Section \ref{boundcond_B+-}.
\item[(iv)] {\bf APS boundary conditions.}
An obvious way to generalize the APS boundary conditions for a closed boundary to our situation is given by the following:
Let $(M,\Sigma)$ be of bounded geometry. We use the notations introduced in the paragraph on the spectral decomposition of the boundary above.
We set $B^{\mathrm{APS}}_{\geq a}=R(\dom\, D_\mathrm{max})\cap \Gamma_{[a,\infty)}^{\mathrm{APS}}$ and $B^{\mathrm{APS}}_{< a}=R(\dom\, D_\mathrm{max})\cap \Gamma_{(-\infty, a)}^{\rm APS}$, respectively. In the same way, let $B^{\mathrm{APS}}_{\leq a}$ and $B^{\mathrm{APS}}_{> a}$ be defined. If a neighbourhood of $a$ is contained in the spectrum of $D^\Sigma$, $B^{\mathrm{APS}}_{< a}$ and $B^{\mathrm{APS}}_{> a}$ will not be closed.
We conjecture that for $(M,\Sigma)$ of bounded geometry the sets $B^{\mathrm{APS}}_{\geq a}$ and $B^{\mathrm{APS}}_{\leq a}$ define boundary conditions, but this remains open.
\end{itemize}
\end{example}
{\bf Boundary value problems.} In this part we want to prove Theorem \ref{intro-bvp}. For that, we first need to define the notion of coercivity at infinity:
\begin{definition}\label{coer}
A closed linear operator $D\colon \dom\, D\subset L^2(M,\SS_M)\to L^2(M,\SS_M)$ is said to be $(\dom\, D)$-coercive at infinity if there is a $c>0$ such that
\[ \forall \phi\in \dom\, D\cap \left( \ker D\right)^\perp:\ \Vert D\phi\Vert_{L^2} \geq c\Vert \phi\Vert_{L^2}\]
where $^\perp$ denotes the orthogonal complement in $L^2$.
\end{definition}
Note that in the case that $D$ is the Dirac operator on a complete manifold without boundary, coercivity at infinity follows immediately if $0$ is not in the essential spectrum. Conversely, if the Dirac operator is coercive at infinity, then either $0$ is not in the essential spectrum or the kernel is infinite-dimensional. For manifolds with boundary, $D$ is in general no longer self-adjoint. Thus, the spectrum is in general complex, and this translation to the essential spectrum is not possible.
In Section \ref{section_coer}, we will compare this coercivity condition with the original one used in \cite[Definition 8.2]{baer_ballmann_11} for closed boundaries. But first, we will see how this condition forces the range of the operator to be closed, which is crucial in order to apply the Closed Range Theorem \ref{closed_range_theorem} and show the existence of preimages for linear operators as we will need in Theorem \ref{intro-bvp}.
\begin{lemma} \label{coercive_closed}
If the closed linear operator $D\colon \dom\, D\subset L^2(M,\SS_M)\to L^2(M,\SS_M)$ is $(\dom\, D)$-coercive at infinity, then the range is closed.
\end{lemma}
\begin{proof}
Let $\phi_i$ be a sequence in $\dom\, D$ with $D\phi_i\to \psi$ in $L^2$. We have to show that $\psi$ is in the image of $D$.
W.l.o.g. we can assume that $\phi_i \perp \ker D$. Then $(\dom\, D)$-coercivity at infinity gives that $\phi_i$ is bounded in $L^2$ and, thus, also in the
graph norm of $D$. Hence, after passing to a subsequence, $\phi_i\to \phi$ weakly w.r.t. $\Vert. \Vert_D$. Let $\eta\in \dom\, D^*$. Then, $(\psi,\eta)=\lim_{i\to\infty} (D\phi_i, \eta)=\lim_{i\to\infty} (\phi_i, D^*\eta)= (\phi, D^*\eta)$. Thus, $\phi\in \dom\, D^{**}=\dom\, D$, since $D$ is closed and densely defined, and this implies
that $D\phi=\psi$.
\end{proof}
We are now ready to prove
\begin{reptheorem}{intro-bvp} Let $B$ be a boundary condition, and let the Dirac operator
$$D_B\colon \dom\, D_B\subset L^2(M, \SS_M) \to L^2(M, \SS_M)$$ be $B$-coercive at infinity. Let $P_{B}\colon R(\dom\, D_\mathrm{max})\to B$ be a projection.
Then, for all $\psi\in L^2(M, \SS_M)$ and $\tilde{\rho}\in \dom\, D_\mathrm{max}$ with $\psi-D\tilde{\rho}\in (\ker\, (D_{B})^*)^\perp$, the boundary value problem
$$\left\{
\begin{array}{rll}
D\phi &=\psi & \text{on\ } M,\\
(\Id - P_{B})R\phi& = (\Id - P_{B}) R\tilde{\rho}&\text{on\ } \Sigma
\end{array}
\right.
$$
has a solution $\phi\in \dom\, D_\mathrm{max}$ that is unique up to elements of the kernel $\ker\, D_B$.
\end{reptheorem}
Here, projection only means that $P_B$ is linear with $P_B|_B=\Id$.
\begin{proof}
Since $D$ is $B$-coercive at infinity, its range is closed by Lemma \ref{coercive_closed}. Thus,
due to the Closed Range Theorem \ref{closed_range_theorem}, the spinor $\psi-D\tilde{\rho}\in \mathrm{ran}\, D_B$. Hence, there exists $\hat{\phi}\in \dom\, D_B$ with $D\hat{\phi}=
\psi-D\tilde{\rho}$. Setting $\phi=\hat{\phi}+\tilde{\rho}$, we get $\phi\in \dom\, D_\mathrm{max}$, $D\phi=\psi$, and $(\Id - P_{B})R\phi= (\Id - P_{B})R\hat{\phi}+ (\Id - P_{B})R\tilde{\rho}= (\Id - P_{B})R\tilde{\rho}$.
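Concerning uniqueness: if $\phi$ and $\phi'$ are two solutions, then $D(\phi-\phi')=0$ and $(\Id - P_{B})R(\phi-\phi')=0$, i.e. $R(\phi-\phi')=P_{B}R(\phi-\phi')\in B$. Hence, $\phi-\phi'\in \dom\, D_B$ and, thus, $\phi-\phi'\in \ker\, D_B$.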
\end{proof}
\begin{corollary} \label{bvp-H1} Let $B$ be a boundary condition such that $B\subset H_{\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)$. We
assume that the Dirac operator $D\colon \dom\, D_B\subset L^2(M, \SS_M) \to L^2(M, \SS_M)$ is
$B$-coercive at infinity. Let $P_B\colon H_{\frac{1}{2}}(\Sigma,\SS_M|_\Sigma)\to B$ be a projection. Moreover, assume that
$\psi\in L^2(M, \SS_M)$ and $\rho\in H_{\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)$ satisfy
\begin{equation}\label{int_cond} (\psi,\chi)+(\nu\cdot \rho, R\chi)_\Sigma=0\end{equation}
for all $\chi\in \ker\, (D_B)^*$. Then,
the boundary value problem
$$\left\{
\begin{array}{rll}
D\phi &=\psi &\text{on\ } M,\\
(\Id - P_{B})R\phi&= (\Id - P_{B}){\rho}&\text{on\ } \Sigma
\end{array}
\right.
$$
has a solution $\phi\in H_1(M,\SS_M)$ that is unique up to elements of the kernel $\ker\, D_B$.
\end{corollary}
\begin{proof} By Lemma \ref{H1_dommax}, $B\subset H_{\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)$ implies $\dom\, D_B\subset H_1(M,\SS_M)$. We set
$\tilde{\rho}=\mathcal{E}\rho$. By the Trace Theorem \ref{trace_theorem},
$\tilde{\rho}\in H_1(M,\SS_M)$. Moreover, by Lemma \ref{againextequ} the integrability condition \eqref{int_cond}
implies that $\psi-D\tilde{\rho}\in (\ker\, (D_{B})^*)^\perp$. Hence, together with the Closed Range Theorem there is $\hat{\phi}\in \dom\, D_B\subset H_1(M,\SS_M)$ with $D\hat{\phi}= \psi-D\tilde{\rho}$. Thus, as in the proof of Theorem \ref{intro-bvp} $\phi=\hat{\phi}+\tilde{\rho}$ gives a solution which is now in $H_1(M,\SS_M)$.
\end{proof}
\begin{remark}In order to give a full generalization of the theory given
in \cite{baer_ballmann_11} it would be interesting to examine the following questions:\\
- Consider general boundary conditions, in particular we would like to identify the image of the extended trace map in Theorem \ref{extended-trace}.\\
- Give a generalization of the definition for elliptic boundary conditions for noncompact boundaries (of bounded geometry) and study them.\\
- Consider, more generally, complete Dirac-type operators as in \cite{baer_ballmann_11}.
\end{remark}
\section{On the boundary condition $B_\pm$}\label{boundcond_B+-}
In this section, we briefly recall and give some basic facts on $P_\pm$. Some of them can be found in \cite[Section 6]{HMZ02}. Moreover, we prove the claims of Example \ref{ex_bd_cond}.iii.
\begin{lem}\label{lemmaonP+-} Let $P_\pm\colon L^2(\Sigma, \SS_M|_\Sigma)\to L^2(\Sigma, \SS_M|_\Sigma)$ be the
map $\phi\mapsto \frac{1}{2}(\phi \pm \i\nu \cdot \phi)$ and consider $B_\pm:= \{ \phi\in H_{\frac{1}{2}}(\Sigma, \SS_M|_\Sigma ) \ |\ P_\pm\phi=0\}$. Then, the following hold
\begin{itemize}
\item[(i)] $P_\pm$ are self-adjoint projections, orthogonal to each other and $\nu P_\pm=P_\pm \nu= \mp \i P_\pm$.
\item[(ii)] For all $s\in \mathbb{R}$, $P_\pm(\phi)= \frac{1}{2}(\phi \pm \i\nu \cdot \phi)$ gives an operator from
$H_s(\Sigma, \SS_M|_\Sigma)$ to itself such that for all $\phi\in H_s(\Sigma, \SS_M|_\Sigma)$ and
$\psi\in H_{-s}(\Sigma, \SS_M|_\Sigma)$ we have $(P_+\phi,P_-\psi)_\Sigma=0$ and
$(P_\pm \phi, \psi)_\Sigma= (\phi, P_\pm \psi)_\Sigma$.
\item[(iii)] $\widetilde{D}^{\Sigma}P_{\pm}=P_{\mp} \widetilde{D}^{\Sigma}.$
\item[(iv)] $D_\pm$ (see Example \ref{ex_bd_cond}.iii for the definition) is a closed extension of $D_{cc}$.
\item[(v)] $D_\pm= D_{B_\pm}$.
\item[(vi)] $(D_{B_\pm})^*=D_{B_\mp}$.
\item[(vii)] Let each connected component of $M$ have a non-empty boundary. Then, $\ker D_{B_\pm} =\{0\}$.
\end{itemize}
\end{lem}
\begin{proof}
Assertions (i) and (ii) follow by simple calculations (cf. the computation above), and (iii) follows directly from \eqref{d1}. For (iv) we have by definition of $D_\pm$ (see Example \ref{ex_bd_cond}.iii) that
$D_\pm=D_{\tilde{B}_\pm}$ where $\tilde{B}_\pm=\{\phi\in R(\dom\, D_{\mathrm{max}}) \ |\ P_\pm\phi=0\}$.
In order to show the closedness of $D_\pm$ we want to apply Lemma \ref{Bmax}. For that, we have to show that $\tilde{B}_\pm$ is closed in $\check{R}$: Let $\phi_i\in \tilde{B}_\pm$ with $\phi_i\to \phi$ in
$\check{R}$.
Then, we get together with Remark \ref{rem_norms}.ii that
\begin{align*}\Vert P_\pm \phi\Vert_{\check{R}}=& \Vert P_\pm (\phi-\phi_i)\Vert_{\check{R}}=\Vert\mathcal{E} P_\pm (\phi-\phi_i)\Vert_{D}\leq \frac{1}{2}\left( \Vert\mathcal{E} (\phi-\phi_i)\Vert_{D} +\Vert\mathcal{E} \nu\cdot (\phi-\phi_i)\Vert_{D}\right)\\
\leq& C\Vert \mathcal{E} (\phi-\phi_i)\Vert_{D} = C\Vert \phi-\phi_i\Vert_{\check{R}}\to 0.
\end{align*}
Hence, $P_\pm \phi=0$ and $\phi\in \tilde{B}_\pm.$
For (v), we have clearly that $\dom\, D_{B_\pm}\subset \dom\, D_\pm$. It remains to show that
any $\phi\in \dom\, D_\pm$ is already in $H_1(M, \SS_M)$. By Lemma \ref{dense_in_graph_norm},
there is a sequence $\phi_i\in \Gamma_c^\infty(M,\SS_M)$ with $\phi_i\to \phi$ in the graph norm.
Consider $\mathcal{E}P_\pm R\phi_i$. By the linearity of $\mathcal{E}$, \eqref{ER_ext} and Remark \ref{rem_norms}.ii we get
\begin{align*} \Vert \mathcal{E}P_\pm R \phi_i\Vert_D=& \Vert \mathcal{E}P_\pm R(\phi_i-\phi)\Vert_D\\
\leq& \frac{1}{2}\left(\Vert \mathcal{E} R(\phi_i-\phi)\Vert_D + \Vert \mathcal{E}(\nu\cdot R(\phi_i-\phi))\Vert_D \right)\leq C \Vert \phi_i-\phi\Vert_D\to 0.
\end{align*}
Hence, $\psi_i:=\phi_i - \mathcal{E}P_\pm R \phi_i \to \phi$ in the graph norm. Since $\psi_i \in \dom\, D_{B_\pm}$, this implies
that $\dom\, D_{B_\pm}$ is dense in $\dom\, D_\pm$. Moreover, note that with (iii) and (i) we have
$$\int_\Sigma \langle R\psi_i, \widetilde{D}^{\Sigma}R\psi_i\rangle ds= \int_\Sigma \langle P_\mp R\psi_i, \widetilde{D}^{\Sigma}P_\mp R\psi_i\rangle
ds=\int_\Sigma \langle P_\mp R\psi_i, P_\pm \widetilde{D}^{\Sigma} R\psi_i\rangle ds=0.$$
Hence, together with the Lichnerowicz formula \eqref{Lich} (extended to $H_1$ in Lemma \ref{extequ}), the bounded geometry and (i), we get
\begin{align*}
\Vert \psi_i-\psi_j\Vert_{H_1}^2=&\Vert \psi_i-\psi_j\Vert_D^2-\frac{1}{4}\int_M\langle (\s^M+2\i\Omega\cdot)(\psi_i-\psi_j),(\psi_i-\psi_j)\rangle dv\\
&-\frac{n}{2}\int_\Sigma H|R(\psi_i-\psi_j)|^2 ds\\
\leq& C\Vert \psi_i-\psi_j\Vert_D^2 \mp \i \frac{n}{2}\int_\Sigma \langle \nu\cdot R(\psi_i-\psi_j), H R(\psi_i-\psi_j)\rangle\, ds\\
\leq& C\Vert \psi_i-\psi_j\Vert_D^2.
\end{align*}
Thus, $\psi_i$ is even a Cauchy sequence in $H_1$ which implies that $\phi$ is already in $H_1(M,\SS_M)$.
Note that this implies in particular that $B_\pm=\tilde{B}_\pm$.
For (vi),
the domain of the adjoint is defined by
\[\dom\, (D_{+})^*=\{\theta\in L^2(M,\SS_M)\ |\ \exists \chi\in L^2(M,\SS_M)\, \forall \psi\in \dom\, D_{+}:
(\chi, \psi)=(\theta, D\psi) \}.\]
Since $\Gamma_{cc}^\infty(M,\SS_M) \subset \dom\, D_{+}$, we get
$\dom\, (D_{+})^* \subset \dom\, D_{\mathrm{max}}$. Thus,
$$\dom\, (D_{+})^*=\{\theta\in \dom\, D_{\mathrm{max}}\ |\ \forall \psi\in \dom\, D_{+}:
(D\theta, \psi)=(\theta, D\psi) \}.$$
Due to Lemma \ref{againextequ}, the definition of $\dom\, D_{+}$ and (v), we get
\begin{align*}
\dom\, (D_{+})^*=&\Big\{\theta\in \dom\, D_{\mathrm{max}}\ |\ \forall \psi\in H_1(M,\SS_M):
\int_\Sigma \langle \nu\cdot R\theta, P_-R\psi\rangle ds=0 \Big\}.\end{align*}
By (i) and (ii), we have
\[-\int_\Sigma \langle R\theta, \nu\cdot P_-R\psi\rangle ds=\i\int_\Sigma \langle R\theta, P_-R\psi\rangle ds=\i\int_\Sigma \langle P_-R\theta, R\psi\rangle ds\] and $P_-R\theta\in H_{-\frac12}(\Sigma, \SS_M|_\Sigma)$.
Hence, together with Lemma \ref{H1_dommax} and Lemma \ref{pairing-Sobolev},
\begin{align*}
\dom\, (D_{+})^*=&\Big\{\theta\in \dom\, D_{\mathrm{max}}\ |\ \forall \hat{\psi}\in H_{\frac{1}{2}}(\Sigma,\SS_M|_\Sigma):
\int_\Sigma \langle P_-R\theta, \hat{\psi}\rangle ds=0 \Big\}\\
=&\{\theta\in \dom\, D_{\mathrm{max}}\ |\ P_-R\theta =0 \}=\dom\, D_-.
\end{align*}
The assertion (vii) is proven as in the closed case \cite[Proof of Corollary 6]{HMZ02}: Let $\phi\in \ker D_{\pm}$, i.e. $\phi\in\dom\, D_{\mathrm{max}}$, $D\phi=0$ on $M$, and $P_\pm R\phi=0$ on $\Sigma$. Using this, \eqref{L2-structure_mod_boundary}, Lemma \ref{againextequ}
and (i), we compute
\begin{align*}
0&=\int_M \langle \phi, \i D\phi\rangle dv - \int_M \langle D\phi, \i\phi\rangle dv =\int_\Sigma \langle \nu \cdot R\phi, \i R\phi\rangle ds \\
&=\int_\Sigma \langle \nu \cdot P_\mp R\phi, \i P_\mp R\phi\rangle ds=\pm \int_\Sigma |R \phi|^2 ds.
\end{align*}
Hence, $R\phi=0$ and $\phi\in \dom\, D_{\mathrm{min}}$, cf. Lemma \ref{norm_equ_0}. But due to the strong unique continuation property of the Dirac operator \cite[Section 1.2]{BBL}, $D_{\mathrm{min}}\phi=0$ implies $\phi=0$.
\end{proof}
\section{Examples and the coercivity condition}\label{section_coer}
In Definition \ref{coer}, we defined when an operator
$D_B$ is $(\mathrm{dom}\, D_B)$-coercive at infinity. When working with $B$, we will also use the
short version -- $B$-coercive at infinity. In this section, we will compare this notion with the one of coercivity at infinity given in \cite[Definition 8.2]{baer_ballmann_11}, as cited below, and give some examples.
\begin{definition}\label{coerbb}
\cite[Definition 8.2]{baer_ballmann_11}
$D\colon \dom\, D_{\mathrm{max}} \subset L^2(M,\SS_M)\to L^2(M, \SS_M)$ is coercive at infinity if there is a compact subset $K\subset M$ and
a constant $c>0$ such that $$\Vert D\phi\Vert_{L^2}
\geq c\Vert \phi\Vert_{L^2},$$ for all $\phi\in \Gamma_c^\infty (M\setminus K, \SS_M)$.
\end{definition}
By \cite[Lemma 8.4]{baer_ballmann_11}, $D$ is coercive at infinity for a closed boundary
$\Sigma$ if and only if there is a compact subset $K\subset M$ and a constant $c>0$ such that for all
$\phi\in \Gamma_{cc}^\infty(M\setminus K, \SS_M)$ we have $\Vert D\phi\Vert_{L^2}
\geq c\Vert \phi\Vert_{L^2}$. For noncompact boundaries, only the 'only if'-direction survives since, in contrast to closed boundaries, there is no compact $K$ such that $\Gamma_c^\infty(M\setminus K,\SS_M)\subset \Gamma_{cc}^\infty(M,\SS_M)$.\\
Before we compare those different coercivity conditions we give some examples:
\begin{example}
\begin{itemize}
\item[(i)] By the unique continuation property, the kernel of $D_\mathrm{min}$ is trivial. Thus,
together with Lemma \ref{norm_equ_0}, we have that $D$ is $(B=0)$--coercive at infinity if
and only if there is a constant $c>0$ such that for all $\phi\in \Gamma_{cc}^\infty(M,\SS_M)$
$$\Vert D\phi\Vert_{L^2} \geq c\Vert \phi\Vert_{L^2}.$$
For closed boundaries, this implies coercivity at infinity by \cite[Lemma 8.4]{baer_ballmann_11} which was cited above. We will see that for closed boundaries also the converse is true, cf. Corollary \ref{cor_coer_closed}.
\item[(ii)] By Lemma \ref{lemmaonP+-}, $\ker D_{B_\pm}=\{0\}$. Thus, $D$ is $B_\pm$-coercive
at infinity if and only if there is a constant $c>0$ such that $$\Vert D\psi\Vert_{L^2}\geq c\Vert \psi\Vert_{L^2}$$
for all $\psi\in H_1(M,\SS_M)$ with $P_\pm R\psi =0$. In particular, this implies $(B=0)$-coercivity at infinity. More generally, if $B_1\subset B_2$ and $\ker D_{B_1}=\ker D_{B_2}$, then $B_2$-coercivity at infinity implies $B_1$-coercivity at infinity, see the short argument after this example.
\end{itemize}
\end{example}
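The last claim in (ii) follows since $B_1\subset B_2$ implies $\dom\, D_{B_1}\subset \dom\, D_{B_2}$ and, using $\ker D_{B_1}=\ker D_{B_2}$,
\[ \dom\, D_{B_1}\cap \left(\ker D_{B_1}\right)^\perp\subset \dom\, D_{B_2}\cap \left(\ker D_{B_2}\right)^\perp,\]
so the coercivity estimate for $B_2$ restricts to the smaller set.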
\begin{lemma}\label{equiv_coer_easydir}
Let $D$ be coercive at infinity, and let $B$ be a boundary condition. Assume that
$\dom\, D_B\cap (\ker D_B)^\perp\subset H_1(M,\SS_M)$ and that the $H_1$-norm and the graph norm are
equivalent on $\dom\, D_B\cap (\ker D_B)^\perp$. Then, $D$ is $B$-coercive at infinity.
\end{lemma}
\begin{proof} Since $D$ is coercive at infinity, there is a compact subset $K\subset M$ and a constant $c>0$ such
that $\Vert D\phi\Vert_{L^2}\geq c\Vert \phi\Vert_{L^2}$ for all $\phi\in \Gamma_c^\infty(M\setminus K, \SS_M)$.
Assume that $D$ is not $B$-coercive at infinity. Then, there is a sequence
$\phi_i\in \dom\, D_B\cap (\ker D_B)^\perp$ with $\Vert \phi_i\Vert_{L^2}=1$ and $\Vert D\phi_i\Vert_{L^2}\to 0$. By
equivalence of the norms, $\phi_i$ is also bounded in $H_1$. Hence, after passing to a subsequence, $\phi_i \to \phi$ weakly in $H_1$ and, thus, locally strongly in $L^2$. Moreover, $D\phi=0$.
Together with $\phi_i\perp \ker D_B$, this implies $\phi=0$. Thus, for
each compact subset $K'\subset M$ we have $\int_{K'} |\phi_i|^2 dv\to 0$ as $i\to \infty$. Let
$\eta\colon M \to [0,1]$ be a cut-off function and $K'$ be a compact subset such that
$K\subset K' \subset M$ and $\eta=0$ on $K$, $\eta=1$ on $M\setminus K'$ and $|d \eta|\leq a$ for a constant $a>0$ big enough.
Then, $\supp\, (\eta\phi_i)\subset M\setminus K$, $\Vert D(\eta \phi_i)\Vert_{L^2}\leq a\Vert \phi_i\Vert_{L^2(K')}+\Vert D\phi_i\Vert_{L^2}\to 0$
and
$$1\geq \Vert\eta \phi_i\Vert_{L^2}\geq \Vert \phi_i \Vert_{L^2}- \Vert (1-\eta)\phi_i
\Vert_{L^2}\geq 1-\Vert \phi_i\Vert_{L^2(K')}\to 1.$$ By Lemma \ref{dense_in_graph_norm}, we can choose a sequence $(\phi_{ij})_j\subset \Gamma_c^\infty(M,\SS_M)$ with $\phi_{ij}\to \phi_i$ in the graph norm as $j\to \infty$. Then, $\eta\phi_{ij}\to \eta \phi_i$ in the graph norm and $\supp\, (\eta\phi_{ij})\subset M\setminus K$. Thus, we can find $j=j(i)$ such that $\Vert D(\eta \phi_{ij(i)})\Vert_{L^2}\to 0$ and $\Vert \eta \phi_{ij(i)}\Vert_{L^2}\to 1$ as $i\to \infty$. But this contradicts the assumption that $D$ is coercive at infinity.
\end{proof}
From the last Lemma and Lemma \ref{equiv_H1_D} we obtain immediately
\begin{corollary}\label{easy_dir_cor} If $D$ is coercive at infinity and $B\subset H_{\frac{1}{2}}(\Sigma, \SS_M|_\Sigma)$, then $D$ is $B$-coercive at infinity.
\end{corollary}
Next we give some (very restrictive) conditions that are sufficient to prove that $B$-coercivity at infinity implies coercivity at infinity. Those additional assumptions are needed to make sure that the $\phi_i$ appearing in Definition \ref{coerbb} are in $\dom\, D_B$.
\begin{lemma}\label{equiv_coer}Let $B$ be a boundary condition with $B\subset H_\frac{1}{2}(\Sigma, \SS_M|_\Sigma)$. Assume that there exists a compact subset $K'\subset M$ with $\Gamma_c^\infty(M\setminus K', \SS_M)\subset \dom\, D_B$.
If $D\colon \mathrm{dom}\, D_B\subset L^2(M, \SS_M) \to L^2(M, \SS_M)$ has a finite-dimensional kernel and $D$ is $B$-coercive at infinity, then $D$ is coercive at infinity.
\end{lemma}
\begin{proof}Assume that $D$ is not coercive at infinity. Then, for all compact subsets $K\subset M$ there exists a sequence $\phi_i\in \Gamma_c^\infty(M\setminus K, \SS_M)$ with $\Vert \phi_i\Vert_{L^2}=1$ and $\Vert D\phi_i\Vert_{L^2}\to 0$. We choose $K$ such that $K'\subset K$. Then, all those $\phi_i\in \dom\, D_B$. Thus, $\phi_i\to \phi\in \dom\, D_B$ weakly in the graph norm of $D$, $\phi\in \ker\, D_B$ and $\phi=0$ on $K$. We decompose $\phi_i=\phi_i^k +\phi_i^\perp$ where $\phi_i^k \in \ker\, D_B$ and $\phi_i^\perp \in \left( \ker D_B\right)^\perp$. Then, $\Vert D\phi_i^\perp\Vert_{L^2}\to 0$. Moreover, since the kernel is assumed to be finite-dimensional, we can write $\phi_i^k=\sum_{j=1}^l a_{ij}\psi_j$ where the $\psi_j$'s form an orthonormal basis of $\mathrm{ker}\, D_B$. Thus, $\Vert \phi_i^k\Vert_{L^2}^2=\sum_{j=1}^l|a_{ij}|^2$. Assume now that $\Vert \phi_i^\perp\Vert_{L^2}\to 0$. Then, $\phi_i^\perp\to 0$ in the graph norm. But $\Vert \phi_i\Vert_{L^2}=1$. This implies that there is at least one $j\in \{ 1, \ldots, l\}$ such that $|a_{ij}|$ is bounded away from zero for almost all $i$, i.e., $\phi$ cannot vanish identically. Since $\phi$ is zero on $K$, this contradicts the unique continuation principle. Thus, the assumption was wrong, and there exists $c>0$ with $\Vert \phi_i^\perp\Vert_{L^2}>c$, so $D$ is not $B$-coercive at infinity.
\end{proof}
Note that the assumption on the existence of $K'$ is very restrictive. If the boundary is closed, it is automatically satisfied, and we get the corollary below. If the boundary is noncompact, it is in general not true, e.g. for the minimal domain of $D$.
But there are also examples for manifolds with noncompact boundary and closed extension of $D_{cc}$ where the assumptions of the last Lemma are satisfied:
\begin{example} Let $(\Sigma, h)$ be a complete Riemannian $\spin$ manifold.
Let $M_\infty=\Sigma\times \R$ and $M=\Sigma\times [0,\infty)$ be equipped with the product metric $h+dt^2$. Both manifolds are of bounded geometry. Since $M_\infty$ is complete and has no boundary, the Dirac operator on $M_\infty$ is essentially self-adjoint. Assume that the Dirac operator on $M_\infty$ is invertible.
Let $K'\subset M_\infty$ be a compact subset that intersects $\Sigma\times \{0\}$ in a subset of non-zero measure. Define $\mathcal{L}$ to be the linear span of $\Gamma_c^\infty(M\setminus K', \SS_M)\cup \Gamma_{cc}^\infty(M,\SS_M)$ and $\dom\, D_B\colon= \overline{\mathcal{L}}^{\Vert. \Vert_D}$. Then, $B=\overline{\Gamma_c^\infty(\Sigma\setminus K', \SS_M|_\Sigma)}^{\Vert. \Vert_{\check{R}}}$. Note that by construction $\dom\, D_B$ is the domain of a closed extension of $D_{cc}$. But it is strictly smaller than $\dom\, D_{\rm max}$ since all $\phi\in B$ have to vanish on $\Sigma\cap K'$. In particular, by the strong unique continuation property of $D$ \cite[Section 1.2]{BBL} $D_B\colon \dom\, D_B \to L^2(M,\SS_M)$ has trivial kernel.
It remains to show that $D_B$ is $B$-coercive at infinity, i.e. there is $c>0$ such that for all $\phi\in \mathcal{L}$ we have $\Vert D\phi\Vert_{L^2}\geq c\Vert \phi\Vert_{L^2}$. We will show this by contradiction, that is, we assume that there is a sequence $\phi_i\in \mathcal{L}$ with $\Vert \phi_i\Vert_{L^2}=1$ and $\Vert D\phi_i\Vert_{L^2}\to 0$. We will construct a sequence of spinors on $M_\infty$. Let $\tilde{\phi}_i$ be obtained from $\phi_i$ by reflection along $\Sigma$. Clearly, $\tilde{\phi}_i\in L^2(M_\infty, \SS_{M_\infty})$. Moreover, note that $\tilde{\phi}_i$ is everywhere continuous. Let $\nu$ be the inward normal vector field of $M$.
For $\psi\in \Gamma_c^\infty(M_\infty, \SS_{M_\infty})$ we can estimate using \eqref{L2-structure_mod_boundary}
\begin{align*}
|(\tilde{\phi}_i,& D\psi)_{L^2(M_\infty)}|= \left|\int_{\Sigma\times (0,\infty)} \< \tilde{\phi}_i, D\psi\> + \int_{\Sigma\times (-\infty,0)} \< \tilde{\phi}_i, D\psi\>\right|\\
= &\left|\int_{\Sigma\times (0,\infty)} \< D\tilde{\phi}_i, \psi\> + \int_{\Sigma} \< \nu\cdot \tilde{\phi}_i|_\Sigma, \psi|_\Sigma\> + \int_{\Sigma\times (-\infty,0)} \< D\tilde{\phi}_i, \psi\> + \int_{\Sigma} \< -\nu\cdot \tilde{\phi}_i|_\Sigma, \psi|_\Sigma\> \right|\\
\leq& 2\Vert D\phi_i\Vert_{L^2(M)} \Vert \psi\Vert_{L^2(M_\infty)}\to 0.
\end{align*}
In particular this means that $\tilde{\phi}_i\in H_1(M_\infty, \SS_{M_\infty})$ and that $\Vert D\tilde{\phi}_i\Vert_{L^2(M_\infty)}\to 0$ while $\Vert \tilde{\phi}_i\Vert_{L^2(M_\infty)}=\sqrt{2}$. This gives a contradiction to the invertibility of the Dirac operator on $M_\infty$.
\end{example}
\begin{corollary}\label{cor_coer_closed} Let the boundary $\Sigma$ be closed.
If $B$ is an elliptic boundary condition as defined in \cite[Definition 7.5]{baer_ballmann_11},
$B$-coercivity at infinity implies coercivity at infinity. In particular, $D$ is $(B=0)$-coercive at infinity if and
only if it is coercive at infinity.
\end{corollary}
\begin{proof}
If the boundary is closed and $B$ is elliptic, $D_B$ has a finite-dimensional kernel \cite[Theorem 8.5]{baer_ballmann_11}.
The remaining assumptions of Lemma \ref{equiv_coer} are trivially fulfilled, which gives the first claim. The rest follows with Corollary \ref{easy_dir_cor}.
\end{proof}
For closed boundaries and spin manifolds, uniformly positive scalar curvature at infinity is a sufficient condition for $D$ to be coercive at infinity, see \cite[Example 8.3]{baer_ballmann_11}. For noncompact boundaries, we obtain the following
\begin{lem}
\begin{itemize}
\item[(i)] If $\frac 12 \s^M +\i\Omega\cdot$ is a uniformly positive operator, the Dirac operator $D$ is $(B=0)$-coercive at infinity.
\item[(ii)] If $\frac 12 \s^M +\i\Omega\cdot$ is a uniformly positive operator and $H\geq 0$, the Dirac operator $D$ is $B_\pm$-coercive at infinity.
\end{itemize}
\end{lem}
\begin{proof} Let $c>0$ be such that $\frac 12 \s^M +\i\Omega\cdot\geq 2c$. The Lichnerowicz formula
\eqref{Lich} and Lemma \ref{extequ} give
\begin{align*}
\Vert D\phi\Vert_{L^2}^2&=\Vert \nabla \phi\Vert_{L^2}^2+\int_{M}
\frac{\s^M}{4}|\phi|^2 dv + \int_{M} \frac{\i}{2}\langle\Omega\cdot\phi, \phi\rangle dv -
\int_{\Sigma} \langle R\phi,\widetilde{D}^{\Sigma}(R\phi)\rangle ds\\
& +\frac{n}{2}\int_{\Sigma} H|R\phi|^2ds\geq c\Vert \phi\Vert_{L^2}^2 -\int_{\Sigma} \langle R\phi,
\widetilde{D}^{\Sigma}(R\phi)\rangle ds +\frac{n}{2}\int_{\Sigma} H|R\phi|^2ds,
\end{align*}
for all $\phi\in H_1(M, \SS_M)$. Then (i) follows directly with Lemma \ref{norm_equ_0}. For (ii), assume now that $H\geq 0$ and $R\phi\in B_\pm$.
Then, together with Lemma \ref{lemmaonP+-}, this implies
\begin{align*}
\Vert D\phi\Vert_{L^2}^2&\geq c\Vert \phi\Vert_{L^2}^2 -\int_{\Sigma} \langle R\phi,
\widetilde{D}^{\Sigma}(R\phi)\rangle ds=c\Vert \phi\Vert_{L^2}^2 -\int_{\Sigma} \langle P_\mp R\phi,
\widetilde{D}^{\Sigma}(P_\mp R\phi)\rangle ds\\
&= c\Vert \phi\Vert_{L^2}^2 -\int_{\Sigma} \langle P_\mp R\phi,
P_\pm\widetilde{D}^{\Sigma}(R\phi)\rangle ds= c\Vert \phi\Vert_{L^2}^2,
\end{align*}
where the last equality holds because the ranges of $P_+$ and $P_-$ are orthogonal in $L^2(\Sigma,\SS_M|_\Sigma)$.
\end{proof}
\section{ $\Spinc$ Reilly inequality on possibly open boundary domains}\label{Reilly-sec}
In this section, we shortly review the spinorial Reilly inequality. This inequality together with those boundary value problems
discussed in Section \ref{boundary_values} will be the main ingredient in the proof of Theorem \ref{main}.
\begin{thm} {\bf $\Spinc$ Reilly inequality.}
\label{Reilly} For all $\psi\in H_1(M,\SS_M)$, we have
\begin{eqnarray}\label{inequalityReilly}
\int_{\Sigma}\Big(\langle \widetilde{D}^{\Sigma}\psi,\psi\rangle-\frac{n}{2}
H|\psi|^2\Big)ds\geq\int_{M}\Big(\frac{1}{4} \s^M
\vert\psi\vert^2 +\frac 12\langle \i\Omega\cdot\psi, \psi\rangle -\frac{n}{n+1}|D\psi|^2\Big)dv,
\end{eqnarray}
where $dv$ (resp. $ds$) is the Riemannian volume form of $M$ (resp. $\Sigma$).
Moreover,
equality occurs if and only if the spinor field $\psi$ is a twistor-spinor,
i.e. if and only if $P\psi=0$,
where $P$ is the twistor operator
acting on $\SS_M$ and is locally given by $P_X\psi=\nabla_X\psi+\frac{1}{n+1}X\cdot D\psi$ for all $X \in\Gamma(TM)$.
\end{thm}
\begin{proof} The inequality is proved for $\psi\in \Gamma_c^\infty(M, \SS_M)$ analogously to the compact $\Spin$
case \cite[(17)]{HMZ1}; for the convenience of the reader, we shortly recall the argument here.
For general $\psi\in H_1(M,\SS_M)$ the claim then follows using the Trace Theorem \ref{trace_theorem} in the same way as in Lemma \ref{extequ}. We define $1$-forms $\alpha$ and $\beta$ on $M$ by
$\alpha(X) = \langle X\cdot D\psi, \psi\rangle$ and $\beta(X) =
\langle\nabla_X\psi, \psi\rangle$ for all $X
\in \Gamma^\infty(TM)$. Then $\alpha$ and $\beta$
satisfy
$$\delta\alpha = \langle D^2\psi, \psi\rangle - \vert D \psi\vert^2, \quad
\delta\beta =
- \langle\nabla^*\nabla\psi, \psi\rangle + \vert\nabla\psi\vert^2.$$
Applying the divergence theorem with \eqref{sl} and \eqref{diracgauss}, we get
\begin{eqnarray}\label{div}
\int_{\Sigma}\Big(\langle\widetilde{D}^{\Sigma} \psi, \psi\rangle -\frac n2
H\vert\psi\vert^2 \Big) ds = \int_{M} \Big(\vert\nabla
\psi\vert^2 - \vert D\psi\vert^2 +\frac 14 \s^M\ \vert\psi\vert^2 + \frac{\i}{2} \langle\Omega\cdot\psi, \psi\rangle\Big) dv.
\end{eqnarray}
On the other hand, for any spinor field $\psi$ we have
\begin{eqnarray}
\label{twistor}
\vert\nabla\psi\vert^2 = \vert P\psi\vert^2 + \frac{1}{n+1} \vert D\psi\vert^2.
\end{eqnarray}
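For the reader's convenience, we recall how \eqref{twistor} is obtained: expanding the twistor operator in a local orthonormal frame $(e_i)_{i=1}^{n+1}$ of $TM$ and using that Clifford multiplication by tangent vectors is skew-symmetric, so that $\sum_{i}\langle \nabla_{e_i}\psi, e_i\cdot D\psi\rangle=-\vert D\psi\vert^2$ and $\vert e_i\cdot D\psi\vert=\vert D\psi\vert$, one computes
\begin{eqnarray*}
\vert P\psi\vert^2=\sum_{i=1}^{n+1}\Big\vert \nabla_{e_i}\psi+\frac{1}{n+1}\,e_i\cdot D\psi\Big\vert^2
=\vert\nabla\psi\vert^2-\frac{2}{n+1}\vert D\psi\vert^2+\frac{1}{n+1}\vert D\psi\vert^2.
\end{eqnarray*}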
Combining the identities \eqref{twistor} and \eqref{div} with $\vert
P\psi\vert^2 \geq 0$, the result follows. Equality holds if and only if
$\vert P\psi\vert^2 = 0$, i.e. the spinor $\psi$ is a twistor spinor.
\end{proof}
\section{A lower bound for the first nonnegative eigenvalue of the Dirac operator
on the boundary}\label{proofmain}
In this section, we prove Theorem \ref{main}. We will not follow the original proof given in \cite{HMZ1}, due to
the problems concerning the $\mathrm{APS}$-boundary conditions remarked at the end of Example \ref{ex_bd_cond}.iv; instead we will use $B_\pm$ as given in Example \ref{ex_bd_cond}.iii.
\begin{proof}[Proof of Theorem \ref{main}]
Since $\Sigma$ is of bounded geometry, $\widetilde{D}^{\Sigma}: H_1(\Sigma, \SS_M|_\Sigma)\to L^2(\Sigma, \SS_M|_\Sigma)$ is self-adjoint and, hence,
$\lambda_1$ is an eigenvalue or in the essential spectrum of $\widetilde{D}^{\Sigma}$. In both cases, there is
a sequence $\phi_i\in H_1(\Sigma, \SS_M|_\Sigma)$ with $\Vert \phi_i\Vert_{L^2(\Sigma)}=1$ and
$\Vert(\widetilde{D}^{\Sigma} -\lambda_1)\phi_i\Vert_{L^2(\Sigma)}\to 0$. Then, after passing to a subsequence, $\phi_i\to \phi$ weakly in $L^2(\Sigma, \SS_M|_\Sigma)$. (If $\phi\neq 0$, then $\phi$ is an eigenspinor of $\widetilde{D}^{\Sigma}$ for the eigenvalue $\lambda_1$; otherwise $\lambda_1$ is in the essential spectrum of $\widetilde{D}^{\Sigma}$.) We assumed that $D$ is $B_-$-coercive at infinity
(everything which follows is also true when assuming
$B_+$-coercivity at infinity when switching the signs). Then by Lemma \ref{coercive_closed},
the range of
$D_{B_-}$ is closed. Moreover, from Lemma \ref{lemmaonP+-} we have $\ker\, (D_{B_-})^*= \ker\, D_{B_+}=\{0\}$. Thus,
due to Corollary \ref{bvp-H1} for each $i$
there exists a unique $\Psi_i\in H_1(M,\SS_M)$ with $D\Psi_i=0$ and $P_+R\Psi_i=P_+\phi_i.$ Using
Theorem \ref{Reilly} and $\s^M+2\i\Omega\cdot \geq 0$, we obtain
\begin{align*}
0\leq \int_{\Sigma} \left( \langle \widetilde{D}^{\Sigma}R\Psi_i,R\Psi_i\rangle
-\frac{n}{2}H|R\Psi_i|^2\right) ds.
\end{align*}
Moreover,
\begin{align*}(\widetilde{D}^{\Sigma} (P_+R\Psi_i +P_-R\Psi_i), P_+R\Psi_i +P_-R\Psi_i)_\Sigma&=
( \widetilde{D}^{\Sigma} P_+ R\Psi_i, P_-R\Psi_i)_\Sigma + ( \widetilde{D}^{\Sigma} P_- R\Psi_i, P_+R\Psi_i)_\Sigma\\&=
( \widetilde{D}^{\Sigma} P_+ R\Psi_i, P_-R\Psi_i)_\Sigma + ( P_- R\Psi_i, \widetilde{D}^{\Sigma} P_+R\Psi_i)_\Sigma,\end{align*}
where we used Lemma \ref{lemmaonP+-} and that $\widetilde{D}^{\Sigma}$ is self-adjoint on $H_1(\Sigma, \SS_M|_\Sigma)$. Hence, summarizing we get that
\begin{align*}
\frac{n}{2}\int_{\Sigma} H|R\Psi_i|^2ds &\leq 2\Re \int_{\Sigma} \langle \widetilde{D}^{\Sigma} P_+ R\Psi_i,P_-R\Psi_i\rangle
ds=2\Re \int_{\Sigma} \langle P_-\widetilde{D}^{\Sigma} \phi_i,P_-R\Psi_i\rangle
ds\\
&\leq 2\Re \int_{\Sigma} \langle P_-(\widetilde{D}^{\Sigma}-\lambda_1) \phi_i,P_-R\Psi_i\rangle
ds+ 2\lambda_1 \Re \int_{\Sigma} \langle P_- \phi_i,P_-R\Psi_i\rangle
ds.
\end{align*}
Using $2\Re \int_{\Sigma} \langle P_- \phi_i,P_-R\Psi_i\rangle
ds\leq \Vert P_- \phi_i \Vert_{L^2(\Sigma)}^2 +\Vert P_-R\Psi_i\Vert_{L^2(\Sigma)}^2 $ and $\lambda_1\geq 0$,
we obtain
\begin{align*}
\frac{n}{2}\inf_\Sigma H \Vert R\Psi_i\Vert_{L^2(\Sigma)}^2&\leq 2\Vert (\widetilde{D}^{\Sigma}-\lambda_1) \phi_i\Vert_{L^2} \Vert R\Psi_i\Vert_{L^2}+ \lambda_1 (\Vert P_- \phi_i \Vert_{L^2(\Sigma)}^2 +\Vert P_-R\Psi_i\Vert_{L^2(\Sigma)}^2).
\end{align*}
Moreover, $(\widetilde{D}^{\Sigma} P_\pm \phi_i, P_\mp \phi_i)=(P_\mp (\widetilde{D}^{\Sigma} -\lambda_1) \phi_i, P_\mp \phi_i) +
\lambda_1 \Vert P_\mp \phi_i\Vert_{L^2}^2$. Since $\widetilde{D}^{\Sigma}$ is self-adjoint,
$\Re (\widetilde{D}^{\Sigma} P_+\phi_i, P_-\phi_i)= \Re (\widetilde{D}^{\Sigma} P_-\phi_i, P_+\phi_i)$. Thus, together with
\[|(P_\mp (\widetilde{D}^{\Sigma} -\lambda_1) \phi_i, P_\mp \phi_i)|\leq \Vert (\widetilde{D}^{\Sigma} -\lambda_1) \phi_i\Vert_{L^2}\Vert\phi_i\Vert_{L^2}\to 0\] as $i\to \infty$,
this implies that $\lim_{i\to \infty} \Vert P_-\phi_i\Vert_{L^2}^2= \lim_{i\to\infty} \Vert P_+\phi_i\Vert_{L^2}^2 =\frac{1}{2}$ for $\lambda_1\neq 0$.
Hence, for certain $\epsilon_i$ with $\epsilon_i\to 0$ as $i\to\infty$
\begin{align*}
\frac{n}{2}\inf_\Sigma H \Vert R\Psi_i\Vert_{L^2(\Sigma)}^2&\leq 2\Vert (\widetilde{D}^{\Sigma}-\lambda_1) \phi_i\Vert_{L^2} \Vert R\Psi_i\Vert_{L^2}+ \lambda_1 (\Vert P_+ \phi_i \Vert_{L^2(\Sigma)}^2 +\epsilon_i +\Vert P_-R\Psi_i\Vert_{L^2(\Sigma)}^2)\\
&\leq 2\Vert (\widetilde{D}^{\Sigma}-\lambda_1) \phi_i\Vert_{L^2} \Vert R\Psi_i\Vert_{L^2}+ \lambda_1 (\Vert P_+ R\Psi_i \Vert_{L^2(\Sigma)}^2 +\epsilon_i +\Vert P_-R\Psi_i\Vert_{L^2(\Sigma)}^2)\\
&\leq 2\Vert (\widetilde{D}^{\Sigma}-\lambda_1) \phi_i\Vert_{L^2} \Vert R\Psi_i\Vert_{L^2}+ \lambda_1 (\Vert R \Psi_i \Vert_{L^2(\Sigma)}^2 +
\epsilon_i).
\end{align*}
Hence, $$\frac{n}{2}\inf_\Sigma H \leq 2\Vert (\widetilde{D}^{\Sigma}-\lambda_1) \phi_i\Vert_{L^2} \Vert
R\Psi_i\Vert_{L^2}^{-1}+ \lambda_1 (1+\epsilon_i \Vert R\Psi_i\Vert_{L^2}^{-2}).$$
With $ \Vert R\Psi_i\Vert_{L^2}^2\geq \Vert P_+ R\Psi_i\Vert_{L^2}^2=\Vert P_+ \phi_i\Vert_{L^2}^2 \to \frac{1}{2}$, we finally get for $i\to\infty$
\[ \frac{n}{2}\inf_\Sigma H \leq \lambda_1.\]
Next we collect all conditions that have to be fulfilled to obtain the equality $ \frac{n}{2}\inf_\Sigma H = \lambda_1$:
\begin{itemize}
\item[(1)] From the spinorial Reilly Inequality \eqref{inequalityReilly}, $\int_M |P\Psi_i|^2dv\to 0$ which implies together with $D\Psi_i=0$ that $\int_M |\nabla \Psi_i|^2dv\to 0$.
\item[(2)] $\int_M \left(\s^M|\Psi_i|^2+2\i\langle \Omega\cdot \Psi_i, \Psi_i\rangle\right) dv\to 0$;
\item[(3)] $\Vert \phi_i - R\Psi_i\Vert_{L^2(\Sigma)}\to 0$;
\item[(4)] $\int_{\Sigma} (H-\inf_\Sigma H) |R\Psi_i|^2 ds \to 0$.
\end{itemize}
In case that $\lambda_1$ is an eigenvalue of $\widetilde{D}^{\Sigma}$ with eigenspinor $\phi$, one can
choose $\phi_i=\phi$ for all $i$. Then $\Psi_i=:\Psi$ for all $i$, and these equality
conditions reduce to: $\phi=R\Psi$, $\Psi$ is a parallel spinor on $M$, $H$ is constant,
and $\int_M \left(\s^M|\Psi|^2+2\i\langle \Omega\cdot \Psi, \Psi\rangle\right) dv= 0$.
\end{proof}
\section{Introduction}
In some remarkable developments, a surprising connection has been
uncovered between the notions of discrete holomorphicity and
Yang-Baxter integrability \cite{RC,IC,C,LR,IR}. On the one hand,
a parafermionic observable on the lattice is discretely holomorphic,
i.e., the observable satisfies a version of the discrete Cauchy-Riemann
equations. This requirement leads to a set of linear equations
which can be solved to yield the Boltzmann weights of the model
at criticality. The surprise is that, on the other hand, these are
the same Boltzmann weights that follow by solving the star-triangle or
Yang-Baxter equations.
This connection has been observed by Rajabpour and Cardy \cite{RC} for
the $\mathrm{Z}_N$ model and by Ikhlef and Cardy \cite{IC} for the Potts model,
the dilute $O(n)$ loop model and the dilute $C_2^{(1)}$ loop model.
This list also includes the Ashkin-Teller model \cite{LR,IR}.
As yet there is no satisfactory explanation for this unexpected
connection between discrete holomorphicity and Yang-Baxter
integrability. Our aim in this paper is to explicitly establish that
the star-triangle equation, and thus Yang-Baxter integrability, is a
consequence of discrete holomorphicity. We will do this in the context
of general rhombi on a Baxter lattice and the $\mathrm{Z}_N$ model. Our
approach is thus algebraic.
The utility and power of the Yang-Baxter equation is well known.
The importance and full power of discrete holomorphicity is becoming
increasingly recognised. Setting up a discretely holomorphic observable
has been a very useful step for the passage from discrete lattice
models to the continuous functions of conformal field theory
\cite{S,DSlectures}. Among other examples, discrete holomorphicity
is a key ingredient in Duminil-Copin and Smirnov's \cite{DS} remarkable
and long sought after rigorous proof of the connective constant for
self-avoiding walks on the honeycomb lattice.
The exact value had been obtained 30 years ago by Nienhuis
\cite{Nienhuis} in the $n = 0$ limit of the $O(n)$ loop model on the
honeycomb lattice. Nienhuis used various mappings between the vertex
weights of different lattice models. The key point is that for $n=0$
all contributions from closed loops vanish, leaving only self-avoiding
walks.
The crucial first step in the arguments used by Duminil-Copin and
Smirnov is the introduction of a parafermionic observable on the
lattice which is discretely holomorphic.
The second step involved the use of counting arguments in a domain of
the hexagonal lattice. This step built on previous rigorous
mathematical results obtained for decomposition of excursions across
finite domains. A more formal proof has been given by Klazar
\cite{klazar}.
The fact that such exact and rigorous results exist is no doubt related
to the underlying integrability of the $O(n)$ loop model on the
honeycomb lattice \cite{Baxter}.
The arguments used by Duminil-Copin and Smirnov \cite{DS} were extended
to the $O(n)$ loop model on the honeycomb lattice with a boundary by
Beaton, Bousquet-Melou, de Gier, Duminil-Copin and Guttmann \cite{BDG}
to give a mathematically rigorous proof for the critical surface
adsorption temperature of self-avoiding walks on the honeycomb lattice.
This polymer adsorption transition corresponds to the $n = 0$ limit
of the $O(n)$ model at the special surface transition.
The exact value for the adsorption transition had been obtained earlier
by Batchelor and Yung \cite{BY} from the boundary Boltzmann weights
obtained by solving the boundary version of the Yang-Baxter equation.
Discrete holomorphicity at a boundary has been considered by Ikhlef
\cite{I} who was able to recover the boundary weights of the $O(n)$
loop model on the square lattice and to obtain a new set of boundary
weights for the $\mathrm{Z}_N$ model.
\section{From discrete holomorphicity to the star-triangle equation}
For a $\mathrm{Z}_N$ symmetric $N$-state spin model on a graph, the weight
$w(s_a, s_b)$ of the interaction on an edge $\edge{a}{b}$ is
unchanged by the global transformations $s_r \mapsto \omega s_r$
and $s_r \mapsto s_r^*$ of the spins. Here the spins take values
from the $N$th roots of unity, $s_r = \omega^{q_r}$, with
$q_r \in \{0, 1, \ldots, N-1\}$ and
$\omega = \exp\wrap{{2\pi\ii/N}}$. For a review of these models
and other important models contained therein, such as the
Ising and Potts models, see, e.g., \cite{Wu}.
Since the weight depends only
on the difference of the $q_r$ variables, we write it as $W(q_a - q_b)$,
or, by a slight abuse of notation, simply $W(a-b)$. The
Kramers-Wannier duality of the model \cite{WuWang} lets one define
disorder variables $\disorder{r}$ dual to the spins on the dual graph. The effect of
inserting a disorder variable at the dual site $\tilde{r}$ is to
introduce a string connecting the site to a site at infinity or at
the boundary. Whenever the string intersects an edge of the original
lattice, it modifies the weight on that edge by, for definiteness,
lowering the spin $q_r$ to its right by one.
\begin{figure}[h]
\centering \includegraphics{figure1}
\caption{A rhombic embedding of the covering lattice of a
heterogeneous model showing a string attached to a disorder variable.
The empty circles denote the spin variables, while the full circles
denote the disorder variables. The location of the parafermion is
marked with an $\times$.
}
\label{fig:fig1}
\end{figure}
We consider a rhombic embedding of the covering lattice, the union
of the original lattice and its dual, onto the complex plane (figure
\ref{fig:fig1}).
Each elementary rhombus of this lattice contains an edge from the
original lattice and an edge from the dual lattice. The parafermion
introduced by Rajabpour and Cardy \cite{RC}
\begin{equation*}
\RC{r}{r} = \exp\wrap{-\ii\,\sigma\,\theta_{\coveredge{r}{r}}}
\cdot s_r\cdot \disorder{r}
\end{equation*}
lives on the midpoint of the edges. Here, the angle
$\theta_{\coveredge{r}{r}}$ is the angle between the edge traversed
from the spin to the disorder variable and an arbitrary but fixed
axis. The real parameter $\sigma$ can be identified as the
conformal spin of the observable. When the string enters a particular rhombus
of opening angle $\alpha$ through the disorder variable $\disorder{X}$
(figure \ref{fig:fig2}), imposing the condition of discrete
holomorphicity
\begin{equation}
\sum_{\lozenge} \RC{r}{r}\,\Delta z_{\coveredge{r}{r}} = 0
\end{equation}
on the contour sum of the parafermion around the rhombus in the
counterclockwise direction we get,
\begin{equation*}
-\rme^{\ii\,\sigma\,\pi}\,s_g\,\disorder{Y}
-\rme^{\ii\,(1-\sigma)\,\alpha}\,s_a\,\disorder{Y}
+s_a\,\disorder{X}
+\rme^{\ii\,\sigma\,\pi}\,\rme^{\ii\,(1-\sigma)\,\alpha}
\,s_g\,\disorder{X} = 0
\end{equation*}
where the variables $s_a$, $\disorder{X}$, $s_g$, $\disorder{Y}$ are at
the corners of the contour, in that order, and $\Delta z$ is the difference between the complex coordinates of the two endpoints of the side of the
rhombus on which the $\RC{r}{r}$ variable lives, taken in the direction in which the contour is traversed.
As was explained in \cite{RC, I}, we have taken the special case $m = 1$
of the more general definition of the parafermion
\begin{equation*}
\psi^{(m)}_{\coveredge{r}{r}}=\exp\wrap{-\ii\,\sigma_{m}\,\theta_{\coveredge{r}{r}}}
\cdot s^m_r\cdot \mu^{(m)}_{\tilde{r}}
\end{equation*}
where the string attached to the disorder variable
$\mu^{(m)}_{\tilde{r}}$ modifies the weight of an edge it crosses by
lowering the spin to its right by $m$, and $m \in \{1, \ldots, N-1\}$.
These observables, which are also discretely holomorphic at integrable
critical points, are related to the one we consider by the global
transformation $s_r \mapsto {s^m_r}$, and thus, it is enough to
consider the $m=1$ case only without loss of generality.
\begin{figure}[h]
\centering \includegraphics{figure2}
\caption{An elementary rhombus with opening angle $\alpha$. The string
enters the rhombus via the disorder variable $\disorder{X}$. The
arbitrary axis is in the direction of the edge $\edge{a}{X}$. }
\label{fig:fig2}
\end{figure}
With the introduction of
$\phi(\alpha) = \rme^{\ii\,(1-\sigma)\,\alpha}$, the condition takes
the form
\begin{equation} \label{eq:onerhombus}
\phi(-\pi)\,s_g\,\disorder{Y}
-\phi(\alpha)\,s_a\,\disorder{Y}
+s_a\,\disorder{X}
-\phi(\alpha-\pi)\,s_g\,\disorder{X} = 0
\end{equation}
Since, by our characterization of the disorder variables,
\begin{equation*}
\frac{\disorder{Y}}{\disorder{X}}
= \frac{W_{\alpha}(a-(g-1))}{W_{\alpha}(a-g)}
= \frac{W_{\alpha}(n_a+1)}{W_{\alpha}(n_a)}
\end{equation*}
we have
\begin{equation}\label{eq:recurrence}
\wrap{\phi(-\pi)-\phi(\alpha)\,\omega^{n_a}} W_{\alpha}(n_a+1)
+\wrap{\omega^{n_a}-\phi(\alpha-\pi)} W_{\alpha}(n_a)
= 0
\end{equation}
which is a linear recurrence relation for the weights in terms of the
bond variable $n_a=a-g$.
This relation specifies the weights up to a physically irrelevant
constant. The $\mathrm{Z}_N$ symmetries, however, put constraints on the value of
the unknown $\sigma$ (see, e.g., \cite{I}). Since the weights are
symmetric and periodic, we must have
$W_{\alpha}(n_a) = W_{\alpha}(N-n_a)$
for every $n_a$, from which the recurrence relation gives
\begin{equation*}
\wrap{1-\phi(2\alpha)}\wrap{\omega^{-1}- \phi(-2\pi)} = 0
\end{equation*}
Taking
\begin{equation}\label{eq:constraint}
\phi(2\pi) = \omega
\end{equation}
makes the parafermion holomorphic for every opening angle $\alpha$.
Therefore,
\begin{equation}
\sigma = 1 - \frac{1}{N} + \ell
\end{equation}
for any integer $\ell$. The case $\ell = 0$ gives the
Fateev-Zamolodchikov solutions \cite{FZ}. Since the subsequent
calculations make use of relation (\ref{eq:constraint}) and not
the actual value of $\sigma$, the proof presented here is more
general than merely proving that the Fateev-Zamolodchikov weights
give us an integrable model. The angle $\alpha$ is proportional to
(in fact, in \cite{FZ}, equal to) the spectral parameter and plays
an analogous r\^ole.
It should be noted that the weights are real because the ratio of
the weights given by the recurrence relation (\ref{eq:recurrence})
is real.
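For concreteness, the recurrence relation (\ref{eq:recurrence}) can be solved in closed form in the case $\ell=0$, where $\phi(\theta)=\rme^{\ii\,\theta/N}$. Writing each bracket as $\rme^{\ii a}-\rme^{\ii b}=2\ii\,\rme^{\ii(a+b)/2}\sin\frac{a-b}{2}$, the overall phase factors cancel in the ratio and one finds
\begin{equation*}
\frac{W_{\alpha}(n+1)}{W_{\alpha}(n)}
=\frac{\sin\big(\frac{(2n+1)\pi-\alpha}{2N}\big)}{\sin\big(\frac{(2n+1)\pi+\alpha}{2N}\big)},
\qquad
W_{\alpha}(n)=W_{\alpha}(0)\,\prod_{k=1}^{n}
\frac{\sin\big(\frac{(2k-1)\pi-\alpha}{2N}\big)}{\sin\big(\frac{(2k-1)\pi+\alpha}{2N}\big)},
\end{equation*}
so that for $0<\alpha<\pi$ each ratio is manifestly real and positive; up to normalization these are the self-dual weights of \cite{FZ}.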
\subsection{Self-duality}
\begin{figure}[h]
\centering \includegraphics{figure3}
\caption{Rhombic embedding of an anisotropic square lattice. The
edge with an opening angle $\alpha$ is horizontal, while that
with an opening angle $\pi-\alpha$ is vertical in the original
lattice.}
\label{fig:fig3}
\end{figure}
We now show that the weights are self-dual. For this, we temporarily
consider the embedding of an anisotropic square lattice (figure
\ref{fig:fig3}). On the one hand, if the horizontal interaction edges
are mapped onto rhombi of opening angle $\alpha$, the vertical edges
are then mapped onto rhombi with opening angle $\pi-\alpha$. We thus
have $\overline{W}_{\alpha}(n_a) = W_{\pi-\alpha}(n_a)$. On the other
hand, taking the discrete Fourier transform of the recurrence
relation (\ref{eq:recurrence}) we have
\begin{equation*}
\wrap{\phi(-\pi)-\phi(-\pi-\alpha)\,\omega^{k_a+1}}
\widetilde{W}_{\alpha}(k_a+1)
+ \wrap{\omega^{k_a} - \phi(-\alpha)}
\widetilde{W}_{\alpha}(k_a)
= 0
\end{equation*}
Using (\ref{eq:constraint}) to replace the extra $\omega$ in the first
term, we see that the dual weights satisfy the same equation with
$\alpha$ replaced by $\pi-\alpha$. Since our unitary Fourier transform
preserves the norm and we adopt the normalization
\begin{equation}
\sum_{n_a\in \mathrm{Z}_N}[W_{\alpha}(n_a)]^2 = 1,
\end{equation}
this proves
\begin{equation}
\overline{W}_{\alpha}(n_a) = W_{\pi-\alpha}(n_a)
= \widetilde{W}_{\alpha}(n_a)
\end{equation}
and captures the crossing symmetry of the problem.
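As a simple illustration, for $N=2$ the transform acts on the ratio $x_{\alpha}=W_{\alpha}(1)/W_{\alpha}(0)$ as
\begin{equation*}
x_{\alpha}\;\longmapsto\; \widetilde{x}_{\alpha}=\frac{1-x_{\alpha}}{1+x_{\alpha}},
\end{equation*}
which is the familiar Kramers-Wannier map of the Ising model. Its fixed point, the positive solution of $x^2+2x-1=0$, namely $x=\sqrt{2}-1$, is attained at the self-dual angle $\alpha=\pi-\alpha=\pi/2$ of the square lattice.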
\subsection{Inversion relations}
\begin{figure}[h]
\centering \includegraphics{figure4}
\caption{Two adjacent rhombi. }
\label{fig:fig4}
\end{figure}
Now we consider another rhombus with angle $\beta$ which sits on top of
the rhombus considered so far (figure \ref{fig:fig4}). The new rhombus
comes with spin $s_b$ and disorder variable $\disorder{Z}$ and shares
the edge $\coveredge{g}{Y}$ with the previous one. We multiply its
holomorphicity equation analogous to (\ref{eq:onerhombus}) by
$\phi(-\beta)$ to orient it correctly and then add the two equations
to get
\begin{eqnarray}\label{eq:tworhombi}
-\phi(\alpha)\,s_a\,\disorder{Y}
+s_a\,\disorder{X}
-\phi(\alpha-\pi)\,s_g\,\disorder{X} \nonumber\\
\quad\quad+ \,\phi(-\pi-\beta)\,s_g\,\disorder{Z}
- s_b\,\disorder{Z}
+\phi(-\beta)\,s_b\,\disorder{Y}
= 0
\end{eqnarray}
The contributions from the common edge cancel as it is traversed
in opposite directions for the two sums. Using
\begin{equation*}
\frac{\disorder{Z}}{\disorder{Y}}
= \frac{W_{\beta}(b - (g-1))}{W_{\beta}(b-g)}
= \frac{W_{\beta}(n_b + 1)}{W_{\beta}(n_b)}
\end{equation*}
the equation (\ref{eq:tworhombi}) gives a quadratic relation
\begin{eqnarray}
\wrap{ \omega^{a}- \phi(\alpha-\pi)\,\omega^{g}}W_{\alpha}(n_a)
\,W_{\beta}(n_b)
\nonumber\\ \quad\quad
+\, \wrap{\phi(-\beta)\,\omega^{b}-\phi(\alpha)\,\omega^{a}}
W_{\alpha}(n_a + 1)\,W_{\beta}(n_b)
\nonumber\\ \quad\quad
+\,\wrap{\phi(-\pi-\beta)\,\omega^{g}-\omega^{b}}W_{\alpha}(n_a + 1)
\,W_{\beta}(n_b+1)
=0
\end{eqnarray}
in the weights.
This relation contains the inversion relations
(see, e.g., \cite{Z}) for the model.
Put $a = b$, so that $n_a = n_b = n$, and $\beta = -\alpha$; then
the equation reduces to
\begin{eqnarray*}
W_{\alpha}(n)\,W_{-\alpha}(n) = W_{\alpha}(n + 1)\,W_{-\alpha}(n+1)
\end{eqnarray*}
thus, the product is independent of $n$,
\begin{equation} \label{eq:ir1}
W_{\alpha}(n)\,W_{-\alpha}(n) = g(\alpha)\,g(-\alpha)
\end{equation}
The other relevant inversion relation
\begin{equation} \label{eq:ir2}
\sum_{g\in\mathrm{Z}_N}W_{\pi+\theta}(a-g)\,W_{\pi-\theta}(b-g)
=\rho(\theta)\,\delta_{a,b}
\end{equation}
follows from this relation and self-duality by expanding the left hand
side in Fourier coefficients and gives
\begin{equation*}
\rho(\theta) = N\,g(\theta)\,g(-\theta)
\end{equation*}
for our normalization.
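As a consistency check, the closed form given above for $\ell=0$ satisfies (\ref{eq:ir1}) term by term, since the ratios entering $W_{\alpha}$ and $W_{-\alpha}$ are exact reciprocals of one another:
\begin{equation*}
\frac{W_{\alpha}(n+1)\,W_{-\alpha}(n+1)}{W_{\alpha}(n)\,W_{-\alpha}(n)}
=\frac{\sin\big(\frac{(2n+1)\pi-\alpha}{2N}\big)}{\sin\big(\frac{(2n+1)\pi+\alpha}{2N}\big)}\cdot
\frac{\sin\big(\frac{(2n+1)\pi+\alpha}{2N}\big)}{\sin\big(\frac{(2n+1)\pi-\alpha}{2N}\big)}=1,
\end{equation*}
so that the product $W_{\alpha}(n)\,W_{-\alpha}(n)$ is indeed independent of $n$.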
\subsection{Star-triangle relation}
\begin{figure}[h]
\centering \includegraphics{figure5}
\caption{Three adjacent rhombi making the star. }
\label{fig:fig5}
\end{figure}
We proceed to add the third rhombus with angle
$\gamma = 2\pi - (\alpha+\beta)$ to make the star (figure
\ref{fig:fig5}). Two of the edges, $\coveredge{g}{Z}$ and
$\coveredge{g}{X}$, are shared and a new spin variable $s_c$ is
introduced. Multiplying the holomorphicity equation analogous to
(\ref{eq:onerhombus}) for this rhombus by $\phi(2\pi -\alpha)$ to
orient it correctly, we add it to the contour sum (\ref{eq:tworhombi})
to yield
\begin{eqnarray}
\label{eq:threerhombi}
\phi(\alpha-3\pi)\,s_g\,\bar{\mu}_{\tilde{X}}
-\phi(\alpha-\pi)\,s_g\,\disorder{X}
\nonumber\\ \quad\quad
+\, s_a\,\disorder{X}-\phi(\alpha)\,s_a\,\disorder{Y}
\nonumber\\ \quad\quad
+\,\phi(-\beta)\,s_b\,\disorder{Y}-s_b\,\disorder{Z}
\nonumber\\ \quad\quad
+\,\phi(\alpha - 2\pi)\,s_c\,\disorder{Z}
-\phi(-\beta)\,s_c\,\bar{\mu}_{\tilde{X}} = 0
\end{eqnarray}
Some features of this equation warrant attention. The disorder variable
$\disorder{X}$ is not the same as $\bar{\mu}_{\tilde{X}}$, as might
be na\"ively expected. A possible interpretation of this is that
the string has gone around the spin $s_g$ once, and thus the
parafermion has acquired a phase factor $\phi(2\pi)$, corresponding
to the lowering of the value of $g$, that is, $q_g$, by one in the
configuration. Hence, the contributions from the edge
$\coveredge{g}{X}$ do not cancel.
Making use of
\begin{equation*}
\frac{\bar{\mu}_{\tilde{X}}}{\disorder{Y}}
= \frac{W_{\gamma}(c-(g-1))}{W_{\gamma}(c-g)}
= \frac{W_{\gamma}(n_c+1)}{W_{\gamma}(n_c)}
\end{equation*}
we have a cubic equation in the weights, which we omit for clarity.
However, summing over $g$, we arrive at
\begin{eqnarray}\label{eq:LHSrecursion}
\wrap{\omega^{a}-\phi(-\beta)\,\omega^{c}}
\,\mathfrak{L}_{\alpha,\beta}(a,b,c)
\nonumber\\ \quad\quad
+\,\wrap{\phi(-\beta)\,\omega^{b}-\phi(\alpha)\,\omega^{a}}
\,\mathfrak{L}_{\alpha,\beta}(a+1,b,c)
\nonumber\\ \quad\quad
+\,\wrap{\phi(\alpha-2\pi)\,\omega^{c}- \omega^{b}}
\,\mathfrak{L}_{\alpha,\beta}(a+1,b+1,c) = 0
\end{eqnarray}
Here
\begin{equation}
\mathfrak{L}_{\alpha,\beta}(a,b,c)
=\sum_{g\in\mathrm{Z}_N}W_{\alpha}(a-g)\,W_{\beta}(b-g)\,W_{\gamma}(c-g)
\end{equation}
is the left hand side of the star-triangle relation. The terms
involving $s_g$ cancel because of the constraint (\ref{eq:constraint})
on $\sigma$. Therefore, the expectation value of the parafermion
is still independent of the path, as is that of the disorder variable
$\disorder{X}$, and consequently the contour sum around the star still
vanishes.
\begin{figure}[h]
\centering \includegraphics{figure6}
\caption{Three adjacent rhombi making the triangle. }
\label{fig:fig6}
\end{figure}
Remarkably, considering the contour sum around the triangle (figure
\ref{fig:fig6}), we get
\begin{eqnarray}\label{eq:RHSrecursion}
\wrap{\omega^{a}-\phi(-\beta)\,\omega^{c}}
\,\mathfrak{R}_{\alpha,\beta}(a,b,c)
\nonumber\\ \quad\quad
+\,\wrap{\phi(-\beta)\,\omega^{b}-\phi(\alpha)\,\omega^{a}}
\,\mathfrak{R}_{\alpha,\beta}(a+1,b,c)
\nonumber\\ \quad\quad
+\,\wrap{\phi(\alpha-2\pi)\,\omega^{c}- \omega^{b}}
\,\mathfrak{R}_{\alpha,\beta}(a+1,b+1,c) = 0
\end{eqnarray}
where
\begin{equation}
\mathfrak{R}_{\alpha,\beta}(a,b,c)
= W_{\pi-\gamma}(a-b)\;W_{\pi-\alpha}(b-c)\;W_{\pi-\beta}(c-a)
\end{equation}
is proportional to the right hand side of the star-triangle relation.
The two sides of the equation therefore obey the same recurrence
relation.
To show that the star-triangle relation follows from these relations,
we have to prove that the two solutions are linearly dependent.
To this end, we consider a general solution of the
recurrence relation $f_{\alpha,\beta}(a, b, c)$.
Taking the complex conjugate of the relation and noting that both
$f_{\alpha,\beta}(a, b, c)$ and $\sigma$ are real, we multiply the
resulting equation by $\phi(\alpha-\beta)\,\omega^{a+b}$. The result,
\begin{eqnarray}
\wrap{\phi(\alpha-\beta)\,\omega^{b}-\phi(\alpha)\,\omega^{a+b-c}}
\,f_{\alpha,\beta}(a,b,c)
\nonumber\\ \quad\quad
+\,\wrap{\phi(\alpha)\,\omega^{a}-\phi(-\beta)\,\omega^{b}}
\,f_{\alpha,\beta}(a+1,b,c)
\nonumber\\ \quad\quad
+\,\wrap{\phi(2\pi-\beta)\,\omega^{a+b-c}
- \phi(\alpha-\beta)\,\omega^{a}}
\,f_{\alpha,\beta}(a+1,b+1,c) = 0
\end{eqnarray}
when added to the recurrence relation, cancels the middle term
and gives the ratio
\begin{equation}
\frac{f_{\alpha,\beta}(a+1,b+1,c)}{f_{\alpha,\beta}(a,b,c)}
= \frac{f_{\alpha,\beta}(a,b,c-1)}{f_{\alpha,\beta}(a,b,c)}
\end{equation}
independent of the function $f_{\alpha,\beta}$. The shift in the
arguments is justified by the $\mathrm{Z}_N$ symmetries. Since this ratio is
the same for both $\mathfrak{R}_{\alpha,\beta}$
and $\mathfrak{L}_{\alpha,\beta}$,
\begin{equation}
\frac{\mathfrak{L}_{\alpha,\beta}(a,b,c)}
{\mathfrak{R}_{\alpha,\beta}(a,b,c)}
= \frac{\mathfrak{L}_{\alpha,\beta}(a,b,c-1)}
{\mathfrak{R}_{\alpha,\beta}(a,b,c-1)}
\end{equation}
The ratio of the two sides is thus independent of the spin $c$. Similar
elimination of the other two terms in the recurrence relation shows that
the ratio is independent of $a$ and $b$ as well, as is demanded by
symmetry.\footnote{Jacques Perk informed us that this style of argument
has been used to prove that the Boltzmann weights
of the more general chiral Potts model satisfy the star-triangle relation \cite{Perk}.}
We therefore have the star-triangle relation
\begin{equation} \label{eq:str}
\mathfrak{L}_{\alpha,\beta}(a,b,c)
= R_{\alpha,\beta}\, \mathfrak{R}_{\alpha,\beta}(a,b,c)
\end{equation}
where $R_{\alpha,\beta}$ depends on the spectral variables only.
As is well-known, the inversion relations (\ref{eq:ir1})-(\ref{eq:ir2})
and the star-triangle relation (\ref{eq:str}) together are sufficient to
establish the existence of commuting transfer matrices parametrized by,
in our case, the angle of the embedded rhombi, or in other words,
Yang-Baxter integrability. By the embedding onto the complex plane,
the schematic diagram of the Yang-Baxter equations acquires a geometric
meaning of rearrangements of the elementary rhombi. Our analysis
shows that the contour sum around a domain picks up only factors
independent of the configuration under such rearrangements.
\subsection{Homogeneous lattices}
\begin{figure}[h]
\centering \includegraphics{figure7}
\caption{Rhombic embedding of the three homogeneous lattices.}
\label{fig:fig7}
\end{figure}
It is interesting to note that the construction presented above
is local in that it does not depend on many of the properties,
e.g., the coordination number, of the underlying graph. To see
the consequences of this lattice independence, we consider the
three archetypical homogeneous lattices, the square, the triangular
and the honeycomb (figure \ref{fig:fig7}). The embedding of these
lattices onto the complex plane tiles the plane with $\alpha$ being
$\pi/2$, $\pi/3$ and $2\pi/3$ respectively. Evaluating the recurrence
relation (\ref{eq:recurrence}) at these points gives an instant
derivation of the known critical weights. For example, for the
Ising model, we have for $W(1)/W(0)$ the known values $\sqrt{2} - 1$,
$1/\sqrt{3}$, and $2 - \sqrt{3}$ for the three lattices respectively
\cite{BaxterBook}.
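These values can be read off directly from the closed form of the recurrence solution in the case $\ell=0$: for $N=2$,
\begin{equation*}
\frac{W(1)}{W(0)}\bigg|_{\alpha=\frac{\pi}{2}}=\frac{\sin\frac{\pi}{8}}{\sin\frac{3\pi}{8}}=\sqrt{2}-1,\qquad
\frac{W(1)}{W(0)}\bigg|_{\alpha=\frac{\pi}{3}}=\frac{\sin\frac{\pi}{6}}{\sin\frac{\pi}{3}}=\frac{1}{\sqrt{3}},\qquad
\frac{W(1)}{W(0)}\bigg|_{\alpha=\frac{2\pi}{3}}=\frac{\sin\frac{\pi}{12}}{\sin\frac{5\pi}{12}}=2-\sqrt{3}.
\end{equation*}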
\section{Concluding remarks}
In this paper, we considered the implication of the condition of
discrete holomorphicity on two and three adjacent rhombi in the context
of the lattice $\mathrm{Z}_N$ model. For two rhombi this led to the quadratic
equation (\ref{eq:tworhombi}) in the Boltzmann weights. This equation
was shown to imply the known inversion relations (\ref{eq:ir1}) and
(\ref{eq:ir2}) for this model. Note that Cardy \cite{Cardy-H}
has effectively established the inversion relations in the $u-v \to 0$
limit of the Yang-Baxter equation.
For three rhombi we obtained the cubic equation (\ref{eq:threerhombi})
in the Boltzmann weights. In establishing this equation the lattice
parafermion picks up a crucial phase factor, with the expectation value
of the parafermion still independent of the path. The importance of such
a phase factor has been highlighted in the topological context of the
loop models by Fendley \cite{F}. Here we have shown that the
star-triangle relation (\ref{eq:str}) follows from the three-rhombus
equation (\ref{eq:threerhombi}).
In the discrete holomorphic approach the two-rhombus equation
(\ref{eq:tworhombi}) and the three-rhombus equation
(\ref{eq:threerhombi}) can thus be considered as analogues of the
two- and three-body conditions for integrability.
However, the simplicity of the discrete holomorphic approach is that
ultimately these conditions are equivalent to the one-rhombus equation
(\ref{eq:onerhombus}).
Indeed, one can push this argument further by building up a transfer
matrix from the rhombi and showing that commuting transfer matrices can
be established as a consequence of discrete holomorphicity, bypassing
the use of the Yang-Baxter equation.
Our results lend further impetus for using discrete holomorphicity as
a tool for investigating lattice models at critical points.
\ack It is a pleasure to dedicate this paper to Fred Wu on the occasion
of his 80th birthday. We thank Vladimir Bazhanov for a number of helpful discussions and
Jacques Perk for a helpful remark.
This work has been partially supported by the Australian Research Council.
\section{Introduction}
A recurrent theme in structural graph theory is the study of specific properties that arise in graphs when excluding a fixed pattern.
The notion of appearing as a pattern gives rise to various graph containment relations.
Maybe the most famous example is the minor relation that has been widely studied, in particular since the fundamental results of Kuratowski and Wagner who proved that planar graphs are exactly those graphs that contain neither $K_5$ nor $K_{3,3}$ as a (topological) minor.
A graph $G$ contains a graph $H$ as a topological minor if $H$ can be obtained from $G$ by a sequence of vertex deletions, edge deletions and replacing internally vertex-disjoint paths by single edges.
Wagner also described the structure of the graphs that exclude $K_{5}$ as a minor: he proved that $K_{5}$-minor-free graphs can be constructed by ``gluing" together (using so-called clique-sums) planar graphs and a specific graph on $8$ vertices, called Wagner's graph.
Wagner's theorem was later extended in the seminal Graph Minor series of papers by Robertson and Seymour (see e.g.~\cite{RobertsonS03a}), which culminated with the proof of Wagner's conjecture, i.e., that graphs are well-quasi-ordered under minors~\cite{RobertsonS04}, and ended with the proof of Nash-Williams' immersion conjecture, i.e., that the graphs are also well-quasi-ordered under immersions~\cite{RS10}. Other major results in graph minor theory include the (Strong) Structure Theorem~\cite{RobertsonS03a}, the Weak Structure Theorem~\cite{RobertsonS95b}, the Excluded Grid Theorem~\cite{RobertsonS86,RobertsonST94,KawarabayashiK12a}, as well as numerous others, e.g.,~\cite{SeymourT93-Grap,KawarabayashiRW11,DawarGK07}.
Moreover, the structural results of graph minor theory have deep algorithmic implications, one of the most significant examples being the existence of cubic time algorithms for the $k$-{\sc Disjoint Paths} and $H$-{\sc Minor Containment} problems~\cite{RobertsonS95b}. For more applications see, e.g.,~\cite{KawarabayashiW10,DemaineFHT05sube,AdlerKKLST11,KawarabayashiK08,BodlaenderFLPS09}.
However, while the structure of graphs that exclude a fixed graph $H$ as a minor has been extensively studied, the structure of graphs excluding a fixed graph $H$ as a topological minor or as an immersion has not received as much attention. While a general structure theorem for topological minor free graphs was very recently provided by Grohe and Marx~\cite{GM12},
finding an exact characterization of the graphs that exclude $K_5$ as a topological minor remains a notorious open problem.
Recently, Wollan gave a structure theorem for graphs excluding complete graphs as immersions~\cite{Wollan15}.
A graph $G$ contains a graph $H$ as a immersion if $H$ can be obtained from $G$ by a sequence of vertex deletions, edge deletions and replacing edge-disjoint paths by single edges.
Observe that if a graph $G$ contains a graph $H$ as a topological minor, then $G$ also contains $H$ as an immersion, as vertex-disjoint paths are also edge-disjoint.
In 2011, DeVos et al.~\cite{DDFMMS11} proved that if the minimum degree of a graph $G$ is at least $200t$ then $G$ contains the complete graph on $t$ vertices as an immersion.
In~\cite{FGTW08} Ferrara et al. provided a lower bound on the minimum degree of any graph $G$ in order to ensure that a given graph $H$ is contained in $G$ as an immersion.
A common drawback of such general results is that they do not provide sharp structural characterizations for concrete instantiations of the excluded graph $H$.
In the particular case of immersion, such structural results are only known when excluding both $K_{5}$ and $K_{3,3}$ as immersions~\cite{GiannopoulouKT15}.
In this paper, we prove a structural characterization of the graphs that exclude $W_{4}$ as an immersion and show that they can be constructed from graphs that are either subcubic or have treewidth bounded by a constant. We denote by $W_4$ the wheel with~4 spokes, i.e., the graph obtained from a cycle on~4 vertices by adding a universal vertex.
The structure of graphs that exclude $W_4$ as a topological minor has been studied by Farr~\cite{Farr88}. He proved that these graphs can be constructed via clique-sums of order at most~3 from graphs of maximum degree at most~3. However, this characterization only applies to simple graphs. In our study we exclude $W_{4}$ as an immersion while allowing multiple edges. Robinson and Farr later extended this result by obtaining similar, albeit more complex, characterizations of graphs that exclude $W_6$ and $W_7$ as a topological minor~\cite{RF09a,RobinsonF14}.
As with the minor relation, many algorithmic results have also started appearing in terms of immersions.
In~\cite{GKMW11}, Grohe et al. gave a cubic time algorithm that decides whether a fixed graph $H$ immerses in any input graph $G$.
This algorithm, combined with the well-quasi-ordering of immersions~\cite{RS10}, implies that the membership of a graph in any graph class that is closed under taking immersions can be decided in cubic time. However, the construction of such an algorithm requires the ad-hoc knowledge of the finite set of excluded immersions that characterizes this graph class (which is called obstruction set). While no general way to compute an obstruction set is known, in~\cite{GiannopoulouSZ14}, Giannopoulou et al. provided sufficient conditions, under which the obstruction set of any graph class that is closed under taking immersions becomes effectively computable.
Another example of explicit construction of immersion obstruction sets is given by Belmonte et al.~\cite{BelmonteHKPT13}, where the set of immersion obstructions is given for graphs of carving-width~3.
Finally, for structural and algorithmic results on immersions in terms of colorings, see~\cite{KawarabayashiK12,Abu-KhzamL03,Lescure1988325,DKMO10}.
Our paper is organized as follows: in Section \ref{sec:preliminaries}, we give necessary definitions and previous results. In Section \ref{sec:invariance}, we show that containment of $W_4$ as an immersion is preserved under 1, 2 and~3-edge-sums. Then, in Section \ref{sec:main}, we provide our main result, i.e., a decomposition theorem for graphs excluding $W_4$ as an immersion. Finally, we conclude with remarks and open problems.
\section{Preliminaries}
\label{sec:preliminaries}
For undefined terminology and notation, we refer to the textbook of Diestel~\cite{Diestel}.
For every integer $n$, we let $[n]=\{1,2,\dots,n\}$.
All graphs we consider are finite, undirected, and without self-loops but may have multiple edges.
Given a graph $G$ we denote by $V(G)$ and $E(G)$
its {\em vertex} and {\em edge set} respectively.
Given a set $F\subseteq E(G)$ (resp. $S\subseteq V(G)$), we denote
by $G\setminus F$ (resp. $G\setminus S$) the graph obtained from $G$ if we remove the edges in $F$ (resp. the vertices in $S$ along with their incident edges).
We denote by ${\cal C}(G)$ the set of the {\em connected components} of $G$.
Given two vertices $u,v\in V(G)$, we also use the notation $G- v=G\setminus \{v\}$ and the notation $uv$ for the edge $\{u,v\}$.
The {\em neighborhood} of a vertex $v\in V(G)$, denoted by $N_{G}(v)$, is the set of vertices in $G$ that are adjacent to $v$. We denote by $E_{G}(v)$ the set of the edges of $G$ that are incident with $v$.
The {\em degree} of a vertex $v\in V(G)$, denoted by $\deg_{G}(v)$,
is the number of edges that are incident with it, that is, $\deg_{G}(v)=|E_{G}(v)|$.
Notice that, as we are working with multigraphs, $|N_{G}(v)|\leq \deg_{G}(v)$.
The degree of a set $S$, denoted by $\partial(S)$, is the number of edges between $S$ and $V(G) \setminus S$, that is $|\{uv\in E(G) \mid u\in S \wedge v \not\in S\}|$.
Given two vertices $u$ and $v$ with $u\in N(v)$ we say that $u$ is an $i$-neighbor of $v$ if $E(G)$ contains exactly $i$ copies of the edge $\{u,v\}$.
Let $P$ be a path and $v,u\in V(P)$. We denote by $P[v,u]$ the subpath of $P$ with endpoints $v$ and $u$.
The {\em maximum degree} of a graph $G$, denoted by $\Delta(G)$, is the maximum of the degrees of the vertices of $G$, that is, $\Delta(G)=\max_{v\in V(G)}\deg_{G}(v)$.
We denote by $W_{k-1}$ the {\em wheel} on $k$ vertices, that is, the graph obtained from the cycle of length $k-1$ after adding a new vertex and making it adjacent to all of its vertices. We call the new vertex {\em center} of the wheel.
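In particular, the wheel $W_{4}$ that is central to this paper has five vertices and eight edges: its center has degree~4 and each of the four cycle vertices has degree~3. These degree values are used repeatedly in the arguments below.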
\begin{definition}
An immersion of $H$ in $G$ is a function $\alpha$ with domain $V(H) \cup E(H)$, such that:
\begin{itemize}
\item $\alpha(v) \in V(G)$ for all $v \in V(H)$, and $\alpha(u)\neq\alpha(v)$ for all distinct $u,v \in V(H)$;
\item for each edge $e=uv$ of $H$,
$\alpha(e)$ is a path of $G$ with ends $\alpha(u)$ and $\alpha(v)$;
\item for all distinct $e,f \in E(H), E(\alpha(e) \cap \alpha(f))=\emptyset$.
\end{itemize}
\end{definition}
\noindent We call the image of every such function $\alpha$ in $G$ {\em model} of the graph $H$ in $G$ and the vertices of the set $\alpha(V(H))$ {\em branch} vertices of $\alpha$.
An {\em edge cut} in a graph $G$ is a non-empty set $F$ of edges that belong to the same connected component of $G$ and such that $G \setminus F$ has more connected components than $G$.
If $G \setminus F$ has one more connected component than $G$ and no proper subset of $F$ is an edge cut of $G$, then we say that $F$ is a {\em minimal} edge cut.
Given a vertex set $S$ such that $G[S]$ and $G \setminus S$ are connected, we denote by $(S,G \setminus S)$ the cut between $S$ and $G \setminus S$.
Let $F$ be an edge cut of a graph $G$ and let $C$ be the connected component of $G$ containing the edges of $F$. We say that $F$ is an {\em internal} edge cut if it is minimal and both connected components of $C \setminus F$ contain
at least 2 vertices.
An edge cut is also called {\em $i$-edge cut} if it has order at most $i$.
\begin{definition}
Let $G$, $G_1$, and $G_2$ be graphs. Let $t \geq 1$ be a positive integer. The graph $G$ is a $t$-edge-sum of $G_1$ and $G_2$ if the following holds.
There exist vertices $v_i \in V(G_i)$ such that $|E_{G_{i}}(v_i)|=t$ for $i\in[2]$ and a bijection $\pi: E_{G_{1}}(v_{1}) \rightarrow E_{G_{2}}(v_2)$ such that $G$ is obtained from $(G_{1} - v_{1}) \cup (G_{2} - v_{2})$ by adding an edge from $x\in V(G_1)-v_1$ to $y\in V(G_2)-v_2$
for every pair of edges $e_{1}$ and $e_{2}$ such that $e_{1}=xv_{1}$, $e_{2}=yv_{2}$, and $e_{2} = \pi(e_{1})$.
We say that the edge-sum is internal if both $G_1$ and $G_2$ contain at least 2 vertices, and we denote the internal $t$-edge-sum of $G_1$ and $G_2$ by $G_1 \hat{\oplus}_t G_2$.
\end{definition}
Note that if $G$ is the $t$-edge-sum of graphs $G_1$ and $G_2$ for some $t \geq 0$, then the set of edges $\{\{u,v\} \in E(G) \mid u \in V(G_1), v \in V(G_2)\}$ forms a minimal edge cut of $G$ of order $t$.\\
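For example, the triangular prism arises as an internal 3-edge-sum of two copies of $K_{4}$: deleting one vertex from each copy leaves two triangles, and the bijection $\pi$ between the two triples of edges incident with the deleted vertices produces a perfect matching of order~3 between the triangles, which is the resulting minimal edge cut.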
Let $r$ be a positive integer. The {\em $(r,r)$-grid} is the graph with vertex set $\{(i,j)\mid i,j\in [r]\}$ and edge set $\{\{(i,j),(i',j')\}\mid |i-i'| + |j-j'|=1\}$.
The {\em (elementary) wall} of height $r$ is the graph $W_r$ with vertex set $V(W_r) = \{(i,j) \mid i\in [r+1], j \in [2r+2]\}$ in which we make two vertices $(i,j)$ and $(i',j')$ adjacent if and only if either $i=i'$ and $j' \in \{j-1,j+1\}$ or $j'=j$ and $i'=i+(-1)^{i+j}$, and then remove all vertices of degree~1; see Figure~\ref{f-wall} for some examples.
The vertices of this vertex set are called {\em original} vertices of the wall.
A {\em subdivided wall} of height $r$ is the graph obtained from $W_{r}$ after replacing some of its edges by internally vertex-disjoint paths.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.9]{wall_elementary2.pdf}
\caption{Elementary walls of height 2, 3, and 4.}\label{f-wall}
\end{center}
\end{figure}
Let $r$ be a positive integer and notice that the wall of height $r$ is contained in the $((2r+2)\times(2r+2))$-grid as a subgraph. This implies that any graph containing the $((2r+2)\times (2r+2))$-grid as a minor also contains the wall of height $r$ as a minor.
Furthermore, from a folklore result, for any simple graph $H$ with $\Delta(H)\leq 3$ it holds that $H$ is a minor of a graph $G$ if and only if $H$ is a topological minor of $G$.
\begin{theorem}\cite{LeafS12}
\label{thm:big-wallgen}
Let $G$ and $H$ be two graphs, where $H$ is connected and simple, not a tree, and has $h$ vertices. Let also $g$ be a positive integer.
If $G$ has treewidth greater than $3(8h(h-2)(2g+h)(2g+1))^{|E(H)|-|V(H)|}+\frac{3h}{2}$ then $G$ contains either the $(g\times g)$-grid or $H$ as a minor.
\end{theorem}
Theorem~\ref{thm:big-wallgen}, in the case where $g=2r+2$ and $H$ is the wall of height $r$, can be restated
as the well known fact that large treewidth ensures the existence of a large wall as a topological minor:
\begin{theorem}\cite{LeafS12}
\label{thm:big-wall}
Let $G$ be a graph and $r\geq 2$ be an integer. If the treewidth of $G$ is greater than $2^{18r^{2}\log r}$ then $G$ contains the wall of height $r$ as a topological minor.
\end{theorem}
We would like to note here that, regarding the dependence between the treewidth of a graph $G$ and the height of the wall that it contains as a topological minor, very recently, the following theorem was shown:
\begin{theorem}\cite{Chuzhoy15}
\label{thm:chuzhoywall}
There exists a function $f$ such that for any integer $r\geq 1$, any graph of treewidth at least $f(r)$ contains the wall of height $r$ as a topological minor,
where $f(r)=O(r^{36}\text{poly}\log{r})$.
\end{theorem}
However, even though Theorem~\ref{thm:chuzhoywall} proves a tighter dependence between the treewidth of a graph and the height of the contained wall, we will use
Theorem~\ref{thm:big-wall} instead of Theorem~\ref{thm:chuzhoywall}, as it allows us to extract specific constants for fixed values of $r$.
\section{Invariance of $W_4$ containment under small edge-sums}
\label{sec:invariance}
In this section, we show that immersion of $W_{4}$ is completely preserved under edge-sums of order at most~3, i.e., that $W_{4}$ immerses in a graph $G$ if and only if it immerses in at least one of the graphs obtained by decomposing $G$ along edge-sums. Theorem \ref{thm:immersion-preserved} will be necessary in Section~\ref{sec:main} to ensure that our decomposition does not change whether the graphs considered contain $W_4$ as an immersion or not. We first prove the following general lemma.
\begin{lemma}
\label{lem:immersion-preserved}
If $G$, $G_{1}$, and $G_{2}$ are graphs such that $G=G_{1}\hat{\oplus}_{t}G_{2}$, $t\in [3]$, then both $G_{1}$ and $G_{2}$ are immersed in $G$.
\end{lemma}
\begin{proof}
Notice that it is enough to prove that $G_{1}$ is an immersion of $G$. Let $v_1$ and $v_2$ denote the unique vertex of $V(G_1) \setminus V(G)$ and $V(G_2) \setminus V(G)$ respectively. In the case where $G=G_{1}\hat{\oplus}_{1}G_{2}$, let $u_{i}$ be the unique neighbor of $v_{i}$ in $G_{i}$, $i\in [2]$. Then
the function $\{(v,v)\mid v\in V(G_{1}-v_{1})\}\cup \{(v_{1},u_{2})\}$ is an isomorphism from $G_{1}$ to the graph
$G\setminus (V(G_{2}-v_{2})\setminus \{u_{2}\})$ (by the definition of the edge-sum, $u_{1}u_{2}\in E(G)$), which is a subgraph of $G$. Therefore, $G_{1}\subseteq G$ and thus $G_{1}$ also immerses in $G$.
We now assume that $G=G_{1}\hat{\oplus}_{t}G_{2}$, $t=2,3$. Let $e_{j}$, $j\in [|E_{G_{1}}(v_{1})|]$, be the edges of $E_{G_{1}}(v_{1})$ and let $u_{j}$ be the (not necessarily distinct) endpoints of the edges $e_{j}$, $j\in [|E_{G_{1}}(v_{1})|]$, in $G_{1}-v_{1}$.
Notice that in both cases, in order to obtain $G_{1}$ as an immersion of $G$, it is enough to find a vertex $u$ in $V(G)\setminus V(G_{1})$ and for each edge $e_{j}$ of $E_{G_{1}}(v_{1})$ find a path $P_{j}$ from $u$ to $e_{j}$ in $E(G)\setminus E(G_{1})$ such that these paths are edge-disjoint.
In what follows we find such vertex and paths. We distinguish the following cases.\\
\noindent {\em Case 1.} $N_{G_{2}}(v_{2})=\{y\}$. Then, by the definition of the edge-sum, $G$ contains the edges $yu_{j}$, $j\in [|E_{G_{1}}(v_{1})|]$.
Notice that neither the vertex $y$ belongs to $V(G_{1})$ nor the edges $yu_{j}$, $j\in [|E_{G_{1}}(v_{1})|]$, belong to $E(G_{1})$, and therefore the claim
holds for $u=y$.\\
\noindent {\em Case 2.} $N_{G_{2}}(v_{2})=\{x,y\}$. First notice that in the case where $G=G_{1}\hat{\oplus}_{3}G_{2}$ one of the $x,y$, say $x$, is a
$2$-neighbor of $v_{2}$. As the edge-sum is internal, the set $E=E(G)\setminus (E(G_{1})\cup E(G_{2}))$ of edges created after the edge-sum is a minimal separator of $G$.
Without loss of generality let $yu_{1}$, $xu_{2}$, and (in the case where $G=G_{1}\hat{\oplus}_{3}G_{2}$) $xu_{3}$ be its edges.
By the minimality of the separator $E$, $G_{2}-v_{2}$ is connected. Therefore there exists a $(x,y)$-path $P$ in $G_{2}-v_{2}$. Observe that the path
$P\cup \{yu_{1}\}$, the path consisting only of the edge $xu_{2}$, and (in the case where $G=G_{1}\hat{\oplus}_{3}G_{2}$) the path consisting
only of the edge $xu_{3}$ are edge-disjoint paths which do not contain any edge from $E(G_{1})$ and share $x$ as a common endpoint. Then
the claim holds for $u=x$.\\
\noindent {\em Case 3.} $N_{G_{2}}(v_{2})=\{x,y,z\}$. In this case, it holds that $G=G_{1}\hat{\oplus}_{3}G_{2}$. As above, consider the set
$E=E(G)\setminus (E(G_{1})\cup E(G_{2}))$ of the edges created by the edge-sum and
without loss of generality, let $E=\{xu_{1},yu_{2},zu_{3}\}$.
Since $E$ is a minimal separator, the graph $G_{2}-v_{2}$ is connected.
Therefore, there are a $(x,y)$-path $P$ and a $(y,z)$-path $Q$ in $G_{2}-v_{2}$. Let $z'$ be the vertex in $V(P)\cap V(Q)$ such that $V(Q[z,z'])\cap V(P)=\{z'\}$
and consider the paths $Q[z,z']$, $P[x,z']$, and $P[z',y]$ (in the case where $z'=y$ the path $P[z',y]$ is the graph consisting of only one vertex).
Observe that these graphs are edge-disjoint. Therefore the paths $P[x,z']\cup \{xu_{1}\}$, $P[y,z']\cup\{yu_{2}\}$, and $Q[z,z']\cup\{zu_{3}\}$ are edge-disjoint,
do not contain any edge from $E(G_{1})$, and share the vertex $z'$ as an endpoint. Thus, the claim holds for $u=z'$.
It then follows that $G_{1}$ is an immersion of $G$ and this completes the proof of the lemma.
\end{proof}
\begin{theorem}
\label{thm:immersion-preserved}
Let $G$, $G_{1}$, and $G_{2}$ be graphs such that $G = G_1 \hat\oplus_t G_2$, with $t \in [3]$.
Then, $G$ contains $W_{4}$ as an immersion if and only if $G_{1}$ or $G_{2}$ does as well.
\end{theorem}
\begin{proof}
If $G_{1}$ or $G_{2}$ contains $W_{4}$ as an immersion, then $G$ does as well due to Lemma \ref{lem:immersion-preserved}.
It remains to prove the converse direction.
Let $\alpha$ be an immersion of $W_4$ in $G$. We first prove that either $|\alpha(V(W_4)) \cap (V(G_1)-v_1)| \geq 4$,
or $|\alpha(V(W_4)) \cap (V(G_2)-v_2)| \geq 4$. Indeed, this is due to the fact that any cut $(S,G\setminus S)$ of $W_4$
with $|S|=3$ has order at least~4, whereas the cut $F=E(G)\setminus (E(G_{1})\cup E(G_{2}))$ in $G$ between
$V(G_1)-v_1$ and $V(G_2)-v_2$ has order at most~3.
Moreover, the same argument implies that the image of the center of $W_{4}$, that is, the unique vertex of degree~4 of $W_4$, say $x_{0}$, belongs to
the connected component of $G-F$ that contains at least~4 of the branch vertices of the immersion $\alpha$. Let us assume without loss of generality that $x_0 \in V(G_1)-v_1$.
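To verify the bound on the cuts of $W_4$ used above: if $S$ consists of the center and two cycle vertices, then the cut $(S,W_4\setminus S)$ contains the two spokes leaving $S$ together with at least two cycle edges, and if $S$ consists of three cycle vertices, then it contains three spokes and two cycle edges; in both cases the order is at least~4.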
Assume first that $|\alpha(V(W_4)) \cap (V(G_1)-v_1)| = 5$.
If for every edge $e$ of $W_{4}$ it holds that $\alpha(e)\cap V(G_{2}-v_{2})=\emptyset$, then clearly $\alpha$ is an immersion of $W_4$ in $G_1-v_1$, and therefore in $G_1$. Moreover, it is easy to observe that there cannot be two distinct
edges $e,e'$ of $W_4$ whose image path in $G$ contains vertices of $G_2-v_2$, since each such path must contain at least~2 edges of $F$, and $|F| \leq 3$.
Hence we may assume that there exists a unique edge $e$ with $\alpha(e) \cap V(G_2-v_2) \neq \emptyset$. Note that $\alpha(e)$ must intersect the
cut $F$ in an even number of edges, since otherwise the path would end in $G_2-v_2$, contradicting our assumption that all branch vertices of
$\alpha$ lie in $G_1-v_1$. Let $P$ be a maximal subpath of $\alpha(e)$ such that $E(P)\cap E(G_{1}-v_{1})=\emptyset$.
Notice that the first and the last edge of such a path are edges of $F$. Let $u_{1}$ and $u_{2}$ be the endpoints of $P$. This implies that we may obtain an immersion $\alpha'$ of $W_{4}$ in $G_{1}$ by replacing in $\alpha$ the path $P$ by the path $u_{1}v_{1}u_{2}$.
Now, we assume that $|\alpha(V(W_4)) \cap (V(G_1)-v_1)| = 4$, and denote by $x$ the unique branch vertex of $\alpha$ lying in $V(G_2-v_2)$.
We claim that it is possible to create an immersion function $\alpha'$ of $W_4$ in $G_1$ by replacing the vertex $x$ in $\alpha$ with $v_1$.
To show this, we apply the following operations to $G$: let $P_1,P_2,P_3$ be the paths of $\alpha$ whose associated edges in $W_4$ are incident
with $\alpha^{-1}(x)$, and let $P'_1,P'_2,P'_3$ be
the subpaths of $P_1,P_2$, and $P_3$
that do not contain edges of $G_1-v_1$. The paths $P'_1,P'_2,P'_3$ are easily observed to be edge-disjoint, and
therefore we may lift the edges in each of these paths. We complete the construction by deleting the vertices in $V(G_2)\setminus\{v_2,x\}$. The graph obtained
from this construction is readily observed to be isomorphic to $G_1$ by mapping every vertex of $G_1-v_1$ to itself, and $v_1$ to $x$. Therefore
$W_4$ immerses in $G_1$. This concludes the proof of the theorem.
\end{proof}
\section{Structure of graphs excluding $W_4$ as an immersion}
\label{sec:main}
In this section, we prove the main result of our paper, namely we provide a structure theorem for graphs that exclude $W_4$ as an immersion. We first provide a technical lemma that will be crucial for the proof of Theorem \ref{thm:main}.
\begin{lemma}
\label{lem:big-lemma}
There exists a function $f$ such that for every integer $r \geq 60000$ and every graph $G$
that does not contain $W_4$ as an immersion, has no internal 3-edge cut, and has a vertex $u$ with $d(u) \geq 4$,
if $tw(G) \geq f(r)$, then there exist vertex sets $S_1,\ldots,S_r$, $Z=\{z_1,\ldots,z_r\}$, and $X$ of $G$, that satisfy the following properties:
\begin{enumerate}[(i)]
\item $z_i \in S_{i}, \forall i \in \{1,\ldots,r\}$;
\item $z_i \not\in S_j, \forall i \neq j \in \{1,\ldots,r\}$;
\item $u \in \bigcap_{i \in \{1,\ldots,r\}}S_i$;
\item $\partial(S_i) \leq 6$;
\item $G[S_i]$ is connected, $\forall i \in \{1,\ldots,r\}$;
\item $X \cap S_i = \emptyset, \forall i \in \{1,\ldots,r\}$;
\item For every $Z' \subseteq Z$ such that $|Z'| \geq 7$, there is a 7-flow from $Z'$ to $X$;
\end{enumerate}
\end{lemma}
\begin{proof}
Assume that $G$ has treewidth at least $2^{18(6r)^{2}\log (6r)}$. Then, from Theorem~\ref{thm:big-wall}, $G-u$ contains an elementary wall of height $6r$
as a topological minor and hence a subdivided wall $W$ of height $6r$ as a subgraph.
We define the cycles $C_1,\ldots,C_{6r}$ as the ones formed by the (original) vertices $w_{5+20p,3+2q}$ to
$w_{11+20p,3+2q}$ and $w_{11+20p,4+2q}$ to $w_{5+20p,4+2q}$
and the internally vertex-disjoint paths that join them on the wall $W$, for every $p,q \in \{0,\ldots,\lceil\sqrt{6r}\rceil-1\}$.
Observe that $C_1,\ldots,C_{6r}$ is a set of vertex-disjoint cycles in $G-u$, each containing at least~14 vertices.
For every $i \in [6r]$, we denote by $G_{C_i}$ the graph obtained from $G$ by removing the edges of $C_i$ and adding a vertex $v_i$ adjacent exactly to the vertices of $C_i$.
Since $W_4$ does not immerse in $G$, there exists an edge cut $F_i$ of $G_{C_i}$ of order at most~3 that separates $u$ and $v_i$,
as otherwise we would be able to find 4 edge-disjoint paths from $u$ to the vertices of the cycle $C_{i}$ and thus an immersion model of $W_{4}$ in $G$.
Moreover, since both $u$ and $v_i$ have degree at least~4, this edge cut is internal.
We now define the set $T_i$, for every $i \in [6r]$, as the set of vertices that lie in the same connected component of $G_{C_i}-F_i$ as $u$.
\begin{claimm}\label{claim:intersection}
For every $i \in \{1,\ldots,6r\}$, $1 \leq |T_i \cap V(C_i)| \leq 3$.
\end{claimm}
\noindent
\textit{Proof of Claim \ref{claim:intersection}.}
The fact that $|T_i \cap V(C_i)| \geq 1$ follows from the observation that if $T_i \cap V(C_i) = \emptyset$, then the cut $F_i$ is not only a cut in $G_{C_i}$, but also in $G$, which contradicts the assumption that $G$ is internally~4-edge-connected.
On the other hand, observe that for every vertex $w \in V(C_i) \cap T_i$, the edge $v_{i}w$ must belong to the cut $F_i$, as otherwise there is a path joining
$u$ and $v_{i}$ in $G-F_{i}$, a contradiction.
Therefore no more than~3 vertices of $C_i$ may lie in $T_i$, which concludes the proof of the claim.
\hfill $\diamond$\\
We now define the set $Z=\{z_1,\ldots,z_{6r}\}$: for every $i \in [6r]$, we choose arbitrarily one vertex of $T_i \cap V(C_i)$ to be the vertex $z_i$. The existence of the vertices $z_i$ follows from Claim~\ref{claim:intersection}. Observe that, by construction of $Z$, it holds that $z_i \in T_i, \forall i \in [6r]$, i.e., the sets $T_i$ satisfy property (i).
Observe that, by construction, $G[T_i]$ is connected.
Moreover, the only edges of $G$ that are not edges of $G_{C_i}$ are the edges of the cycle $C_i$.
Thus, the only edges in the cut $(T_i, G\setminus T_i)$ of $G$ that are not edges of the cut $F_i$ in $G_{C_i}$ are the edges of $C_i$ incident with the vertices of $T_i \cap V(C_i)$.
Furthermore, for every vertex $w$ of $T_i \cap V(C_i)$, the edge $wv_i$ belongs to the cut $F_i$ in $G_{C_i}$, but not to the cut $(T_i, G\setminus T_i)$ in $G$.
Hence, the number of edges of the cut $(T_i, G\setminus T_i)$ in $G$ is at most $|F_i| + 2|T_i \cap V(C_i)| - |T_i \cap V(C_i)|$.
Since $F_i$ and $T_i \cap V(C_i)$ both have order at most~3, it follows that the cut $(T_i, G\setminus T_i)$ in $G$ has order at most~6.
We have therefore proved that properties (iii)-(v) hold for the sets $T_i, i \in [6r]$.
We may now define the set $X$. We start with the set of (original) vertices $w_{p,q}$ of the wall, with $6r+1 - 36(\lceil\sqrt{6r}\rceil+1) \leq p \leq 6r+1 + 36(\lceil\sqrt{6r}\rceil+1)$ and $6r+1 - 73(\lceil\sqrt{6r}\rceil+1) \leq q \leq 6r+1 - (\lceil\sqrt{6r}\rceil+1)$. This set, denoted $X_0$, contains at least $72(\sqrt{6r}+1)^2 \geq 432r+73$ original vertices of the wall, since $r \geq 1$. We now need the following:
\begin{claimm}\label{claim:overlap}
For every $i \in [6r]$, $|X_0 \cap T_i| \leq 72$.
\end{claimm}
\noindent
\textit{Proof of Claim \ref{claim:overlap}.}
We prove the claim by showing that for every $i \in [6r]$ and every subset $X'_0$ of $X_0$ that contains at least~73 vertices, there are~7 disjoint paths from vertices of $C_i \setminus T_i$ to vertices of $X'_0$ in $G$. Together with property (iv), this implies the validity of Claim \ref{claim:overlap}.
Consider a subset $X'_0$ of $X_0$ that contains at least~73 original vertices of the wall. Observe that there must be either 13 vertices that lie on the same horizontal path, or 7 vertices that lie on different horizontal paths (indeed, $73 > 6 \cdot 12$). From there, taking into account the dimensions of the wall and the positions of the vertices of $C_i$ and $X_0$, it is easy to observe that there always exist vertices $y_1,\ldots,y_7$ in $C_i \setminus T_i$ and $x_1,\ldots,x_7$ in $X'_0$ such that there are~7 disjoint paths between $y_1,\ldots,y_7$ and $x_1,\ldots,x_7$.
\hfill$\diamond$\\
Therefore, the set $X_0 \cap \bigcup_{i \in [6r]} T_i$ contains at most $432r$ vertices, which implies that there exists a subset $X$ of $X_0$ containing at least $73$ vertices such that $X \cap T_i = \emptyset$ for every $i \in [6r]$. This proves property (vi) for the sets $T_i, i \in [6r]$.
The validity of property (vii) follows from arguments similar to those given in the proof of Claim \ref{claim:overlap}.
Finally, we show how to select sets $S_1,\ldots,S_r$ among $T_1,\ldots,T_{6r}$ so that property (ii) holds, namely that for every $1 \leq i \neq j \leq r, z_i \not\in S_j$.
In order to find such sets, we proceed as follows: let $H$ be a directed graph such that $V(H)=\{T_1,\ldots,T_{6r}\}$, and $(T_i,T_j)$ is an arc of $H$ if and only if $z_i \in T_j$. We now claim that the vertices of $H$ have indegree at most~6, which is shown by combining properties (iv), (vi), and (vii). Assume for contradiction that there is a vertex of $H$ having indegree at least~7. Then there exist distinct indices $i_1,\ldots,i_7$ and $j$ such that $z_{i_1},\ldots,z_{i_7} \in T_j$. However, by property (vii) there exist~7 disjoint paths from $\{z_{i_1},\ldots,z_{i_7}\}$ to $X$. Together with property (vi), this yields a contradiction with property (iv). Therefore, the directed graph $H$ has maximum indegree at most~6. Thus, $|E(H)| \leq 36r$, which implies that the average degree of $H$ is at most~6. Hence, $H$ is~6-degenerate and thus contains an independent set of size at least $\frac{|V(H)|}{6} = r$. The vertices of such an independent set correspond to sets $T_{i_1},\ldots,T_{i_r}$ such that, for every $1 \leq p \neq q \leq r$, $z_{i_p} \not\in T_{i_q}$. Therefore, we choose $S_p := T_{i_p}$ for every $p \in [r]$ and observe that the sets $S_1,\ldots,S_r$ defined in this way indeed satisfy property (ii).
Finally, since every set $T_i$ satisfies properties (i) and (iii)-(vi), and for every $j \in [r]$ there exists $i \in [6r]$ such that $S_j=T_i$, we obtain that the sets $S_i$ satisfy these properties as well.
This concludes the proof of the lemma.
\end{proof}
Lemma \ref{lem:big-lemma} essentially states that large treewidth yields a large number of vertex disjoint cycles that are highly connected to each other, and an additional disjoint set that is highly connected to these cycles. However, this, together with the assumption that $W_4$ does not immerse in $G$, implies that there cannot be a large flow between a vertex of degree at least~4 and one of the cycles. We will combine this fact with the notion of important separators to obtain Lemma~\ref{lem:bounded-treewidth}.
\begin{definition}
Let $X,Y \subseteq V(G)$ be two sets of vertices, let $S \subseteq E(G)$ be an $(X,Y)$-separator, and let $R$ be the set of vertices reachable from $X$ in $G \setminus S$.
We say that $S$ is an important $(X,Y)$-separator if it is inclusion-wise minimal and there is no $(X,Y)$-separator $S'$ with $|S'| \leq |S|$ such that $R \subset R'$, where $R'$ is the set of vertices reachable from $X$ in $G \setminus S'$.
\end{definition}
\begin{theorem}\cite{Marx06,CLL07}
\label{thm:imp-sep}
Let $X,Y \subseteq V(G)$ be two sets of vertices in graph $G$, let $k \geq 0$ be an integer, and let $S_k$ be the set of all $(X,Y)$-important separators of size at most $k$. Then $|S_k| \leq 4^k$ and $S_k$ can be constructed in time $|S_k| \cdot n^{O(1)}$.
\end{theorem}
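For intuition, the definition can be checked directly on small instances. The following Python sketch is our own brute-force illustration, not the branching algorithm of \cite{Marx06,CLL07} behind Theorem~\ref{thm:imp-sep}; the toy multigraph and all identifiers are invented for this example.

\begin{verbatim}
from itertools import combinations

# Toy multigraph: edges are (edge_id, u, v); parallel edges get distinct ids.
EDGES = [(0, 'x', 'a'), (1, 'x', 'a'), (2, 'a', 'b'),
         (3, 'b', 'y'), (4, 'a', 'y')]

def reachable(removed, sources):
    """Vertices reachable from `sources` without using edges in `removed`."""
    seen, stack = set(sources), list(sources)
    while stack:
        v = stack.pop()
        for eid, u, w in EDGES:
            if eid in removed:
                continue
            for p, q in ((u, w), (w, u)):
                if p == v and q not in seen:
                    seen.add(q)
                    stack.append(q)
    return frozenset(seen)

def important_separators(X, Y, k):
    ids = [e[0] for e in EDGES]
    seps = []                     # all (X,Y)-separators of size <= k
    for size in range(k + 1):
        for S in combinations(ids, size):
            R = reachable(set(S), X)
            if not R & set(Y):
                seps.append((frozenset(S), R))
    out = []
    for S, R in seps:
        minimal = all(not T < S for T, _ in seps)
        dominated = any(len(T) <= len(S) and R < R2
                        for T, R2 in seps if T != S)
        if minimal and not dominated:
            out.append(set(S))
    return out

# The unique important ({x},{y})-separator of size <= 3 is {3, 4}: it
# pushes the side reachable from x as far toward y as possible.
print(important_separators({'x'}, {'y'}, 3))
\end{verbatim}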
Theorem \ref{thm:imp-sep} states that the number of important separators of a certain size is bounded. The next lemma combines this fact with Lemma \ref{lem:big-lemma}.
\begin{lemma}
\label{lem:bounded-treewidth}
Let $G$ be a graph such that $G$ does not contain $W_4$ as an immersion, has no internal 3-edge cut and has a vertex $u$ with $d(u) \geq 4$.
Then the treewidth of $G$ is upper bounded by a constant.
\end{lemma}
\begin{proof}
If $G$ has treewidth at least $2^{18(6r)^{2}\log (6r)}$ for $r \geq 60000$, then there exist sets $Z=\{z_1,\ldots,z_r\}$, $S_1,\ldots,S_r$ and $X$ that satisfy the properties of Lemma~\ref{lem:big-lemma}.
Recall that $F$ is an important $(u,X)$-separator if there is no $(u,X)$-separator $F'$ such that $|F'| \leq |F|$ and the connected component of $G-F$ that contains $u$ is properly contained in the connected component of $G-F'$ that contains $u$.
Additionally, observe that for every set $S_i$, there is an important separator $F$ of order at most~6 such that $S_i$ lies in the same connected component as $u$ in $G-F$.
Moreover, for any cut $F$ of order at most~6 such that $S_i$ is contained in the same connected component as $u$ in $G-F$, there cannot be~7 disjoint paths from $u$ to $X$ through $F$.
Combined with property (vii) of Lemma \ref{lem:big-lemma} and the fact that every set $S_i$ contains a vertex $z_i$, this implies that for every important separator $F$, there are at most~6 sets $S_{i_1},\ldots,S_{i_p}, p \leq 6$, that are contained in the same connected component as $u$ in $G-F$.
However, Theorem~\ref{thm:imp-sep} ensures that there are at most $4^6$ important $(u,X)$-separators of size at most~6 in $G$.
Therefore, if $r \geq 60000 > 6 \cdot 4^6$, there is a set $S_i$ such that the cut $(S_i,G-S_i)$ has order at least~7, contradicting property (iv). Thus, a graph with treewidth at least $2^{18(6r)^{2}\log (6r)}$ must either have an internal edge cut of order at most~3, or have no vertex of degree at least~4, or contain $W_4$ as an immersion. Hence the lemma holds.
\end{proof}
We are now ready to prove the main theorem of our paper.
\begin{theorem}
\label{thm:main}
Let $G$ be a graph that does not contain $W_4$ as an immersion. Then the prime graphs of a decomposition of $G$ via $i$-edge-sums, $i\in [3]$, are either subcubic graphs, or have treewidth upper bounded by a constant.
\end{theorem}
\begin{proof}
Let us consider a decomposition of $G$ via $i$-edge-sums, $i \in [3]$, and let $H$ be a prime graph of such a decomposition. Note first that, since $G$ does not contain $W_4$ as an immersion, $H$ does not contain it either, by Theorem~\ref{thm:immersion-preserved}. Now, assume that $H$ is not subcubic. Then there is a vertex $u$ of degree at least~4 in $H$. Moreover, it is clear from Theorem~\ref{thm:immersion-preserved} that $H$ is internally 4-edge-connected. Hence, we may apply Lemma~\ref{lem:bounded-treewidth} and conclude that $H$ has treewidth at most $2^{2^{13}\cdot 3^{6}\cdot 5^{8}\cdot\log (2^{6}\cdot 3^{2}\cdot 5^{4})}$. Thus, the theorem holds.
\end{proof}
We conclude this section by noting that Theorem \ref{thm:main} is in a sense tight: both the restriction to edge-sums of order at most~3 and the requirement of a single vertex of degree at least~4 are necessary. Decomposing along 3-edge-sums is indeed necessary, since there are internally 3-edge-connected graphs that have vertices of degree at least~4 and yet do not contain $W_4$ as an immersion, e.g., a cycle in which every edge is doubled.
\section{Concluding remarks}
Following the proof of Theorem \ref{thm:main}, a first task is to improve the bound on the treewidth of internally 4-edge-connected graphs that exclude $W_4$ as an immersion and have a vertex of degree at least~4.
Our proof of Theorem \ref{thm:main} relies on the fact that large treewidth ensures the existence of a large number of vertex disjoint cycles that are highly connected to each other.
In order to obtain these cycles, we use the fact that graphs of large treewidth contain a large wall as a topological minor. However, the value of treewidth required to find a sufficiently large wall is currently enormous.
Avoiding reliance on the existence of a large wall would be an effective way to drastically reduce the constants in Lemma \ref{lem:big-lemma} and Theorem~\ref{thm:main}.
Another question that we leave open is to prove a similar result for larger wheels, i.e., $W_k$ for $k \geq 5$. Providing a decomposition theorem for larger wheels seems to be a challenging task, as edge-sums no longer seem to be the proper tool: as argued in Section~\ref{sec:main}, $k$-edge-connectivity is necessary, but $W_k$-immersion is not preserved under edge-sums of order $k-1$, as seen in Figures~\ref{fig:ex1} and~\ref{fig:ex2}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.17]{ex-W5-1_all.pdf}
\caption{$G_{1}$ contains $W_{5}$ as an immersion but $G=G_{1}\hat{\oplus}_{4}G_{2}$ does not.
The unique vertex in $G_i$ incident with dotted edges is the vertex $v_i$, and the edge-sum maps to each other edges of $G_1$ and $G_2$ with the same label.}
\label{fig:ex1}
\end{center}
\end{figure}
Decomposition theorems exist when small wheels are excluded as topological minors \cite{Farr88,RF09a,RobinsonF14}; however, these results do not apply when excluding wheels as immersions, since in that case multigraphs must be considered.
A similar important question is to characterize graphs excluding $K_5$ as an immersion.
Finally, note that the general algorithm to test immersion containment runs in cubic time for every fixed target graph $H$. We believe that Theorem \ref{thm:main} can be used to devise efficient algorithms for recognizing graphs that exclude $W_4$ as an immersion.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.23]{ex-W5-2_all.pdf}
\caption{Neither $G_{1}$ nor $G_{2}$ contain $W_{4}$ as immersion but $G=G_{1}\hat{\oplus}_{4}G_{2}$ does.
The unique vertex in $G_i$ incident with dotted edges is the vertex $v_i$, and the edge-sum maps to each other edges of $G_1$ and $G_2$ with the same label.}
\label{fig:ex2}
\end{center}
\end{figure}
\section{Introduction}
The elements heavier than iron are mainly produced via neutron capture reactions \cite{B2FH57, Kappeler11-RMP, Arnould07-PR}. These processes, however, cannot create the so-called $p$-nuclei on the proton-rich side of the
valley of stability. The so-called $\gamma$ process \cite{Rauscher13-RPP} is mainly responsible for the production of these isotopes. The $\gamma$ process occurs in hot, dense astrophysical plasma environments, such as thermonuclear
supernovae \cite{Nishimura18-MNRAS,Travaglio18-AJ} or in core collapse supernovae \cite{Woosley78-AJS,Rauscher02-AJ}. The $\gamma$-process reaction network
involves tens of thousands of reactions on thousands of mainly unstable nuclei, thus the reaction rates have to be predicted in a wide mass and temperature range. For this purpose the Hauser-Feshbach (H-F) statistical model \cite{Hauser52-PR} using global optical model potentials (OMPs) is widely used. While the nucleon-nucleus OMP (N-OMP) is relatively well known, the predicted reaction rates may vary over one order of magnitude depending on the chosen $\alpha$-nucleus OMP (A-OMP) \cite{Kiss13-PRC,Netterdon13-NPA}. Recently, several cross-section measurements have been carried out mostly on proton rich isotopes to test the global A-OMPs
(e.g., \cite{Ozkan07-PRC,Sauerwein11-PRC,Filipescu11-PRC,Netterdon13-NPA,Kiss14-PLB,Simon15-PRC,Yalcin15-PRC,Halasz16-PRC,Scholz16-PLB,Mayer16-PRC,Szucs18-PLB,Korkulu18-PRC,Kiss18-PRC}).
In the present work the cross sections of the \auvii \rag \tli , \auvii \ran \tlnull , and \auvii \rann \tlix\ reactions were measured at energies below the Coulomb barrier, reaching the upper end of the Gamow window for typical temperatures of the $\gamma$ process of $T_9 \approx 2 - 3$ (where $T_9$ is the temperature in $10^9$ K). The new experimental results are compared to the predictions of several open-access global A-OMPs. Although \auvii\ is not in the $p$-nuclei mass range, it is only slightly above the heaviest $p$-nucleus $^{196}$Hg, thus it can help in understanding the systematics of this mass region. Furthermore, experimental studies are
facilitated by the mechanical and chemical properties of gold and by the fact
that gold is mono-isotopic with the only stable isotope \auvii . The
application of the activation technique \cite{Gyurky19-EPJA} is possible because of the reasonable
half-lives of the residual \tlix , \tlnull , and \tli\ nuclei. However,
$\gamma$-ray spectroscopy had to be complemented by X-ray spectroscopy to
cover all reactions under study in the present work, and the X-ray decay
curves had to be followed for a long period to disentangle the contributions of the
different reaction channels.
The paper is organized as follows. In \Sec{sec:react} the reactions under investigation are presented. In \Sec{sec:exp} the experimental details are given, while in \Sec{sec:analy} the data analysis is detailed. The experimental results are summarized in \Sec{sec:res}. In \Sec{sec:theo} the obtained data are compared to statistical model calculations. Finally, in \Sec{sec:sum} a summary is given.
\begin{table*}[t]
\caption{Decay parameters of the reaction products of $\alpha$-induced reactions on $^{197}$Au \cite{NDS199,NDS200,NDS201}.}
\label{tab:param}
\center
\begin{ruledtabular}
\begin{tabular}{l l c c c c c}
\multirow{2}{*}{Reaction} &Reaction & Half-life & X- or $\gamma$-ray& $I_{rel_i}$ : Relative & $M$ : Multiplicator & Absolute \\
& product & (h) & energy (keV) & intensity (\%) & for absolute intensity & intensity (\%) \\
\noalign{\smallskip}\colrule\noalign{\smallskip}
($\alpha$,$\gamma$) & $^{201}$Tl & 73.01\,$\pm$\,0.04& 70.8 & & & 44.6\,$\pm$\,0.6\\
& & & 167.4 & & & 10.00\,$\pm$\,0.06\\
\noalign{\smallskip}\colrule\noalign{\smallskip}
($\alpha$,n) & $^{200}$Tl & 26.1\,$\pm$\,0.1 & 367.9 & 100 & \multirow{4}{*}{0.87\,$\pm$\,0.06} & 87\,$\pm$\,6\\
& & & 579.3 & 15.8\,$\pm$\,0.8 & & 13.7\,$\pm$\,1.2\\
& & & 828.3 & 12.4\,$\pm$\,0.7 & & 10.8\,$\pm$\,1.0\\
& & & 1205.6 & 34.4\,$\pm$\,1.9 & & 30\,$\pm$\,3\\
\noalign{\smallskip}\colrule\noalign{\smallskip}
($\alpha$,2n) & $^{199}$Tl & 7.42\,$\pm$\,0.08 & 158.4 & 40\,$\pm$\,2 & \multirow{4}{*}{0.124\,$\pm$\,0.012}& 5.0\,$\pm$\,0.5\\
& & & 208.2 & 99\,$\pm$\,5 & & 12.3\,$\pm$\,1.3\\
& & & 247.3 & 75\,$\pm$\,4 & & 9.3\,$\pm$\,1.0\\
& & & 455.5 & 100\,$\pm$\,5 & & 12.4\,$\pm$\,1.4\\
\end{tabular}
\end{ruledtabular}
\end{table*}
\section{\label{sec:react}Studied reactions}
Gold has only one stable isotope, $^{197}$Au, therefore the 100\% isotopic purity of the targets is naturally granted. In the energy range investigated here not only radiative $\alpha$ capture occurs, but the capture can also be followed by the emission of one or two neutrons\footnote{Many more reaction channels - involving mostly $\alpha$ emissions - are energetically possible, but they typically have much lower cross sections - because of the Coulomb barrier in the exit channel - than the ones studied in the present work.}. All three reaction channels - detailed below - lead to radioactive nuclei, thus the activation technique can be used to investigate them.
In \tab{tab:param} the decay parameters of the reaction products are summarized.
In the case of the $\gamma$ rays, the absolute intensities are obtained from the relative intensities and a multiplicator, as given in the decay data compilations \cite{NDS199,NDS200,NDS201}. These are indicated in the table and used in the analysis in order to take the systematic uncertainties correctly into account. In the case of the X ray, the absolute intensity per decay is the quantity available in the compilation \cite{NDS201}.
\subsection{$^{197}$Au($\alpha$,$\gamma$)$^{201}$Tl}
There is only one cross-section dataset for this reaction in the literature, by Basunia \textit{et al.} \cite{Basunia07-PRC}, and several derived upper limits \cite{Capurro88-JRNC, Necheva97-ARI, Kulko07-PAN}. Even though Capurro \textit{et al.} \cite{Capurro88-JRNC} published their data as definite values, they can only be considered as upper limits because of some neglected experimental issues, as pointed out by Necheva \textit{et al.} \cite{Necheva97-ARI}. The weak $\gamma$ transition at 167.4~keV of $^{201}$Tl is often buried by the Compton background of the strong 367.9-keV $\gamma$ line from the ($\alpha$,n) reaction product. Owing to this difficulty, the lowest measured cross-section point prior to our work was at $E_\alpha = 17.9$~MeV. With the method described in \Sec{sec:analysis} we were able to measure cross sections two orders of magnitude lower, down~to~14~MeV.
\subsection{$^{197}$Au($\alpha$,n)$^{200}$Tl}
There are many datasets in the literature for this reaction \cite{Kurz71-NPA, Calboreanu82-NPA, Capurro85-JRNC, Bhardwaj86-NIMA, Shah95-Pra, Necheva97-ARI, Ismail98-Pra, Kulko07-PAN, Basunia07-PRC}. Almost all of these works used the stacked-foil technique to measure the reaction cross section at different energies; the only exception is Calboreanu \textit{et al.} \cite{Calboreanu82-NPA}. The energy uncertainty of the reported values is much higher than in this work, where thin single targets were used. All the literature data are in agreement within their error bars; however, either the energy or the cross-section uncertainty is large in the energy range where our investigation has been done. Our new dataset has much higher precision, therefore it provides a better constraint on the theoretical models in this energy region.
\subsection{$^{197}$Au($\alpha$,2n)$^{199}$Tl}
The threshold of the reaction channel with two-neutron emission is at $E_\alpha = 17.1$~MeV. Above this energy the cross section increases rapidly. Most of the previously mentioned studies of the ($\alpha$,n) reaction investigated this reaction channel as well, and there are a few others \cite{Lanzafame70-NPA, Kurz71-NPA, Calboreanu82-NPA, Capurro85-JRNC, Bhardwaj86-NIMA, Shah95-Pra, Necheva97-ARI, Ismail98-Pra, Kulko07-PAN, Basunia07-PRC}. Similarly to the ($\alpha$,n) channel, the literature data are in good agreement within their error bars, but they have limited precision. Our new results are much more precise in the whole investigated energy range, and reach closer to the reaction threshold than ever before.
\section{\label{sec:exp}Experimental details}
\subsection{Targets} Two types of gold targets were used: either gold layers with typical thicknesses between 0.1 and 0.3~$\mu$m evaporated onto thin aluminum foils, or self-supporting gold foils with typical thicknesses of $0.6-0.7$~$\mu$m. The absolute number of target atoms was measured for each target by at least two of the following four independent methods.
Both types of targets were investigated by proton induced X-ray emission (PIXE) technique \cite{Koltay11-HNC}. For this purpose the PIXE setup of MTA Atomki installed on the left 45$^\circ$ beamline of the 5-MV Van de Graaff accelerator \cite{Kertesz10-NIMB} was used. A 2-MeV proton beam with about 1- to 3-nA intensity impinged on the targets. Typical PIXE spectra of the two target types can be seen in the left column of \fig{fig:thickness}. The collected spectra were fitted using the GUPIXWIN program code \cite{Campbell10-NIMB}. The final thickness uncertainty of this method is about 4\% including the fit uncertainty and systematic uncertainties concerning the geometry of the setup and the accuracy of the charge measurement.
Besides the thickness determination, the PIXE method allows trace-impurity identification in the targets. The self-supporting foils contain Ni and Cu at the 200- and 350-ppm levels, respectively. The aluminum backing of the evaporated targets contains Ti, V, Ni, Cu, Zn, and Ga below 50 ppm, and Fe at about 3000 ppm.
The targets were investigated also by Rutherford backscattering spectroscopy (RBS) using the Oxford-type Nuclear Microprobe Facility at MTA Atomki \cite{Huszank15-JRNC}.
In the case of the evaporated targets an $\alpha$ beam of 1.6~MeV, while for the self-supporting targets a proton beam of 2.0~MeV was used, provided by the 5-MV Van de Graaff accelerator. Typical $\alpha$-RBS and proton-RBS spectra are shown in the middle column of \fig{fig:thickness}. The measured RBS spectra were analyzed with the SIMNRA software \cite{SIMNRA}. The uncertainty of the number of target atoms is 3\% from $\alpha$-RBS and 8\% from proton-RBS. The former is mainly the general accuracy of the given RBS system, determined from the measured thickness reproducibility of many standards, and partly the statistical uncertainty of the fit. The higher uncertainty for the proton-RBS is due to the roughness of the samples, causing a worse fit.
As a third thickness determination method in the case of the evaporated targets, weighing was used. The weight of each Al foil was measured before and after the evaporation. The target thicknesses were then calculated from the known surface area of the target and the weight difference. The uncertainty of this method is between 2-4\% depending on the thickness of the samples taking into account the precision of the weight measurement (better than 5~$\mu$g) and the possible evaporation non-uniformity.
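For illustration, the conversion from the measured weight difference to an areal number density is straightforward; the following minimal Python sketch uses invented numbers, chosen only to be of the same order as the thicknesses in \tab{tab:target_and_irrad}:

\begin{verbatim}
N_A = 6.02214076e23      # Avogadro constant (1/mol)
A_AU = 196.96657         # molar mass of gold (g/mol)

def areal_density(delta_m_g, area_cm2):
    """Target thickness in atoms/cm^2 from the weight gain of the backing."""
    return delta_m_g * N_A / (A_AU * area_cm2)

# Hypothetical example: 0.55 mg of gold evaporated onto 2.0 cm^2
print(f"{areal_density(0.55e-3, 2.0):.2e} atoms/cm^2")   # ~8.4e17
\end{verbatim}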
In the case of the self-supporting foils, the energy loss of $\alpha$ particles from a triple-nuclide $\alpha$ source was measured in an ORTEC SOLOIST alpha spectrometer. A typical $\alpha$-energy spectrum is shown in the right panel of \fig{fig:thickness}, where the difference between the peak positions in the calibration and measurement runs gives the energy loss. The total fit plotted in dark blue and light blue is the sum of the fits made for each of the eight $\alpha$ energies from the source (purple and red lines in the figure). Using the known stopping power, the foil thickness was determined with an accuracy of about 8\%, stemming mainly from the stopping-power uncertainty (i.\,e., 7.4\%) and partly from that of the measured energy loss.
For each foil the different methods gave consistent results. In the analysis the weighted average of the thickness values obtained with the various methods were used (see \tab{tab:target_and_irrad}).
\onecolumngrid
\begin{center}
\begin{figure}[h]
\includegraphics[width=0.32\linewidth]{PIXE}
\includegraphics[width=0.32\linewidth]{RBS}
\includegraphics[width=0.32\linewidth]{tripple-alpha}
\caption{\label{fig:thickness} Left column: typical PIXE spectra in the case of a self-supporting (top) and an evaporated (bottom) target. Middle column: a proton-RBS spectrum of a self-supporting target (top) and an $\alpha$-RBS spectrum of an evaporated target (bottom). Right panel: $\alpha$ spectra of a triple-nuclide $\alpha$ source used for the target thickness measurement, where the black and gray points are the measured energy spectra with and without the gold foil, respectively. The energy calibration fit is plotted in light blue, while the energy-loss fit is in dark blue (see text for details).}
\end{figure}
\end{center}
\vspace{1cm}
\twocolumngrid
\subsection{Irradiations} For the irradiations, the MGC-20 type cyclotron of Atomki was used. The $\alpha$ particles entered the activation chamber through a beam-defining aperture, followed by a second aperture supplied with $-300$~V against secondary electrons either escaping from the chamber or emerging from the collimator. The apertures and the chamber were electrically isolated, allowing the beam current to be measured. The typical $\alpha^{++}$-beam current was 1 -- 2.5~$\mu$A. The length of the irradiations was typically 20 -- 34~h.
The beam current was recorded with a multichannel scaler, thus the small variations in the beam intensity could be taken into account in the data analysis.
For the activation analysis described below, we define the effective projectile fluence for each reaction product by the following equation:
\begin{table}[t]
\caption{Target and irradiation parameters. The average target thicknesses and the effective beam fluences for each studied isotope are presented.}
\label{tab:target_and_irrad}
\center
\begin{ruledtabular}
\begin{tabular}{c c c c c }
\multirow{2}{*}{$E_{\alpha}$ (MeV)} & \multirow{2}{*}{Target thickness ($\frac{10^{17}\mathrm{at}}{\mathrm{cm}^2}$)} & \multicolumn{3}{c}{$F_{eff}$ ($10^{17}$)} \\
\noalign{\smallskip}\cline{3-5}\noalign{\smallskip}
& & $^{199}$Tl & $^{200}$Tl & $^{201}$Tl \\
\noalign{\smallskip}\colrule\noalign{\smallskip}
20.0 & \begin{tabular}{r@{\,$\pm$\,}l} 42.2 & 1.5 \\ \end{tabular} & 1.97 & 3.46 & 4.12 \\
19.5 & \begin{tabular}{r@{\,$\pm$\,}l} 18.66 & 0.24\\ \end{tabular} & 1.09 & 2.02 & 2.44 \\
19.0 & \begin{tabular}{r@{\,$\pm$\,}l} 5.28 & 0.11\\ \end{tabular} & 1.12 & 2.28 & 2.88 \\
18.5 & \begin{tabular}{r@{\,$\pm$\,}l} 5.22 & 0.11\\ \end{tabular} & 1.00 & 2.03 & 2.54 \\
18.0 & \begin{tabular}{r@{\,$\pm$\,}l} 7.10 & 0.13\\ \end{tabular} & 1.45 & 2.58 & 3.06 \\
17.5 & \begin{tabular}{r@{\,$\pm$\,}l} 10.27 & 0.16\\ \end{tabular} & 1.53 & 2.75 & 3.29 \\
17.0 & \begin{tabular}{r@{\,$\pm$\,}l} 17.86 & 0.23\\ \end{tabular} & & 1.88 & 2.34 \\
16.0 & \begin{tabular}{r@{\,$\pm$\,}l} 43.2 & 2.6 \\ \end{tabular} & & 1.88 & 2.26 \\
15.0 & \begin{tabular}{r@{\,$\pm$\,}l} 37.1 & 1.2 \\ \end{tabular} & & 3.31 & 4.15 \\
14.0 & \begin{tabular}{r@{\,$\pm$\,}l} 41.2 & 1.4 \\ \end{tabular} & & 3.29 & 3.95 \\
13.7\footnote{Behind energy degrading foil.} & \begin{tabular}{r@{\,$\pm$\,}l} 7.72 & 0.14\\ \end{tabular} & & 3.29 & \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{equation}
\label{eq:flux}
F_{eff_x} = \sum_{i=1}^{n}\left(\phi_i\,e^{-(n-i)\,\lambda_x\,\Delta t} \right),
\end{equation}
where the sum runs over each step of the multichannel scaler, assuming a constant flux ($\phi_i$) within a single time interval of length $\Delta t$ (1~min in this case). $\lambda_x$ is the decay constant of the given isotope (i.\,e., $^{199}$Tl, $^{200}$Tl, or $^{201}$Tl). Typical beam-current and effective-fluence curves as a function of time are shown in \fig{fig:irradiation}, and the final effective fluence for each irradiation is presented in \tab{tab:target_and_irrad}.
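A minimal Python sketch of \eq{eq:flux} is given below; here \texttt{phi} holds the number of projectiles recorded in each $\Delta t$ interval, and all variable names and numbers are ours, chosen only for illustration:

\begin{verbatim}
import math

def effective_fluence(phi, dt_s, half_life_s):
    """Eq. (1): decay-weighted sum of the per-interval particle numbers,
    evaluated at the end of the irradiation."""
    lam = math.log(2.0) / half_life_s
    n = len(phi)
    return sum(p * math.exp(-(n - 1 - i) * lam * dt_s)
               for i, p in enumerate(phi))

# Hypothetical run: 20 h at 3.1e12 alphas/s in 60-s bins,
# weighted with the 200Tl half-life (26.1 h):
phi = [3.1e12 * 60.0] * (20 * 60)
print(f"{effective_fluence(phi, 60.0, 26.1 * 3600.0):.2e}")
\end{verbatim}

For a product whose half-life is long compared to the irradiation, $F_{eff}$ approaches the total number of projectiles, while for short-lived products the early part of the irradiation is strongly suppressed.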
\begin{figure}[t]
\includegraphics[width=0.99\columnwidth]{Irradiation}
\caption{\label{fig:irradiation} A typical beam-current variation is shown by the blue solid line. The time evolution of the effective fluence for the different isotopes is shown by black dashed, red dotted, and green dot-dashed lines for $^{199}$Tl, $^{200}$Tl, and $^{201}$Tl, respectively.}
\end{figure}
\newpage
\subsection{$\gamma$-ray and X-ray detection}
The produced activity was determined by counting the $\gamma$ and/or X rays following the decay of the reaction products (see \tab{tab:param}).
In the case of the X-ray counting only the 70.81-keV K$_{\alpha_1}$ X-ray line was used, because the other strong K$_{\alpha_2}$ line at 68.894~keV has a contribution from the X-ray fluorescence peak of gold at 68.806~keV. These two peaks were not separable in the spectrum.
For the counting, a thin-crystal high-purity germanium detector, a so-called Low Energy Photon Spectrometer (LEPS), was used. The detector was equipped with a homemade quasi-4$\pi$ shielding consisting of layers of copper, cadmium, and lead \cite{Szucs14-AIPConf}.
The detector efficiency calibration was done with $\gamma$ sources of known activity at a counting distance of 10~cm, thus minimizing the true-coincidence summing effect. Since the energies of the decay radiation lie between the energies of the $\gamma$ rays of the calibration sources, only interpolation was necessary. This was done by fitting log-log polynomial functions to the measured efficiency points. Between 50~keV and 350~keV a 5th-order polynomial, while between 250~keV and 1400~keV a 3rd-order function describes well the measured efficiency. In the overlapping region the two functions are in fair agreement, as shown in \fig{fig:eff}.
For the relative efficiency uncertainty, the 1$\sigma$ confidence band of the fits was used.
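The interpolation itself can be sketched in a few lines of Python; the calibration points below are invented placeholders (not our measured efficiencies), and a single 3rd-order fit is used instead of the two overlapping fits described above:

\begin{verbatim}
import numpy as np

# Hypothetical calibration points: (E_gamma in keV, absolute efficiency)
E = np.array([59.5, 88.0, 122.1, 165.9, 279.2,
              391.7, 661.7, 834.8, 1115.5, 1332.5])
eff = np.array([5.1e-3, 6.0e-3, 5.6e-3, 4.4e-3, 2.6e-3,
                1.9e-3, 1.2e-3, 9.5e-4, 7.4e-4, 6.3e-4])

# Fit log(eff) as a polynomial in log(E)
coeffs = np.polyfit(np.log(E), np.log(eff), deg=3)

def efficiency(e_kev):
    """Interpolated absolute full-energy-peak efficiency."""
    return np.exp(np.polyval(coeffs, np.log(e_kev)))

print(efficiency(70.8), efficiency(367.9))   # X ray and strongest gamma
\end{verbatim}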
\begin{figure}[t]
\includegraphics[width=0.99\columnwidth]{Efficiency}
\caption{\label{fig:eff} The measured detector efficiency at 10~cm (black point) and the fit functions (blue and red curves). Vertical lines indicate the energy of the $\gamma$ rays and the X ray used for the analysis.}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=0.99\linewidth]{an_spe}
\caption{\label{fig:an_spe} 1-h long spectrum taken 10~h after the 18.0~MeV irradiation. Insets (a)-(d) show the zoomed regions around the peaks used for the $^{199}$Tl activity determination, while (e)-(h) show those for $^{200}$Tl.}
\end{figure*}
The targets with lower activity were counted at a 3-cm target to detector end-cap distance. At this distance the $\gamma$-ray detection efficiencies were determined by using several targets which were counted in both the 10-cm and 3-cm geometry; from the observed count rates, knowing the half-lives of the products and the time difference of the countings, the efficiency-conversion factors were derived. These factors contain the possible loss due to true-coincidence summing in close geometry. The conversion factors measured with the different sources were consistent, therefore their statistically weighted average was used in the analysis.
The close-geometry efficiency uncertainty contains the uncertainty of the fit and the uncertainty of the conversion factors, and thus ranges from $3-8$\%. The highest values are for the lines of $^{199}$Tl, because they sit on the Compton continua of the slower-decaying lines of $^{200}$Tl, causing a higher statistical uncertainty.
The X-ray detection efficiency in close geometry was determined using the target irradiated with 20.0~MeV. It was measured two times both at the 10-cm and 3-cm geometry. The counting times were optimized so that for the first counting pair the $^{200}$Tl, for the second counting pair the $^{201}$Tl dominated the X-ray peak. This was necessary because of the different summing effects characterizing the two isotopes, which lead to slightly different close geometry efficiency of the X ray with the same energy.
Self-absorption effects could be neglected because of the relatively thin targets used in the present work. The energy of the detected X ray is just below the K absorption edge of gold, thus it experiences a few times higher absorption than the $\gamma$ rays. For the thickest gold target (see Table II) the X-ray self-absorption is less than 0.2\%, assuming an even activity distribution in the target, and thus can be neglected safely.
\section{\label{sec:analy}Data analysis}
\subsection{($\alpha$,2n) and ($\alpha$,n) cross sections}
\begin{figure*}[t]
\includegraphics[width=0.49\linewidth]{a2n_act}
\includegraphics[width=0.49\linewidth]{an_act}
\caption{\label{fig:rel_act_an} Relative activity as a function of time. The dots are the measured values for a given transition. Horizontal solid lines are the average values, while dotted lines indicate the uncertainty of the average (if it is smaller than the line width of the average, then the lines are not shown). The left and right panels are for the transitions of the ($\alpha$,2n) and ($\alpha$,n) reaction products, respectively.}
\end{figure*}
First the activity of the ($\alpha$,n) and above 17.1~MeV also that of the ($\alpha$,2n) reaction products were determined as follows.
The $\gamma$ peaks were fitted with a Gaussian plus linear background in each hourly recorded spectrum (see \fig{fig:an_spe}). If the uncertainty of the peak area from the fit was more than 10\%, then spectra were added together until the fit resulted in a statistical uncertainty lower than 10\%. This spectrum summing was done separately for each of the studied peaks.
The peak areas were then divided by the corresponding waiting and counting time factors, yielding a number related to the activity of the given isotope at the end of the irradiation (hereafter referred to as the relative activity). The statistically weighted average of these individual numbers (see e.\,g. \fig{fig:rel_act_an}) is the finally obtained relative activity ($A_{rel}$), which was calculated with the following equation:
\begin{equation}
\label{eq:rel_act}
A_{rel_x} = \left( \sum_{i=1}^{n} \frac{C_i}{e^{-\lambda_x t_{w_i}} \left( 1 - e^{-\lambda_x t_{c_i}} \right)} W_i \right) \bigg/ \left( \sum_{i=1}^{n} W_i \right),
\end{equation}
where the summation runs up to the last counting that results in a fit uncertainty better than 10\%. $C_i$ is the peak area from the $i$-th counting, $\lambda_x$ is the decay constant of isotope $x$, and $t_{w_i}$ and $t_{c_i}$ are the waiting time and the length of the $i$-th counting, respectively. The weighting factors $W_i$ are the squares of the reciprocal statistical uncertainties, coming mainly from the fitted peak areas and partly from the uncertainty of the counting and waiting factors. The uncertainty of the relative activity is the reciprocal square root of the sum of the weights.
The relative activity determination for the irradiation at 18.0~MeV is plotted in \fig{fig:rel_act_an}.
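A minimal numerical sketch of \eq{eq:rel_act} reads as follows; the peak areas in the example are invented:

\begin{verbatim}
import math

def relative_activity(countings, half_life_h):
    """Eq. (2): inverse-variance weighted mean of the decay- and
    counting-corrected peak areas.
    Each counting is (C, dC, t_wait_h, t_count_h)."""
    lam = math.log(2.0) / half_life_h
    num = den = 0.0
    for C, dC, tw, tc in countings:
        corr = math.exp(-lam * tw) * (1.0 - math.exp(-lam * tc))
        a, da = C / corr, dC / corr
        w = 1.0 / da**2
        num += w * a
        den += w
    return num / den, math.sqrt(1.0 / den)

# Hypothetical peak areas of the 367.9-keV line in three hourly spectra
runs = [(12500, 120, 10.0, 1.0),
        (12100, 118, 11.0, 1.0),
        (11800, 116, 12.0, 1.0)]
print(relative_activity(runs, 26.1))
\end{verbatim}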
The relative activities of a given transition were then divided by the relative intensities (${I_{rel_i}}$) and detection efficiencies ($\eta_i$) of the corresponding $\gamma$ rays. Consistent values were obtained for all the studied transitions, thus the created activity of a given isotope at the end of the irradiation was calculated as follows:
\begin{equation}
A_x = \left(\left( \sum_{i=1}^{n} \frac{A_{rel_i}}{I_{rel_i} \eta_i} w_i \right) \bigg/ \left(\sum_{i=1}^{n} w_i \right) \right) \Bigg/ M,
\end{equation}
where the summation goes over the four transitions of the isotope in question.
The weights are formed from the combined uncertainty of $A_{rel_i}$, ${I_{rel_i}}$ and $\eta_i$. Finally the average is divided by the multiplicator for the absolute intensity $M$.
At the very end the reaction cross sections are obtained by the following equation:
\begin{equation}
\sigma = \frac{A_x}{D F_{eff_x}},
\end{equation}
where $A_x$ is the created activity of isotope $x$ in the sample, $D$ is the target thickness and $F_{eff_x}$ is the effective irradiation fluence as defined in \eq{eq:flux}.
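The final step is then a one-liner; in the sketch below the inputs are invented but chosen to reproduce the order of magnitude of the 18-MeV ($\alpha$,n) point, and, for dimensional simplicity, $A_x$ is taken as the number of product nuclei present at the end of the irradiation:

\begin{verbatim}
def cross_section_mb(A_x, D_atoms_cm2, F_eff):
    """Eq. (4): sigma = A_x / (D * F_eff), converted to millibarn."""
    sigma_cm2 = A_x / (D_atoms_cm2 * F_eff)
    return sigma_cm2 * 1.0e27          # 1 mb = 1e-27 cm^2

# Hypothetical inputs of the right order of magnitude:
print(cross_section_mb(A_x=1.1e9, D_atoms_cm2=7.1e17, F_eff=2.58e17))
# -> about 6 mb
\end{verbatim}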
\subsection{\label{sec:analysis} ($\alpha$,$\gamma$) cross sections}
The $\gamma$ rays from the decay of $^{201}$Tl were only visible at and above 17.5-MeV bombarding energy (see \fig{fig:ag_spe_g}), because of the Compton background of the very intense 367.9-keV $\gamma$ line of the ($\alpha$,n) reaction product and of other parasitic activities created on the trace impurities of the targets. Owing to the common systematic uncertainties of the $\gamma$-ray and X-ray counting methods, the final uncertainty would not decrease by averaging. Therefore, the adopted cross section was derived from the X-ray counting only, as it has much higher precision.
\begin{figure}[t]
\includegraphics[width=0.99\columnwidth]{ag_spe_g}
\caption{\label{fig:ag_spe_g} 101-h long spectrum collected 150~h after the 17.5-MeV irradiation. The inset shows the $\gamma$ peak of $^{201}$Tl.}
\end{figure}
The produced $^{201}$Tl activity from the X-ray counting was determined as follows.
\begin{figure}[b]
\includegraphics[width=0.99\columnwidth]{ag_spe_X}
\caption{\label{fig:ag_spe_X} 11-h long spectrum collected 453~h after the 17.5-MeV irradiation. The inset shows the X-ray peak of $^{201}$Tl, together with the X-ray fluorescence peaks of gold. Comparing this spectrum to the one in \fig{fig:ag_spe_g} the higher sensitivity of the X-ray counting method is clearly seen.}
\end{figure}
The X-ray peak was fitted by a Gaussian plus quadratic background. Similarly to the $\gamma$-peak fits, spectra were added together so that the peak fit resulted in less than 20\% peak-area uncertainty.
Owing to the half-life difference, after about 16 days the ($\alpha$,$\gamma$) reaction product $^{201}$Tl had a large enough contribution to the X-ray peak, and the subtraction of the contribution from the other two reaction products became possible, as discussed below. A typical spectrum used for the X-ray activity determination is shown in \fig{fig:ag_spe_X}, where already more than half of the peak counts are caused by the $^{201}$Tl activity.
The subtraction of the contributions from the other isotopes was done as follows.
Below the ($\alpha$,2n) reaction threshold, the X-ray decay curve recorded in the first days after the irradiation was fitted using the $^{200}$Tl half-life; thus the X-ray relative activity of $^{200}$Tl was determined (see the lower panel of \fig{fig:decay}).
Above the ($\alpha$,2n) reaction threshold, the X-ray decay curve was fitted with the sum of two exponentials with the known half-lives of $^{199}$Tl and $^{200}$Tl, similarly to, e.g., Kiss \textit{et al.} \cite{Kiss18-PRC}. From the fit, the relative X-ray activities of both reaction products were derived. An example of such a fit is shown in the upper panel of \fig{fig:decay}.
In the first days of the countings, the contribution of the ($\alpha$,$\gamma$) reaction product to the X-ray peak is negligible, as calculated using the literature X-ray intensities and the produced activity previously determined via $\gamma$ counting.
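The two-component fit itself can be sketched with scipy as follows; the decay curve below is synthetic, only the two amplitudes are free parameters, and the half-lives are fixed to their literature values:

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

LAM199 = np.log(2.0) / 7.42     # 1/h
LAM200 = np.log(2.0) / 26.1     # 1/h

def xray_rate(t, a199, a200):
    """X-ray count rate as the sum of the 199Tl and 200Tl components."""
    return a199 * np.exp(-LAM199 * t) + a200 * np.exp(-LAM200 * t)

# Synthetic decay curve over the first 60 h after the irradiation
t = np.arange(1.0, 60.0, 1.0)
rng = np.random.default_rng(1)
data = rng.normal(xray_rate(t, 900.0, 400.0),
                  np.sqrt(xray_rate(t, 900.0, 400.0)))

popt, pcov = curve_fit(xray_rate, t, data, p0=(500.0, 500.0),
                       sigma=np.sqrt(np.clip(data, 1.0, None)),
                       absolute_sigma=True)
print(popt, np.sqrt(np.diag(pcov)))     # recovered amplitudes
\end{verbatim}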
\begin{figure}[b]
\includegraphics[width=0.99\columnwidth]{decay}
\caption{\label{fig:decay} Time evolution of the count rate in the X-ray peak (black dots). The upper panel shows the dual-exponential fit after the 19.5-MeV irradiation. Gray, blue, and green solid lines represent the fitted $^{199}$Tl and $^{200}$Tl contributions and their sum, respectively. The lower panel shows a single-exponential fit in the case of the 17.0-MeV measurement, where the blue line is the fitted exponential with the $^{200}$Tl half-life.}
\end{figure}
\begin{figure}[b]
\includegraphics[width=0.99\columnwidth]{subtr}
\caption{\label{fig:subtr} Upper panel shows the time evolution of the count rate in the X-ray peak (black dots) together with the calculated contribution from the ($\alpha$,n) reaction channel (blue line with error bars) in the case of the last 150 hours of counting of the 17.5-MeV sample. The lower panel shows the relative activity from the X-ray measurement of $^{201}$Tl. Red dots are the decay and counting corrected peak areas after subtraction. The subtraction was possible with reasonable accuracy only after 400 hours.}
\end{figure}
The X-ray countings for the $^{201}$Tl activity were done at least 15~days after the irradiations. After such a waiting time the contribution from the ($\alpha$,2n) reaction product was always negligible.
For the subtraction of the X-ray contribution of the ($\alpha$,n) reaction product, the relative-activity ratio of the 367.9-keV $\gamma$ ray and the X ray was used. The ratio was determined from several samples at the actual 3-cm counting geometry. Since the uncertainty of the relative activity from the 367.9-keV $\gamma$ line contains only the uncertainty of the counting statistics, the number to be subtracted is more precise than what would be calculated from the absolute activity, which would contain the uncertainties of the detection efficiencies and X-ray branchings.
After subtracting the ($\alpha$,n) contribution, only those points were used where the relative uncertainty of the remaining peak area was not higher than 50\%. These were then corrected for the decay and the counting time, resulting in the X-ray relative-activity values. The final relative activity is the weighted average over the subsequent countings, similarly to the $\gamma$-ray relative activities (see \fig{fig:subtr}).
The 13.7-MeV point was measured differently from the others. For the 14-MeV irradiation, two targets were placed in the irradiation chamber, separated by a 2.13-$\mu$m thick Al foil. The beam energy at the position of the second target was calculated using SRIM from the known thickness of the first target and of the Al energy-degrader foil. The energy uncertainty of this point is therefore higher.
At this energy the 367.9-keV $\gamma$ line from the ($\alpha$,n) reaction product was not visible during the course of the 1.5-days counting right after the irradiation. Therefore,
the activity was determined from X-ray counting only, using the absolute X-ray branching ratio from the literature (see \tab{tab:param}). The X-ray peak count rate followed the half-life of $^{200}$Tl, thus the peak was considered to be populated only by the ($\alpha$,n) reaction product.
In the case of the self-supporting foils, which have no backing, some activity can be lost when the reaction takes place near the rear of the target layer and the reaction product is not stopped in the remaining part of the foil. A SRIM simulation \cite{srim} was done to estimate this effect. As a starting point of the simulation, Tl nuclei were distributed evenly in the gold foil, and each of them was given a velocity directed toward the rear of the foil, calculated from the reaction kinematics for each irradiation energy.
The simulation showed that about 3\% of the Tl nuclei can leave the gold foils. This loss was taken into account in the determination of the created activity, with a conservative 30\% relative uncertainty.
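The size of this loss can be illustrated with a back-of-the-envelope Monte Carlo; both the foil thickness and, in particular, the assumed straight-line recoil range below are placeholder values, whereas the real estimate uses the full SRIM transport:

\begin{verbatim}
import random

D_FOIL = 0.65    # foil thickness (um), typical self-supporting target
R_TL = 0.02      # assumed straight-line Tl recoil range in gold (um)

def escape_fraction(n=100_000):
    """Recoils start at a uniform depth and travel straight toward the
    rear of the foil; those born within one range of the rear escape."""
    lost = sum(1 for _ in range(n)
               if random.uniform(0.0, D_FOIL) > D_FOIL - R_TL)
    return lost / n

print(f"escaped: {escape_fraction():.1%}")   # ~3% for these inputs
\end{verbatim}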
The effective energy in each case is taken at the middle of the target; the energy loss is calculated with SRIM \cite{srim}.
The energy uncertainty is determined by the initial beam-energy uncertainty of 0.3\%. The effect of the target thickness and energy loss on the beam-energy uncertainty is $0.005-0.05$\% and is thus neglected.
Besides the statistical uncertainties propagated through the averaging as discussed before, each data point contains the respective target-thickness uncertainty as quoted in \tab{tab:target_and_irrad} and, in the case of the self-supporting targets, the uncertainty of the Tl loss. As systematic uncertainties, the absolute branching (7-10\%), absolute detection efficiency (3\%), and beam current (3\%) uncertainties were added quadratically to obtain the finally quoted uncertainties.
The absolute detection efficiency uncertainty accounts for the uncertainty of the absolute activity of the calibration sources and that of the counting distance reproducibility.
\section{\label{sec:res} Experimental results}
\subsection{\label{sec:branching} X-ray intensities}
\begin{table}[b]
\caption{Relative intensity of the K$_{\alpha_1}$ X rays to the strongest $\gamma$ ray (marked with 100 in \tab{tab:param}) for several runs. In the case of $^{201}$Tl the statistics of the $\gamma$ ray were sufficient for the analysis only at 20~MeV. The uncertainty of the averaged value includes the relative detection-efficiency uncertainty.}
\label{tab:X_branch}
\center
\begin{ruledtabular}
\begin{tabular}{l c c c }
E$_{\alpha}$ (MeV) & \multicolumn{3}{c}{Relative X-ray intensity (\%)} \\
\noalign{\smallskip}\cline{2-4}\noalign{\smallskip}
& $^{199}$Tl & $^{200}$Tl & $^{201}$Tl \\
\noalign{\smallskip}\colrule\noalign{\smallskip}
20.0 & \begin{tabular}{r@{\,$\pm$\,}l} 322 & 15 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 43.5 & 0.2 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 405 & 40 \\ \end{tabular} \\
19.5 & \begin{tabular}{r@{\,$\pm$\,}l} 346 & 7 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 43.2 & 0.4 \\ \end{tabular} & \\
19.0 & \begin{tabular}{r@{\,$\pm$\,}l} 347 & 8 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 43.8 & 0.3 \\ \end{tabular} & \\
17.0 & & \begin{tabular}{r@{\,$\pm$\,}l} 43.2 & 0.8 \\ \end{tabular} & \\
\noalign{\smallskip}\colrule\noalign{\smallskip}
Average & \begin{tabular}{r@{\,$\pm$\,}l} 344 & 6 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 43.5 & 0.5 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 405 & 41 \\ \end{tabular} \\
\noalign{\smallskip}\colrule\noalign{\smallskip}
Ref.\cite{Debertin79-ARI} & & & \begin{tabular}{r@{\,$\pm$\,}l} 446 & 12 \\ \end{tabular}
\end{tabular}
\end{ruledtabular}
\end{table}
The X-ray relative activity of all the created isotopes was determined for several samples. Using the $\gamma$-ray relative activities and the X- and $\gamma$-ray detection efficiencies, the relative X-ray branching ratios were determined.
For each isotope, \tab{tab:X_branch} presents the X-ray branching ratios relative to the strongest $\gamma$ lines (i.e., 455.5~keV for $^{199}$Tl, 367.9~keV for $^{200}$Tl, and 167.4~keV for $^{201}$Tl).
To avoid the systematic effect of the efficiency scaling, only data points measured at the 10-cm counting geometry were used. The uncertainty of the relative detection efficiency was added to the final value after averaging.
A comparison with other measured relative branching ratios is possible only in the case of $^{201}$Tl. For the other isotopes no published values are available in the literature.
The absolute intensities were calculated by scaling the relative values with the multiplicators shown in \tab{tab:param}. For each isotope the measured absolute intensities can be compared to the values presented in NuDat2 \cite{NuDat}. In the NuDat2 database the X-ray branching ratios are obtained with the RADLIST \cite{RADLIST} program using the internal conversion coefficients. The present experimental data were found to be in agreement with the calculated values from the database (see \tab{tab:X_abs_branch}). In the case of $^{199}$Tl and $^{200}$Tl the experimental values are somewhat less precise, owing to the uncertainty of the multiplicator. However, for $^{201}$Tl, where the precision-limiting factor was the counting statistics, the obtained branching ratio is more precise than that in the database. For this isotope the latest evaluation also contains experimental data for the X-ray intensities. The new value has to be compared with the more precise evaluated value of 44.6\,$\pm$\,0.6\% \cite{NDS201}, stated also in \tab{tab:param} and used for the $^{197}$Au($\alpha$,$\gamma$)$^{201}$Tl cross-section determination.
\subsection{\label{sec:XS} Reaction cross sections}
The measured reaction cross sections are shown in \tab{tab:XS}. In the case of the 18.5-MeV data point only $\gamma$ counting was done; in this measurement the $\gamma$ peak from the $^{197}$Au($\alpha$,$\gamma$)$^{201}$Tl reaction was not visible, thus no cross section could be derived. The total uncertainties presented in the table are the quadratic sums of the systematic uncertainties (10.6\%, 8.1\%, and 4.5\% for the ($\alpha$,2n), ($\alpha$,n), and ($\alpha$,$\gamma$) reactions, respectively) and the statistical uncertainties of the data points. The latter vary between 2-6\% for the neutron-emitting reactions, while for the radiative-capture reaction they are 9-15\%, except for the two lowest-energy data points (26\% and 54\%).
\begin{table}[t]
\caption{Absolute X-ray intensities in \% determined in the present work and compared to their values from the NuDat2 \cite{NuDat} database.}
\label{tab:X_abs_branch}
\center
\begin{ruledtabular}
\begin{tabular}{l c c }
Isotope & NuDat2 \cite{NuDat} & This work \\
\noalign{\smallskip}\colrule\noalign{\smallskip}
$^{199}$Tl & \begin{tabular}{r@{\,$\pm$\,}l} 45.5& 2.5 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 42.7 & 4.2 \\ \end{tabular} \\
$^{200}$Tl & \begin{tabular}{r@{\,$\pm$\,}l} 40.4& 1.7 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 37.8 & 2.6 \\ \end{tabular} \\
$^{201}$Tl & \begin{tabular}{r@{\,$\pm$\,}l} 37 & 6 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 40.5 & 4.1 \\ \end{tabular} \\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Theoretical analysis}
\label{sec:theo}
\subsection{Formalism and general remarks}
\label{sec:form}
The new experimental data were analyzed within the statistical model (SM). In a schematic notation, the cross section $\sigma$\raX\ of an \al -induced reaction is given by
\begin{equation}
\sigma(\alpha,X) \sim \frac{T_{\alpha,0} T_X}{\sum_i T_i} = T_{\alpha,0}
\times b_X
\label{eq:StM}
\end{equation}
with the transmission coefficients $T_i$ into the $i$-th open channel and the branching ratio $b_X = T_X / \sum_i T_i$ for the decay into the channel $X$. The total transmission is given by the sum over all contributing channels: $T_{\rm{tot}} = \sum_i T_i$. The $T_i$ are calculated from global optical potentials for the particle channel and from the $\gamma$-ray strength function (GSF) for the photon channel. The $T_i$ include contributions of all final states $j$ in the respective residual nucleus in the $i$-th exit channel. In practice, the sum over all final states $j$ is approximated by the sum over low-lying excited states up to a certain excitation energy $E_{\rm{LD}}$ (these low-lying levels are typically known from experiment) plus an integration over a theoretical level density for the contribution of higher-lying excited states:
\begin{equation}
T_i = \sum_j T_{i,j} \approx
\sum_j^{E_j < E_{\rm{LD}}} T_{i,j} +
\int_{E_{\rm{LD}}}^{E_{\rm{max}}} \rho(E) \, T_i(E) \, dE
\label{eq:Tsum}
\end{equation}
\onecolumngrid
\begin{center}
\begin{table}[h]
\caption{Measured reaction cross sections with their total uncertainties.}
\label{tab:XS}
\center
\begin{ruledtabular}
\begin{tabular}{l c c c c }
E$_{\alpha}$ (MeV) & E$_{eff}$ (MeV) & $^{197}$Au($\alpha$,2n)$^{199}$Tl (mb) & $^{197}$Au($\alpha$,n)$^{200}$Tl (mb) &
$^{197}$Au($\alpha$,$\gamma$)$^{201}$Tl ($\mu$b) \\
\noalign{\smallskip}\colrule\noalign{\smallskip}
20.0 & \begin{tabular}{r@{\,$\pm$\,}l} 19.92 & 0.06 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 36 & 4 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 37.0 & 3.3 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 20.5 & 2.1 \\ \end{tabular} \\
19.5 & \begin{tabular}{r@{\,$\pm$\,}l} 19.46 & 0.06 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 16.6 & 1.8 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 25.2 & 2.1 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 15.6 & 1.7 \\ \end{tabular} \\
19.0 & \begin{tabular}{r@{\,$\pm$\,}l} 18.99 & 0.06 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 6.3 & 0.7 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 17.6 & 1.5 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 15.4 & 1.8 \\ \end{tabular} \\
18.5 & \begin{tabular}{r@{\,$\pm$\,}l} 18.49 & 0.06 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 1.80 & 0.20\\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 10.6 & 0.9 \\ \end{tabular} & \\
18.0 & \begin{tabular}{r@{\,$\pm$\,}l} 17.99 & 0.05 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 0.34 & 0.04\\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 6.0 & 0.5 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 5.9 & 0.8 \\ \end{tabular} \\
17.5 & \begin{tabular}{r@{\,$\pm$\,}l} 17.48 & 0.05 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 0.046 & 0.005\\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 3.32 & 0.28 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 3.5 & 0.5 \\ \end{tabular} \\
17.0 & \begin{tabular}{r@{\,$\pm$\,}l} 16.96 & 0.05 \\ \end{tabular} & & \begin{tabular}{r@{\,$\pm$\,}l} 1.28 & 0.11 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 1.67 & 0.22 \\ \end{tabular} \\
16.0 & \begin{tabular}{r@{\,$\pm$\,}l} 15.91 & 0.05 \\ \end{tabular} & & \begin{tabular}{r@{\,$\pm$\,}l} 0.226 & 0.023 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 0.45 & 0.07 \\ \end{tabular} \\
15.0 & \begin{tabular}{r@{\,$\pm$\,}l} 14.92 & 0.05 \\ \end{tabular} & & \begin{tabular}{r@{\,$\pm$\,}l} 0.0249 & 0.0022 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 0.081 & 0.021 \\ \end{tabular} \\
14.0 & \begin{tabular}{r@{\,$\pm$\,}l} 13.91 & 0.04 \\ \end{tabular} & & \begin{tabular}{r@{\,$\pm$\,}l} 0.00141 & 0.00013 \\ \end{tabular} & \begin{tabular}{r@{\,$\pm$\,}l} 0.037 & 0.020 \\ \end{tabular} \\
13.7\footnote{Measured with energy degrader foil. See text for details.} & \begin{tabular}{r@{\,$\pm$\,}l} 13.62 & 0.05 \\ \end{tabular} & & \begin{tabular}{r@{\,$\pm$\,}l} 0.00067 & 0.00007 \\ \end{tabular} & \\
\end{tabular}
\end{ruledtabular}
\end{table}
\end{center}
\vspace{1cm}
\twocolumngrid
\noindent For further details of the definition of $T_i$, see \cite{Rauscher11-IJMPE}. $T_{\alpha,0}$ refers to the entrance channel where the target nucleus is in its ground state under laboratory conditions. The calculation of stellar reaction rates \Nsv\ may require further modifications of Eq.~(\ref{eq:StM}) which have to take into account thermal excitations of the target nucleus \cite{Rauscher11-IJMPE}.
From Eqs.~(\ref{eq:StM}) and (\ref{eq:Tsum}), the following properties of the reactions under study can be expected. The total reaction cross section \stot\ (summed over all non-elastic channels) depends only on the transmission $T_{\alpha,0}$ in the entrance channel and is thus only sensitive to the chosen \al -nucleus optical model potential (A-OMP). At very low energies all particle channels are closed. Here the only open reaction channel is the \rag\ channel, leading to a cross section of the \rag\ reaction of about \stot ; consequently, at very low energies the \rag\ cross section is essentially only sensitive to the chosen A-OMP.
The \rap\ channel opens at about 6.5 MeV. However, because of the high Coulomb barrier in the exit channel, the transmission $T_p$ remains practically negligible in the energy range under study, and a more detailed discussion of the \rap\ channel will be omitted.
Contrary to the \rap\ channel, there is no Coulomb barrier for the \ran\ channel. Already close above the \ran\ threshold at 9.7 MeV, the transmission $T_n$ exceeds all transmissions $T_{X \ne n}$ into other channels. Now the \ran\ cross section becomes close to the total reaction cross section \stot , and thus the \ran\ cross section is practically only sensitive to the chosen A-OMP. This finding holds until energies of about 17 MeV where the \rann\ channel opens. At these higher energies the total reaction cross section is essentially distributed among the \ran\ and \rann\ channels; i.e., the sum of the \ran\ and \rann\ cross sections is approximately given by \stot\ and is sensitive to the A-OMP only. But the individual \ran\ and \rann\ cross sections are sensitive to the ratio between $T_{n}$ and $T_{2n}$ which in turn depend on the chosen nucleon-nucleus optical model potential (N-OMP) and on the chosen level densities (LD) for the residual \tlnull\ and \tlix\ nuclei.
At all energies above the \ran\ threshold, the \rag\ cross section depends on the ratio $T_{\alpha,0} T_\gamma / T_{\rm{tot}}$ and is thus sensitive not only to the transmission $T_\gamma$ and the $\gamma$-ray strength function, but also to all further ingredients like the A-OMP, N-OMP, and LD. The analysis of the \rag\ excitation function alone does not allow one to fix any ingredient of the SM calculations because of the complex sensitivity of the \rag\ cross section.
\subsection{Additional data from elastic scattering}
\label{sec:elast}
The total reaction cross section \stot\ can also be derived from the analysis of elastic scattering angular distributions. It has been shown recently that \stot\ extracted from elastic scattering is consistent with the sum over the \raX\ cross sections of all non-elastic channels \cite{Gyurky12-PRC,Ornelas16-PRC}.
\begin{figure}[t]
\includegraphics[width=0.99\columnwidth]{Scat.eps}
\caption{
\label{fig:e25scat} \auvii \raa \auvii\ elastic scattering at $E_\alpha = 24.7$ MeV: a total reaction cross section \stot\ = $520 \pm 20$ mb is derived from the angular distribution of \cite{Budzanowski64-PL}.
}
\end{figure}
The elastic scattering angular distribution at $E_\alpha = 24.7$ MeV by Budzanowski {\it et al.}\ \cite{Budzanowski64-PL} provides the chance to obtain one further data point for \stot\ at the upper end of the energies under study. This value for \stot\ directly constrains the A-OMP at relatively high energies.
The angular distribution of \cite{Budzanowski64-PL} was analyzed in the following way. Optical model fits were performed using either Woods-Saxon potentials of volume type in the real and imaginary part of the OMP, or a folding potential was used in the real part in combination with a surface Woods-Saxon potential in the imaginary part. Furthermore, a phase shift fit was made using the approach of \cite{Chiste96-PRC}. The fits are shown in Fig.~{\ref{fig:e25scat}} and give \stot\ of 563 mb, 525 mb, and 504 mb. Because of the significantly larger $\chi^2$ of the Woods-Saxon fit (which clearly underestimates the elastic cross section around $\vartheta \approx 60^\circ$ and thus slightly overestimates \stot\ with 563 mb), we adopt \stot\ = $520 \pm 20$ mb at 24.7 MeV.
\subsection{$\chi^2$-based assessment}
\label{sec:chi2}
The new experimental data, in combination with the additional data point for \stot\ from \auvii \raa \auvii\ elastic scattering \cite{Budzanowski64-PL} and the further data points of \cite{Basunia07-PRC}, will be used to determine a best-fit set of parameters for the SM calculations. For this purpose the TALYS code (version 1.9) \cite{TALYS-V19} was used, which is a well-established open-source code for SM calculations. Similar to previous studies (see \cite{Mohr17-PRC} for $^{64}$Zn + \al , \cite{Talwar18-PRC,Mohr18-PRC} for $^{38}$Ar + \al\ and \cite{Kiss18-PRC} for $^{115}$In + \al ), the complete TALYS parameter space was investigated, and a $\chi^2$-based assessment was used to find the best description of the experimental data. In practice, 14 different A-OMPs were used, which turn out to be the most important ingredient of the SM calculation. These 14 A-OMPs were combined with 5 different N-OMPs, 6 LDs, and 8 $\gamma$-ray strength functions (GSF, each with two options for the M1 contribution), leading to an overall calculation of $14 \times 5 \times 6 \times 8 \times 2 = 6720$ excitation functions. The N-OMPs, LDs, and GSFs were taken from the built-in TALYS options. For the A-OMPs 14 different options were used which exceed the 8 standard options in TALYS; these 14 A-OMPs will be discussed in further detail. The A-OMPs are also summarized in Table \ref{tab:aomp}.
\begin{table}[t]
\caption{\al -nucleus optical model potentials (A-OMPs): TALYS standard and extensions}
\label{tab:aomp}
\center
\begin{ruledtabular}
\begin{tabular}{cccp{5cm}}
{\it{alphaomp}} & Ref. & Abbr. & comments \\
\noalign{\smallskip}\colrule\noalign{\smallskip}
1 & \cite{Watanabe58-NP} & WAT
& Watanabe: default in earlier TALYS versions \\
2 & \cite{McFadden66-NP} & MCF
& McFadden/Satchler: simple 4-parameter potential \\
3 & \cite{Demetriou02-NPA} & DEM1
& Demetriou {\it et al.}, version 1: real folding, imaginary volume WS \\
4 & \cite{Demetriou02-NPA} & DEM2
& Demetriou {\it et al.}, version 2: real folding, imaginary volume+surface WS \\
5 & \cite{Demetriou02-NPA} & DEM3
& Demetriou {\it et al.}, version 3: real folding plus dispersion relation \\
6 & \cite{Avrigeanu14-PRC} & AVR
& Avrigeanu {\it et al.}: multi-parameter WS \\
7 & \cite{Nolte87-PRC} & --
& Nolte {\it et al.}: not appropriate for low energies \\
8 & \cite{Avrigeanu94-PRC} & --
& Avrigeanu {\it et al.}: not appropriate for low energies \\
9 & \cite{Mohr13-ADNDT} & AT-V1
& Mohr {\it et al.}: systematic potential, adjusted to low-energy scattering data \\
10 & \cite{Demetriou02-NPA} & DEM3x1.1
& Demetriou {\it et al.}, version 3: real part multiplied by 1.1 \\
11 & \cite{Demetriou02-NPA} & DEM3x1.2
& Demetriou {\it et al.}, version 3: real part multiplied by 1.2 \\
12 & \cite{Demetriou02-NPA} & DEM3x0.9
& Demetriou {\it et al.}, version 3: real part multiplied by 0.9 \\
13 & \cite{Demetriou02-NPA} & DEM3x0.8
& Demetriou {\it et al.}, version 3: real part multiplied by 0.8 \\
14 & \cite{Demetriou02-NPA} & DEM3x0.7
& Demetriou {\it et al.}, version 3: real part multiplied by 0.7 \\
\end{tabular}
\end{ruledtabular}
\end{table}
It is well-known that the early A-OMP by Watanabe (WAT) \cite{Watanabe58-NP}
and the simple 4-parameter Woods-Saxon (WS) potential by McFadden and Satchler
(MCF) \cite{McFadden66-NP} ({\it{alphaomp}} 1 and 2 in TALYS) show a trend to
overestimate the cross sections of \al -induced reactions. This trend becomes
pronounced especially towards low energies below the Coulomb barrier. For
completeness it has to be mentioned that a new explanation for the failure
of the MCF potential at low energies was provided recently in
\cite{Mohr19-IJMPE}.
A series of A-OMPs was suggested by Demetriou {\it et al.}\ \cite{Demetriou02-NPA} which are based on the double folding procedure in the real part. The first version DEM1 uses a volume WS potential in the imaginary part where the strength is energy-dependent according to a Brown-Rho parametrization \cite{Brown81-NPA}. In the second version DEM2 the imaginary part is composed of a volume WS and a surface WS. The strength of the real parts in DEM1 and DEM2 is taken from the parametrization of real volume integrals $J_R$ from \al -decay data \cite{Mohr00-PRC}. The third version DEM3 uses an imaginary part very close to DEM2 and additionally introduces the coupling between the real and imaginary part by a dispersion relation. Typically, the DEM1, DEM2, and DEM3 potentials ({\it{alphaomp}} 3, 4, 5 in TALYS) predict smaller cross sections than WAT and MCF. Recently it has been pointed out that an excellent reproduction of experimental data can be obtained if the real part of the DEM3 potential is scaled by factors between 1.1 and 1.2 for heavy nuclei \cite{Scholz16-PLB}; a smaller scaling factor of 0.9 was found for $^{64}$Zn \cite{Mohr17-PRC}. Therefore, different scaling factors for the DEM3 potential were also investigated ({\it{alphaomp}} 10-14).
The recent version of the Avrigeanu potentials \cite{Avrigeanu14-PRC} (AVR, {\it{alphaomp}} 6 in TALYS) consists of a real part in WS parametrization which has been chosen close to folding potentials. The imaginary part is composed of WS volume and surface terms with mass- and energy dependent parameters. Similar to the Demetriou potentials, the AVR potential leads to smaller cross sections than WAT and MCF at low energies.
The potential by Nolte {\it et al.}\ \cite{Nolte87-PRC} ({\it{alphaomp}} 7 in TALYS) and the earlier potential by Avrigeanu {\it et al.}\ \cite{Avrigeanu94-PRC} ({\it{alphaomp}} 8) have been adjusted to experimental data at higher energies. It has been found that these potentials are inappropriate at very low energies \cite{Mohr17-PRC,Mohr18-PRC}. This finding is confirmed in the present work, where a $\chi^2$ per point above 50,000 (5,800) was found for the Nolte (early Avrigeanu) potential. These huge $\chi^2$ values correspond to average deviations from the experimental data of more than a factor of 3.6 (2.4), whereas all other potentials reach average deviations far below a factor of two.
The ATOMKI-V1 potential \cite{Mohr13-ADNDT} (AT-V1, implemented as {\it{alphaomp}} 9 in TALYS V1.8) is based on a double-folding potential in the real part in combination with a surface WS potential in the imaginary part. The parameters of AT-V1 have been adjusted to elastic scattering in the $89 \le A \le 144$ mass range, i.e.\ below the \auvii\ nucleus under study in this work.
The 14 A-OMPs in Table \ref{tab:aomp} were used in a strict $\chi^2$-based assessment. The experimental data show a clear preference for the DEM3 potential (multiplied by 1.1 and 1.2) and the AVR potential. We find $\chi^2$ per point of about 4.6 (DEM3x1.2), 6.1 (AVR), and 6.2 (DEM3x1.1). This corresponds to an average deviation \fdevbar\ of 1.39, 1.41, and 1.50 for the DEM3x1.2, AVR, and DEM3x1.1 potentials. In the following, all $\chi^2$ will be given per experimental data point. \fdevbar\ is defined by
\begin{equation}
\bar{f}_{\rm{dev}} = \left( \prod_{i=1}^N f_{{\rm{dev}},i} \right)^{1/N}
\label{eq:fdev}
\end{equation}
and \fdev $_{,i}$ is the larger of the ratios $\sigma_{\rm{calc}}/\sigma_{\rm{exp}}$ or $\sigma_{\rm{exp}}/\sigma_{\rm{calc}}$ for the $i$-th experimental data point.
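As a brief hypothetical illustration of Eq.~(\ref{eq:fdev}): for $N = 2$ data points with deviation factors \fdev $_{,1} = 1.5$ and \fdev $_{,2} = 1.2$, one obtains $\bar{f}_{\rm{dev}} = \sqrt{1.5 \times 1.2} \approx 1.34$, i.e., an average deviation of about 34\%, independent of the direction of the individual deviations.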
As pointed out above, the sensitivity to the other ingredients of the SM calculations is relatively minor. About 50 different choices of GSF, N-OMP, and LD in combination with the DEM3x1.2 A-OMP result in a minor increase of $\chi^2$ by less than 1.0 and \fdevbar\ between 1.39 and 1.46.
A strict $\chi^2$ assessment is only valid for statistical uncertainties. Unfortunately, the uncertainties of the present data have a significant contribution from systematic uncertainties (see Table \ref{tab:param} and discussion at the end of Sec.~\ref{sec:exp}). An attempt was made to disentangle the relevance of the statistical and systematic uncertainties. For this attempt we restrict ourselves to our new experimental data from Table \ref{tab:XS}.
In a new $\chi^2$ calculation, the best-fit parameters are derived from our experimental data with statistical uncertainties only. Here we find that the best reproduction of our experimental data is obtained for the AVR A-OMP with $\chi^2 = 34.2$ and \fdevbar\ $ = 1.22$. Compared to the result from all available experimental data, we find a significantly increased $\chi^2$ which results from the smaller (statistical only) uncertainties. The average deviation decreases from about 1.4 to 1.2; this decrease is related to relatively large deviation factors \fdev\ for some data points of the Basunia data, in particular at the lowest energy of Basunia for the \ran\ channel. Interestingly, the best-fit A-OMP changes from DEM3x1.2 to AVR; however, the DEM3x1.2 A-OMP provides $\chi^2 \approx 39$ and \fdevbar\ $\approx 1.25$, i.e.\ very close to the results from the AVR A-OMP.
In \fig{fig:XS} the measured cross sections are shown together with the calculated ones using the AVR A-OMP.
\begin{figure}[b]
\includegraphics[width=0.99\columnwidth]{XS}
\caption{\label{fig:XS} Experimental cross sections compared to the best-fit statistical model calculation using the AVR A-OMP. Green down triangles, blue up triangles, black dots, and red squares stand for the total, \rann , \ran , and \rag\ cross sections, respectively. Full symbols are the present data, while open symbols are from \cite{Basunia07-PRC} and the total cross section derived from \cite{Budzanowski64-PL}. The green full line is the SM-predicted total cross section, while the blue dashed, black dotted, and red dot-dashed lines are the \rann , \ran , and \rag\ cross sections, respectively.}
\end{figure}
Next, we have to consider the systematic uncertainties which are dominated by the \g -ray intensities in the $\beta$-decays of the residual nuclei (see Table \ref{tab:param}). Under these circumstances the systematic uncertainties are common within each \raX\ channel, but not common to all experimental data; i.e., it is possible that both \ran\ and \rann\ data are higher or lower within their systematic uncertainties, but it is also possible that the \ran\ data are higher and the \rann\ data are lower (and vice versa). To cover the full range of systematic uncertainties, we have scaled the \rag , \ran , and \rann\ data by $\pm 2 \sigma$ of the systematic uncertainties, leading to 27 hypothetical experimental data sets within the systematic uncertainties. The number of 27 results from the 3 channels being varied independently by the factors $1-2\sigma$, 1.0, and $1+2\sigma$, i.e., $3^3 = 27$ combinations. Further calculations with finer steps in one channel (and no variation in the other channels) confirm that the overall behavior of the $\chi^2$ landscape is relatively smooth.
For the 27 hypothetical data sets, the best-fit parameters of the SM calculations are derived using the same $\chi^2$-based assessment as before. It is found that the best-fit A-OMP is well constrained to the AVR or DEM3x1.2 potentials. In general, the AVR potential is obtained when the \ran\ and \rann\ cross sections are increased whereas the DEM3x1.2 potential is favored for smaller \ran\ and \rann\ cross sections. The overall smallest $\chi^2 = 8.1$ and \fdevbar\ $= 1.18$ is found for the AVR A-OMP and the case where the cross sections of the \rag , \ran , and \rann\ channels are all increased by $2\sigma$ of the systematic uncertainties, i.e., by about 9\% for the \rag , 16\% for the \ran, and 21\% for the \rann\ channel.
Surprisingly, although the present experimental data cover the \rag , \ran ,
and \rann\ channels over several MeV, it is difficult to provide constraints
for the SM parameters beyond the A-OMP. The variation of the experimental data
within their systematic uncertainties constrains the A-OMP to AVR or DEM3x1.2,
but almost any choice of the N-OMP, GSF, and LD appears in the best-fit
parameters of the 27 hypothetical experimental data sets which represent the
range of systematic uncertainties.
\subsection{\label{sec:astro}Extrapolation to astrophysically relevant
energies}
The A-OMPs are an essential ingredient for the calculation of stellar reaction
rates in the astrophysical $\gamma$ process. This process operates at typical
temperatures of $T_9 \approx 2 - 3$, corresponding to a Gamow window around
8.9 MeV at $T_9 = 2$ and 11.7 MeV at $T_9 = 3$. As the \ran\ channel opens
around 10 MeV and starts to dominate already a few hundred keV above the
threshold, the \rag\ and \ran\ rates above $T_9 \approx 2.5$ must remain
uncertain because the branching between the \rag\ and \ran\ channels depends
on several parameters of the statistical model which cannot be well
constrained (see discussion above). We restrict ourselves to the
analysis of the total reaction cross section \stot\ which depends solely on
the chosen A-OMP.
At $E_{\alpha,{\rm{lab}}} = 11.9$ MeV, corresponding to the center of the
Gamow window at $T_9 \approx 3$, we find \stot\ = 7.34 nb from the AVR
potential and 4.56 nb from the DEM3x1.2 potential; both potentials have been
determined from the $\chi^2$-based assessment in the previous section. Thus,
\stot\ $\approx 6$ nb can be estimated with an uncertainty of about 25\%.
The small uncertainty of 25\% is based on the constraints from the new
experimental data in combination with the similar energy dependence of
\stot\ from the two best-fit potentials down to about 11 MeV.
A further extrapolation down to $E_{\alpha,{\rm{lab}}} = 9.1$ MeV
(corresponding to $T_9 \approx 2$) is obviously more uncertain, but the
predictions from the two best-fit potentials remain within about a factor of
2.5 with \stot\ = 0.123 pb for the AVR potential and \stot\ = 0.047 pb from
the DEM3x1.2 potential. From the average of the two predicted cross sections,
\stot\ $\approx 0.08$ pb with an uncertainty of less than a factor of two can
be recommended. This is a significant achievement, because the range of
predictions at 9.1 MeV from modern A-OMPs (AVR, DEM, AT-V1) covers two orders of
magnitude from 0.03 to 3 pb, and the MCF potential predicts an even higher
cross section of about 37 pb.
In addition, reaction rates for the \rag\ reaction are calculated at temperatures of $T_9 = 2$ and 3 for the best-fit potentials (DEM3x1.2 and AVR) and compared to the widely used MCF potential. At the higher temperature of $T_9 = 3$ the DEM3x1.2 and AVR rates agree within about a factor of two, whereas the MCF rate exceeds the average of the DEM3x1.2 and AVR rates by a factor of 300. The discrepancies increase towards lower temperatures. At $T_9 = 2$ the DEM3x1.2 and AVR rates deviate by a factor of about 3.5, whereas the MCF rate exceeds the DEM3x1.2 and AVR rates by three orders of magnitude.
Finally, it has been pointed out by Rauscher \cite{Rauscher10-PRC} that
the simple Gamow window estimate for the most effective energies is inaccurate
for \al -induced reactions on heavy nuclei. Typically, the most effective energy
is shifted to lower energies by about $1-2$ MeV, thus further increasing the
range of predicted cross sections at the most effective energies and
increasing the uncertainties of the reaction rates.
\section{\label{sec:sum}Summary}
Alpha-induced reactions were investigated at low energies using the activation
technique in combination with $\gamma$-ray and X-ray spectroscopy. The cross
sections of the \rag , \ran , and \rann\ reactions were measured with
unprecedented sensitivity, and thus far lower cross-section data could be
obtained than available in literature. The lowest data points of the present
work reach the upper end of the Gamow window for temperatures of the
astrophysical $\gamma$ process.
The new dataset allowed us to choose the best $\alpha$-nucleus optical model potential based on a strict
$\chi^2$-based statistical assessment. It was found that the best-fit
theoretical calculations are obtained using either the latest potential by
Avrigeanu {\it et al.}\ \cite{Avrigeanu14-PRC} or the third version of the
Demetriou {\it et al.}\ \cite{Demetriou02-NPA} A-OMP with a scaling factor of
1.2 for the real part. The total reaction cross section is well constrained
within a factor of two uncertainty down to the lowest $\gamma$-process
temperatures. However, due to the systematic uncertainties of the present
data, the other constituents of the statistical model calculations, such as the
nucleon-nucleus optical model potential, $\gamma$-ray strength function, and level density, cannot be constrained; but these constituents typically
have only a minor impact on the reaction rate calculations for the most
important \rga\ reactions.
\begin{acknowledgments}
This work was supported by NKFIH (Gr. No. K120666, NN128072), by the New National Excellence Program of the Ministry for Innovation and Technology (\'UNKP-19-3-I-DE-394, \'UNKP-19-4-DE-65), Helmholtz Association (ERC-RA-0016) and European Cooperation in Science and Technology ("ChETEC" COST Action, CA16117).
G.\,G.~Kiss acknowledges support from the J\'anos Bolyai research fellowship of the Hungarian Academy of Sciences.
\end{acknowledgments}
\section{Introduction}
In this work we are interested in solving the strongly convex, composite, unconstrained optimization problem,
\begin{equation}\label{eq:Problem}
\min_{x\in \R^n} \{F(x) \eqdef f(x) + h(x)\}.
\end{equation}
We use $x^*$ to denote the optimal solution of \eqref{eq:Problem}, and $F^*:= F(x^*)$ to denote the associated optimal function value. It is assumed that $h(x)$ is a convex and possibly nonsmooth function. Furthermore, throughout the paper we make the following assumption regarding the function $f$.
\begin{assumption}\label{A_SCL}
The function $f(\cdot)$ is $\mu$-strongly convex and $L$-smooth, i.e., for all $x,y\in\R^n$, it holds that
\begin{align}
f(x) \geq f(y) + \ve{\nabla f(y)}{x-y} + \tfrac{\mu}{2} \tnorm{x-y}, \label{eq:ass1} \\
f(x) \leq f(y) + \ve{\nabla f(y)}{x-y} + \tfrac{L}{2} \tnorm{x-y}. \label{eq:ass2}
\end{align}
\end{assumption}
It is straightforward to show that strong convexity of $f(x)$ implies strong convexity of $F(x)$.
For problems of the form \eqref{eq:Problem}, which satisfy Assumption~\ref{A_SCL}, it is well known that Nesterov's methods \cite{Nesterov04,Nesterov07,Nesterov13} converge linearly, with the accelerated variants converging at the optimal rate of $(1-\sqrt{\mu/ L})$.
Nesterov's acceleration approach, and the idea of adding momentum, has led to the extensive analysis of accelerated first order methods in a variety of settings. This includes a recent surge of interest in investigating stochastic gradient methods \cite{robbins1951stochastic,schmidt2017minimizing,johnson2013accelerating} and their accelerated variants \cite{cotter2011better,shalev2013accelerated,kingma2014adam,nitanda2014stochastic}. Coordinate descent methods \cite{nesterov2012efficiency,richtarik2014iteration} are another class of algorithms that have proved extremely popular, largely because they can take advantage of modern parallel computing architecture \cite{jaggi2014communication,ma2015adding}, and this has also inspired much research into studying their accelerated versions \cite{fercoq2015accelerated,allen2016even,ACOCOA}.
However, while the theoretical and practical performance of Nesterov's methods is well established, a satisfactory geometric interpretation of these approaches has been elusive.
Recently the authors of \cite{Bubeck15,Drusvyatskiy16} proposed algorithms for smooth functions (i.e., $h(x) = 0$ in \eqref{eq:Problem}) that enjoy the same optimal rate of convergence as Nesterov's accelerated method, but also have a novel geometric intuition. Specifically, the geometric descent algorithm \cite{Bubeck15} achieves the optimal linear convergence rate, and shares a geometric intuition similar to that of ellipsoidal methods. The authors illustrate that the optimal rate is achieved by appropriately shrinking two balls that contain $x^*$ (the minimizer of $f(x)$) at each iteration.
Motivated by \cite{Bubeck15}, the paper \cite{Drusvyatskiy16} proposed the Optimal Quadratic Averaging (OQA) algorithm. This algorithm maintains a sequence of quadratic lower bounds on the objective function, and at each iteration the new quadratic lower bound is formed as the optimal average of the current lower bound and the lower bound from the previous iteration. The gap between the function value $f(x_k)$ and the minimum value of the lower bound, $\phi_k^*$ say, converges to zero at the optimal rate. Importantly, the lower bound also acts as a natural stopping criterion for the algorithm, and when $f(x_k) - \phi_k^* \leq \epsilon$, where $\epsilon >0$ is some stopping tolerance, then the user has a certificate of $\epsilon$-optimality, i.e., it is guaranteed that $f(x_k) - f^* \leq \epsilon$. In practice, the OQA algorithm can be equipped with historical information to achieve further speed up. However, the OQA algorithm and its history-based variant need at least two calls of a line search process at every iteration, which can pose a heavy computational burden in terms of function evaluations. The authors in \cite{Drusvyatskiy16} also briefly describe how their \emph{unaccelerated OQA algorithm} can be extended to composite functions, and left as an open problem the possibility of deriving \emph{accelerated} proximal variants.
Very recently, the authors of \cite{Chen16} successfully addressed the open problem in \cite{Drusvyatskiy16} and presented an accelerated algorithm for composite problems of the form \eqref{eq:Problem}, that achieves the optimal linear rate of convergence. Their algorithm, called the geometric proximal gradient (GeoPG) method also has a satisfying geometrical interpretation similar to that in \cite{Bubeck15}. Unfortunately, a major drawback of GeoPG in \cite{Chen16} is that the algorithm is rather complicated, and requires a couple of inner loops to determine necessary algorithm parameters. For example, for GeoPG one must find the root of a specific function and one is also required to compute a minimum enclosing ball via some iterative process; both of these steps must be carried out at every iteration, which is expensive.
In this paper we propose several new algorithms to solve problem \eqref{eq:Problem} that are motivated by, and extend, the previously mentioned works. In particular, we present four algorithms: a Gradient Descent (GD) type algorithm for smooth problems, an accelerated GD type algorithm for smooth problems, a proximal GD type algorithm for composite problems, and an accelerated proximal GD type algorithm for composite problems. Our algorithms all converge linearly, and the accelerated variants converge at the optimal linear rate. These algorithms blend the positive features of Nesterov's methods \cite{Nesterov04,Nesterov07,Nesterov13} and the OQA algorithm \cite{Drusvyatskiy16}, and thus enjoy the advantages of both approaches. First, similarly to Nesterov's methods, no line search is needed by any of our algorithms as long as we make the standard assumption that the Lipschitz constant $L$ is known or is easily computable. Hence, there are no `inner-loops' in any of our algorithm variants, which ensures that the computational cost is low and is fixed at every iteration. Secondly, our algorithms incorporate quadratic lower bounds so they have natural stopping conditions; a feature that is similar to OQA. However, our algorithms update the quadratic lower bound at each iteration by taking a convex combination of the previous two lower bounds, which is different from OQA.
Another contribution of this work is that we also propose the concept of an UnderEstimate Sequence (UES), which is a natural extension of Nesterov's Estimate Sequence \cite{Nesterov83}. Perhaps surprisingly, estimate sequences initially appeared to be largely overlooked, but since Nesterov's work on smoothing techniques in the early 2000s \cite{Nesterov05}, they have seen a significant revival in popularity. For example, the work of Baes in \cite{Baes09}, the development of a randomized estimate sequence in \cite{Lu15} and an approximate estimate sequence in \cite{Lin15}. To the best of our knowledge, this is the first work which proposes estimate sequences that form \emph{lower bounds} on the objective function. The UES framework is the powerhouse of our convergence analysis; we prove that each of our proposed algorithms generates a UES, and subsequently the algorithms converge (linearly) to the optimal solution of problem \eqref{eq:Problem}. While we describe 4 new algorithms in this work, we stress that the UES framework is general, and it allows a plethora of algorithms to be developed. Moreover, any developed algorithm whose iterates generate a UES is guaranteed to converge linearly to the optimal solution $F^*$.
\subsection{Contributions}
In this section we state the main contributions of this paper (listed in no particular order).
\begin{itemize}
\item \textbf{Underestimate Sequence.} We introduce the concept of an UnderEstimate Sequence (UES), which extends Nesterov's work on estimate sequences in \cite{Nesterov83}. The UES consists of three sequences $\{x_k\}_{k=0}^\infty$, $\{\phi_k(x)\}_{k=0}^\infty$ and $\{\alpha_k\}_{k=0}^\infty$, where for all $k$, $\phi_k(x)$ is a \emph{global lower bound} on the objective function $F(x)$. While there have been several extensions and variants of Nesterov's work \cite{Nesterov83}, to the best of our knowledge this is the first time that the estimate sequence framework has been adapted to act as a \emph{lower bound} or \emph{under}estimate of $F(x)$. The UES framework is general, conceptually simple, and it allows the construction of a wide variety of algorithms to solve \eqref{eq:Problem}.
\item \textbf{New algorithms.} We present 4 new algorithms that are computationally efficient and adhere to the UES framework. Crucially, two of our algorithms solve the \emph{composite} problem \eqref{eq:Problem}. The algorithms are: (i) SUESA, a GD type algorithm for smooth problems; (ii) ASUESA, an accelerated GD type algorithm for smooth problems; (iii) CUESA, a proximal GD type algorithm for composite problems, and (iv) ACUESA, an accelerated proximal GD type algorithm for composite problems.
\item \textbf{Algorithms with optimal convergence rate.} Each of the algorithms generate iterates that form a UES, so all four algorithms are guaranteed to converge linearly to the optimal solution of \eqref{eq:Problem}. Moreover, the accelerated algorithms (ASUESA and ACUESA) are guaranteed to converge linearly \emph{at the optimal rate.}
\item \textbf{Algorithms with convergence certificates.} The underestimate sequence builds a global lower bound of $F(x)$ at each iteration, and the gap between the (minimum of the) lower bound and $F(x_k)$ tends to zero. Thus, this difference acts as a kind of surrogate ``duality gap'', and once this gap falls below some (user defined) stopping tolerance $\epsilon$, it is guaranteed that the point returned by the algorithm is $\epsilon$-optimal.
\item \textbf{No line search.} The algorithms developed in this work are computationally efficient and do not involve any `inner loops'. In contrast, the methods in \cite{Bubeck15,Drusvyatskiy16,Chen16} all involve an exact linesearch or a root finding process to determine necessary algorithmic parameters, which comes with an additional computational cost.
\end{itemize}
\subsection{Paper Outline}
The paper is organized as follows. In the next section we introduce the concept of an Underestimate Sequence (UES), and present a proposition which shows that if one has a UES, then it is guaranteed that $F(x_k) - F^* \to 0$ at a linear rate. Section~\ref{sec:lb} is dedicated to the discussion of lower bounds for the function $F(x)$ (in both the smooth and composite cases), and these lower bounds are a critical part of the underestimate sequences framework. In Section~\ref{sec:smooth} we propose two algorithms for solving \eqref{eq:Problem} in the smooth case ($h=0$) and in Section~\ref{sec:composite} we present two algorithms for solving composite problems of the form \eqref{eq:Problem} ($h\neq 0$). All algorithms in Sections~\ref{sec:smooth} and \ref{sec:composite} are supported by convergence theory, which shows that they are guaranteed to converge to the optimal solution of \eqref{eq:Problem} at a linear rate. In Section~\ref{sec:adaptiveL} we present another algorithm which uses an adaptive Lipschitz constant, rather than the true Lipschitz constant. Section~\ref{sec:numericalexperiments} presents numerical experiments to demonstrate the practical advantages of our proposed algorithms, and we give concluding remarks in Section~\ref{sec:conclusion}.
\section{Underestimate Sequence}\label{sec:UES}
In this section, we present the definition of an Underestimate Sequence (UES) and a proposition showing that if one has a UES then $F(x_k) - F^* \to 0$.
\begin{definition}\label{def:UES}
A series of sequences $\{x_k\}_{k=0}^\infty$, $\{\phi_k(x)\}_{k=0}^\infty$ and $\{\alpha_k\}_{k=0}^\infty$, where $\alpha_k\in (0,1)$ for all $k\geq0$, is called an Underestimate Sequence (UES) of the function $F(x)$ if, for all $x\in\R^n$ and for all $k\geq 0$ we have,
\begin{align}
\phi_k(x) &\leq F(x), \label{eq:def1}\\
F(x_{k+1}) -\phi_{k+1}^* &\leq (1-\alpha_k) (F(x_{k}) - \phi_{k}^*), \label{eq:def2}
\end{align}
where $\phi_k^* := \min_x \phi_k(x)$.
\end{definition}
\begin{proposition}\label{prop:UES}
If $\{x_k\}_{k=0}^\infty$, $\{\phi_k(x)\}_{k=0}^\infty$ and $\{\alpha_k\}_{k=0}^\infty$ is an UES of $F(x)$, then
\begin{equation}\label{eq:linconverge}
F(x_k) -\phi_k^* \leq \lambda_k(F(x_0) - \phi_0^*),
\end{equation}
where $\lambda_k = \prod_{i=0}^{k-1} (1-\alpha_i)$. Furthermore, since $\phi_k^*\leq \phi_k(x^*) \leq F^*$ holds for all $k\geq0$, and $\lambda_k \to 0$ whenever the $\alpha_k$ are bounded away from zero, the above inequality implies that $\{F(x_k) - F^*\}_{k=0}^\infty$ converges to $0$.
\end{proposition}
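For completeness, we remark that \eqref{eq:linconverge} follows by recursively applying \eqref{eq:def2}:
\begin{equation*}
F(x_k) -\phi_k^* \overset{\eqref{eq:def2}}{\leq} (1-\alpha_{k-1}) \left(F(x_{k-1}) -\phi_{k-1}^*\right) \leq \dots \leq \prod_{i=0}^{k-1}(1-\alpha_i) \left(F(x_0) -\phi_0^*\right),
\end{equation*}
while $\phi_k^* \leq F^*$ holds because $\phi_k^* \leq \phi_k(x^*) \overset{\eqref{eq:def1}}{\leq} F(x^*) = F^*$.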
Definition~\ref{def:UES} is different from Nesterov's Estimate Sequence (ES) in two ways. Firstly, both our UES and Nesterov's ES contain a sequence of estimators $\{\phi_k(x)\}_{k=0}^\infty$ for $F(x)$, and $\phi_k^*$ converges to $F^*$ as $k$ increases. However, in Definition~\ref{def:UES} $\phi_k(x)$ must be a \emph{lower/under estimator} of $F(x)$ for all $k\geq0$, while this does not necessarily hold for an ES. Nesterov's proof is based on the fact that $F(x_{k}) \leq \phi^*_{k}$, but this \emph{does not hold} in our case. Secondly, the definition of an ES only contains two sequences, while the UES has an extra sequence of points $\{x_k\}_{k=0}^\infty$. This enables us to show that the gap between the function value at $x_k$ and $\phi_k^*$ decreases in the $k$th iteration.
Proposition~\ref{prop:UES} shows that any sequences that form a UES (i.e., any sequences that satisfy Definition~\ref{def:UES}) are guaranteed to converge to the optimal solution of problem \eqref{eq:Problem} \emph{and} the estimate of the duality gap $F(x_k) -\phi_k^*$ is also guaranteed to converge at a linear rate. Thus, the UES construction provides a general framework for determining whether an optimization algorithm for problem \eqref{eq:Problem} will converge (linearly). In particular, if the iterates generated by an optimization algorithm satisfy Definition~\ref{def:UES}, then that algorithm is not only convergent, but also achieves a linear rate of convergence.
The UES framework is not only interesting from a theoretical perspective, but it also provides a major practical advantage. In particular, $F(x_k) -\phi_k^*$ provides a natural stopping criterion when designing algorithms, due to the fact that $F(x_k)$ and $\phi_k^*$ are upper and lower bounds for $F^*$, respectively. This difference is a kind of surrogate for the duality gap, and subsequently, algorithms that adhere to the UES framework are provided with a certificate of optimality, which is a highly desirable attribute.
\section{Lower Bounds via Quadratic Averaging}\label{sec:lb}
The purpose of this section is to introduce (global) lower bounds for the function $F(x)$ defined in \eqref{eq:Problem}, in both the smooth ($h(x)=0$) and nonsmooth cases. Lower bounds are the cornerstone of the UES set up, as seen in \eqref{eq:def1} in Definition~\ref{def:UES}. Being able to efficiently construct global lower bounds for $F(x)$ will allow the development of practical algorithms whose convergence is guaranteed via the UES framework.
Before stating the lower bounds, several technical results are presented that will be used throughout this paper.
\subsection{Preliminary Technical Results}
The proximal map is defined as
\begin{equation}\label{eq:proxmap}
\prox{x}{\gamma} \eqdef \arg\min_{u} \{ h(u) + \tfrac \gamma 2 \tnorm{x-u} \},
\end{equation}
and the proximal gradient is
\begin{equation}\label{eq:proxgrad}
G_\gamma(x) \eqdef \gamma\left(x - \prox{x- \tfrac{1}{\gamma} \nabla f(x) }{\gamma}\right).
\end{equation}
Definitions \eqref{eq:proxmap} and \eqref{eq:proxgrad} will be used with $\gamma \equiv L$. Given some point $x\in \R^n$, a short step and a long step are denoted by
\begin{eqnarray}
x^{+} &\eqdef& x - \tfrac{1}{L} G_L(x),\label{eq:shortproxstep}\\
x^{++} &\eqdef& x - \tfrac{1}{\mu} G_L(x). \label{eq:longproxstep}
\end{eqnarray}
In the smooth case ($h\equiv0$), the proximal gradient is simply the gradient $\nabla f(\cdot)$, so the short and long steps (\eqref{eq:shortproxstep} and \eqref{eq:longproxstep}) simplify as
\begin{eqnarray}
x^+ &=& x - \tfrac1 L \nabla f(x) \label{eq:shortstep}\\
x^{++} &=& x - \tfrac{1}{\mu} \nabla f(x).\label{eq:longstep}
\end{eqnarray}
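To make the above maps concrete, the following Python sketch implements \eqref{eq:proxmap}--\eqref{eq:longproxstep}. The choice $h(u) = \lambda\|u\|_1$ (whose proximal map is soft-thresholding) and the value of the parameter \texttt{lam} are illustrative assumptions only; any other convex $h$ would require its own proximal map.
\begin{verbatim}
import numpy as np

def prox_h(x, gamma, lam=0.1):
    # Proximal map (eq:proxmap) for the illustrative choice
    # h(u) = lam*||u||_1, i.e., soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam / gamma, 0.0)

def prox_grad(x, grad_f, L):
    # Proximal gradient G_L(x), see (eq:proxgrad).
    return L * (x - prox_h(x - grad_f(x) / L, L))

def short_step(x, grad_f, L):
    # Short step x^+ = x - (1/L) G_L(x), see (eq:shortproxstep).
    return x - prox_grad(x, grad_f, L) / L

def long_step(x, grad_f, L, mu):
    # Long step x^{++} = x - (1/mu) G_L(x), see (eq:longproxstep).
    return x - prox_grad(x, grad_f, L) / mu
\end{verbatim}
In the smooth case $h\equiv0$ (take \texttt{lam} $=0$), \texttt{prox\_h} reduces to the identity and \texttt{prox\_grad} returns $\nabla f(x)$, recovering \eqref{eq:shortstep} and \eqref{eq:longstep}.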
The following Lemma characterizes elements of the subdifferential of $h(x^+)$.
\begin{lemma}\label{smalllemma}
Let $G_L(x)$ and $x^+$ be defined in \eqref{eq:proxgrad} and \eqref{eq:shortproxstep}, respectively. Then, for all $x\in\R^n$, $G_L(x) - \nabla f(x)\in \partial h(x^+)$.
\end{lemma}
\begin{proof} For a given point $x\in\R^n$,
\begin{eqnarray*}
x^+ &\overset{\eqref{eq:shortproxstep}}{=}& x - \tfrac{1}{L} G_L(x)\\
&\overset{\eqref{eq:proxgrad}}{=}& x - \tfrac{1}{L}L (x - \prox{x - \tfrac{1}{L} \nabla f(x)}{L} ) \\
&=& \prox{x - \tfrac{1}{L} \nabla f(x)}{L} \\
&\overset{\eqref{eq:proxmap}}{=}& \arg\min_u \left\{h(u) + \tfrac L2 \tnorm{u- (x - \tfrac{1}{L} \nabla f(x))} \right\}.
\end{eqnarray*}
This gives
$$ 0\in \tfrac 1L \partial h(x^+) + x^+ - (x - \tfrac{1}{L} \nabla f(x) ) \overset{\eqref{eq:shortproxstep}}{=} \tfrac 1L \partial h(x^+) - \tfrac 1L (G_L(x) -\nabla f(x)).$$
Multiplying through by $L$, and rearranging, gives the result.\qed
\end{proof}
\subsection{A Lower Bound for Smooth Functions}\label{s:lbsmooth}
For any point $y\in \R^n$, one can define a lower bound
\begin{equation}\label{eq:lowerboundDF}
\phi(x;y) := f(y) - \tfrac{1}{2\mu}\tnorm{\nabla f(y)} + \tfrac{\mu}{2} \tnorm{x- y^{++} } \leq f(x),
\end{equation}
which holds with equality $\phi(y;y) = f(y)$ at $x = y$. The lower bound in \eqref{eq:lowerboundDF} is a consequence of the assumption in \eqref{eq:ass1} and the equivalence
\begin{eqnarray}\label{eq:normequiv}
\tfrac{\mu}2\tnorm{x- y^{++}} \overset{\eqref{eq:longstep}}{=} \tfrac{\mu}2\tnorm{x-y} + \langle x-y,\nabla f(y)\rangle + \tfrac1{2\mu}\tnorm{\nabla f(y)}.
\end{eqnarray}
Now, a sequence of lower bounds $\{\phi_k(x)\}_{k=0}^\infty$ can be defined in the following way. Using \eqref{eq:lowerboundDF} and a given initial point $x_0$, define the function
\begin{equation}\label{eq:phi0}
\phi_0(x) \eqdef \phi(x; x_0) = \phi_0^* + \tfrac{\mu}{2}\tnorm{x-v_0},
\end{equation}
where
\begin{eqnarray}\label{eq:c0v0}
\phi_0^* = f(x_0) - \tfrac{1}{2\mu}\tnorm{\nabla f(x_0)} \quad \text{and}\quad v_0 = x_0^{++}.
\end{eqnarray}
Differentiating the expression in \eqref{eq:phi0} w.r.t. $x$ shows that $\phi_0^*$ and $v_0$ in \eqref{eq:c0v0} are the minimum value and minimizer of $\phi_0(x)$, respectively. This motivates the following construction:
\begin{enumerate}
\item $\phi_0(x) \eqdef \phi(x; x_0) = \phi_0^* + \tfrac{\mu}{2}\tnorm{x-v_0}$
\item For $k\geq 0$, $\alpha_k \in (0,1)$, and some point $y_k$ between $x_k$ and $v_k$, recursively define
\begin{equation}\label{eq:zzzzz4}
\phi_{k+1}(x) := (1- \alpha_k) \phi_k(x) + \alpha_k \phi(x; y_k).
\end{equation}
\end{enumerate}
\begin{lemma}
For all $k\geq0$, $\phi\kp$ can be written in the canonical form
\begin{equation}\label{eq:phicanonical}
\phi\kp(x) = \phi\kp^* + \tfrac{\mu}2 \|x - v\kp\|^2,
\end{equation}
where $\alpha_k \in (0,1)$ and
\begin{eqnarray}
v_{k+1} &\eqdef& (1- \alpha_k) v_k + \alpha_k y_k^{++} \label{eq:zzzzzz0}\\
\phi_{k+1}^* &\eqdef& (1- \alpha_k)(\phi_k^* +\tfrac{\mu}{2} \tnorm{v_{k+1}-v_k } ) \notag\\
&&+\alpha_k\left( f(y_k) - \tfrac{1}{2\mu}\tnorm{\nabla f(y_k)} + \tfrac{\mu}{2} \tnorm{v_{k+1} - y_k^{++}}\right).\label{eq:zzz1}
\end{eqnarray}
\end{lemma}
\begin{proof}
Using the definitions \eqref{eq:zzzzzz0} and \eqref{eq:zzz1}, for all $k\geq 0$ and all $x\in\R^n$, $\phi\kp(x)$ can be expressed in the form
\begin{eqnarray*}
\phi\kp(x) &=& (1-\alpha_k)\left(\phi_k^* + \tfrac{\mu}2 \|x - v_k\|^2 \right)\\
&&+ \alpha_k\left(f(y_k) - \tfrac{1}{2\mu}\tnorm{\nabla f(y_k)} + \tfrac{\mu}{2} \tnorm{x - y_k^{++}}\right)\\
&=& \phi\kp^* + \tfrac{\mu}2 \|x - v\kp\|^2.
\end{eqnarray*}
By taking the derivative of \eqref{eq:phicanonical} w.r.t. $x$, we see that the minimizer of $\phi\kp(x)$ is $x = v\kp$, where $v\kp$ is defined in \eqref{eq:zzzzzz0}. Substituting this minimizer into \eqref{eq:phicanonical} gives the minimum value $\phi_{k+1}^*$ as in \eqref{eq:zzz1}.\qed
\end{proof}
\begin{lemma}\label{l:equivphikstar}
An equivalent expression for $\phi_{k+1}^*$ in \eqref{eq:zzz1} is
\begin{align}\label{eq:phistarequiv}
\phi_{k+1}^* &= (1-\alpha_k)\left(\phi_k^* + \alpha_k\tfrac{\mu}2\|v_{k}-y_k^{++}\|_2^2\right) +\alpha_k( f(y_k) - \tfrac{1}{2\mu}\tnorm{\nabla f(y_k)}).
\end{align}
\end{lemma}
\begin{proof}
Using \eqref{eq:zzzzzz0} gives the equivalences
\begin{eqnarray}\label{eq:vkp1vk}
\|v_{k+1}-v_{k}\|^2 &=& \|(1-\alpha_k)v_k + \alpha_k y_k^{++}-v_{k}\|^2 = \alpha_k^2\|v_{k}-y_k^{++}\|^2
\end{eqnarray}
and
\begin{eqnarray}
\notag
\|v_{k+1} - y_k^{++}\|^2 &=& \|(1-\alpha_k)v_k + \alpha_k y_k^{++} - y_k^{++}\|^2\\
&=& (1-\alpha_k)^2\|v_k - y_k^{++}\|^2.\label{eq:vkp1xkpp}
\end{eqnarray}
Combining \eqref{eq:vkp1vk} and \eqref{eq:vkp1xkpp} gives
\begin{eqnarray}
\notag
&&\hspace{-5mm}(1-\alpha_k)\|v_{k+1}-v_{k}\|_2^2 + \alpha_k\|v_{k+1} - y_k^{++}\|^2\\
\notag
&=&\alpha_k^2(1-\alpha_k)\|v_{k}-y_k^{++}\|_2^2 + \alpha_k(1-\alpha_k)^2\|v_k - y_k^{++}\|^2\\
\notag
&=&\alpha_k(1-\alpha_k)(\alpha_k + (1-\alpha_k)) \|v_{k}-y_k^{++}\|_2^2 \\
&=&\alpha_k(1-\alpha_k)\|v_{k}-y_k^{++}\|_2^2.\label{eq:vkequiv}
\end{eqnarray}
Substituting \eqref{eq:vkequiv} into \eqref{eq:zzz1} gives the result.\qed
\end{proof}
The following Lemma shows that $\phi_k(x)$ is a (global) lower bound for $f(x)$.
\begin{lemma}\label{lem:lb}
For all $k\geq0$, let $\alpha_k \in (0,1)$. Then, for all $x\in\R^n$, $\phi_k(x)\leq f(x)$.
\end{lemma}
\begin{proof}
We proceed by induction. When $k=0$, the result holds trivially. Now, assume that $\phi_k(x) \leq f(x)$. Then
\begin{eqnarray*}
\phi_{k+1}(x) &\overset{\eqref{eq:zzzzz4}}{=}& (1- \alpha_k) \phi_k(x) + \alpha_k \phi(x; y_k),\\
&\overset{\eqref{eq:lowerboundDF}}{\leq}& (1- \alpha_k) f(x) + \alpha_k f(x) = f(x).
\end{eqnarray*}
\qed
\end{proof}
\subsection{A Lower Bound for Composite Functions}\label{s:lbcomposite}
Here, the previous results are extended from the smooth to the composite setting, so it is assumed that $h(x)$ is not equivalent to the zero function.
The following Lemma defines a lower bound for $F(x)$ in \eqref{eq:Problem}. The lower bound is the same as that presented in \cite{Drusvyatskiy16} and \cite{Chen16}, with the roles of $x$ and $y$ reversed here; the proof is included for completeness.
\begin{lemma}[Lemma~6.1 in \cite{Drusvyatskiy16}; Lemma~3.1 in \cite{Chen16}]
Given a point $y \in \R^n$, let $G_L(y)$ and $y^+$ be defined in \eqref{eq:proxgrad} and \eqref{eq:shortproxstep}, respectively. Then for all $x\in \R^n$
\begin{eqnarray}\label{eq:lbcomposite}
\varphi(x;y) \eqdef F(y^+) + \langle G_L(y),x-y\rangle + \tfrac{\mu}2\|x-y\|^2 + \tfrac1{2L}\|G_L(y)\|^2 \leq F(x).
\end{eqnarray}
\end{lemma}
\begin{proof}
By Assumption~\ref{A_SCL} ($\mu$-strongly convex)
\begin{eqnarray}\label{eq:mustrong}
f(y) + \ve{\nabla f(y) }{x-y} + \tfrac \mu 2 \tnorm{x-y} \leq f(x), \qquad \forall x,y\in\R^n,
\end{eqnarray}
and ($L$-smooth)
\begin{eqnarray}\label{eq:Lsmooth}
\notag
f(y^+) &\leq& f(y) + \langle \nabla f(y), y^+ - y \rangle + \tfrac L{2 }\| y^+-y \|_2^2\\
&\overset{\eqref{eq:shortproxstep}}{=}& f(y) -\tfrac1L \langle \nabla f(y), G_L(y)\rangle + \tfrac1{2L}\|G_L(y)\|_2^2.
\end{eqnarray}
Combining \eqref{eq:mustrong} and \eqref{eq:Lsmooth} gives
\begin{eqnarray*}
\notag
F(y^+) &\leq& F(x) - \ve{\nabla f(y) }{x-y} - \tfrac \mu 2 \tnorm{x-y} -\tfrac1L \langle \nabla f(y), G_L(y)\rangle\\
&& + \tfrac1{2L}\|G_L(y)\|_2^2 + (h(y^+) - h(x))\\
&=& F(x) - \ve{\nabla f(y) }{x-y^+} - \tfrac \mu 2 \tnorm{x-y} \\
&& + \tfrac1{2L}\|G_L(y)\|_2^2 + (h(y^+) - h(x))\\
&=& F(x) - \ve{\nabla f(y) - G_L(y)}{x-y^+} - \tfrac \mu 2 \tnorm{x-y}\\
&& + \tfrac1{2L}\|G_L(y)\|_2^2 + (h(y^+) - h(x)) - \ve{G_L(y)}{x-y^+}\\
&\leq& F(x)- \tfrac \mu 2 \tnorm{x-y} + \tfrac1{2L}\|G_L(y)\|_2^2 - \ve{G_L(y)}{x-y^+}\\
&=& F(x)- \tfrac \mu 2 \tnorm{x-y} - \tfrac1{2L}\|G_L(y)\|_2^2 - \ve{G_L(y)}{x-y}.
\end{eqnarray*}
where the inequality follows from the convexity of $h$ and Lemma~\ref{smalllemma}: since $G_L(y) - \nabla f(y)\in \partial h(y^+)$, the subgradient inequality gives $h(y^+) - h(x) + \ve{G_L(y) - \nabla f(y)}{x-y^+} \leq 0$. Rearranging gives the result.\qed
\end{proof}
Before stating the next result, which shows that $\varphi(x;y)$ is a quadratic lower bound, we give the following equivalence, which is the composite version of \eqref{eq:normequiv},
\begin{eqnarray}\label{eq:normequivcomposite}
\tfrac{\mu}2\tnorm{x- y^{++}} \overset{\eqref{eq:longproxstep}}{=} \tfrac{\mu}2\tnorm{x-y} + \langle x-y,G_L(y)\rangle + \tfrac1{2\mu}\tnorm{G_L(y)}.
\end{eqnarray}
\begin{lemma}\label{lem:varphixycanon}
For all $x,y\in\R^n$, the lower bound \eqref{eq:lbcomposite} has the canonical form
\begin{equation}\label{eq:varphixycanon}
\varphi(x;y) = \varphi^* + \tfrac{\mu}{2} \tnorm{x-y^{++}},
\end{equation}
where
\begin{equation}\label{eq:varphiminval}
\varphi^* = F(y^+) + \left(\tfrac1{2L} - \tfrac1{2\mu} \right)\|G_L(y)\|^2.
\end{equation}
\end{lemma}
\begin{proof}
Minimizing $\varphi(x;y)$ in \eqref{eq:lbcomposite} w.r.t. $x$, and using the definition in \eqref{eq:longproxstep}, yields the minimizer
\begin{equation}\label{eq:varphiminimizer}
y^{++} = \arg\min_x \varphi(x;y).
\end{equation}
The corresponding minimal value is
\begin{align*}
\varphi^* &\eqdef \min_x\varphi(x;y) = \varphi(y^{++};y)\\
&\overset{\eqref{eq:lbcomposite}}{=} F(y^+) + \langle G_L(y),y^{++}-y\rangle + \tfrac{\mu}2\|y^{++}-y\|^2 + \tfrac1{2L}\|G_L(y)\|^2\\
&\overset{\eqref{eq:longproxstep}}{=} F(y^+) - \tfrac1{\mu}\langle G_L(y),G_L(y)\rangle + \tfrac{\mu}2\|\tfrac1{\mu}G_L(y)\|^2 + \tfrac1{2L}\|G_L(y)\|^2\\
&= F(y^+) + \left(\tfrac1{2L} - \tfrac1{2\mu} \right)\|G_L(y)\|^2,
\end{align*}
which is \eqref{eq:varphiminval}. (Note also that \eqref{eq:varphiminimizer} and \eqref{eq:varphiminval} are the minimizer and minimum value of \eqref{eq:varphixycanon}, respectively.) Furthermore,
\begin{eqnarray*}
\varphi(x;y) &\overset{\eqref{eq:varphixycanon}}{=}& \varphi^* + \tfrac{\mu}{2} \tnorm{x-y^{++}}\\
&=& F(y^+) + \left(\tfrac1{2L} - \tfrac1{2\mu} \right)\|G_L(y)\|^2 + \tfrac{\mu}{2} \tnorm{x-y^{++}}\\
&\overset{\eqref{eq:normequivcomposite}}{=}& F(y^+) + \left(\tfrac1{2L} - \tfrac1{2\mu} \right)\|G_L(y)\|^2\\
&& + \tfrac{\mu}{2} \tnorm{x-y}+ \tfrac1{2\mu} \tnorm{G_L(y)} + \langle G_L(y), x-y\rangle\\
&=&F(y^+) + \tfrac1{2L}\|G_L(y)\|^2 + \tfrac{\mu}{2} \tnorm{x-y} +\langle G_L(y), x-y\rangle,
\end{eqnarray*}
which confirms that \eqref{eq:varphixycanon} is equivalent to \eqref{eq:lbcomposite}.\qed
\end{proof}
\begin{remark}
Lemma~\ref{lem:varphixycanon} shows that the lower bound \eqref{eq:lbcomposite} (equivalently \eqref{eq:varphixycanon}) is a quadratic lower bound for $F(x)$.
\end{remark}
Now, a sequence of lower bounds $\{\varphi_k(x)\}_{k=0}^\infty$ can be defined in the following way. Using \eqref{eq:lbcomposite} and a given initial point $x_0$, define the function
\begin{equation}\label{eq:compositecanonical}
\varphi_0(x) \eqdef \varphi(x;x_0) = \varphi_0^* + \tfrac{\mu}2\|x-v_0\|_2^2,
\end{equation}
where
\begin{eqnarray}\label{eq:minvalminmize}
\varphi_0^* \eqdef F(x_0^+) + \left(\tfrac1{2L} - \tfrac1{2\mu} \right)\|G_L(x_0)\|^2,\quad v_0 = x_0^{++}.
\end{eqnarray}
Differentiating \eqref{eq:compositecanonical} w.r.t. $x$ shows that the minimum value and minimizer of $\varphi_0(x)$ are given by \eqref{eq:minvalminmize}. This motivates the following construction
\begin{enumerate}
\item $\varphi_0(x) \eqdef \varphi(x;x_0) = \varphi_0^* + \tfrac{\mu}2\|x-v_0\|_2^2,$
\item For $k\geq 0$ and some point $y_k$ between $x_k$ and $v_k$ we recursively define
\begin{equation}\label{eq:varphiconvexcomb}
\varphi_{k+1}(x) \eqdef (1-\alpha_k) \varphi_k(x) + \alpha_k \varphi(x;y_k).
\end{equation}
\end{enumerate}
\begin{lemma}
For all $k \geq 0$, $\varphi_{k+1}$ can be written in the canonical form
\begin{eqnarray}\label{eq:canon}
\varphi_{k+1}(x) \eqdef \varphi_{k+1}^* + \tfrac{\mu}2\|x-v_{k+1}\|_2^2,
\end{eqnarray}
where
\begin{eqnarray}
v_{k+1} &\eqdef&(1-\alpha_k)v_k + \alpha_ky_k^{++}\label{eq:vkcomp}\\
\varphi_{k+1}^* &\eqdef& (1-\alpha_k)(\varphi_k^* + \tfrac{\mu}2\|v_{k+1}-v_{k}\|_2^2)\label{eq:phikcomp}\\
\notag
&&+\; \alpha_k\left(F(y_k^+) + \left(\tfrac1{2L} - \tfrac1{2\mu}\right)\|G_L(y_k)\|^2 + \tfrac{\mu}2\|v_{k+1} - y_k^{++}\|^2\right).
\end{eqnarray}
\end{lemma}
\begin{proof}
Using \eqref{eq:varphiconvexcomb} and \eqref{eq:canon}, for all $k\geq 0$ and all $x\in \R^n$, $\varphi_{k+1}(x)$ can be expressed in the form
\begin{eqnarray*}
\varphi_{k+1}(x) &=& (1-\alpha_k)(\varphi_k^* + \tfrac{\mu}2\|x-v_{k}\|_2^2)\\
&&+ \; \alpha_k\left(F(y_k^+)+ \left(\tfrac1{2L} - \tfrac1{2\mu}\right)\|G_L(y_k)\|^2 + \tfrac{\mu}2\|x - y_k^{++}\|^2\right)\\
&=& \varphi_{k+1}^* + \tfrac{\mu}2\|x-v_{k+1}\|_2^2.
\end{eqnarray*}
Taking the derivative of \eqref{eq:canon} w.r.t. $x$ shows that the minimizer of $\varphi_{k+1}(x)$ is $x = v_{k+1}$. Substituting $x = v_{k+1}$ into the above gives \eqref{eq:phikcomp}.\qed
\end{proof}
\begin{lemma}
An equivalent expression for $\varphi_{k+1}^*$ in \eqref{eq:phikcomp} is
\begin{eqnarray}\label{eq:varphistarequiv}
\varphi_{k+1}^* &=& (1-\alpha_k)\left(\varphi_k^* + \alpha_k\tfrac{\mu}2\|v_{k}-y_k^{++}\|_2^2\right)\\
\notag
&&+ \alpha_k\left(F(y_k^+) + \left(\tfrac1{2L} -\tfrac1{2\mu} \right)\|G_L(y_k)\|^2\right).
\end{eqnarray}
\end{lemma}
\begin{proof}
The proof follows the same arguments as for Lemma~\ref{l:equivphikstar}; noting that \eqref{eq:zzzzzz0} and \eqref{eq:vkcomp} are equivalent, and then combining \eqref{eq:vkequiv} and \eqref{eq:phikcomp} gives the result. \qed
\end{proof}
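As a minimal illustration (reusing the hypothetical \texttt{prox\_grad} helper sketched earlier), one update of the pair $(v_k, \varphi_k^*)$ according to \eqref{eq:vkcomp} and \eqref{eq:varphistarequiv} could be written as follows.
\begin{verbatim}
import numpy as np

def composite_lb_update(v, varphi_star, y, F, grad_f, mu, L, alpha):
    # One update of the composite lower bound, following
    # (eq:vkcomp) and (eq:varphistarequiv).
    GL = prox_grad(y, grad_f, L)     # proximal gradient G_L(y_k)
    y_plus = y - GL / L              # y_k^+
    y_pp = y - GL / mu               # y_k^{++}
    varphi_star = (1 - alpha) * (varphi_star
        + alpha * mu / 2 * np.dot(v - y_pp, v - y_pp)) \
        + alpha * (F(y_plus)
        + (1 / (2 * L) - 1 / (2 * mu)) * np.dot(GL, GL))
    v = (1 - alpha) * v + alpha * y_pp
    return v, varphi_star
\end{verbatim}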
\begin{lemma}
For all $k\geq 0 $ and $\forall x\in\R^n$, $\varphi_k(x) \leq F(x)$.
\end{lemma}
\begin{proof}
When $k=0$, the result holds trivially. Now assume that $\varphi_k(x) \leq F(x)$. Then
\begin{eqnarray*}
\varphi_{k+1}(x) &=&(1-\alpha_k)\varphi_k(x) + \alpha_k\varphi(x;y_k)\\
&\leq& (1-\alpha_k)F(x) + \alpha_kF(x) = F(x).
\end{eqnarray*}
This completes the proof.\qed
\end{proof}
\section{Algorithms and Convergence Guarantees for Smooth Functions}\label{sec:smooth}
The purpose of this section is to demonstrate that the UES framework, and the previously presented lower bounds, are \emph{useable} definitions that give rise to \emph{efficient implementable algorithms}. Throughout this section we consider smooth optimization problems (problems of the form \eqref{eq:Problem} with $h\equiv 0$) and, as for all results in this work, we suppose that Assumption~\ref{A_SCL} holds.
We present two algorithms whose iterates fit the Underestimate Sequence framework described in Section~\ref{sec:UES}, and use the lower bounds developed in Section~\ref{s:lbsmooth}. The first algorithm is a gradient descent type method, while the second algorithm is a gradient descent type method that incorporates an acceleration strategy. As will be shown, both algorithms are supported by convergence guarantees, which are established via the UES framework.
\subsection{An Underestimate Sequence Algorithm for Smooth Functions}
We are now ready to present an algorithm that fits our UES framework; a brief description follows.
\begin{algorithm}[h!]
\caption{Smooth Underestimate Sequence Algorithm (SUESA)}
\label{alg:SUESA}
\begin{algorithmic}[1]
\STATE Initialization: Set $k=0$, $\epsilon >0$, initial point $x_0\in\R^n$ and compute $\mu$, $L$.
\STATE Set $\phi_0(x)$ as in \eqref{eq:phi0}, with $v_0$ and $\phi_0^*$ as in \eqref{eq:c0v0}, and let $\alpha_k= \frac{\mu}{L}$.
\WHILE {$f(x_k) - \phi_k^* > \epsilon$}
\STATE Set $y_k = x_k$ and $y_k^{++} = x_k^{++}$.
\STATE Set $x_{k+1} = x_k - \frac{1}{L} \nabla f(x_k)$.
\STATE Update $v_{k+1}$ and $\phi_{k+1}^*$ as in \eqref{eq:zzzzzz0} and \eqref{eq:zzz1}, respectively.
\STATE $k=k+1$.
\ENDWHILE
\end{algorithmic}
\end{algorithm}
The Smooth (functions) UnderEstimate Sequence Algorithm (SUESA) presented in Algorithm~\ref{alg:SUESA} solves the problem \eqref{eq:Problem} in the smooth case, i.e., when $h = 0$. The algorithm proceeds as follows. First, an initial point $x_0\in \R^n$ is chosen, as well as some stopping tolerance $\epsilon>0$. Secondly, the point $v_0 = x_0^{++}$ (i.e., $v_0$ is the long step from $x_0$) is constructed, as well as the lower bound $\phi_0(x)$ with minimum value $\phi_0^*$. The algorithm uses the fixed averaging parameter $\alpha_k = \mu/L$ at every iteration. Next, the main loop commences and an iteration proceeds as follows. One sets $y_k = x_k$ (i.e., $y_k$ is not explicitly used in SUESA); $x_k$ is updated by taking a gradient descent step with the step size $\tfrac{1}{L}$, resulting in the new point $x_{k+1}$; the point $v_{k+1} = (1-\alpha_k)v_k + \alpha_k x_k^{++}$ is constructed via \eqref{eq:zzzzzz0} and the lower bound $\phi_{k+1}(x)$ is updated via \eqref{eq:zzz1}.
The algorithm constructs two points at every iteration, namely $x_k$ and $v_k$, and the values $\phi_{k}(x)$ and $\phi_{k}^*$. The point $v_k$ and the value $\phi_{k}^*$ are used for the lower bound, which is essential for the stopping criterion. The stopping condition $f(x_k) - \phi_k^* \leq \epsilon$ provides a certificate of optimality; once the stopping condition is satisfied, it is guaranteed that $x_k$ gives a function value $f(x_k)$ that is at most $\epsilon$ from the true solution $f^*$.
If Step~5 is considered in isolation, then one sees that at every iteration of SUESA, the point $x_k$ is updated via a standard gradient descent step. That is, a step of size $1/L$ in the direction of the negative gradient is taken from the current point $x_k$, resulting in the new point $x_{k+1}$. However, SUESA is different from the standard gradient descent method, because SUESA also involves several other ingredients, including the points $v_k$ and lower bound values $\phi_k^*$.
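For illustration only, the following is a minimal Python sketch of SUESA; it assumes that callables \texttt{f} and \texttt{grad\_f} and the constants $\mu$ and $L$ are available.
\begin{verbatim}
import numpy as np

def suesa(f, grad_f, x0, mu, L, eps=1e-8, max_iter=100000):
    # A sketch of SUESA for smooth f (y_k = x_k throughout).
    alpha = mu / L
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    v = x - g / mu                              # v_0 = x_0^{++}
    phi_star = f(x) - np.dot(g, g) / (2 * mu)   # phi_0^*
    for _ in range(max_iter):
        if f(x) - phi_star <= eps:              # certificate of eps-optimality
            break
        y, gy = x, grad_f(x)                    # y_k = x_k
        y_pp = y - gy / mu                      # y_k^{++}
        x = x - gy / L                          # gradient step, Step 5
        # lower-bound update via (eq:zzzzzz0) and (eq:phistarequiv)
        phi_star = (1 - alpha) * (phi_star
            + alpha * mu / 2 * np.dot(v - y_pp, v - y_pp)) \
            + alpha * (f(y) - np.dot(gy, gy) / (2 * mu))
        v = (1 - alpha) * v + alpha * y_pp
    return x, phi_star
\end{verbatim}
The returned pair contains both the approximate minimizer and the certified lower bound $\phi_k^*$, so that $f(x_k) - \phi_k^* \leq \epsilon$ guarantees $f(x_k) - f^* \leq \epsilon$.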
The following result provides a convergence guarantee for SUESA. In particular, Theorem~\ref{thm1} shows that the iterates generated by Algorithm~\ref{alg:SUESA} form an underestimate sequence (i.e., they satisfy Definition~\ref{def:UES}) and therefore, Algorithm~\ref{alg:SUESA} is guaranteed to converge (linearly) to the solution of problem \eqref{eq:Problem}.
\begin{theorem}\label{thm1}
Let Assumption~\ref{A_SCL} hold. The sequences $\{x_k\}_{k=0}^\infty$, $\{\phi_k(x)\}_{k=0}^\infty$ and $\{\alpha_k\}_{k=0}^\infty$ generated by SUESA (Algorithm~\ref{alg:SUESA}) form a UES.
\end{theorem}
\begin{proof}
We must show that the iterates generated by Algorithm~\ref{alg:SUESA} satisfy the conditions of Definition~\ref{def:UES}. Note that, $\alpha_k = \mu/L \in (0,1)$ for all $k \geq 0$, so by Lemma~\ref{lem:lb}, \eqref{eq:def1} holds. Thus, it remains to prove \eqref{eq:def2}. Combining the definition of $x_{k+1}$ (Step~5 in Algorithm~\ref{alg:SUESA}) with \eqref{eq:ass2}, gives
\begin{eqnarray}\label{eq:zzzz3}
f(x_{k+1}) &\leq& f(x_k) + \ve{\nabla f(x_k)}{-\tfrac{1}{L} \nabla f(x_k)}+\tfrac{L}{2} \tnorm {\tfrac{1}{L} \nabla f(x_k)} \notag\\
&=& f(x_k) - \tfrac{1}{2L} \tnorm{\nabla f(x_k)}.
\end{eqnarray}
Subtracting $\phi_{k+1}^*$ in \eqref{eq:phistarequiv} from both sides of the above gives,
\begin{eqnarray*}
f(x_{k+1}) - \phi_{k+1}^*
&\leq& f(x_k) - \tfrac{1}{2L} \tnorm{\nabla f(x_k)}
- (1-\alpha_k)\left(\phi_k^* + \alpha_k\tfrac{\mu}2\|v_{k}-y_k^{++}\|_2^2\right)\\
&& -\alpha_k( f(y_k) - \tfrac{1}{2\mu}\tnorm{\nabla f(y_k)})\\
&\leq & (1- \alpha_k) (f(x_k) -\phi_k^*) - (\tfrac{1}{2L} - \tfrac{\alpha_k}{2\mu} )\tnorm{\nabla f(x_k)} \\
&&- (1-\alpha_k)\alpha_k\tfrac{\mu}2\|v_{k}-y_k^{++}\|_2^2 \\
&\leq & (1- \alpha_k) (f(x_k) -\phi_k^*),
\end{eqnarray*}
where the last step follows because $\alpha_k = \tfrac{\mu}{L}$, so $\tfrac{1}{2L} - \tfrac{\alpha_k}{2\mu}=\tfrac{1}{2L} - \tfrac{\mu}{2\mu L}=0$, and $(1-\alpha_k)\alpha_k\tfrac{\mu}2\|v_{k}-y_k^{++}\|_2^2\geq 0$.
Therefore, the sequences $\{x_k\}_{k=0}^\infty$, $\{\phi_k(x)\}_{k=0}^\infty$ and $\{\alpha_k\}_{k=0}^\infty$ generated by Algorithm~\ref{alg:SUESA}
form a UES. \qed
\end{proof}
\begin{corollary}\label{coro1}
Let Assumption~\ref{A_SCL} hold. Then the sequences $\{x_k\}_{k=0}^\infty$, $\{\phi_k(x)\}_{k=0}^\infty$ and $\{\alpha_k\}_{k=0}^\infty$ generated by SUESA (Algorithm~\ref{alg:SUESA}) form a UES, so SUESA converges at a linear rate
\begin{align}
f(x_k) -\phi_k^* \leq (1- \tfrac{\mu}{L})^k (f(x_0) -\phi_0^*).
\end{align}
\end{corollary}
Corollary~\ref{coro1} is simply a consequence of Proposition~\ref{prop:UES}, which states that if $\{x_k\}_{k=0}^\infty$, $\{\phi_k(x)\}_{k=0}^\infty$ and $\{\alpha_k\}_{k=0}^\infty$ form an underestimate sequence, then \eqref{eq:linconverge} holds (i.e., linear convergence). Theorem~\ref{thm1} shows that SUESA (Algorithm~\ref{alg:SUESA}) generates iterates forming a UES, implying that SUESA converges linearly to the optimal solution. Moreover, $\alpha_k = \tfrac{\mu}L$ for all $k$ in SUESA, so recalling the definition of $\lambda_k$ in Proposition~\ref{prop:UES} confirms the rate $(1- \tfrac{\mu}{L})$ in Corollary~\ref{coro1}.
We remark that there are other ways to prove convergence of Algorithm~\ref{alg:SUESA}. For example, one can proceed by proving that the distance between $x_k$ and the minimizer of the lower bound in $k$th iteration shrinks at a fixed rate. That is, since $\alpha_k = \tfrac{\mu}{L}$, we have the following equality,
\begin{align}\label{eq:xkvk}
x_{k+1} - v_{k+1} &= \left(x_k - \tfrac{1}{L} \nabla f(x_k)\right) - \left((1- \alpha_k)v_k + \alpha_k (x_k - \tfrac{1}{\mu} \nabla f(x_k) )\right) \notag\\
& = (1- \tfrac{\mu}{L}) (x_k - v_k).
\end{align}
Equation \eqref{eq:xkvk} illustrates that, after each iteration of Algorithm~\ref{alg:SUESA}, the line joining $x_{k+1}$ and $v_{k+1}$ is parallel to the line joining $x_k$ and $v_k$ from the previous iteration (see the blue lines in Figure~\ref{fig:a1o1}). Moreover, the distance between the two points is reduced by precisely $(1- \tfrac{\mu}{L})$ at every iteration. Intuitively, the solution $x_k$ and the minimizer $v_k$ are becoming ever closer, and eventually they both converge to the optimal solution $x^*.$
One can visualize this fact using the following toy example. Consider the (smooth) regularized logistic regression problem, i.e., problem \eqref{eq:Problem} with $h=0$ and
\begin{align*}
f(x) = \sum_{i=1}^m \log(1+\exp(-y_i \ve{x}{a_i})) + \tfrac \lambda 2 \|x\|^2,
\end{align*}
where $a_i \in \R^n$ is the $i$th feature vector with corresponding binary label $y_i\in \{-1,+1\}$.
For this example we randomly generate 100 two-dimensional data points with binary labels $\{a_i, y_i\}$ (so $m=100$ and $n=2$), as shown in the left-hand plot in Figure~\ref{fig:a1o1}. (Each point $a_i$ is plotted on a 2D grid and colored green or red to highlight its label $y_i$.) The parameter is $\lambda= 0.01$, so the strong convexity constant is $\mu=0.01$. Algorithm~\ref{alg:SUESA} is used to solve this problem, starting from the point $x_0 = (-20,10)^T$, and the iterates are shown in the right-hand plot in Figure~\ref{fig:a1o1}.
\begin{figure}[H]
\centering
\includegraphics[scale=.35]{fig1.eps}
\includegraphics[scale=.3]{a1.eps}
\caption{Left: Randomly generated two classes of 2D data. Right: A 2D illustration for Option 1. The blue, red and green points represent $\{x_k\}, \{v_k\}, \{x_k^{++}\}$ respectively.}
\label{fig:a1o1}
\end{figure}
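The following Python sketch reproduces this toy experiment. It is a minimal re-implementation for illustration only, not the code behind Figure~\ref{fig:a1o1}: the random data generation, the spectral-norm estimate of $L$, and the stopping tolerance are our own (hypothetical) choices, while the updates of $v_{k+1}$ and $\phi_{k+1}^*$ restate the formulas used in the proof of Theorem~\ref{thm1} with $y_k = x_k$.
\begin{verbatim}
# Minimal sketch of SUESA on the toy problem above (illustration only).
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 100, 2, 0.01
A = rng.normal(size=(m, n))                       # feature vectors a_i
y = np.sign(A @ np.array([1.0, -1.0]) + 0.1 * rng.normal(size=m))

def f(x):   # regularized logistic loss
    return np.sum(np.log1p(np.exp(-y * (A @ x)))) + 0.5 * lam * x @ x

def grad(x):
    return A.T @ (-y / (1.0 + np.exp(y * (A @ x)))) + lam * x

mu = lam                                          # strong convexity
L = 0.25 * np.linalg.norm(A, 2) ** 2 + lam        # smoothness estimate
alpha = mu / L

x = np.array([-20.0, 10.0])
g = grad(x)
v = x - g / mu                                    # v_0 = x_0^{++}
phi_star = f(x) - (g @ g) / (2 * mu)              # phi_0^*

for k in range(200000):
    if f(x) - phi_star <= 1e-6:                   # certified accuracy
        break
    g = grad(x)
    x_pp = x - g / mu                             # x_k^{++}
    # lower-bound update (formulas from the proof of Theorem 1)
    phi_star = (1 - alpha) * (phi_star
                + alpha * mu / 2 * (v - x_pp) @ (v - x_pp)) \
               + alpha * (f(x) - (g @ g) / (2 * mu))
    v = (1 - alpha) * v + alpha * x_pp            # minimizer of phi_{k+1}
    x = x - g / L                                 # gradient step
\end{verbatim}
Note that the termination test uses the maintained gap $f(x_k)-\phi_k^*$, so the loop exits with a certificate of $\epsilon$-optimality.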
\subsection{An Accelerated Underestimate Sequence Algorithm for Smooth Functions}
We now present an accelerated first order algorithm for solving problems of the form \eqref{eq:Problem} when $h=0$; a description will follow.
\begin{algorithm}
\caption{Accelerated Smooth Underestimate Sequence Algorithm (ASUESA)}
\label{alg:ASUESA}
\begin{algorithmic}[1]
\STATE Initialization: Set $k=0$, $\epsilon >0$, initial point $x_0\in\R^n$ and compute $\mu$, $L$.
\STATE Set $\phi_0(x)$ as in \eqref{eq:phi0}, with $v_0$ and $\phi_0^*$ as in \eqref{eq:c0v0}. Let $\alpha_k= \sqrt{\frac{\mu}{L}}$, $\beta_k = \frac{1}{1+\alpha_k}$.
\WHILE {$f(x_k) - \phi_k^* > \epsilon$}
\STATE Set $y_k = \beta_k x_k + (1- \beta_k) v_k$.
\STATE Set $x_{k+1} = y_k - \frac{1}{L} \nabla f(y_k).$
\STATE Update $v_{k+1}$ and $\phi_{k+1}^*$ as in \eqref{eq:zzzzzz0} and \eqref{eq:zzz1}, respectively.
\STATE $k=k+1$.
\ENDWHILE
\end{algorithmic}
\end{algorithm}
The Accelerated Smooth UnderEstimate Sequence Algorithm (ASUESA) presented in Algorithm~\ref{alg:ASUESA} solves \eqref{eq:Problem} in the smooth case, i.e., when $h = 0$, and can be described as follows. Algorithm initialization is similar to that of SUESA (Algorithm~\ref{alg:SUESA}): an initial point $x_0\in \R^n$ and some stopping tolerance $\epsilon>0$ are chosen, the point $v_0 = x_0^{++}$ is constructed, and the lower bound $\phi_0(x)$ and minimum value $\phi_0^*$ are evaluated. For ASUESA one sets $\alpha_k = \sqrt{\mu/L}$ and the parameter $\beta_k = \frac{1}{1+\alpha_k}$ is also used. Parameter $\alpha_k$ is fixed for all iterations, and consequently so too is $\beta_k$. The main loop proceeds as follows. At every iteration one sets $y_k$ to be a convex combination of the points $x_k$ and $v_k$; a gradient descent step is taken \emph{from} $y_k$, resulting in the new point $x_{k+1}$; the point $v_{k+1}$ is constructed using \eqref{eq:zzzzzz0} and the lower bound $\phi_{k+1}(x)$ is updated via \eqref{eq:zzz1}.
Notice that Algorithm~\ref{alg:ASUESA} can be viewed as an accelerated version of Algorithm~\ref{alg:SUESA}. In contrast to Algorithm~\ref{alg:SUESA}, ASUESA constructs \emph{three} points at every iteration, namely $x_k$, $v_k$ and $y_k$, where the intermediate vector $y_k$ is a convex combination of the points $x_k$ and $v_k$ (i.e., for ASUESA $x_k \neq y_k$.) Notice also that $x_{k+1}$ is the result of a gradient descent step taken from the point $y_k$. The variable $\phi_{k}^*$ is also maintained and is used in the stopping condition.
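As a minimal sketch, the main loop of ASUESA can be written as follows, continuing the Python sketch given for SUESA above (with $x$, $v$ and $\phi^*$ re-initialized as there); the updates of $v_{k+1}$ and $\phi_{k+1}^*$ are our reading of \eqref{eq:zzzzzz0} and \eqref{eq:zzz1}, namely the SUESA updates with $y_k$ in place of $x_k$.
\begin{verbatim}
# Sketch of the ASUESA main loop (continuing the SUESA sketch above;
# x, v and phi_star are re-initialized as there).
alpha = (mu / L) ** 0.5
beta = 1.0 / (1.0 + alpha)

while f(x) - phi_star > 1e-6:
    yk = beta * x + (1.0 - beta) * v       # convex combination (Step 4)
    g = grad(yk)
    y_pp = yk - g / mu                     # y_k^{++}
    phi_star = (1 - alpha) * (phi_star
                + alpha * mu / 2 * (v - y_pp) @ (v - y_pp)) \
               + alpha * (f(yk) - (g @ g) / (2 * mu))
    v = (1 - alpha) * v + alpha * y_pp
    x = yk - g / L                         # gradient step from y_k (Step 5)
\end{verbatim}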
The following result provides a convergence guarantee for ASUESA. Theorem~\ref{thm2} shows that the iterates generated by Algorithm~\ref{alg:ASUESA} fit the UES framework (i.e., they satisfy Definition~\ref{def:UES}) and therefore, Algorithm~\ref{alg:ASUESA} is guaranteed to converge (linearly at the optimal rate) to the solution of problem \eqref{eq:Problem} (see Corollary~\ref{coro2}).
\begin{theorem}\label{thm2}
Let Assumption~\ref{A_SCL} hold. The sequences $\{x_k\}_{k=0}^\infty$, $\{\phi_k(x)\}_{k=0}^\infty$ and $\{\alpha_k\}_{k=0}^\infty$ generated by ASUESA in Algorithm~\ref{alg:ASUESA} form a UES.
\end{theorem}
\begin{proof}
At every iteration of ASUESA the function value is reduced as follows
\begin{align}
\notag
f(y_k^+) = f(y_k - \tfrac{1}{L}\nabla f (y_k))&\leq f(y_k) + \langle\nabla f(y_k),\tfrac{-\nabla f(y_k)}{L}\rangle + \tfrac{L}{2}\tnorm{ \tfrac{\nabla f(y_k)}{L}}\\
& = f(y_k) -\tfrac{1}{2L}\tnorm{\nabla f(y_k) }. \label{eq:ibuij}
\end{align}
Moreover,
\begin{eqnarray*}
f(x_{k+1})-\phi_{k+1}^* &\overset{\eqref{eq:phistarequiv}}{=} &f(y_k^+) -(1-\alpha_k) \phi_k^*\\
&&- \alpha_k\left(f(y_k) - \tfrac{\tnorm{\nabla f(y_k) }}{2\mu} + \tfrac{\mu}{2}(1-\alpha_k )\tnorm{v_k -y_k^{++}}\right) \\
&\overset{\eqref{eq:ibuij}}{\leq} & f(y_k) - \tfrac{1}{2L}\tnorm{\nabla f(y_k) } - (1-\alpha_k)\phi_k^* \\
&&- \alpha_k\left(f(y_k) - \tfrac{\tnorm{\nabla f(y_k) }}{2\mu} + \tfrac{\mu}{2}(1-\alpha_k )\tnorm{v_k -y_k^{++}}\right)\\
&=& (1-\alpha_k)(f(y_k) - \phi_k^*) - \left(\tfrac{1}{2L} - \tfrac{\alpha_k}{2\mu}\right)\tnorm{\nabla f(y_k) }\\
&& - \alpha_k\tfrac{\mu}{2}(1-\alpha_k )\tnorm{v_k -y_k^{++}}.
\end{eqnarray*}
Completing the square in the last term (recall that $y_k^{++} = y_k - \tfrac1{\mu}\nabla f(y_k)$) one obtains
\begin{eqnarray}
&&\hspace{-4mm}f(x_{k+1})-\phi_{k+1}^* \notag\\
&\leq & (1-\alpha_k)\left(f(y_k)-\phi_k^* \right)- \left(\tfrac{1}{2L} - \tfrac{\alpha_k}{2\mu}\right)\tnorm{\nabla f(y_k) } \notag \\
&&-\tfrac{\mu}{2}\alpha_k(1-\alpha_k) \Big(\tnorm{v_k - y_k}+\tnorm{\tfrac{1}{\mu}\nabla f(y_k) }
+ \tfrac{2}{\mu} \langle\nabla f(y_k),v_k -y_k\rangle\Big) \notag\\
&\leq & (1-\alpha_k)\left(f(y_k)-\phi_k^* \right)- \left(\tfrac{1}{2L} - \tfrac{\alpha_k}{2\mu}+ \tfrac{\alpha_k(1-\alpha_k)}{2\mu}\right)\tnorm{\nabla f(y_k) } \notag \\
&&-\alpha_k(1-\alpha_k) \langle\nabla f(y_k),v_k -y_k\rangle \notag\\
&= & (1-\alpha_k)\left(f(y_k)-\phi_k^* \right) - \left(\tfrac{1}{2L} - \tfrac{\alpha_k^2}{2\mu}\right)\tnorm{\nabla f(y_k) } \notag\\
&& - \alpha_k(1-\alpha_k)\langle\nabla f(y_k),v_k -y_k\rangle\notag\\
&=& (1-\alpha_k)\left(f(y_k)-\phi_k^* \right)- \alpha_k(1-\alpha_k)\langle\nabla f(y_k),v_k -y_k\rangle,\label{eq:fphi}
\end{eqnarray}
where the second inequality follows by discarding the nonpositive term $-\tfrac{\mu}{2}\alpha_k(1-\alpha_k)\tnorm{v_k - y_k}$, and the last step follows because $\alpha_k = \sqrt{\frac{\mu}{L}}$ in ASUESA, so $\tfrac{1}{2L} - \tfrac{\alpha_k^2}{2\mu} = 0$.
Now, rearranging the expression for $y_k$ in Step~4 in Algorithm~\ref{alg:ASUESA} gives
\begin{equation}\label{eq:vk}
v_k = \tfrac1{1-\beta_k}(y_k - \beta_k x_k) = y_k + \tfrac{\beta_k}{1-\beta_k}(y_k - x_k),
\end{equation}
and notice also that $\beta_k = \tfrac{1}{1+\alpha_k}$ for all $k$, so
\begin{eqnarray}\label{eq:alphabeta1}
1-\beta_k = 1-\tfrac{1}{1+\alpha_k} = \tfrac{\alpha_k}{1+\alpha_k} =\alpha_k\beta_k \quad \Rightarrow \quad \tfrac{\alpha_k\beta_k}{1-\beta_k}= 1.
\end{eqnarray}
Thus, by the convexity of $f$ we have
\begin{eqnarray}\label{eq:pqr}
-\langle\nabla f(y_k),v_k -y_k\rangle \notag
&=& -\langle\nabla f(y_k),y_k + \tfrac{\beta_k}{1- \beta_k} (y_k-x_k) -y_k\rangle \notag\\
&\leq& \tfrac{\beta_k}{1- \beta_k} \big(f(x_k) - f(y_k) \big).
\end{eqnarray}
Using \eqref{eq:pqr} in \eqref{eq:fphi} gives
\begin{align*}
f(x_{k+1})-\phi_{k+1}^*
\leq &(1-\alpha_k)\left(f(y_k)-\phi_k^* \right)+\alpha_k(1-\alpha_k)\tfrac{\beta_k}{1-\beta_k}(f(x_k)-f(y_k))\\
\overset{\eqref{eq:alphabeta1}}{=} &(1-\alpha_k)\left(f(y_k)-\phi_k^* \right)+(1-\alpha_k)(f(x_k)-f(y_k))\\
= & (1-\alpha_k) (f(x_k)-\phi_k^*).
\end{align*}
Thus, the iterates generated by ASUESA form a UES. \qed
\end{proof}
\begin{corollary}\label{coro2}
Let Assumption~\ref{A_SCL} hold. Then, the sequence of iterates $\{x_k\}_{k\geq0}$ generated by Algorithm \ref{alg:ASUESA} exhibits the optimal linear rate of convergence
\begin{align*}
f(x_k) -\phi_k^* \leq \left(1- \sqrt{\tfrac{\mu}{L}}\right)^k (f(x_0) -\phi_0^*).
\end{align*}
\end{corollary}
Corollary~\ref{coro2} shows that ASUESA converges linearly at the optimal rate. The difference in convergence rates between Algorithms~\ref{alg:SUESA}~and~\ref{alg:ASUESA} is essentially explained by the quadratic term $\tnorm{v_k -y_k^{++}}$, which is entirely ignored in the proof of Theorem \ref{thm1}. Thus, in the proof of Theorem~\ref{thm2}, one is able to incorporate another term containing $\tnorm{\nabla f(y_k)}$, which leads to a larger allowable value of $\alpha_k$, and ultimately, a tighter bound for Algorithm~\ref{alg:ASUESA}.
\section{Algorithms and Convergence Guarantees for Composite Functions}\label{sec:composite}
The purpose of this section is to extend the results presented in Section~\ref{sec:smooth} from the smooth to the composite setting, i.e., here we suppose that $h(x) \neq 0$. In particular, we present two algorithms whose iterates fit the Underestimate Sequence framework described in Section~\ref{sec:UES}, and use the lower bounds developed in Section~\ref{s:lbcomposite}. Both algorithms appear to fit the composite setting very naturally; the first algorithm is a proximal gradient descent type method, while the second algorithm is an accelerated proximal gradient variant. The algorithms also incorporate stopping conditions that provide a certificate of optimality. We establish convergence guarantees for both algorithms via the UES framework and for all results we suppose that Assumption~\ref{A_SCL} holds.
\subsection{A Composite Underestimate Sequence Algorithm}
We now present an algorithm to solve \eqref{eq:Problem}, which is based on the UES framework. A brief description will follow.
\begin{algorithm}
\caption{Composite UES Algorithm (CUESA)}
\label{alg:CUESA}
\begin{algorithmic}[1]
\STATE Initialization: Set $k=0$, $\epsilon >0$, initial point $x_0\in\R^n$ and compute $\mu$, $L$.
\STATE Set $\varphi_0(x)$ as in \eqref{eq:compositecanonical}, with $v_0$ and $\varphi_0^*$ as in \eqref{eq:minvalminmize}. Let $\alpha_k= \frac{\mu}{L}$.
\WHILE {$F(x_k) - \varphi_k^* > \epsilon$}
\STATE Set $x_{k+1} = x_k - \tfrac{1}{L} G_L (x_k)$.
\STATE Set $y_k = x_k$, $y_k^+ = x_k^+$, and $y_k^{++} = x_k^{++}$.
\STATE Update $v_{k+1}$ and $\varphi_{k+1}^*$ as in \eqref{eq:vkcomp} and \eqref{eq:phikcomp} respectively.
\STATE $k=k+1$.
\ENDWHILE
\end{algorithmic}
\end{algorithm}
The Composite (functions) UnderEstimate Sequence Algorithm (CUESA) presented in Algorithm~\ref{alg:CUESA} solves problem~\eqref{eq:Problem} when $h \neq 0$, and is described as follows. First, an initial point $x_0\in \R^n$ is chosen, as well as some stopping tolerance $\epsilon>0$. Secondly, the point $v_0 = x_0^{++}$ is constructed, as well as the lower bound $\varphi_0(x)$ with minimum value $\varphi_0^*$. The algorithm uses the fixed parameter $\alpha_k = \tfrac{\mu}{L}$ at every iteration; the step size of the proximal gradient step is $\tfrac{1}{L}$. Next, the main loop commences and an iteration proceeds as follows. One sets $y_k = x_k$ (so $y_k$ is not explicitly used in CUESA); $x_k$ is updated by taking a \emph{proximal} gradient descent step with the step size $\tfrac{1}{L}$, resulting in the new point $x_{k+1}$; the point $v_{k+1} = x_{k+1}^{++}$ is constructed and the lower bound $\varphi_{k+1}(x)$ is updated.
The algorithm utilizes two points at every iteration, namely $x_k$ and $v_k$, as well as the values $\varphi_{k}(x)$ and $\varphi_{k}^*$. The point $v_k$ and the value $\varphi_{k}^*$ are used for the lower bound, which is essential for the stopping criterion.
Considering only Step~4, one sees that at every iteration of CUESA the point $x_k$ is updated via a proximal gradient descent step. That is, a step of size $1/L$ in the direction of the negative proximal gradient is taken from the current point $x_k$, resulting in the new point $x_{k+1}$. What makes CUESA distinct from a standard proximal gradient method is the inclusion of several other ingredients related to the lower bound $\varphi_k(x)$, which guarantee an $\epsilon$-optimal solution.
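As an illustration, the sketch below shows the composite gradient mapping for the common case $h(x) = \lambda_2\|x\|_1$, whose proximal operator is the soft-thresholding map. We use the standard definition $G_L(x) = L\big(x - \mathrm{prox}_{h/L}(x - \tfrac1L\nabla f(x))\big)$, consistent with $x^+ = x - \tfrac1L G_L(x)$; reading \eqref{eq:shortproxstep} this way is our assumption.
\begin{verbatim}
# Sketch: composite gradient mapping G_L for h(x) = lam2 * ||x||_1.
# The prox of (lam2/L)*||.||_1 is coordinate-wise soft-thresholding.
import numpy as np

def soft_threshold(z, t):
    # shrink each coordinate of z towards zero by t
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def G_L(x, grad_f, L, lam2):
    x_plus = soft_threshold(x - grad_f(x) / L, lam2 / L)  # prox step
    return L * (x - x_plus)              # so that x^+ = x - G_L(x)/L

# Step 4 of CUESA then reads: x = x - G_L(x, grad_f, L, lam2) / L
\end{verbatim}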
Now we present a convergence guarantee for CUESA. Theorem~\ref{thm3} shows that the iterates generated by Algorithm~\ref{alg:CUESA} form an underestimate sequence (i.e., they satisfy Definition~\ref{def:UES}) and therefore, Algorithm~\ref{alg:CUESA} is guaranteed to converge (linearly) to the solution of problem \eqref{eq:Problem}.
\begin{theorem}\label{thm3}
Let Assumption~\ref{A_SCL} hold. The sequences $\{x_k\}_{k=0}^\infty$, $\{\varphi_k(x)\}_{k=0}^\infty$ and $\{\alpha_k\}_{k=0}^\infty$ generated by CUESA (Algorithm~\ref{alg:CUESA}) form a UES.
\end{theorem}
\begin{proof}
From Step~5 in CUESA, one sees that $y_k = x_k$ for all $k$, so it also follows that $y_k^+ = x_{k+1}$ for all $k$. Now, using $y = x= x_k$ in the lower bound \eqref{eq:lbcomposite} gives
\begin{eqnarray}\label{eq:compFreduce}
F(x_{k+1}) \leq F(x_k) - \tfrac1{2L} \|G_L(x_k)\|_2^2.
\end{eqnarray}
Thus,
\begin{align*}
&F(x_{k+1})-\varphi_{k+1}^*\\
&=(1-\alpha_k) F(x_{k+1}) +\alpha_k F(x_{k+1}) -\varphi_{k+1}^*\\
&\overset{\eqref{eq:phistarequiv}}{=} (1-\alpha_k) F(x_{k+1}) +\alpha_k F(x_{k+1}) -
(1-\alpha_k)\left(\varphi_k^* + \alpha_k\tfrac{\mu}2\|v_{k}-y_k^{++}\|_2^2\right)\\
&\qquad-\alpha_k\left(F(y_k^+) + \left(\tfrac1{2L} -\tfrac1{2\mu} \right)\|G_L(y_k)\|^2\right)\\
&= (1-\alpha_k) F(x_{k+1}) -
(1-\alpha_k)\left(\varphi_k^* + \alpha_k\tfrac{\mu}2\|v_{k}-y_k^{++}\|_2^2\right)\\
&\qquad-\alpha_k\left(\tfrac1{2L} -\tfrac1{2\mu} \right)\|G_L(y_k)\|^2\\
&\leq (1-\alpha_k) \left(F(x_{k+1}) - \varphi_k^*\right)-\alpha_k\left(\tfrac1{2L} -\tfrac1{2\mu} \right)\|G_L(y_k)\|^2\\
&\overset{\eqref{eq:compFreduce}}{\leq}
(1-\alpha_k)\left( F(x_{k}) - \tfrac1{2L} \|G_L(x_k)\|^2 - \varphi_k^*\right)- \alpha_k\left(\tfrac1{2L}- \tfrac{1}{2\mu}\right)\|G_L(x_k)\|^2\\
&=
(1-\alpha_k)
\left( F(x_{k})- \varphi_k^*\right)+ \left(
\tfrac{\alpha_k}{2\mu} -\tfrac{\alpha_k}{2L}
-(1-\alpha_k) \tfrac1{2L}
\right) \| G_L(x_k)\|^2\\
&\leq (1-\alpha_k)(F(x_k)-\varphi_k^*),
\end{align*}
where the last step follows because $\alpha_k = \tfrac{\mu}{L}$, so
$\tfrac{\alpha_k}{2\mu} -\tfrac1{2L}= \tfrac{1}{2L}-\tfrac1{2L} = 0.$ Hence the sequences generated by CUESA form a UES.\qed
\end{proof}
\begin{corollary}\label{coro3}
Let Assumption~\ref{A_SCL} hold. Then, the sequence of iterates $\{x_k\}_{k\geq0}$ generated by Algorithm \ref{alg:CUESA} exhibits a linear rate of convergence
\begin{align*}
F(x_k) -\varphi_k^* \leq \left(1- \tfrac{\mu}{L}\right)^k (F(x_0) -\varphi_0^*).
\end{align*}
\end{corollary}
\subsection{An Accelerated Composite UES Algorithm}
An accelerated algorithm for convex composite problems is now presented.
\begin{algorithm}
\caption{Accelerated Composite UES Algorithm (ACUESA)}
\label{alg:ACUESA}
\begin{algorithmic}[1]
\STATE Initialization: Set $k=0$, $\epsilon >0$, initial point $x_0\in\R^n$ and compute $\mu$, $L$.
\STATE Set $\varphi_0(x)$ as in \eqref{eq:compositecanonical}, with $v_0$ and $\varphi_0^*$ as in \eqref{eq:minvalminmize}.
Let $\alpha_k= \sqrt{\frac{\mu}{L}}$, $\beta_k = \frac{1}{1+\alpha_k}$.
\WHILE {$F(x_k) - \varphi_k^* > \epsilon$}
\STATE Set $y_k = \beta_k x_k + (1- \beta_k) v_k$.
\STATE Set $x_{k+1} = y_k - \frac{1}{L} G_L(y_k).$
\STATE Update $v_{k+1}$ and $\varphi_{k+1}^*$ as in \eqref{eq:vkcomp} and \eqref{eq:phikcomp} respectively.
\STATE $k=k+1$.
\ENDWHILE
\end{algorithmic}
\end{algorithm}
The Accelerated Composite UnderEstimate Sequence Algorithm (ACUESA) presented in Algorithm~\ref{alg:ACUESA} solves \eqref{eq:Problem} when $h \neq 0$. The algorithm proceeds as follows. ACUESA is initialized with a starting point $x_0\in \R^n$, a stopping tolerance $\epsilon>0$, the point $v_0 = x_0^{++}$, as well as the construction of the lower bound $\varphi_0(x)$ and minimum value $\varphi_0^*$. For ACUESA one sets $\alpha_k = \sqrt{\mu/L}$ and the parameter $\beta_k = \frac{1}{1+\alpha_k}$ is also used. Notice that the parameters $\alpha_k$ and $\beta_k$ are fixed for all iterations. The main loop proceeds as follows. At every iteration one sets $y_k$ to be a convex combination of the points $x_k$ and $v_k$; a proximal gradient step is taken \emph{from} $y_k$, resulting in the new point $x_{k+1}$; the point $v_{k+1}$ is constructed and the lower bound $\varphi_{k+1}(x)$ is updated.
Algorithm~\ref{alg:ACUESA} can be viewed as the accelerated version of Algorithm~\ref{alg:CUESA}. In contrast to Algorithm~\ref{alg:CUESA}, ACUESA constructs \emph{three} points at every iteration, namely $x_k$, $v_k$ and $y_k$, where the intermediate vector $y_k$ is a convex combination of the points $x_k$ and $v_k$ (i.e., for ACUESA $x_k \neq y_k$). Notice also that $x_{k+1}$ is the result of a proximal gradient step taken from the point $y_k$. The variable $\varphi_{k}^*$ is also maintained and used in the stopping condition.
The following result provides a convergence guarantee for ACUESA. Theorem~\ref{thm4} shows that the iterates generated by Algorithm~\ref{alg:ACUESA} fit the UES framework (i.e., they satisfy Definition~\ref{def:UES}), so Algorithm~\ref{alg:ACUESA} is guaranteed to converge (linearly at the optimal rate) to the solution of problem \eqref{eq:Problem} (see Corollary~\ref{coro4}).
\begin{theorem}\label{thm4}
Let Assumption~\ref{A_SCL} hold. The sequences $\{x_k\}_{k=0}^\infty$, $\{\varphi_k(x)\}_{k=0}^\infty$ and $\{\alpha_k\}_{k=0}^\infty$ generated by ACUESA (Algorithm~\ref{alg:ACUESA}) form a UES.
\end{theorem}
\begin{proof}
From Step~5 in ACUESA,
\begin{equation}\label{eq:xkp1yp}
x_{k+1} = y_k - \tfrac1LG_L(y_k) \overset{\eqref{eq:shortproxstep}}{\equiv} y_k^+.
\end{equation}
Hence,
\begin{eqnarray}
F(x_{k+1}) - \varphi_{k+1}^* &=& (1-\alpha_k)F(x_{k+1}) + \alpha_k F(x_{k+1}) - \varphi_{k+1}^*\notag\\
&\overset{\eqref{eq:phistarequiv}}{=}&(1-\alpha_k)F(x_{k+1}) + \alpha_k F(x_{k+1}) - (1-\alpha_k)\varphi_k^* - \alpha_kF(y_k^+)\notag\\
&& -\alpha_k(1-\alpha_k)\tfrac{\mu}2\|v_{k} - y_k^{++}\|_2^2 - \alpha_k\left(\tfrac1{2L} - \tfrac1{2\mu} \right)\|G_L(y_k)\|^2\notag\\
&\overset{\eqref{eq:xkp1yp}}{=}&(1-\alpha_k)F(x_{k+1})- (1-\alpha_k)\varphi_k^*\notag\\
&& -\alpha_k(1-\alpha_k)\tfrac{\mu}2\|v_{k} - y_k^{++}\|_2^2+\left(\tfrac{\alpha_k}{2\mu} -\tfrac{\alpha_k}{2L}\right)\|G_L(y_k)\|^2\notag\\
&=&(1-\alpha_k)\left(F(x_{k+1})-\varphi_k^* -\alpha_k\tfrac{\mu}2\|v_{k} - y_k^{++}\|_2^2\right)\notag\\
&&+\left(\tfrac{\alpha_k}{2\mu} -\tfrac{\alpha_k}{2L}\right)\|G_L(y_k)\|^2. \label{eq:Fkp11}
\end{eqnarray}
By considering the expression for $y_k$ in Step~4 in Algorithm~\ref{alg:ACUESA} and noticing that $\beta_k = \tfrac{1}{1+\alpha_k}$ for all $k$, \eqref{eq:vk} and \eqref{eq:alphabeta1} hold. Thus, combining \eqref{eq:vk} and \eqref{eq:longproxstep} gives
\begin{align}
&\tfrac{\alpha_k\mu}2\|v_k - y_k^{++}\|^2\notag \\
&= \tfrac{\alpha_k\mu}2\|\tfrac{\beta_k}{1-\beta_k}(y_k - x_k) + \tfrac1{\mu}G_L(y_k)\|^2\notag\\
&= \tfrac{\alpha_k\mu}2\tfrac{\beta_k^2}{(1-\beta_k)^2}\|x_k-y_k\|^2 + \tfrac{\alpha_k}{2\mu}\|G_L(y_k)\|^2
- \tfrac{\alpha_k\beta_k}{1-\beta_k}\langle G_L(y_k),x_k-y_k\rangle\notag\\
&\overset{\eqref{eq:alphabeta1}}{=} \tfrac{\mu}2\tfrac{\beta_k}{1-\beta_k}\|x_k-y_k\|^2 + \tfrac{\alpha_k}{2\mu}\|G_L(y_k)\|^2 - \langle G_L(y_k),x_k-y_k\rangle.\label{eq:vkyknorm}
\end{align}
Substituting \eqref{eq:vkyknorm} into \eqref{eq:Fkp11} results in
\begin{eqnarray*}
&& \hspace{-5mm} F(x_{k+1}) - \varphi_{k+1}^*\\
&=& (1-\alpha_k)\left(F(x_{k+1})-\varphi_k^* -\tfrac{\mu}2\tfrac{\beta_k}{1-\beta_k}\|x_k-y_k\|^2 + \langle G_L(y_k),x_k-y_k\rangle \right)\\
&& -(1-\alpha_k)\tfrac{\alpha_k}{2\mu}\|G_L(y_k)\|^2+\left(\tfrac{\alpha_k}{2\mu} -\tfrac{\alpha_k}{2L}\right)\|G_L(y_k)\|^2\\
&=&(1-\alpha_k)\left(F(x_{k+1})-\varphi_k^* -\tfrac{\mu}2\tfrac{\beta_k}{1-\beta_k}\|x_k-y_k\|^2 + \langle G_L(y_k),x_k-y_k\rangle \right)\\
&& + (1-\alpha_k)\tfrac{1}{2L}\|G_L(y_k)\|^2.
\end{eqnarray*}
Using a rearrangement of the lower bound \eqref{eq:lbcomposite}, and \eqref{eq:xkp1yp}, gives
\begin{eqnarray*}
&& \hspace{-5mm}F(x_{k+1}) - \varphi_{k+1}^*\\
&\leq&(1-\alpha_k)\left(F(x_k)- \langle G_L(y_k),x_k-y_k\rangle - \tfrac{\mu}2\|x_k-y_k\|^2 - \tfrac1{2L}\|G_L(y_k)\|^2 -\varphi_k^*\right) \\
&& +(1-\alpha_k)\left(-\tfrac{\mu}2\tfrac{\beta_k}{1-\beta_k}\|x_k-y_k\|^2 + \langle G_L(y_k),x_k-y_k\rangle + \tfrac{1}{2L}\|G_L(y_k)\|^2\right)\\
&=&(1-\alpha_k)(F(x_k) -\varphi_k^*) +(1-\alpha_k)\left( - \tfrac{\mu}2\|x_k-y_k\|^2 -\tfrac{\mu}2\tfrac{\beta_k}{1-\beta_k}\|x_k-y_k\|^2\right)\\
&\leq&(1-\alpha_k)(F(x_k) -\varphi_k^*) - \tfrac{\mu}2(1-\alpha_k)\tfrac{1}{1-\beta_k}\|x_k-y_k\|^2\\
&\leq&(1-\alpha_k)(F(x_k) -\varphi_k^*).
\end{eqnarray*}
Thus, the iterates generated by ACUESA form a UES.\qed
\end{proof}
\begin{corollary}\label{coro4}
Let Assumption~\ref{A_SCL} hold. Then, the sequence of iterates $\{x_k\}_{k\geq0}$ generated by Algorithm \ref{alg:ACUESA} exhibits the optimal linear rate of convergence
\begin{align*}
F(x_k) -\varphi_k^* \leq \left(1- \sqrt{\tfrac{\mu}{L}}\right)^k (F(x_0) -\varphi_0^*).
\end{align*}
\end{corollary}
\section{An algorithm with adaptive $L$}\label{sec:adaptiveL}
In the algorithms presented so far, the Lipschitz constant $L$ is explicitly used in each algorithm. However, by studying the convergence proofs for Algorithms~\ref{alg:SUESA}--\ref{alg:ACUESA} one notices that the role of the Lipschitz constant $L$ is to enforce a reduction in the function value from one iteration to the next (see the first step in the proofs of Theorems~\ref{thm1}--\ref{thm4}). Thus, it is natural to ask the question, `Can an \emph{adaptive} Lipschitz constant, $L_k$ say, be used in place of the true Lipschitz constant $L$?'. In this section we show that, using a strategy similar to that proposed by Nesterov in \cite{Nesterov07,Nesterov13}, it is possible to employ an adaptive Lipschitz constant while preserving convergence guarantees.
\subsection{The Required Inequalities}
When the Lipschitz constant $L$ is unknown, or is expensive to compute, it may be preferable to employ an `adaptive' Lipschitz constant, $L_k$ say, i.e., determine a value $L_k$ that approximates $L$ locally. This approach has been previously studied by Nesterov in \cite{Nesterov07,Nesterov13}, and it has the added advantage that $L_k$ may be smaller than the true Lipschitz constant $L$, which can lead to large step sizes. Throughout the algorithm certain inequalities must hold to ensure that convergence guarantees are maintained. The relevant inequalities are as follows.
\paragraph{Smooth case.} For smooth functions, \eqref{eq:zzzz3} and \eqref{eq:ibuij} must hold for SUESA and ASUESA, respectively. This means that at every iteration, if $L_k$ satisfies
\begin{equation}\label{eq:smoAdapL}
f\left(y_k - \tfrac1{L_k} \nabla f(y_k) \right)
\leq f(y_k ) - \tfrac1{2 L_k} \|\nabla f(y_k)\|^2,
\end{equation}
then convergence guarantees for SUESA and ASUESA are maintained.
If $L_k$ is chosen to satisfy \eqref{eq:smoAdapL}, then the improved value $\alpha_k = \mu/L_k$ (or $\alpha_k = \sqrt{\mu/L_k}$ in the accelerated case) can be used at every iteration.
\paragraph{Non-smooth case.}
For composite functions, \eqref{eq:compFreduce} and \eqref{eq:Fkp11} must hold for CUESA and ACUESA, respectively. So, if $L_k$ satisfies
\begin{eqnarray}
\label{eq:comAdapL}
F\left(y_k - \tfrac1{L_k} G_{L_k}(y_k)\right) \leq F(y_k) - \tfrac1{2L_k} \|G_{L_k}(y_k)\|_2^2,
\end{eqnarray}
then the algorithms are still guaranteed to converge. This also implies the improvement $\alpha_k = \mu/L_k$ (or $\alpha_k = \sqrt{\mu/L_k}$ for the accelerated case) at every iteration.
With these two inequalities in mind, the adaptive Lipschitz process can be described briefly as follows. When initializing Algorithms~\ref{alg:SUESA}--\ref{alg:ACUESA}, choose an initial estimate $L_0>0$, and increase and decrease factors $u>1$ and $d>1$, respectively. To find the appropriate $L_k$ at iteration $k$, one starts with the trial value $L_{k-1}/d$ (i.e., the adaptive Lipschitz constant from the previous iteration, decreased once via division by $d$) and increases it via multiplication with $u$ until \eqref{eq:smoAdapL} (or \eqref{eq:comAdapL}) is satisfied. Once an $L_k$ is found such that \eqref{eq:smoAdapL} (or \eqref{eq:comAdapL} in the composite case) holds, the iteration proceeds with $L_k$ used in place of $L$.
Note that, using this process, it is possible that $L_k < L$ at some iterations. In this case, the stepsize $1/L_k$ is used, which is larger than $1/L$.
The pseudocode is presented in Algorithm~\ref{alg:adapL}. Note that determining the adaptive Lipschitz constant occurs as an inner loop within one of Algorithms~\ref{alg:SUESA}--\ref{alg:ACUESA}. Thus, we use the iteration counter $s$ in Algorithm~\ref{alg:adapL} to distinguish it from the outer loop counter $k$.
\begin{algorithm}
\caption{Finding $L_k$ in iteration $k$ of Algorithms~\ref{alg:ASUESA} and \ref{alg:ACUESA}.}
\label{alg:adapL}
\begin{algorithmic}[1]
\STATE Input: $x_k$, $v_k$, $u>1$, $d>1$ and $L_{k-1}$.
\STATE Initialize: If $k=0$ let $L_{s} = L_0$, or if $k\geq1$ then $L_{s} = L_{k-1}/d$.
\FOR{$s = 0,1,2,\dots$}
\STATE $\alpha_{s} = \sqrt{\tfrac{\mu}{L_{s}}}$, $\beta_{s} = \tfrac{1}{1+\alpha_{s}}$.\label{ref:mmmmm0}
\STATE Set $y_s = \beta_{s} x_k + (1- \beta_{s}) v_k$.
\STATE Set $x_{s} = y_s - \frac{1}{L_s} G_{L_s}(y_s).$
\IF{\eqref{eq:smoAdapL} or \eqref{eq:comAdapL} holds}
\STATE Break.
\ELSE
\STATE $L_{s+1}$ = $u \cdot L_s$.\label{ref:mmmmm1}
\ENDIF
\ENDFOR
\STATE Output: $L_k = L_s$, $\alpha_k = \alpha_{s}$, $\beta_k = \beta_{s}$, $y_k = y_s$ and $x_{k+1} = x_s$.
\end{algorithmic}
\end{algorithm}
Note that the strategy above holds for Algorithms~\ref{alg:ASUESA} and \ref{alg:ACUESA}, but it is straightforward to adapt it to Algorithms~\ref{alg:SUESA} and \ref{alg:CUESA} by modifying the variables $\alpha_s$ and $\beta_s$.
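In code, the inner loop of Algorithm~\ref{alg:adapL} (smooth case, with the test \eqref{eq:smoAdapL}) can be sketched as follows; the function signature and the warm start $L_{k-1}/d$ for $k\geq1$ follow the pseudocode, while everything else is a hypothetical implementation choice.
\begin{verbatim}
# Sketch of Algorithm 5 for the smooth case.  f, grad and mu are
# assumed to be defined as in the earlier sketches (numpy arrays);
# the warm start L_prev / d follows Line 2 of the pseudocode (k >= 1).
def find_Lk(f, grad, mu, x_k, v_k, L_prev, u=2.0, d=2.0):
    L_s = L_prev / d
    while True:
        alpha = (mu / L_s) ** 0.5
        beta = 1.0 / (1.0 + alpha)
        y = beta * x_k + (1.0 - beta) * v_k
        g = grad(y)
        x_next = y - g / L_s
        # sufficient-decrease test with the trial constant L_s
        if f(x_next) <= f(y) - (g @ g) / (2.0 * L_s):
            return L_s, alpha, beta, y, x_next
        L_s *= u                          # increase and try again
\end{verbatim}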
We now present several theoretical results related to this setup.
\begin{lemma}
Let $u,d >1$. If $L_0 \leq L$ then $L_k \leq L \cdot u$ for all $k$. If $L_0 \geq L$ then for the first $K=\left\lfloor\frac{\log (L_0/L)}{\log d}\right\rfloor$ iterations,
$L_k = L_0 / d^k \geq L$.
\end{lemma}
\begin{proof}
Note that if $L_k \geq L$ then $L_k$ is guaranteed to satisfy \eqref{eq:smoAdapL} or \eqref{eq:comAdapL}.
Now suppose that at some point of iteration $k$, we have a trial value $L_s < L$. If $L_s$ satisfies either \eqref{eq:smoAdapL} or \eqref{eq:comAdapL} then one sets $L_k=L_s$ and terminates the inner loop, noting that $L_k < L \leq L\cdot u$. If $L_s$ does not satisfy \eqref{eq:smoAdapL} or \eqref{eq:comAdapL} then it is increased by multiplication with $u$. In the worst case this repeats until the trial value first reaches $L_s \geq L$, at which point \eqref{eq:smoAdapL} or \eqref{eq:comAdapL} is guaranteed to hold; since the previous trial value was below $L$, the accepted value satisfies $L_k \leq L\cdot u$. Hence $L_k \leq L\cdot u$ holds for all $k$.
On the other hand, suppose that the initial value happens to satisfy $L_0 \geq L$. Then $L_0$ satisfies \eqref{eq:smoAdapL} or \eqref{eq:comAdapL}, so it is accepted with $L_k = L_0$ at iteration $k=0$. At the start of any subsequent iteration, if the previous value satisfies $L_{k-1}\geq L$, then the first trial value $L_{k-1}/d$ is accepted whenever it is still at least $L$. Thus, $L_k = L_0/d^k \geq L$ is guaranteed for $k\in\{1,2,\ldots,K\}$, where
$$ K = \left \lfloor{\frac{\log \frac{L_0}{L}}{\log d}}\right \rfloor.$$
\qed
\end{proof}
\begin{proposition}
The maximum number of times that Lines~\ref{ref:mmmmm0}~to~\ref{ref:mmmmm1} are executed during the first $K$ iterations is bounded by
\begin{equation}\label{eq:case1adapL}
\left \lceil{\frac{\max\{\log (Ld/L_0),0\}}{\log u}}\right \rceil + (K-1)\left \lceil{\frac{\log d}{\log u}+1}\right \rceil.
\end{equation}
\end{proposition}
\begin{proof}
In the first iteration, the procedure from Line~\ref{ref:mmmmm0} to Line~\ref{ref:mmmmm1} of Algorithm \ref{alg:adapL} is executed at most $\left \lceil{\frac{\log (Ld/L_0)}{\log u}}\right \rceil $ times, assuming $L_0/d <L$. In the case $L_0/d\geq L$, this procedure is carried out once. In the $k$th iteration with $k\geq 2$, since we know that $L_{k-1}\geq L$, the above procedure is run at most $\left \lceil{\frac{\log d}{\log u}+1}\right \rceil $ times if $L_{k-1}/d <L$ occurs. Summing these counts, we obtain \eqref{eq:case1adapL}.\qed
\end{proof}
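As a concrete illustration of \eqref{eq:case1adapL}, consider the (hypothetical) parameter choice $u=d=2$, $L_0 = L/8$ and $K=10$. Then
\begin{align*}
\left \lceil{\frac{\max\{\log (Ld/L_0),0\}}{\log u}}\right \rceil = \left \lceil{\frac{\log 16}{\log 2}}\right \rceil = 4, \qquad (K-1)\left \lceil{\frac{\log d}{\log u}+1}\right \rceil = 9 \cdot 2 = 18,
\end{align*}
so Lines~\ref{ref:mmmmm0}~to~\ref{ref:mmmmm1} are executed at most $22$ times during the first $10$ iterations, i.e., slightly more than twice per iteration on average.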
\section{Numerical Experiments}\label{sec:numericalexperiments}
In this section, we present numerical results to compare our proposed algorithms with several other methods that have an optimal convergence rate. The algorithms are as follows, and are summarized in Table~\ref{table-DiscAlgs}.
\emph{OQA.} The Optimal Quadratic Averaging algorithm (OQA) \cite{Drusvyatskiy16}, which builds upon the work in \cite{Bubeck15}, maintains a quadratic lower bound on the objective function value at every iteration. The quadratic lower bound is called `optimal' because it is the `best' lower bound that can be obtained as a convex combination of the two previous quadratic lower bounds. In OQA, $x_{k+1}$ is set to be the minimizer of $f(x)$ on the line joining the point $x_k^+$ and the minimizer of the current quadratic lower bound. In \cite{Drusvyatskiy16} the authors suggest a variant of OQA, which we call OQA+ here, that computes $x_{k}^+$ via a line search that does not use the true Lipschitz constant $L$. We compare both OQA and OQA+ in our experiments.
\emph{NEST.} We use NEST to denote the algorithm described in Chapter 2 of \cite{Nesterov04}. Further, NEST+ is a variant of NEST in which the Lipschitz constant $L$ is adaptively updated via the strategy in \cite{Nesterov07,Nesterov13}.
\emph{GD.} We also implement a Gradient Descent (GD) method which uses a fixed stepsize of $\tfrac{1}{L}$. Note that this is similar to Algorithm~\ref{alg:SUESA}, although GD does not maintain any kind of lower bound. As the only non-optimal algorithm, Gradient Descent provides a benchmark that will enable us to observe any performance advantages of the optimal methods.
\begin{table}[H]\tiny
\centering
\begin{tabular}{l|l} \toprule
{Algorithm} & {Description} \\ \midrule
{OQA} & {Optimal Quadratic Averaging Algorithm} \\
{OQA+} & {Optimal Quadratic Averaging Algorithm with $x_k^+= \text{line-search} (x_{k},x_k-\nabla f(x_k))$} \\
{ASUESA} & {Accelerated Smooth Underestimate Sequence Algorithm}\\
{ASUESA+} & {Accelerated Smooth Underestimate Sequence Algorithm with adaptive Lipschitz constant} \\
{SUESA} & {Smooth Underestimate Sequence Algorithm} \\
{GD} & {Gradient Descent with fixed stepsize $\tfrac{1}{L}$} \\
{NEST} & {Algorithm described in Chapter 2 of \cite{Nesterov04}} \\
{NEST+} & {Algorithm described in Chapter 2 of \cite{Nesterov04} with adaptive Lipschitz constant} \\\midrule
{CUESA} & {Composite Underestimate Sequence Algorithm} \\
{CUESA+} & {Composite Underestimate Sequence Algorithm with adaptive Lipschitz constant} \\
{ACUESA} & {Accelerated Composite Underestimate Sequence Algorithm} \\
{ACUESA+} & {Accelerated Composite Underestimate Sequence Algorithm with adaptive Lipschitz constant} \\
{CNEST} & {Algorithm (4.9) described in \cite{Nesterov07} with fixed Lipschitz constant} \\
{CNEST+} & {Algorithm (4.9) described in \cite{Nesterov07} with adaptive Lipschitz constant} \\\bottomrule
\end{tabular}
\normalsize
\caption{Description of implemented algorithms}
\label{table-DiscAlgs}
\end{table}
\subsection{Empirical Risk Minimization}
We consider two Empirical Risk Minimization (ERM) problems, which are popular in the machine learning literature. In particular, we study ERM with a squared hinge loss
\begin{equation}\label{eq:squaHingeLoss}
f(x) = \frac{1}{m} \sum_{i=1}^{m} \big(\max \{0,1 - y_ia_i^Tx\}\big)^2 + \tfrac{\lambda}{2}\tnorm{x},
\end{equation}
and ERM with a logistic loss (also called logistic regression)
\begin{equation}\label{eq:logisticRe}
f(x) = \frac{1}{m} \sum_{i=1}^{m} \log\big( 1+ e^{-y_ia_i^Tx}\big) + \tfrac{\lambda}{2}\tnorm{x}.
\end{equation}
In each case $y_i \in\{-1,+1\}$ is the label and $a_i \in \R^n$ represents the training data for $i \in \{1,2,\ldots,m\}$. All the datasets in our experiments come from the LIBSVM database \cite{Chang11}. Also note that for all experiments we have $\mu = \lambda$.
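For reference, both objectives and their gradients can be sketched as follows (Python); this is a direct differentiation of \eqref{eq:squaHingeLoss} and \eqref{eq:logisticRe}, not code from any of the compared implementations.
\begin{verbatim}
# Sketch of both ERM objectives and their gradients.
# A has rows a_i (an m-by-n numpy array), y holds the +/-1 labels.
import numpy as np

def sq_hinge(x, A, y, lam):
    r = np.maximum(0.0, 1.0 - y * (A @ x))
    return np.mean(r ** 2) + 0.5 * lam * x @ x

def sq_hinge_grad(x, A, y, lam):
    r = np.maximum(0.0, 1.0 - y * (A @ x))
    return -(2.0 / len(y)) * (A.T @ (y * r)) + lam * x

def logistic(x, A, y, lam):
    return np.mean(np.log1p(np.exp(-y * (A @ x)))) + 0.5 * lam * x @ x

def logistic_grad(x, A, y, lam):
    s = -y / (1.0 + np.exp(y * (A @ x)))
    return (A.T @ s) / len(y) + lam * x
\end{verbatim}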
\subsubsection*{Comparison on Decreasing Objective Values}
In the first experiment we compare the OQA, ASUESA and NEST algorithms (both the standard and adaptive Lipschitz variants) and investigate how the objective function values behave on several test problems. The test problems considered in this experiment are the \texttt{a1a} dataset with a squared hinge loss and a value $\lambda = 10^{-4}$, the \texttt{rcv1} dataset with a logistic loss and a value $\lambda = 10^{-4}$, and the \texttt{covtype} dataset with a squared hinge loss and a value $\lambda = 10^{-5}$.
\begin{figure*}[h!]
\centering
\includegraphics[scale=.15]{1a1a_2_1e-4_2.eps}
\includegraphics[scale=.15]{1rcv1_1_1e-4_2.eps}
\includegraphics[scale=.15]{1covtype_2_1e-5_2.eps}
\includegraphics[scale=.15]{1a1a_2_1e-4_3.eps}
\includegraphics[scale=.15]{1rcv1_1_1e-4_3.eps}
\includegraphics[scale=.15]{1covtype_2_1e-5_3.eps}
\caption{Evolution of the gap $f(x_k)-\phi_k^*$ for each algorithm versus the number of function evaluations and the CPU time.}
\label{fig:exp11}
\end{figure*}
In Figure~\ref{fig:exp11} we plot the gap $f(x_k)-\phi_k^*$ versus the number of function evaluations and the gap $f(x_k)-\phi_k^*$ versus the CPU time. The figure shows the advantages of using an adaptive Lipschitz constant, with the adaptive methods performing better than their original versions in most cases. Figure~\ref{fig:exp11} also shows that ASUESA+ performs very well, being the best algorithm on the first dataset, and the second best algorithm on the other two datasets.
\subsubsection*{Theory and Practice for OQA and ASUESA}
In this numerical experiment we study ASUESA and OQA and investigate how their practical performance compares with that predicted by theory. For the OQA algorithm a line search is needed to determine a necessary algorithmic variable, and to ensure that the theory for OQA holds, the line search should be exact. In this experiment we will use bisection to compute this variable, but we will restrict the number of bisection steps allowed to $b=2,5,20$. Figure~\ref{fig:bisection} plots the ratio $(f(x_k) - \phi^*_{k})/(f(x_{k-1}) - \phi^*_{k-1})$ for ASUESA and for three instances of OQA, where each instance uses a different number of bisection steps $b=2,5,20$. We also plot $1-\sqrt{\tfrac{\mu}{L}}$ (black dots), which is the per-iteration decrease factor of the gap $f(x_k)-\phi_k^*$ predicted by the theory. (In theory, we should have $(f(x_k) - \phi^*_{k})/(f(x_{k-1}) - \phi^*_{k-1}) \leq 1-\sqrt{\tfrac{\mu}{L}}$ for all $k\geq 1$.)
\begin{figure*}[h!]
\centering
\includegraphics[scale=.15]{11gap_a1a_1_1e-4_1.eps}
\includegraphics[scale=.15]{11gap_covtype_2_1e-5_1.eps}
\caption{Comparison of $\tfrac{f(x_k) - \phi^*_{k}}{f(x_{k-1}) - \phi^*_{k-1}}$ for ASUESA and for OQA with different numbers of bisection steps ($b=2,5,20$). The black dots are $1-\sqrt{\tfrac{\mu}{L}}$.}
\label{fig:bisection}
\end{figure*}
From the plots in Figure~\ref{fig:bisection} we see that ASUESA performs very well, as predicted by the theory, with the ratio $(f(x_k) - \phi^*_{k})/(f(x_{k-1}) - \phi^*_{k-1})$ always strictly below the theoretical bound. On the other hand, the quality of the line search affects OQA significantly. The fewer the number of line search (bisection) iterations, the more likely it is for OQA to violate the theoretical results. Note that this is not necessarily surprising because the theory for OQA requires the exact minimizer along a line segment to be found, so 2 or 5 iterations of bisection may simply be too few to find it. Notice that when $b=2$, the green line shows that OQA behaves erratically, with the ratio $(f(x_k) - \phi^*_{k})/(f(x_{k-1}) - \phi^*_{k-1})$ being greater than 1 on many iterations, indicating that the gap is growing on those iterations. When we use OQA with $b=5$ steps of bisection at each iteration (light blue line), the algorithm performs better, and often, but not always, the ratio is less than 1. Finally, the dark blue line shows the behaviour of OQA when $b=20$ steps of bisection are taken at each iteration. The dark blue line is always below the theoretical bound of $1-\sqrt{\tfrac{\mu}{L}}$, indicating good algorithmic performance (often better than predicted by theory). However, the line search needed by OQA comes at an additional computational cost, which can still mean that the overall runtime is longer for OQA than for ASUESA, as we now show.
Here a similar experiment is performed to compare the theoretical and practical performance of SUESA and ASUESA. We have already seen that the theoretical results for ASUESA give a proportional reduction of $1-\sqrt{\tfrac{\mu}{L}}$ in the gap at every iteration. However, for SUESA, the proportional reduction in the gap is $1-\tfrac{\mu}{L}$. We investigate how these theoretical bounds compare with the practical performance of each of these algorithms. We use the \texttt{a1a}, \texttt{rcv1} and \texttt{covtype} datasets for this experiment, and for each of the three datasets we form both a logistic loss and a squared hinge loss to create 6 problem instances. The results are shown in Figure~\ref{fig:thvsprac}.
\begin{figure*}[h!]
\centering
\includegraphics[scale=.15]{1gap_a1a_1_1e-4.eps}
\includegraphics[scale=.15]{1gap_rcv1_1_1e-4.eps}
\includegraphics[scale=.15]{1gap_covtype_1_1e-5.eps}
\includegraphics[scale=.15]{1gap_a1a_2_1e-4.eps}
\includegraphics[scale=.15]{1gap_rcv1_2_1e-4.eps}
\includegraphics[scale=.15]{1gap_covtype_2_1e-5.eps}
\caption{Comparison of $\tfrac{f(x_k) - \phi^*_{k}}{f(x_{k-1}) - \phi^*_{k-1}}$ for SUESA and ASUESA with $1 - \tfrac{\mu}{L}$ (green line) and $1-\sqrt{\tfrac{\mu}{L}}$ (black line).}
\label{fig:thvsprac}
\end{figure*}
Figure~\ref{fig:thvsprac} presents the ratio $\tfrac{f(x_k) - \phi^*_{k}}{f(x_{k-1}) - \phi^*_{k-1}}$ for SUESA and ASUESA. Also displayed is the theoretical (unaccelerated) rate $1 - \tfrac{\mu}{L}$ (the green line) and the theoretical (accelerated) rate $1-\sqrt{\tfrac{\mu}{L}}$ (the black line). One sees that the practical performance of SUESA is very similar to that predicted by the theory, because the blue line matches the green line closely. Another observation is that for the accelerated algorithm (ASUESA), in practice, the reduction in the gap $f(x_k) - \phi^*_{k}$ is often better than the theoretical rate.
\subsection{Experiments on composite functions}
In this section we perform several numerical experiments on problems with a composite objective. Specifically, we consider the elastic net problem, which is problem \eqref{eq:Problem} with
\begin{equation}\label{eq:elasticNet}
F(x) = \frac{1}{m} \sum_{i=1}^{m} (a_i^Tx -y_i)^2 + \tfrac{\lambda_1}{2}\|x\|_2^2 + \tfrac{\lambda_2}{2}\|x\|_1.
\end{equation}
Notice that the first two terms in \eqref{eq:elasticNet} are smooth, while the $\ell_1$-norm term makes \eqref{eq:elasticNet} nonsmooth overall.
We compare our Algorithms \ref{alg:CUESA} and \ref{alg:ACUESA} (CUESA and ACUESA) with the one proposed in \cite{Nesterov07} (CNEST). As stated previously, each of these algorithms can be implemented with either a fixed $L$ or an adaptive $L$, and we compare each algorithm under both options.
For these experiments we again use the 3 datasets \texttt{a1a}, \texttt{rcv1} and \texttt{covtype}. For the \texttt{a1a} data the regularization parameters were set to $\lambda_1= \lambda_2 = 10^{-4}$, for the \texttt{rcv1} data they were set to $\lambda_1 = 10^{-4}$ and $ \lambda_2 = 10^{-5}$, and for the \texttt{covtype} data they were set to $\lambda_1 = 10^{-4}$ and $ \lambda_2 = 10^{-6}$.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.15]{a1a_3_1e-4_1e-4_2.eps}
\includegraphics[scale=.15]{rcv1_3_1e-4_1e-5_2.eps}
\includegraphics[scale=.15]{covtype_3_1e-4_1e-6_2.eps}
\includegraphics[scale=.15]{a1a_3_1e-4_1e-4_3.eps}
\includegraphics[scale=.15]{rcv1_3_1e-4_1e-5_3.eps}
\includegraphics[scale=.15]{covtype_3_1e-4_1e-6_3.eps}
\caption{Comparison of how the gap between the objective value and the minimum of the lower bound decreases for the different algorithms. We observe the advantage of ACUESA+ in both the number of function evaluations and the running time.}
\label{fig:comp}
\end{figure}
The results of this experiment are presented in Figure~\ref{fig:comp}, and they show the clear practical advantage of the ACUESA algorithm. The ACUESA algorithm outperforms the CNEST algorithm in all problem instances. Interestingly, on the \texttt{rcv1} dataset, the CUESA+ algorithm (CUESA with an adaptive Lipschitz constant) performs better than the accelerated ACUESA algorithm, although the ACUESA+ (accelerated plus adaptive Lipschitz constant) algorithm is still the best overall.
In the final numerical experiment presented here, we investigate the theoretical vs practical performance of CUESA and ACUESA. We set up three problems using each of the 3 datasets already described, and the results are presented in Figure~\ref{fig:comptheoryvsprac}.
\begin{figure}[H]
\centering
\includegraphics[scale=.15]{Non_gap_a1a_3_1e-4_1e-5.eps}
\includegraphics[scale=.15]{Non_gap_covtype_3_1e-5_1e-6.eps}
\includegraphics[scale=.15]{Non_gap_rcv1_3_1e-4_1e-6.eps}
\caption{Comparison of $\tfrac{F(x_k) - \varphi^*_{k}}{F(x_{k-1}) - \varphi^*_{k-1}}$ for CUESA and ACUESA with $1 - \tfrac{\mu}{L}$ (green line) and $1-\sqrt{\tfrac{\mu}{L}}$ (black line). Here we have a similar observation as in Figure \ref{fig:thvsprac}.}
\label{fig:comptheoryvsprac}
\end{figure}
As before, the green line represents the theoretical (unaccelerated) rate $1 - \tfrac{\mu}{L}$ and the black line represents the theoretical (accelerated) rate $1 - \sqrt{\tfrac{\mu}{L}}$. Note that the practical performance of CUESA closely matches the theoretical rate. We also observe that the practical performance of ACUESA is always at least as good as the theoretical rate, and often achieves a larger per-iteration decrease in the gap than $1 - \sqrt{\tfrac{\mu}{L}}$.
All the numerical results presented in this section strongly support the practical success of the SUESA, ASUESA, CUESA and ACUESA algorithms.
\section{Conclusion}\label{sec:conclusion}
In this paper we studied efficient algorithms for solving the strongly convex composite problem \eqref{eq:Problem}. We proposed four new algorithms --- SUESA, ASUESA, CUESA and ACUESA --- to solve \eqref{eq:Problem} in both the smooth and composite cases. All of these algorithms maintain a global lower bound on the objective function value, which can be used as an algorithm stopping condition to provide a certificate of convergence. Moreover, we proposed a new underestimate sequence framework that incorporates three sequences, one of which is a global lower bound on the objective function, and this framework was used to establish convergence guarantees for the algorithms proposed here. Our algorithms have a linear rate of convergence, and the two accelerated variants (ASUESA and ACUESA) converge at the optimal linear rate. We also presented a strategy to adaptively select a local Lipschitz constant for the situation when one does not wish to, or cannot, compute the true Lipschitz constant. Numerical experiments show that our algorithms are computationally competitive when compared with other state-of-the-art methods including Nesterov's accelerated gradient methods and optimal quadratic averaging methods.
\bibliographystyle{plain}
\section{Introduction}
At the time of writing, the world is dominated by the worldwide COVID-19 pandemic.
Developing a vaccine against it is one of the most important ways to fight this virus.
Nevertheless, time is scarce, as the pandemic has already caused more than 2.15 million deaths\cite{Worldmeter2021} in the last 12 months.
To speed up this process, vast processing power is needed to simulate the folding of the virus proteins.
This simulated folding process helps scientists find new possibilities for a vaccine.
However, this amount of processing power cannot be provided by a single supercomputer or a server farm without enormous costs.
To solve this problem, a more sophisticated approach can be applied: volunteer-based Distributed Computing, which is used by the folding@home project.
By combining the idle power of a large portion of computers worldwide, enormous processing power can be formed.
At the time of writing, the combined processing power reached 0.22 ExaFLOPS\cite{Stats2021}.
In peak time (2020-03), the processing power even exceeded 1.5 ExaFLOPS \cite{Shilov2020}, which is higher than that of the currently fastest computer in the world with about 0.44 ExaFLOPS\cite{Top5002021}.
Furthermore, in contrast to the world's fastest computer, the folding@home project does not need to take all its clients' operating costs into consideration because these costs are donated by the participants, who get credits in exchange.
In the following, we introduce the concept and architecture behind the folding@home project.
We then present a trust-based approach and discuss whether such an approach, including trust communities, is applicable to a volunteer-based Distributed Computing system like folding@home.
\section{Related Work}
This section gives an overview of Grid Computing, how the folding@home project uses it, and how a Trusted-Desktop-Grid relates to it.
\subsection{Grid Computing}
Grid Computing integrates many computers into a single unit, supported by a robust high-speed network.
Grid Computing can solve complex problems and process large amounts of data.
There are five different Grid Computing methods, each of them specialized for a particular type of problem.
(1) \textit{Distributed Supercomputing} is used to solve large and complex problems that a single machine would not be able to handle. Multiple high-capacity resources are used for this method.
(2) \textit{High Throughput Computing} uses a large number of CPU cores which can process multiple tasks in parallel, making it possible to process a large number of small tasks in a short time.
(3) \textit{On-demand Computing} enables resources to be accessible through the grid.
(4) In \textit{Data-Intensive Computing}, multiple systems share the amount of data to be processed.
(5) \textit{Logistical Networking}, similar to warehouse logistics, schedules, in contrast to traditional networks, the transport and storage of data inside the grid's network.
\cite{Guharoy2017}
The Trusted-Desktop-Grid approach analyzed below in Section~\ref{tdg} uses the Distributed Supercomputing method.
\begin{figure}
\centering
\includegraphics[width=0.80\linewidth]{Grid-Computing-Architecture.pdf}
\caption{The architecture of Grid Computing (based on \cite{Guharoy2017})}
\label{arch}
\end{figure}
The architecture of Grid Computing consists of five layers (Fig.~\ref{arch}).
First, the \textit{fabric layer} provides the resources to be shared, such as computational or network resources and storage systems.
On top of that is the \textit{connectivity layer}, which controls the communication inside the grid's network.
It makes use of the fabric layer.
Next is the \textit{resources layer}, which is responsible for monitoring the grid and its resources.
This is followed by the \textit{collective layer}, whose APIs and SDKs provide access to the resources.
It is directly used by the final layer, the \textit{application layer}.
This last layer contains all the applications that make use of the grid. \cite{Guharoy2017}
Scientific research is one area that uses Grid Computing, as scientists often face complex problems that cannot be solved with a single machine.
Furthermore, it enables them to work with large amounts of data.
Another solution similar to Grid Computing is Cloud Computing.
The difference from Grid Computing is that the computing capabilities come from a computer infrastructure provided by a company via TCP/IP.
Cloud Computing therefore provides systems dedicated to this task. In contrast, in the Grid Computing solution, systems share their unused resources \cite{Guharoy2017}.
That dedication has the advantage that there is always the same amount of computing power, whereas with Grid Computing, the amount varies over time.
On the other hand, Grid Computing uses only the resources that already exist and would otherwise be left unused.
Both solutions have their advantages and disadvantages, and the choice between them depends on the operational context.
This article focuses on Grid Computing and adds a trust- and volunteer-based approach to it.
\subsection{The folding@home project}
\label{fah}
The idea of folding@home is to use volunteer-based Grid Computing for simulating the folding of proteins, a self-assembly process that is related to many diseases.
Whether a protein folds correctly or misfolds determines whether it ends up "disrupting ordinary cellular functions" \cite{Beberg2009}.
The simulation of protein folding is so complex that a single-core CPU can only simulate around 20 nanoseconds of the molecular process per day.
However, the whole process takes from milliseconds to seconds, making it impossible to compute for a single computer or cluster \cite{Beberg2009}.
Moreover, the costs for such a cluster would be enormous.
Grid Computing is an excellent solution to this problem.
There are hundreds of thousands of computers around the globe, many of them just running idle.
These machines are controlled by many independent people and not by one organization.
They have to be convinced to contribute voluntarily to the project.
One solution to this is gamification.
Every contributor gets credits for the work units his machine has completed.
The folding@home statistics website lists all the users ranked by the credit points earned so far.
The participants can additionally form teams to accumulate their credit points.
More prominent organizations can use this reputation system for better social standing.
Gamification also comes with some disadvantages, like people running the client software on someone else's computer to gain more credit points.
According to \cite{Beberg2009} both "installing the clients on machines they do not own at school or work" and "the use of Trojan horses on P2P file sharing systems to install the client and gain in the statistics" have been taken into consideration, and these problems are treated by banning the participants as well as deleting their scores.
All of this leads to volunteer-based Grid Computing.
In the case of folding@home, this grid has a central control instance.
Every volunteer downloads the \textit{client software} from the webserver and installs it on his local machine.
The client provides some settings, like how much of the computer's performance should be used and whether to compute only when the computer is idle.
Then the client begins to ask the \textit{assignment server} for work that has to be done.
The result is the address of a \textit{work server} which provides the binaries, also called work unit, to compute.
Each work unit has to be compatible with the computer's architecture and operating system.
Providing binaries as work units makes it very flexible for the task to be done, as this binary can contain any algorithm as long as it is compatible.
The client itself has no information about the tasks hardcoded into it.
The work unit's result is then sent back to the work server, or, if this server is not available, to a \textit{collection server} which acts as a buffer and sends the information to the work server as soon as it comes back online.
The work server processes the results, shares them with the \textit{web and stats server}, and determines the next work unit to be done.
This flow works as long as the client machine is running.
However, as the client software runs in the background, the participant can shut down his machine at any time, or it might even crash.
Therefore, the results of the work unit have to be stored recurrently \cite{Beberg2009}.
At the time of writing, the most important disease related to protein folding is COVID-19.
Due to this world-dominating pandemic, "the project grew from $\thicksim$30,000 active devices to over a million devices around the globe" \cite{Zimmerman2020}.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.6\linewidth]{trust-and-reputation-system.png}
\caption{"The life-cycle of an eTC: During the pre-organisation phase, potential members are searched. Then, the eTC is formed (TC formation phase). Afterwards, a TCM is elected and the eTC is in the TC operation phase, where the TCM and the members use strategies, e.g. to observe the environment and control the eTC." \cite{Edenhofer2016}}
\label{trust-reputation}
\end{figure*}
\subsection{Trusted-Desktop-Grid}
\label{tdg}
A Trusted-Desktop-Grid (TDG) at its core is similar to Grid Computing, with the addition of a trust-based selection method.
Like Grid Computing, it consists of a set of agents that collectively work towards a common goal.
However, in contrast, a TDG does not need a central server that determines which agent should do which work unit.
A TDG acts more organically by selecting interaction partners by their reputation.
Furthermore, it can handle multiple submitters of work units.
Every agent can either accept a work unit by giving a pessimistic deadline or reject it.
Even accepted work units can be rejected afterward.
Based on the work unit result, the participant gets rated.
These ratings $r$ are between $-1$ and $1$, meaning bad and good.
Based on a set of ratings $ R = \{r_1,r_2,\ldots,r_n\} $, a reputation is calculated using an aggregation function $\tau(R) \in [0,1]$, with $0.5$ being a neutral reputation \cite{Kantert2016}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{replication-factor.png}
\caption{"We calculate the minimum replication factor $f_{min}$ for each worker. First, we interpolate $f_{min}$ based on the reputation $\tau$ between defined limits (here from the interval $[1.5,5]$). Then, we round $f_{min}$ to the next integer using a roulette wheel random generator." \cite{Kantert2016}}
\label{replication-factor}
\end{figure}
With these reputation values, a minimum replication factor $f_{min}$ can be calculated. (Fig. \ref{replication-factor})
The replication factor describes how many other agents should receive the same work unit to get a trustable result.
An agent with a good reputation will have a low replication factor, as this agent's result is trustable and has to be checked by only a small number of other agents or possibly none at all.
For the calculation, it is necessary to define limits for the replication factor.
A higher maximum limit for the replication factor means more reliability but less throughput, as more agents have to work on the same work unit.
The minimum replication factor represents the number of other agents that calculate the same work unit even if the agent has the highest reputation.
If an agent with a bad reputation ($\tau(R) < 0.4$) is selected, the replication will be higher.
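The following Python sketch illustrates this calculation. The limits $[1.5,5]$ and the roulette-wheel rounding follow Fig.~\ref{replication-factor} and \cite{Kantert2016}; the rescaled-mean aggregation $\tau$ and the exact (linear, decreasing) interpolation are our assumptions.
\begin{verbatim}
# Sketch: reputation aggregation and minimum replication factor.
# The limits [1.5, 5] and the roulette-wheel rounding follow Fig. 3;
# the rescaled-mean tau and the linear interpolation are assumptions.
import random

def tau(ratings):                    # ratings r in [-1, 1]
    return (sum(ratings) / len(ratings) + 1.0) / 2.0  # in [0, 1]

def f_min(reputation, low=1.5, high=5.0):
    raw = high - reputation * (high - low)  # low trust -> high f_min
    base = int(raw)
    frac = raw - base
    # roulette-wheel rounding: round up with probability frac
    return base + (1 if random.random() < frac else 0)
\end{verbatim}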
The minimum replication factor can be used for three distribution strategies.
1) The \textit{Dynamic Random Distribution Strategy (DRDS)} is the simplest of them.
It randomly selects one agent and uses its $f_{min}$ to select the same number of other agents \cite{Kantert2016}.
2) The \textit{Dynamic Ordered Distribution Strategy (DODS)} orders the agents by their $f_{min}$ in the first step.
The first work unit is then distributed to the agent with the lowest $f_{min}$ and to all the following agents until the highest $f_{min}$ among them is satisfied.
The second work unit is then distributed to the next agent with the lowest $f_{min}$ that has no work unit yet \cite{Kantert2016}.
3) The \textit{Dynamic Grouping Distribution Strategy (DGDS)} groups the agents into "trusted" ($\tau > 0.7$), "untrusted" ($\tau \leq 0.4$) or "undecided" ($0.4 < \tau \leq 0.7$).
One "untrusted" agent and up to $\lfloor \frac{f_{min} - 1}{2} \rfloor$ other "untrusted" agents are selected in the initial step.
The same amount of "trusted" agents are selected then.
Lastly, the group gets filled up by "undecided" agents to match the group's highest $f_{min}$.
It ensures that the "untrusted" agents cannot form the majority \cite{Kantert2016}.
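A sketch of the DGDS grouping step, following the thresholds and quotas described above (the selection order and tie-breaking are our assumptions):
\begin{verbatim}
# Sketch of the Dynamic Grouping Distribution Strategy (DGDS).
# Thresholds and quotas follow the description above; which agents
# are picked first and how ties are broken are our assumptions.
def dgds_group(agents, tau, f_min):
    untrusted = [a for a in agents if tau(a) <= 0.4]
    trusted   = [a for a in agents if tau(a) > 0.7]
    undecided = [a for a in agents if 0.4 < tau(a) <= 0.7]

    group = untrusted[:1]                      # one untrusted agent
    if group:
        quota = (f_min(group[0]) - 1) // 2     # floor((f_min - 1) / 2)
        group += untrusted[1:1 + quota]
    n_untrusted = len(group)
    group += trusted[:n_untrusted]             # same number of trusted
    target = max((f_min(a) for a in group), default=0)
    group += undecided[:max(0, target - len(group))]
    return group
\end{verbatim}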
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.7\linewidth]{eTC.pdf}
\caption{This is an eTC with a TCM distributing WUs to the members and inviting a new agent to join, trying to separate the agents with a good reputation from those with a bad one.}
\label{etc}
\end{figure*}
\subsection{Trusted-Desktop-Grid with Trust Communities}
A further approach to the TDG is the concept of Trust Communities (TC).
A TC is a subset of agents participating in a TDG, in which each agent can trust the others.
Every member of a TC is free to leave at any time, and new members are selected by their reputation.
In this article, we use explicit TCs (eTC) to compare them to the folding@home project.
Explicit TCs have the specialty to have a Trust Community Manager (TCM) which organizes the eTC.
The eTC has a life-cycle that repeats (Fig. \ref{trust-reputation}).
In the pre-organization phase, the first step of forming an eTC, all the agents are unrated and begin to rate each other based on the work results.
They receive a rating between -1 (e.g., the work unit was rejected) and 1 (e.g., correct and in time).
As soon as a certain number of agents have strong trust relations, they can decide to form an eTC, based on whether they would profit from this eTC or not.
In this TC formation phase, the agents negotiate with all the other potential members because they might not know every single one yet.
During this process and at every other time, all the agents are free to leave the eTC.
If enough agents have decided to stay, they elect the Trust Community Manager (TCM).
This election can be done by criteria like trust, reputation, or availability.
After this election, the operational phase begins.
In this phase, the TCM's organizational tasks are to monitor the community's performance, remove members that are performing worse than before, and let agents join to increase the eTC's performance.
The monitoring task is also distributed to the other agents, as the TCM cannot monitor all agents by itself.
If the benefit of operating inside an eTC is too low, the agents decide to leave the eTC.
As soon as a certain threshold is reached, the eTC gets dissolved by the TCM.
Then, this whole life-cycle begins again \cite{Edenhofer2016}.
\section{Approaches}
This section will summarize the centralized approach of the folding@home project as it is currently used.
After that, we will elaborate on a second approach that uses the previously introduced Trusted-Desktop-Grid combined with an explicit Trust Community.
\subsection{Centralized Distributed Computing}
As mentioned earlier, the folding@home project uses a centralized approach for selecting workers for the work units.
This approach needs a Server-Client architecture with a server that coordinates all the work units and clients that compute the work units.
As mentioned in \ref{fah}, the assignment server balances the clients' incoming work requests between the work servers, which then forward the work units to the clients.
Theoretically, the client has an unlimited amount of time to process the work unit (WU).
This unlimited amount of time is used when, for example, the user shuts the machine down and the client cannot continue processing.
However, in the usual case, the WU gets processed in a suitable amount of time.
Then the results are returned to the work server.
Based on the WU's complexity, the client receives credits for its work, which are collected on the statistics server.
\subsection{Trust-Based Organic Distributed Computing}
A trust-based approach to Distributed Computing is far more organic.
No strict server-client architecture is needed, but instead, the participants, also called agents, organize themselves using a reputation system.
As already mentioned, we want to take a closer look at the explicit Trust Communities (eTC) and how they can be applied to the folding@home project (Fig. \ref{etc}).
First, we remove the server-client architecture, and every participant of the project becomes an agent.
To make both approaches comparable, the software of the agents, previously known as the clients, has to be limited to only accept work units and not also send new ones to the grid, unless they become the Trust Community Manager.
Each organization that was previously hosting a work-server becomes a fully functional agent with no limitations.
We will call agents with the ability to distribute WUs work agents.
Therefore, these agents could also accept work units from other organizations.
To match the credit system of the folding@home project, we will also give credits for solving WUs.
However, they are shared evenly between all the agents that worked on the same WU.
This sharing of credits makes it preferable to join an eTC as the replication factor is likely lower.
The lower average replication factor leads to fewer agents working on the same WU, resulting in more credits for solving it.
The higher amount of credits makes it profitable for the agents to join an eTC because their only goal is to gain as many credits as possible.
Aiming for the highest performance, we have to consider that overly egoistic agents might try to solve the WUs independently, so they do not need to share credits with other agents.
This behavior has to be prevented, as it lowers the throughput and makes the whole system unprofitable.
As a result, we expect the eTCs to consist of the highest-rated agents, increasing the average performance and, even more importantly, the throughput.
This results from fewer agents being involved in the processing of one work unit, which increases the parallelism.
To reduce the need for a central statistics server, we are using a blockchain to keep track of the credits.
Every agent can check its credit balance itself.
With the new setup, we now initiate the process of forming eTCs.
Each agent randomly distributes its work units to the other agents and rates them afterward by criteria like correctness, time to compute, or rejection.
As a result, the work agents begin to invite the agents with the best reputation to join their eTC.
If enough agents decide to join the eTC, the Trust Community Manager will be elected.
The election can be based on several criteria, e.g., availability or responsiveness.
For simplicity, we will use availability as the main criterion.
Therefore, the work agent that invited the other agents to join the eTC becomes the first TCM.
In case this agent becomes unavailable, the next available agent with the longest time participating in the eTC becomes the next TCM.
The TCM has the role of organizing the eTC.
First, to monitor the performance of all the members and to assign the monitoring role to other agents.
Second, to look for new members with a good reputation that would increase the eTC's performance.
Third, to remove members who have a decrease in performance.
Lastly, to dissolve the eTC as soon as the project is completed.
In our approach, we expand this role also to distribute the work units.
This new role makes it possible to continue distributing WUs if the former agent becomes unavailable and a new TCM is elected.
To enable this role, we need every member of an eTC to store a copy of the binary needed to construct new WUs.
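A short sketch of the assumed election and succession rule (the founding work agent first, then availability combined with the longest eTC membership) could look as follows; the attributes \texttt{available} and \texttt{time\_in\_etc} are illustrative names, not part of any existing implementation.

\begin{verbatim}
def elect_tcm(members, founder):
    # The founding work agent is the first TCM while it is available.
    if founder.available:
        return founder
    # Otherwise, the available agent with the longest time in the
    # eTC becomes the next TCM.
    candidates = [m for m in members if m.available]
    if not candidates:
        return None  # no TCM can be elected at the moment
    return max(candidates, key=lambda m: m.time_in_etc)
\end{verbatim}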
\section{Comparison}
The newly developed trust-based approach might look slightly similar to the centralized one, but it performs differently.
Imagine an agent/client that receives a work unit and does not process it in order to harm the grid or because its owner shut down the agent/client.
With the traditional approach, this work unit would have to be redistributed to a new client after some threshold.
This redistribution costs additional time, while the new approach can use the other agents' results and penalize the non-responding agent by awarding a negative rating.
With this negative rating, it is less likely that this happens again, in contrast to the centralized approach, where the client can still receive new WUs when it is back online.
When already operating in an eTC, the agent's behavior can result in removal from the eTC, which has a more critical impact on preventing this situation.
Another possible situation is when the work server becomes unavailable, which would cause the traditional approach to come to a complete halt as soon as all WUs are finished.
The traditional approach can still collect the results with the collection servers, but the clients cannot receive new WUs.
However, with the new approach, it is possible to keep the work running.
In response to the work unit distributing TCM becoming unavailable, a new TCM can be elected.
This new TCM continues the distribution of work units and keeps the system running.
The results are then returned to the initial TCM when it is available again.
With this new ability comes a considerable overhead, as more information has to be exchanged between the agents.
This overhead is one of the disadvantages of the trust-based approach.
Since the agents with the best reputation are selected to form an eTC, higher throughput is another advantage of the new approach over the traditional one.
If the WUs are smaller and well parallelizable, the higher throughput becomes even more significant regarding the grid's performance.
A considerable disadvantage of the trust-based approach is the necessary complexity of the agent's software.
The former clients can now take over the task of distributing WUs, which was previously exclusive to the work server.
\section{Conclusion}
We presented the concepts of a Trusted-Desktop-Grid and an explicit Trust Community and used these to develop a trust-based extension for the folding@home project.
This approach shows that trust-based distributed computing is indeed applicable to the folding@home project.
Furthermore, it gives some performance and reliability advantages over the centralized approach, although some overhead will occur as a drawback.
Nevertheless, the new approach could take the abilities of the folding@home project even further.
\bibliographystyle{IEEEtran}
\section{Rationale}\label{introduction}
Sunspots have always been benchmarks
to test our understanding of magneto
convection. It is known for long
that magnetic forces impede the
free plasma motions, thus reducing the
efficiency of convection \citep{bie41,cow53}.
However,
convection occurs in sunspots despite the
strength of the magnetic field, and the
high conductivity of the photospheric
plasma. The problem arises as to what is the
mode of convection, i.e., how
magnetic fields and plasma flows
adjust one another to allow
transporting the energy that balances
the radiation losses.
This long-lasting problem is far from
being settled, and it is particularly
severe in the penumbrae of sunspots with
predominantly horizontal magnetic fields
and mass flows
\citep[for recent reviews see, e.g.,][]{sol03,tho04,sch08,san09,tho09}.
From an observational point of view, the problem
lies in the small physical
scales at which the convective transport
is organized. Even
with the best spatial resolutions achieved
at present, we cannot follow the rise, cooling,
and subsequent submergence of plasma blobs.
The topology of the magnetic field lines and
flows must be inferred indirectly.
The present paper is devoted to analyze
and interpret a recent observation
that may be central to constrain the
topology of the magnetic fields in penumbrae.
\citet{ich07} report the presence
of a strongly redshifted magnetic component in the
penumbrae of sunspots with a polarity opposite to the
main sunspot polarity. This component shows up throughout
the penumbra, a property used to argue
that the redshift must be due to vertical velocities.
The observations were carried out with the satellite
{\em Hinode} \citep[][]{kos07},
which yields a spatial resolution of 0\farcs 32
at the working wavelength
\citep[$\simeq$\,6302\,\AA;][]{tsu08}.
\citeauthor{ich07}'s
finding seems to be at variance with the magnetograms
taken by \citet{lan05,lan07} with the Swedish Solar Tower
\citep[SST,][]{sch02}, which do not show magnetic fields of reverse polarity
in penumbrae.
This poses a serious problem since SST has
twice the {\em Hinode} spatial resolution and, therefore,
it should be simpler for SST to resolve and
detect mixed polarities. SST magnetograms only
reveal a decrease of
the magnetograph signals coinciding with the
dark cores, i.e., the dark lanes outlined
by bright filaments discovered by \citet{sch02}.
This work shows how {\em Hinode} and SST observations
can be naturally understood within the two
component semi-empirical model penumbra derived
by \citet[][hereafter SA05]{san04b},
provided that the dark cores are associated
with the reverse polarity.
SA05 works out the model
MIcro-Structured Magnetic Atmospheres (MISMAs\footnote{
The acronym was coined by \citet{san96}
to describe magnetic atmospheres having optically-thin
substructure, which naturally produce asymmetric
spectral lines.})
required to quantitatively
reproduce the asymmetries of the
Stokes profiles\footnote{As usual, the Stokes parameters
are used to characterize
the polarization; $I$ for the intensity,
$Q$ and $U$ for the two independent types
of linear polarization, and $V$ for the
circular polarization. The
Stokes profiles are
graphs of $I$, $Q$, $U$ and $V$ versus
wavelength for a particular spectral line.
They follow well defined symmetries when
the atmosphere has constant magnetic field
and velocity
\citep[see, e.g.,][]{lan92}.}
observed in a large sunspot.
Other inversion techniques have succeeded in
reproducing the observed line shapes
\citep[e.g.,][]{san92b,wes01a,mat03},
but there is something unique to the MISMA inversion,
namely, the model demands two opposite
polarities. This unexpected result
has been often criticized as unreal
\citep[e.g.,][]{lan06,bel09}, however,
it is the ingredient that
naturally explains \hinode\ reversals.
The MISMA model sunspot includes two
magnetic components.
The major component
contains most of the mass of each resolution element,
and it has the polarity of the sunspot.
It is generally combined with a minor component
of opposite polarity and having large velocities.
In typical 1\arcsec-resolution observations, the outcoming
light is systematically dominated by the major component,
and the resulting Stokes profiles have rather regular
shapes. An exception occurs in the so-called
{\em apparent neutral line}, where
the Stokes~$V$ profiles show a characteristic shape with
three or more lobes termed {\em cross-over effect}
\citep[see][and references therein]{san92b}.
At the neutral line the mean magnetic
field vector is perpendicular to the
line-of-sight, and
the contribution of the major component almost
disappears
in Stokes~$V$
due to projection effects.
The cancellation of the two components is expected to be
less effective when improving the spatial resolution,
leading to
the appearance of cross-over profiles.
Actually, {\em Hinode} often finds
cross-over Stokes $V$ profiles, and they show
up precisely at the location
of the reverse polarities
\citep[Fig.~5 in][and also \S~\ref{hinode}]{ich07}.
The observed cross-over profiles have two polarities:
the main sunspot polarity close to the line
center, and the reverse polarity at the far red wing.
Since the
reverse polarity patches detected in penumbrae
by {\em Hinode}
produce cross-over profiles, they
seem to correspond to structures where the
polarity is not well defined, with positive
and negative polarities coexisting in each pixel.
The paper is structured as follows:
\S~\ref{model} shows how the model MISMAs
from \paperiii\ qualitatively
reproduce both {\em Hinode} and SST observations.
We work out a simple model penumbral filament to show that it
grasps the essential features of the observed ones. The
same agreement is found when the model MISMAs are inferred
by fitting
actual
{\em Hinode} Stokes profiles (\S~\ref{hinode}).
The implications in the context of the
penumbral magnetic field topology and the Evershed
effect are discussed in \S~\ref{discussion}, where
we also put forward a specific test that could
confirm or falsify our explanation.
Empirical and theoretical
difficulties for the dark cores to be
associated with the reverse polarities
are also discussed in \S~\ref{discussion}.
\section{Model MISMA for penumbral filaments with
dark core}\label{model}
As we describe in the
introductory section, the model MISMAs often
require two magnetic components
with opposite polarities to
reproduce the observed Stokes profiles.
The major component has the polarity
of the sunspot, and it is combined with a
minor component of opposite polarity and having
large velocities.
The outcoming
light is dominated by the major component, so that
the reverse polarity seldom produces an
obvious signal in the spatially
integrated Stokes profiles.
Within this scenario, improving
the spatial resolution would reduce the
spatial smearing, allowing extreme cases
to show up.
In order to mimic the effect of improving
spatial resolution,
several randomly chosen model MISMAs in \paperiii\
were modified by increasing the fraction of
atmosphere occupied by the minor component.
Now the minor
\footnote{
Here and throughout, {\em minor} and {\em major}
refer to the two components in the model sunspot by SA05.
When applied to the components in the model atmospheres
worked out in the paper, it only implies that their magnetic
and velocity properties are similar to the minor and
major components in SA05.
}
component shows up in Stokes~$V$.
The behavior described next is common
to all the models, but we only examine in detail
the example given in
Fig.~\ref{almeida-hinode0}.
The resulting Stokes~$I$, $Q$ and $V$
profiles
of {Fe}~{\sc i}~$\lambda$6302.5~\AA\ are represented
as solid lines in Figs.~\ref{almeida-hinode0}a,
\ref{almeida-hinode0}b and \ref{almeida-hinode0}c,
respectively. They correspond
to a point in the
limb-side penumbra of a sunspot at $\mu=0.95$ (18$^\circ$
heliocentric angle).
Note how Stokes~$I$ is redshifted and
deformed, and how Stokes~$V$ shows the cross-over
effect.
Consequently, the improvement of spatial resolution
with respect to traditional earth-based spectro-polarimetric
observations naturally explains the abundance
of cross-over Stokes~$V$ profiles found by {\em Hinode}.
Figures~\ref{almeida-hinode0}a, \ref{almeida-hinode0}b,
and \ref{almeida-hinode0}c
also show the case where the major component
dominates (the dashed line). The strong
asymmetries have disappeared, rendering
Stokes~$V$ with reasonably
antisymmetric shape and the
sign of the dominant
polarity.
Recall that the two sets of Stokes profiles
in Figs.~\ref{almeida-hinode0}a,
\ref{almeida-hinode0}b, and
\ref{almeida-hinode0}c
(the solid lines and the dashed lines)
have been produced with exactly
the same magnetic field vectors
and mass flows (shown in
Figs.~\ref{almeida-hinode0}e and \ref{almeida-hinode0}f).
The atmospheres differ because of the relative importance
of major and minor components, and because of a global
scaling factor in the temperature stratification.
One of them is
some 80\% cooler than the other one.
The coolest
renders
asymmetric profiles with low continuum
intensity, suitable to
mimic dark features (see the Stokes~$I$ continua
in Fig.~\ref{almeida-hinode0}a).
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{dark_corec.ps}
\caption{
(a) Stokes~$I$ profiles in one of the representative
model MISMAs in \paperiii , which has been
slightly modified to represent a dark core
(the solid line), and its bright sides (the dashed line).
They are normalized to the quiet Sun continuum intensity.
(b) Stokes~$Q$ profiles.
(c) Stokes~$V$ profiles.
(d) Continuum optical depth $\tau_c$ vs height in the
atmosphere for the dark core and the bright
sides, as indicated in the inset.
(e) Magnetic field strength vs height for the two
magnetic components of
the model MISMAs. They are identical for
the dark core and the bright sides.
(f) Velocities along the magnetic field lines for
the two magnetic components of the model MISMAs.
They are identical for
the dark core and the bright sides.
}
\label{almeida-hinode0}
\end{figure*}
Understanding {\em Hinode} observations in terms
of MISMAs also explains the lack of reverse polarity
in SST magnetograms. Stokes~$V$ in
reverse polarity regions shows cross-over
effect (Fig.~5 in \citealt{ich07}, and
the solid line in Fig.~\ref{almeida-hinode0}c),
i.e.,
it presents two polarities depending on the
sampled wavelength.
It has the main sunspot polarity
near line center, whereas the polarity
is reversed in the far red wing.
SST magnetograms are taken at line center
($\pm 50$\ m\AA ), which explains why the
reverse polarity does not show up.
A significant reduction of
the Stokes~$V$ signal occurs, though. Such reduction
naturally explains the observed
weakening of magnetic signals in
dark cores
\citep[][ and \S~\ref{introduction}]{lan05,lan07},
provided that
the dark cores are associated with an
enhancement of the opposite polarity,
i.e., if the dark cores produce cross-over profiles.
In order to illustrate the argument, we have
constructed images, magnetograms, and dopplergrams
of a (na\"\i ve) model dark-cored filament.
It is formed by a uniform 100 km wide dark strip,
representing the dark core,
bounded by two bright strips of the same width,
representing the bright sides.
The Stokes profiles of the dark core have been
taken as the solid lines in Figs.~\ref{almeida-hinode0}a
and \ref{almeida-hinode0}c, whereas the bright sides are modelled
as the dashed lines in the same figures.
The color filters employed
by \citet{lan05,lan07}
are approximated by Gaussian functions
of 80\,m\AA\ FWHM, and shifted
$\pm 50$\,m\AA\ from the line center
(see the dotted lines in Fig.~\ref{almeida-hinode0}a).
The magnetogram signals are computed from the
profiles as
\begin{equation}
M=-{{\Delta\lambda}\over{|\Delta\lambda|}}
{{{\int V(\lambda) f(\lambda-\Delta\lambda)\,d\lambda}}\over{
\int I(\lambda) f(\lambda-\Delta\lambda)\,d\lambda}},
\end{equation}
with the wavelength $\lambda$ referred
to the central wavelength of the line,
$f(\lambda)$ the transmission curve of the
filter centered at $\lambda=0$, and $\Delta\lambda=-50$\,m\AA .
Similarly, the Doppler signals are given by
\begin{equation}
D={{\Delta\lambda}\over{|\Delta\lambda|}}
{{\int I(\lambda)\, [f(\lambda+\Delta\lambda)-f(\lambda-\Delta\lambda)]\,d\lambda}
\over {\int I(\lambda)\, [f(\lambda+\Delta\lambda)+
f(\lambda-\Delta\lambda)]\,d\lambda}},
\end{equation}
but here we employ the Stokes~$I$ profile of the non-magnetic
line used by
\citeauthor{lan07}~(\citeyear{lan07};
i.e., {Fe}~{\sc i}~$\lambda$5576~\AA).
When $\Delta\lambda < 0$,
the signs of $M$ and $D$ ensure
$M >0$ for the main polarity of the sunspot,
and $D> 0$ for redshifted profiles.
The continuum intensity
has been taken as $I$ at -0.4\,{\rm \AA} from the
line center. The continuum image of this model filament
is shown in Fig.~\ref{almeida-hinode1}, with the dark core
and the bright sides marked as DC and BS, respectively.
The dopplergram and the magnetogram are also included in the same
figure. The dark background in all images indicates
the level corresponding to no signal.
In agreement with \citeauthor{lan07} observations,
the filament shows redshifts ($D > 0$), which are
enhanced in the dark core.
In agreement with \citeauthor{lan07},
the filament shows the main polarity of the sunspot
($M > 0$), with the signal
strongly reduced in the dark core.
Figure~\ref{almeida-hinode1}, bottom,
includes the
magnetogram to be observed at the far red wing
($\Delta\lambda=200$\,m\AA). The dark core now shows
the reverse polarity ($M< 0$), whereas the bright
sides still maintain the main polarity with an
extremely weak signal. This specific
prediction of the modeling is amenable
for direct observational test (see \S~\ref{discussion}).
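For reference, the following minimal Python sketch shows how the magnetogram and dopplergram signals $M$ and $D$ defined above can be evaluated numerically. It is only an illustration of the definitions in the text: the wavelength grid (in \AA , referred to the line center) and the Stokes profiles are assumed to be given as arrays, and the Gaussian transmission uses the 80\,m\AA\ FWHM quoted above.

\begin{verbatim}
import numpy as np

FWHM = 0.080                                 # filter FWHM [Angstrom]
SIGMA = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def filt(wl, center):
    # Gaussian transmission curve centered at wavelength `center`.
    return np.exp(-0.5 * ((wl - center) / SIGMA) ** 2)

def magnetogram(wl, stokes_i, stokes_v, dlam=-0.050):
    # M as defined above; with dlam < 0 the sign convention gives
    # M > 0 for the main polarity of the sunspot.
    sign = -dlam / abs(dlam)
    num = np.trapz(stokes_v * filt(wl, dlam), wl)
    den = np.trapz(stokes_i * filt(wl, dlam), wl)
    return sign * num / den

def dopplergram(wl, stokes_i, dlam=-0.050):
    # D as defined above, to be fed with the Stokes I profile of the
    # non-magnetic line; D > 0 corresponds to redshifted profiles.
    sign = dlam / abs(dlam)
    f_red, f_blue = filt(wl, -dlam), filt(wl, dlam)
    num = np.trapz(stokes_i * (f_red - f_blue), wl)
    den = np.trapz(stokes_i * (f_red + f_blue), wl)
    return sign * num / den
\end{verbatim}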
\begin{figure}
\centering
\includegraphics[width=0.55\textwidth]{dark_cored.ps}
\caption{
Schematic modeling of SST observations
of penumbral filaments by \citet{lan05,lan07}.
A dark core (DC) surrounded by two
bright sides (BS)
is located in the limb-side
penumbra of a sunspot at $\mu=0.95$
(i.e., 18$^\circ$ heliocentric angle).
The three top images show a
continuum image, a dopplergram,
and a magnetogram, as labelled.
The convention is such that both the
sunspot main polarity
and a redshift produce positive signals.
The dark background in all
images has been included for reference, and
it corresponds to signal equals zero.
The fourth image ({\tt Magneto Red})
corresponds to a magnetogram in the far red wing
of Fe~{\sc i}~$\lambda$6302.5~\AA , and it reveals a
dark core with a polarity opposite
to the sunspot main polarity.
The continuum image and the dopplergram
have been scaled from zero (black) to
maximum (white). The scaling of the two
magnetograms is the same, so that their
signals can be compared directly.
}
\label{almeida-hinode1}
\end{figure}
Two additional remarks on our modeling
are in order. First, the magnetogram signal
in the dark core is much weaker than in the
bright sides, despite the fact that the (average)
magnetic field strength is larger in the core
(see Fig.~\ref{almeida-hinode0}e, keeping in mind that
the minor component dominates).
Second, the model dark core is depressed
in height
with respect to the bright sides.
Figure~\ref{almeida-hinode0}d shows the continuum
optical depth $\tau_c$ as a function of the
height in the atmosphere. When the two atmospheres
are in lateral pressure balance,
the layer $\tau_c=1$ of the
dark core is shifted by some 100~km downward
with respect to the same layer in the
bright sides.
The depression of the observed
layers in the dark core is produced by two
conspiring effects;
the decrease of density associated with the increase
of magnetic pressure \citep[e.g.,][]{spr76},
and the decrease of opacity associated
with the reduction of temperature \citep[e.g.,][]{sti91}.
We mention the association of our dark cores with enhanced
field strength and with geometrical depression because these
properties contrast with some popular models of penumbral
magneto-convection \citep[e.g.][]{sch06,rem08}.
Note, however, that the association between field strength,
brightness and geometric height is far from being
established.
Not all models predict dark features coinciding with weaker
field. The siphon flow model of the Evershed effect has
enhanced field strengths in the downflowing leg
\citep[e.g.,][]{sch02d}. The plasma has already cooled down
when reaching this footpoint and, so, one expects
downflows associated with stronger fields and colder
plasmas.
As for the elevation of the dark cores, there is a solid
observational result disfavoring it.
\citet{sch04} found that the limb-side penumbra and
the center-side penumbra are darker than the rest. They
interpret this observation as an effect of the depression of
the dark penumbral filaments, which are obscured
by the bright ones when observed sideways.
Obviously, this is an average result, but it strongly
suggests that if dark cores are common, then they must be
depressed with respect to the bright sides.
We are showing here how the reduced magnetograph
signals observed in dark cores can be produced
even if their field strength is enhanced.
\section{Reproducing {\em Hinode} Stokes profiles}\label{hinode}
We have gone a step further, and the
exercise in the previous section has been repeated
using model MISMAs
derived directly from {\em Hinode}/SP data.
We use a small
set of seven Stokes profiles selected so that they
represent extreme cases among
the whole range of redshifted and blueshifted profiles.
They were observed in a simple, positive-polarity
sunspot (NOAA10944) when the target was almost at
the solar disk center (heliocentric angle 1\fdg 1), so that the
line-of-sight direction and the vertical direction
coincide to most purposes.
The observation is described in \citet{ich08},
and we refer to this work for images, a
logbook, and further details.
Our data correspond
to those taken at 18:25 UT
on February 28th, 2007.
Normal scan maps were obtained with the
Spectro-Polarimeter (SP) of the Solar
Optical Telescope
\citep[SOT;][]{tsu08,sue08}
aboard Hinode \citep{kos07}.
The SP took full Stokes profiles of
\linea\ and \lineb\ with 0.1\% photometric accuracy,
and a spatial sampling of 0\farcs 16.
The MISMA inversion procedure described in \citet{san97b}
provides fair fits in all cases.
Two examples are shown in Figs.~\ref{r1_fit}
and \ref{b4_fit}. The dotted lines in Fig.~\ref{r1_fit}
correspond to one representative reverse polarity site.
The fit, shown as solid lines, yields the model
atmosphere represented in Fig.~\ref{r1_model}.
The inversions were carried out as described
in \paperiii , and we refer to that paper for
details. The only significant difference
was the setting
up of the absolute wavelength scale, which we zeroed from
the average intensity profile in a quiet Sun
region far from the sunspot. The wavelength of the
core of \lineb\ is assumed to correspond to a
global velocity equal to the convective blueshift
of the line measured by \citet{dra81}.
Figures~\ref{b4_fit} and \ref{b4_model} are
similar to Figs.~\ref{r1_fit} and \ref{r1_model},
except that they represent a point with clear blueshift.
The model atmospheres are similar,
except for the important detail that the
minor component does not have reverse polarity
in the case of these strongly blueshifted profiles.
The four panels in Figs.~\ref{r1_model}
and \ref{b4_model} represent the stratification
with height in the atmosphere
of (a) magnetic field strength, (b) density,
(c) fraction of atmosphere occupied by each
component, and (d) velocity
along magnetic field lines. The minor
component occupies a significant fraction of the
atmosphere (some 40\% in the examples
in the figures), and it has low density and
high field strength. Field strengths and
densities are similar to those found
in \paperiii,
however, the fraction of atmosphere
occupied by the minor component is
significantly higher (almost twice the
typical 20\% in \paperiii ).
This is to be expected since we have selected for
inversion pixels with particularly large asymmetries,
where the contribution of the minor component
must exceed the average to cause a significant
impact on the Stokes profiles.
\begin{figure*}
\centering
\includegraphics[width=.8\textwidth]{r1_fit.ps}
\caption{
Set of Stokes profiles of \linea\ and \lineb\
observed by {\em Hinode} in one of the reverse polarity regions (the dotted
lines).
(The ordinate axis
labels identify the Stokes parameter.)
Note how Stokes~$V$
shows the cross-over effect (i.e., three lobes rather than two).
The solid lines correspond to a MISMA inversion of this
set of profiles, and it renders the model atmosphere shown
in Fig.~\ref{r1_model}.
Wavelengths are referred to the laboratory wavelength
of \linea . The vertical solid lines indicate the
laboratory wavelengths of \linea\ and \lineb .
}
\label{r1_fit}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=.8\textwidth]{r1_model.ps}
\caption{
Model MISMA for one of the typical
cross-over profiles.
It has been derived from inversion of the profiles
shown in Fig.~\ref{r1_fit}, the dotted lines.
(a) Stratification of magnetic field strength.
As the inset in (b) indicates, the minor and the
major components can be identified by the type
of line.
(b) Stratification of density. (c) Fraction of the
atmosphere occupied by the two components.
(d) Stratification of velocity along magnetic field lines.
Note that in order to get the Doppler shifts,
the velocity $U$ has to be corrected for the
inclination of the magnetic field.
As the minor and major components have opposite
polarities, both yield redshifts.
(The magnetic field inclinations
of the major
and minor components are
74$^\circ$\ and 144$^\circ$, respectively,
so that the major component has positive polarity
whereas the minor component has negative polarity.
)
The symbols correspond to the quantities
used as free parameters during fitting,
which set the full stratification of the atmosphere
via MHD constraints \citep{san97b}.
}
\label{r1_model}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=.8\textwidth]{b4_fit.ps}
\caption{
Same as Fig.~\ref{r1_fit} but for one of the
strongly blueshifted regions. In this case the
solid lines correspond to the model atmosphere in
Fig.~\ref{b4_model}.
}
\label{b4_fit}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=.8\textwidth]{b4_model.ps}
\caption{
Model MISMA reproducing one of the typical blueshifted
profiles (the dotted lines in Fig.~\ref{b4_fit}).
See caption of Fig.~\ref{r1_model} for
a description of the various plots.
The inclinations
of the major
and minor components are
66$^\circ$\ and 39$^\circ$, respectively,
so that both components have positive polarity.
After correcting $U$ for the magnetic field
inclination,
the velocity of the minor component
along a vertical line-of-sight corresponds
to blueshifts. The change
with respect to Fig.~\ref{r1_model} is due
to the flip of the minor component polarity,
which is negative in Fig.~\ref{r1_model} and
positive here.
}
\label{b4_model}
\end{figure*}
We have repeated the exercise leading to the synthetic
magnetograms and dopplergrams in Fig.~\ref{almeida-hinode1},
but using the model MISMAs in Figs.~\ref{r1_model} and \ref{b4_model}.
Specifically, we use the profiles in the blueshifted region to
represent the bright sides,
and the redshifted profiles for the
dark core.
The result is shown in Fig.~\ref{hinode3}. The main features
of Fig.~\ref{almeida-hinode1} remain: (1) the bright
filament has a dark core, (2) the line-center
magnetogram has a weakening coinciding with the dark core,
(3) Bright sides are blueshifted with respect to the dark cores,
and (4) the far red wing magnetogram
shows opposite polarity coinciding
with the dark core. Note that features 1--3 are in agreement
with SST observations.
A clarification may be appropriate.
The association between the Stokes
profiles in Fig.~\ref{r1_fit} and dark cores,
and the profiles in Fig.~\ref{b4_fit} and
bright sides is a mere working hypothesis.
\hinode\ spectra barely resolve
bright sides and dark cores
and, therefore, the used profiles
do not correspond to identifiable
bright sides and dark cores.
We have selected them because
they illustrate the properties to be expected
for bright sides and dark cores
according to the modeling in
\S~\ref{model}. Even with limited
resolution, just by chance,
some pixels may have enhanced contribution of
bright sides and dark cores.
\begin{figure}
\includegraphics[width=0.55\textwidth]{dark_core_hinodea.ps}
\caption{
Set of synthetic images, dopplergrams and magnetograms
equivalent to Fig~\ref{almeida-hinode1}, but
using the model MISMAs directly derived from {\em Hinode}
spectra to represent the dark core and its bright sides.
The main features remain as in Fig~\ref{almeida-hinode1}.
}
\label{hinode3}
\end{figure}
\section{Discussion}\label{discussion}
The model MISMAs by SA05 predict and produce
strongly redshifted reverse polarity
structures
similar to those
found by {\em Hinode}
in penumbrae (\S~\ref{introduction}).
In addition to pointing out this agreement
(\S~\ref{model}),
we have applied the kind of modelling employed
by SA05 to quantitatively reproduce some representative
very asymmetric Stokes profiles observed by
{\em Hinode} (\S~\ref{hinode}).
In order to fit the
Stokes~$V$ profiles with three lobes observed
in reverse polarity regions
(cross-over profiles), the model MISMAs
have two
components of opposite polarity in each
resolution element. The minor component holds the
reverse polarity, and it always carries strong magnetic-field-aligned
flows.
High spatial resolution SST magnetograph observations of
penumbra do not show reverse polarities. They just
indicate a
weakening of the Stokes~$V$ signal coinciding
with the dark cores \citep[][]{lan05,lan07}.
The presence of reverse polarities and
the absence of their signals in SST
magnetograms can be explained if the dark cores are
associated with reverse polarity Stokes~$V$ profiles.
Such association has not been revealed so far because the
existing magnetograms were taken at line core,
whereas the reverse polarity only shows up at the
far red wing of the spectral lines.
By tuning the bandpass of SST magnetograms to
the appropriate wavelength, this specific prediction
of our modeling can be tested
observationally\footnote{Even if our prediction
turns out to be incorrect,
the disagreement between {\em Hinode} and SST magnetograms
is a serious problem that urges solution.}.
The association between cross-over Stokes~$V$
profiles and dark cores is still a mere conjecture. However,
we would like to mention an independent Hinode/SP observation
which also suggests such association.
\citet[][]{bel07} find and discuss the case of
a limb-side penumbra dark core which clearly shows
cross-over Stokes~$V$ profiles (see their Fig.~2).
As we argue in the introduction, Hinode/SP spatial
resolution does not suffice to properly resolve
dark cores, but \citeauthor{bel07} observation is
encouraging.
It indicates that the dark cores are associated
with several magnetic field inclinations
in the resolution element.
The magnetic fields that we use in \S~\ref{hinode}
to reproduce the observed
asymmetries are rather horizontal.
However, the flows along field lines are so intense
(in excess of 10 km~s$^{-1}$ in the minor component;
see the bottom right
panels in Figs.~\ref{r1_model}
and \ref{b4_model}) that
the vertical component of the velocities are
of the order of a few~km\,s$^{-1}$.
Order of magnitude estimates show that
1~km\,s$^{-1}$
suffices to explain the transport of energy
by convection in penumbra
\citep[e.g., ][]{spr87,stei98,san09}.
For the transport to be effective,
vertical velocities of such magnitude should
be present everywhere. It is still unclear whether
such large velocities
are common enough to be responsible for the
required convective transport, and such possibility
has to be studied in detail.
{\em Hinode} observations show the
reversals to prefer the outer penumbra, whereas
asymmetric blueshifted profiles (like those in
Fig.~\ref{b4_fit})
cluster toward the inner penumbra \citep{ich07}.
One might think that this fact compromises our
interpretation, because dark cores tend to appear in the
inner penumbra. However, beware of oversimple
interpretations of {\em Hinode} spectra.
{\em Hinode}/SP does not resolve individual dark cores and
bright sides, but spatially integrate them.
If the bright sides dominate, then the
resulting Stokes~$V$ profiles would show properties
of the bright sides, even if the pixel contains a dark core. Actually,
blueshifted profiles seem to be associated
with bright continuum features, supporting that
the bright sides dominate in these points. As one
moves toward the outer penumbra, the magnetic field of the bright sides
becomes more horizontal, reducing the Stokes~$V$ signals,
and allowing the dark core polarization to show up. In
agreement with this conjecture,
the reverse polarity patches seem to coincide with dark lanes
\citep{ich07}.
Insufficient resolution can also be
invoked to explain the different morphological
appearance of the \hinode\ polarity
reversals and the SST dark cores. Dark cores are
elongated features,
but the locations of downflows described by \citet{ich07}
are much more point-like features. Cross-over Stokes~$V$
signals critically depend on the orientation of the magnetic
fields with respect to the line-of-sight. Small
modifications of the magnetic field geometry
would make the reverse polarity unobservable,
overwhelmed by the Stokes~$V$ signals of
the main sunspot polarity. The reverse polarity
shows up only when a number of conditions are met,
but the need to satisfy several delicate tradeoffs
makes the presence of identifiable reverse polarities
rare; therefore, they tend to be spatially scattered,
discontinuous, and so point-like. On the contrary,
these delicate balances do not affect the intensity and,
therefore, unpolarized light SST images tend to
show more continuous structures with the filamentary
appearance characteristic of penumbrae.
A final important comment is in order.
Stokes~$V$ profiles like those in Figs.~\ref{r1_fit} and
\ref{b4_fit} have net circular polarization (NCP), i.e.,
the wavelength integral of the Stokes~$V$ profile differs
from zero. NCP can {\em only} be produced by
variation of the magnetic field and velocity
{\em along} the line-of-sight,
a well known fact from the early works by
\citet{ill75} and \citet{aue78} trying to explain the
broad-band circular polarization in
sunspots \citep[see, e.g.,][ and references therein]{san92b}.
This implies that strongly asymmetric Stokes~$V$ profiles
showing NCP will be always present in sunspots,
no matter the spatial resolution of the observation
{\em across} the line-of-sight.
Even if we improve the resolution of our telescope
to infinity, we will never be able to separate the
penumbrae into pixels where the magnetic field is uniform.
In other words, resolving the fine scale structure of
the penumbral magnetic
field is not (only) a question of improving the
spatial resolution, but it requires understanding
line asymmetries.
Whether this understanding requires MISMAs or can
be accomplished with a smoother magnetic field
distribution is still a matter of debate
(e.g., SA05, Sect.~5; \citeauthor{lan06}~\citeyear{lan06}).
\begin{acknowledgements}
{\em Hinode} is a Japanese mission developed and launched by ISAS/JAXA,
with NAOJ as domestic partner and NASA and STFC (UK) as
international partners. It is operated by these agencies
in co-operation with ESA and NSC (Norway).
The work has partly been funded by the Spanish Ministry of Science
and Technology, project AYA2007-66502, as well as by
the EC SOLAIRE Network (MTRN-CT-2006-035484).
This work was partly carried out at the
NAOJ Hinode Science Center, which is
supported by the Grant-in-Aid for Creative Scientific Research,
The Basic Study of SpaceWeather Prediction from MEXT,
Japan (Head Investigator: K. Shibata),
generous donations from Sun Microsystems,
and NAOJ internal funding.
\end{acknowledgements}
\section{Introduction} \label{sec-intro}
As the population grows in urban areas, commuting between and within large cities is time and resource demanding. Due to the growing passenger demand, the number of vehicles on the road for both public and private transportation has increased to handle the demand.
The public transportation system is unable to keep up with the demand in terms of service quality.
This pushes many people to use personal vehicles for work commute.
In the United States, personal vehicles are the main transportation mode~\cite{CSS20}.
However, the occupancy rate of personal vehicles in the United States was 1.6 persons per vehicle in 2011~\cite{USDT-G,USDTFHA-S} (and decreased to 1.5 persons per vehicle in 2017~\cite{CSS20}), which can be a major cause of congestion and pollution.
Many people in populated cities opt for shared mobility for its convenience; this kind of ridesharing/ridehailing service is called mobility-on-demand (MoD) and offered by companies such as Uber, Lyft and DiDi.
Although MoD service may improve convenience, the problem of traffic congestion is not resolved.
This is the reason municipal governments encourage the use of public transit;
the major drawback of public transit is the inconvenience of last mile/leg (or first mile) transportation compared to personal vehicles~\cite{TS14-W}.
With the increasing popularity in ridesharing/ridehailing service, there may be potential in integrating private and public transportation.
From the research report~\cite{TRB16-M}, it is recommended that public transit agencies should build on mobility innovations to allow public-private engagement in ridesharing because the use of shared modes increases the likelihood of using public transit.
As pointed out by Ma et al.~\cite{TRELTR19-M}, some basic form of collaboration between MoD services and public transit already exists (for first and last mile transportation).
There is an increasing interest in collaboration between private companies and public sector entities~\cite{SMT18-R}.
Integrating public and private transportation can be an effective way to solve traffic congestion for work commute.
We investigate the potential effectiveness of integrating public transit with ridesharing of personal vehicles to reduce traffic congestion for work commute.
For example, people who drive their vehicles to work can pick up \textit{riders} (who use public transit regularly) and drop them off at some transit stops, and those riders can take public transit to their destinations.
In this way, riders are presented with a cheaper alternative than ridesharing for the entire trip, and it is more convenient than using public transit only.
The transit system also gets a higher ridership, which matches the recommendation of~\cite{TRB16-M} for a more sustainable transportation system.
Our research focuses on a centralized system that is capable of matching drivers and riders satisfying their trips' requirements while achieving some optimization goal; the requirements of a trip may include an origin and a destination locations, time constraints, capacity of a vehicle, and so on (formal definition in Section~\ref{sec-preliminary}).
When a rider is assigned a driver, we call this a \emph{ridesharing route}, and it is compared with the fastest \emph{public transit route} for this rider, which uses only public transit.
If the ridesharing route is faster than the public transit route, the ridesharing route is provided to both the rider and driver.
To increase the number of rider participants, our system-wide optimization goal is to maximize the number of riders, each of whom is assigned a ridesharing route. We call this the \emph{maximization problem}.
In the literature, there are many papers about standalone ridesharing/carpooling, from theoretical to empirical studies (e.g.,~\cite{TRBM11-A,PNAS17-AM,GLZ20,TRBM20-X}).
For literature reviews on ridesharing, readers are referred to~\cite{EJOR12-A,TRBM13-F,TRBM19-M,SS20-T}.
On the other hand, only a few papers study the integration of public transit with dynamic ridesharing.
Aissat and Varone~\cite{ICEIS15-A} proposed an approach in which a public transit route for each rider is given, and their algorithm tries to substitute part(s) of every rider's route with ridesharing.
Any part of a rider's original transit route is replaced only if ridesharing substitution is better than the original part.
Their algorithm finds the best route for each rider on a first-come first-serve basis (a system-wide optimization goal is not considered) and is computationally intensive.
Huang et al.~\cite{TITS19-H} presented a more robust approach, compared to~\cite{ICEIS15-A},
by combining two networks $N, N'$ (representing the public transit and ridesharing network respectively) into one single routable graph $G$.
The graph $G$ uses the \emph{time-expanded model} to maintain the information about all public vehicles schedule, riders' and drivers' origins, destinations and time constraints.
In general, a \emph{stop node} in $G$ represents a public vehicle's/driver's stop location, and a \emph{time node} represents time events of this vehicle/driver at this stop.
An edge between two nodes implies possible transfer for riders from one vehicle to the other (i.e., the departure time of a vehicle is after the arrival of the other); this also implies a rider can be picked-up/dropped-off from/at a public stop within time constraints.
The authors apply this idea to create the ridesharing network graph $N'$ and connect the two networks $N, N'$ by creating edges between them whenever a rider can be picked-up/dropped-off from/at a public stop within time constraints.
For each rider travel query, a shortest path is found on $G$.
Their approach is also on a first-come first-serve basis and does not achieve a system-wide optimization goal.
Ma~\cite{EEEIC17-M} and Stiglic et al.~\cite{COR18-S} proposed models to integrate ridesharing and public transit as graph matching problems to achieve system-wide optimization goals.
The algorithm presented in~\cite{EEEIC17-M} uses the shareability graph (RV-graph)~\cite{PNAS14-S} and the extension of the RV-graph, called the RTV-graph~\cite{PNAS17-AM}.
In fact, the approach used by Stiglic et al.~\cite{COR18-S} is similar, except~\cite{COR18-S} supports more rideshare match types.
A set of driver and rider trip announcements and a public transit network with a fixed cyclic timetable are given.
For a pre-transit rideshare match, a set of riders is assigned to a driver, and the driver picks up each rider by traveling to each rider's origin, then drops them off at some public transit stops.
For a post-transit rideshare match, a driver picks up a set of riders at a public transit stop and then transports each rider one by one to their destinations.
A set of riders can only be assigned to a driver if certain constraints are met, such as capacity of the driver's vehicle and the travel time constraints of the driver and riders.
Each of the driver and rider is represented by a node. There is an edge between a driver and a rider if the rider can be served by the driver.
If a group of riders can be served by a driver, a node containing the group is created, and an edge between the driver and the group is also created.
From this graph, a matching problem is formulated as an integer linear program (ILP) and solved by standard branch and bound (CPLEX).
The optimization goal in~\cite{EEEIC17-M} is to minimize cost related to waiting time and travel time, but ridesharing routes are not guaranteed to be better than transit routes.
Although the optimization goal in~\cite{COR18-S} aligns with ours, there are some limitations in their approach;
they allow at most two passengers for each rideshare match and use the closest transit stop to the destination as the drop-off stop, and more importantly, ridesharing routes assigned to riders are more likely to be longer than public transit routes.
In this paper, we use a similar solution approach as in~\cite{EEEIC17-M,COR18-S}. We extend~\cite{COR18-S} to eliminate the limitations described above.
We also give approximation algorithms for the optimization problem to ensure solution quality.
Our discrete algorithms allow to control the trade-off between solution quality and computational time.
We conduct a numerical study based on real-life data in Chicago City.
Our main contributions are summarized as follows:
\begin{enumerate}
\setlength\itemsep{0em}
\item We propose a centralized system that integrates public transit and ridesharing for work commute, along with an exact algorithm approach.
\item We prove our maximization problem is NP-hard and give a 2-approximation algorithm for the problem. We show that previous $O(k)$-approximation algorithms~\cite{SWAT00-B,SODA99-C} for the $k$-set packing problem are 2-approximation algorithms for our maximization problem. Our approximation algorithm is more time and space efficient than previous algorithms.
\item We conduct a numerical study based on real-life data to evaluate the potential of having an integrated transit system and the effectiveness of different approximation algorithms.
\end{enumerate}
The rest of the paper is organized as follows.
In Section~\ref{sec-preliminary}, we give the preliminaries of the paper, describe a centralized system that integrates public transit and ridesharing, and define the maximization problem.
In Section~\ref{sec-exact}, we describe our solution approach and exact algorithm. We then propose approximation algorithms in Section~\ref{sec-approximate}.
We discuss our numerical experiments and results in Section~\ref{sec-experiment}.
Finally, Section~\ref{sec-conclusion} concludes the paper.
\section{Problem definition and preliminaries} \label{sec-preliminary}
In the problem \textit{multimodal transportation with ridesharing} (MTR), a set $\mathcal{A} = D \cup R$ of trip announcements is given, where $D$ is the set of driver announcements, $R$ is the set of rider announcements and $D \cap R = \emptyset$.
Each trip announcement is expressed by an integer label $i$, and a trip announcement is referred to as a \emph{trip} for short.
Each trip consists of an individual, a vehicle (for driver trip) and some requirements.
A connected public transit network with a fixed timetable $T$ is given.
In this paper, we assume the timetable $T$ is part of the centralized system and can be accessed quickly.
Specifically, we assume that for any source $o$ and destination $d$ in the public transit network, $T$ gives the fastest travel time from $o$ to $d$.
A \emph{ridesharing route} $\pi_i$ for a rider $i \in R$ is a travel plan using a combination of public transportation modes and ridesharing to reach $i$'s destination satisfying the requirements of $i$, whereas a \emph{public transit route} $\hat{\pi}_i$ for a rider $i$ is a travel plan using only public transportation modes.
The multimodal transportation with ridesharing problem asks to provide at least one feasible route (if possible) for every rider $i \in R$. We denote an instance of multimodal transportation with ridesharing problem by $(N,\mathcal{A},T)$, where $N$ is an edge-weighted directed graph (network) for both private and public transportation.
We call a public transit station or stop just \emph{station}.
The terms rider and passenger are used interchangeably (although passenger emphasizes a rider who has been provided with a ridesharing route).
The requirements of each trip $i$ in $\mathcal{A}$ are specified by $i$'s parameters submitted by the individual.
In general, the parameters of a trip $i$ contain an origin location, a destination location, an earliest departure time, a latest arrival time and a maximum trip time.
A driver trip $i$ also contains a capacity of the vehicle, a preferred path (optional) to reach the destination, a limit on the detour time/distance, and a limit on the number of stops a driver wants to make to pick-up/drop-off passengers.
A rider trip $i$ also contains an acceptance rate of a route with ridesharing, that is,
a ridesharing route $\pi_i$ is given to a rider $i$ if the travel time of $\pi_i$ is shorter than any public transit route $\hat{\pi}_i$ by acceptance rate $\theta_i$ (between 0 and 1). For example, suppose the best public transit route $\hat{\pi}_i$ takes 100 minutes for rider $i$ and $\theta_i = 0.9$. A ridesharing route $\pi_i$ is given to $i$ if $t(\pi_i) \leq \theta_i \cdot t(\hat{\pi}_i) = 90$ minutes, where $t(\cdot)$ is the travel time.
We consider two match types for practical reasons.
\begin{itemize}
\setlength\itemsep{0em}
\item \textbf{Type 1 (rideshare-transit)}: a driver may make multiple stops to pick-up different passengers, but makes only one stop to drop-off all passengers. In this case, the \emph{pick-up locations} are the passengers' origin locations, and the \emph{drop-off location} is a public station.
\item \textbf{Type 2 (transit-rideshare)}: a driver makes only one stop to pick-up passengers and may make multiple stops to drop-off all passengers. In this case, the \emph{pick-up location} is a public station and the \emph{drop-off locations} are the passengers' destination locations.
\end{itemize}
Riders and drivers specify one of the match types to participate in; alternatively, they may declare either type acceptable in hope of increasing the chance of being selected. In the latter case, the system assigns them one of the match types such that the optimization goal is achieved.
The optimization goal is to assign accepted ridesharing route to as many riders as possible.
Formally, we are considering the problem of maximizing the number of passengers, each of whom is assigned a ridesharing route $\pi_i$ for every $i \in R$ such that $t(\pi_i) \leq \theta_i \cdot t(\hat{\pi}_i)$ for any public transit route $\hat{\pi}_i$.
We make some simplifications in our model:
\begin{itemize}
\item Given a source-destination station pair $(o, d)$ in a public transit system with departure time $t$ at $o$, we use a simplified transit system in our experiments to calculate the fastest public transit route from $o$ to $d$.
\item The time it takes to pick-up and drop-off riders, walking time between a ridesharing service and public transit, and waiting time for transit are not considered.
\item Uncertainty in travel times is ignored (we assume average travel time for any route).
\end{itemize}
For a trip $i$ in $\mathcal{A}$, let $o_i$ and $d_i$ denote the origin and the destination of $i$ respectively.
A ridesharing route $\pi_j$ of Type 1 for a passenger $j$ is \emph{feasible} if all requirements of both trips $i$ and $j$ are satisfied ($i$ is the driver), and
there exist paths $(o_i,o_j,s)$ and $(s,d_i)$ in $N$, where $s$ is a drop-off location.
We extend this to a set $\sigma(i) \setminus \{i\} = \{j_1,j_2,\ldots,j_k\}$ of passengers that can be served by driver $i$ ($i \in \sigma(i)$ as well):
there is a feasible ridesharing route $\pi_{j_y}$ for every $j_y \in \sigma(i) \setminus \{i\}$, there exists a path $(o_i,o_{j_1},o_{j_2},\ldots,o_{j_k}, s)$ with drop-off location $s$, and a path from $s$ to $d_i$.
For match Type 2, it is symmetric: a ridesharing route $\pi_j$ offered by $i$ to pick-up $j$ at station $s$ is \emph{feasible} if all requirements of trips $i$ and $j$ are satisfied, and there exist paths in $N$ from $o_i$ to $s$ and from $d_j$ to $d_i$.
The extension to a set of passengers $\sigma(i) \setminus \{i\} = \{j_1,j_2,\ldots,j_k\}$ served by driver $i$ is similar:
for every passenger $j_y \in \sigma(i) \setminus \{i\}$, there is a feasible ridesharing route $\pi_{j_y}$ with $i = d(\pi_{j_y})$ and destination $d_{j_y}$, and there exists a path $(s,d_{j_1},d_{j_2},\ldots,d_{j_k})$ in $N$ with pick-up location $s$, followed by a path from the last passenger's drop-off location $d_{j_k}$ to $d_i$.
A set $\sigma(i)$ is \emph{feasible} if route $\pi_j$ is feasible for every trip $j \in \sigma(i)$, which must satisfy the constraints (requirements) specified by the parameters of trips in $\sigma(i)$ collectively.
We outline a list of general constraints below:
\begin{itemize}
\setlength\itemsep{0em}
\item \textit{Capacity constraint} limits the number of passengers a driver is willing to serve.
\item \textit{Travel time constraint} enforces that each trip (driver and passenger) departs and arrives within its specified departure and arrival time window, and that its total travel duration does not exceed its limit.
\item \textit{Stop constraint} limits the number of unique locations visited by driver $i$ to pick-up all passengers of $\sigma(i)$ (symmetric for Type 2).
\item \textit{Acceptance constraint} enforces the travel duration of each passenger $j$'s ridesharing route to be within the acceptable range ($t(\pi_j) \leq \theta_j \cdot t(\hat{\pi}_j)$ for $0 < \theta_j \leq 1$).
\end{itemize}
A set $\sigma(i)$ containing at least one passenger and satisfying the above constraints is called a \emph{feasible match}.
Therefore, in a rideshare-transit (Type 1) feasible match $\sigma(i)$, all passengers in $\sigma(i)$ are picked-up at their origins and dropped-off at a public station; then $i$ drives to destination $d_i$ while each passenger $j$ of $\sigma(i)$ takes transit to reach destination $d_j$. In a transit-rideshare (Type 2) feasible match $\sigma(i)$, all passengers in $\sigma(i)$ are picked-up at a public station and dropped-off at their destinations; then $i$ drives to destination $d_i$ after dropping off the last passenger.
We describe our algorithms for Type 1 only. Algorithms for Type 2 can be described with the constraints on the drop-off location and pick-up location of a driver exchanged, and we omit the description.
\section{Exact algorithm} \label{sec-exact}
An exact algorithm is presented in this section, which is similar to the matching approach described in~\cite{PNAS17-AM,PNAS14-S} for ridesharing and in~\cite{EEEIC17-M,COR18-S} for MTR.
\subsection{Integer program formulation} \label{sec-exact-IP}
The exact algorithm is summarized as follows.
First, we compute all feasible matches for each driver $i$. Then, we create a bipartite (hyper)graph $H(D,R,E)$, where $D(H)$ is the set of drivers, and $R(H)$ is the set of riders.
There is a hyperedge $e = (i, J)$ in $E(H)$ between $i \in D(H)$ and a non-empty subset $J \subseteq R(H)$ if $\{i\} \cup J$ is a feasible match, denoted by $\sigma_J(i)$, for driver $i$.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.66\linewidth]{figs/graph.pdf}
\caption{A bipartite hypergraph for all possible matches of an instance $(N,\mathcal{A},T)$.}
\label{fig-hypergraph}
\end{figure}
An example is given in Figure~\ref{fig-hypergraph}.
Any driver $i$ and rider $j$ with no feasible match are removed from $D(H)$ and $R(H)$ respectively, so $H$ contains no isolated vertices.
Let $E_j$ be the set of edges in $E$ associated with trip $j \in \mathcal{A}$, that is, for any $e=(i,J)$ in $E_j$, either $i=j$ or $j \in J$.
For an edge $e=(i,J)$, let $A(e) = \{i\} \cup J$ and $p(e) = |J|$ be the number of riders represented by $e$.
To solve the problem of maximizing the number of passengers, each of whom is given a ridesharing route (referred to as the \textit{\textbf{maximization problem}}), we give an integer program formulation:
\begin{alignat}{4}
& \text{maximize } & &\sum_{e \in E(H)} p(e) \cdot x_{e} & \qquad \label{obj-1}\\
& \text{subject to } & \qquad &\sum_{e \in E_j} x_{e} \leq 1, & & \forall \text{ } j \in \mathcal{A} \label{constraint-1}\\
& & &x_{e} \in \{0,1\}, & &\forall \text{ } e \in E(H) \label{constraint-2}
\end{alignat}
The binary variable $x_e$ indicates whether the edge $e = (i, J)$ is in the solution ($x_e = 1$) or not ($x_e = 0$).
If $x_e = 1$, it means that all passengers in $J$ are served by $i$.
Inequality~(\ref{constraint-1}) guarantees that each driver serves at most one feasible set of passengers and each passenger is served by at most one driver.
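For intuition only, the feasible region defined by (\ref{constraint-1})-(\ref{constraint-2}) can be explored by an exhaustive search over pairwise trip-disjoint hyperedges. The Java sketch below makes the model concrete (all type and method names are illustrative, and this is not the solver used for our experiments); it runs in exponential time and is only suitable for tiny instances.
\begin{verbatim}
import java.util.*;

final class ExactBruteForce {
    // A hyperedge e = (i, J): one driver id and a set of rider ids.
    record Edge(int driver, Set<Integer> riders) {}

    static int maxRiders(List<Edge> edges) {
        return search(edges, 0, new HashSet<>(), new HashSet<>());
    }

    private static int search(List<Edge> edges, int k,
                              Set<Integer> usedDrivers, Set<Integer> usedRiders) {
        if (k == edges.size()) return 0;
        Edge e = edges.get(k);
        // Option 1: skip edge e (x_e = 0).
        int best = search(edges, k + 1, usedDrivers, usedRiders);
        // Option 2: take edge e (x_e = 1) if no trip of e is already used.
        boolean clash = usedDrivers.contains(e.driver())
                || !Collections.disjoint(usedRiders, e.riders());
        if (!clash) {
            usedDrivers.add(e.driver());
            usedRiders.addAll(e.riders());
            best = Math.max(best,
                    e.riders().size() + search(edges, k + 1, usedDrivers, usedRiders));
            usedDrivers.remove(e.driver());
            usedRiders.removeAll(e.riders());
        }
        return best;
    }
}
\end{verbatim}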
\begin{observation}
A match $\sigma(i)$ for any driver $i \in D$ is feasible if and only if for every subset $P$ of $\sigma(i)\setminus\{i\}$, the match between $i$ and $P$ is feasible~\cite{TRBM15-S}.
\label{obs-1}
\end{observation}
From Observation~\ref{obs-1}, we have the following proposition.
\begin{prop}
Let $i_1, i_2,\ldots, i_j$ be a set of drivers in $D$ and let $P$ be a maximal set of passengers served by $i_1, \ldots, i_j$.
There always exists a solution such that $\sigma(i_a) \cap \sigma(i_b) = \emptyset$ for $1 \leq a \neq b \leq j$ and $\bigcup_{1 \leq a \leq j} (\sigma(i_a) \setminus \{i_a\}) = P$.
\label{prop-1}
\end{prop}
\begin{theorem}
Given a bipartite hypergraph $H(D,R,E)$ representing an instance of the multimodal transportation with ridesharing maximization problem, an optimal solution to the integer program (\ref{obj-1})-(\ref{constraint-2}) solves the maximization problem.
\end{theorem}
\begin{proof}
From inequality (\ref{constraint-1}) in the integer program, the solution found by the integer program is always feasible to the maximization problem.
By Proposition~\ref{prop-1} and the objective function~(\ref{obj-1}), the maximum number of passengers is served.
\end{proof}
\subsection{Computing feasible matches}
Let $i$ be a driver in $D$. The maximum number of feasible matches for $i$ is $\sum_{j = 0}^{n_i} \binom{|R|}{j}$.
Assuming the capacity $n_i$ is a very small constant (which is reasonable in practice), this partial sum of binomial coefficients is polynomial in $|R|$, namely $O((|R|+1)^{n_i})$.
Let $K = \max_{i \in D} {n_i}$ be the maximum capacity among all vehicles (driver trips).
Then, in the worst case, $|E(H)| = O(|D| \cdot (|R|+1)^K)$.
We compute all feasible matches for each trip in two phases. In phase one, for each driver $i$, we find all feasible matches $\sigma(i)=\{i,j\}$ with one rider $j$.
In phase two, for each driver $i$, we compute all feasible matches $\sigma(i)=\{i,j_1,\ldots,j_p\}$ with $p$ riders, based on the feasible matches of $i$ with $p-1$ riders computed previously, for $p=2$ up to the number of passengers $i$ can serve.
Before describing how to compute the feasible matches, we first introduce some notations and specify the feasible match constraints we consider.
Each trip $i \in \mathcal{A}$ is specified by the parameters $(o_i, d_i, n_i, z_i, p_i, \delta_i, \alpha_i, \beta_i, \gamma_i, \theta_i)$, where the parameters are summarized in Table~\ref{table-notation} along with other notation.
\begin{table}[!ht]
\footnotesize
\centering
\begin{tabular}{| c | l |}
\hline
\textbf{Notation} & \textbf{Definition} \\ \hline
$o_i$ & Origin (start location) of $i$ (a vertex in $N$) \\
$d_i$ & Destination of $i$ (a vertex in $N$) \\
$n_i$ & Number of seats (capacity) of $i$ available for passengers (driver only) \\
$z_i$ & Detour time $i$ willing to make for offering services (driver only) \\
$p_i$ & A preferred path of $i$ from $o_i$ to $d_i$ in $N$ (driver only) \\
$\delta_i$ & Maximum number of stops $i$ willing to make to pick-up passengers for match \\
& Type 1 and to drop-off passengers for match Type 2. \\
$\alpha_i$ & Earliest departure time of $i$ \\
$\beta_i$ & Latest arrival time of $i$ \\
$\gamma_i$ & Maximum trip time for $i$ \\
$\theta_i$ & Acceptance rate ($0 < \theta_i \leq 1$) for a ridesharing route $\pi_i$ (rider only) \\
$\pi_i$ & Route for $i$ using a combination of public transit and ridesharing (rider only) \\
$\hat{\pi}_i$ & Route for $i$ using only public transit (rider only) \\
$d(\pi_i)$ & The driver of ridesharing route $\pi_i$ \\
$t(p_i)$ & Travel time for traversing path $p_i$ by private vehicle \\
$t(\pi_i)$ \& $t(\hat{\pi}_i)$ & Travel time for traversing route $\pi_i$ and $\hat{\pi}_i$ resp. \\
$t(u,v)$ \& $\hat{t}(u,v)$ & Travel time from location $u$ to $v$ by private vehicle and public transit resp. \\ \hline
\end{tabular}
\caption{Parameters for a trip announcement $i$.}
\label{table-notation}
\end{table}
The maximum trip time $\gamma_i$ of a driver $i$ can be calculated as $\gamma_i = t(p_i) + z_i$ if $p_i$ is given, and $\gamma_i = t(o_i,d_i) + z_i$ otherwise. For a passenger $j$, $\gamma_j$ is more flexible; by default, $\gamma_j = t(\hat{\pi}_j)$, where $\hat{\pi}_j$ is the fastest public transit route for $j$.
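For example, if a driver $i$ has no preferred path, $t(o_i,d_i) = 30$ minutes and detour limit $z_i = 10$ minutes, then $\gamma_i = 40$ minutes.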
For any driver $i \in D$, a match $\sigma(i)$ is \emph{feasible} if all trips of $\sigma(i)$ satisfy the following constraints collectively.
(I) \textbf{Capacity constraint}: $|\sigma(i) \setminus \{i\}| \leq n_i$.
(II) \textbf{Travel time constraint}: every trip $j \in \sigma(i)$ departs from $o_j$ no earlier than $\alpha_j$ and arrives at $d_j$ no later than $\beta_j$; the total travel duration of $j$ is at most $\gamma_j$.
(III) \textbf{Stop constraint}: the number of unique locations visited by driver $i$ to pick-up for (Type 1) or drop-off for (Type 2) all passengers of $\sigma(i)$ is at most $\delta_i$.
(IV) \textbf{Acceptance constraint}: for route $\pi_j$ of each passenger $j \in \sigma(i) \setminus \{i\}$, $t(\pi_j) \leq \theta_j \cdot t(\hat{\pi}_j)$, where $\hat{\pi}_j$ is the public transit route with shortest travel time for $j$.
\subparagraph{Phase one (Algorithm 1).}
Now we describe how to compute a feasible match between a driver and a passenger for Type 1. The computation for Type 2 is similar and we omit it.
For every trip $i \in D \cup R$, we first compute the set $S_{do}(i)$ of feasible drop-off locations for trip $i$.
Each element in $S_{do}(i)$ is a station-time tuple $(s, \alpha_i(s))$ of $i$, where $\alpha_i(s)$ is the earliest possible time $i$ can reach station $s$.
The station-time tuples are computed by the following preprocessing procedure.
\begin{itemize}[leftmargin=*]
\setlength\itemsep{0em}
\item We find all feasible station-time tuples for each passenger $j \in R$. A station $s$ is \emph{feasible} for $j$ if $j$ can reach $d_j$ from $s$ within the time window $[\alpha_j, \beta_j]$ while satisfying $t(o_j,s) + \hat{t}(s,d_j) \leq \gamma_j$ and $t(o_j,s) + \hat{t}(s,d_j) \leq \theta_j \cdot \hat{t}(o_j,d_j)$.
\begin{itemize}
\setlength\itemsep{0em}
\item The earliest possible time to reach station $s$ for $j$ can be computed as $\alpha_j(s) = \alpha_j + t(o_j,s)$ without pick-up and drop-off time. Since we do not consider walking time and waiting time, $\alpha_j(s)$ also denotes the earliest departure time of $j$ at station $s$.
\item Let $\hat{t}(s,d_j)$ be the travel time of a fastest public route. $s$ is \emph{time feasible} if $\alpha_j(s) + \hat{t}(s,d_j) \leq \beta_j$, $t(o_j,s) + \hat{t}(s,d_j) \leq \gamma_j$ and $t(o_j,s) + \hat{t}(s,d_j) \leq \theta_j \cdot \hat{t}(o_j,d_j)$.
\end{itemize}
\item Next, we find all feasible station-time tuples for each driver $i \in D$ using a similar calculation.
\begin{itemize}[leftmargin=*]
\setlength\itemsep{0em}
\item Without considering pick-up and drop-off time, the earliest arrival time of $i$ to reach $s$ is $\alpha_i(s) = \alpha_i + t(o_i,s)$. Station $s$ is \emph{time feasible} if $\alpha_i(s) + t(s,d_i) \leq \beta_i$ and $t(o_i,s) + t(s,d_i) \leq \gamma_i$.
\end{itemize}
\end{itemize}
After the preprocessing, Algorithm~1 finds all matches consisting of a single passenger.
For each pair $(i, j)$ in $D \times R$, let $\alpha_i(o_j) = \max\{\alpha_i,\alpha_j - t(o_i,o_j)\}$ be the latest departure time of driver $i$ from $o_i$ such that $i$ can still pick-up $j$ at the earliest possible time; this minimizes the time driver $i$ waits for passenger $j$, and hence minimizes the total travel time of $i$.
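For example, if $\alpha_i$ is 8:00, $\alpha_j$ is 8:20 and $t(o_i,o_j) = 15$ minutes, then $\alpha_i(o_j) = \max\{8{:}00,\, 8{:}20 - 15 \text{ min}\}$, i.e., 8:05; departing earlier would force $i$ to wait at $o_j$, and departing later would only delay the arrival at the drop-off station.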
The process of checking whether the match $\sigma(i) = \{i,j\}$ is feasible, for all pairs $(i,j)$, is performed as in Algorithm~1 in Figure~\ref{alg-feas-single}, where $\beta_i(s) = \beta_i - t(s,d_i)$ and $\beta_j(s) = \beta_j - \hat{t}(s,d_j)$ denote the latest time $i$ and $j$, respectively, can depart from station $s$ and still arrive at their destinations on time.
\begin{figure}[htbp]
\small
\textbf{Algorithm~1} Single passenger
\begin{algorithmic}[1]
\For {each pair $(i, j)$ in $D \times R$}
\For {each station $s$ in $S_{do}(i) \cap S_{do}(j)$}
\State $t_1 = t(o_i,o_j) + t(o_j,s)$; $t_2 = t(o_j,s)$; \hspace*{2mm} /* travel duration for $i$ and $j$ to reach $s$ resp. */
\State $t = \alpha_i(o_j) + t_1$; \hspace*{6mm} /* earliest departure time at station $s$ */
\If {$(t \leq \beta_i(s)) \wedge (t_1 + t(s, d_i) \leq \gamma_i) \wedge (t \leq \beta_j(s)) \wedge (t_2 + \hat{t}(s, d_j) \leq \gamma_j) \wedge (t_2 + \hat{t}(s, d_j) \leq \theta_j \cdot \hat{t}(o_j, d_j))$}
\State create an edge $(i, J=\{j\})$ in $E(H)$ to represent $\sigma(i) = \{i,j\}$.
\State \textbf{break} inner for-loop; \hspace*{2mm} /* can be allowed to run to completion for a better route */
\EndIf
\EndFor
\EndFor
\end{algorithmic}
\caption{Algorithm for computing matches consisting of a single passenger.}
\label{alg-feas-single}
\vspace{-1mm}
\end{figure}
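The feasibility test in Algorithm~1 can be sketched in Java as follows, with $\beta_i(s)$ and $\beta_j(s)$ expanded into their definitions; the types, field names and travel-time oracles below are illustrative and are not part of our implementation.
\begin{verbatim}
import java.util.function.ToDoubleBiFunction;

final class SinglePassengerCheck {
    // Illustrative trip record mirroring the trip parameters.
    record Trip(String origin, String dest, double alpha, double beta,
                double gamma, double theta) {}

    // t: driving-time oracle t(u, v); tHat: public transit oracle.
    static boolean feasible(Trip i, Trip j, String s,
                            ToDoubleBiFunction<String, String> t,
                            ToDoubleBiFunction<String, String> tHat) {
        double toJ = t.applyAsDouble(i.origin(), j.origin());
        double t1 = toJ + t.applyAsDouble(j.origin(), s); // i's driving time to s
        double t2 = t.applyAsDouble(j.origin(), s);       // j's riding time to s
        double alphaIOj = Math.max(i.alpha(), j.alpha() - toJ);
        double tAtS = alphaIOj + t1;                      // earliest departure at s
        boolean driverOk =
                tAtS <= i.beta() - t.applyAsDouble(s, i.dest())  // t <= beta_i(s)
             && t1 + t.applyAsDouble(s, i.dest()) <= i.gamma();
        double rest = tHat.applyAsDouble(s, j.dest());    // j's transit leg from s
        boolean riderOk =
                tAtS <= j.beta() - rest                          // t <= beta_j(s)
             && t2 + rest <= j.gamma()
             && t2 + rest <= j.theta() * tHat.applyAsDouble(j.origin(), j.dest());
        return driverOk && riderOk;
    }
}
\end{verbatim}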
\subparagraph{Phase two (Algorithm 2).}
We extend Algorithm~1 to create matches with more than one passenger.
Let $H(D,R,E)$ be the graph after computing all possible matches consisting of a single passenger (the instance computed by Algorithm~1).
We start by computing feasible matches consisting of two passengers, then three passengers, and so on.
Let $\varSigma(i)$ be the set of matches found so far for driver $i$ and $\varSigma(i,p) = \{\sigma(i) \mid p =|\sigma(i) \setminus \{i\}|\}$ be the set of matches with $p$ passengers.
Let $r_i = (l_0,l_1,\ldots,l_p,s)$ denote an ordered potential path (travel route) for driver $i$ to pick-up all $p$ passengers of $\sigma(i)$ and drop them off at station $s$, where $l_0$ is the origin of $i$ and $l_y$ is the pick-up location (origin) of passenger $j_y$, $1 \leq y \leq p$.
We extend the notion of $\alpha_i(o_j)$, defined above, to all locations of $r_i$.
That is, $\alpha_i(l_p)$ is the latest departure time of $i$ to pick-up all passengers $j_1,\ldots,j_p$ such that the waiting time of $i$ is minimized, and hence, travel time of $i$ is minimized.
All possible combinations of $r_i$ are enumerated to find a feasible path $r_i$; the process of finding $r_i$ is described in the following.
\begin{itemize}
\setlength\itemsep{0em}
\item First, we fix a combination of $r_i$ such that $|\sigma(i)| \leq n_i + 1$ and $r_i$ satisfies the stop constraint. The order of the pick-up origin locations is known when we fix a path $r_i$.
\item The algorithm determines the actual drop-off station $s$ in $r_i = (l_0,l_1,\ldots,l_{p},s)$.
Let $j_{y}$ be the passenger corresponding to pick-up location $l_y$ for $1 \leq y \leq p$, where $l_0 = o_i$.
For each station $s$ in $\bigcap_{0 \leq y \leq p} S_{do}(j_y)$, the algorithm checks if $r_i = (l_0,l_1,\ldots,l_{p},s)$ admits a time feasible path for each trip in $\sigma(i)$.
\begin{itemize}
\item The total travel time (duration) for $i$ from $l_0$ to $s$ is $t_i = t(l_0, l_1) + \cdots + t(l_{p-1},l_{p}) + t(l_p, s)$.
The total travel time (duration) for $j_y$ from $l_y$ to $s$ is $t_{j_y} = t(l_y,l_{y+1}) + \cdots + t(l_{p-1},l_{p}) + t(l_p, s)$, $1 \leq y \leq p$.
\item Since the order for $i$ to pick up $j_y$ ($1 \leq y \leq p$) is fixed, $\alpha_i(l_p)$ can be calculated as $\alpha_i(l_p) = \max\{\alpha_i, \alpha_{j_1} - t(l_0,l_1), \alpha_{j_{2}} - t(l_0,l_1) - t(l_1,l_2), \ldots, \alpha_{j_{p}} - t(l_0,l_1) - \cdots - t(l_{p-1},l_p)\}$ (a numeric example is given after this list).
The earliest arrival time at $s$ for all trips in $\sigma(i)$ is $t = \alpha_i(l_p) + t_i$.
\item If $t \leq \beta_i(s)$, $t_i + t(s, d_i) \leq \gamma_i$, and for $1\leq y\leq p$, $t \leq \beta_{j_{y}}(s)$, $t_{j_y} + \hat{t}(s, d_{j_{y}}) \leq \gamma_{j_{y}}$ and $t_{j_y} + \hat{t}(s, d_{j_{y}}) \leq \theta_{j_{y}} \cdot \hat{t}(o_{j_{y}}, d_{j_{y}})$, then $r_i$ is feasible.
\end{itemize}
\item If $r_i$ is feasible, add the match corresponding to $r_i$ to $H$. Otherwise, check the next combination of $r_i$ until a feasible path $r_i$ is found or all combinations are exhausted.
\end{itemize}
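To illustrate the calculation of $\alpha_i(l_p)$, let $p=2$, $\alpha_i$ be 8:00, $\alpha_{j_1}$ be 8:30, $\alpha_{j_2}$ be 8:40, $t(l_0,l_1) = 20$ minutes and $t(l_1,l_2) = 15$ minutes. Then $\alpha_i(l_2) = \max\{8{:}00,\ 8{:}30 - 20 \text{ min},\ 8{:}40 - 35 \text{ min}\}$, i.e., 8:10: driver $i$ departs at 8:10, reaches $l_1$ at 8:30 exactly when $j_1$ is ready, and reaches $l_2$ at 8:45 without any waiting on the driver's side.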
The pseudo code for the above process is given in Algorithm~2 (Figure~\ref{alg-feas-all}).
\begin{figure}[!htbp]
\small
\textbf{Algorithm~2} Compute all matches
\begin{algorithmic}[1]
\For {$i$ = 1 to $|D|$}
\State $p = 2$;
\While {($p \leq n_i$ and $\varSigma(i,p-1) \neq \emptyset$)}
\For {each match $\sigma(i)$ in $\varSigma(i,p-1)$}
\For {each $j \in R$ s.t. $j \notin \sigma(i)$}
\State /* check if $\sigma(i) \cup \{j\}$ satisfies Observation~\ref{obs-1}, and if not, skip $j$ */
\If {$((\sigma(i) \setminus \{q\}) \cup \{j\}) \in \varSigma(i,p-1)$ for all $q \in \sigma(i) \setminus \{i\}$}
\If {($\sigma(i) \cup \{j\}$ has not been checked) and (feasibleInsert($\sigma(i), j$))}
\State create an edge $(i, J)$ in $E(H)$ to represent $\sigma_J(i) = \{i\} \cup J$, where $J = (\sigma(i) \setminus \{i\}) \cup \{j\}$.
\State add $\sigma(i) \cup \{j\}$ to $\varSigma(i,p)$.
\EndIf
\EndIf
\EndFor
\EndFor
\State $p = p + 1$;
\EndWhile
\EndFor
\\
\textbf{Procedure} feasibleInsert($\sigma(i), j$) \hspace{3mm} /* find a feasible path for $i$ to serve $\sigma(i) \cup \{j\}$ if exists */
\\ Let $r_i = (l_0,l_1,\ldots,l_p,s)$ denote a potential path for driver $i$ to serve the trips in $\sigma(i) \cup \{j\}$.
\For {each station $s$ in $\bigcap_{0 \leq y \leq p} S_{do}(j_y)$}
\For {each combination of $r_i = (l_0,\ldots,l_p,s)$ that satisfies the stop constraint}
\State $t_i = t(l_0, l_1) + \cdots + t(l_{p-1},l_{p}) + t(l_p, s)$; $t_{j_y} = t(l_y,l_{y+1}) + \cdots + t(l_{p-1},l_{p}) + t(l_p, s)$;
\State $t = \alpha_i(l_p) + t_i$; /* the earliest arrival time at $s$ for all trips in $\sigma(i)$ */
\If {$(t \leq \beta_i(s)) \wedge (t_i + t(s, d_i) \leq \gamma_i)$ and, for $1\leq y \leq p$, $(t \leq \beta_{j_{y}}(s)) \wedge (t_{j_y} + \hat{t}(s, d_{j_{y}}) \leq \theta_{j_{y}} \cdot \hat{t}(o_{j_{y}}, d_{j_{y}})) \wedge (t_{j_y} + \hat{t}(s, d_{j_{y}}) \leq \gamma_{j_{y}})$}
\State \Return True;
\EndIf
\EndFor
\EndFor
\\ \Return False;
\end{algorithmic}
\caption{Algorithm for computing matches consisting of multiple passengers.}
\label{alg-feas-all}
\vspace{-1mm}
\end{figure}
It can be easily shown that the calculation of $\alpha_i(l_p)$ indeed minimizes the total travel time of $i$ to reach $l_p$.
\begin{theorem}
Given a feasible path $r_i = (l_0,\ldots,l_p,s)$ for a driver $i$ that serves $p$ passengers in a match $\sigma(i)$, the latest departure time $\alpha_i(l_p)$ calculated above minimizes the total travel time of $i$ to reach $l_p$.
\end{theorem}
\begin{proof}
We prove this by induction on the pick-up locations. For the base case, $\alpha_i(l_1) = \max\{\alpha_i,\alpha_{j_{1}} - t(l_0,l_{1})\}$ is the latest departure time for which $i$ does not need to wait for $j_1$; hence, the total travel time of $i$ to pick-up $j_1$ is minimized with departure time $\alpha_i(l_1)$.
Assume the statement holds for $1 \leq y-1 < p$, that is, $\alpha_i(l_{y-1})$ minimizes the total travel time of $i$ to reach $l_{y-1}$. We prove it for $y$.
From the calculation, $\alpha_i(l_{y}) = \max\{\alpha_i(l_{y-1}), \alpha_{j_{y}} - t(l_0,l_{1}) - t(l_{1},l_{2}) - \cdots - t(l_{y-1},l_y)\}$: either $i$ departs as computed for $l_{y-1}$ and does not wait at $l_y$, or $i$ departs exactly late enough to pick-up $j_y$ upon arrival. By the induction hypothesis, $\alpha_i(l_{y})$ minimizes the total travel time of $i$.
\end{proof}
The running time of Algorithm~2 heavily depends on the number of subsets of passengers to be checked for feasibility.
One way to speed up Algorithm~2 is to use dynamic programming (or memoization) to avoid redundant checks on a same subset.
For each feasible match $\sigma(i)$ with $p$ passengers of a driver $i \in D$, we store every feasible path $r_i = (l_0,l_1,\ldots,l_p,s)$ and, when inserting a new trip, extend from each stored feasible path to minimize the number of ordered potential paths we need to test.
We can further make sure that no path is tested twice during execution.
First, the set $R$ of riders is given a fixed ordering (based on their integer labels).
For a feasible path $r_i$ of a driver $i$, the insertion of a new rider $j$ into $r_i$ is attempted only if $j$ is larger than every rider in $r_i$ according to the fixed ordering.
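A minimal Java sketch of this fixed-ordering extension is given below; representing a path by its sequence of rider labels and abstracting all time-window, stop and acceptance checks into a feasibility predicate are illustrative simplifications.
\begin{verbatim}
import java.util.*;
import java.util.function.Predicate;

final class FixedOrderExtension {
    // Extend each stored pick-up order (a list of rider labels) by one rider
    // whose label exceeds every label already on the path; the new rider may
    // be inserted at any position. Each ordered path is thus generated once:
    // it can only arise by removing its largest label.
    static List<List<Integer>> extend(List<List<Integer>> paths, int numRiders,
                                      Predicate<List<Integer>> feasible) {
        List<List<Integer>> next = new ArrayList<>();
        for (List<Integer> r : paths) {
            int maxLabel = Collections.max(r);
            for (int j = maxLabel + 1; j < numRiders; j++) {
                for (int pos = 0; pos <= r.size(); pos++) {
                    List<Integer> cand = new ArrayList<>(r);
                    cand.add(pos, j);              // try rider j at position pos
                    if (feasible.test(cand)) next.add(cand);
                }
            }
        }
        return next;
    }
}
\end{verbatim}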
A heuristic approach to speed up Algorithm~2 is given in Section~\ref{sec-instances}.
\section{Approximation algorithms} \label{sec-approximate}
We show that the maximization problem defined in Section~\ref{sec-preliminary} is NP-hard and give approximation algorithms for the problem.
When every edge in $H(D,R,E)$ consists of only two vertices (one driver and one passenger), the maximization problem is equivalent to maximum matching, which can be solved in polynomial time.
However, if the edges consist of more than two vertices, they become hyperedges. In this case, the integer program~(\ref{obj-1})-(\ref{constraint-2}) becomes a formulation of the maximum weighted set packing problem, which is NP-hard~\cite{CINP79-GJ,Karp72}.
Our maximization problem is a special case of the maximum weighted set packing problem.
We first show that our maximization problem is indeed NP-hard.
\subsection{NP-hardness}
It was mentioned in~\cite{PNAS14-S} that their minimization problem related to the shareability hyper-network, which is similar to our maximization problem formulation, is NP-complete. However, an actual reduction proof was not described.
In this section, we prove our maximization problem is NP-hard by a reduction from a special case of the maximum 3-dimensional matching problem (3DM).
An instance of 3DM consists of three disjoint finite sets $A$, $B$ and $C$, and a collection $\mathcal{F} \subseteq A \times B \times C$.
That is, $\mathcal{F}$ is a collection of triplets $(a,b,c)$, where $a \in A, b \in B$ and $c \in C$.
A 3-dimensional matching is a subset $\mathcal{M} \subseteq \mathcal{F}$ such that all sets in $\mathcal{M}$ are pairwise disjoint.
The decision problem of 3DM is that given $(A, B, C, \mathcal{F})$ and an integer $q$, decide whether there exists a matching $\mathcal{M} \subseteq \mathcal{F}$ with $|\mathcal{M}| \geq q$.
We consider a special case of 3DM: $|A| = |B| = |C| = q$; it is still NP-complete~\cite{CINP79-GJ,Karp72}.
Given a 3DM instance $(A,B,C,\mathcal{F})$ with $|A| = |B| = |C| = q$, we construct an instance $H(D,R,E)$ (bipartite hypergraph) of the maximization problem as follows:
\begin{itemize}
\item $D(H) = A$, the set of drivers and $R(H) = B \cup C$, the set of passengers.
\item For each $f = (a,b,c) \in \mathcal{F}$, create a hyperedge $e(f)$ in $E(H)$ containing the elements of $f$, where $a$ represents a driver and $\{b,c\}$ represent two different passengers.
Further, create edges $e'(f) = \{a, b\}$ and $e''(f) = \{a, c\}$.
\end{itemize}
\begin{theorem}
It is NP-hard to maximize the number of passengers in $R$, each of whom is given a ridesharing route.
\end{theorem}
\begin{proof}
We prove the theorem by showing that an instance $(A,B,C,\mathcal{F})$ of the maximum 3-dimensional matching problem has a solution $\mathcal{M}$ of cardinality $q$ if and only if the bipartite hypergraph instance $H(D,R,E)$ has a solution $X$ with $2q$ passengers.
Assume that $(A,B,C,\mathcal{F})$ has a solution $\mathcal{M} = \{m_1, m_2,\ldots, m_q\}$. For each $m_i$ ($1 \leq i \leq q$), add the corresponding hyperedge $e(m_i) \in E(H)$ to $X$.
Since $m_i \cap m_j = \emptyset$ for $1 \leq i \neq j \leq q$ and each edge $e \in X$ contains two passengers, $X$ is a valid solution to $H(D,R,E)$ with $2q$ passengers.
Assume that $H(D,R,E)$ has a solution $X$ with $2q$ passengers served.
For every $e(f) \in X$, add the corresponding set $f \in \mathcal{F}$ to $\mathcal{M}$.
In order to serve $2q$ passengers, $|X| = |D| = q$ and every $e(f) \in X$ must contain two different passengers.
Hence, $\mathcal{M}$ is a valid solution to $(A,B,C,\mathcal{F})$ s.t. $|\mathcal{M}| = q$.
The size of $H(D,R,E)$ is polynomial in $q$, and it takes polynomial time to convert a solution of $H(D,R,E)$ to a solution of the 3DM instance $(A,B,C,\mathcal{F})$ and vice versa.
\end{proof}
\subsection{2-approximation algorithm}
For consistency, we follow the convention in~\cite{SWAT00-B,SODA99-C} that a $\rho$-approximation algorithm ($\rho > 1$) for a maximization problem computes a solution $\mathcal{C}$ with $\rho \cdot w(\mathcal{C}) \geq OPT$, where $w(\mathcal{C})$ and $OPT$ are the values of the approximation and optimal solutions respectively.
In this section, we give a $2$-approximation algorithm to the maximization problem instance $H(D,R,E)$.
Our $2$-approximation algorithm (referred to as \textit{ImpGreedy}) is a simplified version of the simple greedy~\cite{SWAT00-B,SODA99-C,PNAS14-S} discussed in Section~\ref{sec-app-algs}, except that the running time and memory usage are significantly improved by computing a solution directly from $H(D,R,E)$ without solving the independent set/weighted set packing problem.
\subsubsection{Description of ImpGreedy Algorithm}
For a maximization problem instance $H(D,R,E)$, we use $\Gamma$ to denote a current partial solution, which consists of a set of matches represented by the hyperedges in $E(H)$.
Let $P(\Gamma)=\bigcup_{e \in \Gamma} J_e$ (called \textit{covered passengers}).
Initially, $\Gamma = \emptyset$.
In each iteration, we add a match with the most number of uncovered passengers to $\Gamma$, that is, select an edge $e=(i,J_e)$ such that
$|J_e \setminus P(\Gamma)|$ is maximum, and then add $e$ to $\Gamma$.
Remove $E_e = \cup_{j \in A(e)} E_j$ from $E(H)$ ($E_j$ is defined in Section~\ref{sec-exact}).
Repeat until $P(\Gamma) = R$ or $|\Gamma| = |D|$.
The pseudo code of ImpGreedy algorithm is shown in Figure~\ref{alg-new-approx}.
\begin{figure}[htbp]
\small
\textbf{Algorithm~3} ImpGreedy Algorithm \\
\textbf{Input:} The hypergraph $H(D,R,E)$ for problem instance $(N,\mathcal{A},T)$. \\
\textbf{Output:} A solution $\Gamma$ to $(N,\mathcal{A},T)$ with $2$-approximation ratio.
\begin{algorithmic}[1]
\\ $\Gamma = \emptyset$; $P(\Gamma) = \emptyset$;
\While{($P(\Gamma) \neq R$ and $|\Gamma| < |D|$)}
\State compute $e = \text{argmax}_{e \in E(H)} |J_e \setminus P(\Gamma)|$;
$\Gamma = \Gamma \cup \{e\}$; update $P(\Gamma)$; remove $E_e$ from $E(H)$;
\EndWhile
\end{algorithmic}
\caption{$2$-approximating algorithm for problem instance $(N,\mathcal{A},T)$.}
\label{alg-new-approx}
\vspace{-1mm}
\end{figure}
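For concreteness, the selection loop of ImpGreedy can be sketched in Java as follows (the \texttt{Match} record and all names are illustrative). Skipping every edge that shares a driver or a rider with a previously selected edge mirrors the removal of $E_e$ from $E(H)$.
\begin{verbatim}
import java.util.*;

final class ImpGreedy {
    // A hyperedge e = (i, J_e): one driver id and the set of rider ids.
    record Match(int driver, Set<Integer> riders) {}

    static List<Match> solve(List<Match> edges) {
        List<Match> gamma = new ArrayList<>();   // current solution
        Set<Integer> covered = new HashSet<>();  // P(Gamma)
        Set<Integer> usedDrivers = new HashSet<>();
        while (true) {
            Match best = null;
            for (Match e : edges) {
                // e would already have been removed from E(H):
                if (usedDrivers.contains(e.driver())) continue;
                if (!Collections.disjoint(covered, e.riders())) continue;
                if (best == null || e.riders().size() > best.riders().size())
                    best = e;
            }
            if (best == null) break;             // no selectable edge remains
            gamma.add(best);
            covered.addAll(best.riders());
            usedDrivers.add(best.driver());
        }
        return gamma;
    }
}
\end{verbatim}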
\noindent In ImpGreedy, when an edge $e$ is added to $\Gamma$, $E_e$ is removed from $E(H)$; this gives the following property.
\begin{property}
At most one edge $e$ from $E_i$ for every $i \in D$ can be selected in any solution.
\label{property-gamma}
\end{property}
\subsubsection{Analysis of ImpGreedy Algorithm}
Let $\Gamma = \{x_1, x_2,\ldots, x_a\}$ be a solution found by Algorithm~3, where $x_i$ is the $i^{th}$ edge added to $\Gamma$.
Throughout the analysis, we use $OPT$ to denote an optimal solution, that is, $|P(OPT)| \geq |P(\Gamma)|$.
Further, $\Gamma_i = \{x_1, \ldots, x_i\}$ for $1 \leq i \leq a$, $\Gamma_0 = \emptyset$ and $\Gamma_a = \Gamma$.
The driver of match $x_i$ is denoted by $d(x_i)$.
The main idea of our analysis is to add up, over all $x_i \in \Gamma$, the maximum difference between the number of passengers covered by selecting $x_i$ in $\Gamma$ and the number of passengers $OPT$ can cover instead.
For each $x_i\in \Gamma$, by Property~\ref{property-gamma}, there is at most one $y \in OPT$ with $d(y)=d(x_i)$.
We order $OPT$ and introduce dummy edges to $OPT$ such that $d(y_i) = d(x_i)$ for $1 \leq i\leq a$.
Formally, for $1\leq i\leq a$, define
\[
OPT(i)=\{y_1,\ldots,y_i \mid 1\leq b \leq i, d(y_b)=d(x_b) \text{ if } y_b \in OPT, \text{ otherwise } y_b \text{ a dummy edge}\}.
\]
A dummy edge $y_b\in OPT(i)$ is defined as $d(y_b) = d(x_b)$ with $J_{y_b}=\emptyset$.
The gap of an edge $x_i \in \Gamma$ is defined as
\[
{\mathop{\rm gap}}(x_i) = |J_{y_i}| - |J_{x_i} \setminus P(\Gamma_{i-1})| + |J'_{x_i}|,
\]
where $J'_{x_i} = (J_{x_i} \setminus P(\Gamma_{i-1})) \cap P(OPT \setminus \Gamma)$ is the maximum subset of passengers in $J_{x_i} \setminus P(\Gamma_{i-1})$ that are also covered by drivers in $OPT \setminus \Gamma$.
The intuition is that the sum of ${\mathop{\rm gap}}(x_i)$ over all $x_i \in \Gamma$ upper bounds the number of passengers covered by $OPT$ but possibly missed by $\Gamma$.
Let $P(OPT(i)) = \bigcup_{1 \leq b \leq i} J_{y_b}$ and $P(OPT'(i)) = \bigcup_{1 \leq b \leq i} J'_{x_b}$ for any $i \in [1,\ldots,a]$.
Then the maximum gap between $\Gamma$ and $OPT$ can be calculated as $\sum_{x \in \Gamma_a} {\mathop{\rm gap}}(x) = |P(OPT(a))| + |P(OPT'(a))| - |P(\Gamma_{a})|$.
First, we show that $P(OPT) = P(OPT(a)) \cup P(OPT'(a))$.
\begin{prop}
Let $\Gamma = \{x_1,\ldots,x_a\}$, $P(OPT(a)) = \bigcup_{1 \leq i \leq a} J_{y_i}$ and $P(OPT'(a)) = \bigcup_{1 \leq i \leq a} J'_{x_i}$.
Then, $P(OPT) = P(OPT(a)) \cup P(OPT'(a))$.
\label{prop-opt-size}
\end{prop}
\begin{proof}
By definition, $P(OPT)=P(OPT(a)) \cup P(OPT \setminus OPT(a))$.
For any $z$ in $OPT\setminus OPT(a)$, $d(z) \neq d(x)$ for every $x \in \Gamma$.
If $J_z \setminus P(\Gamma) \neq \emptyset$, then $z$ would have been found and added to $\Gamma$ by Algorithm~3.
Hence, $J_z \setminus P(\Gamma) = \emptyset$, implying $J_z \subseteq P(OPT'(a))$ and $P(OPT \setminus OPT(a)) \subseteq P(OPT'(a))$.
\end{proof}
\begin{lemma}
Let $OPT$ be an optimal solution and $\Gamma = \{x_1, x_2,\ldots, x_a\}$ be a solution found by the algorithm.
For any $1 \leq i \leq a$, $\sum_{x \in \Gamma_i} {\mathop{\rm gap}}(x) = |P(OPT(i))| - |P(\Gamma_{i})| + |P(OPT'(i))| \leq |P(\Gamma_i)|$.
\label{lemma-max-gap}
\end{lemma}
\begin{proof}
Recall that $OPT(i)=\{y_1,\ldots,y_i\}$ as defined above. For $y_b \in OPT(i), 1 \leq b \leq i, d(y_b)=d(x_b)$.
We prove the lemma by induction on $i$.
Base case $i=1$: $|P(OPT(1))| - |P(\Gamma_1)| + |P(OPT'(1))| \leq |P(\Gamma_1)|$.
By definition, ${\mathop{\rm gap}}(x_1) = |J_{y_1}| - |J_{x_1} \setminus P(\Gamma_0)| + |J'_{x_1}|$.
Since $x_1$ is selected first by the algorithm, it must be that $|J_{x_1}| \geq |J_{e}|$ for every $e \in E(H)$, so $|J_{y_1}| \leq |J_{x_1}|$.
Thus,
\begin{align*}
{\mathop{\rm gap}}(x_1) &= |J_{y_1}| - |J_{x_1} \setminus P(\Gamma_0)| + |J'_{x_1}| \\
&\leq |J'_{x_1}| \leq |J_{x_1}| = |P(\Gamma_1)|.
\end{align*}
Assume the statement is true for $i-1 \geq 1$, that is, $\sum_{x \in \Gamma_{i-1}} {\mathop{\rm gap}}(x) \leq |P(\Gamma_{i-1})|$, and we prove for $i \leq a$.
By the induction hypothesis, both $P(OPT(i-1))$ and $P(OPT'(i-1))$ are included in the calculation of $\sum_{x \in \Gamma_{i-1}} {\mathop{\rm gap}}(x)$. More precisely, $\sum_{x \in \Gamma_{i-1}} {\mathop{\rm gap}}(x) = |P(OPT(i-1))| - |P(\Gamma_{i-1})| + |P(OPT'(i-1))| \leq |P(\Gamma_{i-1})|$.
If $|J_{y_i}| \leq |J_{x_i} \setminus P(\Gamma_{i-1})|$, the lemma holds since $|J'_{x_i}| \leq |J_{x_i} \setminus P(\Gamma_{i-1})|$ by the definition of $J'_{x_i}$.
Suppose $|J_{y_i}| > |J_{x_i} \setminus P(\Gamma_{i-1})|$.
Before $x_i$ is selected, the algorithm must have considered $y_i$ and found that $|J_{x_i} \setminus P(\Gamma_{i-1})| \geq |J_{y_i} \setminus P(\Gamma_{i-1})|$.
Then, $|J_{y_i}| > |J_{x_i} \setminus P(\Gamma_{i-1})| \geq |J_{y_i} \setminus P(\Gamma_{i-1})|$, implying $J_{y_i} \cap P(\Gamma_{i-1}) \neq \emptyset$.
We have
\begin{align}
|J_{x_i} \setminus P(\Gamma_{i-1})| + |J_{y_i} \cap P(\Gamma_{i-1})|
\geq |J_{y_i} \setminus P(\Gamma_{i-1})| + |J_{y_i} \cap P(\Gamma_{i-1})| = |J_{y_i}|.
\label{eq-4}
\end{align}
Let $J''_{y_i} \subseteq (J_{y_i} \cap P(\Gamma_{i-1}))$ be the set of passengers covered by $P(OPT(i-1)) \cup P(OPT'(i-1))$, namely $J''_{y_i} \subseteq (P(OPT(i-1)) \cup P(OPT'(i-1)))$.
Then by the induction hypothesis,
\begin{align}
\sum_{x \in \Gamma_{i-1}} {\mathop{\rm gap}}(x) \leq |P(\Gamma_{i-1})| - |J_{y_i} \cap P(\Gamma_{i-1})| + |J''_{y_i}|.
\label{eq-5}
\end{align}
Adding $\sum_{x \in \Gamma_{i-1}} {\mathop{\rm gap}}(x)$ and ${\mathop{\rm gap}}(x_i)$ together:
\begin{align*}
&(\sum_{x \in \Gamma_{i-1}} {\mathop{\rm gap}}(x)) + ({\mathop{\rm gap}}(x_i)) \\
&= |P(OPT(i-1))| - |P(\Gamma_{i-1})| + |P(OPT'(i-1))| + |J_{y_i} \setminus J''_{y_i}| - |J_{x_i} \setminus P(\Gamma_{i-1})| + |J'_{x_i}| \\
&\leq (|P(\Gamma_{i-1})| - |J_{y_i} \cap P(\Gamma_{i-1})| + |J''_{y_i}|) + |J_{y_i} \setminus J''_{y_i}| - |J_{x_i} \setminus P(\Gamma_{i-1})| + |J'_{x_i}| \hspace*{16mm} \text{from } (\ref{eq-5}) \\
&= |P(\Gamma_{i-1})| - |J_{y_i} \cap P(\Gamma_{i-1})| + |J_{y_i}| - |J_{x_i} \setminus P(\Gamma_{i-1})| + |J'_{x_i}| \\
&\leq |P(\Gamma_{i-1})| - |J_{y_i} \cap P(\Gamma_{i-1})| + |J_{y_i} \cap P(\Gamma_{i-1})| + |J'_{x_i}| \hspace*{53mm} \text{from } (\ref{eq-4}) \\
&= |P(\Gamma_{i-1})| + |J'_{x_i}| \leq |P(\Gamma_{i-1})| + |J_{x_i} \setminus P(\Gamma_{i-1})| \hspace*{45mm} \text{by definition of } J'_{x_i} \\
&= |P(\Gamma_{i})|
\end{align*}
Therefore, the lemma holds by induction.
\end{proof}
\begin{theorem}
Given the hypergraph instance $H(D,R,E)$, Algorithm~3 computes a solution $\Gamma$ to $H$ such that $2|P(\Gamma)| \geq |P(OPT)|$, where $OPT$ is an optimal solution, with running time $O(|D| \cdot |E|)$.
\label{theorem-ImpGreedy}
\end{theorem}
\begin{proof}
Let $\Gamma = \{x_1,\ldots,x_a\}$, $P(OPT(a)) = \bigcup_{1 \leq i \leq a} J_{y_i}$ and $P(OPT'(a)) = \bigcup_{1 \leq i \leq a} J'_{x_i}$.
By Proposition~\ref{prop-opt-size}, $P(OPT) = P(OPT(a)) \cup P(OPT'(a))$, and by Lemma~\ref{lemma-max-gap}, $|P(OPT(a))| + |P(OPT'(a))| - |P(\Gamma_{a})| \leq |P(\Gamma_a)|$.
We have
\[
|P(OPT)| \leq |P(OPT(a))| + |P(OPT'(a))| \leq 2|P(\Gamma)|.
\]
In each iteration of the while-loop, it takes $O(|E|)$ time to find an edge $x$ with maximum $|J_x \setminus P(\Gamma)|$,
and there are at most $|D|$ iterations. Hence, Algorithm~3 runs in $O(|D| \cdot |E|)$ time.
\end{proof}
\subsection{Approximation algorithms for maximum weighted set packing}\label{sec-app-algs}
Now, we explain the algorithms for the maximum weighted set packing problem, which solve our maximization problem.
Given a universe $\mathcal{U}$ and a family $\mathcal{S}$ of subsets of $\mathcal{U}$, a \emph{packing} is a subfamily $\mathcal{C} \subseteq \mathcal{S}$ of sets such that all sets in $\mathcal{C}$ are pairwise disjoint.
Every subset $S \in \mathcal{S}$ has at most $k$ elements and is given a real weight.
The maximum weighted $k$-set packing problem (MWSP) asks to find a packing $\mathcal{C}$ with the largest total weight.
We can see that the maximization problem on $H(D,R,E)$ is a special case of the maximum weighted $k$-set packing problem, where the trips of
$D \cup R$ form the universe $\mathcal{U}$ and $E(H)$ is the family $\mathcal{S}$ of subsets, and every $e \in E(H)$ represents at most $k = K+1$ trips ($K$ is the maximum capacity over all vehicles).
Hence, solving MWSP also solves our maximization problem.
Chandra and Halld\'{o}rsson~\cite{SODA99-C} presented a $\frac{2(k+1)}{3}$-approximation algorithm and a $\frac{2(2k+1)}{5}$-approximation algorithm (referred to as \textit{BestImp} and \textit{AnyImp} respectively), and
Berman~\cite{SWAT00-B} presented a $(\frac{k+1}{2} + \epsilon)$-approximation algorithm (referred to as \textit{SquareImp}) for the weighted $k$-set packing problem (here, $k = K + 1$); the latter remains the best known approximation ratio.
The three algorithms in~\cite{SWAT00-B,SODA99-C} (AnyImp, BestImp and SquareImp) solve the weighted $k$-set packing problem by first transferring it into a weighted independent set problem, which consists of a vertex weighted graph $G(V,E)$ and asks to find a maximum weighted independent set in $G(V,E)$.
We briefly describe the common local search approach used in these three approximation algorithms.
A \emph{claw} $C$ in $G$ is defined as an induced connected subgraph that consists of an independent set $T_C$ of vertices (called talons) and a center vertex $C_z$ that is connected to all the talons ($C$ is an induced star with center $C_z$).
The \textit{local search} of AnyImp, BestImp and SquareImp uses the same central idea, summarized as follows:
\begin{enumerate}
\item The approximation algorithms start with an initial solution (independent set) $I$ in $G$ found by a \textbf{simple greedy} (referred to as \textit{Greedy}) as follows: select a vertex $u \in V(G)$ with the largest weight and add it to $I$;
eliminate $u$ and all of $u$'s neighbors from being selected. Repeatedly select the largest-weight remaining vertex until all vertices are eliminated from $G$.
\item While there exists a claw $C$ in $G$ w.r.t. $I$ such that its independent set $T_C$ improves the weight of $I$ (the improvement criterion differs for each algorithm),
augment $I$ as $I = (I \setminus N(T_C)) \cup T_C$; such $T_C$ is called an \emph{improvement}.
\end{enumerate}
To apply these algorithms to our maximization problem, we need to convert the bipartite hypergraph $H(D,R,E)$ to a weighted independent set instance $G(V,E)$, which is straightforward.
Each hyperedge $e \in E(H)$ is represented by a vertex $v_e \in V(G)$ with weight $w(v_e) = p(e)$. There is an edge between $v_{e}$ and $v_{e'}$ in $E(G)$ if $A(e) \cap A(e') \neq \emptyset$, where $e, e' \in E(H)$.
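A sketch of this conversion in Java (reusing the illustrative \texttt{Match} record from the ImpGreedy sketch above) is given below; each match becomes a vertex, and two vertices are adjacent exactly when their matches share a trip.
\begin{verbatim}
import java.util.*;

final class ConflictGraph {
    // adj.get(u) lists the vertices v with A(e_u) and A(e_v) sharing a trip;
    // w[u] = p(e_u) is the vertex weight (w must be pre-sized by the caller).
    // Drivers are encoded with negative keys so driver ids and rider ids
    // cannot collide.
    static List<List<Integer>> build(List<ImpGreedy.Match> edges, int[] w) {
        int n = edges.size();
        List<Set<Integer>> trips = new ArrayList<>();
        for (int u = 0; u < n; u++) {
            ImpGreedy.Match e = edges.get(u);
            Set<Integer> a = new HashSet<>(e.riders());
            a.add(-1 - e.driver());              // A(e) = {driver} union J_e
            trips.add(a);
            w[u] = e.riders().size();            // w(v_e) = p(e)
        }
        List<List<Integer>> adj = new ArrayList<>();
        for (int u = 0; u < n; u++) adj.add(new ArrayList<>());
        for (int u = 0; u < n; u++)
            for (int v = u + 1; v < n; v++)
                if (!Collections.disjoint(trips.get(u), trips.get(v))) {
                    adj.get(u).add(v);
                    adj.get(v).add(u);
                }
        return adj;
    }
}
\end{verbatim}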
We observed the following property.
\begin{property}
When the size of each set in the set packing problem is at most $k$ ($|A(e)| \leq k$ for all $e \in E(H)$), the graph $G(V,E)$ is $(k+1)$-claw free, that is, $G(V,E)$ does not contain an independent set of size $k+1$ in the neighborhood of any vertex.
\end{property}
Applying this property, we only need to search for claws $C$ consisting of at most $k$ talons, which bounds the running time for finding a claw by $O(n^k)$, where $n = |V(G)|$.
When $k$ is very small, it is practical enough for solving our maximization problem instance $H(D,R,E)$ computed by Algorithm~2 from $(N,\mathcal{A},T)$.
It has been mentioned in~\cite{PNAS14-S} that the approximation algorithms in~\cite{SODA99-C} can be applied to the ridesharing problem; however, only the simple greedy (\textit{Greedy}) with approximation ratio $k$ was implemented in~\cite{PNAS14-S}.
Notice that algorithm ImpGreedy (Algorithm 3) is a simplified version of algorithm Greedy, and Greedy is used to get an initial solution in algorithms AnyImp, BestImp and SquareImp. From Theorem~\ref{theorem-ImpGreedy}, we have Corollary~\ref{corollary-approximate}.
\begin{corollary}
Greedy, AnyImp, BestImp and SquareImp algorithms compute a solution to $H(D,R,E)$ with 2-approximation ratio.
\label{corollary-approximate}
\end{corollary}
Since ImpGreedy finds a solution directly on $H(D,R,E)$ without converting it to $G(V,E)$ and solving the independent set problem of $G(V,E)$, it is more time and space efficient than the algorithms for MWSP.
In the rest of this paper, Algorithm 3 is referred to as ImpGreedy.
\section{Numerical experiments} \label{sec-experiment}
We create a simulation environment, which consists of a public transit system and a ridesharing system.
We implement our proposed approximation algorithm (ImpGreedy) and Greedy, AnyImp and BestImp algorithms for the $k$-set packing problem to evaluate the benefits of having an integrated transportation system supporting public transit and ridesharing.
The results of SquareImp are not discussed because its performance is the same as AnyImp's; this is due to the implementation fixing the search/enumeration order of the vertices and edges in the independent set instance $G(V,E)$.
\subsection{Description and characteristics of datasets}
We built a simplified transit network of Chicago to simulate practical scenarios of public transit and ridesharing.
The roadmap data of Chicago is retrieved from OpenStreetMap\footnote{Planet OSM. \url{https://planet.osm.org}}.
We used the GraphHopper\footnote{GraphHopper 1.0. \url{https://www.graphhopper.com}} library to construct the logical graph data structure of the roadmap.
The city of Chicago is divided into 77 official community areas, each of which is assigned an area code.
We examined two different datasets for Chicago to reveal basic traffic patterns (the datasets are provided by the Chicago Data Portal (CDP) and the Chicago Transit Authority (CTA)\footnote{CDP. \url{https://data.cityofchicago.org}. CTA. \url{https://www.transitchicago.com}}, maintained by the City of Chicago).
The first dataset is bus and rail ridership, which shows the monthly averages and monthly totals for all CTA bus routes and train station entries. We denote this dataset as \textit{PTR, public transit ridership}.
The PTR dataset range is chosen from June 1st, 2019 to June 30th, 2019.
The second dataset is rideshare trips reported by Transportation Network Providers (sometimes called rideshare companies) to the City of Chicago. We denote this dataset as \textit{TNP}.
The TNP dataset range is chosen from June 3rd, 2019 to June 30th, 2019, a total of 4 weeks of data.
Table~\ref{table-PTRdata} and Table~\ref{table-TNPdata} show some basic stats of both datasets.
\begin{table}[htbp]
\small
\captionsetup{font=small}
\parbox{.5\linewidth}{
\centering
\begin{tabular}{| p{3.7cm} | p{3.3cm} |}
\hline
Total Bus Ridership & 20,300,416 \\
Total Rail Ridership & 19,282,992 \\ \hline
12 busiest bus routes & 3, 4, 8, 9, 22, 49, 53, 66, 77, 79, 82, 151 \\ \hline
The busiest bus routes selected & 4, 9, 49, 53, 77, 79, 82 \\
\hline
\end{tabular}
\caption{Basic stats of the PTR dataset \label{table-PTRdata}}
}
\hfill
\parbox{.49\linewidth}{
\centering
\begin{tabular}{| p{4.3cm} | p{2.8cm} |}
\hline
\# of original records & 8,820,037 \\ \hline
\# of records considered & 7,427,716 \\
\# of shared trips & 1,015,329 \\
\# of non-shared trips & 6,412,387 \\ \hline
The most visited community areas selected & 1, 4, 5, 7, 22, 23, 25, 32, 41, 64, 76 \\
\hline
\end{tabular}
\caption{Basic stats of the TNP dataset \label{table-TNPdata}}
}
\end{table}
In the PTR dataset, the total ridership for each bus route is recorded; there are 127 bus routes in the dataset.
We examined the 12 busiest bus routes based on the total ridership and
selected 7 of the 12 routes, as listed in Table~\ref{table-PTRdata}, to build the transit network (the excluded bus routes either serve a small community or run too close to train stations).
We also selected all the major train/metro lines within the Chicago area except the Brown Line and Purple Line, since they are too close to the Red and Blue Lines.
Note that the PTR dataset also provides the total rail ridership. However, it only provides the number of riders entering each station each day; it provides neither the number of riders exiting a station nor the times of the entries.
Each record in the TNP dataset describes a passenger trip served by a driver who provides the rideshare service;
a trip record consists of the pick-up and drop-off times and the pick-up and drop-off community areas of the trip; exact locations are sometimes provided.
We removed records where the pick-up or drop-off community area is hidden for privacy reasons or is not within Chicago, which results in 7.4 million ridesharing trips.
We calculated the average number of trips per day departed from/arrived at each area.
The results are plotted in Figure~\ref{fig-OD-pairs}; the community areas with the highest numbers of departure trips are almost the same as those with the highest numbers of arrival trips.
We selected 11 of the 20 most visited areas as listed in Table~\ref{table-TNPdata} (area 32 is Chicago downtown, areas 64 and 76 are airports) to build the transit network for our simulation.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figs/ODs.pdf}
\captionsetup{font=small}
\caption{The average number of trips per day departed from and arrived at each area.}
\label{fig-OD-pairs}
\end{figure}
From the selected bus routes, trains and community areas, we create a simplified public transit network connecting the selected areas, depicted in Figure~\ref{fig-transit-network}.
\begin{figure}[!bp]
\centering
\includegraphics[width=\textwidth]{figs/transit_chicago2.pdf}
\captionsetup{font=small}
\caption{Simplified public transit network of Chicago with 13 urban communities and 3 designated locations. Figure on the right has the Chicago city map overlay for scale.}
\label{fig-transit-network}
\end{figure}
Each rectangle on the figure represents an urban community within one community area or across two community areas, labeled in the rectangle.
The blue dashed rectangles/urban communities are chosen due to the busiest bus routes from the PTR dataset.
The rectangles/urban communities labeled with red area codes are chosen due to the most visited community areas from the TNP dataset.
The dashed lines are the trains, which resemble the major train services in Chicago. The solid lines are the selected bus routes connecting the urban communities to their closest train stations.
There are also three designated locations/destinations that many people travel to/from throughout the day: the two airports and the downtown region of Chicago.
The travel time between two locations (each location consists of the latitude and longitude coordinates) uses the fastest/shortest route computed by the GraphHopper library, which is based on personal cars.
The shortest paths are \textbf{computed in real-time}, unlike many previous simulations where the shortest paths are precomputed and stored.
As mentioned in Section~\ref{sec-preliminary}, we do not explicitly consider \textit{service time}, which consists of: pick-up and drop-off time, walking time and waiting time between a ridesharing service and public transit service.
Instead, we multiply a small constant $\epsilon > 1$ to the fastest route to mimic the service time and the waiting time for public transit.
For instance, consider two consecutive metro stations $s_1$ and $s_2$. The travel time $t(s_1,s_2)$ is computed by the fastest route, and the travel time by train from $s_1$ to $s_2$ is $\hat{t}(s_1,s_2) = 1.15 \cdot t(s_1,s_2)$. The constant $\epsilon$ for bus service is 2.
Rider trips originating from most locations must take a bus to reach a metro station when ridesharing service is not involved.
\subsection{Generating instances}\label{sec-instances}
In our simulation, we partition the time from 6:00 to 23:59 each day into 72 time intervals (each has 15 minutes), and we only focus on weekdays.
To see ridesharing traffic patterns, we calculated the average number of served trips per hour for each day of the week using the TNP dataset.
The dashed (orange) line and solid (blue) line of the plot in Figure~(\ref{fig-sub-originalTrips}) represent shared trips and non-shared trips respectively.
A set of trips is called \emph{shared trips} if these trips are matched to the same vehicle consecutively such that they overlap, namely, more than one passenger is present in the same vehicle.
For all other trips, we call them \textit{non-shared trips}.
From the plot, the peak hours are 7:00-9:00 AM and 4:00-7:00 PM on weekdays for both non-shared and shared trips.
The number of trips generated for each interval is plotted in Figure~(\ref{fig-sub-nTrips}), which is a scaled down and smoothed version of the TNP dataset for weekdays.
The ratio between the number of drivers and riders generated is roughly 1:3 (1 driver and 3 riders) for each interval.
\begin{figure}[!ht]
\captionsetup{font=small}
\centering
\begin{subfigure}{.62\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/avgTrips2.pdf}
\caption{Average numbers of shared and non-shared trips in TNP dataset.}
\label{fig-sub-originalTrips}
\end{subfigure}%
\hfill
\begin{subfigure}{.37\textwidth}
\centering
\includegraphics[width=0.98\linewidth]{figs/nTrips.pdf}
\caption{Total number of driver and rider trips generated for each time interval.}
\label{fig-sub-nTrips}
\end{subfigure}
\caption{Plots for the number of trips for every hour from data and generated.}
\label{fig-nTrips-plot}
\vspace*{-1mm}
\end{figure}
For each time interval, we first generate a set $R$ of riders and then a set $D$ of drivers.
We do not generate a trip where its origin and destination are close. For example, no trip with origin Area25 and destination Area15 is generated.
\paragraph{Generation of rider trips.}
We first assume that the numbers of riders entering and exiting a station are the same each day.
Next, we assume that the numbers of riders in PTR over the time intervals each day follow a distribution similar to that of the TNP trips.
Each day is divided into 6 different consecutive time periods (each consists of multiple time intervals):
morning rush, morning normal, noon, afternoon normal, afternoon rush, and evening time periods.
Each time period determines the probability and distribution of origins and destinations.
Based on the PTR dataset and Rail Capacity Study by CTA~\cite{CTA19}, many riders are going into downtown in the morning and leaving downtown in the afternoon.
To generate a rider trip $j$ during the morning rush time period, we first decide a \emph{pickup area}, which is a community area selected uniformly at random. The origin $o_j$ is a random point within the selected pickup area.
Then, we use a standard normal distribution to determine the \emph{dropoff area}, where the downtown area is within two SDs (standard deviations), the airports are more than two and at most three SDs, and the community areas are more than three SDs away from the mean.
The destination $d_j$ is a random point within the selected dropoff area.
The above is repeated until $a_t$ riders are generated, where $a_t + a_t / 3$ (riders + drivers) is the total number of trips for time interval $t$ shown in Figure~(\ref{fig-sub-nTrips}).
For any pickup area $c$ and time interval $t$, let $c_t$ denote the number of generated riders originating from $c$ for time interval $t$, that is, $\sum_c c_t = a_t$.
Other time periods follow the same procedure, and all community areas and locations can be selected as pickup and dropoff areas:
\begin{enumerate}
\setlength\itemsep{0em}
\item Morning normal: for pickup area, community areas are within two SDs, downtown is more than two and at most three SDs and airports are more than three SDs away from the mean; and destination area is selected using uniform distribution.
\item Noon: both pickup and dropoff are selected using uniform distribution.
\item Afternoon normal: for pickup area, downtown and airport are within two SDs and community areas are more than two SDs away from the mean; for dropoff area, community areas are within two SDs and downtown and airports are more than two SDs away from the mean.
\item Afternoon rush: for pickup area, downtown is within two SDs, airports are more than two SDs and at most three SDs and community areas are more than three SDs away from the mean; and for dropoff area, community areas are within two SDs, airports are more than two SDs and at most three SDs and downtown is more than three SDs away from the mean.
\item Evening: for both pickup and dropoff areas, community areas are within two SDs, downtown is more than two and at most three SDs and airports are more than three SDs away from the mean.
\end{enumerate}
\paragraph{Generation of driver trips.}
We examined the TNP dataset to determine whether there are enough drivers who can provide ridesharing service to riders following the traffic patterns of match Types 1 and 2.
First, we removed any trip from TNP if it is too short (less than 15 minutes or origin and destination are adjacent areas).
We calculated the average number of trips per hour originated from every pre-defined area in the transit network (Figure~\ref{fig-transit-network}), and then plotted the destinations of such trips in a grid heatmap.
In other words, each cell $(c,r)$ in the heatmap represents the average number of trips per hour originating from area $c$ with destination area $r$ in the transit network (Figure~\ref{fig-transit-network}).
An example of heatmap is depicted in Figure~\ref{fig-OD-distribution}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth]{figs/OD_distribution.pdf}
\caption{Traffic heatmaps for the average number of trips originated from one area (x-axis) during hour 7:00 (left) and hour 17:00 (right) to every other destination area (y-axis).}
\label{fig-OD-distribution}
\end{figure}
From the heatmaps, many trips are going into the downtown area (A32) in the morning; and as time progresses, more and more trips leave downtown. This traffic pattern confirms that there are enough drivers to serve the riders in our simulation.
The number of shared trips shown in Figure~\ref{fig-sub-originalTrips} also suggests that many riders are willing to share a same vehicle.
We slightly reduce the difference between the values of each cell in the heatmaps and use the idea of marginal probability to generate driver trips.
Let $d(c,r,h)$ be the value at the cell $(c,r)$ for origin area $c$, destination $r$ and hour $h$.
Let $P(c,h)$ be the sum of the average numbers of trips originated from area $c$ for hour $h$ (the column for area $c$ in the heatmap corresponding to hour $h$), that is, $P(c, h) = \sum_{r} d(c,r,h)$, the sum of the values of the whole column $c$ for hour $h$.
Given a time interval $t$, for each area $c$, we generate $c_t/3$ drivers ($c_t$ is defined in Generation of rider trips) such that each driver $i$ has origin $o_i = c$ and destination $d_i = r$ with probability $d(c,r,h)/P(c,h)$, where $t$ is contained in hour $h$.
The probability of selecting an airport as destination is fixed at 5\%.
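For example, if during hour $h$ the heatmap column of area $c$ sums to $P(c,h) = 120$ trips and $d(c,r,h) = 12$ of these trips end in area $r$, then a driver generated in $c$ is assigned destination $r$ with probability $12/120 = 0.1$.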
After the origin and destination of a rider/driver trip have been determined, we decide other parameters of the trip.
The capacity $n_i$ of drivers' vehicles is selected from three ranges: the {\em low range} [1,2,3], {\em mid range} [3,4,5], and {\em high range} [4,5,6].
During morning/afternoon peak hours, roughly 95\% and 5\% of vehicles have capacities randomly selected from the low range and mid range respectively.
It is realistic to assume vehicle capacity is lower for morning and afternoon peak-hour commute.
While during off-peak hours, roughly 80\%, 10\% and 10\% of vehicles have capacities randomly selected from low range, mid range and high range respectively.
The number $\delta_i$ of stops equals $n_i$ if $n_i \leq 3$; otherwise it is chosen uniformly at random from $[n_i-2, n_i]$ inclusive.
The detour limit $z_i$ of each driver is within 5 to 20 minutes because traffic and service time are not considered.
The general information of the base instances is summarized in Table~\ref{table-simulation}.
\begin{table}[htbp]
\footnotesize
\centering
\begin{tabular}{ l | p{11.3cm} }
\hline
Major trip patterns & from urban communities to downtown and vice versa for peak and off-peak hours respectively;
trips specify one match type for peak hours and can be in either type for off-peak hours \\
\# of intervals simulated & Start from 6:00 AM to 11:59 PM; each interval is 15 minutes \\
\# of trips per interval & varies from [350, 1150] roughly, see Figure~\ref{fig-nTrips-plot} \\
Driver:rider ratio & 1:3 approximately \\
Capacity $n_i$ of vehicles & low: [1,3], mid: [3,5] and high: [4,6] inclusive \\
Number $\delta_i$ of stops limit & $\delta_i=n_i$ if $n_i \leq 3$, or $\delta_i \in [n_i-2,n_i]$ if $n_i \geq 4$ \\
Earliest departure time $\alpha_i$ & immediate to 2 intervals after a trip announcement is generated \\
Driver detour limit $z_i$ & 5 minutes to min\{$2 \cdot t(o_i,d_i)$ (driver's fastest route), 20 minutes\} \\
Latest arrival time $\beta_i$ & at most $1.5 \cdot (t(o_i,d_i) + z_i) + \alpha_i$ \\
Travel duration $\gamma_i$ of driver $i$ & $\gamma_i = t(o_i,d_i) + z_i$ \\
Travel duration $\gamma_j$ of rider $j$ & $\gamma_j = t(\hat{\pi}_i)$, where $\hat{\pi}_i$ is the fastest public transit route \\
Acceptance rate & 80\% for all riders (0.8 times the fastest public transit route) \\
Train and bus travel time & average at 1.15 and 2 times the fastest route by car, respectively \\ \hline
\end{tabular}
\captionsetup{font=small}
\caption{General information of the base instances.}
\label{table-simulation}
\end{table}
When the number of trips increases, the running time for Algorithm~2 and the time needed to construct the $k$-set packing instance also increase. This is due to the increased number of feasible matches for each driver $i \in D$.
In a practical setup, we may restrict the number of different matches a driver can have.
We call each match produced by Algorithm~1 a \emph{base match}; it consists of exactly one driver and one passenger.
To make the simulation feasible, we limit the numbers of base matches for each driver and each rider, and the number of total feasible matches for each driver. More specifically, we use \emph{reduction configuration} $(x\%, y, z)$ to denote that for each driver $i$, the number of base matches of $i$ is reduced to $x$ percentage and at most $y$ total feasible matches are computed for $i$; and for each rider $j$, at most $z$ base matches containing $j$ are used.
We also call reduction configuration just \emph{Config} for short.
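As an illustration, a reduction configuration can be applied roughly as in the following Python sketch; the data layout and function name are assumptions made for exposition only.
\begin{verbatim}
# Sketch of applying a reduction Config (x%, y, z); data layout assumed.
def apply_config(base_matches, x_pct, y, z):
    # base_matches: dict driver_id -> list of (rider_id, match) pairs
    rider_count = {}               # base matches kept per rider (cap z)
    reduced = {}
    for driver, matches in base_matches.items():
        keep = matches[: max(1, int(len(matches) * x_pct / 100))]
        kept = []
        for rider, m in keep:
            if rider_count.get(rider, 0) < z:
                kept.append((rider, m))
                rider_count[rider] = rider_count.get(rider, 0) + 1
        reduced[driver] = kept
    # At most y total feasible matches per driver are then computed
    # from the retained base matches (combination step not shown).
    return reduced
\end{verbatim}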
\subsection{Computational results}
We use the same transit network and same set of generated trip data for all algorithms.
All experiments were implemented in Java and conducted on an Intel Core i7-2600 processor with 8 GB of 1333 MHz RAM available to the JVM.
Since the optimization goal is to assign accepted ridesharing routes to as many riders as possible, the performance measure focuses on the number of riders served by ridesharing routes, followed by the total time saved for the riders as a whole.
We record both of these numbers for each approximation algorithm.
The base case instance uses the parameter setting described in Section~\ref{sec-instances} and Config (30\%, 600, 20).
The experiment results are shown in Table~\ref{table-base-result}.
\begin{table}[htbp]
\footnotesize
\centering
\begin{tabular}{ l | c | c | c | c }
\hline
& ImpGreedy & Greedy & AnyImp & BestImp \\ \hline
Total number of riders served & 27413 & 27413 & 28248 & 28258 \\
Avg number of riders served per interval & 380.736 & 380.736 & 392.333 & 392.472 \\
Total time saved of all riders (sec) & 21274094 & 21274094 & 21951637 & 21956745 \\
Avg time saved of riders per interval (sec) & 295473.53 & 295473.53 & 304883.85 & 304954.79 \\ \hline
\multicolumn{2}{| l |}{Total number of riders and public transit duration} & \multicolumn{3}{l |}{45314 and 83024638 seconds}\\ \hline
\end{tabular}
\captionsetup{font=small}
\caption{Base case solution comparison between the approximation algorithms.}
\label{table-base-result}
\end{table}
The results of ImpGreedy and Greedy are identical since they are essentially the same algorithm: 60.5\% of all passengers are assigned ridesharing routes and 25.6\% of total travel time is saved.
The results of AnyImp and BestImp are similar because of the density of the graph $G(V,E)$ due to Observation~\ref{obs-1}.
For AnyImp and BestImp, roughly 62.4\% of total passengers are assigned ridesharing routes and 26.4\% of total time are saved.
On average, passengers are able to reduce their travel time by 25-26\% by using public transit plus ridesharing.
The results of these four algorithms are not too far apart.
However, it takes too long for AnyImp and BestImp to run to completion.
A 10-second limit is set for both algorithms in each iteration for finding an independent set improvement.
With this time limit, AnyImp and BestImp run to completion within 15 minutes for almost all intervals.
We also examine this from the drivers' perspective; we recorded both the mean occupancy rate and vacancy rate of drivers.
The mean occupancy rate is calculated as, in each interval, the number of passengers served divided by the number of drivers who serve them.
The mean vacancy rate is calculated as, in each interval, the number of drivers with feasible matches who are not assigned any passenger divided by the total number of drivers with at least one feasible match.
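For concreteness, the two rates can be computed per interval as in the following Python sketch; the field names are our own assumptions.
\begin{verbatim}
# Sketch: per-interval occupancy and vacancy rates; field names assumed.
def interval_rates(assignment, drivers_with_matches):
    # assignment: dict driver_id -> list of rider_ids served
    # drivers_with_matches: set of drivers with >= 1 feasible match
    serving = {d: r for d, r in assignment.items() if r}
    served = sum(len(r) for r in serving.values())
    occupancy = served / len(serving) if serving else 0.0
    idle = [d for d in drivers_with_matches if not assignment.get(d)]
    vacancy = (len(idle) / len(drivers_with_matches)
               if drivers_with_matches else 0.0)
    return occupancy, vacancy
\end{verbatim}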
The results are depicted in Figure~\ref{fig-OR-VR}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{figs/OR-VR.pdf}
\captionsetup{font=small}
\caption{The mean occupancy rate and vacancy rate of drivers for each interval.}
\label{fig-OR-VR}
\end{figure}
The occupancy rate results show that in many intervals, 1.9-2 passengers are served by each driver on average.
The vacancy rates show that, for ImpGreedy (respectively BestImp), 3-8\% (respectively 0-4\%) of drivers having at least one feasible match are not assigned any passenger during all hours except afternoon peak hours; on the other hand, this period has the highest occupancy rate.
This is most likely because the origins of many trips lie in the same area (downtown). If the destinations of drivers and riders do not share the same general direction from downtown, a driver may not be able to serve any riders; when destinations are aligned, drivers are likely to serve more riders.
Another major component of the experiment is to measure the computational time of the algorithms, which is highly affected by the base match reduction configurations.
By reducing more matches, we are able to improve the running time of AnyImp and BestImp significantly, but sacrifice performance slightly.
We tested 12 different Configs:
\begin{itemize}[leftmargin=*]
\setlength\itemsep{0em}
\begin{footnotesize}
\item \textit{Small1} (20\%,300,10), \textit{Small2} (20\%,600,10), \textit{Small3} (20\%,300,20), \textit{Small4-10} (20\%,600,20).
\item \textit{Medium1} (30\%,300,10), \textit{Medium2} (30\%,600,10), \textit{Medium3} (30\%,300,20), \textit{Medium4-10} (30\%,600,20).
\item\textit{Large1} (40\%,300,10), \textit{Large2} (40\%,600,10), \textit{Large3-10} (40\%,300,20), and \textit{Large4-10} (40\%,600,20).
\end{footnotesize}
\end{itemize}
Configs labeled ``-10'' have a 10-second limit for finding an independent set improvement, and all other Configs have a 20-second limit.
Notice that all 12 Configs have the same sets of driver/rider trips and base match sets but generate different feasible match sets.
The performance and running time results of all 12 Configs are depicted in Figures~\ref{fig-configurations-perf}~and~\ref{fig-configurations-time} respectively.
The results are divided into peak and off-peak hours for each Config (averaging all intervals of peak hours and off-peak hours).
The running time of ImpGreedy and Greedy are within seconds for all Configs as shown in Figure~\ref{fig-configurations-time}.
On the other hand, it may not be practical to use AnyImp and BestImp for peak hours since they require around 15 minutes for most Configs.
Since AnyImp and BestImp provide better performance than ImpGreedy/Greedy when each Config is compared side-by-side, one can use ImpGreedy/Greedy for peak hours and AnyImp/BestImp for off-peak hours, making the combined approach practical.
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{figs/configurations-perf.pdf}
\captionsetup{font=small}
\caption{Average performance of peak and off-peak hours for different configurations.}
\label{fig-configurations-perf}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{figs/configurations-time.pdf}
\captionsetup{font=small}
\caption{Average running time of peak and off-peak hours for different configurations.}
\label{fig-configurations-time}
\end{figure}
The increase in performance from Small1 to Small3 is much larger than that from Small1 to Small2 (same for Medium and Large), implying that no parameter in a configuration should be too small.
The increase in performance from Large1 to Large4 is higher than that from Medium1 to Medium4 (and similarly for Medium versus Small).
Therefore, it is more important to have a balanced configuration than one that emphasizes only one or two parameters.
Because ImpGreedy does not create the independent set instance, it runs faster than Greedy. More importantly, ImpGreedy uses less memory than Greedy does.
We tested ImpGreedy and Greedy with the following Configs: \textit{Huge1} (100\%,600,10), \textit{Huge2} (100\%,2500,20) and \textit{Huge3} (100\%,10000,30) (these Configs have the same sets of driver/rider trips and base match sets as the previous 12 Configs). The focus of these Configs is to see whether Greedy can handle a large number of feasible matches.
The results are shown in Table~\ref{table-hugeConfig}.
\begin{table}[htbp!]
\footnotesize
\centering
\begin{tabular}{ l | c | c | c }
\hline
\textbf{ImpGreedy} & Huge1 & Huge2 & Huge3 \\ \hline
Avg running time for peak/off-peak hours (sec) & 0.08 / 0.03 & 0.43 / 0.12 & 1.2 / 0.29 \\
Avg number of riders served for peak/off-peak hours & 406.9 / 339.0 & 458.8 / 355.4 & 484.1 / 361.9 \\
Avg time saved of riders per interval (sec) & 284891.8 & 302774.1 & 310636.9 \\ \hline
\textbf{Greedy} & Huge1 & Huge2 & Huge3 \\ \hline
Avg running time & N/A & N/A & N/A \\
Avg instance size $G(V,E)$ of afternoon peak ($|E(G)|$) & 0.02 billion & 0.38 billion & 5.47 billion \\
Avg time creating $G(V,E)$ of afternoon peak (sec) & 14.6 & 320.9 & 3726.79 \\ \hline
\end{tabular}
\captionsetup{font=small}
\caption{The results of ImpGreedy and Greedy using the Huge reduction configurations.}
\label{table-hugeConfig}
\end{table}
Greedy cannot run to completion for all configurations because in many intervals, the whole graph $G(V,E)$ of the independent set instance is too large to hold in memory.
The average number of edges for afternoon peak hours is 0.02, 0.38 and 5.47 billion for Huge1, Huge2 and Huge3 respectively.
Further, the time it takes to create $G(V,E)$ can exceed practical limits.
Hence, using Greedy (AnyImp and BestImp) for large instances may not be practical.
In addition, the performance of ImpGreedy with Huge3 is better than that of AnyImp/BestImp with Large4.
Lastly, we also looked at the total running times of the approximation algorithms including the time for computing feasible matches (Algorithms 1 and 2). The running time of Algorithm 1 solely depends on computing the shortest paths between the trips and stations.
Table~\ref{table-algorithms-time} shows that Algorithm 1 runs to completion within 500 seconds on average for peak hours.
As for Algorithm 2, when many trips' origins/destinations are concentrated in one area, the running time increases significantly, especially for drivers with high capacity.
Running time of Algorithm 2 can be reduced significantly by Configs with aggressive reductions.
\begin{table}[htbp]
\scriptsize
\setlength\tabcolsep{5pt}
\centering
\begin{tabular}{ l | c | c | c | c | c | c || c|c|c|c }
& Alg1 & Alg2 & ImpGreedy & Greedy & AnyImp & BestImp & \multicolumn{4}{c}{Total computational time} \\
& & & & & & & ImpGreedy & Greedy & AnyImp & BestImp \\ \hline
{Small3} & 485.2 & 26.8 & 0.021 & 2.0 & 840.5 & 876.4 & 512.1 & 514.1 & 1352.5 & 1388.5 \\
{Small4} & 485.2 & 28.2 & 0.029 & 3.6 & 599.1 & 629.9 & 513.4 & 517.0 & 1112.5 & 1143.3 \\
{Medium3} & 485.2 & 43.6 & 0.031 & 3.7 & 1312.1 & 1371.0 & 532.5 & 543.0 & 1840.9 & 1899.9 \\
{Medium4} & 485.2 & 50.1 & 0.048 & 7.7 & 971.5 & 990.0 & 535.3 & 543.0 & 1506.8 & 1525.3 \\
{Large4} & 485.2 & 72.0 & 0.076 & 12.2 & 1121.3 & 1167.2 & 557.3 & 569.5 & 1678.6 & 1724.4 \\
{Huge3} & 485.2 & 339.4 & 1.2 & N/A & N/A & N/A & 825.8 & N/A & N/A & N/A \\
\end{tabular}
\captionsetup{font=small}
\caption{Average computational time (in seconds) of peak hours for all algorithms.}
\label{table-algorithms-time}
\end{table}
Combining the results of this experiment and the previous one (Table~\ref{table-hugeConfig}), ImpGreedy is capable of handling large instances while providing solutions of quality comparable to the other approximation algorithms.
From the experiment results in Figures~\ref{fig-configurations-perf}~and~\ref{fig-configurations-time}, it is beneficial to dynamically select different algorithms and reduction configurations for each interval depending on the number of trips.
With large problem instances, previous approximation algorithms are not efficient (time and memory consuming), so they require aggressive reduction to reduce the instance size.
On the other hand, ImpGreedy is much faster and capable of handling large instances.
The running time of ImpGreedy can also be an advantage to improve the quality of solutions. For example, as shown in Figures~\ref{fig-configurations-perf}~and~\ref{fig-configurations-time}, for the same set of drivers and riders, ImpGreedy assigns more riders when taking Medium3/Medium4 as inputs than AnyImp/BestImp do on Small1/Small2, while using less time than AnyImp/BestImp.
When the size of an instance is not small and a solution must be computed within some time-limit, ImpGreedy has a distinct advantage over the previous approximation algorithms.
\section{Conclusion and future work} \label{sec-conclusion}
Based on real-world transit datasets in Chicago, our study has shown that integrating public and private transportation can benefit the transit system as a whole.
Recall that we focus on work commute traffic, and we only consider two match types that emphasize this transit pattern (with the flexibility to choose either type).
Just from these two types, our base case experiments show that more than 60\% of the passengers are assigned ridesharing routes and are able to save 25\% of travel time.
The majority of drivers are matched with at least one passenger, and the average vehicle occupancy rate improves to close to 3 (including the driver).
These results suggest that ridesharing can be a complement to public transit.
Our experiments show that the whole system is capable of handling more than 1000 trip requests in real-time using ordinary computer hardware.
It is likely that the performance results of ImpGreedy can be further improved by extending it with the local search strategy of AnyImp and BestImp.
Perhaps the biggest challenge for scalability comes from computing the base matches (Algorithm 1), since it must compute many shortest paths in real time;
it may be worthwhile to apply heuristics to reduce the running time of Algorithm 1 for scalability.
To better understand scalability and practicality, it is important to include different match types and a more sophisticated simulation which includes real transit schedule and transit demand.
In this paper we study the notion of maximality and strong maximality among finite-valued propositional logics. Recall the usual notion of maximality found in the literature: a propositional logic $L_1$, that is a sublogic of another logic $L_2$ (in the sense of inclusionship of their consequence relations over the same signature), is called maximal
with respect to $L_2$ if, roughly speaking, $L_1$ extended with any theorem of $L_2$ which is not a theorem of $L_1$, coincides with $L_2$. Similarly, recall the stronger notion of strong maximality following \cite{ArieliAZ10,AvronAZ10,RibCon12}: $L_1$ is called strongly maximal with respect to $L_2$ if, roughly speaking again, $L_1$ extended with a rule of inference valid in $L_2$ but not valid in $L_1$, coincides with $L_2$.
The problem of finding and characterizing maximal sublogics (in both senses) of a given logic has already been addressed in the literature, especially in the context of paraconsistent logics, where being maximal with respect to classical logic is felt as a desirable or ideal feature, c.f. \cite{ArieliAZ11a,CarCon16}. Indeed, being maximal means that, while still allowing non-trivial inconsistent theories, it retains as much as possible of classical logic.
In the present paper we approach the general problem of characterizing maximality (not necessarily for paraconsistent logics) in two different scenarios. The first one considers a very general class of finite-valued logics, those defined by almost arbitrary finite logical matrices. In such a context, we provide a sufficient condition for a logic to be maximal w.r.t. another one with less truth-values under very general conditions. This result, inspired by the notion of recovery operators from paraconsistent logics, turns out to be very powerful and encompasses many maximality results scattered in the literature.
The second scenario considers a particular class of finite-valued logics, the class of $n$-valued {\L}ukasiewicz logics \L$_{n}$ and their related logics defined by order filters. We show that these logics, for $n$ being prime, are maximal but not strongly maximal with respect to classical logic. Actually, we show that each of these logics can always be uniquely extended with a sort of explosion inference rule such that the obtained logic is the unique one below classical logic, and hence strongly maximal.
The paper is structured as follows. After this introduction, we provide in Section \ref{recovery} a very general condition for a finite matrix logic to be maximal w.r.t. another one with less truth-values, and we analyze in particular the case of 3-valued logics. In the rest of the paper we focus our attention on the class of finite-valued {\L}ukasiewicz logics $\mathsf{L}_n^i$ defined by order filters. In Section \ref{Sect-max} we identify which of these logics are maximal with respect to classical logic, while in Section \ref{sectLqi} we study their status regarding the property of strong maximality. It is in Section \ref{Joan} where we fully characterize, by algebraic techniques, conditions of strong maximality. {Finally, in Section~\ref{sectIdeal} the question of ideal paraconsistent logics (as introduced in~\cite{ArieliAZ11a}) will be analyzed in the present framework. Specifically, it will be shown that the logics $\mathsf{L}_n^i$ with $n$ prime and $i/n < 1/2$ are ideal paraconsistent logics. In addition, the case $\mathsf{L}_3^1$ will be discussed in more detail, and it will be argued that this logic constitutes the 4-valued version of the well-known 3-valued paraconsistent logic $\mathsf{J}_3$ (see~\cite{dot:dac:70}).}
We finish in Section \ref{concl} with some conclusions and prospects of future research.
\section{Maximality and recovery operators} \label{recovery}
Let us recall the usual notion of maximality of a (standard) logic with respect to another:
\begin{definition}
Let $L_1$ and $L_2$ be two standard propositional logics defined over the same signature $\Theta$ such that $L_1$ is a proper sublogic of $L_2$, i.e.\ such that ${\vdash_{L_1}} \subsetneq {\vdash_{L_2}}$, where $\vdash_{L_i}$ denotes the consequence relation of $L_i$ (for $i=1,2$). Then, $L_1$ is said to be {\em maximal} w.r.t.\ $L_2$ if, for every formula $\varphi$ over $\Theta$, if $\vdash_{L_2} \varphi$ but $\nvdash_{L_1} \varphi$, then the logic $L_1^+$ obtained from $L_1$ by adding $\varphi$ as a theorem, coincides with $L_2$.
\end{definition}
By $L_1^+$ above we mean the logic whose consequence relation is obtained from the one of $L_1$ as follows: for every set of formulas $\Gamma \cup \{\psi\}$ over $\Theta$,
$$\Gamma \vdash_{L_1^+} \psi \ \ \mbox{ if } \Gamma, \{\sigma(\varphi) \ : \ \sigma \ \mbox{is a substitution over $\Theta$}\} \vdash_{L_1} \psi.$$
\begin{remark} \label{vacuous}
It should be noticed that, according to the above definition, if $L_1$ is a proper sublogic of $L_2$ such that they validate the same formulas (that is: $\vdash_{L_1} \varphi$ \ iff \ $\vdash_{L_2} \varphi$, for every formula $\varphi$) then $L_1$ is maximal w.r.t. $L_2$.
\end{remark}
In this section, for the class of propositional logics induced by finite logical matrices, we will provide a very general sufficient condition for a logic to be maximal w.r.t.\ another one (see Theorem \ref{maxthm} below), its proof being inspired by the role played by the so-called recovery operators in paraconsistent and adaptive logics. Recall from~\cite{CM} (see also~\cite{car:con:mar:07,CarCon16}) the definition of the class of paraconsistent logics called {\em Logics of Formal Inconsistency} ({\bf LFI}s): a given logic, say $L$, is an {\bf LFI} if it is paraconsistent w.r.t. some negation, say $\neg$ (that is, there exist formulas $\alpha$ and $\beta$ such that $\beta$ does not follow from $\{\alpha,\neg\alpha\}$ in $L$). In addition, there is a (primitive or definable) unary connective $\circ$ in $L$ (called a {\em consistency operator}) such that every formula $\beta$ follows in $L$ from a set of the form $\{\alpha,\neg\alpha, \circ\alpha\}$.\footnote{This is a slightly simplified presentation of the original definition of {\bf LFI}s.} If $L$ is an {\bf LFI} which is a sublogic of classical propositional logic ({\sf CPL}), presented in the same signature of $L$,\footnote{In this case, the formulas $\circ \alpha$ take the value 1 for every evaluation in {\sf CPL}.} then the consistency operator $\circ$ allows to recover {\sf CPL} inside $L$ by adding additional hypotheses concerning the consistency (or `classicality', or `well-behavior') of some formulas. Namely, for every (finite) set $\Gamma \cup \{\psi\}$ of formulas,
$$\Gamma \vdash_{\sf CPL} \psi \ \mbox{ iff } \ (\exists \Lambda)[\Gamma, \{\circ \alpha \ : \ \alpha \in \Lambda\} \vdash_L \psi],$$
where $\Lambda$ is a set of formulas. This is what is called a {\em Derivability Adjustment Theorem} (DAT).
The idea of DATs was proposed by Batens in the context of {\em Adaptive logics}, but this technique (as well as the notion of consistency operator) was already used by da Costa for his well-known hierarchy of paraconsistent systems $C_n$ (see~\cite{dac:63}).
A more interesting DAT (as, for instance, the ones obtained by da Costa) requires that the consistency (or well-behavior) operator $\circ$ can just be applied to the propositional variables occurring in $\Gamma \cup \{\psi\}$. This suggests that, given two standard propositional logics $L_1$ and $L_2$ defined over the same signature $\Theta$ such that ${\vdash_{L_1}} \subseteq {\vdash_{L_2}}$, a DAT between both logics can be defined in terms of a {\em recovery} operator $\circ$ (generalizing the idea of {\bf LFI}s):
for every (finite) $\Gamma \cup \{\psi\}$,
$$\Gamma \vdash_{L_2}\psi \ \mbox{ iff } \ \Gamma, \{\circ p_1, \ldots,\circ p_m\} \vdash_{L_1} \psi,$$
where $\{p_1, \ldots, p_m\}$ is the set of propositional variables occurring in $\Gamma \cup \{\psi\}$.
The idea then is that if one such recovery operator $\circ_\varphi$ can be defined as a family of instances of a theorem $\varphi$ of $L_2$ which is not derivable in $L_1$, and if this process can be reproduced for any such formula $\varphi$, then it will follow that $L_1$ is maximal w.r.t.\ $L_2$. To be more general, a finite recovery set $\bigcirc(p)$ of formulas depending only on one variable $p$ will be considered instead of a single formula $\circ(p)$, following the original definition of {\bf LFI}s. Actually, in Theorem~\ref{maxthm} below some sufficient conditions are given in order to define such recovery sets, which will allow us to determine if one logic is maximal w.r.t.\ another.
In what follows, $\mathcal{L}(\Theta)$ will denote the term algebra generated by a propositional signature $\Theta$ from a fixed set $P=\{p_n \ : \ n \geq 1\}$ of propositional variables. If {\bf A} is an algebra over $\Theta$ then the set of homomorphisms from $\mathcal{L}(\Theta)$ to {\bf A} will be denoted by $Hom(\mathcal{L}(\Theta),{\bf A})$.
Given an algebra {\bf A} over $\Theta$ and a non-empty subset $F \subseteq A$, the pair $\langle {\bf A}, F\rangle$ is called a {\em logical matrix} \cite{Woj88}. The logic $L$ defined by the matrix $\langle {\bf A}, F\rangle$ over $\mathcal{L}(\Theta)$ is given by the following consequence relation: for every set of formulas $\Gamma \cup\{\varphi\} \subseteq \mathcal{L}(\Theta)$,
$$ \Gamma \vdash_L \varphi \mbox{ if, for all } e\in Hom(\mathcal{L}(\Theta),{\bf A}), e(\psi) \in F \mbox{ for all } \psi \in \Gamma \mbox{ implies } e(\varphi) \in F . $$
From now on, with no danger of confusion, given a logical matrix $\langle {\bf A}, F\rangle$ we will write $L = \langle {\bf A}, F\rangle$ to refer to the corresponding induced logic defined as above. We will also use the term {\em matrix logic} to refer to a logic defined by a logical matrix.
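Since the algebras considered in this paper are finite, the relation $\vdash_L$ restricted to formulas over finitely many variables is decidable by enumerating evaluations. The following Python sketch illustrates this brute-force check; the encoding of formulas as nested tuples is our own convention for exposition.
\begin{verbatim}
# Brute-force semantic consequence in a finite matrix logic <A, F>.
# Formulas are variables (strings) or tuples (op, arg1, ...).
from itertools import product

def evaluate(phi, ops, e):
    if isinstance(phi, str):
        return e[phi]
    op, *args = phi
    return ops[op](*(evaluate(a, ops, e) for a in args))

def entails(gamma, psi, domain, ops, designated, variables):
    for values in product(domain, repeat=len(variables)):
        e = dict(zip(variables, values))
        if all(evaluate(g, ops, e) in designated for g in gamma):
            if evaluate(psi, ops, e) not in designated:
                return False
    return True
\end{verbatim}
For instance, instantiating \texttt{domain} with $\{0,1\}$, \texttt{ops} with the classical truth tables and \texttt{designated} with $\{1\}$ decides consequence in {\sf CPL} for formulas over the listed variables.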
\begin{lemma} \label{IncMat}
Let $L_1=\langle {\bf A}_1, F_1\rangle$ and $L_2=\langle {\bf A}_2, F_2\rangle$ be two matrix logics defined over a signature $\Theta$ such that
${\bf A}_2$ is a subalgebra of ${\bf A}_1$ and $F_2 = F_1 \cap A_2$. Then ${\vdash_{L_1}} \subseteq {\vdash_{L_2}}$, that is: for every $\Gamma \cup \{\psi\}$, if $\Gamma \vdash_{L_1} \psi$ then $\Gamma \vdash_{L_2} \psi$.
\end{lemma}
\begin{proof}
Assume that $\Gamma {\vdash_{L_1}} \psi$. Let $e \in Hom(\mathcal{L}(\Theta),{\bf A}_2)$ be an evaluation for $L_2$ such that $e[\Gamma] \subseteq F_2$. Let $\bar e:\mathcal{L}(\Theta)\to A_1$ be such that $\bar e(\varphi)=e(\varphi)$ for every $\varphi \in \mathcal{L}(\Theta)$. Then $\bar e \in Hom(\mathcal{L}(\Theta),{\bf A}_1)$, so $\bar e$ is an evaluation for $L_1$ such that $\bar e[\Gamma] \subseteq F_1$. By hypothesis, $\bar e(\psi) \in F_1$ and so $e(\psi) \in F_1 \cap A_2=F_2$. This shows that $\Gamma \vdash_{L_2} \psi$.
\end{proof}
After this previous lemma, we can state the main result on this section.
\begin{theorem} \label{maxthm}
Let $L_1=\langle {\bf A}_1, F_1\rangle$ and $L_2=\langle {\bf A}_2, F_2\rangle$ be two distinct finite matrix logics over a same signature $\Theta$ such that ${\bf A}_2$ is a subalgebra of ${\bf A}_1$ and $F_2 = F_1 \cap A_2$.
Assume the following:
\begin{enumerate}
\item $A_1=\{0,1,a_1,\ldots,a_k,a_{k+1},\ldots,a_n\}$ and $A_2=\{0,1,a_1,\ldots,a_k\}$ are finite such that $0 \not\in F_1$, $1 \in F_2$ and $\{0,1\}$ is a subalgebra of ${\bf A}_2$.
\item There are formulas $\top(p)$ and $\bot(p)$ in $\mathcal{L}(\Theta)$ depending at most on one variable $p$ such that $e(\top(p))=1$ and $e(\bot(p))=0$, for every evaluation $e$ for $L_1$.
\item For every $k+1 \leq i \leq n$ and $1 \leq j \leq n$ (with $i \neq j$) there exists a formula $\alpha^i_j(p)$ in $\mathcal{L}(\Theta)$ depending at most on one variable $p$ such that, for every evaluation $e$, $e(\alpha^i_j(p))=a_j$ if $e(p)=a_i$. \end{enumerate}
Then, $L_1$ is maximal w.r.t.\ $L_2$.
\end{theorem}
\begin{proof}
Let us begin by observing that the family of evaluations for $L_1$ which take values in $A_2$ for every propositional variable can be identified with the family of evaluations for $L_2$.\footnote{This fact was used in the proof of Lemma~\ref{IncMat}.}
Notice that, by Lemma~\ref{IncMat}, ${\vdash_{L_1}} \subseteq {\vdash_{L_2}}$. Suppose that there is some formula $\varphi(p_1,\ldots,p_m)$ such that $\vdash_{L_2} \varphi$ but $\nvdash_{L_1} \varphi$ (otherwise the proof is done, by Remark~\ref{vacuous}). Then, $e(\varphi) \in F_2$ for every evaluation $e \in Hom(\mathcal{L}(\Theta),{\bf A}_2)$, but there is an homomorphism $e_0 \in Hom(\mathcal{L}(\Theta),{\bf A}_1)$ such that $e_0(\varphi) \not\in F_1$. By the observation at the beginning of the proof (and by considering that $F_2 \subseteq F_1$), there exists a propositional variable $p_i$ (for $1 \leq i \leq m$) such that $e_0(p_i) \not\in A_2$. Consider now a substitution $\sigma_0$ such that
$$\sigma_0(p)= \left \{ \begin{tabular}{ll}
$\top(p_1)$ & if $e_0(p)=1$,\\
$\bot(p_1)$& if $e_0(p)=0$,\\
$p_j$ & if $e_0(p)=a_j$ (for $1 \leq j \leq n$)\\
\end{tabular}\right. $$
\
\noindent and let $\gamma(p_1,\ldots,p_n)=\sigma_0(\varphi)$. Observe that some of the variables $p_j$ may not appear in $\gamma$, but at least one variable $p_j$ (with $k+1 \leq j \leq n$) must occur in $\gamma$, by the hypothesis over $e_0$. Now we can state two immediate facts:
\begin{description}
\item[{\bf Fact 1:}] Given an evaluation $e$ for $L_1$, if $e(p_j)\in A_2$ for every $1 \leq j \leq n$ then
$e(\gamma) \in F_2$.
\end{description}
{\em Proof:} follows from the observation at the beginning of the proof, and by noting that $\gamma$ is an instance of a tautology of $L_2$.
\begin{description}
\item[{\bf Fact 2:}] Given an evaluation $e$ for $L_1$, if $e(p_j)=a_j$ for $1 \leq j \leq n$ then $e(\gamma) = e_0(\varphi) \not\in F_1$.
\end{description}
{\em Proof:} Observe that, from the hypothesis, it follows that $e(\sigma_0(p_i))=e_0(p_i)$ for every $1 \leq i \leq m$. \\
Now, for any propositional variable $p$, let $\alpha^j_j(p)=p$ for every $1 \leq j \leq n$, and let $\bigcirc(p)$ be the finite set of formulas
$$\bigcirc(p) = \{\gamma(\alpha^i_1(p),\ldots,\alpha^i_n(p)) \ : \ k+1 \leq i \leq n\}.$$
Let $e$ be an evaluation in $L_1$. Observe the following:\\[1mm]
(i) If $e(p)\in A_2$ then $e(\alpha^i_j(p)) \in A_2$ (since ${\bf A}_2$ is a subalgebra). For each $k+1 \leq i \leq n$ let $e_i$ be an evaluation for $L_1$ such that $e_i(p_j)= e(\alpha^i_j(p))$, for every $1 \leq j \leq n$. Then $e_i(\gamma) \in F_2$, by Fact 1. But $e_i(\gamma) = e(\gamma(\alpha^i_1(p),\ldots,\alpha^i_n(p)))$ and so
$e(\gamma(\alpha^i_1(p),\ldots,\alpha^i_n(p))) \in F_2$
for every $k+1 \leq i \leq n$. This means that $e[\bigcirc(p)] \subseteq F_1$ if $e(p)\in A_2$. \\[2mm]
(ii) If $e(p)\notin A_2$ then $e(p)=a_i$ for some $k+1 \leq i \leq n$. From this, $e(\alpha^i_j(p))=a_j$ for all $1 \leq j \leq n$. Let $e'$ be an evaluation for $L_1$ such that $e'(p_j)=a_j$, for every $1 \leq j \leq n$. Then $e'(\gamma) = e(\gamma(\alpha^i_1(p),\ldots,\alpha^i_n(p)))$. But, by Fact 2, $e'(\gamma) =e_0(\varphi) \notin F_1$ and so $e(\gamma(\alpha^i_1(p),\ldots,\alpha^i_n(p))) \notin F_1$. Thus, $e[\bigcirc(p)] \not\subseteq F_1$ if $e(p) \notin A_2$. Equivalently, $e(p) \in A_2$ if $e[\bigcirc(p)] \subseteq F_1$. \\[1mm]
From the observations (i) and (ii) it follows that\\
\hspace{0.5cm} $(*) \hspace{2cm}e[\bigcirc(p)] \subseteq F_1 \ \mbox{ iff } \ e(p) \in A_2.$\\[2mm]
Finally, let $L_1^+$ be the logic obtained from $L_1$ by adding $\varphi$ (and all of its instances) as a theorem.
As observed above,
$$\Gamma \vdash_{L_1^+} \psi \ \ \mbox{ iff } \Gamma, \{\sigma(\varphi) \ : \ \sigma \ \mbox{is a substitution in $\mathcal{L}(\Theta)$}\} \vdash_{L_1} \psi.$$
\begin{description}
\item[{\bf Fact 3:}]
Let $\Gamma \cup \{\psi\}$ be a finite set of formulas in $\mathcal{L}(\Theta)$ depending on the variables $p_1,\ldots,p_t$. Then \\[2mm]
$(**) \hspace{2cm}\Gamma \vdash_{L_2} \psi \ \ \mbox{ iff } \Gamma, \bigcirc(p_1), \ldots, \bigcirc(p_t) \vdash_{L_1} \psi.$
\end{description}
{\em Proof:} Assume that $\Gamma \vdash_{L_2} \psi$ and let $e \in Hom(\mathcal{L}(\Theta),{\bf A}_1)$ such that $e[\Gamma \cup \bigcup_{i=1}^t \bigcirc(p_i)] \subseteq F_1$. By $(*)$, $e(p_i) \in A_2$ for every $1 \leq i \leq t$. Consider now an evaluation $\bar e \in Hom(\mathcal{L}(\Theta),{\bf A}_2)$ such that $\bar e(p)=e(p)$ if $p \in \{p_1,\ldots,p_t\}$, and $\bar e(p)=0$ otherwise. Then $\bar e(\beta)=e(\beta)$ for every $\beta$ in $\mathcal{L}(\Theta)$ depending on the variables $p_1,\ldots,p_t$. Thus, $\bar e[\Gamma] \subseteq F_1 \cap A_2 = F_2$ whence $\bar e(\psi) \in F_2$, by hypothesis. That is, $e(\psi) \in F_1$ and so $\Gamma, \bigcirc(p_1), \ldots, \bigcirc(p_t) \vdash_{L_1} \psi$.
Conversely, assume that $\Gamma, \bigcirc(p_1), \ldots, \bigcirc(p_t) \vdash_{L_1} \psi$ and consider an evaluation $\bar e\in Hom(\mathcal{L}(\Theta),{\bf A}_2)$ such that $\bar e[\Gamma] \subseteq F_2$. Define an evaluation $e \in Hom(\mathcal{L}(\Theta),{\bf A}_1)$ such that $e(p)=\bar e(p)$ for every variable $p$. Then $e(\beta)=\bar e(\beta)$ for every $\beta$ in $\mathcal{L}(\Theta)$ and so $e[\Gamma] \subseteq F_1$ and also $e[\bigcirc(p_i)] \subseteq F_1$ for every $1 \leq i \leq t$, by $(*)$. By hypothesis, $e(\psi) \in F_1$ and then $\bar e(\psi) \in F_1 \cap A_2$, that is, $\bar e(\psi) \in F_2$. This shows that $\Gamma \vdash_{L_2} \psi$, proving Fact 3.\\
Consider now a finite set of formulas $\Gamma \cup \{\psi\}$ in $\mathcal{L}(\Theta)$ depending on the variables $p_1,\ldots,p_t$. Suppose that $\Gamma \vdash_{L_2} \psi$. Then $\Gamma, \bigcirc(p_1), \ldots, \bigcirc(p_t) \vdash_{L_1} \psi$, by Fact 3. But the latter implies that $\Gamma, \{\sigma(\varphi) \ : \ \sigma \ \mbox{is a substitution in $\mathcal{L}(\Theta)$}\} \vdash_{L_1} \psi$, because each $\bigcirc(p_i)$ is a set of instances of $\varphi$. From this, it follows that $\Gamma \vdash_{L_1^+} \psi$, by definition of $L_1^+$.
On the other hand, suppose that $\Gamma \vdash_{L_1^+} \psi$. Given that ${\vdash_{L_1}} \subseteq {\vdash_{L_2}}$ (by Lemma~\ref{IncMat}) and that $\vdash_{L_2} \varphi$ (by hypothesis) then $\Gamma \vdash_{L_2} \psi$, by definition of $L_1^+$. This shows that $L_1^+$ coincides with $L_2$ and so $L_1$ is maximal w.r.t.\ $L_2$.
\end{proof}
In the next example we show an application of Theorem~\ref{maxthm} in order to prove some maximality conditions for two logics related to the well-known 4-valued logic $\mathcal{FOUR}$ introduced by Belnap and Dunn \cite{Du76,Be76,Be77}. \\
\begin{figure}[h!] \label{m4}
\centerline{ \includegraphics[width=0.5\textwidth]{rombo1}}
\caption{Lattice $M_4$.}
\end{figure}
\begin{example}
Consider Belnap-Dunn's matrix logic $\mathcal{BD}=\langle \mathfrak{M}_{4},\{1,B\}\rangle$, where $\mathfrak{M}_{4}=\langle M_4, \land, \lor, \neg\rangle$ is the algebra associated to the {\em logical lattice} $M_4$ (see Fig.~1) expanded with the De Morgan negation $\neg$ defined as:
\begin{center}
\begin{tabular}{|c||c|} \hline
$\quad$ & $\neg$ \\
\hline \hline
$1$ & 0 \\ \hline
$B$ & $B$ \\ \hline
$N$ & $N$ \\ \hline
$0$ & $1$ \\ \hline
\end{tabular}
\end{center}
Much later, De and Omori considered in~\cite{DO2015} the expansion $\mathcal{BD}^{\sim}$ of $\mathcal{BD}$ by adding the strong negation $\sim$, given by the following table:\\
\begin{center}
\begin{tabular}{|c||c|} \hline
$x$ & ${\sim}x$ \\ \hline
0 & 1 \\
N & B \\
B & N \\
1 & 0 \\ \hline
\end{tabular}
\end{center}
\noindent On the other hand, before Belnap and Dunn's investigations, L. Monteiro already considered in 1963 (see~\cite{LMonteiro}) the 4-valued algebra $\mathfrak{M}_{4m}$ obtained from $\mathfrak{M}_{4}$ by adding a modal operator $\square$ defined as follows:
\begin{center}
\begin{tabular}{|c||c|} \hline
$\quad$ & $\square$ \\
\hline \hline
$1$ & 1 \\ \hline
$B$ & $0$ \\ \hline
$N$ & $0$ \\ \hline
$0$ & $0$ \\ \hline
\end{tabular}
\end{center}
This led A. Monteiro to consider the variety $\mathbf{TMA}$ of {\em tetravalent modal algebras}, which is the one generated by $\mathfrak{M}_{4m}$ (cf.~\cite{Loureiro}). As proven by Font and Rius in~\cite{FR2}, the (degree-preserving) logic of $\mathbf{TMA}$ is characterized by the matrix logic ${\cal M}_B =\langle\mathfrak{M}_{4m},\{B, 1\} \rangle$.
Previous to \cite{DO2015} and with a different motivation, Coniglio and Figallo define in~\cite{CF2014} the logic ${\cal M}_B^{\sim} =\langle\mathfrak{M}_{4m}^{\sim},\{B, 1\} \rangle$, the expansion of ${\cal M}_B$ with the strong negation $\sim$ described above, characterizing the (degree-preserving) logic of the variety generated by $\mathfrak{M}_{4m}^{\sim}$ (which was independently introduced by A. Monteiro in~\cite{Monteiro:69} and by G. Moisil in~\cite{Ml72}.)
By using Theorem~\ref{maxthm}, it is easy to show that both ${\cal M}_B^{\sim}$ and $\mathcal{BD}^{\sim}$ are maximal relative to {\sf CPL} presented in the signature $\Theta=\{\land,\lor,\neg,\sim,\square\}$ and $\Theta'=\{\land,\lor,\neg,\sim\}$ over the two-element Boolean algebra $\mathfrak{B}_2$, respectively (where $\square p$ is equivalent to $p$ and $\neg p$ is equivalent to ${\sim}p$). Indeed, observe that ${\bf B}_2$ (expanded by $\sim$ and $\square$) is a subalgebra of $\mathfrak{M}_{4m}^{\sim}$, and $\top(p)=p \lor {\sim} p$ and $\bot(p)=p \land {\sim}p$ are as required. Notice that, since there are in $M_4$ just two values besides the `classical' ones, namely $a_1=N$ and $a_2=B$, the formulas $\alpha^1_2(p)=\alpha^2_1(p)={\sim}p$ are such that $e(\alpha^1_2(p))=B$ if $e(p)=N$, and $e(\alpha^2_1(p))=N$ if $e(p)=B$. Therefore, it follows from Theorem~\ref{maxthm} that ${\cal M}_B^{\sim}$ is maximal relative to {\sf CPL} presented over the signature $\Theta$. Similarly, it also follows that $\mathcal{BD}^{\sim}$ is maximal relative to {\sf CPL} presented over the signature $\Theta'$ (the latter corresponding to~\cite[Theorem~3]{DO2015}). \hfill $\blacksquare$
\end{example}
As an immediate consequence of Theorem~\ref{maxthm}, it follows that any 3-valued logic which extends {\sf CPL} and can express the top and the bottom formulas is maximal w.r.t.\ {\sf CPL}.
\begin{corollary} \label{3val-max}
Let ${\bf A}_1$ be an algebra defined over a signature $\Theta$ with domain $A_1=\{0,1/2,1\}$, and consider the matrix logic $L_1=\langle {\bf A}_1, F_1\rangle$ where $0 \not\in F_1$ and $1 \in F_1$. Further, let ${\bf A}_2$ be a subalgebra of ${\bf A}_1$, with $A_2 = \{0, 1\}$, and assume that the matrix logic $L_2=\langle {\bf A}_2, \{1\}\rangle$ is a presentation of classical propositional logic {\sf CPL} over signature $\Theta$ such that $L_2$ is distinct from $L_1$.
Suppose additionally there are formulas $\top(p)$ and $\bot(p)$ in $\mathcal{L}(\Theta)$ on one variable $p$ such that $e(\top(p))=1$ and $e(\bot(p))=0$, for every evaluation $e$ for $L_1$. Then, $L_1$ is maximal w.r.t.\ {\sf CPL} (presented as $L_2$).
\end{corollary}
\begin{proof}
Observe that $L_1$ and $L_2$ are matrix logics as in Lemma~\ref{IncMat}, since $\{1\}=F_1 \cap A_2$.
Given that $A_1$ contains just one element out of $\{0,1\}$, namely $a_1=1/2$, Theorem~\ref{maxthm} can be applied (requirement~(3) being vacuously satisfied). As a consequence, $L_1$ is maximal w.r.t.\ {\sf CPL} (presented as $L_2$).
\end{proof}
In the next example some instances of Corollary~\ref{3val-max} are analyzed, showing the strength of this result: indeed, several well-known 3-valued logics which are known to be maximal w.r.t.\ {\sf CPL} fall inside the scope of Corollary~\ref{3val-max}.
\begin{example} \label{ExL3}
(1) Let us begin with \L ukasiewicz 3-valued logic $\L_3=\langle \textbf{\L}\mathbf{V}_3,\{1\}\rangle$, where $\textbf{\L}\mathbf{V}_3$ is the usual 3-valued algebra for $\L_3$ over $\Theta=\{\neg,\to\}$ with domain $\{0,1/2,1\}$.
Let $\mathsf{L}_1^1=\langle {\bf B}_2,F\rangle$ be a presentation of {\sf CPL}, where ${\bf B}_2$ is the two-element Boolean algebra over $\Theta$ with domain $\{0,1\}$ and $F=\{1\}$. It is easy to see that $\L_3$ satisfies the requirements of Corollary~\ref{3val-max} by taking $\top(p)=(p \to p)$ and $\bot(p)=\neg (p \to p)$. This produces a new proof of the maximality of $\L_3$ w.r.t.\ {\sf CPL}. In order to illustrate this fact consider for instance $\varphi(p_1)=p_1 \vee \neg p_1 := (p_1 \to \lnot p_1)\to \lnot p_1$, a formula which is valid in {\sf CPL} but not valid in $\L_3$. Indeed, any evaluation $e_0$ in $\L_3$ where $e_0(p_1)=1/2$ is such that $e_0(\varphi)=1/2$, a non-designated truth-value. By following the construction described in the proof of Theorem~\ref{maxthm} (where $\alpha^1_1(p)=p$), it follows that $\gamma(p_1)=\varphi(p_1)$, and so $\circ(p)=p \vee \neg p$ is a recovery operator for $\L_3$ w.r.t.\ {\sf CPL} defined in terms of $\varphi$. Thus, $\L_3$ plus $\varphi$ coincides with {\sf CPL}. Notice that the truth-table of the recovery operator $\circ$ is as follows:
$$
\begin{array}{|c||c|} \hline
& \circ \\ \hline \hline
1 & 1 \\ \hline
1/2 & 1/2 \\ \hline
0 & 1 \\ \hline
\end{array}
$$
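The table above can be reproduced mechanically; the following Python sketch (with our own helper names and exact rational arithmetic) computes the $\L_3$ value of the recovery formula $p \vee \neg p$.
\begin{verbatim}
# Sketch: truth table of the recovery formula p v ~p in L3,
# where x v y := (x -> y) -> y and the operations are Lukasiewicz's.
from fractions import Fraction as F

def neg(x):    return 1 - x
def imp(x, y): return min(F(1), 1 - x + y)
def lor(x, y): return imp(imp(x, y), y)    # equals max(x, y)

for x in (F(1), F(1, 2), F(0)):
    print(x, lor(x, neg(x)))    # prints 1, 1/2, 1 respectively
\end{verbatim}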
(2) Consider now the logic $\mathsf{L}_2^1=\langle \textbf{\L}\mathbf{V}_3,\{1, 1/2\}\rangle$. As it is well known, the matrices of $\L_3$ are functionally equivalent to those of the 3-valued paraconsistent logic ${\sf J}_3$, introduced by da Costa and D'Ottaviano, see \cite{dot:dac:70}. This means that $\mathsf{L}_2^1$ coincides with ${\sf J}_3$ up to language. By item~(1) and Corollary~\ref{3val-max} it follows that $\mathsf{L}_2^1$ is maximal w.r.t.\ {\sf CPL}. This constitutes a new proof of the maximality of ${\sf J}_3$ (and all of its alternative presentations, such as {\bf LFI1} or {\bf MPT}, see~\cite{Con:Sil}) w.r.t.\ {\sf CPL}. A generalization of ${\sf J}_3$ to $\L_4$, called ${\sf J}_4$, will be proposed in Subsection \ref{sectJ4}. As an illustration of how the technique of the proof works, let $\varphi(p_1)=\neg((\neg p_1 \to p_1) \wedge (p_1 \to \neg p_1))$. It is easy to see that $\varphi(p_1)$ is valid in {\sf CPL} but not valid in $\mathsf{L}_2^1$: any evaluation $e_0$ in $\mathsf{L}_2^1$ with $e_0(p_1)=1/2$ is such that $e_0(\varphi)=0$. Then, by the proof of Theorem~\ref{maxthm} (where $\alpha^1_1(p)=p$), $\gamma(p_1)=\varphi(p_1)$ and so $\circ(p)= \neg((\neg p \to p) \wedge (p \to \neg p))$ is a recovery operator for $\mathsf{L}_2^1$ w.r.t.\ {\sf CPL} defined in terms of instances of $\varphi$. This means that $\mathsf{L}_2^1$ plus $\varphi$ coincides with {\sf CPL}. The truth-table of $\circ$ is as follows:
$$
\begin{array}{|c||c|} \hline
& \circ \\ \hline \hline
1 & 1 \\ \hline
1/2 & 0 \\ \hline
0 & 1 \\ \hline
\end{array}
$$
(3) In an unpublished draft, J. Marcos~\cite{Marcos} (see also~\cite[Section 5.3]{car:con:mar:07}) proposes a family of 8,192 logics which are 3-valued and paraconsistent, belonging to the hierarchy of {\bf LFI}s. Among these logics, there is ${\sf J}_3$ (whose truth-tables can define the matrices of all the other logics in the family) and Sette's logic $\mathsf{P}^1$ (see~\cite{Sette}), whose truth-tables are definable by the matrices of any of the logics in the family. All these logics are maximal w.r.t.\ {\sf CPL} presented in the signature $\{\land,\lor,\to,\neg,\circ\}$ such that $\circ\varphi$ is valid for every $\varphi$ (that is, algebraically, $\circ(x)=1$ for all $x \in \{0,1\}$). The proof of maximality of all these logics w.r.t.\ {\sf CPL} follows easily from Corollary~\ref{3val-max} by taking $\top(p)=p \to p$ and $\bot(p)=p \land \neg p \land \circ p$.\\[1mm]
(4) Let $\mathsf{I}^1$ be the 3-valued paracomplete logic introduced by A.M. Sette and W.A. Carnielli in~\cite{SetCar}. It is defined over $\Theta=\{\to,\neg\}$ with domain $\{0,1/2,1\}$ and designated value 1, and whose operations are given by the tables below.
$$
\begin{array}{|c||c|c|c|} \hline
\to & 1 & 1/2 & 0\\ \hline \hline
1 & 1 & 0 & 0 \\ \hline
1/2 & 1 & 1 & 1\\ \hline
0 & 1 & 1 & 1 \\ \hline
\end{array}
\hspace{1 cm}
\begin{array}{|c||c|} \hline
& \neg \\ \hline \hline
1 & 0 \\ \hline
1/2 & 0 \\ \hline
0 & 1 \\ \hline
\end{array}
$$
\noindent Once again, the maximality of $\mathsf{I}^1$ w.r.t.\ {\sf CPL} follows from Corollary~\ref{3val-max} by taking $\top(p)=p \to p$ and $\bot(p)=\neg(p \to p)$.\\[1mm]
(5) Let $G_{n+1}=\langle {\bf G}_{n+1},\{1\}\rangle$ be the $(n+1)$-valued G\"{o}del logic defined over the algebra ${\bf G}_{n+1}$ for $\Theta=\{\land,\lor,\to,\neg\}$ with domain $\big\{0,\frac{1}{n},\dots,\frac{n-1}{n},1\big\}$ such that $x \land y = \min \{x,y\}$; $x \lor y = \max \{x,y\}$; $x \to y= 1$ if $x \leq y$ and $x \to y= y$ otherwise; and $\neg x =1$ if $x=0$ and $\neg x=0$ otherwise. In particular, $G_3$ is defined over $\{0,1/2,1\}$ with the following tables for $\to$ and $\neg$:
$$
\begin{array}{|c||c|c|c|} \hline
\to & 1 & 1/2 & 0\\ \hline \hline
1 & 1 & 1/2 & 0 \\ \hline
1/2 & 1 & 1 & 0\\ \hline
0 & 1 & 1 & 1 \\ \hline
\end{array}
\hspace{1 cm}
\begin{array}{|c||c|} \hline
& \neg \\ \hline \hline
1 & 0 \\ \hline
1/2 & 0 \\ \hline
0 & 1 \\ \hline
\end{array}
$$
\noindent Clearly $G_3$ falls within the scope of Corollary~\ref{3val-max} (where $\top(p)=p \to p$ and $\bot(p)=p \land \neg p$) and so it is maximal w.r.t.\ {\sf CPL} presented over $\Theta$. Observe that for $n \geq 3$ the algebra ${\bf G}_{n+1}$ does not have enough expressive power to define all the formulas $\alpha^i_j$ in order to apply Theorem~\ref{maxthm}. For instance, in ${\bf G}_{4}$ there are no formulas $\alpha^1_2(p)$ and $\alpha^2_1(p)$ such that $e(\alpha^1_2(p))=2/3$ if $e(p)=1/3$ and $e(\alpha^2_1(p))=1/3$ if $e(p)=2/3$. \hfill $\blacksquare$
\end{example}
\begin{example} \label{EjIdeal} In \cite{ArieliAZ11a} the authors introduced the notion of {\em ideal paraconsistent logics}. Together with this, they presented a family $\mathcal{M}_{n+2}$ of $(n+2)$-valued matrix logics (with $n \geq 2$) which are ideal paraconsistent (and so, from the very definition, they are also maximal w.r.t. ${\sf CPL}$, see Definition~\ref{IdPar} in Section~\ref{sectIdeal}). The fact that all these logics are maximal w.r.t. $\sf CPL$ (as proved in \cite{ArieliAZ11a}) can also be proved by applying Theorem~\ref{maxthm}, as will be shown in what follows.
Given $n\geq 2$ consider the algebras $\mathbf{A}_{n+2}$ over the signature $\Theta=\{\neg,\diamond,\supset\}$ with domain $A_{n+2}=\{0,1,a_1,\ldots,a_n\}$ such that the operations are defined as follows: $\neg 0=1$, $\neg 1=0$ and $\neg x=x$ otherwise; $\diamond 0=1$, $\diamond 1=0$, $\diamond a_i=a_{i+1}$ if $i<n$ and $\diamond a_n=a_1$; $x \supset y = 1$ if $x \notin D=\{1,a_1\}$ and $x \supset y = y$ otherwise. The logic $\mathcal{M}_{n+2}$ is defined by the logical matrix $\langle\mathbf{A}_{n+2},D\rangle$ for every $n\geq 2$. Let us see that the conditions of Theorem~\ref{maxthm} are satisfied for every logic $\mathcal{M}_{n+2}$ w.r.t. $\sf CPL$. It is easy to see that $\{0,1\}$ is a subalgebra of $\mathbf{A}_{n+2}$ and so, by Lemma \ref{IncMat}, $\mathcal{M}_{n+2}$ is a sublogic of $\sf CPL$ presented in the signature $\Theta$ in which $\diamond$ coincides with negation and $1$ is the designated value. In addition, it is easy to see that, given a propositional variable $p$, the formulas $\top(p)=(p \supset \diamond p) \supset (p \supset \diamond p)$ and $\bot(p)=\neg\top(p)$ are such that $e(\top(p))=1$ and $e(\bot(p))=0$, for every evaluation $e$. Consider now the formulas $\alpha^i_j(p)=\diamond^{j-i}p$ if $i<j$ and $\alpha^i_j(p)=\diamond^{n-i+j}p$ if $i>j$, where $\diamond^{0}p=p$ and $\diamond^{i+1}p=\diamond\diamond^i p$, for every $i$. An easy computation shows that $e(\alpha^i_j(p))=a_j$ if $e(p)=a_i$, for every $i\neq j$. Therefore, the conditions of Theorem~\ref{maxthm} are fulfilled and so each logic $\mathcal{M}_{n+2}$ is maximal w.r.t. $\sf CPL$.
The question of ideal paraconsistent logics in the present framework will be treated again in Section \ref{sectIdeal}.\hfill $\blacksquare$
\end{example}
The examples given above show the value of Theorem~\ref{maxthm} in order to establish maximality of logics under certain hypotheses concerning the expressive power of the given logics. Indeed, several proofs of maximality found in the literature can be easily obtained as a consequence of Theorem~\ref{maxthm}: for instance, the ones given for the 3-valued paraconsistent logic $\mathsf{P}^1$ in \cite[Proposition 11]{Sette}, for the 3-valued logic $\mathsf{I}^1$ in \cite[Proposition 17]{SetCar} and for ${\sf J}_3$ (formulated as the equivalent logic {\sf LFI1}) in \cite[Theorem 4.6]{CarMarAmo}, respectively. It is worth noting that all the examples of maximality of a logic $L_1$ w.r.t. another logic $L_2$ given in this section, as well as the examples to be given in the rest of the paper, are non-vacuous in the sense of Remark~\ref{vacuous}. Indeed, in all the examples of maximality presented here the set of theorems of $L_1$ is strictly contained in the set of theorems of $L_2$, thus the notion of maximality holds in a non-trivial way. For instance, the formula $p \to \circ p$ is a theorem of {\sf CPL} which does not hold in any of the logics presented in Example~\ref{ExL3}(3), while the formula $p \to \neg\diamond p$ is a theorem of {\sf CPL} which does not hold in any of the systems $\mathcal{M}_{n+2}$ presented in Example~\ref{EjIdeal}.
On the other hand, it should be observed that the set of designated values may not play a relevant role with respect to maximality, for instance, when analyzing maximality with respect to $\sf CPL$ (recall e.g. Corollary \ref{3val-max} or see Proposition \ref{maxLn} in next section).
As observed above, Theorem~\ref{maxthm} cannot be applied to logics which do not have enough expressive power, as seen in Examples~\ref{ExL3}(5) for G\"{o}del's logics $G_n$ (with $n \geq 4$). This is not the case for finite-valued {\L}ukasiewicz logics, as it will be shown in the next section.
\section{Maximality between finite-valued {\L}ukasiewicz logics induced by order filters}
\label{Sect-max}
In the rest of the paper we will deal with matrix logics based on the family of finite-valued {\L}ukasiewicz logics $\L_n$ with $n \geq 2$.
The $(n+1$)-valued {\L}ukasiewicz logic can be semantically defined as the matrix logic
$$\L_{n+1}=\langle \textbf{\L}\mathbf{V}_{n+1}, \{1\} \rangle, $$
where {$\textbf{\L}\mathbf{V}_{n+1} = ({\L}V_{n+1}, \neg, \to)$} with ${\L}V_{n+1} = \big\{0,\frac{1}{n},\dots,\frac{n-1}{n},1\big\}$, and the operations are defined as follows: for every $x,y \in {\L}V_{n+1}$,
\begin{itemize}
\item[] $\neg x =1-x$ (\L ukasiewicz negation)
\item[] $x \to y = \min \{1, 1-x+y\}$ (\L ukasiewicz implication)
\end{itemize}
\noindent The following operations can be defined in every algebra $\textbf{\L}\mathbf{V}_{n+1}$:
\begin{itemize}
\item[] $x \otimes y=\neg(x \to \neg y)=\max \{0, x+y-1\}$ (strong conjunction)
\item[] $x \oplus y=\neg x \to y=\min \{1, x+y\}$ (strong disjunction)
\item[] $x \vee y=(x \to y) \to y=\max \{x,y\}$ (lattice disjunction)
\item[] $x \wedge y=\neg((\neg x \to \neg y) \to \neg y)=\min \{x,y\}$ (lattice conjunction)
\end{itemize}
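For readers who wish to experiment with these operations, the following Python sketch (our own helper names, with exact rational arithmetic) implements the algebra $\textbf{\L}\mathbf{V}_{n+1}$ and the order filters introduced below.
\begin{verbatim}
# Sketch of the algebra LV_{n+1} and an order filter F_{i/n};
# helper names are ours, not standard notation.
from fractions import Fraction as F

def lv(n):                                  # domain {0, 1/n, ..., 1}
    return [F(k, n) for k in range(n + 1)]

def neg(x):     return 1 - x                # Lukasiewicz negation
def imp(x, y):  return min(F(1), 1 - x + y) # Lukasiewicz implication
def conj(x, y): return max(F(0), x + y - 1) # strong conjunction
def disj(x, y): return min(F(1), x + y)     # strong disjunction
def meet(x, y): return min(x, y)            # lattice conjunction
def join(x, y): return max(x, y)            # lattice disjunction

def order_filter(n, i):                     # F_{i/n}
    return {x for x in lv(n) if x >= F(i, n)}
\end{verbatim}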
Observe that $\L_2$ is the usual presentation of classical propositional logic {\sf CPL} as a matrix logic over the two-element Boolean algebra ${\bf B}_2$ with domain $\{0,1\}$ with signature $\{\neg,\to\}$.
The logics $\L_{n}$ can also be presented as Hilbert calculi that are axiomatic extensions of the infinite-valued {\L}ukasiewicz logic \L$_\infty$. Recall that $\L_{\infty}$ is algebraizable and the class $MV$ of all MV-algebras is its equivalent quasivariety semantics \cite{RTV90,CiMuOt99}. Since algebraizability is preserved by finitary extensions, every finite-valued {\L}ukasiewicz logic $\L_n$ is also algebraizable, and we will denote by $MV_n$ its corresponding subvariety of algebras.
In this section, finite-valued {\L}ukasiewicz logics with a set of designated values possibly different to $\{ 1\}$ will be studied from the point of view of maximality. First, some notation will be introduced.
For every $i\geq 1$ and for every $x \in{\L}V_{n+1}$, $i x$ will stand for $x\oplus\dots\oplus x$ ($i$-times), while $x^i$ will stand for $x\otimes\dots\otimes x$ ($i$-times).
For $1 \leq i \leq n$ let
$$F_{i/n} =\{x \in {\L}V_{n+1} \ : \ x \geq i/n\}=\big\{\frac{i}{n},\dots,\frac{n-1}{n},1\big\}$$
be the order filter generated by $i/n$, and let
$$\mathsf{L}^i_n=\langle \textbf{\L}\mathbf{V}_{n+1}, F_{i/n} \rangle$$
be the corresponding matrix logic.
From now on, the consequence relation of $\mathsf{L}^i_n$ is denoted by $\vDash_{\mathsf{L}^i_n}$.
Observe that $\L_{n+1}= \mathsf{L}^n_n$ for every $n$. In particular, {\sf CPL} is $\mathsf{L}^1_1$ (that is, $\L_2$). If $1 \leq i, m \leq n$, we can also consider the following matrix logic:
$$\mathsf{L}^{i/n}_m=\langle \textbf{\L}\mathbf{V}_{m+1}, F_{i/n} \cap {\L}V_{m+1} \rangle.$$
Since $ F_{i/n} \cap {\L}V_{m+1} = F_{j/m}$ for some $1\leq j\leq m$, $\mathsf{L}^{i/n}_m=\mathsf{L}^{j}_{m}$ for that $j$. It is interesting to notice that some of these logics are paraconsistent, and some are not. Indeed, it is easy to prove the following characterization.
\begin{proposition} \label{parLqi} The logic $\mathsf{L}^i_n$ is paraconsistent w.r.t.\ $\neg$ iff $i/n \leq 1/2$.
\end{proposition}
\begin{proof}
$\mathsf{L}^i_n$ is paraconsistent w.r.t.\ $\neg$ iff there exists $x \in {\L}V_{n+1}$ such that $x \geq i/n$ and $\neg x \geq i/n$, iff $i/n \leq x \leq (n-i)/n$ for some $x \in {\L}V_{n+1}$, iff $i/n \leq (n-i)/n$, iff $2i \leq n$.
\end{proof}
Thus, for instance, for $n=5$ it follows that $\mathsf{L}_5^1$ and $\mathsf{L}_5^2$ are paraconsistent, while $\mathsf{L}_5^3$, $\mathsf{L}_5^4$ and $\mathsf{L}_5^5=\L_6$ are explosive. By its turn, if $n=3$ then $\mathsf{L}_3^1$ is paraconsistent, while $\mathsf{L}_3^2$ and $\mathsf{L}_3^3=\L_4$ are explosive. The paraconsistent logics of this form which are maximal w.r.t. {\sf CPL} will be analyzed with more detail in Section \ref{sectIdeal}.
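These instances are easy to verify by enumeration; the following Python sketch (a direct transcription of the condition in the proof above) lists the paraconsistent cases for $n=5$ and $n=3$.
\begin{verbatim}
# Sketch: L^i_n is paraconsistent iff some x in LV_{n+1} satisfies
# x >= i/n and 1 - x >= i/n (Proposition above).
from fractions import Fraction as F

def paraconsistent(n, i):
    return any(x >= F(i, n) and 1 - x >= F(i, n)
               for x in (F(k, n) for k in range(n + 1)))

print([i for i in range(1, 6) if paraconsistent(5, i)])   # [1, 2]
print([i for i in range(1, 4) if paraconsistent(3, i)])   # [1]
\end{verbatim}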
Theorem \ref{maxthm} can be used in order to analyze the maximality of the logic $\mathsf{L}^i_n$ w.r.t.\ $\mathsf{L}^{i/n}_m$ whenever $m | n$ (taking into account that $\textbf{\L}\mathbf{V}_{m+1}$ is a subalgebra of $\textbf{\L}\mathbf{V}_{n+1}$ iff $m|n$). In particular, the maximality of certain instances of $\mathsf{L}^i_n$ w.r.t.\ {\sf CPL} can be obtained by using Theorem \ref{maxthm}.
The following examples deal with the algebras $\textbf{\L}\mathbf{V}_n$ which, as observed above, can define a meet operator $\wedge$ such that, for any order filter $F$, $(a \wedge b) \in F$ iff $a,b \in F$. Because of this, a recovery operator $\circ(p)$ will be considered instead of a recovery set $\bigcirc(p)$, consisting of the conjunction of all of its members.
\begin{example} Let us first consider the case of $\textbf{\L}\mathbf{V}_4$.
For $1 \leq i \leq 3$ let $\mathsf{L}^i_3=\langle \textbf{\L}\mathbf{V}_4,F_{i/3}\rangle$. Then $F_{1/3}=\{1/3, 2/3,1\}$, $F_{2/3}=\{2/3,1\}$ and $F_{3/3}=F_1=\{1\}$. As in the previous example, it can be proved that each $\mathsf{L}^i_3$ satisfies the requirements of Theorem~\ref{maxthm} w.r.t.\ {\sf CPL} and so each $\mathsf{L}^i_3$ is maximal w.r.t.\ {\sf CPL}, presented as ${\sf CPL}=\langle {\bf B}_2,\{1\}\rangle$. Indeed, ${\bf B}_2$ is a subalgebra of $\textbf{\L}\mathbf{V}_4$ and $\top(p)=(p \to p)$ and $\bot(p)=\neg (p \to p)$ are as required. Finally, the formulas $\alpha^1_2(p)=\alpha^2_1(p)=\neg p$ are such that $e(\alpha^1_2(p))=2/3$ if $e(p)=1/3$, $e(\alpha^2_1(p))=1/3$ if $e(p)=2/3$. (Observe that there are in $\textbf{\L}\mathbf{V}_4$ just two `non-classical' values: $a_1=1/3$ and $a_2=2/3$.)
Fix $1 \leq i \leq 3$. Thus, given a theorem $\varphi(p_1,\ldots,p_m)$ of {\sf CPL} which is not valid in $\mathsf{L}^i_3$, consider the formula $\gamma(p_1,p_2)$ as in the proof of Theorem~\ref{maxthm}. Then, the formula $\circ(p)=\gamma(p,\neg p) \wedge \gamma(\neg p,p)$ defines an operator (in terms of a conjunction of instances of $\varphi$) which allows to recover classical logic inside $\mathsf{L}^i_3$. \hfill $\blacksquare$
\end{example}
From Komori's characterization of axiomatic extensions of (infinite-valued) \linebreak {\L}ukasiewicz logic ${\L}_{\infty}$ \cite{Komori:SuperLukasiewiczPropositional}, it directly follows that the logic $\L_{n+1}$ is maximal w.r.t.\ {\sf CPL} iff $n$ is a prime number. By adapting our previous arguments, we can obtain the following extension of this classical result for matrix logics over $\textbf{\L}\mathbf{V}_{n+1}$ with (almost) arbitrary filters.
\begin{proposition} \label{maxLn} Let $n\geq 2$ and $\emptyset\neq F \subseteq {\L}V_{n+1}$. Then, the logic $L=\langle\textbf{\L}\mathbf{V}_{n+1},F\rangle$ is maximal w.r.t.\ {\sf CPL} provided that $0 \notin F$ and $n$ is a prime number.
\end{proposition}
Observe that, as a direct consequence, all the logics $\mathsf{L}^i_q$ with $q$ prime are maximal w.r.t.\ classical logic.
\begin{corollary} \label{maxLqi} Let $q$ be a prime number, and $1 \leq i \leq q$. Then, the logic $\mathsf{L}^i_q$ is maximal w.r.t.\ {\sf CPL}.
\end{corollary}
\begin{remark} \label{Lqi-indist}
Note that, for a given prime $q$, if $i < j$ the set of theorems of $\mathsf{L}^j_q$ is strictly included in the set of theorems of $\mathsf{L}^i_q$. However this does not contradict the fact that both logics are maximal w.r.t.\ {\sf CPL}, since their consequence relations are in fact incomparable. For example, the set of theorems of $\mathsf{L}^2_3$ is included in the set of theorems of $\mathsf{L}^1_3$, but the inclusion is strict: $\models_{\mathsf{L}^1_3} (p \vee \neg p)\otimes (p \vee \neg p)$ while $\not\models_{\mathsf{L}^2_3} (p \vee \neg p)\otimes (p \vee \neg p)$. It suffices to consider an evaluation $e$ such that $e(p) = 1/3$; then $e((p \vee \neg p)\otimes (p \vee \neg p)) = 1/3 \not\geq 2/3$. On the other hand, $\mathsf{L}^2_3$ is not a sublogic of $\mathsf{L}^1_3$: $p \models_{\mathsf{L}^2_3} (p \otimes p) \oplus (p \otimes p)$ but $p \not\models_{\mathsf{L}^1_3} (p \otimes p) \oplus (p \otimes p)$. In order to see this, consider an evaluation $e$ such that $e(p) = 1/3$; then $e((p \otimes p) \oplus (p \otimes p)) = 0$.
\end{remark}
The next examples exploit the fact that $\L_{n+1}$ is a sublogic of $\L_{m+1}$ iff $m$ divides $n$, considering additional filters as designated values, and obtaining maximality in some cases.
\begin{example} Now, the logics associated with the algebra $\textbf{\L}\mathbf{V}_5$ will be analyzed.
For $1 \leq i \leq 4$ let
$\mathsf{L}_4^i=\langle \textbf{\L}\mathbf{V}_5,F_{i/4}\rangle$ such that $F_{1/4}=\{1/4,1/2,3/4,1\}$, $F_{2/4}=F_{1/2}=\{1/2,3/4,1\}$, $F_{3/4}=\{3/4,1\}$, and $F_{4/4}=F_1=\{1\}$. Since $2$ divides $4$, $\textbf{\L}\mathbf{V}_3$ is a subalgebra of $\textbf{\L}\mathbf{V}_5$ and $\L_5$ is a sublogic of $\L_3$. We will prove that, indeed, any $\mathsf{L}_4^i$ (for $1 \leq i \leq 4$) is maximal w.r.t.\ $\mathsf{L}_2^{i/4}=\langle \textbf{\L}\mathbf{V}_3,F_{i/4} \cap {\L}V_{3}\rangle$, by using Theorem~\ref{maxthm}.
By Lemma~\ref{IncMat}, each $\mathsf{L}_4^i$ is a sublogic of $\mathsf{L}_2^{i/4}$. $\textbf{\L}\mathbf{V}_3$ is a subalgebra of $\textbf{\L}\mathbf{V}_5$ and $\top(p)=(p \to p)$ and $\bot(p)=\neg (p \to p)$ are as required. Let $a_1=1/2$, $a_2=1/4$ and $a_3=3/4$, and consider the formulas $\alpha^2_1(p)= p \oplus p$, $\alpha^2_3(p)=\alpha^3_2(p)=\neg p$, and $\alpha^3_1(p)=p \otimes p$. Finally, let $\alpha^i_i(p)= p$ for $i=2,3$. Then, the formulas $\alpha^i_j$ defined above are such that $e(\alpha^i_j(p))=a_j$ if $e(p)=a_i$, for $i=2,3$ and $j=1,2,3$.
Fix $1 \leq i \leq 4$. Thus, given a theorem $\varphi_i(p_1,\ldots,p_{m_i})$ of $\mathsf{L}_2^{i/4}$ which is not valid in $\mathsf{L}_4^i$, consider the formula $\gamma_i(p_1,p_2,p_3)$ as in the proof of Theorem~\ref{maxthm}. Then, the formula
$$\circ_i(p)=\gamma_i(p\oplus p,p, \neg p) \wedge \gamma_i(p\otimes p,\neg p,p)$$
defines a recovery operator (in terms of a conjunction of instances of $\varphi_i$) which allows one to recover $\mathsf{L}_2^{i/4}$ inside $\mathsf{L}_4^i$. This shows that the latter is maximal w.r.t.\ the former. \hfill $\blacksquare$
\end{example}
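For the reader who wishes to double-check the formulas $\alpha^i_j$ used in the example above, here is a minimal Python verification (informal, with names of our own choosing) of the required value transfers in $\textbf{\L}\mathbf{V}_5$:
\begin{verbatim}
from fractions import Fraction as F

def neg(x):       return 1 - x
def oplus(x, y):  return min(1, x + y)
def otimes(x, y): return max(0, x + y - 1)

a1, a2, a3 = F(1, 2), F(1, 4), F(3, 4)
assert oplus(a2, a2) == a1    # alpha^2_1(p) = p (+) p : 1/4 |-> 1/2
assert neg(a2) == a3          # alpha^2_3(p) = ~p      : 1/4 |-> 3/4
assert neg(a3) == a2          # alpha^3_2(p) = ~p      : 3/4 |-> 1/4
assert otimes(a3, a3) == a1   # alpha^3_1(p) = p (x) p : 3/4 |-> 1/2
\end{verbatim}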
\begin{example}
Consider now the case of $\textbf{\L}\mathbf{V}_7$. Since 2 and 3 divide 6, it follows that $\textbf{\L}\mathbf{V}_3$ and $\textbf{\L}\mathbf{V}_4$ are subalgebras of $\textbf{\L}\mathbf{V}_7$ and so $L=\langle \textbf{\L}\mathbf{V}_7,F\rangle$ is a sublogic of both $\langle \textbf{\L}\mathbf{V}_3,F\cap {\L}V_3 \rangle$ and $\langle\textbf{\L}\mathbf{V}_4,F\cap {\L}V_4 \rangle$ for any non-trivial filter $F$ of $\textbf{\L}\mathbf{V}_7$. However, it is not possible to prove the maximality of $L$ by applying Theorem~\ref{maxthm} since, for every formula $\alpha(p)$ and every evaluation $e$ in $\textbf{\L}\mathbf{V}_7$, $e(\alpha(p))\neq 1/2$ if $e(p) \in \{1/3,2/3\}$ (since $\textbf{\L}\mathbf{V}_4$ is a subalgebra), while $e(\alpha(p))\notin \{1/3,2/3\}$ if $e(p)= 1/2$ (since $\textbf{\L}\mathbf{V}_3$ is a subalgebra). \hfill $\blacksquare$
\end{example}
As another application of Theorem \ref{maxthm}, we obtain the following maximality condition for a logic $\mathsf{L}^i_n$ with respect to a logic ${\sf L}_{m}^{i/n}$.
\begin{proposition}\label{maxLnLm} Let $1 \leq i,m \leq n$. Then $\mathsf{L}^i_n=\langle \textbf{\L}\mathbf{V}_{n+1},F_{i/n}\rangle$ is maximal w.r.t.\ $\mathsf{L}_{m}^{i/n}=\langle \textbf{\L}\mathbf{V}_{m+1}, F_{i/n} \cap {\L}V_{m+1}
\rangle$ if the following condition
holds: there is some prime number $q$ and $k \geq 1$ such that $n=q^k$, and $m=q^{k-1}$.
\end{proposition}
\begin{proof}
We recall that $\textbf{\L}\mathbf{V}_{n+1}$ is singly generated by any element $0<\frac{l}{n}<1$ such that $l$ and $n$ are mutually prime \cite[Lemma 1.2]{GpT98}. Then, since $q$ is prime, ${\L}V_{q^{k}+1}\smallsetminus {\L}V_{q^{k-1}+1}=\{0<\frac{r}{q^{k}}<1 \ : \ r \mbox{ and } q \mbox{ are mutually prime}\}$ and therefore all conditions of Theorem \ref{maxthm} are satisfied.
\end{proof}
\section{On strong maximality and explosion rules in the logics $\mathsf{L}^i_q$} \label{sectLqi}
Throughout this section, $q$ will denote a prime number.
In the previous section we have seen that all the logics of the form $\mathsf{L}^i_q=\langle \textbf{\L}\mathbf{V}_{q+1}, F_{i/q} \rangle$ are maximal w.r.t. {\sf CPL}.
However, there are maximal logics that are not maximal w.r.t. {\sf CPL} in a stronger sense, as first considered in \cite{AvronAZ10,ArieliAZ11a} in the context of paraconsistent logics, or in \cite{RibCon12} in the more general context of belief revision techniques for change of logics.
\begin{definition}
Let $L_1$ and $L_2$ be two standard propositional logics defined over the same signature $\Theta$ such that $L_1$ is a proper sublogic of $L_2$, i.e.\ such that ${\vdash_{L_1}} \subsetneq {\vdash_{L_2}}$. Then, $L_1$ is said to be {\em strongly maximal} w.r.t.\ $L_2$ if, for every finitary rule $\varphi_{1}, \ldots , \varphi_{n}/ \psi$ over $\Theta$, if $\varphi_{1}, \ldots , \varphi_{n}\vdash_{L_2} \psi$ but $\varphi_{1}, \ldots , \varphi_{n}\nvdash_{L_1} \psi$, then the logic $L_1^*$ obtained from $L_1$ by adding $\varphi_{1}, \ldots , \varphi_{n}/ \psi$ as a structural rule coincides with $L_2$.
\end{definition}
By $L_1^*$ above we mean the logic whose consequence relation $\vdash_{L_1^*}$ is the minimal extension of $\vdash_{L_1}$ such that
$\sigma(\varphi_{1}), \ldots , \sigma(\varphi_{n}) \vdash_{L_1^*} \sigma(\psi)$ for any substitution $\sigma$ over $\Theta$ (see e.g. \cite{Woj88,ArieliAZ11a}).
For instance, as observed in~\cite[Remark~14]{DO2015}, the logic $\mathcal{BD}^{\sim}$ introduced in Section \ref{recovery}, which is maximal w.r.t.\ {\sf CPL}, is not strongly maximal w.r.t.\ {\sf CPL}.
Thus, a natural question is whether a given logic is strongly maximal w.r.t.\ another logic. In particular, in this section, we are interested in studying the status of the logics $\mathsf{L}^i_q=\langle \textbf{\L}\mathbf{V}_{q+1}, F_{i/q} \rangle$ with $q$ prime in relation to the notion of strong maximality w.r.t.\ {\sf CPL}. We will show that the answer is negative, as each of them admits a proper extension by a finitary rule related to the explosion law w.r.t.\ {\L}ukasiewicz negation. In fact, in Section \ref{CPL} it will be shown that such proper extensions are strongly maximal w.r.t.\ {\sf CPL}.
\begin{remark} \label{axiomHqi}
By using the techniques presented in \cite{Con:Est:God}, a sound and complete Hilbert calculus for each $\mathsf{L}^i_q$ (where $i< q$) can be defined from the one for $\L_{q+1}^{\leq}$ (the degree-preserving counterpart of $\L_{q+1}$) by adding additional inference rules. The negative feature of such an approach is that these Hilbert calculi have ``global'' inference rules, that is, inference rules in which one of the premises needs to be a theorem of $\L_{q+1}$.
By a general result by Blok and Pigozzi (see Theorem 4.3 in \cite{blok:pig:01}) and from Theorem~\ref{equivsys} in Section~\ref{Joan} below, a standard Hilbert calculus for $\mathsf{L}^i_q$ (for $i< q$) can be obtained from the usual one for $\mathsf{L}^q_q=\L_{q+1}$ by means of translations. That is, such calculi have no ``global'' inference rules. The negative side of this approach is that the resulting axiomatization is obtained by translating connectives from the other logic, and so the resulting calculus can appear as very artificial.
As an alternative, it seems that a direct method for defining a sound and complete ``more natural'' Hilbert calculus for each $\mathsf{L}^i_q$ over a suitable signature can also be obtained by means of a `separation' technique for the truth-values, similar to the one used in Subsection~\ref{sectJ4} to define an alternative axiomatization for ${\sf L}^1_3$. Verifying this conjecture is left as an open problem.
In any case, from now on we will assume the existence of a standard Hilbert calculus ${\sf H}_q^i$ which is sound and complete for the logic $\mathsf{L}^i_q$, where $i< q$. Of course ${\sf H}_q^q$ will stand for the usual axiomatization of $\L_{q+1}$.
\end{remark}
According to the notation introduced at the beginning of Section \ref{Sect-max}, $i\alpha$ is an abbreviation for the formula $\alpha \oplus\dots\oplus \alpha$ ($i$-times), and the consequence relation of $\mathsf{L}^i_n$ is denoted by $\vDash_{\mathsf{L}^i_n}$. Recall the following basic property of $\textbf{\L}\mathbf{V}_{n+1}$.
\begin{lemma} \label{exp-i-prop}
For every $1 \leq i \leq n$ and $x \in {\L}V_{n+1}$: $ix < i/n$ iff $x=0$. Thus, $e(i \alpha)< i/n$ iff $e(\alpha)=0$ for every evaluation $e$ in $\textbf{\L}\mathbf{V}_{n+1}$ and every formula $\alpha$.
\end{lemma}
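The property stated in Lemma \ref{exp-i-prop} amounts to the arithmetical fact that $ix = \min(1, i\cdot x)$. It can be exhaustively confirmed on small chains with the following Python sketch (again, an informal aid with names of our own choosing):
\begin{verbatim}
from fractions import Fraction as F

def itimes(i, x):         # i x = x (+) ... (+) x (i times) = min(1, i*x)
    return min(1, i * x)

for n in range(1, 13):
    LV = [F(k, n) for k in range(n + 1)]       # domain of LV_{n+1}
    for i in range(1, n + 1):
        for x in LV:
            assert (itimes(i, x) < F(i, n)) == (x == 0)
\end{verbatim}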
From now on, $\bot$ will denote any formula of the form $\neg(p \to p)$, for a propositional variable $p$. Observe that $e(\bot)=0$ for every evaluation in $\textbf{\L}\mathbf{V}_{n+1}$, every $n \geq 1$ and every propositional variable $p$. This is why the choice of $p$ is inessential for a concrete construction of $\bot$.
Consider, for $1 \leq i \leq q$, the $i$-explosion law
$$(exp_i) \ \displaystyle \frac{i(\varphi \land \neg\varphi)}{\bot} \ . $$
It is not hard to prove that this rule is not derivable in any ${\sf H}_q^i$, the sound and complete Hilbert calculus given for the logic $\mathsf{L}^i_q$ (see Remark \ref{axiomHqi}).
\begin{corollary} \label{exp-i-prop-notder}
For every $1 \leq i \leq q$, the rule $(exp_i)$ is not derivable in ${\sf H}_q^i$.
\end{corollary}
\begin{proof}
We first observe that if $p, p'$ are two different propositional variables, then $i(p \wedge \neg p) \not\vDash_{\mathsf{L}^i_n} p'$ for $1 \leq i \leq n$. Indeed, let $e$ be an evaluation in $\textbf{\L}\mathbf{V}_{n+1}$ such that $e(p)\notin\{0,1\}$ and $e(p')=0$. Since $e(p \wedge \neg p)\neq 0$, Lemma \ref{exp-i-prop} gives $e(i (p \wedge \neg p))\geq i/n$. Hence, $i(p \wedge \neg p) \not\vDash_{\mathsf{L}^i_n} p'$. Finally, the corollary then follows from soundness and completeness of ${\sf H}_q^i$ w.r.t.\ $\mathsf{L}^i_q$.
\end{proof}
However, the $i$-explosion rule is clearly admissible in ${\sf H}^i_q$ since it is a {\em passive} rule, that is: for no instance of the $(exp_i)$ rule can the premise be a theorem of ${\sf H}^i_q$. Indeed, for any classical evaluation $e$ over $\{0,1\}$ it is the case that $e(\varphi) \in \{0,1\}$ for every formula $\varphi$, hence $e(\varphi \wedge \neg \varphi)=0$ and so $e(i (\varphi \wedge \neg \varphi)) = 0 < i/q$, by Lemma \ref{exp-i-prop}. This leads us to consider the following definition.
\begin{definition} \label{Hbar}
$\overline{\sf H}^i_q$ is the Hilbert calculus obtained from ${\sf H}^i_q$ by adding the $i$-explosion rule $(exp_i)$. We will denote by $\vdash_{{\sf H}^i_q}$ and $\vdash_{\overline{\sf H}^i_q}$ the notions of proof associated to the Hilbert calculi ${\sf H}^i_q$ and $\overline{\sf H}^i_q$, respectively.
\end{definition}
The following is a characterization of $\vdash_{\overline{\sf H}^i_q}$ in terms of $\vdash_{{\sf H}^i_q}$.
\begin{proposition} \label{proof-Hqi}
Let $\Gamma \cup\{\varphi\}$ be a set of formulas. Then $\Gamma \vdash_{\overline{\sf H}^i_q} \varphi$ iff either $\Gamma \vdash_{{\sf H}^i_q} \varphi$, or $\Gamma\vdash_{{\sf H}^i_q} i (\psi \wedge \neg \psi)$ for some formula $\psi$.
\end{proposition}
\begin{proof}
`Only if' part: Suppose that $\Gamma \vdash_{\overline{\sf H}^i_q} \varphi$ but $\Gamma \nvdash_{{\sf H}^i_q} \varphi$. Then, any derivation in $\overline{\sf H}^i_q$ of $\varphi$ from $\Gamma$ must use the rule $(exp_i)$. Let $\varphi_1, \ldots, \varphi_n$ be a derivation in $\overline{\sf H}^i_q$ of $\varphi$ from $\Gamma$. Thus, there exists $1 \leq m < n$ such that $\varphi_m=i (\psi \wedge \neg \psi)$ for some formula $\psi$, thus allowing the first application of $(exp_i)$ in the given derivation. This means that $\Gamma\vdash_{{\sf H}^i_q} i (\psi \wedge \neg \psi)$, since $(exp_i)$ was not applied before $\varphi_m$ in the given derivation.
`If' part: Suppose that $\Gamma \vdash_{{\sf H}^i_q} \varphi$. Then, clearly $\Gamma \vdash_{\overline{\sf H}^i_q} \varphi$.
Now, suppose that
$\Gamma\vdash_{{\sf H}^i_q} i (\psi \wedge \neg \psi)$ for some formula $\psi$. Then $\Gamma \vdash_{\overline{\sf H}^i_q} \bot$, by using $(exp_i)$. But $\bot \vDash_{\mathsf{L}^i_q} \varphi$ and so $\bot \vdash_{{\sf H}^i_q} \varphi$, by completeness of ${\sf H}^i_q$ w.r.t.\ $\mathsf{L}^i_q$. This means that $\Gamma \vdash_{\overline{\sf H}^i_q} \varphi$.
\end{proof}
The next question is how to semantically characterize the logic $\overline{\sf H}^i_q$ with respect to $\mathsf{L}^i_q$, the original semantics for ${\sf H}^i_q$. The answer will be obtained in the next section by algebraic arguments (Theorem \ref{final-simple} and Remark \ref{rem-simple}). Indeed, it will be shown there that $\overline{\sf H}^i_q$ is sound and complete w.r.t.\ $\bar{\mathsf L}^i_q$ where, for every $i$ and $n$ with $1 \leq i \leq n$,
$$\bar{\mathsf L}^i_n=\langle \textbf{\L}\mathbf{V}_{n+1} \times \textbf{\L}\mathbf{V}_{2}, F_{i/n}\times\{1\} \rangle$$
such that $\textbf{\L}\mathbf{V}_{2}$ is the two-element Boolean algebra ${\bf B}_2$ with domain $\{0,1\}$.
\section{Translations, equivalent logics and strong maximality} \label{Joan}
\subsection{Preliminaries}
Blok and Pigozzi introduced the notion of equivalent deductive systems in \cite{blok:pig:01} (see also \cite{Blok-Pigozzi:DeductionTheorems}). Two propositional deductive systems $S_{1}$ and $S_{2}$ in the same language $\mathcal{L}$ are equivalent iff there are two translations $\tau_{1}, \tau_{2}$ (finite subsets of $\mathcal{L}$-propositional formulas in one variable) such that:
\begin{itemize}
\item $\Gamma\vdash_{S_{1}}\varphi$ iff $\tau_{1}(\Gamma)\vdash_{S_{2}}\tau_{1}(\varphi)$,
\item $\Delta\vdash_{S_{2}}\psi$ iff $\tau_{2}(\Delta)\vdash_{S_{1}}\tau_{2}(\psi)$,
\item $\varphi\dashv\vdash_{S_{1}}\tau_{2}(\tau_{1}(\varphi))$,
\item $\psi\dashv\vdash_{S_{2}}\tau_{1}(\tau_{2}(\psi))$.
\end{itemize}
From very general results stated in \cite{blok:pig:01} it follows that two equivalent logic systems are indistinguishable from the point of view of algebra, provided that one of them is algebraizable. Indeed, in such a case, if one of the systems is algebraizable then the other will also be algebraizable w.r.t.\ the same quasivariety. By applying this fact to the systems of the form $\mathsf{L}^{i}_{n}$ studied in the previous sections, several results on relative maximality between these systems and classical logic will be obtained in Subsection \ref{CPL} below. Actually, these results will be generalized in Subsection \ref{general} to obtain relative maximality results among the systems $\mathsf{L}^{i}_{n}$. However, for the sake of self-containment, we prefer to keep the results of Subsection \ref{CPL} with their simpler proofs as well.
In the rest of this subsection, we provide the necessary preliminaries that will be needed in the subsequent subsections.
We recall that $\L_{\infty}$ is algebraizable and the class $MV$ of all MV-algebras is its equivalent quasivariety semantics \cite{RTV90,CiMuOt99}. Since algebraizability is preserved by finitary extensions, every finite-valued {\L}ukasiewicz logic is also algebraizable.
Now we can prove that the deductive systems $\mathsf{L}^{i}_{n}$ and $\mathsf{L}^{j}_{n}$ are in fact equivalent in the above sense. First, observe that, by the McNaughton functional representation theorem \cite{McNaughton:FunctionalRep}, for every $n\geq2$ and every $1\leq m\leq n$ there is an MV-term $\lambda_{m,n}(p)$ such that for every $a\in [0,1]$,
$$\lambda_{m,n}(a)=\left\{
\begin{array}{ll}
0, & \hbox{if $a\leq \frac{m-1}{n}$;} \\
na-(m-1), & \hbox{if $\frac{m-1}{n}<a<\frac{m}{n}$;} \\
1, & \hbox{if $\frac{m}{n}\leq a$.}
\end{array}
\right.
$$
\begin{lemma} \label{negFi} The restrictions of the $\lambda_{i,n}$ and $\lambda_{n,n}$ functions to ${\bf LV}_{n+1}$ are the characteristic functions of the order filters $F_{i/n}$ and $F_1$ respectively, i.e.\ for each $a \in {\bf LV}_{n+1}$,
$$\lambda_{i,n}(a)=\left\{
\begin{array}{ll}
0, & \hbox{if $a < \frac{i}{n}$} \\
1, & \hbox{if $ a \geq \frac{i}{n}$}
\end{array}
\right.
\qquad
\lambda_{n,n}(a)= a^n = \left\{
\begin{array}{ll}
0, & \hbox{if $a < 1$} \\
1, & \hbox{if $ a = 1$}
\end{array}
\right.
$$
\end{lemma}
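Lemma \ref{negFi} can likewise be confirmed mechanically, using the closed form $\lambda_{m,n}(a)=\min(1,\max(0,na-(m-1)))$ read off from the case distinction above. The following Python sketch (informal, not part of the formal development) checks it on small chains:
\begin{verbatim}
from fractions import Fraction as F

def lam(m, n, a):         # the McNaughton term lambda_{m,n} as a function
    return min(1, max(0, n * a - (m - 1)))

for n in range(2, 13):
    LV = [F(k, n) for k in range(n + 1)]
    for i in range(1, n + 1):
        for a in LV:      # lambda_{i,n} = characteristic function of F_{i/n}
            assert lam(i, n, a) == (1 if a >= F(i, n) else 0)
\end{verbatim}
Note that the case $i=n$ also covers the second claim, since $\lambda_{n,n}(a)=a^n$ on ${\bf LV}_{n+1}$.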
\begin{theorem} \label{equivsys}
For every $n\geq 2$ and every $1\leq i, j\leq n$, $\mathsf{L}^{i}_{n}$ and $\mathsf{L}^{j}_{n}$ are equivalent deductive systems.
\end{theorem}
\begin{proof}
It is enough to prove that for every $1\leq i\leq n-1$, $\mathsf{L}^{i}_{n}$ is equivalent to $\mathsf{L}^{n}_{n}=\L_{n+1}$.
Let the translations $\tau$ and $\sigma$ be given by $\tau=\{\lambda_{i,n}(p)\}$ and $\sigma=\{\lambda_{n,n}(p)\}$. It is easy to check that for every set of formulas $\Gamma\cup\{\varphi\}$,
$$\Gamma \vDash_{\mathsf{L}^{i}_{n}}\varphi \mbox{ iff } \{\tau(\psi) \ : \ \psi\in\Gamma\}\vDash_{\mathsf{L}^{n}_{n}}\tau(\varphi)$$
$$\Gamma \vDash_{\mathsf{L}^{n}_{n}}\varphi \mbox{ iff } \{\sigma(\psi) \ : \ \psi\in\Gamma\}\vDash_{\mathsf{L}^{i}_{n}}\sigma(\varphi)$$
$$\varphi\Dashv\vDash_{\mathsf{L}^{i}_{n}}\sigma(\tau(\varphi)) \mbox{ and } \varphi\Dashv\vDash_{\mathsf{L}^{n}_{n}}\tau(\sigma(\varphi)).$$
Thus, $\mathsf{L}^{i}_{n}$ and $\mathsf{L}^{n}_{n}=\L_{n+1}$ are equivalent deductive systems.
\end{proof}
From the equivalence between $\mathsf{L}^{i}_{n}$ and $\L_{n+1}$, we can obtain, by translating the axiomatization of the finite-valued {\L}ukasiewicz logic \L$_{n+1}$, a calculus sound and complete with respect to $\mathsf{L}^{i}_{n}$, which we denote by ${\sf H}^{i}_{n}$ (see \cite[Theorem 4.3]{blok:pig:01}).
Since $\L_{\infty}$ is algebraizable and the class $MV$ of all MV-algebras is its equivalent quasivariety semantics, finitary extensions of $\L_{\infty}$ are in $1$ to $1$ correspondence with quasivarieties of MV-algebras. Actually, there is a dual isomorphism between the lattice of all finitary extensions of $\L_{\infty}$ and the lattice of all quasivarieties of $MV$. Moreover, if we restrict this correspondence to varieties of MV, we get a dual isomorphism between the lattice of all varieties of MV and the lattice of all axiomatic extensions of $\L_{\infty}$. Since $\L_{n+1}=\mathsf{L}^{n}_{n}$ is an axiomatic extension of $\L_{\infty}$, $\L_{n+1}$ is an algebraizable logic with the class $MV_{n} = \mathcal{Q}({\bf {\L}V}_{n+1})$, the quasivariety generated by ${\bf {\L}V}_{n+1}$, as its equivalent variety semantics. It follows from the previous theorem and from~\cite{blok:pig:01} that $\mathsf{L}^{i}_{n}$, for every $1\leq i\leq n$, is also algebraizable with the same class of $MV_{n}$-algebras as its equivalent variety semantics. Thus, the lattices of all finitary extensions of $\mathsf{L}^{i}_{n}$ are isomorphic, and in fact dually isomorphic to the lattice of all subquasivarieties of $MV_{n}$, for all $0<i\leq n$.
Therefore, maximality conditions in the lattice of finitary (axiomatic) extensions correspond to minimality conditions in the lattice of subquasivarieties (subvarieties). Thus, given two finitary extensions $L_{1}$ and $L_{2}$ of a given logic $\mathsf{L}^{i}_{n}$, where $K_{L_{1}}$ and $K_{L_{2}}$ are their associated $MV_{n}$-quasivarieties, $L_{1}$ is strongly maximal with respect to $L_{2}$ iff $K_{L_{1}}$ is a minimal subquasivariety of $MV_{n}$ among those $MV_{n}$-quasivarieties properly containing $K_{L_{2}}$. Moreover, if $L_{1}$ and $L_{2}$ are axiomatic extensions of $\mathsf{L}^{i}_{n}$, then $K_{L_{1}}$ and $K_{L_{2}}$ are indeed $MV_{n}$-varieties. In that case, $L_{1}$ is maximal with respect to $L_{2}$ iff $K_{L_{1}}$ is a minimal subvariety of $MV_{n}$ among those $MV_{n}$-varieties properly containing $K_{L_{2}}$.
All the axiomatic extensions of $\L_{\infty}$ are characterized by Komori in \cite{Komori:SuperLukasiewiczPropositional}, where it is shown that every axiomatic extension is finitely axiomatizable and depends only on two finite sets of natural numbers $I, J$ not both empty. Moreover, Panti proved in \cite{Panti:Varieties} that every axiomatic extension can be axiomatized relative to $\L_{\infty}$ by a single axiom $\gamma_{I,J}$ with a single propositional variable. For the case of finite valued $\L$ukasiewicz logics, Komori's characterization depends on just a finite set of natural numbers in the following sense: given $n>1$, every axiomatic extension of $\L_{n+1}$ is of the form
$$\displaystyle \bigcap_{1\leq j\leq k}\L_{m_{j}+1}$$
for some natural number $k$ where $m_{j}|n$ for every $1\leq j\leq k$. Moreover, from the equivalence of Theorem \ref{equivsys}, it follows that every axiomatic extension of $\mathsf{L}^{i}_{n}$
is of the form
$$\displaystyle \bigcap_{1\leq j\leq k}\mathsf{L}^{i/n}_{m_{j}}$$
for some natural number $k$ where $m_{j}|n$ for every $1\leq j\leq k$, and it is axiomatized by a single axiom $\gamma^{i/n}_{m_{1},\ldots,m_{k}}$ which depends on one variable.
We denote by ${\sf H}^{i/n}_{m_{1},\ldots,m_{k}}$ the calculus obtained from ${\sf H}^{i}_n$
by adding the axiom $\gamma^{i/n}_{m_{1},\ldots,m_{k}}$. Note that for every $m\geq 1$ such that $m\vert n$, the calculus ${\sf H}^{i/n}_{m}$ is the same logic as ${\sf H}^{j}_{m}$, where $j$ is the natural number such that $F_{j/m}=F_{i/n}\cap {\L}V_{m}$.
The lattice of all axiomatic extensions of $\L_{\infty}$ is also fully described by Komori in \cite{Komori:SuperLukasiewiczPropositional}; thus, from the equivalence of Theorem \ref{equivsys}, we can obtain the following maximality conditions for all axiomatic extensions of ${\sf L}^{i}_{n}$.
\begin{theorem}
Let $0<i,m\leq n$ be natural numbers such that $m\vert n$. If $L$ is an axiomatic extension of $\mathsf{L}^i_n$, then
\begin{itemize}
\item $L$ is maximal with respect to $\mathsf{L}^{i/n}_{m}$ iff $L=\mathsf{L}^{i/n}_{m}\cap \mathsf{L}^{i/n}_{q^{k+1}}$ for some prime number $q$ with $q | n$ and a natural $k \geq 0$ such that $q^{k}| m$ and $q^{k+1} \mathop{\!\not\vert} m$.
\end{itemize}
\end{theorem}
\begin{proof}
Using the equivalence of Theorem \ref{equivsys} we obtain that the lattice of axiomatic extensions of $\mathsf{L}^{i}_{n}$ is isomorphic to the lattice of axiomatic extensions of $\L_{n+1}$. As mentioned above, every axiomatic extension of $\L_{n+1}$ is characterized by a finite set $\{m_{1}, \ldots, m_{k}\}$ where all
of its elements are divisors of $n$. Given two such sets $\{m_{1}, \ldots, m_{k}\}$ and $\{n_{1}, \ldots, n_{s}\}$, we define the following relation among finite subsets of divisors of $n$: $\{m_{1}, \ldots, m_{k}\}\preceq \{n_{1}, \ldots, n_{s}\}$ iff for every $1\leq i\leq k$ there is $1\leq j\leq s$ such that $m_{i}\vert n_{j}$. This relation $\preceq$ is the dual order of the lattice of axiomatic extensions of $\L_{n+1}$ in the following sense: $\{m_{1}, \ldots, m_{k}\}\preceq \{n_{1}, \ldots, n_{s}\}$ iff $\displaystyle \bigcap_{1\leq j\leq s}\L_{n_{j}+1}\leq \displaystyle \bigcap_{1\leq i\leq k}\L_{m_{i}+1}$. Clearly, $\{m\}\preceq \{m, q\}$ and $\{m, q\}\not\preceq\{m\}$ if $q$ is a prime number such that $q\vert n$ and $q \mathop{\!\not\vert} m$; similarly, $\{m\}\preceq\{m, q^{r+1}\}$ and $\{m, q^{r+1}\}\not\preceq\{m\}$ if $q$ is a prime number such that $q\vert n$, $q^{r}| m$ and $q^{r+1} \mathop{\!\not\vert} m$. Moreover, if $\{m\}\preceq\{m_{1}, \ldots, m_{k}\}$ and $\{m_{1}, \ldots, m_{k}\}\not\preceq\{m\}$, then there is $m_{i}$ such that $m\vert m_{i}$. If $m\neq m_{i}$ then there is a prime number $q$ such that $mq \vert m_{i}\vert n$. Thus $\{m, q\}\preceq\{m_{1},\ldots, m_{k}\}$ if $q \mathop{\!\not\vert} m$, and $\{m, q^{r+1}\}\preceq\{m_{1},\ldots, m_{k}\}$ if $q^{r}| m$ and $q^{r+1} \mathop{\!\not\vert} m$. If $m= m_{i}$, then there is an $m_{j}$ with $1\leq j\leq k$, $j \neq i$, such that $m_{j}\mathop{\!\not\vert} m$. If there is a prime number $q$ such that $q\vert m_{j}\vert n$ and $q \mathop{\!\not\vert} m$, then $\{m, q\}\preceq\{m_{1},\ldots, m_{k}\}$. Otherwise, there are a prime number $q$ and a natural $r>0$ such that $q^{r+1}\vert m_{j}\vert n$, $q^{r}| m$ and $q^{r+1} \mathop{\!\not\vert} m$, and then $\{m, q^{r+1}\}\preceq\{m_{1},\ldots, m_{k}\}$. Duality and Theorem \ref{equivsys} close the proof.
\end{proof}
As a corollary we obtain that the sufficient condition of Proposition \ref{maxLnLm} is also necessary.
\begin{corollary}
Let $1 \leq i,m \leq n$. Then $\mathsf{L}^i_n=\langle \textbf{\L}\mathbf{V}_{n+1},F_{i/n}\rangle$ is maximal w.r.t.\ $\mathsf{L}_{m}^{i/n}=\langle \textbf{\L}\mathbf{V}_{m+1}, F_{i/n} \cap {\L}V_{m+1}
\rangle$ if and only if there is some prime number $q$ and $k \geq 1$ such that $n=q^k$, and $m=q^{k-1}$.
\end{corollary}
The task of fully describing the lattice of all finitary extensions of $\L_{\infty}$, isomorphic to the lattice of all subquasivarieties of $MV$, turns out to be a heroic one, since the class of all MV-algebras is $Q$-universal (see~\cite{AdDz}). The finite-valued case is much simpler, since $MV_{n}$ is a locally finite discriminator variety (cf.~\cite{B_C_V, GT14}). Any locally finite quasivariety is generated by its critical algebras (see~\cite{Dz0}), where an algebra $A$ is said to be \emph{critical} iff it is a finite algebra not belonging to the quasivariety generated by all its proper subalgebras. A description of all critical MV-algebras can be found in~\cite{GT14}.
\begin{theorem}\emph{\cite[Theorem 2.5]{GT14}}
\label{Wcritical} An MV-algebra $A$ is critical if and only if $A$ is isomorphic to a finite MV-algebra $\mathbf{LV}_{n_{0}+1}\times\cdots\times\mathbf{LV}_{n_{l-1}+1} $ satisfying the following conditions:
\begin{enumerate}
\item For every $i,j<l$, $ i\neq j$ implies $ n_{i}\neq n_{j}$.
\item If there exists $n_{j}$, $j<l$ such that $n_{i}|n_{j}$ for some
$i\neq j$, then $n_{j}$ is unique.
\end{enumerate}
\end{theorem}
Moreover the following result characterizes the inclusion among locally finite quasivarieties.
\begin{lemma} \emph{\cite[Lemma 2.9]{GT14}} \label{previous}
\label{distinct}
Let $\mathfrak{F} = \{\mathbf{ LV}_{n_{i1}+1}\times\cdots\times\mathbf{ LV}_{n_{il(i)}+1} \ : \ i\in I\}$ and
$\mathfrak{G} = \{\mathbf{ LV}_{m_{j1}+1}\times\cdots\times\mathbf{ LV}_{m_{jl(j)}+1} \ : \ j\in J\}$ be two finite families of
critical MV-algebras. Then it holds that
$$ \mathcal{Q}(\mathfrak{F}) \subseteq \mathcal{Q}(\mathfrak{G})$$
if, and only if,
for every $i\in I$ there exists a non-empty $H \subseteq J$ such that:
\begin{enumerate}
\item For any $1\le k\le l(i)$ there are $j\in H$ and $1\le r\le l(j)$ such that
$n_{ik}|m_{jr}$.
\item For any $j\in H$ and $1\le r\le l(j)$ there exists $1\le k\le l(i)$ such that
$n_{ik}|m_{jr}$.
\end{enumerate}
\end{lemma}
\subsection{Strong maximality among logics $\mathsf{L}^i_q$, $\bar{\sf L}^i_q$, and classical logic}
\label{CPL}
As a direct application of Lemma \ref{distinct}, we have the following particular case that will be used later.
\begin{corollary} \label{corollary-simple}
Consider the following two sets of one critical MV-algebra each: \\
$\{\mathbf{ LV}_{q+1}\times \mathbf{ LV}_{2}\}$ and
$\{\mathbf{ LV}_{k+1}\}$, where $q$ is a prime number such that $q > 1$. Then
\[\mathcal{Q}(\{\mathbf{ LV}_{q+1}\times \mathbf{ LV}_{2}\}) \subseteq \mathcal{Q}(\mathbf{ LV}_{k+1})\]
if and only if $q | k$.
\end{corollary}
\begin{proof} The two families of critical algebras above correspond in Lemma \ref{distinct} to take $I = \{1\}$ and $J = \{1\}$, with $n_{11} = q$, $n_{12} = 1$, $m_{11} = k$. Then one can check that these values satisfy the two conditions of the lemma only in the case that $q | k$.
\end{proof}
Now, for any $k > 1$, we are able to provide a full description of the minimal subquasivarieties of $MV_{k}=\mathcal{Q}(\mathbf{LV}_{k+1})$ strictly containing the variety of Boolean algebras.
\begin{theorem} \label{minimal-simple}
Let $k>1$.
The set of all minimal subquasivarieties of $MV_{k}=\mathcal{Q}(\mathbf{LV}_{k+1})$ among those strictly containing the class of all the Boolean algebras ${\bf B} = \mathcal{Q}(\mathbf{LV}_{2})$ is $$M^{k}=\{\mathcal{Q}(\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}) \ : \ q > 1 \mbox{ prime, } q|k \}.$$
\end{theorem}
\begin{proof}
By Lemma \ref{previous} and the previous Corollary \ref{corollary-simple}, every $K\in M^{k}$ is a subquasivariety of $\mathcal{Q}(\mathbf{LV}_{k+1})$ strictly containing ${\bf B}$. Moreover, for every $K_{1}, K_{2}\in M^{k}$, if $K_{1}\neq K_{2}$ then $K_{1}\not\subseteq K_{2}$ and $K_{2}\not\subseteq K_{1}$.
On the other hand, let $K$ be a minimal subquasivariety of $\mathcal{Q}(\mathbf{LV}_{k+1})$ strictly containing ${\bf B}$.
Since $K\neq {\bf B}$, it must contain a critical algebra $C$ which, by Theorem \ref{Wcritical}, is of the form $C\cong\mathbf{ LV}_{m_{1}+1}\times\cdots\times\mathbf{ LV}_{m_{s}+1}$, where $m_{i}| k$ for every $1 \leq i \leq s$, and $m_{j} > 1$ for some $1\leq j\leq s$.
Hence, for every prime number $q$ such that $q|m_{j}$, and hence $q | k$, we have $\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}\in \mathcal{Q}(C)\subseteq K$, and thus $\mathcal{Q}(\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}) \subseteq K$. Since we are assuming the minimality of $K$, it must be $\mathcal{Q}(\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}) = K$.
\end{proof}
\begin{theorem}\label{axiomminimal-simple}
If $q$ is a prime number, then $\mathcal{Q}(\mathbf{LV}_{q+1}\times\mathbf{LV}_{2})$ is axiomatized by the MV quasi-identities plus:
\begin{itemize}
\item $\gamma_{q}(x)\approx 1$ (the identity axiomatizing $\mathcal{V}(\mathbf{LV}_{q+1})$)
\item $q(x\land\lnot x)\approx 1 \Rightarrow y \lor \neg y \approx 1$
\end{itemize}
\end{theorem}
\begin{proof}
It is easy to check that $\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}$ satisfies these two quasi-identities.
Since the MV-identities and $\gamma_{q}(x)\approx 1$ axiomatize $\mathcal{V}(\mathbf{LV}_{q+1})$, and $\mathcal{V}(\mathbf{LV}_{q+1})$ is a locally finite quasivariety, it is enough to prove that every critical MV-algebra $C\in\mathcal{V}(\mathbf{LV}_{q+1})$ where the quasi-equation $q(x\land\lnot x)\approx 1 \Rightarrow y \lor \neg y \approx 1$ holds, belongs to $\mathcal{Q}(\mathbf{LV}_{q+1}\times\mathbf{LV}_{2})$.
Let $C$ be a critical MV-algebra satisfying the axiomatization. Then $C$ is such that
$C\cong\mathbf{ LV}_{m_{1}+1}\times\cdots\times\mathbf{ LV}_{m_{k}+1}$ satisfying the conditions of Theorem \ref{Wcritical}. Moreover, for every $1 \leq i \leq k$, either $m_{i} = 1$ or $m_{i} = q$, because $\mathbf{ LV}_{m_{i}+1}$ belongs to $\mathcal{V}(\mathbf{LV}_{q+1})$. If there is $c\in C$ such that $q(c\land\lnot c)=1$ then, by the second quasi-equation of the above axiomatization,
$b \lor \neg b \approx 1$ for any $b\in C$. Thus we have $C\in {\bf B} \subseteq
\mathcal{Q}(\mathbf{LV}_{q+1}\times\mathbf{LV}_{2})$. Otherwise, recalling that either $m_{i} = 1$ or $m_{i} = q$ for every $i$, if for every $c\in C$ one has $q(c\land \lnot c)\neq 1$ then $m_{i}=1$ for some $1\leq i\leq k$. In that case, by the characterization of critical algebras (Theorem \ref{Wcritical}), we have $C\cong \mathbf{LV}_{2}$ or $C\cong \mathbf{LV}_{q+1}\times\mathbf{LV}_{2}$. If $C\cong \mathbf{LV}_{2}$, then trivially $C\in
\mathcal{Q}(\mathbf{LV}_{q+1}\times\mathbf{LV}_{2})$. If $C\cong \mathbf{LV}_{q+1}\times\mathbf{LV}_{2}$,
then clearly $C \in\mathcal{Q}(\mathbf{LV}_{q+1}\times\mathbf{LV}_{2})$.
\end{proof}
Above, note that the identity $y \lor \neg y \approx 1$ corresponds to Panti's axiom $\gamma_{I, J}(y)$ mentioned earlier, with $I = \{1\}$ and $J = \emptyset$, axiomatizing {\sf CPL} as an axiomatic extension of $\L_{n+1}$ for any $n > 1$.
Finally, we obtain the following characterization result about strong maximality of logics $\bar{\sf L}^{j}_{q}$ with respect to classical logic.
\begin{theorem} \label{final-simple}
Let $q > 1$ be a prime number. Then, for every $j$ such that $0 < j \leq q$:
\begin{itemize}
\item
$ \bar{\sf L}^{j}_{q}$ is strongly maximal with respect to {\sf CPL} and it is axiomatized by
${\sf H}^j_q$ plus the rule $j(\varphi\land\lnot \varphi)/ (\psi \lor \neg \psi)^q$.
\item $\mathsf{L}^{j}_{q}$ is strongly maximal w.r.t. $\bar{\sf L}^{j}_{q}$.
\end{itemize}
\end{theorem}
\begin{proof}
By using the equivalence of Theorem \ref{equivsys} and the algebraizability of $\L_{q+1}$, the lattice of subquasivarieties of $\mathcal{V}(\mathbf{LV}_{q+1})$ is dually order isomorphic to the lattice of all finitary extensions of $\L_{q+1}$. Clearly ${\sf CPL} = \L_2$ is the finitary extension of $ \bar{\sf L}^{j}_{q}$ corresponding to the subvariety $\mathcal{Q}(\mathbf{LV}_{2})$ of $\mathcal{Q}(\mathbf{LV}_{q+1}\times\mathbf{LV}_{2})$, and
$ \bar{\sf L}^{j}_{q}$
is the finitary extension of $\mathsf{L}^{j}_{q}$ corresponding to the subquasivariety $\mathcal{Q}(\mathbf{LV}_{q+1}\times\mathbf{LV}_{2})$ of $\mathcal{V}(\mathbf{LV}_{q+1})$.
By Theorem \ref{Wcritical}, the only critical algebras of $\mathcal{V}(\mathbf{LV}_{q+1})$ are $\mathbf{LV}_{q+1}$, $\mathbf{LV}_{2}$ and $\mathbf{LV}_{2} \times \mathbf{LV}_{q+1}$ and, by Lemma \ref{distinct}, all its subquasivarieties are $\mathcal{Q}(\mathbf{LV}_{2}) \subsetneq \mathcal{Q}(\mathbf{LV}_{2} \times \mathbf{LV}_{q+1}) \subsetneq \mathcal{Q}(\mathbf{LV}_{q+1})$. Therefore, by Theorem \ref{minimal-simple}, $ \bar{\sf L}^{j}_{q}$
is strongly maximal with respect to {\sf CPL}, while ${\sf L}^{j}_{q}$
is strongly maximal with respect to $\bar{\sf L}^{j}_{q}$.
Finally, the axiomatization of $\bar{\sf L}^{j}_{q}$ follows from Theorem \ref{axiomminimal-simple} and the facts that $j\,\varphi\Dashv\vDash_{\mathsf{L}^{j}_{q}}q\ \varphi$ holds for every formula $\varphi$ and that the equation $(q x)^q = q x$ is valid in the class $MV_q$.
\end{proof}
The next corollary readily follows from the above proof.
\begin{corollary} \label{between}
$\bar{\sf L}^{j}_{q}$ is the unique strongly maximal logic w.r.t.\ {\sf CPL} above ${\sf L}^j_{q}$. In fact, $\bar{\sf L}^{j}_{q}$ is the only logic strictly between ${\sf L}^j_{q}$ and {\sf CPL}.
\end{corollary}
\begin{remark} \label{rem-simple}
It is worth noting that the rule $j(\varphi\land\lnot \varphi)/ (\psi \lor \neg \psi)^q$ exactly corresponds to the explosion rule $(exp_j)$ introduced in Section \ref{sectLqi}. Indeed, the rule $j(\varphi\land\lnot \varphi)/ (\psi \lor \neg \psi)^q$ is clearly derivable from $(exp_j)$. On the other hand, assuming $j(\varphi\land\lnot \varphi)$, by this rule it follows that $(\psi \lor \neg \psi)^q$ for every $\psi$. Hence the logic becomes {\sf CPL} because the translation of the classical axiom $\psi \lor \neg \psi$ is precisely $(\psi \lor \neg \psi)^q$, and thus $\bot$ follows from $j(\varphi\land\lnot \varphi)$.
This does not come as a surprise, since as we have proved above, $\mathsf{L}^{j}_{q}$ is strongly maximal w.r.t. $\bar{\sf L}^{j}_{q}$ and so the latter is the only proper extension of $\mathsf{L}^{j}_{q}$ (with a finitary rule) properly contained in {\sf CPL}.
\end{remark}
As a corollary of the previous remark, the completeness of $\overline{\sf H}^j_q$ follows.
\begin{corollary} \label{cor-after-remark} $\overline{\sf H}^j_q$ is sound and complete w.r.t.\ $\bar{\sf L}^{j}_{q}$.
\end{corollary}
\subsection{Strong maximality with respect to systems $\mathsf{L}^{i}_{n}$}
\label{general}
The next theorems are generalizations of Theorems \ref{minimal-simple}, \ref{axiomminimal-simple} and \ref{final-simple}, respectively.
\begin{theorem} \label{minimal}
Let $n>0$ and $k>1$.
The set of all minimal subquasivarieties of $MV_{nk}=\mathcal{Q}(\mathbf{LV}_{nk+1})$ among those strictly containing $\mathcal{Q}(\mathbf{LV}_{n+1})$ is \\
\noindent \begin{tabular}{rl}
$M^{nk}_{n} =$&$\{ \mathcal{Q}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}\}) \ : \ q \mbox{ prime, } q | k \mbox{ and } q\mathop{\!\not\vert} n \} \bigcup$ \\
& $\{\mathcal{Q}(\{\mathbf{LV}_{n+1}, \mathbf{LV}_{q^{r+1}+1}\times\mathbf{LV}_{2}\}) \ : \ q \mbox{ prime, } q|k, q^{r}|n \mbox{ and } q^{r+1}\mathop{\!\not\vert} n \}.$ \\
\end{tabular}
\end{theorem}
\begin{proof}
By Lemma \ref{previous}, every $K\in M^{nk}_{n}$ is a subquasivariety of $\mathcal{Q}(\mathbf{LV}_{nk+1})$ strictly containing $\mathcal{Q}(\mathbf{LV}_{n+1})$. Moreover, for every $K_{1}, K_{2}\in M^{nk}_{n}$, if $K_{1}\neq K_{2}$ then $K_{1}\not\subseteq K_{2}$ and $K_{2}\not\subseteq K_{1}$.
Let $K$ be a minimal subquasivariety of $\mathcal{Q}(\mathbf{LV}_{nk+1})$ strictly containing $\mathcal{Q}(\mathbf{LV}_{n+1})$. Trivially, $\mathbf{LV}_{n+1}\in K$. Since $K\neq \mathcal{Q}(\mathbf{LV}_{n+1})$, it must contain a critical algebra $C\cong\mathbf{ LV}_{m_{1}+1}\times\cdots\times\mathbf{ LV}_{m_{s}+1}$ such that $m_{i}|nk$ for every $1 \leq i\leq s$ and $m_{j} \mathop{\!\not\vert} n$ for some $1\leq j\leq s$. If there is a prime number $q|m_{j}$ such that $q\mathop{\!\not\vert} n$, then $\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}\in \mathcal{Q}(C)\subseteq K$. Otherwise, there are a prime $q$ and some $r\geq 1$ such that $q|m_{j}$, $q^{r}|n$, $q^{r+1}\mathop{\!\not\vert} n$ and $q^{r+1}|m_{j}$, whence $\mathbf{LV}_{q^{r+1}+1}\times\mathbf{LV}_{2}\in \mathcal{Q}(C)\subseteq K$. Thus, in both cases $K$ contains some $K_{i}\in M^{nk}_{n}$, from which it follows that $K \in M^{nk}_{n}$, since we are assuming minimality of $K$.
\end{proof}
\begin{theorem}\label{axiomminimal}
Let $n>0$.
\begin{itemize}
\item If $q$ is a prime number such that $q \mathop{\!\not\vert} n$,
then $\mathcal{Q}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}\})$ is axiomatized by the MV identities plus
\begin{itemize}
\item $\gamma_{\{n, q\},\emptyset}(x)\approx 1 \mbox{ (the identity axiomatizing }
\mathcal{V}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\})\mbox{})$
\item $nq(x\land\lnot x)\approx 1 \Rightarrow \gamma_{\{n\},\emptyset}(y)\approx 1.$
\end{itemize}
\item If $q$ is a prime number such that $q^{r}| n$ and $q^{r+1} \mathop{\!\not\vert} n$,
$\mathcal{Q}(\{\mathbf{LV}_{n+1}, \mathbf{LV}_{q^{r+1}+1}\times\mathbf{LV}_{2}\})$ is axiomatized by the MV identities plus
\begin{itemize}
\item $\gamma_{\{n, q^{r+1}\},\emptyset}(x)\approx 1\mbox{ (the identity axiomatizing } \mathcal{V}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q^{r+1}+1}\}) \mbox{} )$
\item $nq(x\land\lnot x)\approx 1 \Rightarrow \gamma_{\{n\},\emptyset}(y)\approx 1.$
\end{itemize}
\end{itemize}
\end{theorem}
\begin{proof} We prove the first item; the other is proved in an analogous way.
It is easy to check that $\mathbf{LV}_{n+1}$ and $\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}$ satisfy all the quasi-identities.
Since the MV-identities with $\gamma_{\{n, q\},\emptyset}(x)\approx 1$ axiomatize $\mathcal{V}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\})$ and $\mathcal{V}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\})$ is a locally finite quasivariety, it is enough to prove that every critical MV-algebra $C\in\mathcal{V}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\})$ in which the quasi-equation $nq(x\land\lnot x)\approx 1 \Rightarrow \gamma_{\{n\},\emptyset}(y)\approx 1$ holds belongs to $\mathcal{Q}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}\})$. Therefore, let $C$ be a critical MV-algebra satisfying the axiomatization. Then, $C$ is such that
$C\cong\mathbf{ LV}_{m_{1}+1}\times\cdots\times\mathbf{ LV}_{m_{r}+1}$ satisfying the conditions of Theorem \ref{Wcritical}, and moreover, for every $1 \leq i \leq r$, either $m_{i}|n$ or $m_{i} = q$ because $C \in \mathcal{V}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\})$. If there is $c\in C$ such that $nq(c\land\lnot c)=1$ then, by the second quasi-equation of the axiomatization above,
$\gamma_{\{n\},\emptyset}(b)\approx 1$ for any $b\in C$, thus $C\in \mathcal{V}(\{\mathbf{LV}_{n+1}\})=\mathcal{Q}(\{\mathbf{LV}_{n+1}\})\subseteq
\mathcal{Q}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}\})$. If for every $c\in C$, $nq(c\land \lnot c)\neq 1$, then $m_{i}=1$ for some $1\leq i\leq r$. In that case, by the characterization of critical algebras (Theorem \ref{Wcritical}), either $C\cong \mathbf{LV}_{2}$ or $C\cong \mathbf{LV}_{m+1}\times\mathbf{LV}_{2}$. If $C\cong \mathbf{LV}_{2}$, then trivially $C\in
\mathcal{Q}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}\})$. Otherwise, if $C\cong \mathbf{LV}_{m+1}\times\mathbf{LV}_{2}$, since $C\in\mathcal{V}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\})$, either $m|n$ or $m=q$. If $m|n$ then $C\in \mathcal{V}(\{\mathbf{LV}_{n+1}\})=\mathcal{Q}(\{\mathbf{LV}_{n+1}\})\subseteq
\mathcal{Q}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}\})$. If $m=q$ then $C\cong \mathbf{LV}_{q+1}\times\mathbf{LV}_{2}\in\mathcal{Q}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}\})$.
\end{proof}
If $1\leq i,m\leq n$, by analogy with $\mathsf{L}^{i/n}_m$, we define the matrix logic $$\bar{\sf L}^{i/n}_{m}=\langle \textbf{\L}\mathbf{V}_{m+1} \times \textbf{\L}\mathbf{V}_{2}, (F_{i/n}\cap\textbf{\L}\mathbf{V}_{m+1}) \times\{1\} \rangle.$$
Then we have the following generalization of Theorem \ref{final-simple}.
\begin{theorem}
Let $0<i\leq n$ be natural numbers and let $q$ be a prime number. Then we have:
\begin{itemize}
\item If
$q \mathop{\!\not\vert} n$ then, for every $j$ such that $(i-1)q<j\leq iq$, $\mathsf{L}^{i}_{n}\cap \bar{\sf L}^{j/nq}_{q}$ is strongly maximal with respect to $\mathsf{L}^{i}_{n}$, and it is axiomatized by ${\sf H}^{j/nq}_{n, q}$ plus the rule $j(\varphi\land\lnot \varphi)/\gamma^{j/nq}_{n}(\psi)$.
\item If $q^{r}| n$ and $q^{r+1} \mathop{\!\not\vert} n$ then, for every $j$ such that $(i-1)q<j\leq iq$, $\mathsf{L}^{i}_{n}\cap \bar{\sf L}^{j/nq}_{q^{r+1}}$ is strongly maximal with respect to $\mathsf{L}^{i}_{n}$, and it is axiomatized by ${\sf H}^{j/nq}_{n, q^{r+1}}$ plus the rule $j(\varphi\land\lnot \varphi)/\gamma^{j/nq}_{n}(\psi)$.
\end{itemize}
Recall that in the above rules $\gamma^{j/nq}_{n}(\psi)$ refers to the axiom in one variable that axiomatizes $\mathsf{L}^{j/nq}_{n}$ relative to $\mathsf{L}^{j}_{nq}$.
Moreover, every finitary extension of some $\mathsf{L}^{j}_{k}$ is strongly maximal with respect to $\mathsf{L}^{i}_{n}$ iff it is of one of the two preceding types.
\end{theorem}
\begin{proof}
Notice that $\mathsf{L}^{i}_{n}=\mathsf{L}^{j/nq}_{n}$ for every $j$ such that $(i-1)q<j\leq iq$. Thus, $\mathsf{L}^{i}_{n}$ is an extension of $\mathsf{L}^{j}_{nq}$. Now, by using the equivalence of Theorem \ref{equivsys} and the algebraizability of $\L_{nq+1}$, the lattice of subquasivarieties of $\mathcal{V}(\mathbf{LV}_{nq+1})$ is dually order isomorphic to the lattice of all the finitary extensions of $\mathsf{L}^{j}_{nq}$. Moreover, $\mathsf{L}^{i}_{n}\cap \bar{\sf L}^{j/nq}_{q}$ and $\mathsf{L}^{i}_{n}\cap \bar{\sf L}^{j/nq}_{q^{r+1}}$ are the finitary extensions of $\mathsf{L}^{j}_{nq}$ associated to $\mathcal{Q}(\{\mathbf{LV}_{n+1},\mathbf{LV}_{q+1}\times\mathbf{LV}_{2}\})$ and $\mathcal{Q}(\{\mathbf{LV}_{n+1}, \mathbf{LV}_{q^{r+1}+1}\times\mathbf{LV}_{2}\})$, respectively. Hence, they are strongly maximal with respect to $\mathsf{L}^{i}_{n}$, by Theorem \ref{minimal}. The axiomatization follows from Theorem \ref{axiomminimal} and the facts that $j\,\varphi\Dashv\vDash_{\mathsf{L}^{j}_{nq}}nq\ \varphi$
holds for every formula $\varphi$ and that the equation $(nq x)^{nq} = nq x$ is valid in the class $MV_{nq}$.
Finally, the last statement of this theorem follows from Theorem \ref{minimal} and Theorem \ref{equivsys}.
\end{proof}
\section{An application to ideal paraconsistent logics} \label{sectIdeal}
As mentioned in Example \ref{EjIdeal}, Arieli et al.\ introduced in \cite{ArieliAZ11a} the concept of {\em ideal paraconsistent logics}. We recall this notion here.
\begin{definition} [c.f. \cite{ArieliAZ11a}] \label{IdPar}
Let $L$ be a propositional logic defined over a signature $\Theta$ (with consequence relation $\vdash_L$) containing at least a unary connective $\neg$ and a binary connective $\to$ such that:
\begin{itemize}
\item[(i)] $L$ is paraconsistent w.r.t.\ $\neg$ (or simply $\neg$-paraconsistent), that is, there are formulas $\varphi,\psi \in \mathcal{L}(\Theta)$ such that $ \varphi, \neg\varphi \nvdash_L\psi$;
\item[(ii)] $\to$ is an implication for which the deduction-detachment theorem holds in $L$, that is, $\Gamma \cup \{\varphi\} \vdash_L \psi$ iff $\Gamma \vdash_L \varphi \to\psi$, for every set of formulas $\Gamma \cup \{\varphi, \psi\} \subseteq \mathcal{L}(\Theta)$.
\item[(iii)] There is a presentation of {\sf CPL} as a matrix logic $L'=\langle \mathbf{A}, \{1\}\rangle$ over the signature $\Theta$ such that the domain of $\mathbf{A}$ is $ \{0,1\}$, and $\neg$ and $\to$ are interpreted as the usual 2-valued negation and implication of {\sf CPL}, respectively.
\item[(iv)] $L$ is a sublogic of {\sf CPL} in the sense that $\vdash_L \subseteq \; \vdash_{L'}$, that is, $\Gamma \vdash_L \varphi$ implies $\Gamma \vdash_{L'} \varphi$, for every set of formulas $\Gamma \cup \{\varphi\} \subseteq \mathcal{L}(\Theta)$.
\end{itemize}
Then, $L$ is said to be an {\em ideal paraconsistent logic} if it is maximal w.r.t.\ $L'$, and every proper extension of $L$ over $\Theta$ is not $\neg$-paraconsistent.
\end{definition}
An implication connective satisfying the above condition (ii) will be called {\em deductive implication} in the rest of the paper.\footnote{Such an implication is called {\em deductive} in \cite{car:con:mar:07,CF2014} and {\em proper} in \cite{ArieliAZ11a}. }
\
Thus, a $\neg$-paraconsistent logic $L$ with a deductive implication is ideal if it is maximal w.r.t. {\sf CPL} (presented over the signature $\Theta$ of $L$) and, if $L''$ is another logic over $\Theta$ properly containing $L$, with $\Gamma \cup \{\varphi\} \subseteq \mathcal{L}(\Theta)$ such that $\Gamma \vdash_{L''}\varphi$ but $\Gamma \nvdash_L \varphi$, then the logic obtained from $L$ by adding $\Gamma / \varphi$ as an inference rule is not $\neg$-paraconsistent.
As already noticed, the logics ${\sf L}^i_n$ with $i/n \leq 1/2$ are paraconsistent. In this section, using the results of the previous sections, we study the status of the logics ${\sf L}^i_n$ in relation to ideal paraconsistency. Namely,
in the following subsection, we will show that the logics of the form ${\sf L}_q^i$, where $q$ is prime and $i/q \leq 1/2$ are ideal paraconsistent,
while in subsection \ref{sectJ4} the special case of ${\sf L}_3^1$, renamed as ${\sf J}_4$, is analyzed in more detail.
\subsection{The ideal paraconsistent logics ${\sf L}_q^i$} \label{sectLiq}
By combining Proposition~\ref{parLqi} with Corollary~\ref{maxLqi}, we know that a logic $\mathsf{L}^i_q$ is $\neg$-paracon\-sis\-tent and maximal w.r.t.\ {\sf CPL}, provided that $q$ is prime and $i/q \leq 1/2$. From now on we will assume this is the case when referring to a logic $\mathsf{L}^i_q$.
Recall
that
$\overline{\sf H}^i_q$ is the Hilbert calculus obtained from the calculus ${\sf H}^i_q$ for $\mathsf{L}^i_q$ by adding the $i$-explosion rule $(exp_i)$. Since $\varphi \wedge \neg\varphi \vdash_{{\sf H}^i_q} i(\varphi \wedge \neg\varphi)$, the logic $\overline{\sf H}^i_q$ is explosive. Then, taking into account Corollary \ref{between},
it follows that every proper extension of $\mathsf{L}^i_q$ defined over its signature is either $\bar{\mathsf{L}}^i_q$ or {\sf CPL}, and hence not $\neg$-paraconsistent.
In addition, by Lemma \ref{negFi}, we know there is a definable unary connective ${\sim}^i_q$ such that, for every evaluation $e$,
$e({\sim}^i_q~p)=0$ if $e(p) \geq i/q$, and $e({\sim}^i_q~p)=1$ otherwise, for every propositional variable $p$.\footnote{Namely, ${\sim}^i_q~p = \neg\lambda_{i,q}(p)$.} This is a kind of ``classical'' negation defined on ${\sf L}_q^i$. Using this negation, one can define in turn a new implication $\Rightarrow^i_q$ by stipulating $\varphi \Rightarrow^i_q \psi={\sim}^i_q \varphi \lor \psi$. In fact, one can easily check that $\Rightarrow^i_q$ is a deductive implication on ${\sf L}^i_q$ in the sense of Definition \ref{IdPar} and that
over $\{0,1\}$ it coincides with the classical implication.
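That $\Rightarrow^i_q$ is a deductive implication can be seen pointwise: $e(\varphi \Rightarrow^i_q \psi) \in F_{i/q}$ iff $e(\varphi) \in F_{i/q}$ implies $e(\psi) \in F_{i/q}$. The following Python sketch (an informal check; the names are ours) confirms this semantic condition on some small prime chains:
\begin{verbatim}
from fractions import Fraction as F

def check(q, i):
    LV  = [F(k, q) for k in range(q + 1)]
    des = lambda x: x >= F(i, q)            # membership in F_{i/q}
    sn  = lambda x: 0 if des(x) else 1      # ~^i_q, i.e. neg lambda_{i,q}
    imp = lambda x, y: max(sn(x), y)        # x =>^i_q y = ~^i_q x v y
    for x in LV:
        for y in LV:
            assert des(imp(x, y)) == ((not des(x)) or des(y))

for q in (2, 3, 5, 7):
    for i in range(1, q + 1):
        check(q, i)
\end{verbatim}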
All the above considerations lead to the following result.
\begin{proposition} \label{IdealLiq}
Let $q$ be a prime number, and let $1 \leq i < q$ be such that $i/q \leq 1/2$. Then, $\mathsf{L}^i_q$ is a $(q+1)$-valued ideal paraconsistent logic.\footnote{Strictly speaking, in this claim we implicitly assume that the signature of $\mathsf{L}^i_q$ has been changed by adding the definable implication $\Rightarrow^i_q$ as a primitive connective. }
\end{proposition}
Therefore we have a large family of examples of ideal paraconsistent logics. In particular, for each prime $q$, all the logics in the set $PC_{q+1} = \{ \mathsf{L}^i_q \ : \ i < q/2 \}$ are $(q+1)$-valued ideal paraconsistent logics. Moreover, if we consider ``the more theorems a paraconsistent logic has, the more well-behaved the logic is'' as a valid further criterion, then we can still refine the set $PC_{q+1}$. Indeed, if we denote by $Th(L)$ the set of theorems of a logic $L$ then, as noticed in Remark~\ref{Lqi-indist}, we have the strict inclusions $Th(\mathsf{L}^i_q) \subsetneq Th(\mathsf{L}^j_q) \subsetneq Th({\sf CPL})$ whenever $i > j$. Therefore the logic ${\sf J}_{q+1} = \mathsf{L}^1_q$ appears to be the ``best'' ideal logic in the set $PC_{q+1}$,\footnote{We have chosen the name ${\sf J}_{q+1}$ to denote the logic $\mathsf{L}^1_q$ inspired by the 3-valued case, where the ideal paraconsistent logic ${\sf J}_3$ coincides with $\mathsf{L}^1_2$.} since it is the logic in that set having the biggest set of theorems from classical logic.
Finally, it is worth mentioning that all the paraconsistent logics of the form $\mathsf{L}^i_n$ are, indeed, {\bf LFI}s (recall Section~\ref{recovery}):
\begin{proposition} \label{LFI-Lqi} Suppose that $i/n \leq 1/2$. Then, the logic $\mathsf{L}^i_n$ is an {\bf LFI} w.r.t.\ $\neg$, where the consistency operator is defined as $\circ \alpha = {\sim}^i_n (\alpha \wedge \neg \alpha)$.
\end{proposition}
\begin{proof}
Straightforward.
\end{proof}
\subsection{The four-valued ideal paraconsistent logic ${\sf J}_4$} \label{sectJ4}
As mentioned in Remark \ref{axiomHqi}, we know from Theorem 4.3 in \cite{blok:pig:01} that it is possible to obtain a standard (that is, without ``global'' inference rules) Hilbert calculus for a logic $\mathsf{L}^i_n$ for $i< n$ from the usual one for $\L_{n+1}$ by using translations. However, the calculi obtained in this manner can lack an intuitive meaning, since they are defined in terms of the implication connective $\to$ of $\L_{n+1}$, which is naturally associated to the filter $F_1 =\{1\}$ but not to the filter $F_{i/n} = \{i/n, \ldots, 1\}$, which is the one at work in ${\sf L}^i_n$. Actually, the implication naturally associated to the filter $F_{i/n}$ is $\Rightarrow^i_n$, considered above, for which modus ponens (MP) and the deduction-detachment theorem hold.
In this section we focus on the particular case of the (ideal paraconsistent) logic ${\sf J}_4 = {\sf L}^1_3$. ${\sf J}_4$ can be considered as a generalization to four values of the paraconsistent 3-valued logic ${\sf J}_3$ introduced by da Costa and D'Ottaviano in \cite{dot:dac:70} and briefly mentioned in Example~\ref{ExL3}.
For this logic a more natural signature $\Sigma$ will be
considered for describing it axiomatically in terms
of a deductive implication connective (in the sense of Definition~\ref{IdPar} item~(ii)) and a unary connective $*$ representing the square operation $x \otimes x$, which can be seen as a kind of `truth stresser' (see e.g. \cite{Ha01c}).
A soundness and completeness result for this calculus, proved by using a `separation' technique for truth-values, will be presented. Note that dealing with the logics ${\sf J}_{q+1} = \mathsf{L}^1_q$ for a prime $q>3$ appears to be much more complicated, and it certainly lies outside the scope of this paper.
The signature $\Sigma$ that will be used in the rest of the section is given by two unary connectives ${\ast}$ (square) and $\neg$ (negation), plus a binary connective $\vee$ for disjunction. Abusing the notation, we formally define next ${\sf J}_4$ over this signature, and we will show later that it is an equivalent presentation of ${\sf L}^1_3$.
\begin{definition} \label{algA4}
${\sf J}_4$ is the matrix logic $\langle {\bf A}_4, F_{1/3}\rangle$ over $\Sigma$, where the algebra is ${\bf A}_4 = ({\L}V_4, \lor, \neg, *)$, with operations defined by the tables below:
$$
\begin{array}{|c||c|c|c|c|} \hline
\vee & 1 & 2/3 & 1/3 & 0\\ \hline \hline
1 & 1 & 1 & 1 & 1 \\ \hline
2/3 & 1 & 2/3 & 2/3 & 2/3 \\ \hline
1/3 & 1 & 2/3 & 1/3 & 1/3\\ \hline
0 & 1 & 2/3 & 1/3 & 0 \\ \hline
\end{array}
\hspace{1 cm}
\begin{array}{|c||c|c|} \hline
& \neg & {\ast} \\ \hline \hline
1 & 0 & 1 \\ \hline
2/3 & 1/3 & 1/3 \\ \hline
1/3 & 2/3 & 0 \\ \hline
0 & 1 & 0 \\ \hline
\end{array}
$$
\end{definition}
Observe that $\neg$ is {\L}ukasiewicz negation in $\textbf{\L}\mathbf{V}_4$, while ${\ast}x=x \otimes x$ (with $\otimes$ being {\L}ukasiewicz strong conjunction) and $\vee$ is the lattice join in $\textbf{\L}\mathbf{V}_4$.
In this signature $\Sigma$ the following derived connectives can be defined (as usual, the corresponding operators will be denoted using the same symbol): \\
\begin{tabular}{ll}
- & $\Delta(p) = {\ast}{\ast} p$ ; \\
- & ${\sim}p = \Delta( \neg p)$ ; \\
- & $p \Rightarrow r = {\sim}p \vee r$ ; \\
- & $p \Leftrightarrow r = (p \Rightarrow r) \wedge (r \Rightarrow p)$ ; \\
- & $p \wedge r = \neg(\neg p \vee \neg r)$ ; \\
- & $\nabla(p)= \neg{\sim}p$; \\
- & $\alpha_{1/3}(p)=\nabla(p) \wedge {\sim}{\ast} p$; \\
- & $\beta_{1/3}(p)=\alpha_{1/3}(p) \wedge {\ast}\neg p$.\\ \\
\end{tabular}
\ \\
It is easy to see that $\Delta$ is the Monteiro-Baaz {\em Delta-operator} and $\sim$ is G\"odel negation (${\sim}x=1$ if $x=0$, and $0$ otherwise). Note that $\sim$ actually coincides with ${\sim}^1_3$, and thus $\Rightarrow$ is nothing but $\Rightarrow^1_3$. Furthermore, $\nabla(x)=0$ if $x=0$, and $1$ otherwise; $\alpha_{1/3}(x)=1$ and $\beta_{1/3}(x)=1/3$ if $x=1/3$, and both are $0$ otherwise.
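These descriptions of the derived operators can be verified directly from the tables of Definition \ref{algA4}. The following Python sketch (informal; all names are ours) runs through the four truth values:
\begin{verbatim}
from fractions import Fraction as F

LV4   = [F(0), F(1, 3), F(2, 3), F(1)]
neg   = lambda x: 1 - x
vee   = lambda x, y: max(x, y)
wedge = lambda x, y: neg(vee(neg(x), neg(y)))  # de Morgan; = min on a chain
star  = lambda x: max(0, 2 * x - 1)            # *x = x (x) x

delta = lambda x: star(star(x))                # Delta = **
snot  = lambda x: delta(neg(x))                # Goedel negation
nabla = lambda x: neg(snot(x))
alpha = lambda x: wedge(nabla(x), snot(star(x)))   # alpha_{1/3}
beta  = lambda x: wedge(alpha(x), star(neg(x)))    # beta_{1/3}

for x in LV4:
    assert delta(x) == (1 if x == 1 else 0)
    assert snot(x)  == (1 if x == 0 else 0)
    assert nabla(x) == (0 if x == 0 else 1)
    assert alpha(x) == (1 if x == F(1, 3) else 0)
    assert beta(x)  == (F(1, 3) if x == F(1, 3) else 0)
\end{verbatim}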
It is worth remarking that {\L}ukasiewicz implication is definable from these operators in the following way: $$p \to r = ((\nabla(\neg p) \vee r) \wedge (\neg p \vee \nabla(r)) \wedge \neg \beta_{1/3}(r) )
\vee (({\sim}p \wedge \alpha_{1/3}(r)) \vee (\alpha_{1/3}(p) \wedge \alpha_{1/3}(r))).$$
Then, the following result follows easily:
\begin{proposition} \label{algA4=algL4}
The algebras $\textbf{\L}\mathbf{V}_4$ and ${\bf A}_4$ are functionally equivalent.
\end{proposition}
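The non-trivial part of Proposition \ref{algA4=algL4} is the definability of $\to$, which can be confirmed by brute force over the $16$ pairs of truth values. The following self-contained Python sketch (an informal check, with names of our own choosing) does so:
\begin{verbatim}
from fractions import Fraction as F

LV4   = [F(0), F(1, 3), F(2, 3), F(1)]
neg   = lambda x: 1 - x
vee   = lambda x, y: max(x, y)
wedge = lambda x, y: min(x, y)      # = neg(vee(neg x, neg y)) on a chain
star  = lambda x: max(0, 2 * x - 1)
delta = lambda x: star(star(x))
snot  = lambda x: delta(neg(x))
nabla = lambda x: neg(snot(x))
alpha = lambda x: wedge(nabla(x), snot(star(x)))
beta  = lambda x: wedge(alpha(x), star(neg(x)))

luk = lambda p, r: min(1, 1 - p + r)   # Lukasiewicz implication, for reference

def impl(p, r):                        # the defining term displayed above
    A = wedge(wedge(vee(nabla(neg(p)), r), vee(neg(p), nabla(r))),
              neg(beta(r)))
    B = vee(wedge(snot(p), alpha(r)), wedge(alpha(p), alpha(r)))
    return vee(A, B)

assert all(impl(p, r) == luk(p, r) for p in LV4 for r in LV4)
\end{verbatim}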
This means that the proposed operators over $\Sigma$ constitute an alternative presentation of the algebra $\textbf{\L}\mathbf{V}_4$ underlying $\L_4$. Next we define an axiomatic system for ${\sf J}_4$.
\begin{definition} \label{calH4} The Hilbert calculus ${\sf H}_4$ for the logic ${\sf J}_4$, defined over the signature $\Sigma$, is given as follows:\\[2mm]
{\em Axiom schemas:} those of {\sf CPL} over the signature $\{\vee, \Rightarrow, {\sim}\}$ plus
\begin{itemize}
\item[(Ax1)] $\neg{\sim}\alpha \Rightarrow \alpha$ \vspace{-0.2cm}
\item[(Ax2)] $\alpha \vee \neg \alpha$ \vspace{-0.2cm}
\item[(Ax3)] $\neg\neg\alpha \Leftrightarrow \alpha$ \vspace{-0.2cm}
\item[(Ax4)] $\neg(\alpha \vee \beta) \Rightarrow \neg \alpha$ \vspace{-0.2cm}
\item[(Ax5)] $\neg(\alpha \vee \beta) \Rightarrow \neg \beta$ \vspace{-0.2cm}
\item[(Ax6)] $\neg \alpha \Rightarrow (\neg\beta \Rightarrow \neg(\alpha \vee \beta))$ \vspace{-0.2cm}
\item[(Ax7)] ${\ast}\alpha \Rightarrow \alpha$ \vspace{-0.2cm}
\item[(Ax8)] ${\ast}(\alpha \vee \neg \alpha)$ \vspace{-0.2cm}
\item[(Ax9)] ${\ast}\alpha \Rightarrow {\sim}{\ast}\neg\alpha$ \vspace{-0.2cm}
\item[(Ax10)] ${\ast}{\ast}\alpha \Leftrightarrow {\sim}\neg \alpha$ \vspace{-0.2cm}
\item[(Ax11)] $\neg {\ast}\alpha \Leftrightarrow \neg\alpha$ \vspace{-0.2cm}
\item[(Ax12)] ${\ast}(\alpha \vee \beta) \Leftrightarrow ({\ast}\alpha \vee {\ast}\beta)$
\end{itemize}
\noindent {\em Inference rule:}
\begin{itemize}
\item[(MP)] $\displaystyle\frac{\alpha \ \ \ \ \alpha\Rightarrow
\beta}{\beta}$
\end{itemize}
\end{definition}
Observe that, since (MP) is the only inference rule, ${\sf H}_4$ satisfies the deduction-detachment theorem w.r.t. the implication $\Rightarrow$: $\Gamma \cup\{ \alpha\} \vdash_{{\sf H}_4} \beta$ iff $\Gamma \vdash_{{\sf H}_4} \alpha \Rightarrow \beta$, for every set of formulas $\Gamma \cup \{\alpha,\beta\}$. {On the other hand, it can be proved that $*(\alpha\Rightarrow \beta) \Rightarrow (*\alpha \Rightarrow *\beta)$ is derivable in ${\sf H}_4$, which gives additional support to consider $*$ as a truth stresser.}
Soundness of ${\sf H}_4$ can be proved straightforwardly.
\begin{proposition} [Soundness of ${\sf H}_4$] The calculus ${\sf H}_4$ is sound w.r.t. ${\sf J}_4$, that is: $\Gamma \vdash_{{\sf H}_4} \varphi$ implies that $\Gamma \vDash_{\sf J_4} \varphi$, for every finite set of formulas $\Gamma \cup \{\varphi\}$.
\end{proposition}
In order to prove completeness, since ${\sf H}_4$ is a finitary Tarskian logic, one can use the technique of maximal consistent sets of formulas. Indeed, for any set of formulas $\Gamma \cup \{\varphi \}$, if $\Gamma \nvdash_{{\sf H}_4} \varphi$ then, by the Lindenbaum-{\L}os theorem, $\Gamma$ can be extended to a maximal set $\Lambda$ such that $\Lambda \nvdash_{{\sf H}_4} \varphi$. We will call the set $\Lambda$ {\em maximal \ntwrt\varphi} in ${\sf H}_4$.
Maximal sets w.r.t.\ a formula enjoy remarkable properties which directly follow from the axioms and rules of ${\sf H}_4$.
\begin{proposition}\label{MaxJ4}
Let $\Lambda$ be a maximal set \ntwrt\varphi\ in ${\sf H}_4$. Then, $\Lambda$ is closed, i.e.\ for every formula $\psi$, $\Lambda \vdash_{{\sf H}_4}\psi$ iff $\psi\in\Lambda$. Moreover, for any formulas $\alpha$ and $\beta$ the following conditions hold: \vspace{0.2cm}
\begin{tabular}{rl}
(1) &$\alpha \vee \beta \in \Lambda$ iff $\alpha \in \Lambda$ or $\beta \in \Lambda$;\\
(2) &$\alpha \not\in\Lambda$ iff ${\sim}\alpha \in\Lambda$;\\
(3) & $\alpha \Rightarrow \beta \in \Lambda$ iff $\alpha \not\in \Lambda$ or $\beta \in \Lambda$;\\
(4) &$\alpha \not\in\Lambda$ implies $\neg\alpha \in\Lambda$;\\
(5) & $\alpha \in\Lambda$ iff $\neg\neg\alpha \in\Lambda$;\\
(6) & $\neg{\sim}\alpha \in\Lambda$ implies $\alpha \in\Lambda$;\\
(7) & $\neg(\alpha \vee \beta) \in \Lambda$ iff $\neg\alpha \in \Lambda$ and $\neg\beta \in \Lambda$;\\
(8) & ${\ast}\alpha \in\Lambda$ implies $\alpha \in\Lambda$;\\
(9) & ${\ast}(\alpha \vee \beta) \in \Lambda$ iff ${\ast}\alpha \in \Lambda$ or ${\ast}\beta \in \Lambda$;\\
(10) & ${\ast}{\ast}\alpha\in\Lambda$ iff $\neg\alpha \not\in\Lambda$;\\
(11) & $\neg{\ast}\alpha \in\Lambda$ iff $\neg\alpha \in\Lambda$;\\
(12) & ${\ast}\alpha \not\in\Lambda$ iff ${\ast}\neg\alpha \in\Lambda$.
\end{tabular}
\end{proposition}
Next we prove a Truth Lemma for ${\sf H}_4$.
\begin{lemma} [Truth Lemma for ${\sf H}_4$] \label{TL-J4} Let $\Lambda$ be a maximal set of formulas \ntwrt\varphi\ in ${\sf H}_4$. Consider the following evaluation $e_\Lambda$ of propositional variables for ${\sf J}_4$:
$$(T) \hspace*{2cm}
e_\Lambda(\gamma) = \left\{\begin{array}{rl}
1 & \mbox{iff}\quad \gamma \in \Lambda, \mbox{and} \;\neg \gamma \not\in \Lambda\\[1mm]
2/3 & \mbox{iff}\quad \gamma \in \Lambda, \;\neg \gamma \in \Lambda, \mbox{and} \; {\ast} \gamma \in \Lambda\\[1mm]
1/3 & \mbox{iff}\quad \gamma \in \Lambda, \;\neg \gamma \in \Lambda, \mbox{and} \; {\ast} \gamma \not\in \Lambda\\[1mm]
0 & \mbox{iff}\quad \gamma \not\in \Lambda.
\end{array}\right.
$$Then, (T) holds for every complex formula $\gamma$.
\end{lemma}
\begin{proof}
The proof is done by induction on the complexity of the formula $\gamma$. If $\gamma$ is atomic then (T) holds by hypothesis.
Now, suppose (T) holds for every formula with complexity $\leq n$ (induction hypothesis -- IH) and let $\gamma$ be a formula with complexity $n+1$.
In order to prove (T) from (IH) by analyzing all the possible cases (namely, $\gamma = \neg\alpha$ or $\gamma = {\ast}\alpha$ or $\gamma = \alpha \vee \beta$), each item of Proposition~\ref{MaxJ4} should be used.\footnote{Observe that it is enough to prove the `only if' part of (T), since the four conditions on the right-hand side are pairwise incompatible, and $e_\Lambda(\gamma)$ can only take one of the values $0, 1/3, 2/3, 1$. Thus, if, for instance, the second condition on the right-hand side of (T) holds for a given formula $\gamma$ then the other 3 conditions are false and so $e_\Lambda(\gamma) \not\in \{1,1/3,0\}$, by the `only if' part of (T). Hence, $e_\Lambda(\gamma)$ must be $2/3$. This shows that the `if' part of (T) follows from the `only if' part.} The details are left to the reader.
\end{proof}
\begin{theorem} [Completeness of ${\sf H}_4$] \label{complJ4} The calculus ${\sf H}_4$ is complete w.r.t. ${\sf J}_4$, that is: $\Gamma \vDash_{{\sf J}_4} \varphi$ implies that $\Gamma \vdash_{{\sf H}_4} \varphi$, for every finite set of formulas $\Gamma \cup \{\varphi\}$.
\end{theorem}
\begin{proof}
Let $\Gamma\cup \{\varphi\}$ be a set of formulas in the language of ${\sf J}_4$
such that $\Gamma\nvdash_{{\sf H}_4} \varphi$.
By the Lindenbaum--{\L}o\'s theorem, there exists a set $\Lambda$ maximal \ntwrt\varphi\ in ${\sf H}_4$
such that $\Gamma \subseteq \Lambda$.
Let $e_\Lambda$ be the evaluation defined as in the Truth Lemma~\ref{TL-J4}.
Then, it follows that $e_\Lambda(\gamma) \in F_{1/3}$ iff $\gamma \in \Lambda$, for every formula $\gamma$. Therefore $e_\Lambda$ is an evaluation such that $e_\Lambda[\Gamma] \subseteq F_{1/3}$ but $e_\Lambda(\varphi) = 0$ since $\varphi \not\in \Lambda$, hence $\Gamma \not\vDash_{{\sf J}_4} \varphi$.
\end{proof}
Recall that, from Theorem \ref{final-simple} and Remark \ref{rem-simple}, the Hilbert calculus $\overline{\sf H}_4$ obtained from ${\sf H}_4$ by adding the explosion rule
$$(exp_1) \ \displaystyle \frac{\varphi \land \neg\varphi}{\bot} $$
(see Definition~\ref{Hbar}) is the axiomatization of the (only) proper extension of ${\sf H}_4$ which is strongly maximal w.r.t. {\sf CPL}, and that it is semantically characterized by the matrix logic
$$\bar{J}_4 =\langle {\bf A}_4 \times {\bf A}_{2}, F_{1/3}\times\{1\} \rangle, $$
where ${\bf A}_2$ is the Boolean algebra over $\{0,1\}$ in the signature $\Sigma$, in which the operator $\ast$ is defined as ${\ast}x=x$.
\section{Conclusions} \label{concl}
In this paper we have been concerned with the study of maximality and strong maximality conditions among finite-valued {\L}ukasiewicz logics ${\sf L}^i_n$ with order filters as designated values. In particular, we have characterized the conditions under which a logic ${\sf L}^i_n$ is maximal w.r.t.\ {\sf CPL} and its unique extension $\bar{{\sf L}}^i_n$ by an inference rule is strongly maximal w.r.t.\ classical logic. This allows us to show that, although they are not strongly maximal w.r.t.\ {\sf CPL}, the logics ${\sf L}^i_n$ with $n$ prime and $i/n \leq 1/2$ are in fact ideal paraconsistent logics.
{Thus, they provide interesting and well-motivated examples of ideal paraconsistent logical systems which are $(n+1)$-valued, in contrast with the $(n+2)$-valued logics $\mathcal{M}_{n+2}$ presented in~\cite{ArieliAZ11a} and reproduced here in Example~\ref{EjIdeal}, whose definition is somewhat {\em ad hoc}.}
As for future work, there are several interesting problems that we leave open in this paper.
{
Concerning maximality, a natural question is how to obtain a stronger version of Theorem~\ref{maxthm} which gives us sufficient conditions to guarantee that a given matrix logic $L_1$ is {\em strongly maximal} w.r.t. another matrix logic $L_2$.}
On the other hand, notice that the study of strong maximality developed in Section~\ref{Joan} was heavily based on
results on the algebraic semantics associated to these systems by means of Blok and Pigozzi's techniques. Thus, another interesting issue to be explored in future work is to obtain more examples of strong maximality for different families of algebraizable logics.
Another question raised here is the axiomatization of the ideal paraconsistent logics ${\sf J}_{q+1}$ for $q > 3$ in a ``natural'' signature containing a deductive implication. As was shown in Subsection~\ref{sectJ4}, the signature $\Sigma = \{ \lor, \neg, *\}$ is suitable for the case $q = 3$. Moreover, besides being apt for axiomatizing ${\sf J}_4 = {\sf L}^1_3$, it can be proved that the (non-paraconsistent) logic ${\sf L}^2_3$ can also be axiomatized over $\Sigma$ in a relatively simple way. Note that $\alpha \Rightarrow \beta = \neg\alpha \vee \beta$ defines a deductive implication in ${\sf L}^2_3$.
The fact that {\L}ukasiewicz implication is definable in $\Sigma$ justifies the convenience of using that signature for dealing with the case $q=3$. However, this property does not hold for every prime $q> 3$. Indeed, there are primes $q$ for which {\L}ukasiewicz implication of \L$_{q+1}$ cannot be defined over $\Sigma$, e.g. $q = 17$. The study of the fragments of ${\sf L}^i_q$ in the signature $\Sigma$ is thus a different but closely related problem, which deserves future research.
\subsection*{Acknowledgements} The authors acknowledge partial support by the H2020 MSCA-RISE-2015 project SYSMICS. Coniglio was also financially supported by an individual research grant from CNPq, Brazil (308524/2014-4). Esteva and Godo also acknowledge partial support by the Spanish MINECO/FEDER project RASO (TIN2015- 71799-C2-1-P). Gispert also acknowledges partial support by the Spanish MINECO/FEDER projects (MTM2016-74892 and MDM-2014-044) and grant 2017-SGR-95 of Generalitat de Catalunya.
\bibliographystyle{plain}
\section{Introduction}\label{s:introduction}
The dynamics of an ideal fluid/plasma is constrained by infinitely many constants of motion.
Among them, the helicity dictates the invariance of the topology of vortex lines
(in the present work, we assume a barotropic relation between the entropy and the temperature; then, the helicity is conserved in every co-moving fluid element confining the vortex lines).
According to Noether's earlier work~\cite{Noether}, a conservation law comes from a symmetry property.
Here, a symmetry denotes an invariance of action, or Lagrangian, for infinitesimal transformations (reparametrizations) of independent or/and dependent variables.
The primary example is the time reparametrization invariance eliciting the conservation of energy in classical mechanics.
In fluid/plasma theory, the symmetry leading the conservation of helicity is known as the ``relabeling symmetry'' pertinent to the Lagrangian labels of fluid elements (independent variables on Lagrangian coordinates)~\cite{Calkin,Salmon2,Yahalom,Padhye-Morrison1,Padhye-Morrison2,Fukumoto}.
Unlike the example of the time reparametrization, the relabeling transformation is infinite dimensional, a situation addressed by Noether's second theorem;
see \cite{Olver} for extensive mathematical discussions around Noether's theorem and advanced topics, and~\cite{Newcomb,Bretherton,Ripa,Salmon1} for related applications of the relabeling symmetry.
The aim of this work is to examine this classical relation in the relativistic framework.
The reason why this exercise interests us is that the conventional helicity is no longer a constant of motion in a relativistic fluid.
The space-time distortion (inhomogeneous Lorentz contraction due to non-constant velocity of the fluid) yields a ``relativistic baroclinic effect'' on a thermodynamically barotropic fluid, allowing a change in the circulation (or, equivalently, the vorticity)~\cite{MahajanYoshida2010}.
Yet, we can formulate a ``relativistic helicity'', by which we can delineate the topological constraint on the vortex lines in the relativistic space-time~\cite{YKY-JMP2014}.
As naturally expected, and as to be shown in Sec. \ref{s:R-Helicity conservation via Noether}, the conservation of the relativistic helicity is derived by the relabeling symmetry.
The key is to establish the `proper' correspondence between the Lagrangian frame and the Eulerian frame; the Noether current (being formulated by a Lagrangian formalism of the action) resides on the former, while the helicity is evaluated on the latter.
Interestingly, relativity reveals this fundamental relation while raising a caution about the treatment of the proper time in the Eulerian frame:
the covector Eulerianized from the Noether current is not divergence-free, whereas it is divergence-free in the non-relativistic regime (so, it is often called a ``Noether current'' in the Eulerian frame).
To deal with a charged fluid (plasma), we consider the canonical momentum that is the combination of the fluid momentum and the electromagnetic (EM) potential.
The (relativistic) helicity is defined for the canonical vorticity that is the curl (exterior derivative) of the canonical momentum.
Formulating the action principle on the Lagrangian frame, we can similarly relate the helicity to the Noether current (sec. \ref{ss:Noether PL}).
However, the complete action principle, encompassing the kinetic part producing the equation of motion and the EM part producing Maxwell's equation, becomes a mixture of the Lagrangian kinetic term and the Eulerian EM term (we then need a protocol for evaluating the variation of the EM potential in the Lagrangian sense for deriving the equation of motion, and in the Eulerian sense for deriving Maxwell's equation).
A solution to remove this inconvenience is to formulate the kinetic action in the Eulerian frame, but then, we miss the notion of the Lagrangian labels of fluid elements~\cite{Yoshida_Mahajan_PPCF2012}.
The action of the magnetohydrodynamics (MHD), however, can be formulated fully on the Lagrangian frame (Sec. \ref{ss:MHD Lagrange description}).
This is because, in the MHD model, the EM field (in fact, only the magnetic field is included as a state variable) is assumed to be co-moving with the fluid, and thus, Maxwell's equation is not needed.
The formulation of relativistic MHD action in Lagrangian coordinates is another new product of this work
(see~\cite{Dixon,Anile} for formulation of basic equations, \cite{Chiueh} for an Eulerian action principle, and \cite{Koide,Komissarov,Harikae} for applications in astrophysical computations).
This paper is organized as follows.
In Sec. \ref{s:fundamentals}, we start with preliminaries of relativistic fluid, plasma and MHD.
In Sec. \ref{s:R-Helicity}, we review the relativistic canonical helicity formulated in \cite{YKY-JMP2014},
and in Sec. \ref{s:Noether}, Noether's theorem applied to classical fields.
In Sec. \ref{s:Lagrange description}, the Lagrangian description of a relativistic fluid is given by following Salmon's formulation~\cite{Salmon3}, and we formulate the actions of fluid, plasma, and MHD.
In Sec. \ref{s:R-Helicity conservation via Noether}, we derive the generalized relativistic helicity by Noether's theorem.
We also show the conservation of the relativistic cross helicity in MHD.
\begin{remark}[helicity and circulation]
\label{remark:circulation}
The invariance of the helicity implies various topological constraints on the field.
Let $\bm{a}$ be a three-dimensional field and $\bm{b}=\nabla\times\bm{a}$ (which is called the vorticity of $\bm{a}$).
For a fixed three-dimensional domain $\Omega$, which confines $\bm{b}$ (i.e., $\bm{n}\cdot\bm{b}=0$ on the
boundary $\partial\Omega$; $\bm{n}$ is the unit normal vector onto $\partial\Omega$),
the \emph{total helicity} is the volume integral $C=\int_\Omega \bm{a}\cdot\bm{b}\,\rmd^3x$,
which sums up the links, twists, and writhes of all vortex lines confined in $\Omega$~\cite{Moffatt-Ricca-1992}.
When $\bm{b}$ is ``filamentary'', $C$ evaluates the topological index of the filaments;
as the simplest setting, we may consider a pair of loops bounding disks, and define unit-vorticity filaments that are formally the delta-measures on the loops
(see \cite{YKY-JMP2014} for a mathematical formulation in the context of Banach algebra),
and then, $C$ evaluates the Gauss linking number of the loops.
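As a concrete numerical illustration (a sketch with an assumed pair of loops, not part of the formalism of \cite{YKY-JMP2014}), the Gauss linking number of two unit circles forming a Hopf link can be evaluated by discretizing the double line integral
$\frac{1}{4\pi}\oint\oint (\rmd\bm{r}_1\times \rmd\bm{r}_2)\cdot(\bm{r}_1-\bm{r}_2)/|\bm{r}_1-\bm{r}_2|^3$:
\begin{verbatim}
import numpy as np

M = 400
t = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
# loop 1: unit circle in the xy-plane; loop 2: unit circle in the
# xz-plane centered at (1, 0, 0) -- together they form a Hopf link
r1 = np.stack([np.cos(t), np.sin(t), np.zeros(M)], axis=1)
r2 = np.stack([1.0 + np.cos(t), np.zeros(M), np.sin(t)], axis=1)
dr1 = np.roll(r1, -1, axis=0) - r1          # segment vectors
dr2 = np.roll(r2, -1, axis=0) - r2

d = r1[:, None, :] - r2[None, :, :]         # r1_i - r2_j
cross = np.cross(dr1[:, None, :], dr2[None, :, :])
num = np.einsum('ijk,ijk->ij', cross, d)
print(np.sum(num / np.linalg.norm(d, axis=2) ** 3) / (4 * np.pi))
\end{verbatim}
which returns $\pm 1$ (the sign depending on the orientations) up to discretization error.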
A \emph{local helicity} can be defined by introducing a ``co-moving'' volume $W(t)$
that moves with a velocity $\bm{V}$, and integrating
$C_W=\int_{W(t)} \bm{a}\cdot\bm{b}\,\rmd^3x$.
Here $W(t)$ is co-moving with $\bm{b}$ in the sense that $\bm{b}$ is transported by the
same velocity $\bm{V}$, i.e., $\bm{b}$ (2-form) satisfies $\partial_t\bm{b} -\nabla\times(\bm{V}\times\bm{b}) =0$.
If $\bm{n}\cdot\bm{b}=0$ on $\partial W(t)$, $C_W$ is a constant of motion.
The relativistic helicity\,\cite{YKY-JMP2014} is defined by generalizing these relations for the
four-dimensional Minkowski space-time; the vorticity is, then, the exterior derivative $\rmd a$ of
a 1-form $a$, and the integrand $\bm{a}\cdot\bm{b}$ is identified to be the 3-form $a\wedge \rmd a$,
which is integrated over a co-moving 3-chain $W(t)$ embedded in the four-dimensional space-time
(which is no loner purely spatial); see Sec.\,\ref{s:R-Helicity} for a short review.
Introducing an arbitrary co-moving exact 2-form $\bm{c}$ (which may not be a physical quantity, and then we call it a \emph{mock field}, cf.\,\cite{YM_FDR2014}),
we can define a constant of motion $C_c=\int_{W(t)} \bm{a}\cdot\bm{c}\,\rmd^3x$,
which is called a \emph{cross helicity}\,\cite{Fukumoto}.
When $\bm{c}$ is a filament on a co-moving loop $L(t)$, $C_c$ evaluates the \emph{circulation}
$\oint_{L(t)}\bm{a}\cdot\bm{\tau}\,\rmd x$, where $\bm{\tau}$ is the unit tangential vector on $L(t)$.
In the context of Hamiltonian mechanics (Eulerian representation),
a helicity is a Casimir invariant pertinent to the degeneracy (or noncanonicality) of the Poisson bracket\,\cite{Morrison_RMP1998}.
However, a general topological invariant is not necessarily a Casimir invariant;
the circulation is such an example.
We may yet define a \emph{cross helicity} by associating the invariant with a co-moving \emph{mock field},
which is a Casimir invariant in the extended phase space,
i.e., the product space of the original Poisson manifold and the mock field\,\cite{YM_FDR2014}.
The co-moving mock field corresponds to the vector generating the relabeling group\,\cite{Moreau}.
\end{remark}
\section{Preliminaries of relativistic fluid, plasma and MHD}\label{s:fundamentals}
We use the notation of Minkowski space time.
The reference-frame coordinates are denoted by
\begin{eqnarray*}
x^\mu := (ct,x,y,z), \quad x_\mu := (ct,-x,-y,-z),
\end{eqnarray*}
where $c$ is the speed of light.
The Minkowski metric tensor is $g_{\mu\nu} = \mr{diag}(+, -, -, -)$.
The gradients are denoted as $\p_\mu = \p/\p x^\mu$ and $\p^\mu = \p/\p x_\mu$.
The relativistic 4-velocity is defined as
\begin{eqnarray*}
{U}^\mu := \f{dx^\mu}{ds} = \l( \gamma ,\, \gamma\f{\bm{v}}{c} \ri), \quad
{U}_\mu := \f{dx_\mu}{ds} = \l( \gamma ,\, -\gamma\f{\bm{v}}{c} \ri),
\end{eqnarray*}
where $s$ is the proper time, $\bm{v}$ is the reference-frame 3-velocity, and $\gamma = 1/\sqrt{1 - (v/c)^2}$ is the Lorentz factor.
\subsection{Fluid}
We start by introducing the thermodynamic enthalpy $h = mc^2 + {\mathcal E}( n,\, \sigma) + p/n$,
where $m$ is the rest mass of a particle, ${\mathcal E}$ is the internal energy, $n$ is the rest frame particle density, $\sigma$ is the specific entropy and $p$ is pressure.
Then the 4-momentum is defined as $P_\mu := hU_\mu$.
The exterior derivative of the 4-momentum, $M_{\mu\nu} := \p_\mu P_\nu - \p_\nu P_\mu$, is a vorticity field tensor.
The equation of motion is given as
\begin{equation}
U^\mu M_{\mu\nu} = -\p_\nu \theta.
\label{e:Eulerian e.o.m. FL}
\end{equation}
Here we have assumed a barotropic relation $Td\sigma = d\theta$ ($T$ : temperature, $\theta$ : some function of $ n$).
The thermodynamic first law is, then,
\begin{equation}
dh = d\theta + \f{dp}{ n}.
\label{e:thermodynamic relation}
\end{equation}
Due to the barotropicity the internal energy is written as ${\mathcal E} = {\mathcal E}(n)$.
The dual of $M_{\mu\nu}$ is defined as
\begin{equation}
M^{*\mu\nu} := \f{1}{2}\epsilon^{\mu\nu\alpha\beta}M_{\alpha\beta},
\label{e:def M* FL}
\end{equation}
where $\epsilon^{\mu\nu\alpha\beta}$ is the four dimensional Levi-Civita symbol.
We have the identity
\begin{equation}
\p_\mu M^{*\mu\nu} = 0.
\label{e:div M FL}
\end{equation}
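This identity is purely kinematic: it states that the dual of an exact 2-form is divergence-free. A minimal symbolic check (an illustrative sketch with a generic smooth field $P_\mu$; raising the indices of $M_{\mu\nu}$ with the Minkowski metric only changes component signs and does not affect the vanishing):
\begin{verbatim}
import sympy as sp

x = sp.symbols('x0:4')
P = [sp.Function(f'P{m}')(*x) for m in range(4)]
M = [[sp.diff(P[n], x[m]) - sp.diff(P[m], x[n])
      for n in range(4)] for m in range(4)]     # M_{mu nu}
Mstar = [[sum(sp.Rational(1, 2) * sp.LeviCivita(m, n, a, b) * M[a][b]
              for a in range(4) for b in range(4))
          for n in range(4)] for m in range(4)] # dual M*^{mu nu}
for n in range(4):
    div = sum(sp.diff(Mstar[m][n], x[m]) for m in range(4))
    assert sp.simplify(div) == 0                # divergence vanishes
\end{verbatim}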
Differentiating (\ref{e:Eulerian e.o.m. FL}), we obtain the vorticity equation
\begin{equation*}
\p_\lambda(U^\mu M_{\mu\nu}) - \p_\nu(U^\mu M_{\mu\lambda}) = 0.
\end{equation*}
Substituting (\ref{e:def M* FL}), we obtain
\begin{equation}
\p_\lambda(U^\lambda M^{*\mu\nu}) = M^{*\lambda\nu}\p_\lambda U^\mu + M^{*\mu\lambda}\p_\lambda U^\nu.
\label{e:Eulerian vorticity eq.2 FL}
\end{equation}
\subsection{Charged fluid (plasma)}
When the fluid particles are charged, we have to dress the momentum by the electromagnetic potential, and consider the ``canonical momentum'',
\begin{eqnarray*}
{\mathcal P}_\mu := P_\mu + eA_\mu,
\end{eqnarray*}
where $e$ is the charge and $A_\mu$ is the 4-potential.
We write the Faraday tensor as $F_{\mu\nu} = \p_\mu A_\nu - \p_\nu A_\mu$.
The dual of $F_{\mu\nu}$ is defined in the same way as (\ref{e:def M* FL}).
The electric and magnetic fields are given as $\bm{E} = -\nbl A^0 - c^{-1}\p_t\bm{A}$ and $\bm{B} = \Curl \bm{A}$.
We have the identity
\begin{equation}
\p_\mu F^{*\mu\nu} = 0, \quad \mbox{or equivalently} \quad \p_\mu F_{\nu\lambda} + \p_\nu F_{\lambda\mu} + \p_\lambda F_{\mu\nu} = 0.
\label{e:div F*EM}
\end{equation}
Here we consider a single species plasma, but if the plasma consists of multiple fluids, we have to sum the 4-currents of all species.
We have
\begin{eqnarray}
U^\mu{\mathcal P}_\mu = h + e\varrho \quad (\varrho := U^\mu A_\mu).
\label{e:UP}
\end{eqnarray}
The vorticity field tensor is extended as a \emph{canonical} vorticity field tensor ${\mathcal M}_{\mu\nu} := \p_\mu {\mathcal P}_\nu - \p_\nu {\mathcal P}_\mu$.
The equation of motion is given as
\begin{equation}
U^\mu {\mathcal M}_{\mu\nu} = -\p_\nu \theta.
\label{e:Eulerian e.o.m. PL}
\end{equation}
Equations (\ref{e:div M FL}) and (\ref{e:Eulerian vorticity eq.2 FL}) are modified by replacing $M$ by ${\mathcal M}$:
\begin{equation}
\p_\mu {\mathcal M}^{*\mu\nu} = 0,
\label{e:div M PL}
\end{equation}
and
\begin{equation}
\p_\lambda(U^\lambda {\mathcal M}^{*\mu\nu}) = {\mathcal M}^{*\lambda\nu}\p_\lambda U^\mu + {\mathcal M}^{*\mu\lambda}\p_\lambda U^\nu.
\label{e:Eulerian vorticity eq.2 PL}
\end{equation}
\subsection{MHD}
Next we review the MHD equation~\cite{Dixon,Anile}.
We define
\begin{equation*}
\tilde{E}_\nu := U^\mu F_{\mu\nu} = \l(\gamma \Vec{E}\cdot\l( \f{\Vec{v}}{c} \ri),\; -\gamma\l( \Vec{E} + \f{\Vec{v}}{c}\times\Vec{B} \ri)\ri),
\end{equation*}
\begin{equation*}
\tilde{B}^\nu := U_\mu F^{*\mu\nu} = \l(-\gamma \Vec{B}\cdot\l( \f{\Vec{v}}{c} \ri),\; -\gamma\l( \Vec{B} - \f{\Vec{v}}{c}\times\Vec{E} \ri)\ri).
\end{equation*}
The 3-vector parts of $\tilde{E}_\nu$ and $\tilde{B}^\nu$ correspond to the Lorentz transformations of $\bm{E}$ and $\bm{B}$, respectively.
In MHD, we assume
\begin{equation}
\Vec{E} + (\Vec{v}/c)\times\Vec{B} = 0,
\label{e:Ohm's law}
\end{equation}
which is equivalent to $\tilde{E}_\nu = 0$.
We eliminate $\bm{E}$ from $F^{*\mu\nu}$ by using (\ref{e:Ohm's law}) and define
\begin{equation*}
{\mathcal F}^{*\mu\nu} := F^{*\mu\nu}|_{\bm{E} = \bm{B}\times\bm{v}/c},
\end{equation*}
which we call the ``MHD tensor''.
As (\ref{e:div F*EM}), we have
\begin{equation}
\p_\mu {\mathcal F}^{*\mu\nu} = 0.
\label{e:div F}
\end{equation}
We denote
\begin{equation*}
b^\nu := \tilde{B}^\nu|_{\bm{E} = \bm{B}\times\bm{v}/c} = U_\mu {\mathcal F}^{*\mu\nu} = \l(-\f{\gamma}{c}\Vec{v}\cdot\Vec{B},\; -\f{1}{\gamma}\Vec{B} - \f{\gamma}{c^2}(\Vec{v}\cdot\Vec{B})\Vec{v}\ri).
\end{equation*}
The norm of $b^\nu$ is
\begin{equation*}
b^2 = b^\nu b_\nu = -\f{|\bm{B}|^2}{\gamma^2} - \f{(\bm{v}\cdot\bm{B})^2}{c^2}.
\end{equation*}
In the non-relativistic limit ($|\bm{v}/c| \ll 1$), $b^\nu$ becomes $(0,\, -\bm{B})$.
We remark that $b^\nu$ is \emph{orthogonal} to $U_\nu$:
\begin{eqnarray*}
U_\nu b^\nu = 0,
\end{eqnarray*}
and the 4-dimensional divergence of $b^\nu$ is not zero:
\begin{equation}
\p_\nu b^\nu = {\mathcal F}^{*\mu\nu}\p_\nu U_\mu = -b^\mu U^\nu \p_\nu U_\mu.
\label{e:div b}
\end{equation}
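Both algebraic statements above (the expression for $b^2$ and the orthogonality $U_\nu b^\nu = 0$) can be verified directly from the explicit components; a minimal sympy sketch:
\begin{verbatim}
import sympy as sp

c = sp.symbols('c', positive=True)
v = sp.Matrix(sp.symbols('v1 v2 v3', real=True))
B = sp.Matrix(sp.symbols('B1 B2 B3', real=True))
vB = v.dot(B)
gamma = 1 / sp.sqrt(1 - v.dot(v) / c**2)

b0 = -gamma * vB / c                       # time component of b^nu
bv = -B / gamma - gamma * vB * v / c**2    # space components of b^nu

orth = gamma * b0 - (gamma * v / c).dot(bv)     # U_nu b^nu
norm = b0**2 - bv.dot(bv) \
       + B.dot(B) / gamma**2 + vB**2 / c**2     # b^2 minus the claim
print(sp.simplify(orth), sp.simplify(norm))     # -> 0 0
\end{verbatim}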
We may retrieve ${\mathcal F}^{*\mu\nu}$ as
\begin{eqnarray*}
{\mathcal F}^{*\mu\nu} = U^\mu b^\nu - U^\nu b^\mu.
\end{eqnarray*}
In the place of the vorticity equation (\ref{e:Eulerian vorticity eq.2 FL}), we have
\begin{equation*}
\p_\lambda(U^\lambda {\mathcal F}^{*\mu\nu}) = {\mathcal F}^{*\lambda\nu}\p_\lambda U^\mu + {\mathcal F}^{*\mu\lambda}\p_\lambda U^\nu.
\end{equation*}
The energy-momentum tensor of MHD is given as
\begin{equation*}
T^{\mathrm{MHD}}_{\mu\nu} = n h U_\mu U_\nu - pg_{\mu\nu} - b^2U_\mu U_\nu + \f{b^2}{2}g_{\mu\nu} - b_\mu b_\nu.
\end{equation*}
The equation of motion, $\p^\mu T^{\mathrm{MHD}}_{\mu\nu} = 0$, reads
\begin{equation}
U^\mu M_{\mu\nu} = -\p_\nu \theta + \f{1}{ n}\l[ \p^\mu(b^2U_\mu U_\nu + b_\mu b_\nu) - \p_\nu\l( \f{b^2}{2} \ri) \ri].
\label{e:Eulerian MHD e.o.m.}
\end{equation}
The last term on the right-hand side is the Lorentz force.
\subsection{Special Relativistic Helicity}\label{s:R-Helicity}
The 4-velocity $U$ generates a one-parameter diffeomorphism group $T(s)$, where $s$ is the proper time.
Let $V_0$ be a spatial volume. We denote $V(s) = T(s)V_0$.
The relativistic helicity~\cite{YKY-JMP2014} is
\begin{eqnarray*}
\int_{V(s)} {\mathcal P} \wedge {\mathcal M} &=& \int_{V(s)} \f{1}{2}{\mathcal P}_\nu {\mathcal M}_{\alpha\beta}dx^\nu \wedge dx^\alpha \wedge dx^\beta \nonumber \\
&=& \int_{V(s)} {\mathcal P}_\nu {\mathcal M}^{*\mu\nu}dV_\mu,
\end{eqnarray*}
where $dV_\mu$ is the Hodge dual of $-dx_\mu$, which is shown to be invariant if the barotropic equation of motion~(\ref{e:Eulerian e.o.m. PL}) holds, i.e.
\begin{equation}
\dd{}{s}\int_{V(s)}{\mathcal P} \wedge {\mathcal M} = \int_{V(s)}L_U({\mathcal P}\wedge {\mathcal M}) = 0,
\label{e:canonical helicity conservation}
\end{equation}
where $L_U$ denotes a Lie derivative along $U$.
Putting $e = 0$ in ${\mathcal P}$ yields the conservation of the helicity of a neutral fluid ($\int_{V(s)}P \wedge M$), and putting $m = 0$ yields the conservation of the magnetic helicity ($\int_{V(s)}A \wedge {\mathcal F}$) in MHD;
(\ref{e:Ohm's law}) is the massless limit of~(\ref{e:Eulerian e.o.m. PL}) with $d\theta = 0$.
We note that ${\mathcal M}$ in (\ref{e:canonical helicity conservation}) may be replaced by an arbitrary exact 2-form ${\mathcal W}$ that obeys
\begin{eqnarray*}
L_U{\mathcal W} = di_U{\mathcal W} = 0,
\end{eqnarray*}
where $i_U$ is the interior product with $U$; then, $\int_{V(s)}{\mathcal P} \wedge {\mathcal W}$ is a cross helicity.
The invariance of this cross helicity is a straightforward generalization of Theorem 1 of~\cite{YKY-JMP2014}.
In the case of MHD, however, the relevant cross helicity is the special one $\int_{V(s)}P \wedge {\mathcal F}$, which is first derived in this work.
\section{Noether's theorem}\label{s:Noether}
In this section we briefly review Noether's theorem for the Lagrangian mechanics of a general field.
We consider a first-order action such that
\begin{equation*}
S = \int_D {\mathcal L}(q^\nu(a),\: \tilde{\p}_\mu q^\nu(a),\: a^\mu)d^4 a,
\end{equation*}
where ${\mathcal L}$ is the Lagrangian density, $a^\mu$ is the Lagrangian coordinates, $q^\nu(a)$ is the field variable, $\tilde{\p}_\mu = \p/\p a^\mu$ (the tilde is used to distinguish from the derivative with respect to $x^\mu$ in the previous sections) and $D$ is the domain.
The equation of motion (Euler-Lagrange equation) is assumed to be satisfied:
\begin{equation}
\pp{\mathcal{L}}{q^\nu} - \tilde{\p}_\mu\l( \pp{\mathcal{L}}{(\tilde{\p}_\mu q^\nu)} \ri) = 0.
\label{e:e.o.m}
\end{equation}
Then we consider a transformation
\begin{eqnarray*}
a^\mu &\mapsto& a'^\mu = a^\mu + \delta a^\mu.
\end{eqnarray*}
The field variable is altered as
\begin{eqnarray*}
q^\nu(a) &\mapsto& q'^\nu(a') = q^\nu(a) + \delta q^\nu(a) + \delta a^\lambda\tilde{\p}_\lambda q^\nu(a) =: q^\nu(a) + \Delta q^\nu(a).
\end{eqnarray*}
The change $\Delta q^\nu(a)$ is sometimes called ``total variation''; the first term is the change of $q$ on fixed $a$, and the second term is caused by the variation of $a$.
The change of the action under these transformations is calculated as,
\begin{eqnarray}
\delta S = \int_{D'} \mathcal{L}(q'^\nu(a'),\: \tilde{\p}'_\mu q'^\nu(a'),\: a'^\mu) d^4a' - \int_D \mathcal{L}(q^\nu(a),\: \tilde{\p}_\mu q^\nu(a),\: a^\mu) d^4a \nonumber\\
\nonumber\\
= \int_D\l\{ \delta q^\nu\l[ \pp{\mathcal{L}}{q^\nu} - \tilde{\p}_\mu\l( \pp{\mathcal{L}}{(\tilde{\p}_\mu q^\nu)} \ri) \ri] + \tilde{\p}_\mu\l[ \mathcal{L}\delta a^\mu + \delta q^\nu\pp{\mathcal{L}}{(\tilde{\p}_\mu q^\nu)} \ri] \ri\}d^4a. \nonumber\\
\label{e:delta S}
\end{eqnarray}
where we used $\Delta\tilde{\p}_\mu q^\nu = \tilde{\p}_\mu\Delta q^\nu - (\tilde{\p}_\mu\delta a^\lambda)\tilde{\p}_\lambda q^\nu$.
The first term in the integrand disappears because of the equation of motion (\ref{e:e.o.m}).
When the integrand of the right-hand side can be written as $\tilde{\p}_\mu \delta \Lambda^\mu$, where $\delta \Lambda^\mu$ is some function, the transformation $\delta a$ is called an invariant transformation.
For such transformations, the action invariance $\delta S = 0$ induces a four-dimensional divergence-free current ($\tilde{\p}_\mu I^\mu = 0$), the so-called Noether current:
\begin{equation*}
I^\mu := \mathcal{L}\delta a^\mu + \delta q^\nu\pp{\mathcal{L}}{(\tilde{\p}_\mu q^\nu)} - \delta \Lambda^\mu.
\end{equation*}
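As a standard illustration (not used in what follows), if $\mathcal{L}$ has no explicit dependence on $a^\mu$, a rigid translation $\delta a^\mu = \epsilon^\mu = \mathrm{const.}$ with $\Delta q^\nu = 0$, i.e. $\delta q^\nu = -\epsilon^\lambda\tilde{\p}_\lambda q^\nu$, is an invariant transformation with $\delta \Lambda^\mu = 0$, and the Noether current reduces to
\begin{equation*}
I^\mu = \mathcal{L}\epsilon^\mu - \epsilon^\lambda\tilde{\p}_\lambda q^\nu \pp{\mathcal{L}}{(\tilde{\p}_\mu q^\nu)} = -\epsilon^\lambda\l[ \pp{\mathcal{L}}{(\tilde{\p}_\mu q^\nu)}\tilde{\p}_\lambda q^\nu - \delta^\mu_{\;\lambda}\mathcal{L} \ri],
\end{equation*}
i.e., minus the canonical energy-momentum tensor contracted with the displacement.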
\section{Lagrange description of fluid, plasma and MHD}\label{s:Lagrange description}
\subsection{Fluid}
We show the Lagrangian description of a relativistic fluid, which was first proposed by Salmon~\cite{Salmon3}, and then extend it to plasma and MHD.
The Lagrangian description of relativistic plasma and MHD is proposed here for the first time.
Let $q^\mu(a) = (t, \bm{x})$ be Eulerian coordinates and $a^\mu = (s, \bm{a})$ be Lagrangian coordinates, where $t$ and $s$ represent the reference time and the proper time, respectively.
The relativistic 4-velocity is written as
\begin{equation*}
U^\mu(x) := \l.\dd{q^\mu}{s}\ri|_{a = q^{-1}(x)} = \dot{q}^\mu|_{a = q^{-1}(x)}.
\end{equation*}
We define the Jacobian between the Lagrangian and the Eulerian coordinates $J := \p(x)/\p(a)$ and $R := \sqrt{\dot{q}^\mu\dot{q}_\mu}$.
Then the proper number density is given as
\begin{equation*}
n(t,\,\bm{x}) = \f{R n_0(\bm{a})}{J},
\end{equation*}
where $ n_0$ is the initial rest number density.
We define $C_\mu^{\;\;\nu}$ as the cofactor of the matrix element $\p q^\mu/\p a^\nu$.
The identities of $J$ and $C_\mu^{\;\;\nu}$ which will be used repeatedly in the following calculations are provided in the appendix.
Now, the action is written as
\begin{eqnarray}
S = \int \Bigl\{- n[mc^2 + {\mathcal E}( n)]\Bigr\} d^4x &=& \int \l\{-R n_0\l[mc^2 + {\mathcal E}\l(\f{R n_0}{J}\ri)\ri]\ri\} d^4a \nonumber\\
&=:& \int {\mathcal L}_\mathrm{FL}d^4a,
\label{e:FL Lagrangian}
\end{eqnarray}
where ${\mathcal E}( n)$ is the barotropic internal energy.
Although it is obvious that $R = 1$, this evaluation is done after taking the variation of the action.
Let us consider the variation $\delta q^\nu$.
We find
\begin{equation*}
\delta S = \int \l[ - n_0h\dot{q}_\nu\delta\dot{q}^\nu + pC_\nu^{\;\;\mu}\pp{(\delta q^\nu)}{a^\mu} \ri]d^4a.
\end{equation*}
where $p = n^2\p {\mathcal E}/\p n$ is used.
The invariance of the action leads the equation of motion as
\begin{equation}
n_0\pp{}{s}(h\dot{q}_\nu) - C_\nu^{\;\;\mu}\pp{p}{a^\mu} = 0.
\label{e:Lagrangian e.o.m.}
\end{equation}
Converting to Eulerian coordinates, we obtain
\begin{equation*}
U^\mu\p_\mu P_\nu - \f{\p_\nu p}{ n} = 0,
\end{equation*}
where we used the formula (\ref{e:A d/da = J d/dx}). This is equivalent to (\ref{e:Eulerian e.o.m. FL}).
\subsection{Plasma}
We modify the action (\ref{e:FL Lagrangian}) so as to include the interaction with EM field
\begin{eqnarray}
S &= & \int \l\{-R n_0\l[mc^2 + {\mathcal E}\l(\f{R n_0}{J}\ri)\ri] - n_0e A_\nu(q)\dot{q}^\nu\ri\} d^4a \nonumber \\
&=:& \int {\mathcal L}_\mathrm{PL}d^4a.
\label{e:PL Lagrangian}
\end{eqnarray}
The variation $\delta q$ induces the change of the action as
\begin{eqnarray}
\delta S = \nonumber\\
\int \l[ - n_0(h\dot{q}_\nu + eA_\nu)\delta\dot{q}^\nu + pC_\nu^{\;\;\mu}\pp{(\delta q^\nu)}{a^\mu} - n_0 e \dot{q}^\mu\pp{A_\mu}{q^\nu}\delta q^\nu \ri]d^4a.
\label{e:variation of PL action}
\end{eqnarray}
The equation of motion is obtained from $\delta S = 0$; converting it to Eulerian coordinates, we get
\begin{equation*}
U^\mu\p_\mu {\mathcal P}_\nu - \f{\p_\nu p}{ n} - eU^\mu\p_\nu A_\mu = 0.
\end{equation*}
This is equivalent to (\ref{e:Eulerian e.o.m. PL}).
To obtain Maxwell's equations, we need to add the EM Lagrangian, then the action is
\begin{equation*}
S = \int \l[- n(mc^2 + {\mathcal E}) - neA_\mu U^\mu - \f{1}{4}F^{\mu\nu}F_{\mu\nu} \ri] d^4x.
\end{equation*}
We take the variation with respect to $A_\nu$.
Careful treatment is required for this action.
As we showed above, we must not include the EM Lagrangian in the derivation of the equation of motion.
On the other hand, for the derivation of Maxwell's equation, the variation needs to be carried out in the Eulerian coordinates.
This ad hoc treatment is known from particle mechanics~\cite{Landau}.
\subsection{MHD}\label{ss:MHD Lagrange description}
Next we derive the Lagrangian description of MHD.
Before looking into the action, we investigate the Lagrangian description of the magnetic-field-like variable $b^\mu$.
Let us start by manipulating the MHD tensor as
\begin{eqnarray*}
{\mathcal F}^{*\mu\nu} &=& \l. \epsilon^{\mu\nu\alpha\beta}\pp{ A_\beta}{x^\alpha} \ri|_{\bm{E} = \bm{B}\times\bm{v}/c} = \l. \f{\epsilon^{\lambda\kappa\gamma\delta}}{J}\pp{q^\mu}{a^\lambda}\pp{q^\nu}{a^\kappa}\pp{ A_\beta}{a^\gamma}\pp{q^\beta}{a^\delta} \ri|_{\bm{E} = \bm{B}\times\bm{v}/c} \\
\\
\\
&=& \l[ \f{\epsilon^{0ijk}}{J}\l( \dot{q}^\mu\pp{q^\nu}{a^i} - \dot{q}^\nu\pp{q^\mu}{a^i} \ri)\pp{ A_\lambda}{a^j}\pp{q^\lambda}{a^k} \ri. \\
& &\hspace{5em} \l.+ \f{\epsilon^{0ijk}}{J}\pp{q^\mu}{a^i}\pp{q^\nu}{a^j}\l( \dot{ A}_\lambda\pp{q^\lambda}{a^k} - \dot{q}^\lambda\pp{ A_\lambda}{a^k} \ri) \ri]_{\bm{E} = \bm{B}\times\bm{v}/c},
\end{eqnarray*}
where we used the formula (\ref{e:J epsilon}) in the second equality and the Latin indices represent $1,\,2,\,3$.
The second term on the right-hand side equals $U^\mu F_{\mu\lambda}(\p q^\lambda/\p a^k)|_{\bm{E} = \bm{B}\times\bm{v}/c} = \tilde{E}_\lambda(\p q^\lambda/\p a^k)|_{\bm{E} = \bm{B}\times\bm{v}/c} = 0$.
Then we define the vector $b_0^i$ on Lagrangian coordinates as
\begin{equation*}
b_0^i := \epsilon^{0ijk}\pp{ A_\lambda}{a^j}\pp{q^\lambda}{a^k}.
\end{equation*}
Defining ${\mathscr A}_i := A_\mu\p q^\mu/\p a^i$ and $\nbl_a$ as the gradient in $a$-space, $b_0^i$ is expressed as
\begin{eqnarray*}
b_0^i = (\nbl_a \times \bm{{\mathscr A}})^i.
\end{eqnarray*}
We can show that $b_0^i$ satisfies
\begin{equation}
\pp{ b_0^i}{a^i} = 0 \;\;\;\;\mathrm{and}\;\;\;\; \dot{b_0^i} = 0.
\label{e:div b0 and dot b0}
\end{equation}
Now ${\mathcal F}^{*\mu\nu}$ is rewritten as
\begin{equation*}
{\mathcal F}^{*\mu\nu} = \f{Rb_0^i}{J}\l( \dot{q}^\mu\pp{q^\nu}{a^i} - \dot{q}^\nu\pp{q^\mu}{a^i} \ri).
\end{equation*}
Therefore $b^\nu$ is rewritten using $ b_0^i$ as
\begin{equation*}
b^\nu = U_\mu {\mathcal F}^{*\mu\nu} = \f{Rb_0^i}{J}\l( \pp{q^\nu}{a^i} - \dot{q}^\nu\dot{q}_\mu\pp{q^\mu}{a^i} \ri).
\end{equation*}
This tells us that $b^\nu$ is obtained from $b_0^i$ by multiplying by $(R/J)(\p q_\mu/\p a^i)$ (Eulerianization) and by $g^{\mu\nu} - U^\mu U^\nu$ (the projector orthogonal to $U^\mu$).
Now we write the action of MHD by adding $b^2/2$ to (\ref{e:FL Lagrangian}) as
\begin{eqnarray}
S &=& \int \l\{- n[mc^2 + {\mathcal E}( n)] + \f{b^2}{2} \ri\} d^4x \nonumber\\
\nonumber \\
&=& \int \l\{-R n_0\l[mc^2 + {\mathcal E}\l(\f{R n_0}{J}\ri)\ri] + \ri.\nonumber\\
& & \hspace{4em}\l.\f{R^2}{2J} b_0^i b_0^j\l( \pp{q^\lambda}{a^i} - \dot{q}^\lambda\dot{q}_\mu\pp{q^\mu}{a^i} \ri)\l( \pp{q_\lambda}{a^j} - \dot{q}_\lambda\dot{q}^\zeta\pp{q_\zeta}{a^j} \ri)\ri\} d^4a \nonumber\\
\nonumber \\
&=:& \int {\mathcal L}_\mathrm{MHD} d^4a.
\label{e:MHD Lagrangian}
\end{eqnarray}
Here, $\dot{q}^\lambda\dot{q}_\lambda$ in the third term will be evaluated as 1 after taking the variation.
The variation of ${\mathcal L}_\mathrm{MHD}$ with respect to $q^\nu$ gives the equation of the motion as
\begin{eqnarray*}
& & \pp{}{s}(h\dot{q}_\nu) - \f{1}{ n_0}\pp{}{s}\l\{ \l[\f{1}{J} b_0^i b_0^j\l( \pp{q^\lambda}{a^i}\pp{q_\lambda}{a^j} - \dot{q}^\lambda\dot{q}_\mu\pp{q^\mu}{a^i}\pp{q_\lambda}{a^j} \ri) \ri]\dot{q}_\nu \ri\} \nonumber\\
\nonumber\\
&-& \f{1}{ n_0}C_\nu^{\;\;\mu}\pp{p}{a^\mu} + \f{1}{ n_0}C_\nu^{\;\;\mu}\pp{}{a^\mu}\l[\f{1}{2J^2} b_0^i b_0^j\l( \pp{q^\lambda}{a^i}\pp{q_\lambda}{a^j} - \dot{q}^\lambda\dot{q}_\mu\pp{q^\mu}{a^i}\pp{q_\lambda}{a^j} \ri) \ri] \nonumber\\
\nonumber\\
&-& \f{1}{ n_0}\pp{}{a^i}\l[ \f{1}{J} b_0^i b_0^j\l( \pp{q_\nu}{a^j} - \dot{q}^\mu\dot{q}_\nu\pp{q_\mu}{a^j} \ri) \ri] \nonumber\\
\nonumber\\
&+& \f{1}{ n_0}\pp{}{s}\l[ \f{1}{J} b_0^i b_0^j\l( \dot{q}_\mu\pp{q^\mu}{a^i}\pp{q_\nu}{a^j} - \dot{q}_\nu\dot{q}_\mu\dot{q}^\lambda\pp{q^\mu}{a^i}\pp{q_\lambda}{a^j} \ri) \ri] = 0 . \nonumber\\
\end{eqnarray*}
Using the identity (\ref{e:d/da = J d/dx(J dq/da)}), the second term on the left hand side is manipulated as,
\begin{eqnarray*}
- \f{J}{n_0}\pp{}{x^\zeta}\l\{ \l[\f{1}{J^2} b_0^i b_0^j\l( \pp{q^\lambda}{a^i}\pp{q_\lambda}{a^j} - \dot{q}^\lambda\dot{q}_\mu\pp{q^\mu}{a^i}\pp{q_\lambda}{a^j} \ri) \ri]\dot{q}_\nu\dot{q}^\zeta \ri\}\\
\nonumber \\
= - \f{1}{ n}\pp{}{x^\zeta}(b^2U^\zeta U_\nu),
\end{eqnarray*}
and the fifth and sixth terms are manipulated as,
\begin{eqnarray*}
-\f{J}{ n_0}\pp{}{x^\zeta}\l\{
\f{1}{J^2} b_0^i b_0^j\l( \pp{q_\nu}{a^j} - \dot{q}^\mu\dot{q}_\nu\pp{q_\mu}{a^j} \ri)\pp{q_\zeta}{a^i} \ri. \\
\hspace{7em}\l.- \f{1}{J^2} b_0^i b_0^j\l( \dot{q}_\mu \pp{q^\mu}{a^i} \pp{q_\nu}{a^j} - \dot{q}_\nu\dot{q}_\mu\dot{q}^\lambda\pp{q^\mu}{a^i}\pp{q_\lambda}{a^j} \ri)\dot{q}_\zeta
\ri\} \\
\\
= -\f{1}{ n}\p_\lambda(b^\lambda b_\nu).
\end{eqnarray*}
Using (\ref{e:A d/da = J d/dx}), the third and fourth terms become,
\begin{eqnarray*}
- \f{1}{n}\pp{p}{x^\nu} + \f{1}{n}\pp{}{x^\nu}\l(\f{b^2}{2}\ri).
\end{eqnarray*}
Then we obtain the Eulerianized equation (\ref{e:Eulerian MHD e.o.m.}).
\section{Relabeling symmetry and the relativistic helicity}\label{s:R-Helicity conservation via Noether}
\subsection{Relativistic helicity in plasma}\label{ss:Noether PL}
In this section we derive the conservation of the relativistic canonical helicity in plasma from Noether's theorem.
Let us consider a variation of the labels while the field variables are kept fixed,
\begin{eqnarray*}
\Delta q^\nu = 0 \;\;\;\mr{and}\;\;\; \delta a^\nu \ne 0,
\end{eqnarray*}
resulting in
\begin{eqnarray*}
\Delta \l( \pp{q^\nu}{a^\mu} \ri) &=& -\pp{(\delta a^\lambda)}{a^\mu}\pp{q^\nu}{a^\lambda}, \\
\delta q^\nu &=& -\delta a^\lambda\pp{q^\nu}{a^\lambda}.
\end{eqnarray*}
This is called the relabeling transformation.
We investigate $\delta a^\nu$ which does not alter the plasma action (\ref{e:PL Lagrangian}).
From (\ref{e:variation of PL action}), we find
\begin{eqnarray*}
\pp{{\mathcal L}_\mathrm{PL}}{\dot{q}^\nu} = - n_0 (h\dot{q}_\nu + eA_\nu) + pC_\nu^{\;\;0} \;\;\;\mr{and}\;\;\;
\pp{{\mathcal L}_\mathrm{PL}}{(\tilde{\p}_i q^\nu)} = pC_\nu^{\;\;i},
\end{eqnarray*}
where $\tilde{\p}_\mu = \p/\p a^\mu$.
Let us define $\omega^i := n_0 \delta a^i$.
We obtain
\begin{equation*}
\mathcal{L}_\mathrm{PL}\delta a^\mu + \delta q^\nu\pp{\mathcal{L}_\mathrm{PL}}{(\tilde{\p}_\mu q^\nu)} = \l( {\mathcal P}_\nu \omega^i\pp{q^\nu}{a^i},\; -(h + e\varrho)\bm{\omega} \ri).
\end{equation*}
Thereby we manipulate the integrand in the right-hand side of (\ref{e:delta S}) as
\begin{eqnarray}
& & \pp{}{a^\mu}\l( \mathcal{L}_\mathrm{PL}\delta a^\mu + \delta q^\nu\pp{\mathcal{L}_\mathrm{PL}}{(\tilde{\p}_\mu q^\nu)} \ri) \nonumber\\
\nonumber\\
&=& \dot{{\mathcal P}}_\nu \omega^i \pp{q^\nu}{a^i} + {\mathcal P}_\nu \dot{\omega}^i \pp{q^\nu}{a^i} + \pp{}{a^i}\l[ - (h + e\varrho)\omega^i \ri] \nonumber\\
\nonumber\\
&=& \pp{(h + e\varrho - \theta)}{a^i}\omega^i + {\mathcal P}_\nu \dot{\omega}^i \pp{q^\nu}{a^i} + \pp{}{a^i}\l[ - (h + e\varrho)\omega^i \ri],\nonumber\\
\label{e:div(I - delta lambda)}
\end{eqnarray}
where we used (\ref{e:Lagrangian e.o.m.}) and (\ref{e:thermodynamic relation}).
If the label transformation satisfies
\begin{equation}
\pp{\omega^i}{a^i} = 0,
\label{e:PL Noether condition1}
\end{equation}
\begin{equation}
\dot{\omega}^i = 0,
\label{e:PL Noether condition2}
\end{equation}
the right-hand side of (\ref{e:div(I - delta lambda)}) becomes $-\p(\theta\omega^i)/\p a^i$.
Therefore $\delta \Lambda^\mu = (0,\, -\theta\omega^i)$, and a Noether current is given as
\begin{equation}
I^\mu = \l( {\mathcal P}_\nu \omega^i\pp{q^\nu}{a^i},\; -(h + e\varrho - \theta)\bm{\omega} \ri).
\label{e:PL I}
\end{equation}
The conservation of the Noether current ($\p I^\mu/\p a^\mu = 0$) is rewritten as
\begin{equation}
\pp{}{s}(\bm{{\mathscr P}} \cdot \bm{\omega}) + \nbl_a\cdot\l[ -(h + e\varrho - \theta)\bm{\omega} \ri] = 0,
\label{e:general Lagrangian conservation}
\end{equation}
where ${\mathscr P}_i := {\mathcal P}_\mu\p q^\mu/\p a^i$.
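Note that any label field of the form $\bm{\omega} = \nbl_a \times \bm{\xi}(\bm{a})$, with $\bm{\xi}$ independent of $s$, automatically satisfies both (\ref{e:PL Noether condition1}) and (\ref{e:PL Noether condition2}); compare $b_0^i = (\nbl_a \times \bm{{\mathscr A}})^i$ of Sec.~\ref{ss:MHD Lagrange description}, for which (\ref{e:div b0 and dot b0}) holds.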
We define the antisymmetric field tensor
\begin{eqnarray}
{\mathcal W}^{*\mu\nu} := \f{\omega^i}{J}\l( \dot{q}^\mu\pp{q^\nu}{a^i} - \dot{q}^\nu\pp{q^\mu}{a^i} \ri).
\label{e:def W*}
\end{eqnarray}
By the use of (\ref{e:PL Noether condition1}), we can calculate
\begin{eqnarray}
\p_\mu {\mathcal W}^{*\mu\nu} = 0,
\label{e:div W}
\end{eqnarray}
and (\ref{e:PL Noether condition2}) leads
\begin{equation}
\p_\lambda(U^\lambda {\mathcal W}^{*\mu\nu}) = {\mathcal W}^{*\lambda\nu}\p_\lambda U^\mu + {\mathcal W}^{*\mu\lambda}\p_\lambda U^\nu.
\label{e:dot W}
\end{equation}
In terms of differential forms, if we consider ${\mathcal W}^{*\mu\nu}$ as the dual of 2-form ${\mathcal W}$, (\ref{e:div W}) corresponds to
\begin{equation}
d{\mathcal W} = 0,
\label{e:dW}
\end{equation}
and (\ref{e:dot W}) corresponds to
\begin{equation}
d i_U {\mathcal W} = 0.
\label{e:di_UW}
\end{equation}
The divergence-freeness of $\omega$ leads to the closedness of ${\mathcal W}$, and
the constancy of $\omega$ along the streamline leads to the transport of ${\mathcal W}$ by $U$, respectively.
Therefore the antisymmetric tensor ${\mathcal W}$ is ``vorticity like'', and we find that (\ref{e:def W*}) is the Eulerianization of $\omega^i$ to a 2-form field.
Next we show the conservation of the generalized helicity, cf. (\ref{e:canonical helicity conservation}), from the conservation of the Noether current $\p I^\mu/\p a^\mu = 0$.
Substituting (\ref{e:PL I}) and multiplying the volume element $da^1 \wedge da^2 \wedge da^3$, we calculate
\begin{eqnarray*}
\pp{I^\mu}{a^\mu} da^1 \wedge da^2 \wedge da^3 \nonumber\\
\nonumber\\
= \l\{ \pp{}{a^0}\l( {\mathcal P}_\nu\omega^i\pp{q^\nu}{a^i} \ri) + \pp{}{a^i}\l[ -(h + e\varrho - \theta)\omega^i \ri] \ri\}da^1 \wedge da^2 \wedge da^3 \nonumber\\
\nonumber\\
= C^{\;\;0}_\lambda\f{\dot{q}^\lambda}{J}\l\{ \pp{}{a^0}\l( {\mathcal P}_\nu\omega^i\pp{q^\nu}{a^i} \ri) \ri. \nonumber\\
\hspace{9em}\l.+ \pp{}{a^i}\l[ -(h + e\varrho - \theta)\omega^i \ri] \ri\}da^1 \wedge da^2 \wedge da^3. \nonumber\\
\end{eqnarray*}
The purely spatial volume element $da^1 \wedge da^2 \wedge da^3$ in Lagrangian coordinates is mapped to a space-time mixed three-dimensional volume element $dV_\lambda$ embedded in the four-dimensional Eulerian coordinates (Fig.~\ref{f:volumes}):
\begin{eqnarray}
& & da^1 \wedge da^2 \wedge da^3 \nonumber\\
\nonumber\\
&=&\hspace{1em} \pp{(a^1,a^2,a^3)}{(x^1,x^2,x^3)}dx^1 \wedge dx^2 \wedge dx^3 + \pp{(a^1,a^2,a^3)}{(x^0,x^3,x^2)}dx^0 \wedge dx^3 \wedge dx^2 \nonumber\\
\nonumber\\
& &+ \pp{(a^1,a^2,a^3)}{(x^0,x^1,x^3)}dx^0 \wedge dx^1 \wedge dx^3 + \pp{(a^1,a^2,a^3)}{(x^0,x^2,x^1)}dx^0 \wedge dx^2 \wedge dx^1 \nonumber\\
\nonumber\\
&=& [C^{-1}]^{\;\;\lambda}_0\;dV_\lambda.
\label{e:d^3a to dV_mu}
\end{eqnarray}
\begin{figure}[htpb]
\begin{center}
\includegraphics*[width=0.8\textwidth]{./volumes.eps}
\end{center}
\caption{Volume element transformation between the Lagrangian coordinates and the Eulerian coordinates}
\label{f:volumes}
\end{figure}
Then we obtain
\begin{equation}
\f{\dot{q}^\lambda}{J}\l\{ \pp{}{a^0}\l( {\mathcal P}_\nu\omega^i\pp{q^\nu}{a^i} \ri) + \pp{}{a^i}\l[ -(h + e\varrho - \theta)\omega^i \ri] \ri\}dV_\lambda = 0.
\label{e:L_U P^W4}
\end{equation}
Further manipulation leads to
\begin{eqnarray}
\l[ \p_\nu(U^\mu {\mathcal P}_\mu){\mathcal W}^{*\lambda\nu} + U^\mu(\p_\mu {\mathcal P}_\nu - \p_\nu {\mathcal P}_\mu){\mathcal W}^{*\lambda\nu} \ri.\nonumber\\
\hspace{12em}\l. - \p_\nu(h + e\varrho - \theta){\mathcal W}^{*\lambda\nu} \ri]dV_\lambda = 0,
\label{e:L_U P^W2}
\end{eqnarray}
which is the component representation of
\begin{equation}
L_U({\mathcal P} \wedge {\mathcal W}) - d\l[ (h + e\varrho - \theta){\mathcal W} \ri] = 0.
\label{e:L_U P^W}
\end{equation}
By integrating we get
\begin{equation}
\dd{}{s}\int_{V(s)}{\mathcal P} \wedge {\mathcal W} = 0.
\label{e:generalized helicity conservation}
\end{equation}
Here ${\mathcal W}$ is any 2-form field which satisfies (\ref{e:dW}) and (\ref{e:di_UW}).
The arbitrariness of ${\mathcal W}$ originates from the fact that any label transformation satisfying (\ref{e:PL Noether condition1}) and (\ref{e:PL Noether condition2}) makes the action invariant.
In the non-relativistic case, this arbitrariness in the helicity is already known~\cite{Fukumoto}.
Specifying as
\begin{equation*}
\omega^i = \epsilon^{0ijk}\pp{{\mathcal P}_\zeta}{a^j}\pp{q^\zeta}{a^k},
\end{equation*}
(\ref{e:generalized helicity conservation}) becomes (\ref{e:canonical helicity conservation}).
\subsection{Relativistic cross helicity in MHD}\label{ss:Noether MHD}
We apply the relabeling transformation to the relativistic MHD Lagrangian (\ref{e:MHD Lagrangian}).
We can calculate
\begin{eqnarray}
& & \mathcal{L}_\mathrm{MHD}\delta a^0 + \delta q^\nu\pp{\mathcal{L}_\mathrm{MHD}}{\dot{q}^\nu} \nonumber\\
&=& \l[ n_0 P_\nu - \f{1}{J} b_0^i b_0^j\l( \pp{q^\lambda}{a^i}\pp{q_\lambda}{a^j} - \dot{q}^\lambda\dot{q}_\mu\pp{q^\mu}{a^i}\pp{q_\lambda}{a^j} \ri)\dot{q}_\nu \ri] \delta a^k \pp{q^\nu}{a^k} \nonumber\\
& &\hspace{6em}+ \f{1}{J} b_0^i b_0^j\l( \dot{q}_\mu\pp{q^\mu}{a^i}\pp{q_\nu}{a^j} - \dot{q}_\nu\dot{q}_\mu\dot{q}^\lambda\pp{q^\mu}{a^i}\pp{q_\lambda}{a^j} \ri)\delta a^k \pp{q^\nu}{a^k} \nonumber\\
\nonumber\\
& & \mathcal{L}_\mathrm{MHD}\delta a^k + \delta q^\nu\pp{\mathcal{L}_\mathrm{MHD}}{(\tilde{\p}_k q^\nu)} \nonumber\\
&=& -\l[ n_0 h - \f{1}{J} b_0^i b_0^j\l( \pp{q^\lambda}{a^i}\pp{q_\lambda}{a^j} - \dot{q}^\lambda\dot{q}_\mu\pp{q^\mu}{a^i}\pp{q_\lambda}{a^j} \ri) \ri] \delta a^k \nonumber\\
& & \hspace{10em}- \f{1}{J} b_0^j b_0^k\l( \pp{q_\nu}{a^j} - \dot{q}^\mu\dot{q}_\nu\pp{q_\mu}{a^j} \ri)\delta a^i \pp{q^\nu}{a^i}. \nonumber\\
\label{e:MHD I - delta lambda}
\end{eqnarray}
Thereby we manipulate the integrand in the right-hand side of (\ref{e:delta S}) as
\begin{eqnarray}
\pp{}{a^\mu}\l( \mathcal{L}_\mathrm{MHD}\delta a^\mu + \delta q^\nu\pp{\mathcal{L}_\mathrm{MHD}}{(\tilde{\p}_\mu q^\nu)} \ri) \nonumber\\
\nonumber\\
= -\pp{}{a^i}( \theta \omega^i) + \f{1}{J}\l[ \pp{ b_0^i}{a^k} b_0^j\f{\omega^k}{n_0} + b_0^i b_0^j \pp{}{a^k}\l( \f{\omega^k}{n_0} \ri) \ri.\nonumber\\
\hspace{7em}\l.- b_0^k b_0^j \pp{}{a^k}\l( \f{\omega^i}{n_0} \ri) \ri]\l( \pp{q^\lambda}{a^i}\pp{q_\lambda}{a^j} - \dot{q}^\lambda\dot{q}_\mu\pp{q^\mu}{a^i}\pp{q_\lambda}{a^j} \ri). \nonumber\\
\label{e:MHD div(I - delta lambda)2}
\end{eqnarray}
For this to be an exact differential, the conditions (\ref{e:PL Noether condition1}) and (\ref{e:PL Noether condition2}) on the label transformation are not sufficient; additionally we require
\begin{equation}
\omega^i = b_0^i.
\label{e:MHD-Noether condition3}
\end{equation}
This means that the degree of freedom in the relabeling symmetry is reduced.
Plugging in (\ref{e:MHD-Noether condition3}), the right-hand side of (\ref{e:MHD div(I - delta lambda)2}) becomes $-\p(\theta b_0^i)/\p a^i$.
Therefore $\delta \Lambda^\mu = (0,\, -\theta b_0^i)$, and a Noether current is given as
\begin{equation}
I^\mu = \l( P_\nu b_0^i \pp{q^\nu}{a^i},\, -(h - \theta)\bm{b}_0 \ri).
\label{e:MHD I}
\end{equation}
Here the momentum is mechanical (not ${\mathcal P}$ but $P$).
In the same way as in the previous plasma case, replacing $\omega^i \to b_0^i$ and setting $e = 0$,
we can show the conservation of the Noether current $\p I^\mu/\p a^\mu = 0$ leads
\begin{equation*}
\dd{}{s}\int_{V(s)}P \wedge {\mathcal F} = 0.
\end{equation*}
The generalized field ${\mathcal W}$ is specified as ${\mathcal F}$ in the definition of the MHD cross helicity because $\omega^i$ is specified as $ b_0^i$ in the Noether current (\ref{e:MHD I}) due to the additional term in the MHD Lagrangian (\ref{e:MHD Lagrangian}).
\section{Summary}
The conservation of the relativistic helicity is derived from Noether's theorem pertaining to the fluid elements' relabeling symmetry.
In plasma, the labeling transformation has some freedom; any transformation satisfying (\ref{e:PL Noether condition1}) and (\ref{e:PL Noether condition2}) preserves the plasma action.
Therefore the relativistic canonical helicity in plasma is generalized as the exterior product of the canonical momentum ${\mathcal P}$ and a field tensor ${\mathcal W}$ which satisfies (\ref{e:dW}) and (\ref{e:di_UW}).
We have also formulated the Lagrangian description of relativistic MHD.
The freedom of the labeling transformation is reduced in the MHD Lagrangian: only the transformation generated by $b_0$ makes the action invariant.
The resulting helicity becomes the exterior product of the mechanical momentum $P$ and the MHD tensor ${\mathcal F}$.
In non-relativistic dynamics, the times in the Lagrangian and Eulerian coordinates coincide.
Therefore the divergence-free current $(\p I^\mu/\p a^\mu)da^1 \wedge da^2 \wedge da^3 = 0$ in the Lagrangian coordinates can be transformed into some divergence-free current $(\p K^\mu/\p x^\mu)dx^1 \wedge dx^2 \wedge dx^3 = 0$ in the Eulerian coordinates~\cite{Padhye-Morrison1,Padhye-Morrison2}.
On the other hand, in relativistic dynamics, the time on the Lagrangian coordinates becomes proper time $s$ because the time is measured on the co-moving fluid element
while the time on the Eulerian coordinates is reference time $t$.
As we showed in the present work, due to the difference between $s$ and $t$, the Eulerianized conservation law (\ref{e:L_U P^W}) is no longer the divergence of a current;
technically, we require (\ref{e:d^3a to dV_mu}) for transforming the conservation of the current into the Eulerian coordinates, because the Jacobian is defined by $d^4x = Jd^4a$ (while in the non-relativistic case, $d^3x = Jd^3a$).
We must therefore be careful when using the term ``current'' in fluid/plasma theory: it properly belongs to the Lagrangian coordinates.
\ack
We thank Professor Philip J Morrison for helpful comments and discussions.
The work of Y.K. was supported by Grant-in-Aid for JSPS Fellows 241010.
\section{Introduction}
Mutations in DNA play a very important role in the theory of evolution.
DNA and RNA are built up as sequences of four bases
or nucleotides, which are usually identified by the letters C, G, T, A (T being replaced by
U in RNA), G and A belonging to the purine family, denoted by R, and C and T (U) to the pyrimidine family, denoted by Y.
Therefore in the case of genome sequences each point in the
sequence should be identified by an element of a four-letter alphabet
or by a set of two binary values. In a simplified treatment one identifies each
element according to its purine or pyrimidine nature, thus reducing to a two-letter alphabet or to a
binary set.
Genetic mutations, i.e. modifications of the DNA genomic sequences, play
a fundamental role in the evolution. They include changes
of one or more than one nucleotide, insertions and
deletions of nucleotides, frame-shifts and inversions. In the present
paper we consider only the point mutations, for a review see (Li Wen-Hsiung, 1997). These are usually modeled by
stationary, homogeneous Markov processes, which assume: 1) the nucleotide
positions are stochastically independent of one another, which is clearly
not true in functional sequences; 2) the mutation rate does not
depend on the site and is constant in time, which ignores the existence
of ``hot spots'' for mutations as well as the probable existence of
evolutionary spurts; 3) the nucleotide frequencies
are equilibrium frequencies.
In the simplest model one can think of, all the mutations are assumed reversible and with equal rates, so that only one parameter rules all the transitions. This is clearly a very rough approximation, and indeed more complicated models, depending on more parameters, have been proposed. The most general one, with non-reversible transitions depending on the type of the nucleotide undergoing a mutation and on the kind of mutation, requires 12 parameters. However, all these models are based on the assumption that the transitions do not depend on the neighbouring nucleotides.
%
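As an illustration of the simplest one-parameter scheme just described (a sketch with hypothetical numbers, not the model developed below), the corresponding Markov dynamics and its uniform equilibrium can be written as:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

alpha = 0.1                                    # hypothetical mutation rate
Q = alpha * (np.ones((4, 4)) - 4 * np.eye(4))  # one rate for all changes
p0 = np.array([0.4, 0.3, 0.2, 0.1])            # hypothetical C,G,T,A freqs

for t in (0.0, 1.0, 10.0, 100.0):
    print(t, p0 @ expm(Q * t))   # tends to (0.25, 0.25, 0.25, 0.25)
\end{verbatim}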
In the early nineties it was realized that the intensity of point mutations really depends on the context where they happen (Blake R., Hess and Nicholson-Tuell, 1992; Hess, Blake J. and Blake R., 1994), and
in the last decades an increasing amount of data in genetic research has provided further evidence that there is indeed a non-negligible effect of the nearest neighbors as well as an effect of the whole sequence, see e.g. (Arndt, Burge and Hwa, 2002).
In the more simplified descriptions, where the elements of the two chemical families (purine and pyrimidine) to which the four nucleotides belong are identified, a correspondence is made between the nucleotides and the elements of a binary set. It follows that the mutations are mathematically modeled as transitions between sequences of binary labels. As a binary alphabet is equivalent to
spin variables, it is clear that the spin approaches, extensively studied in physics, have a natural application in the theory of molecular biological evolution. Indeed since 1986, when
Leuth\"{a}usser (Leuth\"{a}usser, 1986, 1987) put a correspondence between the Eigen model
of evolution (Eigen 1971; Eigen, McCaskill and Schuster, 1989) and a two-dimensional Ising model, many
articles have been written representing biological systems as
spin models.
In (Baake E., Baake M. and Wagner, 1997) it has been shown that the parallel mutation-selection model can be put in
correspondence with the hamiltonian of an Ising quantum chain and in
(Saakian and Hu, 2004) the Eigen model of evolution has been
mapped into the hamiltonian of one-dimensional quantum spin chains.
In this approach the genetic sequence is specified by a sequence of
spin values $ \pm 1$. In more refined models the correspondence is made between the four nucleotides and a set of two binary labels, see (Hermisson, Wagner and Baake M., 2001) for a four-state quantum chain approach.
The main aim of the works using this approach, see (Baake E., Baake M. and Wagner, 1998),
(Wagner, Baake E. and Gerisch, 1998), (Baake E. and Wagner, 2001), (Hermisson, Redner and Wagner, 2002), is to find, in different landscapes, the mean ``fitness'' and the
``biological surplus'', in the framework of biological population
evolution. As a standard assumption, the strength
of the mutation is taken to depend on the distance between two
sequences, which is identified with the Hamming distance. We recall that the
Hamming distance between two strings of binary labels is given by the number of sites with different
labels. Moreover it is usually assumed that the mutation matrix
elements vanish for Hamming distances larger than 1, i.e. for
changes of more than one nucleotide.
The Hamming distance assumption is clearly
unrealistic in the domain of genetic mutations, so the only justification for its use is
that it generally allows one to solve the problem exactly in the one-point-mutation
scheme or to find more tractable numerical solutions.
For example the
mutation between the sequence $\ldots GUGU-ACAC \ldots$ and the
sequences, both differing by one unit in the Hamming distance from the
original one,
$\ldots GUUU-AAAC \ldots$ and $\ldots GGGU-ACCC \ldots$ implies respectively a
change in the free energy, at standard conditions, of $ \approx -0.89$
kcal/mol and of $ \approx +0.8$ kcal/mol, see (SantaLucia, 1998).
To assume these transitions equally probable is clearly a rough approximation.
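(For definiteness, the Hamming distance used here acts on single strands; in the duplex notation above, one point mutation changes one symbol on each of the two complementary strands.) A trivial sketch of the distance:
\begin{verbatim}
def hamming(s1, s2):
    assert len(s1) == len(s2)
    return sum(a != b for a, b in zip(s1, s2))

print(hamming("GUGU", "GUUU"), hamming("GUGU", "GGGU"))   # 1 1
\end{verbatim}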
Let us note that we use the term transition in a general physical sense.
In biology \textit{transition} is a mutation from a purine (pyrimidine)
to a purine (pyrimidine), \textit{transversion} is a mutation from a purine
(pyrimidine) to the other family.
So, in the above specified simplified assumption, the
transitions have to be really understood as biological \textit{transversions}.
%
To our knowledge there has been no attempt to apply spin models to
obtain the observed equilibrium distribution of oligonucleotides
in DNA. Martindale and Konopka (Martindale and Konopka, 1996) have, indeed, remarked that
the ranked short (ranging from 3 to 10
nucleotides) oligonucleotide frequencies, in both coding and
non-coding regions of DNA, follow a Yule
distribution. We recall that a Yule distribution (Yule, 1924) is given by
\begin{equation}
f = a \, {n^k}\, {b^n} \label{eq:Yule}
\end{equation}
where $n$ is the rank and $a$, $ k < 0$ and $b$ are 3 real parameters.
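Since $\log f = \log a + k \log n + n \log b$ is linear in $(1, \log n, n)$, the three parameters can be estimated by ordinary least squares; a minimal sketch on hypothetical ranked frequencies:
\begin{verbatim}
import numpy as np

n = np.arange(1, 65)
f_obs = 0.05 * n ** -0.4 * 0.97 ** n    # hypothetical placeholder data

A = np.column_stack([np.ones(n.size), np.log(n), n])
coef, *_ = np.linalg.lstsq(A, np.log(f_obs), rcond=None)
a, k, b = np.exp(coef[0]), coef[1], np.exp(coef[2])
print(a, k, b)                          # recovers (0.05, -0.4, 0.97)
\end{verbatim}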
In order to face this problem, in this paper we propose a
spin model where the effects of the neighbours (not only the nearest ones) and of the
whole sequence context are taken into account.
%
To this aim, we build up a quantum and classical spin model in which the strength
of the transition matrix does not depend only on the number of different
symbols (Hamming distance) between two sequences, but in some sense
also on the position of the changed symbols and on the whole distribution
of the nucleotides in the sequence.
In this paper, we assume that the transition matrix
is non-vanishing only for a total spin flip equal to $\pm 1$, induced by the
action of a single step operator, which generally is equivalent to a
one-nucleotide change.
%
Let us recall some phenomenological aspects of mutations.
From observations on the character of spontaneous mutations,
it seems possible to point out
some common features of almost every studied process.
These can
be summarized in the following points:
\begin{itemize}
\item the mutation rate of a nucleotide depends on the nature of its
first neighbours;
\item mutations occur more frequently in purine/pyrimidine alternating
tracts;
\item \textit{transitions} are more frequent than \textit{transversions};
\item mutations mainly affect the dinucleotides \textsc{CG}.
\end{itemize}
In modeling the mutation mechanism, paying attention only to the
difference between purines and pyrimidines (so that we only consider
\textit{transversions}), we take into account only the first two of the four points listed above.
In a slightly different context, it has been
remarked (Frappat, Minichini, Sciarrino and Sorba, 2003) that the rank of codon usage probabilities follows a universal law, that is independent of the biological species, the rank ordered distribution $f(n)$
being nicely fitted by a sum of an exponential part
and a linear part. Of course
the same codon occupies in general two different positions in the rank
distribution function for two different species, but the shape of the function is the same.
More specifically, for each biological species, codons are ordered following the decreasing order of the values of their
usage probabilities, i.e. the codon with rank $n = 1$ corresponds to the one with highest
value of the codon usage frequency,
codon with rank $n = 2$ is the one corresponding to the next highest value of the codon usage frequency, and so on.
In that article $f(n)$ was plotted versus the
rank and was well fitted by the following function
\begin{equation}
f(n) = \widehat{\alpha} \, e^{-\widehat{\eta} n} \, - \, \widehat{\beta} \, n \, + \, \widehat{\gamma} \;,
\label{eq:bf}
\end{equation}
where $0.0187 \leq \widehat{\alpha} \leq 0.0570$, $0.050 \leq \widehat{\eta} \leq 0.136$ and
$0.82 \times 10^{-4} \leq \widehat{\beta} \leq 3.63 \times 10^{-4}$ depend on the biological species,
essentially on the total \emph{exonic} $GC$ content.
The four constants have to satisfy the normalization condition
\begin{equation}
\sum_{n} \, f(n) = 1
\end{equation}
The value of the constant $\widehat{\gamma} = 0.0164$ is approximately equal to $1/61$, i.e. the value of the codon usage probability in the
case of a uniform, unbiased codon distribution (not taking into account the 3 Stop codons), so eq.(\ref{eq:bf}) really depends on only two free parameters.
Therefore the first two
terms in eq.(\ref{eq:bf}) can be viewed as the effect of some bias mechanism.
We assume that this bias is only the effect of the mutation
and selection pressure, which we model by means of a suitable fitness and a mutation matrix
which depend on the changes of the labels identifying the codons in the so called {\bf crystal basis} model of the genetic code, see (Frappat, Sciarrino and Sorba, 1998).
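For concreteness, the following sketch (ours; the values of $\widehat{\alpha}$ and $\widehat{\eta}$ are illustrative mid-range choices, not fitted values) evaluates eq.(\ref{eq:bf}) on the 61 ranks and solves the normalization condition for $\widehat{\beta}$; the resulting $\widehat{\beta}$ indeed falls inside the quoted range.
\begin{verbatim}
import numpy as np

n = np.arange(1, 62)                  # ranks of the 61 sense codons
alpha, eta, gamma = 0.03, 0.09, 1/61  # illustrative mid-range values
# solve sum_n f(n) = 1 for beta, given the other three constants
beta = (alpha * np.exp(-eta * n).sum() + 61 * gamma - 1) / n.sum()
f = alpha * np.exp(-eta * n) - beta * n + gamma
print(beta, f.sum())                  # beta ~ 1.7e-4; the sum equals 1
\end{verbatim}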
The paper is organised in the following way. In Sec. 2 we briefly review the mathematical
tools we use, putting in an Appendix, to make the article self-contained, the basic definitions and properties. We identify a sequence of $N$ nucleotides, or a chain of $N$ spins, with a vector state of an irreducible representation (irrep.) of $U_{q \to 0}(sl(2))$. Transitions between sequences are introduced
in terms of operators connecting vector states belonging, or not, to the same irrep. In Sec. 3 we build up a quantum spin model described by a hamiltonian whose diagonal
part, in the basis vectors of the irrep., represents the {\it fitness}, while the off-diagonal terms describe the
mutations. Let us point out that we do not aim to describe mutations in DNA as quantum effects. We use
the quantum mechanics formalism only as a very useful language to introduce the mutation-inducing
operators. The model, which can appear unphysical if applied to a
quantum spin chain, should be considered, in the light of the previous
remarks on the application to biological evolution, as a guideline toward
the search of solutions which can reproduce the observed
oligonucleotide distribution. In some sense
we proceed in the backward direction with respect to the usual approach: we go from the quantum
to the classical model. In Sec. 4, using the results of the previous section, we write classical kinetic equations for the probabilities and we solve them numerically, in the case of short oligonucleotide sequences. In Sec. 5 we discuss our results. In Sec. 6 we extend the model to a four-letter alphabet, that is, we identify the nucleotides with the fundamental 4-dim irreducible representation
of $U_{q \to 0}(sl(2) \oplus sl(2))$. In Sec. 7 the four-letter model described in Sec. 6 is applied
and numerically solved for the codons.
The numerical solution of
the model gives a stationary configuration for the distribution frequency
which is indeed nicely fitted by the function $f(n)$. These solutions,
but largely not their shape, depend on the numerical values of the
arbitrarily chosen parameters of the mutation matrix and of the fitness.
However, a choice of the parameters in severe contradiction with
reality, implying, e.g., a very high or very low ratio of transversion over transition
mutations, seems to destroy the goodness of the fit.
At the end a few conclusions and possible future developments
are presented.
\section{Mutations and Crystal basis}
An ordered sequence of N nucleotides, characterized only by their purine or
pyrimidine character, that is a string of $N$ binary labels or spins, can be represented as a vector state belonging to the
N-fold tensor product of the fundamental irreducible representation
(irrep.) (labeled by $ J =1/2$) of $\mathcal{U}_{q \to 0}(sl(2))$
(Kashiwara, 1990), see
Appendix A. This parametrization allows one to represent, in a simple way,
the mutation of a sequence as a transition between vectors, which
can be subjected to selection rules and
whose strength depends on the two states concerned.
\subsection{Labelling the state}
We identify an N-nucleotide sequence as a state
\begin{equation}
\mid \mathbf{J} \rangle = \mid J_3, J^N, \ldots, J^2 \rangle
\end{equation}
where $J^{N}$ labels the irrep. which the state belongs to, $J_3$ is the value of the 3rd
diagonal generator of $\mathcal{U}_{q \to 0}(sl(2))$ ($2J_3 = n_R - n_Y$, $n_X$ being the
number of $X$ elements in the sequence) and $J^i$ ($2 \leq \; i \; \leq N - 1$) are $ N - 2 $ labels
needed to remove the degeneracy of the irreps. in the $N$-fold tensor product in order to
completely identify the state. These further labels can be seen as the labels identifying
the irrep. which the state, corresponding to the sequence truncated to the $i$-th element,
belongs to.
We introduce a scalar product, such that
\begin{equation}
\langle \mathbf{J}\mid \mathbf{K} \rangle =
\left\{
\begin{array}{rl}
1 & \mbox{if } J_3 = K_3 \mbox{ and } J^i = K^i \; \forall i \\
0 & \mbox{otherwise}
\end{array}
\right.
\end{equation}
As an example, we can consider a trinucleotide string
($N=3$) and label the eight different spin chains in the following way,
using the crystal basis representation $\mid {J_3},{J^N},\ldots,{J^2}\rangle$:
\begin{eqnarray*}
\uparrow\downarrow\doa &=& \mid -{\textstyle{\frac{1}{2}}},{\textstyle{\frac{1}{2}}},0 \rangle \;\;\;\;\;\;\;\;
\uparrow\downarrow\uparrow = \mid {\textstyle{\frac{1}{2}}},{\textstyle{\frac{1}{2}}},0 \rangle \\
\downarrow\uparrow\downarrow &=& \mid -{\textstyle{\frac{1}{2}}},{\textstyle{\frac{1}{2}}},1 \rangle \;\;\;\;\;\;\;\;
\uparrow\upa\downarrow = \mid {\textstyle{\frac{1}{2}}},{\textstyle{\frac{1}{2}}},1 \rangle \\
\downarrow\doa\downarrow &=& \mid -{\textstyle{\frac{3}{2}}},{\textstyle{\frac{3}{2}}},1 \rangle \;\;\;\;\;\;\;\;
\downarrow\doa\uparrow = \mid -{\textstyle{\frac{1}{2}}},{\textstyle{\frac{3}{2}}},1 \rangle \\
\downarrow\uparrow\upa &=& \mid {\textstyle{\frac{1}{2}}},{\textstyle{\frac{3}{2}}},1 \rangle \;\;\;\;\;\;\;\;\;\;\;
\uparrow\upa\uparrow = \mid {\textstyle{\frac{3}{2}}},{\textstyle{\frac{3}{2}}},1 \rangle.
\end{eqnarray*}
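These labels can be computed mechanically. The following minimal Python sketch, which assumes the contraction rule of Appendix A (an $R$ immediately followed, after previous deletions, by a $Y$ forms a contracted couple), returns $J_3$ and the labels $J^2,\ldots,J^N$ of a string over $\{R,Y\}$; it reproduces the eight $N=3$ states listed above.
\begin{verbatim}
def labels(seq):
    """Return (J3, [J^2, ..., J^N]) for a string over {'R','Y'}."""
    def J(s):
        free_R = 0   # R's still waiting for a Y on their right
        free_Y = 0   # Y's with no free R on their left
        for c in s:
            if c == 'R':
                free_R += 1
            elif free_R > 0:   # contract this Y with a free R
                free_R -= 1
            else:
                free_Y += 1
        return (free_R + free_Y) / 2   # spin of the free letters
    J3 = (seq.count('R') - seq.count('Y')) / 2
    return J3, [J(seq[:i]) for i in range(2, len(seq) + 1)]

# RYY is the chain up-down-down, i.e. the state |-1/2, 1/2, 0>
print(labels('RYY'))   # (-0.5, [0.0, 0.5]) = (J3, [J^2, J^3])
\end{verbatim}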
In our approach, sequences with the same number of
spins up and down, placed in different sites, are described by different
states. This has a phenomenological support; indeed, in the case of RNA sequences,
the values of the free energy ($- \Delta G$), in kcal/mol at standard conditions,
for four different sequences made of two C and two G, i.e. two R
and two Y, as reported in Table I
of (Xia et al., 1998): CCGG (4.55), GGCC (5.37), CGCG (3.66), GCGC
(4.61), are different.
At this stage the crystal basis provides, at least, an alternative way of labelling any
finite spin sequence, mapping any sequence onto a vector state of an
irrep., but we know that in physics and mathematics the
choice of appropriate variables is of primary importance to face a
problem. Indeed we argue that these variables are suitable to
partially describe the non-local events which affect the mutations.
We only consider a single spin flip, which in most cases, but not
always, is equivalent to a single nucleotide mutation.
Flipping one spin can induce a transition to a state belonging or not
belonging to the irrep. of the original state. From the results of
Appendix A we see that identifying a nucleotide sequence as a state of
an irrep. requires fixing the number of contracted RY couples occurring
in the considered sequence \footnote{For readers familiar with the physics formalism, contraction should be
understood in the same sense as the contraction of creation-annihilation
operators in the Wick expansion}.
Therefore, flipping a spin implies either the creation or the deletion of
a contracted RY couple, corresponding respectively to a variation
of $-1$ or $+1$ of the value of $J^N$ and, possibly, of some other $J^i$
($2 \leq i \leq N-1$), or it leaves the number of
contracted couples unmodified (so that the variation of $J^{N}$ is $\Delta{J^{N}}=0$, but some other
$J^{i}$ may change).
In the following we classify the mutations induced by a single spin flip in an
N-nucleotide string, according to the induced variation in
the string labels $J_{3}, J^{N},\ldots,J^{2}$. We focus our attention on the spin flip
at the $i$-th position, but sometimes the transition will also affect other nucleotides.
We call \textit{left} (\textit{right}) \textit{side free} the nucleotides
on the left (right) of $i$-th position and not contracted (in the sense expressed in Appendix A) with
another one on the same side.
Let $R_l$ be the initial (before mutation) number of the \textit{left side free} purines and $Y_r$
the initial number of the \textit{right side free} pyrimidines.
We want to count the total number of contracted $RY$ couples (before and after mutation) in the
string, so we call $R_{in}$ ($R_{fi}$) the number, in the initial (final) state, of $R$ preceding
some $Y$, which is not on the same side, and not contracted with any $Y$ on their side. In the same way, with $Y_{in}$ ($Y_{fi}$)
we refer to the number of $Y$ following some $R$, which is not on the same side, and not contracted with any $R$ on their side.
If a $R \rightarrow Y$ mutation ($\Delta{J_3}=-1$) occurs in $i$-th position, then $R_{in}=R_{fi}+1$
and $Y_{in}=Y_{fi}-1$, where $R_{in}=R_{l}+1$ and $Y_{in}=Y_{r}$. We can distinguish different string
configurations around the $i$-th position, so that a single nucleotide mutation in $i$-th position
can correspond to different variations in the string labels. We have that
$\Delta{J^N} = \tfrac{1}{2}\left(|{R_{fi}}-{Y_{fi}}|-|{R_{in}}-{Y_{in}}|\right)$:
\begin{itemize}
\item If $\mathbf{R_l}=\mathbf{Y_r}$ then $R_{in}-1=Y_{in}$, so that $|R_{in}-Y_{in}|=1$; after mutation, $R_{fi}=Y_{fi}-1$, so that $|R_{fi}-Y_{fi}|=1$. Then the
variation of $J^{N}$ is $\Delta{J^{N}}=0$. We distinguish two subcases:
\begin{enumerate}
\item $R_{l}=Y_{r}\neq{0}$: $\Delta{J^{2}}=0,\ldots,\Delta{J^{i-1}}=0,\Delta{J^{i}}=-1,\ldots,\Delta{J^{k-1}}=-1,\Delta{J^{k}}=0,\ldots,\Delta{J^{N}}=0$
($2 \leq i \leq N-1$; $i+1 \leq k \leq N$);
\item $R_{l}=Y_{r}={0}$: $\Delta{J^{i}}=0 \; \forall{i}$.
\end{enumerate}
\item If $\mathbf{R_l}>\mathbf{Y_r}$, i.e. $R_{l}=Y_{r}+g$ ($g>0$), then $|R_{in}-Y_{in}|=g+1$ and
$J^{N}=\frac{1}{2}(g+1)$; after mutation, $|R_{fi}-Y_{fi}|=g-1$ and $J^{N}=\frac{1}{2}(g-1)$. Then
$\Delta{J^{N}}=-1$. We distinguish two subcases:
\begin{enumerate}
\item $Y_{r}=0$: $\Delta{J^{2}}=0,\ldots,\Delta{J^{i-1}}=0,\Delta{J^{i}}=-1,\ldots,\Delta{J^{N}}=-1$ ($2 \leq i \leq N$);
\item $Y_{r}\neq{0}$: $\Delta{J^{2}}=0,\ldots,\Delta{J^{i-1}}=0,\Delta{J^{i}}=-1,\ldots,\Delta{J^{N}}=-1$ ($3 \leq i \leq N-1$).
\end{enumerate}
\item If $\mathbf{R_l}<\mathbf{Y_r}$, i.e. $R_{l}=Y_{r}-g$ ($g>0$), then $J^{N}=\frac{1}{2}(g-1)$;
after mutation, $J^{N}=\frac{1}{2}(g+1)$, so that $\Delta{J^{N}}=1$. We distinguish two subcases:
\begin{enumerate}
\item $R_{l}=0$: $\Delta{J^{2}}=0,\ldots,\Delta{J^{m-1}}=0,\Delta{J^{m}}=1,\ldots,\Delta{J^{N}}=1$ ($2 \leq m \leq N$,
$m\neq{i}$);
\item $R_{l}\neq0$: $\Delta{J^{2}}=0,\ldots,\Delta{J^{i-1}}=0,\Delta{J^{i}}=-1,\ldots,\Delta{J^{k-1}}=-1,\Delta{J^{k}}=0,\Delta{J^{k+1}}=1,\ldots,\Delta{J^{N}}=1$
($2 \leq i \leq N-2$; $i+1 \leq k \leq N-1$).
\end{enumerate}
\end{itemize}
In the case of mutation $Y \rightarrow R$, for a fixed string configuration, the selection rules
are similar, changing ${\pm}1$ with ${\mp}1$.
Operators which lead to the above transitions can be built from $J_{-},A_{i},A_{i,k}$ and their adjoint
operators, defined in the following section.
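In practice these variations can be read off by re-labelling the flipped string with the \texttt{labels} sketch of Sec. 2.1; the helper below (again an illustrative sketch) returns $\Delta J_3$ and $\Delta J^2,\ldots,\Delta J^N$ for a flip at position $i$.
\begin{verbatim}
def flip(seq, i):
    # flip the spin at the (1-based) position i
    c = 'Y' if seq[i - 1] == 'R' else 'R'
    return seq[:i - 1] + c + seq[i:]

def delta_labels(seq, i):
    # reuses labels() from the sketch of Sec. 2.1
    J3a, Ja = labels(seq)
    J3b, Jb = labels(flip(seq, i))
    return J3b - J3a, [b - a for a, b in zip(Ja, Jb)]

# R -> Y flip at position 4 of RYYRY: R_l = 0 < Y_r, so dJ^N = +1
print(delta_labels('RYYRY', 4))   # (-1.0, [0.0, 0.0, 0.0, 1.0])
\end{verbatim}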
\subsection{Transition operators}
In this section we write the transition part of the hamiltonian, for the
different possible initial configurations of the string.
We distinguish different string configurations around the $i$-th
position, so that a single nucleotide mutation in $i$-th position can
correspond to different variations in the string labels.
The transition-inducing operators are built by means of $J_{-},A_{i},A_{i,k}$ and
their adjoint operators, as defined below.
\begin{itemize}
\item If $\mathbf{R_l}=\mathbf{Y_r}$, we distinguish two subcases:
\begin{enumerate}
\item $R_{l}=Y_{r} \neq {0}$
\begin{equation}
{H_1}=\sum_{i=2}^{N-1}\,\sum_{k=i+1}^{N} \, \alpha_{1}^{ik} \,( {A_{i,k}J_{-} + J_{+}A_{i,k}^{\dagger}})
\label{eq:1}
\end{equation}
\item $R_{l}=Y_{r}={0}$
\begin{equation}
{H_2}= \alpha_{2} \, (J_{-}+J_{+}) \label{eq:2}
\end{equation}
\end{enumerate}
\item If $\mathbf{R_l}>\mathbf{Y_r}$, we distinguish two subcases:
\begin{enumerate}
\item $Y_{r}=0$
\begin{equation}
{H_3}=\sum_{i=2}^{N} \, \alpha_{3}^{i} \, ({A_{i}J_{-} + J_{+}A_{i}^{\dagger}}) \label{eq:3}
\end{equation}
\item $Y_{r}\neq{0}$
\begin{equation}
{H_4}=\sum_{i=3}^{N-1} \, \alpha_{4}^{i} \, ({A_{i}J_{-} + J_{+}A_{i}^{\dagger}}) \label{eq:4}
\end{equation}
\end{enumerate}
\item If $\mathbf{R_l}<\mathbf{Y_r}$, we distinguish two subcases:
\begin{enumerate}
\item $R_{l}=0$
\begin{equation}
{H_5}=\sum_{m=2}^{N} \, \alpha_{5}^{m} \,({J_{-} A_{m}^{\dagger} + A_{m}J_{+}})
\label{eq:5}
\end{equation}
\item $R_{l}\neq0$
\begin{equation}
{H_6}=\sum_{i=2}^{N-2}\,\sum_{k=i+1}^{N-1} \, \alpha_{6}^{ik} \,
({A_{i,k} J_{-} A_{k+1}^{\dagger} + A_{i,k}^{\dagger} A_{k+1} J_{+}}) \label{eq:6}
\end{equation}
\end{enumerate}
\end{itemize}
where $J_{+}$ and $J_{-}$ are the \textit{step operators} defined by Kashiwara
(Kashiwara, 1990), acting on an
irreducible representation with highest weight $J^{N}$, i.e.
inducing
the transitions $\Delta J^i=0, \; \forall i$
\begin{eqnarray}
A_{i,k} \mid \mathbf{J} \rangle &=& \mid J_3, J^N, \ldots,
J^k, J^{k-1}-1, \ldots, J^{i}-1, J^{i-1}, \ldots, J^2 \rangle
\nonumber \\
& & (2 \leq i \leq N-1 \;\;\;\; i+1 \leq k \leq N) \label{eq:ik}
\end{eqnarray}
\begin{eqnarray}
A_i \mid \mathbf{J} \rangle &=& \mid J_3, J^{N}-1, \ldots,
J^{i}-1, J^{i-1}, \ldots, J^2 \rangle
\nonumber \\
& & (2 \leq i \leq N)
\end{eqnarray}
\begin{eqnarray}
B_m \mid \mathbf{J}\rangle &=& \mid J_3, J^{N}+1, \ldots,
J^{m}+1, J^{m-1}, \ldots, J^2 \rangle
\nonumber \\
& & (2 \leq m \leq N)
\end{eqnarray}
Therefore $A_{i,k}^{\dagger}$ is the operator which increases by 1 the value
of $J^l$, for $i \leq l \leq k-1$, and $B_m = A_m^{\dag}$.
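On the label lists used in the sketches above, these operators act as simple shifts; the fragment below makes the bookkeeping explicit (it is a sketch: the checks which annihilate a state whose new labels are inadmissible are omitted).
\begin{verbatim}
def A_ik(state, i, k):
    # eq. (ik): lower J^i, ..., J^(k-1) by one; J3 and the rest untouched
    J3, J = state                  # the list entry J[l-2] holds J^l
    J = J[:]
    for l in range(i, k):
        J[l - 2] -= 1
    return J3, J

def A_i(state, i):
    # lower J^i, ..., J^N by one
    N = len(state[1]) + 1
    return A_ik(state, i, N + 1)

def B_m(state, m):
    # B_m = adjoint of A_m: raise J^m, ..., J^N by one
    J3, J = state
    J = J[:]
    for l in range(m, len(J) + 2):
        J[l - 2] += 1
    return J3, J
\end{verbatim}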
Let us remark that in the above equations only the writing order of
$A_{k+1}$ and $J_{\pm}$ has to be respected as
\begin{equation}
[\, A_{k+1}, \; J_{\pm}] \neq 0
\end{equation}
while ($ i < k < N$)
\begin{equation}
[\, A_{i,k}, \; A_{k+1}] = [\, A_{i,k}, \; J_{\pm}] = 0
\end{equation}
The following commutation relations can be useful for
understanding the action of the transition hamiltonian as well as for further
developments:
\begin{equation}
[\, A_{i}, \; J_3] = [\, A_{i,k}, \; J_3] = 0 \;\;\;\;\; \forall i, k
\end{equation}
\begin{equation}
[\, A_{i} J_{-}, \; J_3] = A_{i} J_{-} \;\;\;\;\;
[\, J_{+} A^{\dag}_{i}, \; J_3] = - J_{+} A^{\dag}_{i}
\end{equation}
\begin{equation}
[\, A_{i,k} J_{-}, \; J_3] = A_{i,k} J_{-} \;\;\;\;\;
[\,J_{+} A^{\dag}_{i,k}, \; J_3] = - J_{+} A^{\dag}_{i,k}
\end{equation}
\begin{equation}
[\, A_{i,k} J_{-} A^{\dag}_{k+1}, \; J_3] = A_{i,k} J_{-} A^{\dag}_{k+1}
\end{equation}
\begin{equation}
[\, A^{\dag}_{i,k} A_{k+1} J_{+} , \; J_3] = - A^{\dag}_{i,k} A_{k+1} J_{+}
\end{equation}
A few words to comment on the above equations. Let us consider a mutation
$R \rightarrow Y$ which involves a transition
$\Delta J^{N}=-1$ (case $R_{l}>Y_{r}$); the considered transition also entails
$\Delta{J_3}=-1$, so we
have to apply the operator $J_{-}$ as well as the operator $A_{i}$. Of course, first we have to lower by
1 the value of $J_3$, and then to modify $J^N$; otherwise the initial state may
be annihilated even if the
transition is allowed (in the case $J^{N}-1<J_3$).
Likewise, for a transition $Y \rightarrow R$ ($\Delta{J_3}=+1$), first the change
$J^{N} \rightarrow J^{N}+1$ has to be made, and then $J_3 \rightarrow J_{3}+1$.
To write a self-adjoint operator, we have to add to the operator which gives
rise to the
transition $Y \rightarrow R$ the one which leads to $R \rightarrow Y$, leaving the rest of the
string unmodified, that is
\begin{equation}
A_{i}J_- + {J_+}{A_{i}^{\dagger}}
\end{equation}
This operator leads to the mutation $Y \rightarrow R$ or $R \rightarrow Y$ for a nucleotide in
$i$-th position, in a string with $R_{l}>Y_{r}$.
If the mutation $R \rightarrow Y$ raises the value of $J^{N}$,
first $J^{N}$ has to be modified, and then $J_3$;
with the aim of writing a self-adjoint operator, we write
\begin{equation}
{J_-}{A_{m}^{\dagger}} + {A_m}J_+
\end{equation}
The above operator gives rise to mutations $R \rightarrow Y$ and
$Y \rightarrow R$
for a nucleotide in $i$-th position, preceding the $m$-th one, in the case $R_{l}=0, Y_{r}\neq{0}$.
Let us remark that eq.(\ref{eq:4}) is included in
eq.(\ref{eq:3}), if the coupling constants $\alpha_{4}^{i}$ are assumed equal
to $\alpha_{3}^{i}$;
in eq.(\ref{eq:6}), only the writing order for $A_{k+1}$ (and its adjoint)
and $J_{\pm}$ has to be respected.
Let us also note that when $\Delta J^{N} = 0$ there is no need to order the operators.
\section{The quantum spin model}
Assuming now that the coupling constants do not depend on $i,k,m$, we can write the transition hamiltonian $H_I$ as
\begin{equation}
{H_I}=\mu_{1}({H_3}+{H_5})+\mu_{2}{H_1}+\mu_{3}{H_2}+\mu_{4}{H_6}
\end{equation}
The total hamiltonian of the model will be written as
\begin{equation}
H = H_{0} \, + \, H_I
\label{eq:H}
\end{equation}
where $ H_{0}$ is the diagonal part in the chosen basis and,
in the following, is assumed to be $H_{0} = \mu_{0} \, J_{3}$.
We let the phenomenology suggest the scale of the values of the
coupling constants of $H_I$. We want to write an interaction
term which makes a mutation in alternating purinic/pyrimidinic tracts more
likely than in
polypurinic or polypyrimidinic ones, in line with the second phenomenological point above. By a single nucleotide mutation in a polypurinic
(polypyrimidinic) tract we mean a mutation \emph{inside} a string with all nucleotides $R$ ($Y$), i.e.
a highest (lowest) weight state. Such a transition corresponds to the
selection rules
$\Delta J^N=-1$, $\Delta{J_3}=\pm1$, i.e. a transition generated by the action of $H_3$ and $H_5$.
In the interaction term $H_I$, we give these terms a coupling constant smaller than the
other ones.
We introduce, for $ \; \Delta J_{3} = \pm 1 \;$, only four
different mutation parameters $\;\mu_{i}\;$ ($i = 1,2,3,4$), with
$\;\mu_{1} < \mu_{k},\;\; k > 1$.
\begin{enumerate}
\item $\mu_{1}$ for mutations which change the irrep., $ \; \Delta {J^N} = \pm
1, \; $and include the spin flip inside an highest or lowest weight vector;
\item $\mu_{2}$ for mutations which do not change the irrep., $ \; \Delta {J^N} = 0,
\; $ but modify other values of $J^{k}$, $\; \Delta J^{k} = \pm 1$;
\item $\mu_{3}$ for mutations which do not change the irrep., $ \; \Delta J^{N} = 0$,
neither the other values of $J^{k}$, $\; \Delta J^{k} = 0$, ($2 \leq k \leq N-1$);
\item $\mu_{4}$ for mutations which change the irrep., $ \; \Delta {J^N} = \pm
1, \; $ but only in a string with $0 \neq R_{l} < Y_{r}$.
\end{enumerate}
We do not introduce another parameter for mutations
generated by $H_4$, i.e. an $i$-th nucleotide mutation in a string with
$R_{l} > Y_{r} \neq 0$, so as not to distinguish, in a polypurinic string, between
a mutation in the 2nd position and another one inside the string.
Let us emphasize once more that the proposed model takes into account,
at least partially, the effect on the transition in the $i$-th site of
the distribution of all the spins.
Presently we consider only the part of the interaction hamiltonian $H_I$ which generates transitions corresponding to one spin flip but, in an analogous way, we could write more complicated
transition operators.
Let us illustrate, in a simple example, the difference between this
scheme and the standard one, based on the hypothesis of a transition
probability between chains depending only on their Hamming distance. Let us
consider the following string $RRRRR$. By a single spin flip the string
goes into one of the following configurations:
\begin{equation}
1) \; YRRRR \;\;\; 2) \; RYRRR \;\;\; 3) \; RRYRR \;\;\;
4) \; RRRYR \;\;\; 5) \; RRRRY \label{eq:states}
\end{equation}
In the models based on the Hamming distance, all the transitions are
equally probable, as the final strings are all at the same distance from
the original one. In the present scheme the first transition is ruled by the
value
of $\; \mu_{3}$, while the transitions 2--5 are ruled by $\; \mu_{1} \;$.
Let us stress that our scheme is not equivalent to an Ising model
with the transition strength depending on the position. To
illustrate the difference with a few examples let us consider the
transitions $\; RYYRR \to RYYYR\;$, $\;RYYRY \to RYYYY\;$, both
with a flip in the fourth position, the first one ruled by
$\mu_{2}$, the second one by $\mu_{1}$. Mutations in different points
can be ruled by the same coupling constant: $\; RYYRR \to RYYYR\; $,
$\; RYRRY \to RYYRY\; $
with a flip, respectively, in the
4th and 3rd position ruled by $\; \mu_{2}$.
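Applying the \texttt{delta\_labels} sketch of Sec. 2 to the five flips of eq.(\ref{eq:states}) makes this explicit: the first flip leaves every $J^{k}$ unchanged, while the flips at positions 2--5 lower $J^{N}$, and a tail of the other labels, by one.
\begin{verbatim}
seq = 'RRRRR'
for i in range(1, 6):
    dJ3, dJ = delta_labels(seq, i)   # from the Sec. 2 sketch
    print(i, flip(seq, i), dJ3, dJ)
# 1 YRRRR -1.0 [0.0, 0.0, 0.0, 0.0]    <-- the mu_3 channel
# 2 RYRRR -1.0 [-1.0, -1.0, -1.0, -1.0]
# ...                                  <-- the mu_1 channel
# 5 RRRRY -1.0 [0.0, 0.0, 0.0, -1.0]
\end{verbatim}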
As already said, the main motivation for introducing this quantum model is that it provides
the formal and conceptual language to write the transitions, ensuring at the same time, due to the unitary character of the evolution operator, the conservation of the probability. We shall briefly describe in Sec. 5 the outcome of this model, see (Minichini and Sciarrino, 2004a) for more details, which has only been reported to make, hopefully, more clear the structure of the classical model of the next section. Let us point out that there are very strong drawbacks in trying to further pursue the study of the quantum model; for example,
superposed states, that is linear combinations of sequences, do exist in such models, while
only the individual sequences have a biophysical interpretation.
\section{The classical model}
In the previous section we have introduced mutation-inducing operators based on the change of the global labels $J^{i}$. Using these results as a guide, we write a system of kinetic equations in which
the non-vanishing mutation matrix entries depend on the labels of the connected sequences.
We are interested in finding the stationary or equilibrium configuration of the $2^{N}$
different possible
sequences. Denoting by $p_{\mathbf{J}}(t)$ the probability distribution at
time $t$ of the sequence identified by the vector $\mid \mathbf{J}
\rangle$, a decoupled version of selection mutation equation, (see
(Hofbauer and Sigmund, 1988) for an exhaustive review), for a haploid organism, can be written as
\begin{equation}
\frac{d}{dt} \, p_{\mathbf{J}}(t) = p_{\mathbf{J}}(t)\left(R_{\mathbf{J}}-
\sum_{\mathbf{K}}\;R_{\mathbf{K}}\; p_{\mathbf{K}}(t)\right)+\sum_{\mathbf{K}}\;M_{\mathbf{J,K}}\; p_{\mathbf{K}}(t)
\label{eq:ME}
\end{equation}
where $R_{\mathbf{K}}$ is the Malthusian fitness of the sequence corresponding to
the vector $\mid \mathbf{K} \rangle$ and $M_{\mathbf{J,K}}$ are the entries of a
mutation matrix $M$ which satisfies
\begin{equation}
{\Large M}_{\mathbf{J},\mathbf{J}} = - \, \sum_{\mathbf{K} \neq
\mathbf{J}} \; {\Large M}_{\mathbf{J},\mathbf{K}}
\label{eq:norm}
\end{equation}
Equation (\ref{eq:ME}) reduces to
\begin{equation}
\frac{d}{dt} \, x_{\mathbf{J}}(t) = \sum_{\mathbf{K}}\;\left(H+M\right)_{\mathbf{J,K}}
\;x_{\mathbf{K}}(t)
\label{eq:Hamilt}
\end{equation}
where
\begin{equation}
x_{\mathbf{J}}(t) = p_{\mathbf{J}}(t)\exp\left(\sum_{\mathbf{K}}\;R_{\mathbf{K}}\;
\int_{0}^{t}\; p_{\mathbf{K}}(\tau)\,d\tau \right)
\label{eq:transf}
\end{equation}
and $H$ is a diagonal matrix, with fitness as entries ($R_{\mathbf{K}} = H_{{\mathbf{K}},{\mathbf{K}}}$).
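Equation (\ref{eq:ME}) can also be integrated directly; the sketch below uses plain Euler stepping (crude, but adequate for the short chains considered here), renormalizing at each step to guard against round-off drift.
\begin{verbatim}
import numpy as np

def rhs(p, R, M):
    # eq. (ME): dp/dt = p*(R - <R>) + M p, with <R> = sum_K R_K p_K
    return p * (R - R @ p) + M @ p

def evolve(p0, R, M, dt=1e-3, steps=100000):
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        p += dt * rhs(p, R, M)
        # p.sum() is conserved when the columns of M sum to zero
        # (true for our symmetric M with zero row sums); the division
        # below only guards against round-off drift
        p /= p.sum()
    return p
\end{verbatim}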
In our model the mutation matrix is written as the sum of the
partial mutation matrices ${M_i}$, which are obtained from the
interaction hamiltonians ${H_i}$ by replacing the adjoint operators with the transposed
ones (denoted by an upper label $^T$).
Assuming now that the coupling constants do not depend on $i,k,m$,
we can write the mutation matrix $M$ as (Minichini and Sciarrino, 2004b)
\begin{equation}
{M}=\mu_{1}({M_3}+{M_5})+\mu_{2}{M_1}+\mu_{3}{M_2}+\mu_{4}{M_6} + M_{D}
\end{equation}
where $M_{D}$ is the diagonal part of the mutation matrix defined by
eq.(\ref{eq:norm}).
The hierarchy of the values of the coupling constants is fixed as in the previous
section.
\section{Results}
The evolution equation of the model for the probabilities will be written in
terms of the matrix $\bar{H}= H + M + \lambda\mathbf{1}$,
where the fitness can be $H=J_3$ (purely additive fitness)
and $\lambda$ is chosen in such a way as to guarantee that $\bar{H}$ is
positive. Since $H + M$ is irreducible, the composition of the equilibrium population is
given by
\begin{equation}
p_{\mathbf{J}} =
\frac{\tilde{x}_{\mathbf{J}}}{\sum_{\mathbf{K}}\;\tilde{x}_{\mathbf{K}}}
\end{equation}
where $\tilde{x}_{\mathbf{J}}$ is the Perron-Frobenius eigenvector of
$\bar{H}$, see, e.g., (Encyclopedic Dictionary of Mathematics, 1960).
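Numerically, the equilibrium thus follows from a single eigenvalue problem; a minimal sketch (using the fact that $H$ and $M$ are symmetric in our model) is:
\begin{verbatim}
import numpy as np

def equilibrium(H, M, lam):
    # normalized Perron-Frobenius eigenvector of Hbar = H + M + lam*1
    Hbar = H + M + lam * np.eye(len(M))
    w, v = np.linalg.eigh(Hbar)     # eigenvalues in ascending order
    x = np.abs(v[:, -1])            # PF eigenvector, taken positive
    return x / x.sum()
\end{verbatim}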
In (Minichini and Sciarrino, 2004b) the
numerical solutions of the model have been reported, with a suitable choice of the value of
the parameters, for N = 3,4,6. %
Before discussing these results, we point out
explicitly the main features of our model.
$M$
describes an interaction on the $i$-th spin which depends neither on the
position nor on the nature of the closest neighbours, but which
takes into account, at least partially, the effects, on the transition
at the $i$-th site, of the
distribution of all the spins, that is, non-local effects.
Indeed it
depends on the ``ordered'' spin orientation surplus on the left and on
the right of the $i$-th position. Should it not depend on the order,
it
could be considered as a mean-field like effect. Moreover $\Delta J_3 =
\pm 1$ transitions are allowed which, e.g. for N = 4, can be considered either as the
flip of a spin combined with an exchange of the two, oppositely
oriented, previous or following spins, or as the collective flip of
particular three-spin systems containing a two-spin system with
opposite spin orientations (see the example below).
Biologically, the transition depends in some way on the ``ordered''
purine surplus on the left and on the right of the mutant position.
Let us briefly comment on the physical-biological meaning of the
``ordered'' spin sequence. Our aim is to study finite oligonucleotide
sequences in which a beginning and an end are defined. This implies that we
can neither take a thermodynamic limit on $N$ nor define periodic
conditions on the spin chain. So we have to take into account the
``edge'' or ``boundary'' conditions on the finite sequence. An analogous
problem appears in determining the thermodynamic properties of short
oligomers and, in this framework, in (Goldstein and Benight, 1992) the
concept of fictitious nucleotide pairs E and E' has been introduced,
in order to mimic the edge effects. The ordered couple of RY takes
into account in some way the different interactions of R and Y with
the edges.
For example, the transition matrix, on the above basis (for N = 3)
is the following
one, up to a multiplicative dimensional factor $\mu_{0}$
\begin{equation}
\label{matrixModel}
M =
\left(
\begin{array}{cccccccc}
x & \delta & 0 & \gamma & \epsilon & 0 & \epsilon & 0 \\
\delta & x & 0 & 0 & 0 & \epsilon & 0 & \epsilon \\
0 & 0 & x & \delta & \epsilon & 0 & \epsilon & 0 \\
\gamma & 0 & \delta & x & 0 & \epsilon & 0 & \epsilon \\
\epsilon & 0 & \epsilon & 0 & x & \delta & 0 & 0 \\
0 & \epsilon & 0 & \epsilon & \delta & x & \delta & 0 \\
\epsilon & 0 & \epsilon & 0 & 0 & \delta & x & \delta \\
0 & \epsilon & 0 & \epsilon & 0 & 0 & \delta & x
\end{array}
\right)
\end{equation}
where the diagonal entries $x$, not explicitly written, are given by eq.(\ref{eq:norm}).
Note that the above matrix depends only on three coupling
constants
due to the very short length of the chain. For $N \ge 4$ the 4th
coupling constant
(denoted in the following by $\eta$) will appear.
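For reference, the following fragment transcribes the matrix (\ref{matrixModel}), with the basis ordered as in the $N=3$ list of Sec. 2.1 and the diagonal filled in from eq.(\ref{eq:norm}).
\begin{verbatim}
import numpy as np

def M_model(delta, gamma, eps):
    d, g, e = delta, gamma, eps
    M = np.array([[0, d, 0, g, e, 0, e, 0],
                  [d, 0, 0, 0, 0, e, 0, e],
                  [0, 0, 0, d, e, 0, e, 0],
                  [g, 0, d, 0, 0, e, 0, e],
                  [e, 0, e, 0, 0, d, 0, 0],
                  [0, e, 0, e, d, 0, d, 0],
                  [e, 0, e, 0, 0, d, 0, d],
                  [0, e, 0, e, 0, 0, d, 0]], dtype=float)
    M -= np.diag(M.sum(axis=1))   # diagonal entries x from eq. (norm)
    return M
\end{verbatim}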
Let us emphasize that the mutation matrix $M$ (\ref{matrixModel}) does not
only
connect states at unit Hamming distance. As an example, we write
explicitly the
transitions from $\mid {\textstyle{\frac{1}{2}}},{\textstyle{\frac{1}{2}}},0\rangle$ ($\uparrow\downarrow\uparrow$)
and from
$\mid -{\textstyle{\frac{1}{2}}},{\textstyle{\frac{1}{2}}},0\rangle$ ($\uparrow\downarrow\doa$)
\begin{eqnarray*}
\uparrow\downarrow\uparrow \longrightarrow
\left\{
\begin{array}{c}
\uparrow\upa\uparrow \\
\downarrow\doa\uparrow \\
\uparrow\downarrow\doa
\end{array}
\right.& \qquad \uparrow\downarrow\doa \longrightarrow
\left\{
\begin{array}{c}
\downarrow\uparrow\upa \\
\downarrow\doa\downarrow \\
\uparrow\upa\downarrow \\
\uparrow\downarrow\uparrow
\end{array}
\right.
\end{eqnarray*}
The first transition of the second example can be regarded as a collective flip of the three spins.
Let us explicitly write, for $N = 3$, the
mutation matrix which allows transitions only between chains at
Hamming distance equal to one, with coupling constant $\alpha$.
\begin{equation}
\label{matrixHamm}
H_\mathrm{Hamm} =
\left(
\begin{array}{cccccccc}
y & \alpha & 0 & \alpha & \alpha & 0 & 0 & 0 \\
\alpha & y & 0 & 0 & 0 & \alpha & 0 & \alpha \\
0 & 0 & y & \alpha & \alpha & 0 & \alpha & 0 \\
\alpha & 0 & \alpha & y & 0 & 0 & 0 & \alpha \\
\alpha & 0 & \alpha & 0 & y & \alpha & 0 & 0 \\
0 & \alpha & 0 & 0 & \alpha & y & \alpha & 0 \\
0 & 0 & \alpha & 0 & 0 & \alpha & y & \alpha \\
0 & \alpha & 0 & \alpha & 0 & 0 & \alpha & y
\end{array}
\right)
\end{equation}
where the diagonal entries, not explicitly written, are given by eq.(\ref{eq:norm}).
Note that, even if we put in eq.(\ref{matrixModel}) all the constants equal to $\alpha$
($\delta = \gamma = \epsilon = \alpha$), we do not get the Hamming hamiltonian
(\ref{matrixHamm}).
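The Hamming mutation matrix, in contrast, is generated for any $N$ by the distance-one condition alone; the basis ordering is immaterial for the shape of the equilibrium distribution.
\begin{verbatim}
import numpy as np
from itertools import product

def M_hamming(N, alpha):
    chains = [''.join(s) for s in product('RY', repeat=N)]
    dist1 = lambda a, b: sum(x != y for x, y in zip(a, b)) == 1
    M = alpha * np.array([[dist1(a, b) for b in chains]
                          for a in chains], dtype=float)
    M -= np.diag(M.sum(axis=1))   # diagonal entries y from eq. (norm)
    return M
\end{verbatim}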
If we order (in a decreasing way) the equilibrium probabilities, we obtain,
using the mutation matrix based on the Hamming distance,
a rank-ordered
distribution like that in
fig.\ref{stepH16} for $N=4$. Its
shape does not depend on the value of $ \alpha$.
The rank-ordered distribution
of the probabilities shows a plateaux structure:
every
plateau contains spin sequences at the same Hamming distance from
the sequence with the highest value of the fitness.
Using the mutation matrix (\ref{matrixModel}), the
rank-ordered probability distribution does not show a
plateaux structure,
but its shape is well fitted by a Yule distribution
(fig.\ref{YuleH16}), like the observed frequency
distribution of oligonucleotides in the strings of nucleic
acids (Martindale and Konopka, 1996).
Let us observe that we obtain a Yule distribution (and not a plateaux
structure)
even if all parameters in (\ref{matrixModel}) are tuned at the same
value, which means that the distribution is the outcome of the model
and not of the choice of the values of the coupling constants.
Analogous results are obtained for $N=6$
(fig.\ref{N=6}).
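The fit itself can be reproduced with a standard least-squares routine; in the sketch below, \texttt{p} is assumed to hold the equilibrium probabilities computed with one of the fragments above.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def yule(n, a, k, b):
    return a * n**k * b**n          # eq. (Yule)

ranked = np.sort(p)[::-1]           # p: equilibrium probabilities
n = np.arange(1, len(ranked) + 1)
(a, k, b), _ = curve_fit(yule, n, ranked, p0=(ranked[0], -0.5, 0.9))
\end{verbatim}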
Let us point out that:
i) our model is not equivalent to a model where
the intensity depends on the site undergoing the
transition, or on the nature of the closest neighbours, or on the
number of the $R$ and $Y$ labels of the sequence; indeed,
essentially
the intensity depends on the distribution in the sequence of
the $R$ and $Y$;
ii) the ranked distribution of the
probabilities follows a Yule distribution law but, as the value of
the parameter $b$ is close to unity,
the distribution is equally well
fitted by a Zipf law (Zipf, 1949) ($f = a
\, {n^k}$), in
agreement with the remark of (Martindale and Konopka, 1996).
Let us also briefly recall the outcomes of the \textit{genetically inspired} quantum spin model presented in Sec. 3.
We can study the time evolution of an initial state, representing a given
spin chain, and evaluate the probability of a transition into another one, if
$H$ is the hamiltonian which generates the dynamics of the system.
The matrix form of $H$ on the above basis
is obtained (for $N = 3$) by replacing in eq.(\ref{matrixModel}) the diagonal terms with the eigenvalues of $J_{3}$, i.e. by, respectively, (-1,1,-1,1,-3,-1,1,3) (up to a multiplicative factor $1/2$).
%
Analogously we can study the dynamics of an ordered quantum spin
chain, with an interaction Hamiltonian, leading to transitions with the same probability
between nucleotide strings at unit Hamming distance, whose matrix, for $N = 3$, is obtained by eq.(\ref{matrixHamm}) by replacing the diagonal terms with the eigenvalues of $J_{3}$.
In order to evaluate the probabilities of transition, we cannot analytically study
the time evolution of an initial state, representing a fixed spin sequence, as ruled by
eq.(\ref{matrixModel}) with the change of the diagonal terms, but we can find a numerical solution. %
The transition probability between two states, belonging to the crystal basis,
exhibits the quantum mechanically typical oscillating behaviour as a function of the time.
We define a time-averaged transition probability (initial state (i) $ \longrightarrow $ final state (f))
\begin{equation}
<p_{if}> \, = \, \frac{1}{T} \; \int_0^T \; p_{if}(t) \, dt \label{eq:tav}
\end{equation}
where the value of $T$ will be numerically fixed to a value such that the
r.h.s. of eq.(\ref{eq:tav}) becomes stable.
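Since the hamiltonians involved are real symmetric, a single diagonalization yields the whole time evolution, and the average in eq.(\ref{eq:tav}) can be approximated on a uniform grid, as in the following sketch.
\begin{verbatim}
import numpy as np

def averaged_transitions(H, i, T=200.0, steps=2000):
    # time-averaged probabilities <p_if> of eq. (tav), for all f at once
    w, v = np.linalg.eigh(H)
    c = v[i, :]                # overlaps of |i> with the eigenvectors
    probs = np.zeros(len(H))
    for t in np.linspace(0.0, T, steps):
        amp = v @ (np.exp(-1j * w * t) * c)   # <f| exp(-iHt) |i>
        probs += np.abs(amp) ** 2
    return probs / steps
\end{verbatim}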
If we order (in a decreasing way) the average transition probabilities from an
initial state to every other chain, when (\ref{matrixHamm}) is the hamiltonian, we obtain a rank-ordered
distribution of transition probabilities like that in fig.\ref{stepH16}. Its
shape depends neither on the choice of the initial state nor on the value of the
coupling constant $\alpha$.
We always get the same structure for models with transition probabilities
depending only on Hamming distances. So the rank-ordered distribution
of the average transition probability shows a plateaux structure: every
step contains spin chains at the same Hamming distance from the initial one.
In the case of the model which we propose here, i.e. the hamiltonian in
(\ref{matrixModel}), which we call crystal basis model, the distribution
of rank ordered average transition probability does not show a plateaux structure,
but its shape is well fitted by a Yule distribution like that in fig.\ref{YuleH16}.
%
Also in the quantum model, we obtain a Yule distribution (and not a plateaux structure)
even if all parameters in (\ref{matrixModel}) are tuned at the same value.
In this case, the state labelled by 1 in the plots is the initial one. The ranked distribution of the
probabilities, not averaged in time, computed for several values of
the time, also generally follows a Yule distribution law. Moreover we still remark that,
for the highest value of $N$, the distribution is equally well
fitted by a Zipf law, i.e. $ b = 1$ in eq.(\ref{eq:Yule}), but not for the lowest values of $N$, in
agreement with the remark of (Martindale and Konopka, 1996).
\section{The four letter model}
In order to label a sequence of N nucleotides, taking into account that they belong to the four-letter set \{C,T/U,G,A\}, we assign the 4 nucleotides to the 4-dim fundamental irreducible
representation (irrep.) $(1/2, 1/2)$ of $\mathcal{U}_{q \to 0}(sl(2) \oplus sl(2))$ (Frappat, Sciarrino and Sorba, 1998), with the following assignment for the values of the third component of
$\vec{J}$ for the two $sl(2)$, which in the following will be denoted
as $sl_{H}(2) $ and $sl_{V}(2) $:
\begin{equation}
\mbox{C} \equiv (+{\textstyle{\frac{1}{2}}},+{\textstyle{\frac{1}{2}}}) \qquad \mbox{T/U} \equiv (-{\textstyle{\frac{1}{2}}},+{\textstyle{\frac{1}{2}}})
\qquad \mbox{G} \equiv (+{\textstyle{\frac{1}{2}}},-{\textstyle{\frac{1}{2}}}) \qquad \mbox{A} \equiv
(-{\textstyle{\frac{1}{2}}},-{\textstyle{\frac{1}{2}}})
\label{eq:gc1}
\end{equation}
It follows that an ordered sequence of N nucleotides can be represented as a vector belonging to the
N-fold tensor product of the fundamental irreducible representation
of $\mathcal{U}_{q \to 0}(sl(2) \oplus sl(2))$, in a straightforward generalization of the approach followed in Sec. 2 for $\mathcal{U}_{q \to 0}(sl(2))$. In the following we use the symbols $X$ for C,G and $Z$ for U,A. In the formalism of $\mathcal{U}_{q \to 0}(sl(2) \oplus sl(2))$, all the previous results have to be understood as referring to $sl_{V}(2)$.
Now we identify an N-nucleotide sequence as a state
\begin{equation}
\mid \mathbf{J}_{H} \mathbf{J}_{V} \rangle = \mid J_{3,H}, J_{3,V}; J_{H} ^N, J_{V} ^N; \ldots ;J_{H} ^2, J_{V} ^2 \rangle
\end{equation}
where $J^{N}_{m}$ ($m = H, V$) labels the irrep. which the state belongs to, $J_{3,m}$ is the value of the 3rd diagonal generator of $\mathcal{U}_{q \to 0}(sl_{m}(2))$ ($2J_{3,H} = n_X - n_Z$ and, consistently with the assignment eq.(\ref{eq:gc1}), $2J_{3,V} = n_Y - n_R$) and $J_{m}^i$ ($2 \leq \; i \; \leq N - 1$) are $ 2(N - 2) $ labels
needed to completely identify the state. As an example, the trinucleotide string CGA is labeled by
\begin{equation}
\mid CGA \rangle = \mid \left( {\textstyle{\frac{1}{2}}} \right)_{H}, - \left({\textstyle{\frac{1}{2}}} \right)_{V}; \left({\textstyle{\frac{1}{2}}} \right)_{H}, \left({\textstyle{\frac{1}{2}}} \right)_{V}; \left(1\right)_{H} \left(1 \right)_{V} \rangle
\end{equation}
The previously introduced scalar product is straightforwardly
generalized.
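Concretely, each sequence can be projected onto its two binary strings and labelled with the two-letter machinery of Sec. 2; in the sketch below (which encodes the assignment eq.(\ref{eq:gc1})), the letters R/Y simply denote spin up/down in each factor, so that the \texttt{labels} function of Sec. 2.1 applies verbatim.
\begin{verbatim}
UP_H = set('CG')   # X = {C,G} carry J_{3,H} = +1/2; Z = {U,A} carry -1/2
UP_V = set('CU')   # C,U carry J_{3,V} = +1/2; G,A carry -1/2

def labels_HV(seq):
    h = ''.join('R' if c in UP_H else 'Y' for c in seq)
    v = ''.join('R' if c in UP_V else 'Y' for c in seq)
    return labels(h), labels(v)   # labels() from the Sec. 2.1 sketch

# CGA: J_{3,H} = +1/2 and J_{3,V} = -1/2, as in the example above
print(labels_HV('CGA'))
\end{verbatim}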
In the present paper, we only consider a single spin flip in H or V, or in both H and V, which in most cases, but not always, is equivalent to a single nucleotide mutation. Obviously an H spin flip corresponds to a biological \textit{transition}, while a V or a simultaneous H,V flip corresponds to a \textit{transversion}.
Flipping one spin can induce a transition to a state belonging or not
belonging to the irrep. of the original state.
From an immediate generalisation of the results of
Appendix A, in order to identify a nucleotide sequence as a state of
an irrep., we need to fix the number of contracted RY and XZ couples occurring
in the considered sequence.
Therefore flipping a spin implies either the creation or the deletion of
a contracted RY or XZ couple (or of both), corresponding respectively to a variation
of $-1$ or $+1$ of the value of $J_{V} ^N$, $J_{H} ^N$ or both and, possibly, of some other $J_{m}^i$
($2 \leq i \leq N-1$), or it leaves the number of
contracted couples unmodified (so that $\Delta J_{m} ^N = 0$, but some other
$J_{m} ^{i}$ are modified).
We focus our attention on the spin flips
at the $i$-th position and we proceed in a completely analogous way as in Sec. 2, but taking into account the two couples RY and XZ.
Assuming, as previously, that the coupling constants do not depend on $i,k,m$,
we write the mutation matrix $M$ as
\begin{eqnarray}
{M } & = & {M}_{H} + {M}_{V} \nonumber \\
& = & \mu_{1}({M_{3,H}} + {M_{5,H}}) + \mu_{2}{M_{1,H}} + \mu_{3}{M_{2,H}} + \mu_{4}{M_{6,H}}
\nonumber \\
& + &
\lambda_{1}({M_{3,V}} + {M_{5,V}}) + \lambda_{2}{M_{1,V}} + \lambda_{3}{M_{2,V}} + \lambda_{4}{M_{6,V}} + M_{D}
\label{eq:mhv}
\end{eqnarray}
where $M_{D}$ is the diagonal part of the mutation matrix defined by eq.(\ref{eq:norm}), and
${M_{k,m}}$ ($ k = 1,2,3,5,6; m = H,V$) are the off-diagonal mutation matrices defined by the
following operators, where we have omitted to explicitly write the coupling constants
\begin{equation}
{H_{1,m}} = {A_{i,k;m}J_{-,m} + J_{+,m}A_{i,k;m}^{\dagger}}
\end{equation}
\begin{equation}
{H_{2,m}} = J_{-,m} + J_{+,m}
\end{equation}
\begin{equation}
{H_{3,m}} = {A_{i;m}J_{-,m} + J_{+,m}A_{i;m}^{\dagger}}
\end{equation}
\begin{equation}
{H_{4,m}} = {A_{i;m}J_{-,m} + J_{+,m}A_{i;m}^{\dagger}}
\end{equation}
\begin{equation}
{H_{5,m}} = {J_{-,m} A_{m;m}^{\dagger} + A_{m;m}J_{+,m}}
\end{equation}
\begin{equation}
{H_{6,m}} = {A_{i,k;m} J_{-,m} A_{k+1;m}^{\dagger} + A_{i,k;m}^{\dagger} A_{k+1;m} J_{+,m}} \label{eq:6b}
\end{equation}
Note that in eq.(\ref{eq:mhv}) we have not introduced a coupling term between the two $sl(2)$, i.e. a mutation matrix of the type ${M}_{H,V} \propto J_{+,H}J_{+,V}$ or ${M}_{H,V} \propto J_{-,H}J_{-,V}$.
In order to fit the phenomenological observation that transitions occur more frequently than transversions, we have to fix the coupling constants $\lambda$ to be of the order of $1/2$--$1/3$ of the coupling constants $\mu$. Let us remark that, with the chosen mutation matrix eq.(\ref{eq:mhv}), a single nucleotide mutation does not necessarily correspond to a single H-spin or V-spin flip. Indeed the mutations $ C \leftrightarrow A$ and $T \leftrightarrow G$ imply a flip of both the H and V spins; therefore these mutations should be depressed.
\section{The rank ordered distribution of codons}
In (Frappat, Sciarrino and Sorba, 1998) a mathematical model for the genetic code, called the crystal basis model, has been proposed where, from the assignment eq.(\ref{eq:gc1}) of
the four nucleotides to the 4-dim fundamental
$({\textstyle{\frac{1}{2}}},{\textstyle{\frac{1}{2}}})$ irreducible representation of
the quantum group ${\cal U}_{q \to 0}(sl(2) \oplus sl(2))$, the codons ($3$-nucleotide
sequences) appear as composite states in the $3$-fold tensor product of $({\textstyle{\frac{1}{2}}},{\textstyle{\frac{1}{2}}})$.
From the general formalism of the previous section, a codon is identified as a state
\begin{equation}
\mid \mathbf{J}_{H}\rangle \, \otimes \mid \mathbf{J}_{V} \rangle\equiv
\mid \mathbf{J}_{H} \mathbf{J}_{V} \rangle = \mid J_{3,H}, J_{3,V};
J_{H} ^3 J_{V} ^3; J_{H} ^2, J_{V} ^2 \rangle
\end{equation}
For example we have, see (Frappat, Sciarrino and Sorba, 2001) for a list of all the states:
$$
\mid CGA \rangle = \mid \left( {\textstyle{\frac{1}{2}}} \right)_{H}, - \left({\textstyle{\frac{1}{2}}}
\right)_{V}; \left({\textstyle{\frac{1}{2}}} \right)_{H}, \left({\textstyle{\frac{1}{2}}} \right)_{V};
\left(1\right)_{H} \left(1 \right)_{V} \rangle
$$
The mutation matrix eq.(\ref{eq:mhv}) now becomes
\begin{eqnarray*}
{M}&=&\sum_{m=H,V} \sum_{i=2,3} \mu_{1,m}[(A_{i,m}J_{-,m}+ J_{+,m}
A_{i,m}^T) \\
& + & (J_{-,m} A_{i,m}^{T} + A_{i,m} J_{+,m}) ]
+ \mu_{2,m} \, (J_{-,m} + J_{+,m}) \\
& + & \mu_{3,m }( B_{m} J_{-,m} +
J_{+,m} B_{m}^T) + M_{D,m}
\end{eqnarray*}
where
\begin{eqnarray}
B_{m} \mid \mathbf{J} \rangle & = & \mid J_{3,m}, J^3_m, J^2_m - 1 \rangle \\
A_{i,m} \mid \mathbf{J} \rangle & = & \mid J_{3,m} ,J^{3}_m-1, \ldots
J^{i}_m-1, \ldots \rangle
\quad (2 \leq i \leq 3)
\end{eqnarray}
and $M_{D}$ is the diagonal part of the mutation matrix.
We are interested in finding the stationary solution of eq.(\ref{eq:ME}) for the
$64$ different possible sequences. We choose
the following form for the (purely additive) fitness
$H = J_{3,H} + J_{3,V} + \lambda\mathbf{1}$,
$\lambda > 0$ ensuring $H + M$ to be positive.
Below we report several representative figures in which the obtained numerical solutions are fitted with a function given by eq.(\ref{eq:bf}) (we omit the hat on the parameters). In figs.\ref{Par96}-\ref{Par99}, with a suitable choice of
the values of the parameters, our results are well fitted.
In fig.\ref{Par94} we report another solution, where the ratio, denoted by $(H/V)$, between the transition and transversion mutation intensities is chosen larger than one, but the values of the coupling constants do not satisfy the hierarchy $\mu_{1,H} \, < \, \mu_{2,H}, \mu_{3,H}$; this solution is less well fitted.
In fig.\ref{Par98} we report another solution with an unrealistic choice of
the value of the ratio $(H/V)$ ($(H/V) \approx 10$), which is, indeed, badly fitted by a function given by eq.(\ref{eq:bf}). Finally in fig.\ref{Par87} we report another solution, also badly fitted, where $(H/V) \approx 10^{-1}$. This last result is a consequence of the fact that we have chosen a fitness symmetric under the exchange $H \leftrightarrow V$: therefore the exchange of the values of the coupling constants between $M_H$ and $M_V$ gives the same shape of the distribution. Of course the rank of the same codon is, in general, different in the two cases. Summarizing, we can state that the numerical solutions of our model, for an arbitrary choice of the values of the coupling constants, are rather well fitted by a function of the type given in eq.(\ref{eq:bf}), with a suitable choice of the parameters, but that
an unrealistic choice of the values of the coupling constants, e.g.
a very high or very low transversion/transition ratio,
seems to destroy the goodness of the fit. Moreover, it is quite surprising to remark that the values of the parameters in the function eq.(\ref{eq:bf}) which fits our numerical solutions are of the same order of magnitude as the parameters (depending on the total $GC$ content) found in (Frappat, Minichini, Sciarrino and Sorba, 2003) to best fit the observed rank-ordered distribution. In the present paper, the values of $ \widehat{\alpha}$ and $ \widehat{\eta}$ are found to be slightly larger than the ones computed in (Frappat, Minichini, Sciarrino and Sorba, 2003). Let us stress once more that
a mutation matrix $M$ with non-vanishing off-diagonal entries
connecting only codons at Hamming distance equal to one is unable to reproduce the observed rank-ordered distribution, as it induces mutations between classes of codons at the same Hamming distance.
We have considered separately the fitness and mutation matrix for the {\it horizontal} and
{\it vertical} labels of the codons. As, a priori, one can also consider a coupling term between the two parts, our simplified treatment has to be considered as a first step on the way to constructing a realistic model. We have also performed a preliminary analysis with a value $\rho_H$ of the {\it horizontal} fitness different from the value $\rho_V$ of the {\it vertical} one. It appears that the outcome depends on the ratio $\rho_H/\rho_V$ as well as on the ratio between the values of the $\rho$ and the value of the transition coupling constant $\mu$ (besides the discussed dependence on the ratio $\mu_H/\mu_V$ and on the hierarchy of the values of the different $\mu_H$ and $\mu_V$). So we believe that
a better understanding of the form of the fitness and of the hierarchy of the values of the mutation parameters, as well as of the reliability of a model which explains the rank-ordered distribution of the codons as a consequence of the mutation-selection of 64 triplets, is necessary before further pursuing the numerical analysis.
\section{Conclusions}
We have proposed a model which is not analytically soluble, but admits an
easy numerical solution for short spin chains.
Let us emphasize that the main purpose of the proposed scheme is to
take into account, at least partially, the effects of the neighbours
in the mutation.
We point out, once more, that our model is not equivalent to a model where
the intensity depends on the site undergoing the
transition, or on the nature of the closest neighbours, or on the
number of the $R$ and $Y$ labels of the sequence; indeed, essentially
the intensity depends on the distribution in the sequence of
the $R$ and $Y$.
%
We find that the numerically computed stationary distribution for short oligonucleotides follows a Yule or Zipf law, in agreement with the observed distribution.
We are far from claiming, for several obvious reasons, that our simple
model is the only one able to explain the observed oligonucleotide distribution; we do claim, however, that the standard approach using the
Hamming distance does not provide such a solution.
One may correctly argue that the comparison between the Hamming model,
depending on only one parameter and taking into account only
single-site spin flips, and our model, which depends on four parameters and
takes into account spin flips of more than one site, is not meaningful. So we
have computed the stationary distribution with a mutation matrix
not vanishing for Hamming distances larger than one and allowing the
same number of mutations as our model. The
result, reported in fig.\ref{fig:cdc}, shows that the plateaux structure is
always the dominant feature. Let us comment on the non-point mutations which
naturally are present in our model. In the literature there is
an increasing number of papers that, on the basis of more accurate data,
question both the assumption that mutations occur as single-nucleotide
events and that they are independent point events. In a quite recent paper,
(Whelan and Goldman, 2004)
presented a model allowing for single-nucleotide,
doublet and triplet mutations, finding that the model provides
statistically significant improvements in fits with protein coding
sequences. We note that the triplet mutations, for which there is no known
inducing mechanism, but which can possibly be explained by a large-scale
event, called sequence inversion
in (Whelan and Goldman, 2004), are indeed the kind of mutations, discussed above, that
our model naturally describes.
Doublet mutations do not appear, due to the assumed total spin flip equal to $\pm 1$; but,
on
the one side, some of these mutations are hidden by the binary
approximation and, on the other side, the parameter ruling such
mutations, as computed in (Whelan and Goldman, 2004), is lower than the one ruling the triplet mutations.
In conclusion, the Hamming distance does not seem
a suitable measure of the distance in the space of biological sequences;
the crystal basis, on the contrary, seems a better candidate to parametrize
the elements of such a space.
Our model makes use of this parametrisation, allows one to model
some non-point mutations and exhibits intriguing and interesting features,
hinting in the right direction, which are worthwhile to
investigate further.
In the present simple version, the model depends
only on 4 (resp. 8) parameters in the two-letter (resp. four-letter) alphabet for any N, which are, very likely, not enough to
describe sequences longer than the considered ones. However the model is rather
flexible: as shown in the case of the codons, it is easily generalised to the four-letter
alphabet; besides the obvious introduction of
more coupling constants, it allows one, e.g., to analyse parts of the sequences containing
mutation hot spots, and to
take into account doublet mutations (indeed the
operator of eq.(\ref{eq:ik}), or $A_{i,i+1}^{T}$, describes a doublet spin
flip at positions $i,i+1$).
Although the very short chains which we were interested in can be studied numerically
without any use of the crystal basis, we propose a
general algorithm which can be applied to chains of arbitrary
length and which can easily be implemented on computers.
It is worthwhile to remark that we are trying to compare theoretical
results, derived from simple models, with actually observed data, coming
from the extremely complex biological world. In this context the
crystal basis provides a compact and useful notation to describe the
``kinematical'' variables which are changed by the dynamics.
The generalisation of our approach to a four-letter alphabet, which is easily done replacing ${\cal U}_{q}(sl(2))$
by ${\cal U}_{q}(sl(2) \oplus sl(2))$, has been presented and applied to the study of the mutations of the codons.
As expected, calculations are more complicated and only a few results in the simple case of the triplets are given. In this framework, further investigations deserve attention, in particular the study of the oligonucleotide distribution in the four-letter alphabet and of mutations in long sequences.
In conclusion we point out that:
\begin{itemize}
\item the crystal basis provides an alternative way of labelling
nucleotide sequences, in particular codons or genes, mapping any finite ordered nucleotide sequence onto a vector state of an irrep. We point out that the choice of the
limit $q \to 0$ ({\bf crystal basis}) is essential for the above identification as, only in this limit,
due to the Kashiwara theorem (Kashiwara, 1990),
the composite states are pure states.
\item the mutation matrix $M$ in our model
describes an interaction on the $i$-th nucleotide depending on the
input-output sequences and, in the flip of one spin (or of a double spin), inherently takes into account
non-local effects.
So the crystal basis variables are suitable to
partially describe the non-local events which affect the mutations.
\item models based on the crystal basis
seem, in the light of the obtained results, better candidates than models based on the Hamming distance to
describe mutations.
\end{itemize}
As a final remark, this article should be seen as a first, simplified attempt to build models, more realistic than the ones based on the Hamming distance, to describe the effects of mutation-selection on the observed distribution of oligonucleotides.
This paper is devoted to the stability analysis of a unique (up to translation) traveling wave solution to a thermo-diffusive model of flame propagation with stepwise temperature kinetics and first-order reaction (see \cite{BGKS15}) at high Lewis numbers, namely $\Le>1$. The problem reads in one spatial dimension:
\begin{eqnarray}\label{problem-1}
\left\{
\begin{array}{l}
\displaystyle\frac{\partial\Theta}{\partial t}=\frac{\partial^2\Theta}{\partial x^2}+W(\Theta,\Phi), \\[2mm]
\displaystyle\frac{\partial\Phi}{\partial t}={\Le}^{-1}\frac{\partial^2\Phi}{\partial x^2}-W(\Theta,\Phi).
\end{array}
\right.
\end{eqnarray}
Here, $\Theta$ and $\Phi$ are the appropriately normalized temperature and concentration of the deficient reactant,
$x\in \R$ denotes the spatial coordinate and $t>0$ the time. The nonlinear term $W(\Theta,\Phi)$ is a scaled reaction rate given by (see \cite[Section 2, formula (3)]{BGKS15}):
\begin{eqnarray}\label{problem-2}
W(\Theta,\Phi)=
\left\{
\begin{array}{lllll}
A\Phi, & \mbox{if} & \Theta\ge \Theta_i, \\[2mm]
0, & \mbox{if} &\Theta<\Theta_i.
\end{array}
\right.
\end{eqnarray}
In \eqref{problem-2}, $0<\Theta_i<1$ is the reduced ignition temperature and $A>0$ is a normalization factor depending on $\Theta_i$ and $\Le$, to be determined hereafter for the purpose of ensuring that the speed of the traveling wave is set at unity. Moreover, the following boundary conditions hold at $\pm \infty$:
\begin{eqnarray}
\label{boundary condition-12}
\begin{matrix}
\Theta(t, -\infty)=1, & & \Theta(t,\infty)=0,\\
\Phi(t, -\infty)=0, & & \Phi(t,\infty)=1.
\end{matrix}
\end{eqnarray}
In this first-order stepwise kinetics model, $\Phi$ does not vanish except as $t$ tends to $-\infty$. Thus, problem \eqref{problem-1}-\eqref{boundary condition-12} belongs to the class of parabolic Partial Differential Equations with discontinuous nonlinearities. Models in combustion theory and other fields (see, e.g. \cite[Section 1]{AF82}) involving discontinuous reaction terms have long been used by physicists and engineers because of their manageability; as a result, elliptic and parabolic PDEs with discontinuous nonlinearities, and related Free Boundary Problems, have received close attention from the mathematical community (see \cite[Section 1]{ABLZ18} and references therein). We quote in particular the paper \cite{C80}, by K.-C. Chang, which contains a systematic study of elliptic PDEs with discontinuous nonlinearities (DNDE).
In this paper, we consider the case of a free \textsl{ignition interface} $g(t)$ defined by
\begin{equation}
\label{ignition}
\Theta(t,g(t))=\Theta_i,
\end{equation}
such that $\Theta(t,x)>\Theta_i$ for $x>g(t)$ and $\Theta(t,x)<\Theta_i$ for $x<g(t)$. Formula (\ref{ignition}) means that the ignition temperature $\Theta_i$ is reached at the ignition interface which defines the flame front. We point out that, in contrast to conventional Arrhenius kinetics where the reaction zone is infinitely thin, the reaction zone for stepwise temperature kinetics is of order unity (thick flame). It is also interesting to compare the first-order stepwise kinetics with the zero-order kinetics model (see \cite{ABLZ18,BGKS15,BGZ16}): in the zero-order kinetics, $\Phi(t,x)$ vanishes at a \textit{trailing interface} and does not appear explicitly in the nonlinear term (see \cite[Section 2, formula (4)]{BGKS15}).
According to \eqref{ignition}, the system for $\pmb{X}= (\Theta,\Phi)$ reads as follows, for $ t>0$ and $x \in \R, x \neq g(t)$:
\begin{eqnarray}
\label{system-1}
&&\left\{\begin{aligned}
&\frac{\partial\Theta}{\partial t}=\frac{\partial^2\Theta}{\partial x^2}+A\Phi,&x<g(t),\\
&\frac{\partial\Phi}{\partial t}={\rm \Le}^{-1}\frac{\partial^2\Phi}{\partial x^2}-A\Phi,\quad &x<g(t),
\end{aligned}\right.\\[2mm]
\label{system-2}
&&\left\{\begin{aligned}
&\frac{\partial\Theta}{\partial t}=\frac{\partial^2\Theta}{\partial x^2},&x>g(t),\\
&\frac{\partial\Phi}{\partial t}={\rm \Le}^{-1}\frac{\partial^2\Phi}{\partial x^2},\quad &x>g(t).
\end{aligned}\right.
\end{eqnarray}
At the free interface $x=g(t)$, the following continuity conditions hold:
\begin{equation}\label{system 1-2}
[\Theta]=[\Phi]=0, \qquad\;\, \bigg [\frac{\partial\Theta}{\partial x}\bigg ]=\bigg [\frac{\partial\Phi}{\partial x}\bigg ]=0,
\end{equation}
where we denote by $[f]$ the jump of a function $f$ at a point $x_0$, i.e., the difference $f(x_0^+)-f(x_0^-)$.
The system above admits a unique (up to translation) traveling wave solution $\pmb U=(\Theta^0,\Phi^0)$ which propagates with constant positive velocity $V$. In the moving frame coordinate $z=x-Vt$, by choosing
\begin{equation}
\label{eqn:A}
A=\frac{\Theta_i}{1-\Theta_i}\bigg(1+\frac{\Theta_i}{\Le(1-\Theta_i)}\bigg),
\end{equation}
to have $V=1$ and, hence, $z=x-t$, the traveling wave solution is explicitly given by the following formulae:
\begin{eqnarray*}
&\Theta^0(z)&=
\left\{\begin{aligned}
&1-(1-\Theta_i)e^{\frac{\Theta_i}{1-\Theta_i}z},& \ \ &z<0,&\\
&\Theta_ie^{-z},& \ \ &z>0,&
\end{aligned}\right.\\[2mm]
&\Phi^0(z)&=
\left\{\begin{aligned}
&\frac{\Theta_i}{A(1-\Theta_i)}e^{\frac{\Theta_i}{1-\Theta_i}z},& \ \ &z<0,&\\
&1+\left (\frac{\Theta_i}{A(1-\Theta_i)}-1\right )e^{-\Le z},& \ \ &z>0.&
\end{aligned}\right.
\end{eqnarray*}
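For the reader's convenience, let us indicate where \eqref{eqn:A} comes from: $\Theta^0$, $\Phi^0$ and $\frac{d\Theta^0}{dz}$ are continuous at $z=0$ whatever the value of $A>0$, while imposing the remaining continuity condition $\big [\frac{d\Phi^0}{dz}\big ]=0$ in \eqref{system 1-2} yields
\begin{eqnarray*}
\frac{\Theta_i^2}{A(1-\Theta_i)^2}=\frac{d\Phi^0}{dz}(0^-)=\frac{d\Phi^0}{dz}(0^+)=\Le\bigg (1-\frac{\Theta_i}{A(1-\Theta_i)}\bigg ),
\end{eqnarray*}
which, once solved for $A$, gives exactly \eqref{eqn:A}.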
The goal of this paper is the analysis of the stability of the traveling wave solution $\pmb U$ in the case of high Lewis numbers ($\Le>1$). Here, stability refers to orbital stability with asymptotic phase, because of the translation invariance of the traveling wave. It is known (see \cite[Section 3.2]{BGKS15})
that large enough Lewis numbers give rise to \textit{pulsating instabilities}, i.e., oscillatory behavior of the flame. This is
very unlike the \textit{cellular instabilities} (that is, pattern formation) arising at relatively small Lewis numbers ($\Le<1$); in the latter case, a paradigm for the evolution of the disturbed flame front is the Kuramoto-Sivashinsky equation (see \cite{MS79, S80}, and also \cite{BHL13,BHL09, BHL11, BHLS10, BLSX10}).
The paper is organized as follows: In Section \ref{linear operator}, we first transform the free interface problem to a system of parabolic equations on a fixed domain. Then, in the spirit of \cite{BHL00,Lorenzi02,Lorenzi02-b}, the perturbation $\pmb u$ of the traveling wave $\pmb U$ is split as $\displaystyle\pmb u= s\frac{d\pmb U}{d\xi} +\pmb v$ (``ansatz 1''), in which $s$ is the perturbation of the front $g$. The largest part of the section is devoted to a thorough study of the linearization at $0$ of the elliptic part of the parabolic system in a weighted space $\bm{\mathcal W}$ where its realization $L$ is sectorial (see Subsection \ref{linearized subsect} for further details about the use of a weighted space). Furthermore, we determine the spectrum of $L$ which contains $(-\infty,-\frac{1}{4}]$, a parabola and its interior, the roots of the so-called dispersion relation, and the eigenvalue $0$.
Thereafter, an important point is getting rid of the eigenvalue 0 which, as already stressed, is generated by translation invariance. In Section \ref{sect-3}, we use a spectral projection $P$ as well as ``ansatz 2'' and then derive the fully nonlinear problem (see, e.g. \cite{Lunardi96}) for $\pmb w$:
\begin{equation*}
\frac{\partial\pmb w}{\partial\tau} = (I-P)L\pmb w + {F}(\pmb w).
\end{equation*}
Next, in Sections \ref{stability} and \ref{sect-5} we use the bifurcation parameter $m$ defined by
\begin{eqnarray*}
m:=\frac{\Theta_i}{1-\Theta_i}
\end{eqnarray*}
to investigate the stability of the traveling wave. Simultaneously, since, as already noted, pulsating instability is likely to occur at large Lewis numbers, it is natural to introduce a small perturbation parameter $\varepsilon >0$ (dimensionless diffusion coefficient) defined by $\varepsilon:=\Le^{-1}$, so that \eqref{eqn:A} reads $A =m+\varep m^2$.
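Indeed, a direct substitution in \eqref{eqn:A} gives
\begin{eqnarray*}
A=\frac{\Theta_i}{1-\Theta_i}\bigg (1+\frac{1}{\Le}\,\frac{\Theta_i}{1-\Theta_i}\bigg )=m(1+\varep m)=m+\varep m^2.
\end{eqnarray*}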
The simplest situation arises in the asymptotic case of gasless combustion when $\Le =\infty$, as in \cite{GJ09}. As is easily seen, as $\varepsilon \to 0$, problem \eqref{system-1}-\eqref{system-2} formally converges to:
\begin{eqnarray}
\label{system-1-limit}
&&\left\{\begin{aligned}
&\frac{\partial\Theta}{\partial t}=\frac{\partial^2\Theta}{\partial x^2}+A\Phi,\quad &x<g(t),\\
&\frac{\partial\Phi}{\partial t}=-A\Phi,&x<g(t),
\end{aligned}\right.\\[2mm]
\label{system-2-limit}
&&\left\{\begin{aligned}
&\frac{\partial\Theta}{\partial t}=\frac{\partial^2\Theta}{\partial x^2},\quad &x>g(t),\\
&\Phi\equiv 1,&x>g(t),
\end{aligned}\right.
\end{eqnarray}
with conditions $[\Theta]= [\Phi] = 0$, $\displaystyle\left [\frac{\partial\Theta}{\partial x}\right ]=0$ at the free interface $x=g(t)$. However,
the limit free interface system \eqref{system-1-limit}-\eqref{system-2-limit} is only partly parabolic.
At the outset, we fix $m$ in Section \ref{stability} and let $\varep$ tend to $0$, which allows us to apply the classical Hurwitz Theorem in complex analysis to the \textit{dispersion relation} $D_{\varep}(\lambda,m)$. Our first main result, Theorem \ref{stability theorem TW}, states that, for $2<m<m^c=6$ and $0<\varep<\varep_0(m)$, the traveling wave $\pmb U$ is orbitally stable with asymptotic phase and, for $m>m^c=6$, it is unstable. To give a broad picture, we take advantage of the regular convergence of the point spectrum as $\varep \to 0$.
Section \ref{sect-5} is devoted to the proof of Hopf bifurcation in a neighborhood of the critical value $m^c=6$. The difficulty is twofold: first, the framework is that of a fully nonlinear problem; second, $m$ is not fixed in the sequence of parameterized analytic functions $D_{\varep}(\lambda,m)$ which prevents us from using Hurwitz Theorem directly. The trick is to find a proper approach to combining $m$ with $\varep$: to this end we construct a sequence of critical values $m^c(\varep)$ such that $m^c(0)=m^c$ and apply Hurwitz Theorem to $D_{\varep}(\lambda,m^c(\varep))$. Proposition \ref{give critical value} and Theorem \ref{Hopf bifurcation theorem} are crucial to prove Hopf bifurcation at $m^c(\varep)$ for $\varep$ small enough. Finally, in three appendices, we collect some formulae and results that we use to prove our main results.
\section{The linearized operator}\label{linear operator}
In this section, we first derive the governing equations for the perturbations of the traveling wave solution. As usual, it is convenient to transform the free interface problem to a system on a fixed domain. More specifically, we use the general method of \cite{BHL00} that converts free interface problems to fully nonlinear problems with
transmission conditions at a fixed interface (see \cite{ABLZ18}). Then, we are going to focus on the linearized system.
\subsection{The system with fixed interface}\label{fixed}
To begin with, we rewrite problem \eqref{system-1}-\eqref{system 1-2} in a new system of coordinates that fixes the position of the ignition interface at the origin:
\begin{eqnarray*}
\tau=t,\ \
\xi=x-g(\tau).
\end{eqnarray*}
Hereafter, whenever convenient, we use the overdot to denote differentiation with respect to time and the prime to denote partial differentiation with respect to the space variable.
Then, the system for ${\pmb X}=(\Theta,\Phi)$ and $g$ reads:
\begin{eqnarray}
\label{perturbation T}
&&\left\{\begin{aligned}
\frac{\partial\Theta}{\partial\tau}-\dot{g}\frac{\partial\Theta}{\partial\xi}=&\frac{\partial^2\Theta}{\partial\xi^2}+A\Phi, \ \ &\xi<0,\\
\frac{\partial\Phi}{\partial\tau}-\dot{g}\frac{\partial\Phi}{\partial\xi}=&\Le^{-1}\frac{\partial^2\Phi}{\partial\xi^2}-A\Phi, \ \ &\xi<0, \end{aligned}\right.\\[1mm]
\label{perturbation W}
&&\left\{\begin{aligned}
\frac{\partial\Theta}{\partial\tau}-\dot{g}\frac{\partial\Theta}{\partial\xi}=&\frac{\partial^2\Theta}{\partial\xi^2}, \ \ &\xi>0,\\
\frac{\partial\Phi}{\partial\tau}-\dot{g}\frac{\partial\Phi}{\partial\xi}=&\Le^{-1}\frac{\partial^2\Phi}{\partial\xi^2}, \ \ &\xi>0. \end{aligned}\right.
\end{eqnarray}
Moreover, $\Theta$, $\Phi$ and their first-order space derivatives are continuous at the fixed interface $\xi=0$, thus
\begin{equation}
\Theta(\cdot,0)=\Theta_i, \qquad\;\, [\Theta]=[\Phi]=0, \qquad\;\, \bigg [\frac{\partial\Theta}{\partial\xi}\bigg ]=\bigg [\frac{\partial\Phi}{\partial\xi}\bigg ]=0.
\label{interface-Theta-Phi}
\end{equation}
In addition, at $\xi=\pm \infty$, $\Theta$ and $\Phi$ satisfy \eqref{boundary condition-12}.
Next, we introduce the small perturbations $\pmb u=(u_1,u_2)$ and $s$, respectively of the traveling wave $\pmb U$ and of the front $g$, more precisely,
\begin{align*}
&u_1(\tau,\xi)=\Theta(\tau, \xi)-\Theta^0(\xi),\\
&u_2(\tau,\xi)=\Phi(\tau, \xi)-\Phi^0(\xi),\\
&s(\tau)=g(\tau)-\tau.
\end{align*}
It then follows that the perturbations $\pmb u$ and $s$ verify the system
\begin{eqnarray}
\label{u_1}
&&\left\{\begin{aligned}
\frac{\partial u_1}{\partial \tau}&=\frac{\partial^2 u_1}{\partial \xi^2}+\frac{\partial u_1}{\partial \xi}+Au_2+\dot{s}\frac{d\Theta^0}{d\xi}+\dot{s}\frac{\partial u_1}{\partial \xi}, \ &\xi<0,&\\
\frac{\partial u_2}{\partial \tau}&=\Le^{-1}\frac{\partial^2 u_2}{\partial \xi^2}+\frac{\partial u_2}{\partial \xi}-Au_2+\dot{s}\frac{d\Phi^0}{d\xi}+\dot{s}\frac{\partial u_2}{\partial \xi}, \ &\xi<0,&
\end{aligned}\right.\\[1mm]
\label{u_2}
&&\left\{\begin{aligned}
\frac{\partial u_1}{\partial \tau}&=\frac{\partial^2 u_1}{\partial \xi^2}+\frac{\partial u_1}{\partial \xi}+\dot{s}\frac{d\Theta^0}{d\xi}+\dot{s}\frac{\partial u_1}{\partial \xi}, \ &\xi>0,&\\
\frac{\partial u_2}{\partial \tau}&=\Le^{-1}\frac{\partial^2 u_2}{\partial \xi^2}+\frac{\partial u_2}{\partial \xi}+\dot{s}\frac{d\Phi^0}{d\xi}+\dot{s}\frac{\partial u_2}{\partial \xi}, \ &\xi>0,&
\end{aligned}\right.
\end{eqnarray}
and the corresponding interface conditions obtained from \eqref{interface-Theta-Phi} are:
\begin{equation}\label{interface-u}
u_1(\tau,0)=0,\qquad\;\, [u_1]=[u_2]=\bigg [\frac{\partial u_1}{\partial \xi}\bigg ]=\bigg [\frac{\partial u_2}{\partial \xi}\bigg ]=0.
\end{equation}
\subsection{Ansatz 1}
\label{subsect-2.2}
In the spirit of \cite{BHL00,Lorenzi02}, we introduce the following splitting or ansatz:
\begin{equation}\label{ansatz1}
\begin{aligned}
u_1(\tau,\xi)=&s(\tau)\frac{d\Theta^0}{d\xi}(\xi)+v_1(\tau,\xi),\\
u_2(\tau,\xi)=&s(\tau)\frac{d\Phi^0}{d\xi}(\xi)+v_2(\tau,\xi),
\end{aligned}
\end{equation}
in which $v_1$, $v_2$ are new unknown functions. In a more abstract setting, the ansatz reads
\begin{equation*}
\pmb u(\tau,\xi)= s(\tau)\frac{d\pmb U}{d\xi} +\pmb v(\tau,\xi), \qquad\;\, \pmb v=(v_1,v_2).
\end{equation*}
Substituting \eqref{ansatz1} into \eqref{u_1}-\eqref{u_2}, we get the system for $\pmb v$ and $s$:
\begin{eqnarray}
\label{v1}
&&\left\{\begin{aligned}
\frac{\partial v_1}{\partial \tau}&=\frac{\partial^2 v_1}{\partial \xi^2}+\frac{\partial v_1}{\partial \xi}+Av_2+\dot{s}\left (s\frac{d^2\Theta^0}{d\xi^2}+\frac{\partial v_1}{\partial \xi}\right ), \ \ &\xi<0&,\\
\frac{\partial v_2}{\partial \tau}&=\Le^{-1}\frac{\partial^2 v_2}{\partial \xi^2}+\frac{\partial v_2}{\partial \xi}-Av_2+\dot{s}\left (s\frac{d^2\Phi^0}{d\xi^2}+\frac{\partial v_2}{\partial \xi}\right ), \ \ &\xi<0&,
\end{aligned}\right.\\[1mm]
\label{v2}
&&\left\{\begin{aligned}
\frac{\partial v_1}{\partial \tau}&=\frac{\partial^2 v_1}{\partial \xi^2}+\frac{\partial v_1}{\partial \xi}+\dot{s}\left (s\frac{d^2\Theta^0}{d\xi^2}+\frac{\partial v_1}{\partial \xi}\right ), \ \ &\xi>0&,\\
\frac{\partial v_2}{\partial \tau}&=\Le^{-1}\frac{\partial^2 v_2}{\partial \xi^2}+\frac{\partial v_2}{\partial \xi}+\dot{s}\left (s\frac{d^2\Phi^0}{d\xi^2}+\frac{\partial v_2}{\partial \xi}\right ), \ \ &\xi>0&.
\end{aligned}\right.
\end{eqnarray}
At $\xi=0$, it is easy to see that the new interface conditions are:
\begin{eqnarray*}
[v_1]=[v_2]=0,\qquad\,\,\bigg [\frac{\partial v_1}{\partial \xi}\bigg ]=-s\bigg [\frac{d^2\Theta^0}{d\xi^2}\bigg ],\qquad\,\, \bigg [\frac{\partial v_2}{\partial \xi}\bigg ]=-s\bigg [\frac{d^2\Phi^0}{d\xi^2}\bigg ],\qquad\,\,v_1(\tau,0)=-s\frac{d\Theta^0}{d\xi}(0).
\end{eqnarray*}
Taking advantage of the conditions
\begin{eqnarray*}
\frac{d\Theta^0}{d\xi}(0)=-\Theta_i,\quad\bigg [\frac{d^2\Theta^0}{d\xi^2}\bigg ]=\frac{\Theta_i}{1-\Theta_i},\quad \bigg [\frac{d^2\Phi^0}{d\xi^2}\bigg ]=-\frac{\Le\Theta_i}{1-\Theta_i},
\end{eqnarray*}
where we used (\ref{eqn:A}) to derive the last condition, it follows that
\begin{equation}\label{transm}
s(\tau)=\frac{v_1(\tau,0)}{\Theta_i},\qquad\;\, \bigg [\frac{\partial v_1}{\partial \xi}\bigg ]=-\frac{v_1(\tau,0)}{1-\Theta_i},\qquad\;\, \bigg [\frac{\partial v_2}{\partial \xi}\bigg ]=\frac{v_1(\tau,0)\Le}{1-\Theta_i}.
\end{equation}
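To illustrate the computation, the first condition in \eqref{transm} follows from evaluating \eqref{ansatz1} at $\xi=0$ and using $u_1(\tau,0)=0$:
\begin{eqnarray*}
0=u_1(\tau,0)=s(\tau)\frac{d\Theta^0}{d\xi}(0)+v_1(\tau,0)=-\Theta_is(\tau)+v_1(\tau,0),
\end{eqnarray*}
while, for instance, the last one is obtained from
\begin{eqnarray*}
\bigg [\frac{\partial v_2}{\partial \xi}\bigg ]=-s\bigg [\frac{d^2\Phi^0}{d\xi^2}\bigg ]=\frac{\Le\,\Theta_i}{1-\Theta_i}\,s=\frac{v_1(\tau,0)\Le}{1-\Theta_i}.
\end{eqnarray*}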
Summarizing, the free interface problem \eqref{system-1}-\eqref{system-2} has been converted to \eqref{v1}-\eqref{v2}, which constitutes a nonlinear system for $v_1$, $v_2$ and $s$, with transmission conditions \eqref{transm} at $\xi=0$. The next subsections are devoted to the study of the linearized problem (at zero) in an abstract setting, with simplified notation $\pmb u=(u,v)$ for convenience.
\subsection{The linearized problem}\label{linearized subsect}
Now, we consider the linearization at $0$ of the system \eqref{v1}-\eqref{transm}, which reads as follows:
\begin{eqnarray}
\label{linear pb-u}
&&\left\{\begin{aligned}
\frac{\partial u}{\partial \tau}&=\frac{\partial^2u}{\partial\xi^2}+\frac{\partial u}{\partial\xi}+Av, &\xi<0,\\
\frac{\partial v}{\partial\tau}&=\Le^{-1}\frac{\partial^2v}{\partial \xi^2}+\frac{\partial v}{\partial\xi}-Av,\quad &\xi<0,\\
\end{aligned}\right.\\[1mm]
\label{linear pb-v}
&&\left\{\begin{aligned}
\frac{\partial u}{\partial \tau}&=\frac{\partial^2u}{\partial\xi^2}+\frac{\partial u}{\partial\xi}, & \xi>0,\\
\frac{\partial v}{\partial\tau}&=\Le^{-1}\frac{\partial^2v}{\partial\xi^2}+\frac{\partial v}{\partial\xi}, &\quad \xi>0,
\end{aligned}\right.
\end{eqnarray}
with the interface conditions
\begin{equation}
\label{linear pb-interface}
[u]=[v]=0,\qquad\;\, \bigg [\frac{\partial u}{\partial\xi}\bigg ]=-\frac{u(\tau,0)}{1-\Theta_i},\qquad\;\,
\bigg [\frac{\partial v}{\partial\xi}\bigg ]=\frac{u(\tau,0)\Le}{1-\Theta_i}.
\end{equation}
Problem \eqref{linear pb-u}-\eqref{linear pb-v} can be written in the more compact form
$\displaystyle\frac{\partial\pmb u}{\partial\tau}={\mathcal L}\pmb u$, where $\pmb u=(u,v)$,
\begin{eqnarray*}
\mathcal{L}=\left(\begin{matrix}
\displaystyle\frac{\partial^2}{\partial\xi^2}+\frac{\partial}{\partial\xi}& &A\chi_{-}\\
0& &\displaystyle\Le^{-1}\frac{\partial^2}{\partial\xi^2}+\frac{\partial}{\partial \xi}-A\chi_{-}
\end{matrix}\right)
\end{eqnarray*}
and $\chi_-$ denotes the characteristic function of the set $(-\infty,0)$.
We now introduce the weighted space $\bm{\mathcal W}$ where we analyze the system \eqref{linear pb-u}-\eqref{linear pb-interface}.
As a matter of fact, the introduction of exponentially weighted spaces for proving stability of traveling waves has been a standard tool since the pioneering work of Sattinger (see \cite{S76}), its role being to shift the continuous spectrum to the left, thus creating a gap with the imaginary axis which simplifies the analysis.
\begin{definition}
The exponentially weighted Banach space $\bm{\mathcal W}$ is defined by
\begin{align*}
\bm{\mathcal W}=\Big\{&\pmb u:
e^{\frac{1}{2}\xi}u, e^{\frac{1}{2}\xi}v\in C_b((-\infty,0);\mathbb C),\
e^{\frac{1}{2}\xi}u, e^{\frac{\Le}{2}\xi}v\in C_b((0,\infty);\mathbb C),\ \lim_{\xi\to 0^{\pm}}u(\xi)\ \mbox{and}\ \lim_{\xi\to 0^{\pm}}v(\xi)\ \mbox{exist in}\ \mathbb C\Big\},
\end{align*}
equipped with the norm:
\begin{align*}
\|\pmb u\|_{\bm{\mathcal W}}=&\sup_{\xi <0}|e^{\frac{1}{2}\xi}u(\xi)|+\sup_{\xi>0}|e^{\frac{1}{2}\xi}u(\xi)|+\sup_{\xi <0}|e^{\frac{1}{2}\xi}v(\xi)|+\sup_{\xi>0}|e^{\frac{\Le}{2}\xi}v(\xi)|.
\end{align*}
\end{definition}
In the above definition, $C_b(I;\mathbb C)$ denotes the space of bounded and continuous functions from $I$ to $\mathbb{C}$, $I$ being either the interval $(-\infty,0)$ or $(0,\infty)$. We finally introduce the realization $L$ of the operator ${\mathcal L}$ in $\bm{\mathcal W}$ defined by
\begin{align*}
&D(L)=\bigg\{\pmb u\in \bm{\mathcal W}: \frac{\partial \pmb u}{\partial\xi},\frac{\partial^2\pmb u}{\partial\xi^2}\in \bm{\mathcal W},\ [u]=[v]=0,\ \bigg [\frac{\partial u}{\partial\xi}\bigg ]=-\frac{u(0)}{1-\Theta_i},\ \bigg [\frac{\partial v}{\partial\xi}\bigg ]=\frac{\Le\ u(0)}{1-\Theta_i}\bigg\},\\[1mm]
&L\pmb u=\mathcal{L}\pmb u,\qquad\;\, \pmb u\in D(L).
\end{align*}
\begin{remark}
\label{rem-simple}
{\rm We observe that, for any Lewis number, the pair
$\displaystyle\frac{d\pmb U}{d\xi}=\left (\frac{d\Theta^0}{d\xi},\frac{d\Phi^0}{d\xi}\right )$ verifies System \eqref{linear pb-u}, \eqref{linear pb-v}, and it belongs to the space $\bm{\mathcal W}$. In other words, $\displaystyle\frac{d\pmb U}{d\xi}$ is an eigenfunction of the operator $L$ associated with the eigenvalue 0.}
\end{remark}
The above remark gives a first justification for the choice of the exponential weights in the definition of $\bm{\mathcal W}$.
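Indeed, the explicit formulae for the traveling wave give
\begin{eqnarray*}
\frac{d\Theta^0}{d\xi}(\xi)=-\Theta_ie^{\frac{\Theta_i}{1-\Theta_i}\xi}\quad (\xi<0),\qquad\;\,\frac{d\Theta^0}{d\xi}(\xi)=-\Theta_ie^{-\xi}\quad (\xi>0),
\end{eqnarray*}
while $\frac{d\Phi^0}{d\xi}$ decays like $e^{\frac{\Theta_i}{1-\Theta_i}\xi}$ as $\xi\to-\infty$ and like $e^{-\Le\xi}$ as $\xi\to+\infty$. These decay rates are exactly compensated by the weights $e^{\frac{1}{2}\xi}$ and $e^{\frac{\Le}{2}\xi}$ in the definition of $\bm{\mathcal W}$.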
We also stress that, following the same strategy as in the proof of the forthcoming Theorem \ref{thm-2.3}, it can easily be checked that the spectrum of the realization of the operator ${\mathcal L}$ in the nonweighted space of pairs $(u,v)$ such that $u$, $v$ are bounded and continuous in $(-\infty,0)\cup (0,\infty)$, contains a parabola which is tangent at $0$ to the imaginary axis.
\subsection{Analysis of the operator $L$}
\label{Resolvent operator}
The next theorem is devoted to a detailed study of the operator $L$. For simplicity of notation, we set
\begin{align}
H_{1,\lambda}=\sqrt{1+4\lambda},\qquad\;\,H_{2,\lambda}=\sqrt{\Le^2+4\Le(A+\lambda)},\qquad\;\,H_{3,\lambda}=\sqrt{\Le^2+4\Le\lambda}
\label{formula-1}
\end{align}
and, for $j=1,2$,
\begin{align}
&k_{j,\lambda}=\frac{-1+(-1)^{j+1}H_{1,\lambda}}{2},\qquad k_{2+j,\lambda}=\frac{-\Le+(-1)^{j+1}H_{2,\lambda}}{2},\qquad k_{4+j,\lambda}=\frac{-\Le+(-1)^{j+1}H_{3,\lambda}}{2}.
\label{formula-3}
\end{align}
\begin{theorem}
\label{thm-2.3}
The operator $L$ is sectorial and therefore generates an analytic semigroup. Moreover, its spectrum consists of the following components:
\begin{enumerate}[\rm (1)]
\item
$(-\infty,-1/4]\cup \mathcal{P}$, where $\mathcal{P}=\{\lambda\in\mathbb{C}:a\Re\lambda+b(\Im\lambda)^2+c\le 0\}$ with
\begin{align*}
a=\bigg (1-\frac{1}{\Le}\bigg )^2,\qquad\;\,
b=\frac{1}{\Le},\qquad\;\,
c=\frac{2A+1}{2}+\frac{8A-5}{4\Le}+\frac{1+A}{\Le^2}-\frac{1}{4\Le^3};
\end{align*}
\item
the simple isolated eigenvalue $0$, the kernel of $L$ being spanned by $\displaystyle\frac{d\pmb U}{d\xi}$;
\item
additional eigenvalues, given by the roots of the dispersion relation
\begin{equation}
\label{d,r,Le}
D(\lambda; \Theta_i, \Le):=(k_{6,\lambda}-k_{3,\lambda})(k_{3,\lambda}-k_{2,\lambda})\big [1-(1-\Theta_i)\sqrt{1+4\lambda}\big ]+A\Le,
\end{equation}
where $A$ is given by \eqref{eqn:A}.
\end{enumerate}
\end{theorem}
\begin{proof}
Since the proof is rather lengthy, we split it into four steps.
In the first two steps, we prove properties (1) and (3). Step 3 is devoted to the proof of property (2).
Finally, in Step 4, we prove that the operator $L$ is sectorial in $\bm{\mathcal W}$.
For notational convenience, throughout the proof, we set
\begin{align*}
&{\mathscr I}_1:=\int_{0}^{\infty}f_1(s)e^{-k_1s}ds,&
&{\mathscr I}_2:=\int_{-\infty}^{0}f_1(s)e^{-k_2s}ds,&
&{\mathscr I}_3:=\int_{-\infty}^{0}f_2(s)e^{-k_2s}ds,\\
&{\mathscr I}_4:=\int_{-\infty}^{0}f_2(s)e^{-k_4s}ds, &
&{\mathscr I}_5:=\int_{0}^{\infty}f_2(s)e^{-k_5s}ds,&
\end{align*}
for any fixed $\pmb f=(f_1,f_2)\in\bm{\mathcal W}$, where, here and in Steps 1 to 3, we simply write $k_j$ instead of $k_{j,\lambda}$ to lighten the notation.
\vskip 1mm
{\em Step 1}. To begin with, we prove that the interval $(-\infty,-1/4]$ belongs to the point spectrum of $L$. We first assume that $\lambda\le-\Le/4$ (recall that $\Le>1$). In such a case, $\Re(k_1)=\Re(k_2)=-1/2$, $\Re(k_5)=\Re(k_6)=-\Le/2$ and the function $\pmb u$ defined by
\begin{equation}
u(\xi)=
\left\{
\begin{array}{ll}
c_1 e^{k_1\xi}+c_2 e^{k_2 \xi}, & \xi<0,\\
c_5 e^{k_1\xi}+c_6 e^{k_2 \xi}, & \xi\ge 0,
\end{array}
\right.
\qquad\;\,
v(\xi)=
\left\{
\begin{array}{ll}
0, &\xi<0,\\
c_7 e^{k_5\xi}+c_8 e^{k_6\xi}, &\xi\ge 0,
\end{array}
\right.
\label{eigenvalue-1/4}
\end{equation}
belongs to $\bm{\mathcal W}$ and solves the equation $\lambda \pmb u-{\mathcal L}\pmb u={\bf 0}$ for any choice of the complex parameters $c_1$, $c_2$, $c_5$, $c_6$, $c_7$ and $c_8$.
Since there are only four boundary conditions to impose to guarantee that $\pmb u\in D(L)$,
the resolvent equation $\lambda \pmb u-{\mathcal L}\pmb u={\bf 0}$ is not uniquely solvable in $\bm{\mathcal W}$. Thus, $\lambda$ belongs to the point spectrum of $L$.
Next, we consider the case when $\lambda\in(-\Le/4, -1/4]$. In this situation, $\Re(k_1)=\Re(k_2)=-1/2$, however, $\Re(k_5)+\Le/2>0$, $\Re(k_6)+\Le/2<0$.
Thanks to the fact that $e^{\frac{\Le}{2}\xi}v(\xi)$ should be bounded in $(0,\infty)$, the constant $c_7$ in \eqref{eigenvalue-1/4} is zero, whereas the constants $c_1$, $c_2$, $c_5$, $c_6$, $c_8$ are arbitrary. As above, the resolvent equation $\lambda\pmb u-L\pmb u={\bf 0}$ cannot be solved uniquely. Consequently, we conclude that $(-\infty,-1/4]$ belongs to the point spectrum of the operator $L$.
From now on, we consider the case when $\lambda\notin (-\infty,-1/4]$. Then,
$\Re(k_1)+1/2>0$, $\Re(k_2)+1/2<0$, $\Re(k_5)+\Le/2>0$ and $\Re(k_6)+\Le/2<0$.
Similarly to the previous procedure, using the formulae \eqref{rs-u-positive}, \eqref{rs-v-positive} and \eqref{resolvent-u} as well as the fact that the functions $\xi\mapsto e^{\frac{1}{2}\xi}u(\xi)$ and $\xi\mapsto e^{\frac{\Le}{2}\xi}v(\xi)$ should be bounded in $\mathbb{R}$ and in $(0,\infty)$ respectively, the constants $c_2$, $c_5$, $c_7$ can be determined explicitly and they are given by
\begin{align*}
&c_2=\frac{1}{H_{1,\lambda}}\int_{-\infty}^{0}(Av(s)+f_1(s))e^{-k_2s}ds,\qquad\;\, c_5=\frac{1}{H_{1,\lambda}}{\mathscr I}_1,\qquad\;\,c_7=\frac{\Le}{H_{3,\lambda}}{\mathscr I}_5.
\end{align*}
We now consider formula (\ref{resolvent-v}). Since $\Le>1$, it follows that $\Re(k_4)+1/2<0$. Moreover, we observe that
the inequality $\Re(k_3)+{1}/{2}\le 0$ is satisfied if and only if $\lambda\in {\mathcal P}$. Indeed, fix any $\lambda\in \stackrel{\circ}{{\mathcal P}}$, the interior of ${\mathcal P}$, so that $\Re(k_3)+1/2<0$, and take
\begin{equation*}
f_1(\xi)=
\left\{
\begin{array}{ll}
e^{-\frac{1}{2}\xi}, &\xi<0,\\
0, &\xi\ge 0,
\end{array}
\right.
\qquad\;\,
f_2\equiv 0 \ \text{in} \ \mathbb{R}.
\end{equation*}
In such a case, the most general solution, $\pmb u\in\bm{\mathcal W}$, to the equation
$\lambda \pmb u-{\mathcal L}\pmb u=\pmb f$ is given
by $u(\xi)=c_6e^{k_2\xi}$ and $v(\xi)=c_8e^{k_6\xi}$ for $\xi\ge 0$, whereas $v\equiv 0$ in $(-\infty,0)$ and
$u(\xi)=c_1e^{k_1\xi}+2H_{1,\lambda}^{-2}(2e^{-\frac{1}{2}\xi}-e^{k_1\xi})$
for $\xi<0$. Note that $k_1\neq k_3$ for $\lambda\in\stackrel{\circ}{\mathcal P}$.
Imposing the boundary conditions, we deduce that $c_6=c_8=0$,
$c_1=-2H_{1,\lambda}^{-2}$ and $k_1c_1=2H_{1,\lambda}^{-2}k_2$,
which is clearly a contradiction. We conclude that the domain $\stackrel{\circ}{{\mathcal P}}$ and, consequently, its closure belong to the continuous spectrum of $L$.
Summarizing, property (1) in the statement of the theorem is established.
\vskip 1mm
{\em Step 2}. Here, we consider the equation $\lambda \pmb u-{\mathcal L}\pmb u=\pmb f$ for $\pmb f\in\bm{\mathcal W}$ and values of $\lambda$ which are not in $(-\infty,-1/4]\cup{\mathcal P}$. For such $\lambda$'s and $j=1,2$ it holds that
\begin{equation}
\Re(k_{2j-1})+\frac{1}{2}>0,\qquad\;\, \Re(k_{2j})+\frac{1}{2}<0,\qquad\;\, \Re(k_5)+\displaystyle\frac{\Le}{2}>0,\qquad\;\, \Re(k_6)+\displaystyle\frac{\Le}{2}<0.
\label{cond-1}
\end{equation}
We first assume that $k_1\neq k_3$. Imposing that the function $\pmb u$ defined by \eqref{rs-u-positive}-\eqref{resolvent-v} belongs to $\bm{\mathcal W}$, we can
uniquely determine the constants $c_2$, $c_4$, $c_5$ and $c_7$ and we get
\begin{align}
\label{re-u-negative}
u(\xi)
=&c_1e^{k_1\xi}+\frac{e^{k_1\xi}}{H_{1,\lambda}}\int_{\xi}^0 f_1(s)e^{-k_1s}ds
+\frac{e^{k_2\xi}}{H_{1,\lambda}}\int_{-\infty}^{\xi} f_1(s)e^{-k_2s}ds \nonumber\\
&+\frac{A}{H_{1,\lambda}}\bigg\{\bigg (\frac{e^{k_3\xi}}{k_3-k_2}-\frac{e^{k_3\xi}-e^{k_1\xi}}{k_3-k_1}\bigg )c_3
+\frac{\Le}{H_{2,\lambda}}\bigg[\bigg (\frac{e^{k_1\xi}-e^{k_3\xi}}{k_3-k_1}
-\frac{e^{k_3\xi}}{k_3-k_2}\bigg )\int_{\xi}^0f_2(s)e^{-k_3s}ds\notag\\
&\phantom{-\frac{A}{H_{1,\lambda}}\bigg\{\;\,}+\frac{e^{k_1\xi}}{k_3-k_1}\int_\xi^0 f_2(s)e^{-k_1s}ds
+\bigg (\frac{e^{k_1\xi}-e^{k_4\xi}}{k_4-k_1}+\frac{e^{k_4\xi}}{k_4-k_2}\bigg )\int_{-\infty}^\xi f_2(s)e^{-k_4s}ds\nonumber\\
&\phantom{-\frac{A}{H_{1,\lambda}}\bigg\{\;\,}+\frac{e^{k_1\xi}}{k_4-k_1}\int_\xi^0 f_2(s)(e^{-k_4s}\!-\!e^{-k_1s})ds\!+\! \frac{(k_4-k_3)e^{k_2\xi}}{(k_3-k_2)(k_4-k_2)}\int_{-\infty}^{\xi}f_2(s)e^{-k_2s}ds\bigg]\bigg\},
\\[1mm]
\label{re-v-negative}
v(\xi)&=\bigg (c_3+\frac{\Le}{H_{2,\lambda}}\int_{\xi}^0f_2(s)e^{-k_3s}ds\bigg )e^{k_3\xi}
+\frac{\Le\,e^{k_4\xi}}{H_{2,\lambda}}\int_{-\infty}^{\xi}f_2(s)e^{-k_4s}ds,
\end{align}
for $\xi<0$. Note that $k_2-k_3\neq 0$ (see Appendix \ref{appendix-A}). For $\xi>0$, we get
\begin{align}
\label{re-u-positive}
u(\xi)&=\frac{e^{k_1\xi}}{H_{1,\lambda}}\int_{\xi}^{\infty}f_1(s)e^{-k_1s}ds+\bigg (c_6+{\frac{1}{H_{1,\lambda}}\int_0^{\xi}f_1(s)e^{-k_2s}ds}\bigg )e^{k_2\xi},\\[1mm]
\label{re-v-positive}
v(\xi)&=\frac{\Le\,e^{k_5\xi}}{H_{3,\lambda}}\int_{\xi}^{\infty}f_2(s)e^{-k_5s}ds+\bigg (c_8+\frac{\Le}{H_{3,\lambda}}\int_0^{\xi}f_2(s)e^{-k_6s}ds\bigg )e^{k_6\xi}.
\end{align}
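For the reader's convenience, we check \eqref{re-u-positive} directly (the remaining formulae are obtained analogously, by the variation-of-constants method). Setting
\begin{eqnarray*}
P(\xi)=e^{k_1\xi}\int_{\xi}^{\infty}f_1(s)e^{-k_1s}ds,\qquad\;\, Q(\xi)=e^{k_2\xi}\int_0^{\xi}f_1(s)e^{-k_2s}ds,
\end{eqnarray*}
one computes $P'=k_1P-f_1$ and $Q'=k_2Q+f_1$, so that, for $u=H_{1,\lambda}^{-1}(P+Q)+c_6e^{k_2\xi}$,
\begin{eqnarray*}
u''+u'-\lambda u=\frac{(k_1^2+k_1-\lambda)P+(k_2^2+k_2-\lambda)Q}{H_{1,\lambda}}+\frac{(k_2-k_1)f_1}{H_{1,\lambda}}=-f_1,
\end{eqnarray*}
since $k_1$, $k_2$ solve $k^2+k-\lambda=0$ (so that the contribution of $c_6e^{k_2\xi}$ also vanishes) and $k_1-k_2=H_{1,\lambda}$; conditions \eqref{cond-1} guarantee both the convergence of the integrals and the boundedness of $e^{\frac{1}{2}\xi}u$ in $(0,\infty)$.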
Imposing the boundary conditions, we obtain the following linear system for the unknowns $c_1$, $c_3$, $c_6$ and $c_8$:
\begin{equation}
\begin{pmatrix}
1 &\frac{A}{(k_3-k_2)H_{1,\lambda}} & -1 & 0\\
0 & 1 & 0 & -1\\
k_1 & \frac{Ak_2}{(k_3-k_2)H_{1,\lambda}} & \frac{1}{\Theta_i-1}-k_2 & 0\\
0 & k_3 & \frac{\Le}{1-\Theta_i} & -k_6
\end{pmatrix}
\begin{pmatrix}
c_1\\
c_3\\
c_6\\
c_8
\end{pmatrix}
=\begin{pmatrix}
F_1\\
F_2\\
F_3\\
F_4
\end{pmatrix},
\label{matrix}
\end{equation}
where
\begin{align*}
F_1=&-\frac{A\Le}{(k_4-k_2)H_{1,\lambda}H_{2,\lambda}}{\mathscr I}_4-\frac{1}{H_{1,\lambda}}{\mathscr I}_2+\frac{1}{H_{1,\lambda}}{\mathscr I}_1
-\frac{A\Le(k_4-k_3)}{(k_3-k_2)(k_4-k_2)H_{1,\lambda}H_{2,\lambda}}{\mathscr I}_3;\\[1mm]
F_2=&\frac{\Le}{H_{3,\lambda}}{\mathscr I}_5-\frac{\Le}{H_{2,\lambda}}{\mathscr I}_4;\\[1mm]
F_3=&-\frac{A\Le k_2}{(k_4-k_2)H_{1,\lambda}H_{2,\lambda}}{\mathscr I}_4\!-\!\frac{k_2}{H_{1,\lambda}}{\mathscr I}_2\!+\!\frac{1}{H_{1,\lambda}}\bigg (k_1\!+\!\frac{1}{1-\Theta_i}\bigg ){\mathscr I}_1\!+\!\frac{A\Le k_2}{(k_3-k_2)(k_4-k_2)H_{1,\lambda}}{\mathscr I}_3;\\[1mm]
F_4=&\frac{\Le k_5}{H_{3,\lambda}}{\mathscr I}_5-\frac{\Le k_4}{H_{2,\lambda}}{\mathscr I}_4-\frac{\Le}{(1-\Theta_i)H_{1,\lambda}}{\mathscr I}_1.
\end{align*}
This system is uniquely solvable if and only if $\overline{D}(\lambda;\Theta_i,\Le)=[\Le (k_2-k_3)]^{-1}D(\lambda; \Theta_i, \Le)$, the
determinant of the matrix on the left-hand side of \eqref{matrix}, does not vanish, where
$D(\lambda; \Theta_i, \Le)$ is defined in \eqref{d,r,Le}.
Hence, the solutions to the equation $D(\lambda; \Theta_i, \Le)=0$ are elements of the point spectrum of $L$. Property (3) is proved.
On the other hand, if $\lambda\notin (-\infty,-1/4]\cup {\mathcal P}$ is not a root of the dispersion relation, then
it is easy to check that the function $\pmb u$ given by \eqref{re-u-negative}-\eqref{matrix} belongs to $D(L)$, so that $\lambda$ is an element of the resolvent set of the operator $L$.
Finally, we consider the case when $k_3=k_1$, which gives $\lambda=\lambda_{\pm}:=-\frac{A\Le}{\Le-1}\pm \frac{i\sqrt{A\Le(\Le-1)}}{\Le-1}$ (see Appendices \ref{appendix-A} and \ref{appendix-B}).
It is easy to check that this pair of conjugate complex numbers does not belong to ${\mathcal P}$.
It thus follows that $u$ for $\xi\ge 0$ and $v$ for $\xi\in\R$ are still given by \eqref{re-v-negative}, \eqref{re-u-positive} and \eqref{re-v-positive}.
On the other hand, for $\xi<0$, $u$ is given by
\begin{align*}
u(\xi)=&c_1e^{k_1\xi}-\frac{Ac_3}{H_{1,\lambda}}\xi e^{k_1\xi}+\frac{e^{k_1\xi}}{H_{1,\lambda}}\int_{\xi}^0f_1(s)e^{-k_1s}ds+\frac{e^{k_2\xi}}{H_{1,\lambda}}\int_{-\infty}^{\xi}f_1(s)e^{-k_2s}ds\\
&+\frac{A\Le\, e^{k_1\xi}}{H_{1,\lambda}H_{2,\lambda}}\int_{\xi}^0(s-\xi)f_2(s)ds-\frac{A\Le\, e^{k_1\xi}}{H_{1,\lambda}H_{2,\lambda}^2}\int_{-\infty}^0f_2(s)e^{-k_4s}ds\\
&+\frac{A\Le\ e^{k_1\xi}}{H_{1,\lambda}H_{2,\lambda}^2}\int_{\xi}^0f_2(s)e^{-k_1s}ds+\frac{A\Le\, e^{k_4\xi}}{H_{1,\lambda}H_{2,\lambda}^2}\int_{-\infty}^{\xi}f_2(s)e^{-k_4s}ds\\
&+\frac{A}{H_{1,\lambda}}\bigg\{
\frac{e^{k_1\xi}}{k_1-k_2}c_3+\frac{\Le}{H_{2,\lambda}}\bigg[
\frac{e^{k_4\xi}}{k_4-k_2}\int_{-\infty}^{\xi}f_2(s)e^{-k_4s}ds-\frac{e^{k_1\xi}}{k_1-k_2}\int_{\xi}^0 f_2(s)e^{-k_1s}ds \nonumber\\
&\phantom{+\frac{A}{H_{1,\lambda}}\bigg\{\frac{e^{k_1\xi}}{k_1-k_2}c_3+\frac{\Le}{H_{2,\lambda}}\bigg[\;\,}+ \frac{(k_4-k_1)e^{k_2\xi}}{(k_1-k_2)(k_4-k_2)}\int_{-\infty}^{\xi}f_2(s)e^{-k_2s}ds\bigg]\bigg\}.
\end{align*}
Notice that $\sup_{\xi<0}e^{\frac{1}{2}\xi}|u(\xi)|<\infty$; therefore, $\pmb u$ belongs to $\bm{\mathcal W}$. Imposing the boundary conditions, we get a linear system for the unknowns $(c_1,c_3,c_6,c_8)$, whose matrix is the same as in
\eqref{matrix}. Since the determinant does not vanish when $\lambda=\lambda_{\pm}$ (see Appendix \ref{appendix-B}) and the first- and second-order derivatives of
$\pmb u$ belong to $\bm{\mathcal W}$, we conclude that $\lambda_{\pm}$ are in the resolvent set of the operator $L$.
\vskip 1mm
{\em Step 3}.
Now, we proceed to show that $0$ is an isolated simple eigenvalue of the operator $L$.
In view of the previous steps, in a neighborhood of $\lambda=0$ the solution $\pmb u=R(\lambda,L)\pmb f$ of the equation $\lambda \pmb u-L\pmb u=\pmb f$ is given by
\eqref{re-u-negative}-\eqref{re-v-positive} for any $\pmb f\in\bm{\mathcal W}$, where
\begin{align*}
c_1=&\frac{\Le (k_2\!-\!k_3)}{D(\lambda;\Theta_i,\Le)}\bigg\{
\bigg [\frac{(k_6\!-\!k_3)(1-\Theta_i)}{\Le}\!-\!\frac{A}{(k_3\!-\!k_2)H_{1,\lambda}}\bigg]{\mathscr I}_1\!+\!\frac{k_6\!-\!k_3}{\Le H_{1,\lambda}}{\mathscr I}_2
\!-\!\frac{A(k_6\!-\!k_3)}{(k_3\!-\!k_2)(k_4\!-\!k_2)H_{1,\lambda}}{\mathscr I}_3\\
&\phantom{\frac{\Le (k_2\!-\!k_3)}{D(\lambda;\Theta_i,\Le)}\bigg\{\,}
+\frac{A}{H_{1,\lambda}H_{2,\lambda}}\bigg(\frac{k_6-k_3}{k_4-k_2} -\frac{k_6-k_4}{k_3-k_2} \bigg){\mathscr I}_4-\frac{A}{(k_3-k_2)H_{1,\lambda}}{\mathscr I}_5\bigg\},\\[1mm]
c_3=&\frac{\Le (k_2\!-\!k_3)}{D(\lambda;\Theta_i,\Le)}\bigg\{{\mathscr I}_1+{\mathscr I}_2-\frac{A\Le}{(k_4-k_2)(k_3-k_2)}{\mathscr I}_3\\
&\phantom{\frac{\Le (k_2\!-\!k_3)}{D(\lambda;\Theta_i,\Le)}\bigg\{\;\,}
+\frac{1}{H_{2,\lambda}}\bigg [(k_6\!-\!k_4)\big[1-H_{1,\lambda}(1-\Theta_i)\big]\!+\!\frac{A\Le }{k_4-k_2} \bigg]{\mathscr I}_4
\!+\!\big[1-H_{1,\lambda}(1-\Theta_i)\big]{\mathscr I}_5\bigg\},\\[1mm]
c_6=&\frac{\Le (k_2\!-\!k_3)}{D(\lambda;\Theta_i,\Le)}\bigg\{\frac{1}{H_{1,\lambda}}\bigg (\frac{A}{k_3-k_2}+\frac{k_6-k_3}{\Le}\bigg ){\mathscr I}_1
\!+\!\frac{(k_6\!-\!k_3)(1\!-\!\Theta_i)}{\Le}{\mathscr I}_2\!-\!\frac{A(k_6\!-\!k_3)(1\!-\!\Theta_i)}{(k_3\!-\!k_2)(k_4\!-\!k_2)}{\mathscr I}_3
\\
&\phantom{\frac{\Le (k_2-k_3)}{D(\lambda;\Theta_i,\Le)}\bigg\{\;\,}
\!+\!\frac{A(1\!-\!\Theta_i)}{H_{2,\lambda}}\bigg(\frac{k_6\!-\!k_3}{k_4\!-\!k_2}\!-\!\frac{k_6\!-\!k_4}{k_3\!-\!k_2}\bigg){\mathscr I}_4
-\frac{A(1\!-\!\Theta_i)}{k_3\!-\!k_2}{\mathscr I}_5
\bigg\},\\[1mm]
c_8=&\frac{\Le (k_2\!-\!k_3)}{D(\lambda;\Theta_i,\Le)}\bigg\{{\mathscr I}_1\!+\!{\mathscr I}_2\!-\!\frac{A\Le}{(k_3\!-\!k_2)(k_4\!-\!k_2)}{\mathscr I}_3+\bigg[1\!-\!H_{1,\lambda}(1\!-\!\Theta_i)\!+\!\frac{A\Le}{(k_3\!-\!k_2)(k_4\!-\!k_2)}\bigg]{\mathscr I}_4
\\
&\phantom{\frac{\Le (k_2\!-\!k_3)}{D(\lambda;\Theta_i,\Le)}\bigg\{}
+\bigg[\frac{A\Le}{(k_3-k_2)H_{3,\lambda}}
+[1-H_{1,\lambda}(1-\Theta_i)]\bigg (1+\frac{k_6-k_3}{H_{3,\lambda}}\bigg )\bigg]{\mathscr I}_5\bigg\}.
\end{align*}
As is immediately seen, the function $D(\cdot;\Theta_i,\Le)$ is analytic in a neighborhood of $\lambda=0$, which is a simple zero of this function, and the other functions appearing in \eqref{re-u-negative}-\eqref{re-v-positive} are holomorphic in a neighborhood of $\lambda=0$. Hence, we conclude that zero is a simple pole of the resolvent operator $R(\lambda,L)$.
Since $\displaystyle\frac{d\pmb U}{d\xi}$ belongs to the kernel of $L$ (see Remark \ref{rem-simple}) and the matrix in \eqref{matrix} has rank three at $\lambda=0$, this function
generates the kernel, so that the geometric multiplicity of the eigenvalue $\lambda=0$ is one. This is enough to conclude that
$\lambda=0$ is a simple eigenvalue of $L$. Property (2) is established and the spectrum of $L$ is completely characterized.
\vskip 1mm
{\em Step 4}. In order to prove that $L$ is sectorial, it is sufficient to show that there exist two positive constants $C$ and $M$ such that
\begin{align}
\label{resolvent estimate}
\|R(\lambda,L)\|_{L(\bm{\mathcal W})}\le
C|\lambda|^{-1},\qquad\;\,\Re{\lambda}\ge M.
\end{align}
Without loss of generality, we can assume that $k_{1,\lambda}\neq k_{3,\lambda}$ and the conditions in \eqref{cond-1} are all satisfied if $\Re\lambda\ge M$.
Throughout this step, $C_j$ denotes a positive constant, independent of $\lambda$ and $\pmb f\in\bm{\mathcal W}$.
We begin by estimating the terms $H_{j,\lambda}$ ($j=1,2,3$). As is easily seen,
\begin{align}
|H_{2,\lambda}|\ge\Re(H_{2,\lambda})=\sqrt{\frac{|{\Le}^2+4\Le(A+\lambda)|+{\Le}^2+4\Le(A+\Re\lambda)}{2}}\ge\sqrt{2\Le|\lambda|}
\label{estim-H1}
\end{align}
for any $\lambda\in\C$ with positive real part.
Since $H_{1,\lambda}$ and $H_{3,\lambda}$ can be obtained from $H_{2,\lambda}$ by taking $(\Le,A)=(1,0)$ and $(\Le,A)=(\Le,0)$, respectively, we also deduce that
\begin{align}
|H_{1,\lambda}|\ge\Re(H_{1,\lambda})\ge\sqrt{2|\lambda|},\qquad\;\,|H_{3,\lambda}|\ge\Re(H_{3,\lambda})\ge\sqrt{2\Le|\lambda|}
\label{estim-H2-H3}
\end{align}
for the same values of $\lambda$.
Thanks to \eqref{estim-H1} and \eqref{estim-H2-H3}, we can easily estimate the terms
${\mathscr I}_j$ $(j=1,\ldots,5)$. Indeed, since $\Re(k_1)+1/2>0$, we obtain
\begin{align*}
|{\mathscr I}_1|&=\bigg |\int_0^{\infty}f_1(s)e^{-k_1s}ds\bigg |\le \sup_{\xi>0}e^{\frac{1}{2}\xi}|f_1(\xi)|\int_0^{\infty}e^{-\frac{1}{2}\Re(H_{1,\lambda})s}ds \le C_1|\lambda|^{-\frac{1}{2}}\|\pmb f\|_{\bm{\mathcal W}}.
\end{align*}
The other terms ${\mathscr I}_j$ can be treated likewise and we get
$\sum_{j=2}^5|{\mathscr I}_j|\le C_2|\lambda|^{-\frac{1}{2}}\|\pmb f\|_{\bm{\mathcal W}}$ for every $\pmb f\in\bm{\mathcal W}$ and $\lambda\in\C$ with positive real part.
Next, we turn to the function $D(\cdot;\Theta_i,\Le)$. We observe that
\begin{align*}
|D(\lambda;\Theta_i,\Le)|\ge [(1-\Theta_i)\sqrt{|1+4\lambda|}-1]|k_{6,\lambda}-k_{3,\lambda}||k_{3,\lambda}-k_{2,\lambda}|-A\Le
\end{align*}
for any $\lambda\in\C$.
Taking \eqref{estim-H1} and \eqref{estim-H2-H3} into account, we can show that
\begin{equation}
C_3\sqrt{|\lambda|}\le |k_{3,\lambda}-k_{2,\lambda}|+|k_{3,\lambda}-k_{6,\lambda}|\le C_4\sqrt{|\lambda|}
\label{estimate-k1-k4}
\end{equation}
for $\lambda\in\C$ with sufficiently large positive real part. Hence, for such values of $\lambda$ we can continue the previous inequality and get
\begin{align}
|D(\lambda;\Theta_i,\Le)|\ge C_5|\lambda|^{\frac{3}{2}}.
\label{estimate-D}
\end{align}
Similarly, $|k_{6,\lambda}-k_{4,\lambda}|\le C_6\sqrt{|\lambda|}$
for any $\lambda$ with positive real part and
\begin{equation}
|k_{4,\lambda}-k_{2,\lambda}|\ge \frac{1}{2}|H_{2,\lambda}|-\frac{1}{2}|H_{1,\lambda}|-\frac{\Le-1}{2}
\ge \sqrt{\frac{\Le|\lambda|}{2}}-\sqrt{\frac{|\lambda|}{2}}-\frac{\Le-1}{2}\ge C_7\sqrt{|\lambda|},
\label{estim-k2-k4}
\end{equation}
if $\Re\lambda$ is sufficiently large. From \eqref{estim-H1}-\eqref{estim-k2-k4} we infer that
$|c_1|+|c_3|+|c_6|+|c_8|\le C_8|\lambda|^{-1}$ for any $\lambda\in\C$ with $\Re(\lambda)\ge M$ and
a suitable positive constant $M$. Further, observing that
\begin{eqnarray*}
|k_{3,\lambda}-k_{1,\lambda}|+|k_{4,\lambda}-k_{1,\lambda}|\ge C_9\sqrt{|\lambda|},\qquad\;\,|k_{4,\lambda}-k_{3,\lambda}|\le C_{10}\sqrt{|\lambda|},
\end{eqnarray*}
we are now able to estimate the functions $u$ and $v$ in \eqref{re-u-negative}-\eqref{re-v-positive} and show that
\eqref{resolvent estimate} holds true. The proof is complete.
\end{proof}
\begin{remark}
\rm{It is worth pointing out that, as $\Le \to \infty$, the set ${\mathcal P}$ degenerates into a vertical line $\Re \lambda=-\Theta_i(1-\Theta_i)^{-1}-1/2$. In the limit case, the system is partly parabolic and the semigroup is not analytic, see, e.g., \cite[Section 1, p. 2435]{GLS10}.}
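{\rm Indeed, by \eqref{eqn:A}, $A$ tends to $\Theta_i(1-\Theta_i)^{-1}$ as $\Le\to\infty$, so that the coefficients in Theorem \ref{thm-2.3}(1) satisfy
\begin{eqnarray*}
a\to 1,\qquad\;\, b\to 0,\qquad\;\, c\to\frac{\Theta_i}{1-\Theta_i}+\frac{1}{2},
\end{eqnarray*}
and the region $a\Re\lambda+b(\Im\lambda)^2+c\le 0$ formally reduces to the half-plane whose boundary is precisely this vertical line.}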
\end{remark}
\setcounter{tocdepth}{2}
\section{The fully nonlinear problem}
\label{sect-3}
Our goal in this section is to get rid of the eigenvalue 0 and then derive a new fully nonlinear problem.
We recall that the eigenvalue $0$ is related to the translation invariance of the traveling wave. As a first step, we use a method similar to that of \cite{BLS92} or \cite[p. 358]{Lunardi96}.
\subsection{Ansatz revisited: elimination of the eigenvalue $0$}
\label{subsect-2.3}
It is convenient to write System \eqref{u_1}-\eqref{u_2}, with the notation $\pmb u=(u_1,u_2)$, $\pmb U=(\Theta^0,\Phi^0)$ of Subsection \ref{fixed}, in the abstract form:
\begin{equation}
\label{v}
\dot{\pmb u}=L\pmb u+\dot{s}{\pmb U}^{\prime}+\dot{s}{\pmb u}^{\prime}.
\end{equation}
Note that, in view of \eqref{interface-u}, $\pmb u(\tau,\cdot)$ belongs to $D(L)$ for each $\tau$.
Since 0 is an isolated simple eigenvalue of $L$, we can introduce the spectral projection $P$ onto the kernel of $L$, defined by
$P\pmb f=\langle\pmb f,{\pmb e^{*}}\rangle\pmb U^{\prime}$ for every $\pmb f\in\bm{\mathcal W}$ and a unique ${\pmb e^{*}}\in \bm{\mathcal W}^*$, the dual space of $\bm{\mathcal W}$, such that $\langle\pmb U^{\prime},{\pmb e^{*}}\rangle=1$. For further use, we recall that $P$ commutes with $L$ on $D(L)$.
We are going to apply the projections $P$ and $Q=I-P$ to System ($\ref{v}$) to remove the eigenvalue $0$.
\medskip
\paragraph{\bf Ansatz 2} We split $\pmb u$ into $\pmb u(\tau,\cdot)=P\pmb u(\tau,\cdot)+Q\pmb u(\tau,\cdot)=p(\tau)\pmb U^{\prime}+\pmb w(\tau,\cdot)$, i.e.,
\begin{align}
u_1(\tau, \xi)=&p(\tau)\frac{d\Theta^0}{d\xi}(\xi)+w_1(\tau, \xi),\label{splitting-w1}\\
u_2(\tau, \xi)=&p(\tau)\frac{d\Phi^0}{d\xi}(\xi)+w_2(\tau, \xi),\notag
\end{align}
where $p(\tau)=\langle \pmb u(\tau),{\pmb e^{*}}\rangle $ and $\pmb w=(w_1,w_2)$. Clearly, $\pmb w(\tau,\cdot)\in Q(D(L))$ for each $\tau$.
It follows from ($\ref{v}$) that
\begin{equation}
\dot{p}=\dot{s}+\dot{s}\langle\pmb u^{\prime},{\pmb e^{*}}\rangle,\qquad\;\,
\dot{\pmb w}=L\pmb w+\dot{s}Q\pmb u^{\prime},
\label{w}
\end{equation}
a Lyapunov-Schmidt-like reduction of the problem. We point out that the above procedure generates a new ansatz slightly different from ansatz 1 (see \eqref{ansatz1}) that helps us determine the functional framework.
Thanks to the new ansatz 2, we are going to derive an equation for $\pmb w$ in the space $\bm{\mathcal W}$. Now, the spectrum of the part of $L$ in $Q(\bm{\mathcal W})$ does not contain the eigenvalue $0$.
\subsection{Derivation of the fully nonlinear equation}
To get a self-contained equation for $\pmb w$, we need to eliminate $\dot{s}$ from the right-hand side of the second equation in \eqref{w}. For this purpose,
we begin by evaluating the first component of \eqref{w} at $\xi=0^+$ to get
\begin{align}
\frac{\partial w_1}{\partial\tau}(\cdot,0^+)=&(L\pmb w)_1(\cdot,0^+)+\dot{s}(Q\pmb u')_1(\cdot,0^+)\notag\\
=&(L\pmb w)_1(\cdot,0^+)+\dot{s}\frac{\partial u_1}{\partial \xi}(\cdot,0^+)+\dot{s}\langle\pmb u',{\pmb e^{*}}\rangle\Theta_i.
\label{e}
\end{align}
Next, we observe that the function $w_1$ is continuous (but not differentiable) at $\xi=0$, since both $\pmb u$ and $\pmb U'$ are continuous at $\xi=0$. Therefore, evaluating \eqref{splitting-w1} at $\xi=0$ and recalling that $u_1(\tau,0)=0$ (see \eqref{interface-u}), we infer that
$w_1(\tau,0)=\Theta_ip(\tau)$. Differentiating this formula yields
\begin{align}
\frac{\partial w_1}{\partial\tau}(\cdot,0)=\dot{p}\Theta_i=\dot{s}\Theta_i+\dot{s}\langle \pmb u',{\pmb e^{*}}\rangle\Theta_i.
\label{key}
\end{align}
From ($\ref{e}$) and ($\ref{key}$), it follows that
\begin{equation}
\label{s}
\dot{s}\Theta_i=(L\pmb w)_1(\cdot,0^+)+\dot{s}\frac{\partial u_1}{\partial \xi}(\cdot,0^+).
\end{equation}
To get rid of the spatial derivatives of $u_1$ from the right-hand side of \eqref{s}, we use \eqref{splitting-w1} to write
\begin{eqnarray}
\label{u_1,w_1}
\frac{\partial u_1}{\partial \xi}(\cdot,0^+)=p\frac{d^2\Theta^0}{d\xi^2}(0^+)+w'_1(\cdot,0^+)
=w_1(\cdot,0)+w'_1(\cdot,0^+).
\end{eqnarray}
Plugging ($\ref{u_1,w_1}$) into ($\ref{s}$), we finally obtain the formula
\begin{equation}\label{shift}
\dot s=\frac{(L\pmb w)_1(\cdot,0^+)}{\Theta_i-w_1(\cdot,0)-w'_1(\cdot,0^+)},
\end{equation}
which can be regarded as an underlying \textit{second-order Stefan condition}, see \cite{BL18}.
Hence, replacing it in ($\ref{w}$), we get
\begin{align*}
\frac{\partial\pmb w}{\partial\tau}
=&L\pmb w+\frac{(L\pmb w)_1(\cdot,0^+)}{\Theta_i-w_1(\cdot,0)-w'_1(\cdot,0^+)}Q\pmb u' \nonumber\\
=&L\pmb w+\frac{(L\pmb w)_1(\cdot,0^+)}{\Theta_i-w_1(\cdot,0)-w'_1(\cdot, 0^+)}Q\bigg (\frac{w_1(\cdot,0)}{\Theta_i}\pmb{U''}+\pmb{w'}\bigg ),
\end{align*}
which is a fully nonlinear parabolic equation in the space $\bm{\mathcal W}$ written in a more abstract form:
\begin{equation}\label{FNLE}
\frac{\partial\pmb w}{\partial\tau} = L\pmb w + {F}(\pmb w), \quad {\pmb w}\in Q(D(L)),
\end{equation}
which is going to be the subject of our attention.
Note that Equation \eqref{FNLE} is fully nonlinear since the function $F$ depends on $\pmb w$ also through the limit at $0^+$ of $L\pmb w$.
Moreover, the operator $L$ is sectorial in $Q(\bm{\mathcal W})$.
Hence, we can take advantage of the theory of analytic semigroups to solve Equation \eqref{FNLE}. We refer the reader to \cite[Chapter 4]{Lunardi96} for further details.
\setcounter{tocdepth}{2}
\section{Stability of the traveling wave solution}\label{stability}
This section is devoted to the analysis of the stability of the traveling wave solution $\pmb U$. Here, stability refers to orbital stability with asymptotic phase $s_{\infty}$.
From now on, we focus on the asymptotic situation where the Lewis number, $\Le$, is large and, in this respect, we use the notation $\varepsilon = 1/\Le$ to stand for a small perturbation parameter. Simultaneously, we assume that $\Theta_i$ is close to the burning temperature normalized at unity, which is physically relevant (see \cite[Section 3.2, Fig. 5]{BGKS15}). More specifically, we restrict $\Theta_i$ to the domain $\frac{2}{3}<\Theta_i <1$.
In what follows, we introduce \hbox{$m:= \Theta_i/(1-\Theta_i)$} as the \textit{bifurcation parameter} which runs in the interval $(2,\infty)$, due to the choice of $\Theta_i$.
With the above notation, $A =m+\varep m^2$ and the \textit{dispersion relation} $D(\lambda;\Theta_i,\Le)$ of Section \ref{linear operator} (see \eqref{d,r,Le}) reads:
\begin{align}
\label{dispersion epsilon}
D_{\varepsilon}(\lambda;m) =&-\frac{1}{4}\big(\sqrt{1+4\varepsilon(m+\varepsilon m^2+\lambda)}+\sqrt{1+4\varepsilon \lambda}\big)\notag\\
&\qquad\times\bigg(\frac{1}{\varepsilon}[\sqrt{1+4\varepsilon(m+\varepsilon m^2+\lambda)}-1]\!+\!1\!+\!\sqrt{1+4\lambda}\bigg)\!
\bigg(1\!-\!\frac{\sqrt{1+4\lambda}}{1+m}\bigg )\!+\!m\!+\!\varepsilon m^2.
\end{align}
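Observe that $\lambda=0$ remains a root of $D_{\varepsilon}(\cdot;m)$ for every $\varepsilon>0$, consistently with Theorem \ref{thm-2.3}(2): since $1+4\varepsilon(m+\varepsilon m^2)=(1+2\varepsilon m)^2$, a direct substitution in \eqref{dispersion epsilon} gives
\begin{eqnarray*}
D_{\varepsilon}(0;m)=-\frac{1}{4}(2+2\varepsilon m)(2m+2)\,\frac{m}{1+m}+m+\varepsilon m^2=-m(1+\varepsilon m)+m+\varepsilon m^2=0.
\end{eqnarray*}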
This section is split into two parts. First, we study the stability of the null solution of the fully nonlinear equation \eqref{FNLE}. Second, we turn our attention to the stability of the traveling wave.
\subsection{Stability of the null solution of (\ref{FNLE})}
To begin with, we recall that the spectrum of the part of $L$ in $\bm{\mathcal W}_Q:=Q(\bm{\mathcal W})$ is the set
\begin{align*}
\left (-\infty,-{\textstyle \frac{1}{4}}\right ]\cup\mathcal{P}\cup\{\lambda\in\C\setminus\{0\}:D_{\varepsilon}(\lambda;m)=0\}.
\end{align*}
As we will show, the roots of the dispersion relation $D_{\varepsilon}(\cdot;m)$ are finitely many.
As a consequence, there is a gap between the spectrum of this operator and the imaginary axis (at least for $\varepsilon$ small enough).
In view of the principle of linearized stability, the main step in the analysis of the stability of the null solution of Equation \eqref{FNLE} is a deep insight into the solutions of the dispersion relation. More precisely, we need to determine when they are all contained in the left halfplane and when some of them lie in the right halfplane.
The limit critical value $m^c=6$ will play an important role in the analysis hereafter.
\begin{theorem}
\label{stability theorem FNLE}
The following properties are satisfied.
\begin{enumerate}[\rm (i)]
\item
Let $m\in (2,m^c)$ be fixed. Then, there exists $\varepsilon_0=\varepsilon_0(m)>0$ such that, for $\varepsilon\in (0,\varepsilon_0)$, the null solution of the fully nonlinear problem \eqref{FNLE} is stable with respect to perturbations belonging to $Q(D(L))$.
\item
Let $m>m^c$ be fixed. Then, there exists $\varepsilon_1=\varepsilon_1(m)$ small enough such that, for $\varepsilon\in (0,\varepsilon_1)$, the null solution of \eqref{FNLE} is unstable with respect to perturbations belonging to $Q(D(L))$.
\end{enumerate}
\end{theorem}
\begin{proof}
To begin with, we observe that the functions $D_{\varepsilon}(\cdot;m)$ are holomorphic in $\C\setminus (-\infty,-1/4]$ and therein they locally converge to the \textit{limit dispersion relation} $D_0(\cdot;m)$ defined by
\begin{align*}
D_0(\lambda;m)=&-\frac{1}{2}[2(m+\lambda)+1+\sqrt{1+4\lambda}]\left (1-\frac{\sqrt{1+4\lambda}}{1+m}\right )+m\notag\\
=&\frac{\sqrt{1+4\lambda}-1}{4(1+m)}[4\lambda-(m-2)\sqrt{1+4\lambda}+m+2],
\end{align*}
as $\varepsilon\to 0^+$.
The solutions of the equation $D_0(\lambda;m)=0$ are $\lambda=0$, for all $m$,
and those roots of the second-order polynomial $4\lambda^2+(6m-m^2)\lambda+2m$ whose real part is not less than $-(m+2)/4$.
This polynomial admits conjugate solutions $\lambda_{1,2}= a(m) \pm ib(m)$, where $a(m)=\frac{1}{8}(m^2-6m)$ and $b(m)= \frac{1}{8}(m-2)\sqrt{|8m-m^2|}$, if $m\in (2,8)$ and real solutions $\lambda_{1,2}= a(m) \pm b(m)$ otherwise. The coefficient $a(m)$ is negative whenever $2<m<6$ and positive for $m>6$. It can be easily checked
that ${\rm Re}(\lambda_{1,2})\ge -(m+2)/4$ for each $m\in (2,\infty)$, so that $\lambda_{1,2}$ solve the equation $D_0(\lambda; m)=0$. In particular, there are two conjugate purely imaginary roots $\lambda_{1,2}=\pm\sqrt{3}i$ at $m=6$.
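For instance, at $m=m^c=6$ a direct computation gives
\begin{eqnarray*}
a(6)=\frac{36-36}{8}=0,\qquad\;\, b(6)=\frac{6-2}{8}\sqrt{|48-36|}=\frac{\sqrt{12}}{2}=\sqrt{3},
\end{eqnarray*}
in agreement with the purely imaginary pair $\pm i\sqrt 3$.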
We can now prove properties (i) and (ii).
(i) Fix $\rho>0$ such that the closures of the disks with centers $\lambda_{1,2}$ and radius $\rho$ are contained in $\{\Re z <0\}\backslash (-\infty,-\frac{1}{4}]$. Hurwitz Theorem (see, e.g., \cite[Chapter 7, Section 2]{Conway78}) and the above results show that there exists $\varepsilon_0 >0$ such that, for $\varepsilon\in (0,\varepsilon_0)$, $D_{\varepsilon}(\lambda;m)$ admits exactly two conjugate complex roots $\lambda_{1,2}(\varepsilon)$ in the disks $|\lambda-\lambda_{i}|<\rho$, and $\lambda_{i}(\varepsilon)$ converges to $\lambda_i$, as $\varepsilon \to 0$, for $i=1,2$. Therefore, all the elements of the spectrum of the part of the operator $L$
in $\bm{\mathcal W}_Q$ have negative real parts, which implies that the operator norm of the restriction to $\bm{\mathcal W}_Q$ of the analytic semigroup $e^{\tau L}$ generated by $L$ decays to zero with exponential rate as $\tau\to\infty$. Now, the nonlinear stability follows from standard machinery: the solution of Equation \eqref{FNLE}, with initial datum $\pmb w_0$ in a small (enough) ball of $Q(D(L))$ centered at zero, is given by the variation-of-constants formula
\begin{eqnarray*}
\pmb w(\tau,\cdot)=e^{\tau L}\pmb w_0+\int_0^{\tau}e^{(\tau-s)L}F(\pmb w(s,\cdot))ds,\qquad\;\,\tau>0.
\end{eqnarray*}
Applying the Banach fixed point theorem in the space
\begin{eqnarray*}
\bm{\mathcal X}^{\alpha}_{\omega}\!=\!\bigg\{\pmb w\!\in\! C([0,\infty);\pmb{\mathcal W}_Q):\sup_{\sigma\in (0,1)}\sigma^{\alpha}\|\pmb w\|_{C^{\alpha}([\sigma,1];D(L))}<\infty,\ \tau\mapsto e^{\omega\tau}\pmb w(\tau,\cdot)\!\in\! C^{\alpha}([1,\infty);D(L))\bigg\},
\end{eqnarray*}
endowed with the natural norm, where $\alpha$ is fixed in $(0,1)$ and $\omega$ is any positive number less than the real part of
$\lambda_1(\varepsilon)$, allows us to prove the existence and uniqueness of a solution $\pmb w$ of \eqref{FNLE}, defined in $(0,\infty)$ such that
$\|\pmb w(\tau,\cdot)\|_{\pmb{\mathcal W}}+\|L{\pmb w}(\tau,\cdot)\|_{\pmb{\mathcal W}}
\le Ce^{-\omega\tau}\|\pmb w_0\|_{D(L)}$ for $\tau\in (0,\infty)$ and some positive constant $C$, which yields the claim. For further details see \cite[Chapter 9]{Lunardi96}.
(ii) For $m>m^c$, we use again Hurwitz Theorem to show that there exists $\varepsilon_1=\varepsilon_1(m)>0$ such that the equation
$D_{\varepsilon}(\lambda,m)=0$ admits a solution with positive real part if $\varepsilon\in (0,\varepsilon_1)$. More precisely, it admits a couple of conjugate complex roots with positive real parts, if $m<8$, a positive root, if $m=8$, and two real solutions if $m>8$. For these values of $\varepsilon$, the restriction of the semigroup $e^{\tau L}$ to $\bm{\mathcal W}_Q$ exhibits an exponential dichotomy, i.e., there exists a spectral projection $P_+$ which allows us to split $\bm{\mathcal W}_Q=P_+(\bm{\mathcal W}_Q)\oplus (I-P_+)(\bm{\mathcal W}_Q)$. The semigroup $e^{\tau L}$ decays to zero with exponential rate when restricted to $(I-P_+)(\bm{\mathcal W}_Q)$, whereas the restriction of $e^{\tau L}$ to $P_+(\bm{\mathcal W}_Q)$
extends to a group which decays to zero with exponential rate as $\tau\to-\infty$. Again with a fixed point technique, we can prove the existence of a nontrivial backward solution
$\pmb z$ of the nonlinear equation \eqref{FNLE}, defined in $(-\infty,0)$ such that
$\|\pmb z(\tau,\cdot)\|_{\pmb{\mathcal W}}+\|L\pmb z(\tau,\cdot)\|_{\pmb{\mathcal W}}\le C_{\omega}e^{\omega\tau}$ for $\tau\in (-\infty,0)$ and
any $\omega$ positive and smaller than the minimum of the positive real parts of the roots of the dispersion relation.
The sequence $(\pmb z_n)$ defined by $\pmb z_n=\pmb z(-n,\cdot)$ vanishes in $D(L)$ as $n\to+\infty$ and the solution $\pmb w_n$ to \eqref{FNLE} subject to the initial condition
$\pmb w_n(0,\cdot)=\pmb z_n$ exists at least in the time domain $[0,n]$, where it coincides with the function $\pmb z(\cdot-n,\cdot)$. Thus,
the norm $\|\pmb w_n\|_{C([0,n];\pmb{\mathcal W}_Q)}$ is positive and bounded away from zero, uniformly with respect to $n\in\N$, whence the instability of the trivial solution of
\eqref{FNLE} follows. Again, we refer the reader to \cite[Chapter 9]{Lunardi96} for further results.
\end{proof}
\subsection{Stability of the traveling wave}
We can now rewrite the results in Theorem \ref{stability theorem FNLE} in terms of problem
\eqref{perturbation T}-\eqref{interface-Theta-Phi}.
\begin{theorem}
\label{stability theorem TW}
The following properties are satisfied.
\begin{enumerate}[\rm (i)]
\item
For $m\in (2,m^c)$ fixed, there exists $\varepsilon_0=\varepsilon_0(m)>0$ such that, for $\varepsilon\in (0,\varepsilon_0)$, the traveling wave solution $\pmb U$ is orbitally stable with asymptotic phase $s_{\infty}$ $($see \eqref{s-infty}$)$, with respect to perturbations belonging to the weighted space $D(L)$.
\item
For $m>m^c$ fixed, there exists $\varepsilon_1=\varepsilon_1(m)$ small enough such that, for $\varepsilon\in (0,\varepsilon_1)$, the traveling wave $\pmb U$ is unstable with respect to perturbations belonging to the weighted space $D(L)$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) Let us fix $\pmb w_0\in Q(D(L))$ with $\|\pmb w_0\|_{D(L)}$ small enough, so that Theorem \ref{stability theorem FNLE}(i) can be applied.
Denote by $\pmb w$ the classical solution to Equation \eqref{FNLE} which satisfies the initial condition $\pmb w(0,\cdot)=\pmb w_0=(w_{0,1},w_{0,2})$.
Observe that, since $p=\Theta_i^{-1}w_1(\cdot,0)$ (see Subsection \ref{subsect-2.3}), it follows that the problem \eqref{v}, subject to the initial condition $\pmb u(0,\cdot)=\Theta_i^{-1}w_{0,1}\pmb U'+\pmb w_0$, admits a unique classical solution $(\pmb u,s)$, where $\pmb u$ decays to zero with exponential rate as $\tau\to\infty$. Moreover, using \eqref{shift} it is immediate to check
that $s(\tau)$ converges to
\begin{equation}
s_\infty=\int_{0}^{\infty}\frac{(L\pmb w)_1(\tau,0^+)}{\Theta_i-w_1(\tau,0)-w'_1(\tau,0^+)}d\tau,
\label{s-infty}
\end{equation}
as $\tau\to\infty$ (assuming for simplicity that $g$ vanishes at $\tau=0$).
We point out that $s_{\infty}$ depends on the initial condition.
Coming back to problem \eqref{perturbation T}-\eqref{interface-Theta-Phi} with initial condition ${\pmb X}(0)=\pmb u_0+\pmb U$ and $g(0)=0$, we easily see that the solution ${\pmb X}=(\Theta,\Phi)$ is defined by
\begin{align*}
&{\pmb X}=p\pmb U'+\pmb w+\pmb U=\Theta_i^{-1}w_1(\cdot,0)\pmb U'+\pmb w+\pmb U,\\
&g(\tau)=\tau+\int_0^{\tau}\frac{(L\pmb w)_1(\sigma,0^+)}{\Theta_i-w_1(\sigma,0)-w'_1(\sigma,0^+)}d\sigma,\qquad\;\,\tau\ge 0.
\end{align*}
From this formula and the above result, the claim follows at once.
(ii) The proof is similar to that of property (i) and, hence, it is left to the reader.
\end{proof}
\setcounter{tocdepth}{2}
\section{Hopf bifurcation}
\label{sect-5}
This section is devoted to investigating the dynamics of the perturbation of the traveling wave in a neighborhood, say $(6-\delta, 6+\delta)$, of the limit critical value $m^c=6$ (see Section \ref{stability}). As regards the parameter $m$, the situation is more complicated than in Section \ref{stability}, where it was kept fixed. Now, the dispersion relation ${D}_{\varepsilon}(\lambda;m)$ can be seen as a sequence of analytic functions parameterized by $m$. The main difficulty here is that Hurwitz Theorem does not a priori apply, particularly because of the lack of uniformity of ${D}_{\varepsilon}(\lambda;m)$ with respect to $\varep$ and $m$. The key point is to find a proper approach to combining $m$ with $\varep$: we construct in Proposition \ref{give critical value} a sequence of critical values $m^c(\varep)$ such that $m^c(0)=m^c$ and apply Hurwitz Theorem to the sequence $D_{\varep}(\lambda,m^c(\varep))$. This proposition will be crucial for proving the existence of a Hopf bifurcation (see Theorem \ref{Hopf bifurcation theorem}).
\subsection{Local analysis of the dispersion relation}\label{p7}
We look for the roots of the \textit{dispersion relation}, see \eqref{dispersion epsilon},
in a neighborhood of $m^c=6$ and of $\lambda = \pm i\sqrt{3}$, for $\varep>0$ small enough. A natural idea is to turn the dispersion relation into a polynomial by squaring; however, the price to pay is twofold: the resulting polynomial is of high degree, with no algebraic expression for its roots, and
spurious roots appear.
For convenience, we first rewrite the equation $D_{\varepsilon}(\lambda;m)=0$ in a more tractable form. Replacing
$\sqrt{1+4\varepsilon(m+\varepsilon m^2+\lambda)}+\sqrt{1+4\varepsilon \lambda}$ by
$4\varepsilon(m+\varepsilon m^2)(\sqrt{1+4\varepsilon(m+\varepsilon m^2+\lambda)}-\sqrt{1+4\varepsilon \lambda})^{-1}$ with some straightforward algebra
we obtain the equivalent equation
\begin{equation}
\sqrt{1+4\varepsilon \lambda}-\frac{1}{1+m}\sqrt{1+4\varepsilon(m+\varepsilon m^2+\lambda)}\sqrt{1+4\lambda}+\frac{1+\varepsilon m}{1+m}\sqrt{1+4\lambda}
=\varepsilon\frac{1+4\lambda}{1+m}+1-\varepsilon.
\label{zeta}
\end{equation}
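This replacement rests on the elementary conjugate identity
\begin{eqnarray*}
\sqrt{a}+\sqrt{b}=\frac{a-b}{\sqrt{a}-\sqrt{b}},\qquad\;\, a=1+4\varepsilon(m+\varepsilon m^2+\lambda),\qquad\;\, b=1+4\varepsilon\lambda,
\end{eqnarray*}
for which $a-b=4\varepsilon(m+\varepsilon m^2)$.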
If we denote by $\zeta$ the right-hand side of \eqref{zeta} and set
\begin{align*}
\Sigma_1=&1+4\varepsilon\lambda+\frac{2+6\varepsilon m+5\varepsilon^2 m^2+4\varepsilon\lambda}{(1+m)^2}(1+4\lambda),\\[1mm]
\Sigma_2=&\frac{1+4\lambda}{(1+m)^2}\bigg [(2+6\varepsilon m+5\varepsilon^2 m^2+4\varepsilon\lambda)(1+4\varepsilon\lambda)
+\frac{[1+4\varepsilon(m+\varepsilon m^2+\lambda)](1+\varepsilon m)^2}{(1+m)^2}(1+4\lambda)\bigg ],\\[1mm]
\Sigma_3=&\frac{[1+4\varepsilon(m+\varepsilon m^2+\lambda)](1+\varepsilon m)^2}{(1+m)^4}(1+4\varepsilon\lambda)(1+4\lambda)^2.
\end{align*}
Squaring both sides of \eqref{zeta} and rearranging terms we get the equation
\begin{align}
\zeta^2-\Sigma_1=\frac{2\sqrt{1+4\lambda}}{1+m}\bigg \{&\sqrt{1+4\varepsilon \lambda}[1+\varepsilon m-\sqrt{1+4\varepsilon(m+\varepsilon m^2+\lambda)}]\notag\\
&-\frac{1+\varepsilon m}{1+m}\sqrt{1+4\lambda}\sqrt{1+4\varepsilon(m+\varepsilon m^2+\lambda)}\bigg \}.
\label{zeta-1}
\end{align}
Squaring both sides of \eqref{zeta-1} and rearranging terms gives
\begin{align}
(\zeta^2-\Sigma_1)^2-4\Sigma_2=\frac{8\sqrt{1+4\varepsilon \lambda}(1+4\lambda)}{(1+m)^2}\bigg [&
\frac{[1+4\varepsilon(m+\varepsilon m^2+\lambda)](1+\varepsilon m)}{1+m}\sqrt{1+4\lambda}\notag\\
&-\frac{(1+\varepsilon m)^2}{1+m}\sqrt{1+4\varepsilon(m+\varepsilon m^2+\lambda)}\sqrt{1+4\lambda}\notag\\
&-(1+\varepsilon m)\sqrt{1+4\varepsilon\lambda}\sqrt{1+4\varepsilon(m+\varepsilon m^2+\lambda)}\bigg ].
\label{zeta-2}
\end{align}
Finally, squaring both sides of \eqref{zeta-2} and using \eqref{zeta-1}, we conclude that
$[(\zeta^2-\Sigma_1)^2-4\Sigma_2]^2-64\Sigma_3\zeta^2=0$ or, equivalently, $P_7(\lambda;m,\varepsilon)=0$, where
$P_7(\cdot;m,\varepsilon)$ is a seventh-order polynomial (see Appendix \ref{appendix-C} for the expression of the coefficients of the polynomial).
Finding the roots of $P_7(\cdot;m,\varepsilon)$ is quite challenging. The Routh-Hurwitz criterion (see, e.g., \cite[Chapter XV]{Gantmakher98}) gives relevant information on the roots without computing them explicitly, in particular whether they lie in the left half-plane ${\rm Re}\, \lambda <0$, by means of the Hurwitz determinants $\Delta_j$ ($j=1,\ldots,6$) associated with $P_7(\lambda;m,\varepsilon)$.
Unfortunately, our double-squaring method produces spurious roots, which render the Routh-Hurwitz criterion ineffective.
However, Orlando's formula (see \cite[Chapter XV, 7]{Gantmakher98}), a generalization of the well-known property for the sum of the roots of a quadratic equation, establishes a relation between the leading Hurwitz determinant $\Delta_{6}$ and the sums of all distinct pairs of roots of $P_7(\lambda;m,\varepsilon)$. In particular, $\Delta_{6}=0$ in the case when either $0$ is a double root (i.e., $0$ is a root with algebraic multiplicity two) or two roots are purely imaginary and conjugate.
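To see Orlando's formula at work on a toy example (which plays no role in the arguments below): the cubic $p(\lambda)=(\lambda+1)(\lambda^2+\omega^2)=\lambda^3+\lambda^2+\omega^2\lambda+\omega^2$ has the purely imaginary conjugate roots $\pm i\omega$ and, accordingly, its second Hurwitz determinant vanishes: $\Delta_2=a_1a_2-a_0a_3=\omega^2-\omega^2=0$. The following \texttt{sympy} script, offered only as an aside, computes the leading principal minors of the Hurwitz matrix and confirms this.
\begin{verbatim}
# Toy illustration of Orlando's formula (not the paper's P_7): the
# penultimate Hurwitz determinant vanishes when two roots sum to zero.
import sympy as sp

def hurwitz_minors(coeffs):
    # coeffs = [a0, ..., an] of p(x) = a0*x^n + a1*x^(n-1) + ... + an
    n = len(coeffs) - 1
    a = lambda k: coeffs[k] if 0 <= k <= n else 0
    H = sp.Matrix(n, n, lambda i, j: a(2*(j + 1) - (i + 1)))
    return [H[:k, :k].det() for k in range(1, n + 1)]

x, w = sp.Symbol('x'), sp.Symbol('omega', positive=True)
p = sp.Poly((x + 1)*(x**2 + w**2), x)   # roots: -1 and +/- i*omega
print(hurwitz_minors(p.all_coeffs()))   # [1, 0, 0]: Delta_2 = 0
\end{verbatim}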
The following is the main result of this subsection.
\begin{proposition}
\label{give critical value}
There exist $\varepsilon_0>0$ and $\delta>0$, and a unique function $m^c: (0,\varepsilon_0)\to (6-\delta, 6+\delta)$ with $m^c(0)=6$, such that the polynomial $\widetilde{P}_7(\lambda;\varepsilon):=P_7(\lambda; m^c(\varepsilon), \varepsilon)$ has exactly one pair of purely imaginary roots $\pm i\omega(\varepsilon)$, with $\omega(\varepsilon)>0$.
Moreover, $\omega(\varepsilon)$ converges to $\sqrt{3}$ as $\varepsilon$ tends to $0$.
\end{proposition}
We first need a preliminary technical lemma:
\begin{lemma}\label{technical}
There exist $\upsilon_0>0$ and $\varepsilon_*>0$ such that, for all $m$ in the interval $[3,7]$ $($to fix ideas$)$, $\varepsilon\in (0,\varepsilon_*)$ and any
purely imaginary root $i\upsilon$ of $P_7(\cdot;m,\varepsilon)$, with $\upsilon>0$, it holds that $0<\upsilon<\upsilon_0$.
\end{lemma}
\begin{proof}
We observe that, if $i\upsilon$ is a root of $P_7(\cdot;m,\varepsilon)$, then, in particular, the imaginary part of $P_7(i\upsilon;m,\varepsilon)$, i.e., the
term $-a_0\upsilon^7+a_2\upsilon^5-a_4\upsilon^3+a_6\upsilon$ vanishes.
A straightforward computation (see Appendix \ref{appendix-C}) reveals that
\begin{align*}
\Im{P_7(i\zeta;m,\varepsilon)}=&-2048(\varepsilon-1)^4\varepsilon^2\zeta^7-8\varepsilon(m^2+3m+2)\zeta^5+O(\varepsilon^2)\zeta^5\\
&-128(2m^4-7m^2-3m-1)\zeta^3+O(\varepsilon)\zeta^3+a_6\zeta,
\end{align*}
for every $\zeta>0$, where we denote by $O(\varepsilon^k)$ terms depending only on $\varepsilon$ such that
the ratio $O(\varepsilon^k)/\varepsilon^k$ stays bounded and far away from zero for $\varepsilon$ in a neighborhood of zero.
Since $m^2+3m+2$ and $2m^4-7m^2-3m-1$ are both positive for $m\in [3,\infty)$ (the latter because it equals $89$ at $m=3$ and its derivative $8m^3-14m-3$ is positive on $[3,\infty)$), we can estimate
\begin{align*}
|\Im{P_7(i\zeta;m,\varepsilon)}|
\ge &[8(m^2+3m+2)-O(\varepsilon)]\varepsilon\zeta^5\!+\![128(2m^4-7m^2-3m-1)-O(\varepsilon)]\zeta^3\!-\!K|\zeta|,
\end{align*}
where $K:=\max\{|a_6(m,\varepsilon)|: m\in [3,7], \varepsilon\in (0,1]\}$. Hence, we can determine $\varepsilon_*>0$ such that
\begin{align}
|\Im{P_7(i\zeta;m,\varepsilon)}|
\ge & 64(2m^4-7m^2-3m-1)\zeta^3-K|\zeta|,\qquad\;\,m\in [3,7],\;\,\varepsilon\in (0,\varepsilon_*).
\label{imaginary}
\end{align}
The right-hand side of \eqref{imaginary} diverges to $\infty$ as $\zeta\to+\infty$. From this it follows that there exists $\upsilon_0>0$ such that
$|\Im{P_7(i\zeta;m,\varepsilon)}|>0$ for every $\zeta\ge\upsilon_0$, and this clearly implies that $\upsilon<\upsilon_0$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{give critical value}]
We split the proof into two steps.
\vskip 1mm
{\em Step 1}. First, we prove the existence of a function $m^c$ with the properties listed in the statement of the proposition.
For this purpose, we consider the sixth-order Hurwitz determinant ${\Delta}_6(m,\varepsilon)$ associated with the polynomial $P_7(\lambda;m,\varep)$.
It turns out that
${\Delta}_6(m,\varepsilon)=\varepsilon^2m^2C\widetilde\Delta_6(m,\varepsilon)$ for some positive constant $C$. As $\varepsilon\to 0$,
$\widetilde\Delta_6(\cdot,\varepsilon)$ converges to the function ${\Delta}_0$, which is defined by
\begin{align*}
{\Delta}_0(m)=&-m^{18}+8m^{17}+97m^{16}+42m^{15}-2129m^{14}-9376m^{13}-16811m^{12}\\
&-7866m^{11}+19913m^{10}+31292m^9-4309m^8-55466m^7-66363m^6\\
&-35480m^5-4729m^4+4666m^3+2628m^2+500m+24.
\end{align*}
Noticing that ${\Delta}_0(6)=0$ and $\frac{d}{dm}{\Delta}_0(6)>0$, it then follows from the Implicit Function Theorem that there exist $\varepsilon_0\in (0,\varepsilon_*)$, with $\varepsilon_*$ given by Lemma \ref{technical}, $\delta>0$ and a unique mapping $m^c: (0,\varepsilon_0)\to (6-\delta, 6+\delta)$ with $m^c(0)=6$, such that $\widetilde\Delta_6(m^c(\varepsilon),\varepsilon)=0$ and $\frac{\partial}{\partial m}\widetilde\Delta_6(m^c(\varepsilon),\varepsilon)>0$ for $\varepsilon\in (0,\varepsilon_0)$. Then, upon an application of Orlando's formula, it follows that either $0$ is a double root of $\widetilde P_7(\lambda;\varepsilon)$ or there exists at least one pair $\pm \omega(\varepsilon)i$ (with $\omega(\varepsilon)>0$) of purely imaginary roots of $\widetilde P_7(\lambda;\varepsilon)$ for every $\varepsilon\in (0,\varepsilon_0)$. The first case is ruled out, since $0$ is not a root of $\widetilde P_7(\lambda;\varepsilon)$: indeed, the constant term $a_7(m,\varepsilon)$ converges to a positive limit as $\varepsilon$ tends to $0$.
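As a sanity check on the coefficients of $\Delta_0$, the equalities ${\Delta}_0(6)=0$ and $\frac{d}{dm}{\Delta}_0(6)>0$ can be verified with a few lines of \texttt{sympy}; the script below is only an aside and assumes that the coefficients of $\Delta_0$ are transcribed exactly as above.
\begin{verbatim}
import sympy as sp

m = sp.Symbol('m')
Delta0 = (-m**18 + 8*m**17 + 97*m**16 + 42*m**15 - 2129*m**14
          - 9376*m**13 - 16811*m**12 - 7866*m**11 + 19913*m**10
          + 31292*m**9 - 4309*m**8 - 55466*m**7 - 66363*m**6
          - 35480*m**5 - 4729*m**4 + 4666*m**3 + 2628*m**2
          + 500*m + 24)
print(Delta0.subs(m, 6))                  # 0
print(sp.diff(Delta0, m).subs(m, 6) > 0)  # True
\end{verbatim}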
\vskip 1mm
{\em Step 2}.
Next, we prove that $\pm\omega(\varepsilon)i$ is the unique pair of purely imaginary roots of the polynomial $\widetilde P_7(\lambda;\varepsilon)$ for every $\varepsilon\in (0,\varepsilon_0)$. For this purpose, we begin by observing that $\widetilde{P}_7(\cdot;\varepsilon)$ converges, locally uniformly in $\mathbb C$ as $\varepsilon\to 0$, to the fourth-order polynomial $\widetilde P_4$, defined by $\widetilde{P}_4(\lambda)=-6272(4\lambda+1)(\lambda-12)(\lambda^2+3)$ for every $\lambda\in\mathbb C$.
By Hurwitz Theorem, four roots of $\widetilde{P}_7(\lambda; \varep)$, say $\lambda_1(\varep)$, $\lambda_2(\varep)$, $\lambda_3(\varep)$ and $\lambda_4(\varep)$, converge respectively to $\lambda_1(0)=-\frac{1}{4}$, $\lambda_2(0)=12$, $\lambda_3(0)= \sqrt{3}i$ and $\lambda_4(0)=-\sqrt{3}i$. More precisely, for $r_1>0$ small enough and $\varep\in (0,\varepsilon_0)$ (up to replacing $\varepsilon_0$ with a smaller value if needed), $\lambda_i(\varepsilon)$ is the unique root of $\widetilde{P}_7(\cdot;\varep)$ in the ball $B(\lambda_i(0),r_1)$, and it is simple ($i=1,\ldots,4$).
Assume by contradiction that there exists a sequence $\{\varepsilon_n\}$ of positive numbers converging to zero such that, for any $n\in\N$, $(\lambda_{5}(\varepsilon_n),\lambda_6(\varepsilon_n))$ is another pair of purely imaginary and conjugate roots of $\widetilde P_7(\lambda; \varepsilon_n)$, different from $\pm\omega(\varepsilon_n)i$. By
Lemma \ref{technical}, $\nu(\varep_n)=|\lambda_5(\varepsilon_n)|\leq \upsilon_0$ for every $n\in\N$. Take a subsequence $\{\varep_{n_k}\}$ such that $\nu({\varep}_{n_k})$ converges as $k \to \infty$. The local uniform convergence in $\mathbb C$ of
$\widetilde P_7(\cdot;\varepsilon_n)$ to $\widetilde P_4$ implies that $\nu({\varep}_{n_k})$ tends to $\sqrt{3}$ as $k\to\infty$. Since the limit is independent of the choice of subsequence $\{\varep_{n_k}\}$, we conclude that $\nu(\varep_n)$ converges to $\sqrt{3}$ as $n\to\infty$.
Next, thanks to Hurwitz Theorem and the fact that $\lambda_3(\varep)$ and $\lambda_4(\varep)$ converge to $\sqrt{3}i$ and $-\sqrt{3}i$, respectively, the pair
$(\lambda_5(\varepsilon_{n_k}),\lambda_6(\varepsilon_{n_k}))$ coincides, for $k$ large enough, with $(\lambda_3(\varepsilon_{n_k}),\lambda_4(\varepsilon_{n_k}))$ in $B(\sqrt{3}i,r_1)\times B(-\sqrt{3}i,r_1)$. This contradicts the fact that $\lambda_3(\varepsilon_{n_k})$ and $\lambda_4(\varepsilon_{n_k})$ are both simple. Up to
replacing $\varepsilon_0$ with a smaller value if needed, we have proved that $(\omega(\varepsilon)i,-\omega(\varepsilon)i)$ is the unique pair of purely imaginary conjugate roots of
$\widetilde P_7(\cdot;\varepsilon)$ and that $\lambda_3(\varepsilon)=\omega(\varepsilon)i$ for every $\varepsilon\in (0,\varepsilon_0)$. The proof is now complete.
\end{proof}
\setcounter{tocdepth}{2}
\subsection{Hopf bifurcation theorem}
\label{subsect-5.2}
Fix $\varepsilon\in (0,\varepsilon_0)$, where $\varep_0$ and $\delta$ are given by Proposition \ref{give critical value}, and consider the fully nonlinear problem \eqref{FNLE},
where now we find it convenient to write $F(\pmb w;m)$ instead of $F(\pmb w)$ to make the dependence of the nonlinear term $F$ on the bifurcation parameter $m$ explicit.
According to Proposition \ref{give critical value}, the bifurcation parameter $m$ has a critical value $m^c(\varep) \in (6-\delta,6+\delta)$. We intend to prove that a Hopf bifurcation occurs at $m=m^c(\varep)$ if $\varep$ is small enough. For $m$ close to $m^c(\varep)$, we locally parameterize $m$ and $\pmb w$ by a parameter $\sigma \in (-\sigma_0,\sigma_0)$. To emphasize this dependence, we will write $\widetilde{m}(\sigma)$ and $\widetilde{\pmb w}(\cdot,\cdot;\sigma)$.
\begin{theorem}
\label{Hopf bifurcation theorem} For any fixed $\alpha\in (0,1)$, there exists $\tilde{\varep}_0\in (0,\varep_0)$, such that whenever $\varep\in (0,\tilde{\varep}_0)$ is fixed, the following properties are satisfied.
\begin{enumerate}[\rm (i)]
\item
There exist $\sigma_0>0$ and smooth functions $\widetilde{m}$, $\rho:(-\sigma_0,\sigma_0)\to\mathbb{R}$, $\widetilde{{\pmb w}}:(-\sigma_0,\sigma_0)\to C^{1+\alpha}(\R;\pmb{\mathcal W})\cap C^{\alpha}(\R;Q(D(L)))$, satisfying the conditions
$\widetilde{m}(0)=m^c(\varep)$, $\rho(0)=1$ and $\widetilde{\pmb w}(\cdot,\cdot;0)=0$.
In addition, $\widetilde{\pmb w}(\cdot,\cdot;\sigma)$ is not a constant if $\sigma\neq 0$, and $\widetilde{\pmb w}(\cdot,\cdot;\sigma)$ is a
$T(\sigma)$-periodic solution of the equation
\begin{eqnarray*}
\widetilde{\pmb w}_\tau(\cdot,\cdot;\sigma) = QL\widetilde{\pmb w}(\cdot,\cdot;\sigma) + F(\widetilde{\pmb w}(\cdot,\cdot;\sigma);\widetilde{m}(\sigma)), \qquad\;\, \tau \in \R,
\end{eqnarray*}
where $T(\sigma)=2\pi\rho(\sigma)\omega^{-1}$ and $\omega=\omega(\varepsilon)$ is defined in Proposition $\ref{give critical value}$.
\item
There exists $\eta_0>0$ such that, if $\overline{m} \in (6-\delta, 6+\delta)$, $\bar{\rho}\in\R$ and $\overline{\pmb w} \in C^{1+\alpha}(\mathbb{R};\pmb{\mathcal W})\cap C^{\alpha}(\mathbb{R};Q(D(L)))$ is a $2\pi\bar{\rho}\omega^{-1}$-periodic solution of the equation
$\overline{\pmb w}_\tau = QL\overline{\pmb w} + F(\overline{\pmb w};\overline{m})$ such that
\begin{equation*}
\|\overline{\pmb w}\|_{ C^{1+\alpha}(\R;\pmb{\mathcal W})}+\|\overline{\pmb w}\|_{C^{\alpha}(\R;Q(D(L)))}+|\overline{m}-m^c(\varep)|+|1-\bar{\rho}|\leq\eta_0,
\end{equation*}
then there exist $\sigma\in(-\sigma_0,\sigma_0)$ and $\tau_0\in \R$ such that
$\overline{m}=\widetilde{m}(\sigma)$, $\bar\rho=\rho(\sigma)$ and $\overline{\pmb w}=\widetilde{\pmb w}(\cdot+\tau_0,\cdot;\sigma)$.
\end{enumerate}
\end{theorem}
\begin{proof}
We split the proof into two steps.
\vskip 1mm
\textsl{Step 1.} Here, we prove that there exists $\varepsilon_1>0$ such that $\pm \omega(\varepsilon)i$ are simple eigenvalues of $L$ (and, hence,
of the part of $L$ in $\pmb {\mathcal W}_Q=Q(\pmb {\mathcal W})$) for every $\varepsilon\in (0,\varepsilon_1]$ and there are no other eigenvalues on the imaginary axis, i.e., we prove that this operator satisfies the so-called non-resonance condition.
To begin with, let us prove that $\pm\omega(\varepsilon)i$ are eigenvalues of $L$. In view of Theorem \ref{thm-2.3}, we need to show that they are roots of the dispersion relation \eqref{dispersion epsilon}. For this purpose, we observe that the function $\widetilde{D}_\varepsilon:= D_\varepsilon(\cdot;m^c(\varepsilon))$ converges to $\widetilde{D}_0$ locally uniformly in the strip $\{\lambda\in\mathbb{C}:|\Re\lambda|\le \ell\}$ (for $\ell$ small enough),
where
\begin{eqnarray*}
\displaystyle\widetilde D_0(\lambda)=-\lambda-\frac{1+\sqrt{1+4\lambda}}{2}+\frac{1}{14}[(13+2\lambda)\sqrt{1+4\lambda}+1+4\lambda],\qquad\;\,\lambda\in\mathbb C.
\end{eqnarray*}
The function $\widetilde D_0$ has just one pair of purely imaginary conjugate roots $\pm\sqrt{3}i$. Hurwitz theorem shows that there exists $r>0$ such that the ball $B(\sqrt{3}i,r)$ contains exactly one root $\lambda(\varepsilon)$ of $\widetilde D_{\varep}$ for each $\varepsilon$ small enough.
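That $\pm\sqrt{3}i$ are indeed roots of $\widetilde D_0$ can be checked by hand: with the principal determination of the square root, $\sqrt{1+4\sqrt{3}i}=2+\sqrt{3}i$ (since $(2+\sqrt{3}i)^2=1+4\sqrt{3}i$), so that
\begin{align*}
\widetilde D_0(\sqrt{3}i)=&-\sqrt{3}i-\frac{3+\sqrt{3}i}{2}+\frac{1}{14}\big [(13+2\sqrt{3}i)(2+\sqrt{3}i)+1+4\sqrt{3}i\big ]\\
=&-\sqrt{3}i-\frac{3+\sqrt{3}i}{2}+\frac{21+21\sqrt{3}i}{14}=0,
\end{align*}
and $\widetilde D_0(-\sqrt{3}i)=0$ follows by conjugation.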
By the proof of Proposition \ref{give critical value}, we know that there exists $r_1>0$ such
that $\omega(\varepsilon)i$ is the unique root of $\widetilde P_7$ in the ball $B(\sqrt{3}i,r_1)$. Clearly, $\lambda(\varepsilon)$ is a root of the polynomial $\widetilde P_7$, and Hurwitz theorem also shows that $\lambda(\varepsilon)$ converges to $\sqrt{3}i$ as $\varepsilon\to 0^+$. Therefore, for $\varepsilon$ small enough, both
$\lambda(\varepsilon)$ and $\omega(\varepsilon)i$ belong to $B(\sqrt{3}i,r_1)$ and, hence, they do coincide.
The same argument shows that $-\omega(\varepsilon)i$ is also a root of $\widetilde D_\varepsilon$. We have proved that there exists $\varepsilon_1\le\varepsilon_0$ such that $\omega(\varepsilon)i$ and $-\omega(\varepsilon)i$ are both eigenvalues of $L$ for every $\varepsilon\in (0,\varepsilon_1]$. In particular, $\pm\omega(\varepsilon)i$ are simple roots of
the function $\widetilde D_{\varepsilon}$ and there are no other eigenvalues of $L$ on the imaginary axis.
To conclude that $\pm\omega(\varepsilon)i$ are simple eigenvalues of $L$ for each $\varepsilon\in (0,\varepsilon_1]$,
we just need to check that their geometric multiplicity is one. For this purpose, we observe that the proof of Theorem \ref{thm-2.3} shows that the eigenfunctions associated with the eigenvalues $\pm\omega(\varepsilon)i$ are given by
\begin{eqnarray*}
\begin{array}{lll}
\displaystyle u(\xi)=c_1e^{k_1\xi}+\frac{A}{H_{1,\lambda}}\bigg (\frac{e^{k_3\xi}}{k_3-k_2}-\frac{e^{k_3\xi}-e^{k_1\xi}}{k_3-k_1}\bigg )c_3,\quad &v(\xi)=c_3e^{k_3\xi}, &\xi<0,\\[3mm]
u(\xi)=c_6e^{k_2\xi}, &v(\xi)=c_8e^{k_6\xi}, &\xi\ge 0
\end{array}
\end{eqnarray*}
with $k_j=k_{j,\pm\omega(\varepsilon)i}$ and the constants $c_1$, $c_3$, $c_6$ and $c_8$ are determined through the equation \eqref{matrix} (with $\lambda=\pm\omega(\varepsilon)i$) where $F_1=\ldots=F_4=0$.
Since the rank of the matrix in \eqref{matrix} is three at $\lambda=\pm\omega(\varepsilon)i$, it follows at once that the geometric multiplicity of
$\pm\omega(\varepsilon)i$ is one.
\vskip 1mm
\textsl{Step 2:} Now, we check the transversality condition. We begin by observing that, for every $\varepsilon\in (0,\varepsilon_1]$, the function $D_{\varepsilon}$
is analytic with respect to $\lambda$ and continuously differentiable with respect to $m$ in $B(\sqrt{3}i,r)\times (6-\delta,6+\delta)$, where $r$ is such that
the ball $B(\sqrt{3}i,r)$ does not intersect the half line $(-\infty,-1/4]$.
We intend to apply the Implicit Function Theorem at $(\omega(\varep)i,m^c(\varep))$ for $\varep$ small enough.
In this respect, we need to show that the $\lambda$-partial derivative of $D_{\varepsilon}$ does not vanish at $(\omega(\varep)i,m^c(\varep))$.
To this aim, we observe that
\begin{eqnarray*}
\lim_{\varepsilon\to 0^+}\frac{\partial D_{\varepsilon}}{\partial\lambda}(\omega(\varepsilon)i,m^c(\varepsilon))=\frac{\partial D_0}{\partial\lambda}(\sqrt{3}i,6)=\frac{5\sqrt{3}i-3}{49}.
\end{eqnarray*}
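For the reader's convenience, we note that this value agrees with a direct computation on the function $\widetilde D_0$ introduced in Step 1: differentiating,
\begin{eqnarray*}
\widetilde D_0'(\lambda)=-1-\frac{1}{\sqrt{1+4\lambda}}+\frac{1}{14}\bigg [2\sqrt{1+4\lambda}+\frac{2(13+2\lambda)}{\sqrt{1+4\lambda}}+4\bigg ],
\end{eqnarray*}
and evaluating at $\lambda=\sqrt{3}i$, where $\sqrt{1+4\sqrt{3}i}=2+\sqrt{3}i$, gives $\widetilde D_0'(\sqrt{3}i)=\frac{5\sqrt{3}i-3}{49}$.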
Therefore, there exists $\varepsilon_2\leq\varepsilon_1$ such that, if $\varepsilon\in (0,\varepsilon_2]$, the $\lambda$-partial derivative of $D_{\varepsilon}$ at $(\omega(\varepsilon)i,m^c(\varepsilon))$ does not vanish. Then, it follows from the Implicit Function Theorem that
for each $\varepsilon\in (0,\varepsilon_2]$, there exist $\delta_{\varepsilon}>0$, $r_{\varepsilon}<r$ and
a $C^1$-mapping $\lambda_{\varepsilon}:(m^c(\varepsilon)-\delta_{\varepsilon},m^c(\varepsilon)+\delta_{\varepsilon})\to B(\sqrt{3}i,r_{\varepsilon})$, such that $D_{\varepsilon}(\lambda_{\varep}(m),m)=0$ for all $m\in(m^c(\varepsilon)-\delta_{\varepsilon},m^c(\varepsilon)+\delta_{\varepsilon})$
and $\lambda_{\varepsilon}(m^c(\varepsilon))=\omega(\varepsilon)i$.
As a consequence, there are two branches of conjugate isolated and simple eigenvalues, $\lambda_{\varep}(m)$ and $\overline{\lambda}_{\varep}(m)$, which cross the imaginary axis respectively at $\pm\omega(\varepsilon)i$ for $m=m^c(\varep)$.
It remains to determine the sign of the real part of the derivative of $\lambda_{\varep}$ at $m=m^c(\varep)$. Since
\begin{eqnarray*}
\lim_{\varepsilon\to 0^+}\frac{\partial\lambda_{\varepsilon}}{\partial m}(m^c(\varepsilon))
=-\bigg (\frac{\partial D_0}{\partial m}(\sqrt{3}i,6)\bigg )\bigg (\frac{\partial D_0}{\partial\lambda}(\sqrt{3}i,6)\bigg )^{-1}=\frac{3}{4}+\frac{\sqrt{3}}{12}i
\end{eqnarray*}
there exists $\varepsilon_3\le\varepsilon_2$ such that the real part of the derivative of $\lambda_{\varepsilon}$ is positive at $m^c(\varepsilon)$ for
any $\varepsilon\in (0,\varepsilon_3]$, which completes the proof of Step 2.
The claims then follow from \cite[Theorem 9.3.3]{Lunardi96}, with $\tilde\varepsilon_0=\varepsilon_3$.
\end{proof}
\setcounter{tocdepth}{2}
\subsection{Bifurcation from the traveling wave}
As in Subsection \ref{stability theorem TW}, we rewrite the results in Theorem \ref{Hopf bifurcation theorem} in terms of problem \eqref{perturbation T}-\eqref{interface-Theta-Phi}. As above, $\varep$ is fixed in $(0,\tilde{\varep}_0)$; therefore, the traveling wave $\pmb U$ depends only on $m$, which itself is parameterized by $\sigma \in (-\sigma_0,\sigma_0)$. Accordingly, the traveling wave reads $\widetilde{\pmb U}(\cdot;\sigma)$.
The following theorem states that a branch of periodic solutions bifurcates from the traveling wave at the bifurcation point $m^c(\varep)$.
The proof can be obtained by arguing as in the proof of Theorem \ref{stability theorem TW}; hence, the details are skipped.
\begin{theorem}
For each $\sigma \in (-\sigma_0,\sigma_0)$,
the problem \eqref{perturbation T}-\eqref{interface-Theta-Phi} admits a nontrivial solution $(\widetilde{{\pmb X}}(\cdot,\cdot;\sigma),\widetilde g(\cdot;\sigma))$ defined by
\begin{align*}
&\widetilde{{\pmb X}}(\cdot,\cdot;\sigma)=\Theta_i^{-1}\widetilde w_1(\cdot,0;\sigma) {\widetilde{\pmb U}}'(\cdot;\sigma)+\widetilde{{\pmb w}}(\cdot,\cdot;\sigma)+ \widetilde{\pmb U}(\cdot;\sigma),\\
&\widetilde g(\tau;\sigma)=\tau+ \frac{\tau}{T(\sigma)}\int_0^{T(\sigma)}\frac{(L\widetilde{\pmb w}(r,\cdot;\sigma))_1(\sigma,0^+)}{\Theta_i-\widetilde w_1(r,0;\sigma)-\widetilde w'_1(r,0^+;\sigma)}dr + \widetilde{h}(\tau;\sigma),\quad\;\,\tau\in\mathbb R,
\end{align*}
where $\widetilde{{\pmb X}}(\cdot,\cdot;0)=\widetilde{\pmb U}(\cdot;0)$ and $\widetilde{\pmb w}$ is given by Theorem $\ref{Hopf bifurcation theorem}$.
The function $\widetilde h(\cdot;\sigma)$ belongs to $C^{1+\alpha}(\mathbb R)$. Moreover, $\widetilde{\pmb X}(\cdot,\cdot;\sigma)$ and $\widetilde h(\cdot;\sigma)$ are periodic with period $T(\sigma)=2\pi\rho(\sigma)\omega^{-1}$. At the bifurcation point, the ``virtual period'' is $T(0)=2\pi\omega^{-1}$.
\end{theorem}
We refer to, e.g., \cite{NR97, Lorenzi04} for solutions which are periodic modulo a linear growth.
\section*{Acknowledgments} L.L. greatly acknowledges the School of Mathematical Sciences of the University of Science and Technology of China for the warm hospitality during his visit. M.M.Z. would like to thank the Department of Mathematical, Physical and Computer Sciences of the University of Parma for the warm hospitality during her visit. The authors wish to thank Peter Gordon, Congwen Liu and Gregory I. Sivashinsky for fruitful discussions.
O-minimal structures \cite{vdD, KPS,PS} have been studied model theoretically and geometrically.
An expansion of a dense linear order without endpoints $\mathcal M=(M,<,\ldots)$ is \textit{o-minimal} if any definable subset of $M$ is a finite union of points and open intervals.
Studies of o-minimal structures are too numerous to survey here.
One of the main interests in studying o-minimal structures is their tame topology.
They possess various tame topological properties such as the monotonicity theorem and the definable cell decomposition theorem.
An interesting question is which topological properties are retained when the definition of o-minimal structures is relaxed.
In fact, many structures relaxing the definition of o-minimal structures have been proposed and their topological properties have been investigated.
Here is an incomplete list: weakly o-minimal structures \cite{MMS, W}, structures having o-minimal open core \cite{DMS, F}, d-minimal structures \cite{M2,T}, locally o-minimal structures \cite{TV, KTTT}, models of DCTC \cite{S} and uniformly locally o-minimal structures of the second kind \cite{Fuji}.
We propose a new relative of these structures named an \textit{almost o-minimal structure} in this paper.
Why propose yet another structure when so many have already been proposed?
We explain the reason below.
The notation $\mathcal M$ denotes a structure and $M$ denotes its universe below.
Toffalori and Vozoris proposed a locally o-minimal structure \cite{TV}, which is defined by simply localizing the definition of an o-minimal structure.
An expansion of a dense linear order without endpoints $\mathcal M=(M,<,\ldots)$ is \textit{locally o-minimal} if, for any definable subset $X$ of $M$ and any point $x \in M$, there exists an open interval $I$ containing the point $x$ such that the intersection $I \cap X$ is a finite union of points and open intervals.
In spite of the similarity of its definition to that of o-minimal structures, a locally o-minimal structure does not, in general, enjoy localized versions of the properties possessed by o-minimal structures.
Schoutens introduced a \textit{model of DCTC} generalizing a locally o-minimal expansion of an ordered field \cite{S}.
Roughly speaking, a model of DCTC is a locally o-minimal structure which is o-minimal at the infinities $\pm \infty$.
More precisely, for any set $X$ in $M$ definable in a model of DCTC, there exist $a,b \in M$ such that $X \cap \{x<a\}$ and $X \cap \{x>b\}$ are empty sets or open intervals.
A locally o-minimal expansion of an ordered field and a model of DCTC possess several tame topological properties.
Readers who are interested in them should consult \cite{F,S, Fuji4}.
The author has pursued another direction.
His initial purpose was to find a necessary and sufficient condition for a locally o-minimal structure to admit a local definable cell decomposition \cite{Fuji}.
The answer, in the definably complete case, is given by uniformly locally o-minimal structures of the second kind.
\begin{definition}\label{def:second}
We consider an expansion $\mathcal M=(M,<,\ldots)$ of a dense linear order without endpoints.
It is \textit{definably complete} if every definable subset of $M$ has both a supremum and an infimum in $M \cup \{ \pm \infty\}$ \cite{M}.
A definably complete expansion of an ordered group is divisible and abelian \cite[Proposition 2.2]{M}.
A locally o-minimal structure $\mathcal M=(M,<,\ldots)$ is a \textit{uniformly locally o-minimal structure of the second kind} if, for any positive integer $n$, any definable set $X \subseteq M^{n+1}$, $a \in M$ and $b \in M^n$, there exist an open interval $I$ containing the point $a$ and an open box $B$ containing $b$ such that the definable sets $X_y \cap I$ are finite unions of points and open intervals for all $y \in B$.
Here, $X_y$ denotes the fiber $\{x \in M\;|\; (y,x) \in X\}$.
When we can choose $B=M^n$, the structure $\mathcal M$ is called a \textit{uniformly locally o-minimal structure of the first kind}.
\end{definition}
We frequently consider definably complete uniformly locally o-minimal expansions of the second kind of ordered groups.
We simply call them \textit{DCULOAS structures}.
Their local tame topological properties have been clarified in a series of papers \cite{Fuji, Fuji3, Fuji4}.
In many potential applications to other mathematical branches such as geometry and analysis, the universe is the set of reals $\mathbb R$.
The author also demonstrated, in an unpublished paper \cite{Fuji2}, that a locally o-minimal expansion of the ordered group of reals admits a local definable cell decomposition better than that of a general definably complete uniformly locally o-minimal structure of the second kind; this result is a special case of Theorem \ref{thm:udcd}.
A DCULOAS structure is not an excellent abstraction of a locally o-minimal expansion of the ordered group of reals.
The significant difference between the real case and the general case is that any bounded definable set is a finite union of points and open intervals in the former, but it may not be in the latter.
This is the reason why we focus on the following notion:
\begin{definition}
An expansion $\mathcal M=(M,<,\ldots)$ of a densely linearly ordered set without endpoints is \textit{almost o-minimal} if any bounded definable subset of $M$ is a finite union of points and open intervals.
\end{definition}
Note that a locally o-minimal expansion of the ordered set of reals $(\mathbb R,<)$ is almost o-minimal.
Roughly speaking, an almost o-minimal structure is o-minimal on bounded regions.
The notion of almost o-minimality is, in a sense, complementary to that of DCTC.
A locally o-minimal structure is o-minimal if and only if it is simultaneously a model of DCTC and almost o-minimal, as demonstrated in Proposition \ref{prop:almost1}.
The notion of subanalytic sets is another useful geometrical concept \cite{BM, H}.
A subset $X$ of $\mathbb R^n$ is \textit{subanalytic} if each point of $\mathbb R^n$ has a neighborhood $U$ such that $X \cap U$ is a finite union of sets of the form $\operatorname{Im}(f_1) \setminus \operatorname{Im}(f_2)$, where $f_1$ and $f_2$ are proper real analytic maps from real analytic manifolds to $\mathbb R^n$.
The projection image of a subanalytic set is not necessarily subanalytic, but
its image under a proper projection is again subanalytic.
The family of subanalytic sets is not the family of sets definable in a first-order structure because it is not closed under taking projection images.
In \cite{vdDM}, van den Dries and Miller generalized the notion of subanalytic sets by proposing \textit{analytic-geometric categories}, and they clarified the relation between the analytic-geometric category of subanalytic sets and the o-minimal structure called the restricted analytic field $\mathbb R_{\text{an}}$.
Shiota also proposed \textit{$\mathfrak X$-sets} and \textit{$\mathfrak Y$-sets} in \cite{Shiota}.
They are also generalizations of subanalytic sets.
The family of sets `definable' in them is only closed under taking the image under a proper projection.
In addition, their underlying set is the set of reals $\mathbb R$.
We want to generalize their concepts to the case in which the underlying set is a densely linearly ordered set without endpoints.
We propose the following structure generalizing Shiota's $\mathfrak X$-sets and $\mathfrak Y$-sets.
\begin{definition}\label{def:x}
Let $(M,<)$ be a densely linearly ordered set without endpoints.
A map $p$ from a subset $X$ of $M^m$ to $M^n$ is \textit{proper} if the inverse image $ p^{-1}(U)$ of an arbitrary bounded open box $U$ in $M^n$ is bounded.
An \textit{$\mathfrak X$-structure} is a triple $\mathcal X = (M,<,\mathcal S=\{\mathcal S_n\}_{n \in \mathbb N})$ of a densely linearly ordered set without endpoints $(M,<)$ and the families $\mathcal S_n$ of subsets in $M^n$ satisfying the following conditions:
\begin{enumerate}
\item[(1)] For all $x \in M$, the singletons $\{x\}$ belong to $\mathcal S_1$.
All open intervals also belong to $\mathcal S_1$.
\item[(2)] The sets $\{(x,y) \in M^2\;|\; x= y\}$ and $\{(x,y) \in M^2\;|\; x< y\}$ belong to $\mathcal S_2$.
\item[(3)] $\mathcal S_n$ is a boolean algebra and $M^n \in \mathcal S_n$;
\item[(4)] We have $X_1 \times X_2 \in \mathcal S_{m+n}$ whenever $X_1 \in \mathcal S_m$ and $X_2 \in \mathcal S_n$;
\item[(5)] For any permutation $\sigma$ of $\{1, \ldots, n\}$, the image $\widetilde{\sigma}(X)$ belongs to $\mathcal S_n$ when $X \in \mathcal S_n$ and the notation $\widetilde{\sigma}:M^n \rightarrow M^n$ denotes the map given by $\widetilde{\sigma}(x_1,\ldots, x_n)=(x_{\sigma(1)},\ldots,x_{\sigma(n)})$;
\item[(6)] Let $\pi:M^n \rightarrow M^m$ be a coordinate projection and $X \in \mathcal S_n$ such that the restriction $\pi|_{X}$ of $\pi$ to $X$ is proper.
Then, the image $\pi(X)$ belongs to $\mathcal S_m$.
\item[(7)] The intersection $I \cap X$ is a finite union of points and open intervals when $X \in \mathcal S_1$ and $I$ is a bounded open interval.
\end{enumerate}
The set $M$ is called the \textit{universe} and the \textit{underlying set} of the $\mathfrak X$-structure $\mathcal X$.
A subset $X$ of $M^n$ is called \textit{$\mathfrak X$-definable} in $\mathcal X$ when $X$ is an element of $\mathcal S_n$.
A set $\mathfrak X$-definable in $\mathcal X$ is simply called $\mathfrak X$-definable when $\mathcal X$ is clear from the context.
A map from a subset of $M^m$ to $M^n$ is \textit{$\mathfrak X$-definable} if its graph is $\mathfrak X$-definable.
When $(M,<,0,+)$ is an ordered divisible abelian group and the addition is $\mathfrak X$-definable, we call the $\mathfrak X$-structure an \textit{$\mathfrak X$-expansion of an ordered divisible abelian group}.
We define an \textit{$\mathfrak X$-expansion of an ordered real closed field} in the same manner.
\end{definition}
In Shiota's formulation, an $\mathfrak X$-set is locally a finite union of points and open intervals.
Here, we say that a subset $X$ of $\mathbb R$ is locally a finite union of points and open intervals when, for any point $x \in \mathbb R$, there exists an open interval $I$ containing the point $x$ such that $I \cap X$ is a finite union of points and open intervals.
The formulation by van den Dries and Miller is similar.
If a subset $X$ of $\mathbb R$ is locally a finite union of points and open intervals, it satisfies the condition (7) in Definition \ref{def:x} because closed bounded intervals are compact in $\mathbb R$.
However, this is not true in a general densely linearly ordered set without endpoints $(M,<)$.
Under Shiota's original formulation, we cannot deduce several of the good properties enjoyed by the $\mathfrak X$-structures of our formulation when the underlying set is a general $M$.
An almost o-minimal structure is an $\mathfrak X$-structure.
The following is another important example of $\mathfrak X$-structures.
\begin{definition}
Let $\mathcal R=(M,<,\ldots)$ be an o-minimal structure.
A subset $X$ of $M^n$ is \textit{semi-definable in $\mathcal R$} if the intersection $U \cap X$ is definable in $\mathcal R$ for any bounded open box $U$ in $M^n$.
A map from a subset of $M^m$ to $M^n$ is \textit{semi-definable} if its graph is semi-definable.
The family $\mathcal S(\mathcal R)=\{\mathcal S(\mathcal R)_n\}_{n \in \mathbb N}$ of all semi-definable sets satisfies the conditions in Definition \ref{def:x}.
The $\mathfrak X$-structure $\mathfrak X(\mathcal R)=(M,<,\mathcal S(\mathcal R))$ is called the \textit{$\mathfrak X$-structure of semi-definable sets in $\mathcal R$}.
\end{definition}
We study general $\mathfrak X$-structures in Section \ref{sec:x}.
The main theorems of this section are the structure theorems Theorem \ref{thm:in_omin} and Theorem \ref{thm:xstr}.
The former says that an $\mathfrak X$-expansion of an ordered divisible abelian group always contains an o-minimal expansion $\mathcal R$ of an ordered group such that all bounded $\mathfrak X$-definable sets are definable in the structure $\mathcal R$.
The latter gives a sufficient condition for an $\mathfrak X$-expansion of an ordered divisible abelian group being an $\mathfrak X$-expansion of an ordered real closed field.
Basic properties of the dimension of $\mathfrak X$-definable sets are also investigated in this section.
The $\mathfrak X$-structures of semi-definable sets in an o-minimal structure are studied in Section \ref{sec:semi-definable}.
The notion of semi-definable connectedness is introduced in this section.
The main theorem of this section is Theorem \ref{thm:connected}, which gives equivalent conditions for a semi-definable set to be semi-definably connected and also demonstrates the existence of semi-definably connected components.
Section \ref{sec:almost} is devoted to the study of almost o-minimal structures.
After we investigate the basic properties of almost o-minimal structures, we prove a uniform local definable cell decomposition theorem.
It is the last main theorem of this paper.
The definition of cells and the local definable cell decomposition theorem for definably complete uniformly locally o-minimal structures of the second kind are as follows:
\begin{definition}[Definable cell decomposition]
Consider an expansion of a dense linear order without endpoints $\mathcal M=(M,<,\ldots)$.
Let $(i_1, \ldots, i_n)$ be a sequence of zeros and ones of length $n$.
\textit{$(i_1, \ldots, i_n)$-cells} are definable subsets of $M^n$ defined inductively as follows:
\begin{itemize}
\item A $(0)$-cell is a point in $M$ and a $(1)$-cell is an open interval in $M$.
\item An $(i_1,\ldots,i_n,0)$-cell is the graph of a definable continuous function defined on an $(i_1,\ldots,i_n)$-cell.
An $(i_1,\ldots,i_n,1)$-cell is a definable set of the form $\{(x,y) \in C \times M\;|\; f(x)<y<g(x)\}$, where $C$ is an $(i_1,\ldots,i_n)$-cell and $f$ and $g$ are definable continuous functions defined on $C$ with $f<g$.
\end{itemize}
A \textit{cell} is an $(i_1, \ldots, i_n)$-cell for some sequence $(i_1, \ldots, i_n)$ of zeros and ones.
The sequence $(i_1, \ldots, i_n)$ is called the \textit{type} of an $(i_1, \ldots, i_n)$-cell.
An \textit{open cell} is a $(1,1, \ldots, 1)$-cell.
The dimension of an $(i_1, \ldots, i_n)$-cell is defined by $\sum_{j=1}^n i_j$.
We inductively define a \textit{definable cell decomposition} of an open box $B \subseteq M^n$.
For $n=1$, a definable cell decomposition of $B$ is a partition $B=\bigcup_{i=1}^m C_i$ into finitely many cells.
For $n>1$, a definable cell decomposition of $B$ is a partition $B=\bigcup_{i=1}^m C_i$ into finitely many cells such that $\pi(B)=\bigcup_{i=1}^m \pi(C_i)$ is a definable cell decomposition of $\pi(B)$, where $\pi:M^n \rightarrow M^{n-1}$ is the projection forgetting the last coordinate.
Consider a finite family $\{A_\lambda\}_{\lambda \in \Lambda}$ of definable subsets of $B$.
A \textit{definable cell decomposition of $B$ partitioning $\{A_\lambda\}_{\lambda \in \Lambda}$} is a definable cell decomposition of $B$ such that the definable sets $A_{\lambda}$ are unions of cells for all $\lambda \in \Lambda$.
\end{definition}
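We illustrate the definition with a simple example: given $a<b$ in $M$, the open box $B=]a,b[^2$ admits the definable cell decomposition
\begin{equation*}
B=\{(x,y)\in B\;|\;y<x\}\cup\{(x,y)\in B\;|\;y=x\}\cup\{(x,y)\in B\;|\;y>x\},
\end{equation*}
whose members are a $(1,1)$-cell, a $(1,0)$-cell and a $(1,1)$-cell, respectively; the middle cell is the graph of the identity function on the $(1)$-cell $]a,b[$, and the projections of the three cells give the trivial definable cell decomposition of $]a,b[$. This decomposition partitions the finite family consisting of the single definable set $\{(x,y)\in B\;|\;y=x\}$.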
\begin{theorem}[Local definable cell decomposition theorem, {\cite[Theorem 4.2]{Fuji}}]\label{thm:dcd}
Consider a definably complete uniformly locally o-minimal structure of the second kind $\mathcal M=(M,<,\ldots)$.
Let $n$ be an arbitrary positive integer.
Let $\{A_\lambda\}_{\lambda\in\Lambda}$ be a finite family of definable subsets of $M^n$.
For any point $a \in M^n$, there exist an open box $B$ containing the point $a$ and a definable cell decomposition of $B$ partitioning the finite family $\{B \cap A_\lambda\;|\; \lambda \in \Lambda \text{ and } B \cap A_\lambda \not= \emptyset\}$.
\end{theorem}
The above theorem says nothing about the relationship between decompositions at two distinct points.
When the considered structure is an almost o-minimal expansion of an ordered group, we can obtain the following uniform local definable cell decomposition theorem:
\begin{theorem}[Uniform local definable cell decomposition]\label{thm:main}
Consider an almost o-minimal expansion of an ordered group $\mathcal M=(M,<,0,+,\ldots)$.
Let $\{A_\lambda\}_{\lambda\in\Lambda}$ be a finite family of definable subsets of $M^{m+n}$.
Take an arbitrary positive element $R \in M$ and set $B=]-R,R[^n$.
Then, there exists a finite partition into definable sets
\begin{equation*}
M^m \times B = X_1 \cup \ldots \cup X_k
\end{equation*}
such that $B=(X_1)_b \cup \ldots \cup (X_k)_b$ is a definable cell decomposition of $B$ for any $b \in M^m$ and either $X_i \cap A_\lambda = \emptyset$ or $X_i \subseteq A_\lambda$ for any $1 \leq i \leq k$ and $\lambda \in \Lambda$.
Furthermore, the type of the cell $(X_i)_b$ is independent of the choice of $b$ with $(X_i)_b \not= \emptyset$.
Here, the notation $S_b$ denotes the fiber of a definable subset $S$ of $M^{m+n}$ at $b \in M^m$.
\end{theorem}
We introduce the terms and notations used in this paper.
When a first-order structure is fixed, the term `definable' means `definable in the structure with parameters.'
The notation $f|_A$ denotes the restriction of a map $f:X \rightarrow Y$ to a subset $A$ of $X$.
Consider a linearly ordered set without endpoints $(M,<)$.
An open interval is a nonempty set of the form $\{x \in M\;|\; a < x < b\}$ for some $a,b \in M \cup \{\pm \infty\}$.
It is denoted by $]a,b[$ in this paper.
The closed interval is defined similarly and denoted by $[a,b]$.
We use the notations $]a,b]$ and $[a,b[$ for half open intervals.
The set $M$ equips the order topology induced from the order $<$.
The affine space $M^n$ equips the product topology of the order topology.
We consider these topologies unless otherwise stated.
An open box is the Cartesian product of open intervals.
For a topological space $T$ and its subset $A$, the notations $\overline{A}$, $\operatorname{int}(A)$, $\partial A$ and $\operatorname{bd}(A)$ denote the closure, interior, frontier and boundary of $A$, respectively.
The notation $|S|$ denotes the cardinality of a set $S$.
It also denotes the absolute value of an element.
This abuse of notation will not confuse readers.
\section{Geometry of $\mathfrak X$-structures}\label{sec:x}
We study $\mathfrak X$-structures in this section.
\subsection{$\mathfrak X$-definable maps}
We first investigate $\mathfrak X$-definable maps.
Note that the domain of definition of an $\mathfrak X$-definable map is not necessarily $\mathfrak X$-definable.
We can easily get the following lemma.
\begin{lemma}\label{lem:restriction}
Consider an $\mathfrak X$-structure whose underlying set is $M$ and an $\mathfrak X$-definable map $\varphi:X \rightarrow M^n$.
Take an $\mathfrak X$-definable subset $Y$ of $X$.
The restriction $\varphi|_Y$ of $\varphi$ to $Y$ is $\mathfrak X$-definable.
\end{lemma}
\begin{proof}
Easy. We omit the proof.
\end{proof}
We investigate when the image and the inverse image of an $\mathfrak X$-definable set under an $\mathfrak X$-definable map are again $\mathfrak X$-definable.
\begin{lemma}\label{lem:image}
Consider an $\mathfrak X$-structure whose underlying set is $M$.
Let $X$ be an $\mathfrak X$-definable subset of $M^m$ and $\varphi:X \rightarrow M^n$ be an $\mathfrak X$-definable map.
The image $\varphi(X)$ is $\mathfrak X$-definable when $X$ is bounded or $\varphi$ is proper.
\end{lemma}
\begin{proof}
Consider the graph $\Gamma(\varphi)=\{(x,y) \in X \times M^n\;|\; y=\varphi(x)\}$ and the coordinate projection $\pi$ forgetting the first $m$ coordinates.
The image $\varphi(X)$ is the image of the graph under $\pi$.
The restriction of $\pi$ to the graph is proper: the inverse image of a bounded open box $U$ in $M^n$ under this restriction is contained in $X \times U$ when $X$ is bounded, and in $\varphi^{-1}(U) \times U$ when $\varphi$ is proper.
\end{proof}
\begin{definition}
Let $(M,<)$ be a linearly ordered set without endpoints.
Let $X$ be a subset of $M^m$ and $f:X \rightarrow M^n$ be a map.
The map $f$ satisfies the \textit{bounded image condition} if the image $f(X \cap V)$ is bounded for any bounded open box $V$ of $M^m$.
\end{definition}
\begin{lemma}\label{lem:inverse}
Consider an $\mathfrak X$-structure whose underlying set is $M$.
Let $X$ and $Y$ be $\mathfrak X$-definable subsets of $M^m$ and $M^n$, respectively.
Let $\varphi:X \rightarrow M^n$ be an $\mathfrak X$-definable map.
The inverse image $\varphi^{-1}(Y)$ is $\mathfrak X$-definable when $Y$ is bounded or $\varphi$ satisfies the bounded image condition.
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma \ref{lem:image}, and we omit it.
\end{proof}
\begin{corollary}\label{cor:semialg}
Consider an $\mathfrak X$-structure whose underlying set is $M$.
Let $\varphi:X \rightarrow M$ be an $\mathfrak X$-definable function satisfying the bounded image condition.
Take $c \in M$.
The sets $\{x \in X\;|\;\varphi(x)=c\}$, $\{x \in X\;|\;\varphi(x)<c\}$ and $\{x \in X\;|\;\varphi(x)>c\}$ are $\mathfrak X$-definable.
\end{corollary}
\begin{proof}
The sets given in the corollary are the inverse images of $\{c\}$, $]-\infty,c[$ and $]c,\infty[$ under the function $\varphi$.
The corollary follows from Lemma \ref{lem:inverse}.
\end{proof}
\begin{example}\label{ex:x2}
Consider a definably complete structure $\mathcal M=(M,<,\ldots)$.
The image of a definable closed and bounded set under a definable continuous map $f$ is again definable, closed and bounded by \cite[Proposition 1.10]{M}.
This means that the map $f$ satisfies the bounded image condition in this case.
This is not true in a general $\mathfrak X$-structure.
The ordered field $(\mathbb R_{\text{alg}},<,+,\cdot,0,1)$ of the real numbers algebraic over $\mathbb Q$ is an ordered real closed field and the induced structure is o-minimal.
In particular, the structure is definably complete.
Consider the $\mathfrak X$-structure of semi-definable sets in this o-minimal structure.
The set of positive integers $\mathbb N$ is semi-definable.
Take $a_n,b_n \in \mathbb Q$ so that $a_n < a_{n+1} < \pi < b_{n+1} < b_n$ for all $n \in \mathbb N$ and $\lim_{n \to \infty}a_n = \lim_{n \to \infty}b_n = \pi$.
Here, $\pi$ denotes the circular constant $3.14\ldots$, which is transcendental and therefore does not belong to $\mathbb R_{\text{alg}}$.
We define a semi-definable function $f$ on $[a_1,b_1]$.
The graph of the restriction of $f$ to $[a_i,a_{i+1}]$ is the segment connecting the points $(a_i,i)$ and $(a_{i+1},i+1)$ for any $i \in \mathbb N$.
We define the restriction of $f$ to $[b_i,b_{i+1}]$ in the same manner.
The function $f$ is $\mathfrak X$-definable and continuous, but its image is not bounded.
\end{example}
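For the reader's convenience, the following Python sketch implements one concrete instance of the function $f$ in Example \ref{ex:x2}; the specific sequences $a_n$ and $b_n$ (decimal truncations of $\pi$ from below and above, which are monotone only after discarding repeated values) are our own illustrative choice and play no role in the rest of the paper.
\begin{verbatim}
# One concrete instance of the function f in the example above; the
# choice a_n, b_n = decimal truncations of pi is purely illustrative.
import sympy as sp

def a(n):   # a_n <= a_{n+1} < pi, rational, a_n -> pi
    return sp.floor(sp.pi * 10**n) / sp.Integer(10)**n

def b(n):   # pi < b_{n+1} <= b_n, rational, b_n -> pi
    return sp.ceiling(sp.pi * 10**n) / sp.Integer(10)**n

def f(x, max_n=40):
    """Piecewise linear with f(a_i) = f(b_i) = i: continuous on its
    domain but unbounded near pi.  Degenerate pieces (a_i = a_{i+1}
    or b_i = b_{i+1}, occurring at digits 0 and 9 of pi) are skipped."""
    x = sp.nsimplify(x)
    for i in range(1, max_n):
        if a(i) <= x < a(i + 1):  # segment (a_i, i) -- (a_{i+1}, i+1)
            return i + (x - a(i)) / (a(i + 1) - a(i))
        if b(i + 1) < x <= b(i):  # segment (b_{i+1}, i+1) -- (b_i, i)
            return i + (b(i) - x) / (b(i) - b(i + 1))
    raise ValueError("x is closer to pi than max_n pieces resolve")

print(f(sp.Rational(315, 100)))   # = 2, since 315/100 = b_2
\end{verbatim}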
The composition of two $\mathfrak X$-definable maps is not necessarily $\mathfrak X$-definable.
We find a sufficient condition for the composition being $\mathfrak X$-definable.
\begin{lemma}\label{lem:composition}
Consider an $\mathfrak X$-structure.
Let $\varphi:X \rightarrow Y$ and $\psi: Y \rightarrow Z$ be two $\mathfrak X$-definable maps.
The composition $\psi \circ \varphi$ is $\mathfrak X$-definable if $\psi$ is proper or $\varphi$ satisfies the bounded image condition.
\end{lemma}
\begin{proof}
Let $M$ be the underlying set of the $\mathfrak X$-structure.
Let $M^l$, $M^m$ and $M^n$ be the ambient spaces of $X$, $Y$ and $Z$, respectively.
Consider the set $A=\{(x,y,z) \in X \times Z \times Y\;|\;z=\varphi(x),\ y=\psi(z)\}$.
It is $\mathfrak X$-definable by Definition \ref{def:x}(3), (4) and (5).
Let $\pi:M^{l+m+n} \rightarrow M^{l+n}$ be the projection forgetting the last $m$ coordinates.
The graph of the composition $\psi \circ \varphi$ is the image of $A$ under the projection $\pi$.
If the restriction of $\pi$ to $A$ is proper, the graph is $\mathfrak X$-definable by Definition \ref{def:x}(6).
We have only to demonstrate that the restriction is proper.
Take a bounded open box $U$ in $M^l$ and a bounded open box $W$ in $M^n$.
We show that $C=\pi^{-1}(U \times W) \cap A$ is bounded.
When $\psi$ is proper, the inverse image $\psi^{-1}(W)$ is bounded.
The set $C$ is contained in $U \times W \times \psi^{-1}(W)$, and it is bounded.
When $\varphi$ satisfies the bounded image condition, the set $\varphi(X \cap U)$ is bounded.
The set $C$ is contained in $U \times W \times \varphi(X \cap U)$, and it is bounded.
\end{proof}
The following lemma is easy to prove; the proofs are left to the reader, and a sample argument is given right after the statement.
\begin{lemma}\label{lem:bounded}
The following assertions hold true.
\begin{enumerate}
\item[(1)] Consider an ordered group.
The addition satisfies the bounded image condition.
The addition of a constant is proper and satisfies the bounded image condition.
\item[(2)] Consider a divisible abelian group.
Multiplication by a rational constant is proper and satisfies the bounded image condition.
\item[(3)] Consider an ordered field.
The multiplication satisfies the bounded image condition.
The multiplication by a constant is proper and satisfies the bounded image condition.
\item[(4)] Consider an $\mathfrak X$-structure and let $\varphi:X \rightarrow Y$ and $\psi: Y \rightarrow Z$ be two $\mathfrak X$-definable maps.
The composition $\psi \circ \varphi$ is proper if both $\varphi$ and $\psi$ are proper.
\item[(5)] Let $\varphi$ and $\psi$ be as in (4).
The composition $\psi \circ \varphi$ satisfies the bounded image condition if both $\varphi$ and $\psi$ satisfy the bounded image condition.
\end{enumerate}
\end{lemma}
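As a sample of the omitted arguments, the first assertion follows from the relations
\begin{equation*}
+\big(\,]a_1,b_1[\times]a_2,b_2[\,\big)\subseteq\, ]a_1+a_2,b_1+b_2[ \qquad\text{and}\qquad (x\mapsto x+c)^{-1}(]a,b[)=\,]a-c,b-c[,
\end{equation*}
which show, respectively, that the addition maps bounded open boxes into bounded intervals and that the addition of a constant $c$ is proper (its image of a bounded interval $]a,b[$ being the bounded interval $]a+c,b+c[$); the remaining assertions are proved analogously.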
\begin{corollary}\label{cor:linear}
Consider an $\mathfrak X$-expansion of an ordered divisible abelian group whose underlying set is $M$.
Consider a linear function $l(\overline{x})=\sum_{i=1}^n q_ix_i +c$ with $q_i \in \mathbb Q$ for all $1 \leq i \leq n$ and $c \in M$, where $\overline{x}=(x_1,\ldots, x_n)$.
The function $l(\overline{x})$ is $\mathfrak X$-definable and the sets of the form $\{\overline{x} \in M^n\;|\; l(\overline{x}) * 0\}$ for $* \in \{=,<,>\}$ are $\mathfrak X$-definable.
\end{corollary}
\begin{proof}
The function $l(\overline{x})$ is $\mathfrak X$-definable and satisfies the bounded image condition by Lemma \ref{lem:composition} and Lemma \ref{lem:bounded}.
The sets given in the corollary are $\mathfrak X$-definable by Corollary \ref{cor:semialg}.
\end{proof}
\subsection{O-minimal structure contained in $\mathfrak X$-structure}
Any $\mathfrak X$-structure has an o-minimal structure $\mathcal R$ such that any bounded $\mathfrak X$-definable set is definable in the o-minimal structure $\mathcal R$.
\begin{lemma}\label{lem:in_omin}
Consider an $\mathfrak X$-structure whose underlying set is $M$.
There exists an o-minimal structure $\mathcal R$ having the same underlying set and satisfying the following conditions:
\begin{enumerate}
\item[(i)] Any set definable in $\mathcal R$ is $\mathfrak X$-definable.
\item[(ii)] Any bounded $\mathfrak X$-definable set is definable in $\mathcal R$.
\end{enumerate}
\end{lemma}
\begin{proof}
For any bounded $\mathfrak X$-definable set $X$, we define the predicate $P_X$ and interpret it naturally.
The notation $\mathcal S_{\text{bdd}}$ denotes the set of all bounded $\mathfrak X$-definable sets.
Consider the language $L=(<,(P_X)_{X \in \mathcal S_{\text{bdd}}})$.
The set $M$ is naturally the underlying set of an $L$-structure $\mathcal R$.
The structure $\mathcal R$ obviously satisfies the condition (ii).
We demonstrate that $\mathcal R$ is an o-minimal structure satisfying the condition (i).
We need a preparation.
An $L$-formula $\phi(\overline{x})$ with parameters in $M$ is called \textit{simple} when it is one of the following formulas:
\begin{align*}
& x_i =c,\ x_i <c,\ x_i>c, \ x_i=x_j,\ x_i<x_j \text{ and }x_i > x_j\text{,}
\end{align*}
where $\overline{x}=(x_1,\ldots, x_n)$, $c \in M$ and $1 \leq i <j \leq n$.
An $L$-formula $\phi(\overline{x})$ with parameters in $M$ is \textit{semi-simple} if it is a finite conjunction of simple formulas.
A subset $X$ of $M^n$ definable in $\mathcal R$ is \textit{semi-simple} if it is defined by a semi-simple formula.
An open box is semi-simple and the complement of an open box is a finite union of semi-simple sets.
We demonstrate the following claim:
\medskip
\textbf{Claim.} Any $L$-formula $\phi(\overline{x})$ with parameters in $M$ is equivalent to either a finite disjunction of semi-simple formulas, a formula of the form $P_X(\overline{x})$ or their disjunction in $\operatorname{Th}(\mathcal R)$.
Here, the notation $\operatorname{Th}(\mathcal R)$ denotes the set of all the $L$-sentences valid in the structure $\mathcal R$.
\medskip
We demonstrate the claim by induction on the complexity of the formula $\phi(\overline{x})$.
The claim is obvious when $\phi(\overline{x})$ is an atomic formula.
The conjunction of $P_X(\overline{x})$ and $P_Y(\overline{x})$ is equivalent to $P_{X \cap Y}(\overline{x})$.
Their disjunction is equivalent to $P_{X \cup Y}(\overline{x})$.
Let $\phi_1(\overline{x})$ and $\phi_2(\overline{x})$ be finite disjunctions of semi-simple formulas.
The conjunction $\phi_1(\overline{x}) \wedge \phi_2(\overline{x})$ is obviously a finite disjunction of semi-simple formulas.
When $\phi_1(\overline{x})$ is a finite disjunction of semi-simple formulas and $X$ is a bounded $\mathfrak X$-definable set, the set $Y=X \cap \{\overline{x} \in M^n\;|\; \mathcal M \models \phi_1(\overline{x})\}$ is a bounded $\mathfrak X$-definable set.
We have $\mathcal R \models \forall \overline{x}\ ((\phi_1(\overline{x}) \wedge P_X(\overline{x})) \leftrightarrow P_Y(\overline{x}))$.
Therefore, the claim is true for the conjunction of two formulas $\phi_1(\overline{x})$ and $\phi_2(\overline{x})$ satisfying the claim.
We consider the case in which $\phi(\overline{x})$ is the negation of the formula $\psi(\overline{x})$ satisfying the claim.
The formula $\phi(\overline{x})$ clearly satisfies the claim when the formula $\psi(\overline{x})$ is equivalent to a finite disjunction of semi-simple formulas.
We next consider the case in which $\psi(\overline{x})=P_X(\overline{x})$ for some bounded $\mathfrak X$-definable subset $X$ of $M^n$.
There exist $a,b$ in $M$ with $a<b$ and $X \subseteq ]a,b[^n$.
The set $]a,b[^n \setminus X$ is a bounded $\mathfrak X$-definable set and it belongs to $\mathcal S_{\text{bdd}}$.
It is obvious that there exists a finite disjunction $\psi'(\overline{x})$ of semi-simple formulas such that $\mathcal R \models \forall \overline{x}\ (\psi'(\overline{x}) \leftrightarrow x \not\in ]a,b[^n)$.
Therefore, we have $\mathcal R \models \forall \overline{x}\ (\neg P_X(\overline{x}) \leftrightarrow (P_{]a,b[^n\setminus X}(\overline{x}) \vee \psi'(\overline{x})))$.
Using these facts, we can demonstrate that $\phi(\overline{x})=\neg\psi(\overline{x})$ satisfies the claim.
We omit the details.
The projection image of a set in $\mathcal S_{\text{bdd}}$ is again an element of $\mathcal S_{\text{bdd}}$.
Using this fact, we can prove the claim when the formula $\phi(\overline{x})$ is of the form $\exists y\ \psi(\overline{x},y)$.
We also omit the details.
We have demonstrated the claim.
\medskip
Take an arbitrary subset $X$ of $M^n$ definable in $\mathcal R$.
It is either a finite union of semi-simple sets, a bounded $\mathfrak X$-definable set or their union by the claim.
A semi-simple set is obviously $\mathfrak X$-definable.
Hence, the set $X$ is $\mathfrak X$-definable.
We have demonstrated that the condition (i) holds true.
We finally show that a subset $X$ of $M$ definable in $\mathcal R$ is a finite union of points and open intervals.
A bounded $\mathfrak X$-definable subset of $M$ is a finite union of points and open intervals by Definition \ref{def:x}(7).
A semi-simple subset of $M$ is obviously a finite union of points and open intervals.
Therefore, $X$ is a finite union of points and open intervals by the claim.
\end{proof}
A locally o-minimal structure \textit{admits local definable cell decomposition} if the local definable cell decomposition of Theorem \ref{thm:dcd} is always available.
Note that a locally o-minimal structure which admits local definable cell decomposition is always a uniformly locally o-minimal structure of the second kind.
\begin{corollary}\label{cor:second}
An almost o-minimal structure admits local definable cell decomposition.
In particular, it is a uniformly locally o-minimal structure of the second kind.
\end{corollary}
\begin{proof}
Recall that an almost o-minimal structure is an $\mathfrak X$-structure.
The corollary follows from Lemma \ref{lem:in_omin} and \cite[Chapter 3, Theorem 2.11]{vdD}.
\end{proof}
We quote \cite[Fact 1.7]{E}, which was originally proved in \cite[Proposition 5.1(1)]{LP}:
\begin{proposition}\label{prop:lp1}
Let $\mathcal V=(V,+,<,a,(d)_{d \in D},(P)_{P \in \mathcal P})$ be an expansion of an ordered vector space $(V,+,<,(d)_{d \in D})$ over an ordered division ring $D$ by predicates $P \in \mathcal P$ on a bounded subset of $[-a,a]^n$, such that $\mathcal P$ contains predicates for all subsets of $[-a,a]^n$ which are $a$-definable in the vector space structure.
Then $\operatorname{Th}(\mathcal V)$ has quantifier elimination in its language.
\end{proposition}
\begin{proof}
\cite[Proposition 5.1(1)]{LP}.
\end{proof}
The following theorem is a better variant of Lemma \ref{lem:in_omin}.
It claims that an $\mathfrak X$-expansion of an ordered divisible abelian group always contains an o-minimal expansion of an ordered group.
\begin{theorem}\label{thm:in_omin}
Consider an $\mathfrak X$-expansion of an ordered divisible abelian group whose underlying set is $M$.
There exists an o-minimal expansion $\mathcal R$ of an ordered group having the same underlying set $M$ and satisfying the following conditions:
\begin{enumerate}
\item[(i)] Any set definable in $\mathcal R$ is $\mathfrak X$-definable.
\item[(ii)] Any bounded $\mathfrak X$-definable set is definable in $\mathcal R$.
\end{enumerate}
\end{theorem}
\begin{proof}
Since $(M,0,+,<)$ is an ordered divisible abelian group, it is naturally a $\mathbb Q$-vector space.
For any bounded $\mathfrak X$-definable set $X$, we define the predicate $P_X$ and interpret it naturally.
The notation $\mathcal S_{\text{bdd}}$ denotes the set of all bounded $\mathfrak X$-definable sets.
Set $\mathcal S_{\text{bdd}}(a)=\{X \in \mathcal S_{\text{bdd}}\;|\; X \subseteq [-a,a]^n \text{ when } X \subseteq M^n\}$ for all positive $a \in M$.
Consider the language $L=(+,<,(a)_{a \in M}, (d)_{d \in \mathbb Q}, (P_X)_{X \in \mathcal S_{\text{bdd}}})$.
The set $M$ is naturally the underlying set of an $L$-structure $\mathcal R$.
The structure $\mathcal R$ obviously satisfies the condition (ii).
We demonstrate that $\mathcal R$ is an o-minimal structure satisfying the condition (i).
Take an arbitrary $L$-formula $\phi(\overline{x})$ with parameters in $M$.
Here, $\overline{x}$ is an $m$-tuple of variables.
We first show that the set defined by $\phi(\overline{x})$ is $\mathfrak X$-definable.
Since only finitely many symbols are involved in the $L$-formula $\phi(\overline{x})$,
we may assume that $\phi(\overline{x})$ is an $L(a)$-formula with parameters for some $a>0$, where $L(a)=(+,<,a, (d)_{d \in \mathbb Q}, (P_X)_{X \in \mathcal S_{\text{bdd}}(a)})$.
There exists a quantifier-free formula $\psi(\overline{x})$ equivalent to $\phi(\overline{x})$ by Proposition \ref{prop:lp1}.
We may assume that $\phi(\overline{x})$ is a quantifier-free formula.
The formula $\phi(\overline{x})$ is a finite disjunction of a finite conjunction of formulas of the forms $P_X(t(\overline{x},\overline{c}))$ and $t_1(\overline{x},\overline{c})\ *\ t_2(\overline{x},\overline{c})$ and their negations, where the notation $t(\overline{x},\overline{c})$ denotes an $n$-tuple of terms with parameters $\overline{c}$, $X$ is a subset of $M^n$ and $t_1(\overline{x},\overline{c})$ and $t_2(\overline{x},\overline{c})$ are terms with parameters $\overline{c}$.
The symbol $*$ is one of $=$, $<$ and $>$.
The terms are of the form $l(\overline{x})=\sum_{i=1}^m q_ix_i+c$, where $\overline{x}=(x_1,\ldots, x_m)$, $c \in M$ and $q_i \in \mathbb Q$ for all $1 \leq i \leq m$.
The set defined by a formula of the form $t_1(\overline{x},\overline{c})\ * \ t_2(\overline{x},\overline{c})$ is $\mathfrak X$-definable by Corollary \ref{cor:linear}.
Consider a formula of the form $P_X(t(\overline{x},\overline{c}))$.
Introduce new variables $\overline{y}=(y_1,\ldots, y_n)$.
The set defined by $$\{(\overline{x},\overline{y}) \in M^m \times M^n\;|\; \overline{y} \in X \text{ and } \overline{y}=t(\overline{x},\overline{c})\}$$ is $\mathfrak X$-definable because the graph of $t(\overline{x},\overline{c})$ is $\mathfrak X$-definable by Corollary \ref{cor:linear} and the intersection of $\mathfrak X$-definable sets is again $\mathfrak X$-definable by Definition \ref{def:x}(3).
The set defined by the formula $P_X(t(\overline{x},\overline{c}))$ is the image of the above $\mathfrak X$-definable set under the proper projection forgetting $\overline{y}$.
It is also $\mathfrak X$-definable by Definition \ref{def:x}(6).
The set defined by $\phi(\overline{x})$ is a boolean combination of sets of the above forms.
It is also $\mathfrak X$-definable because family of $\mathfrak X$-definable sets are closed under boolean algebra.
We have proven that the condition (i) is satisfied.
The next task is to demonstrate that the set defined by the formula $\phi(\overline{x})$ is a finite union of points and open intervals when $m=1$.
We may assume that $\phi(\overline{x})$ is a quantifier-free formula for the same reason as above.
A boolean combination of finite unions of points and open intervals is again a finite union of points and open intervals.
So we have only to demonstrate that the sets defined by formulas of the forms $P_X(t(\overline{x},\overline{c}))$ and $t_1(\overline{x},\overline{c})* t_2(\overline{x},\overline{c})$ with $* \in \{=,<,>\}$ are finite unions of points and open intervals.
It is trivial in the latter case.
In the former case, the set defined by $P_X(t(\overline{x},\overline{c}))$ is a bounded $\mathfrak X$-definable set.
It is a finite union of points and open intervals by Definition \ref{def:x}(7).
\end{proof}
The following corollary indicates that the o-minimal structure $\mathcal R$ given in Theorem \ref{thm:in_omin} is the minimal $\mathfrak X$-structure and the $\mathfrak X$-structure of semi-definable sets in $\mathcal R$ is the maximal $\mathfrak X$-structure containing the o-minimal structure $\mathcal R$.
\begin{corollary}\label{cor:in_omin}
Consider an $\mathfrak X$-expansion $\mathcal X$ of an ordered divisible abelian group whose underlying set is $M$.
There exists an o-minimal expansion $\mathcal R$ of an ordered group whose underlying set is $M$ satisfying the following conditions:
\begin{enumerate}
\item[(i)] Any set definable in $\mathcal R$ is $\mathfrak X$-definable in $\mathcal X$.
\item[(ii)] Any set $\mathfrak X$-definable in $\mathcal X$ is $\mathfrak X$-definable in $\mathfrak X(\mathcal R)$.
\end{enumerate}
Here, the notation $\mathfrak X(\mathcal R)$ denotes the $\mathfrak X$-structure of semi-definable sets in $\mathcal R$.
\end{corollary}
\begin{proof}
Let $\mathcal R$ be the o-minimal structure given in Theorem \ref{thm:in_omin}.
The condition (i) follows from the theorem.
Take an arbitrary subset $X$ of $M^n$ $\mathfrak X$-definable in $\mathcal X$.
For any bounded open box $B$, the intersection $B \cap X$ is definable in $\mathcal R$.
It means that the set $X$ is semi-definable in $\mathcal R$.
We have demonstrated that the condition (ii) is satisfied.
\end{proof}
Here is another corollary.
Its proof illustrates a typical procedure for translating an assertion for o-minimal expansions of ordered groups into the corresponding assertion on bounded $\mathfrak X$-definable sets.
\begin{corollary}[Curve selection lemma]\label{cor:curve_selection}
Consider an $\mathfrak X$-expansion $\mathcal X$ of an ordered divisible abelian group whose underlying set is $M$.
Let $X$ be an $\mathfrak X$-definable subset of $M^n$ and take a point $a \in \partial X$.
Let $\mathcal R$ be the o-minimal structure given in Theorem \ref{thm:in_omin}.
There exist a positive $\varepsilon \in M$ and a continuous map $\gamma:]0,\varepsilon[ \rightarrow X$ definable in $\mathcal R$ such that the image of $\gamma$ is bounded and $\lim_{ t \to 0}\gamma(t)=a$.
\end{corollary}
\begin{proof}
Let $\mathcal R$ be the o-minimal structure given in Theorem \ref{thm:in_omin}.
Take a bounded open box $U$ containing the point $a$.
The set $X \cap U$ is definable in $\mathcal R$ by Theorem \ref{thm:in_omin}.
Since $\mathcal R$ is an o-minimal expansion of an ordered group, there exists a continuous map $\gamma:]0,\varepsilon[ \rightarrow X \cap U$ definable in $\mathcal R$ such that $\lim_{ t \to 0}\gamma(t)=a$ by the curve selection lemma for o-minimal expansions of ordered groups \cite[Chapter 6, Corollary 1.5]{vdD}.
The image of $\gamma$ is bounded because it is contained in $U$.
\end{proof}
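For a concrete illustration of Corollary \ref{cor:curve_selection}, take $M=\mathbb R$ regarded as an ordered divisible abelian group, $X=]0,1[^2$ and $a=(0,0) \in \partial X$.
Then, for instance, the map
$$\gamma:]0,1/2[ \rightarrow X, \quad \gamma(t)=(t,t)$$
is continuous and definable in any o-minimal expansion of the ordered group of reals, its image is bounded and $\lim_{t \to 0}\gamma(t)=a$.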
\subsection{Dimension}
We define the dimension of an $\mathfrak X$-definable set and investigate its basic properties.
\begin{definition}\label{def:dimension}
Consider a densely linearly ordered set without endpoints $(M,<)$.
Let $X$ be a subset of $M^n$.
If $X$ is an empty set, we set $\dim X=-\infty$.
The nonempty set $X$ is of dimension $\geq m$ if there exist a point $x \in M^n$ and a coordinate projection $\pi:M^n \rightarrow M^m$ such that, for any open box $B$ containing the point $x$, the projection image $\pi(B \cap X)$ has a nonempty interior.
The set is of dimension $m$ when it is of dimension $\geq m$ and not of dimension $\geq m+1$.
\end{definition}
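For a simple illustration of Definition \ref{def:dimension}, take $M=\mathbb R$ and consider the sets
$$X=\{0\} \cup ]1,2[ \subseteq \mathbb R \quad \text{and} \quad Y=]0,1[ \times \{0\} \subseteq \mathbb R^2\text{.}$$
The set $X$ is of dimension one: for the point $x=3/2$ and the identity projection, the image of $B \cap X$ contains an open interval for every open box $B$ containing $x$.
The set $Y$ is also of dimension one: take any point of $Y$ and the projection onto the first coordinate.
It is not of dimension $\geq 2$ because $Y$ has an empty interior in $\mathbb R^2$.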
\begin{remark}
The dimensions of sets definable in an o-minimal structure and in a locally o-minimal structure admitting local definable cell decomposition are defined differently in \cite[Chapter 4, (1.1)]{vdD} and \cite[Definition 5.1]{Fuji}, respectively.
However, they coincide with the definition given above by \cite[Corollary 5.3]{Fuji}.
We use this fact without further notice in the rest of this paper.
\end{remark}
\begin{lemma}\label{lem:dim0}
Consider an $\mathfrak X$-structure.
A nonempty $\mathfrak X$-definable set is of dimension zero if and only if it is discrete.
A discrete $\mathfrak X$-definable set is closed.
\end{lemma}
\begin{proof}
Lemma \ref{lem:in_omin} and the cell decomposition theorem \cite[Chapter 3, Theorem 2.11]{vdD} immediately imply this lemma.
\end{proof}
The following lemma is well-known:
\begin{lemma}\label{lem:omin_tmp}
Consider an o-minimal structure whose underlying set is $M$.
Let $C_1, \ldots, C_N$ be definable subsets of $M^n$.
If the union $\bigcup_{i=1}^N C_i$ has a nonempty interior, $C_k$ has a nonempty interior for some $1 \leq k \leq N$.
\end{lemma}
\begin{proof}
It is an easy corollary of the cell decomposition theorem \cite[Chapter 3, Theorem 2.11]{vdD}.
\end{proof}
We give another expression of dimension.
\begin{lemma}\label{lem:equiv_dim}
Consider an $\mathfrak X$-structure whose underlying set is $M$.
Let $X$ be an $\mathfrak X$-definable subset of $M^n$.
We have $$\dim X = \sup\{ \dim (U \cap X) \;|\; U \text{ is a bounded open box in }M^n\}\text{.}$$
\end{lemma}
\begin{proof}
Let $\mathcal R$ be the o-minimal structure given in Lemma \ref{lem:in_omin}.
Let $d$ be the right-hand side of the equality in the lemma.
We first demonstrate that $d \leq \dim X$.
There exists a bounded open box $U$ in $M^n$ such that $d=\dim (U \cap X)$.
The set $U \cap X$ is definable in $\mathcal R$.
There exist a cell $C$ contained in $U \cap X$ and a coordinate projection $\pi:M^n \rightarrow M^d$ such that the image $\pi(C)$ has a nonempty interior by the definition of dimension of a set definable in the o-minimal structure $\mathcal R$.
Take an arbitrary point $x \in C$.
The projection image $\pi(C \cap B)$ has a nonempty interior for any open box $B$ containing the point $x$ by the definition of cells.
Since $C$ is a subset of $X$, the projection image $\pi(X \cap B)$ has a nonempty interior for any open box $B$ containing the point $x$.
It means that $d \leq \dim X$.
We next demonstrate the opposite inequality $\dim X \leq d$.
There exist a point $x \in M^n$ and a coordinate projection $\pi:M^n \rightarrow M^{\dim X}$ such that, for any open box $B$ containing the point $x$, the projection image $\pi(B \cap X)$ has a nonempty interior.
Fix a bounded open box $B$ containing the point $x$.
The intersection $X \cap B$ is definable in $\mathcal R$ by Lemma \ref{lem:in_omin}.
Apply the cell decomposition theorem \cite[Chapter 3, Theorem 2.11]{vdD}.
We get a finite partition into cells $X \cap B=\bigcup_{i=1}^N C_i$.
The image $\pi(C_k)$ has a nonempty interior for some $1 \leq k \leq N$ by Lemma \ref{lem:omin_tmp}.
It implies that $\dim X \leq \dim C_k \leq \dim (B \cap X)$.
It means $\dim X \leq d$.
\end{proof}
We summarize the basic properties of dimension.
\begin{proposition}\label{prop:dim}
Consider an $\mathfrak X$-structure whose underlying set is $M$.
The following assertions hold true:
\begin{enumerate}
\item[(a)] We have $\dim (X) \leq \dim (Y)$ for any $\mathfrak X$-definable sets $X$ and $Y$ with $X \subseteq Y$.
\item[(b)] The equality $\dim (X \cup Y)=\max\{\dim(X),\dim(Y)\}$ holds true for any $\mathfrak X$-definable subsets $X$ and $Y$ of $M^n$.
\item[(c)] The equality $\dim (X \times Y)=\dim(X)+\dim(Y)$ holds true for any $\mathfrak X$-definable sets $X$ and $Y$.
\item[(d)] Let $X$ be an $\mathfrak X$-definable set.
We get $\dim (\partial X) < \dim(X)$ and $\dim(\overline{X})=\dim X$ when $\partial X$ is $\mathfrak X$-definable.
\end{enumerate}
\end{proposition}
\begin{proof}
This proposition immediately follows from Lemma \ref{lem:equiv_dim} and the basic properties of the dimension of sets definable in an o-minimal structure \cite[Chapter 4, Proposition 1.3, Corollary 1.6, Theorem 1.8]{vdD}.
\end{proof}
\subsection{Structure theorem}
Loveys and Peterzil investigated necessary and sufficient conditions for an o-minimal expansion of an ordered group being linear in \cite{LP}.
Using their results, we investigate the structure of an $\mathfrak X$-expansion $\mathcal X$ of an ordered divisible abelian group when there exists an $\mathfrak X$-definable strictly monotone homeomorphism between a bounded open interval and an unbounded open interval.
We first prove the following lemma.
\begin{lemma}\label{lem:xbij}
Consider an $\mathfrak X$-expansion of an ordered divisible abelian group whose underlying set is $M$.
Assume that there exists an $\mathfrak X$-definable strictly monotone homeomorphism between a bounded open interval and an unbounded open interval.
Then, there exists an $\mathfrak X$-definable strictly increasing homeomorphism $\psi$ between an arbitrary bounded open interval and $M$ such that $$\psi(\text{the middle point of the bounded open interval})=0\text{.}$$
\end{lemma}
\begin{proof}
Let $\varphi:I \rightarrow J$ be the given $\mathfrak X$-definable strictly monotone homeomorphism between a bounded open interval $I$ and an unbounded open interval $J$.
It is easy to construct strictly increasing homeomorphisms between $]a,b[$ and $]0,b-a[$, between $]a,\infty[$ and $]0,\infty[$, and between $]-\infty,a[$ and $]-\infty,0[$ for $a,b \in M$ using the addition.
The composition of $\varphi$ with them is $\mathfrak X$-definable by Lemma \ref{lem:composition} and Lemma \ref{lem:bounded}(1).
Hence, we may assume that $I=]0,u[$ for some $u>0$.
If $J=M$, consider the restriction of $\varphi$ to $]0,\varphi^{-1}(0)[$ instead of $\varphi$.
We may assume that $J=]-\infty,0[$ or $J=]0,\infty[$.
We have to check $\mathfrak X$-definability in the same manner as above every time we construct a new map, but we omit these routine verifications in the rest of the proof.
We may further assume that $J=]0,\infty[$ and $\varphi$ is strictly increasing by composing $\varphi$ with the maps given by $x \mapsto -x$ and $x \mapsto u-x$.
Take an arbitrary nonempty bounded open interval $I'$.
We will construct an $\mathfrak X$-definable strictly increasing homeomorphism from $I'$ to $M$.
We may assume that $I'$ is of the form $]0,v[$ for some $v>0$ in the same way as above.
We first construct an $\mathfrak X$-definable strictly increasing homeomorphism from $I'$ to $]0,\infty[$.
We have nothing to do when $u=v$.
When $v<u$, the $\mathfrak X$-definable map given by $\varphi(t+u-v)-\varphi(u-v)$ for all $0 <t <v$ is the desired map.
When $v>u$, consider the $\mathfrak X$-definable map which is the identity map on $]0,v-u]$ and which is given by $\varphi(t+u-v)+v-u$ for all $t > v-u$.
We finally construct an $\mathfrak X$-definable strictly increasing homeomorphism from $I'$ to $M$.
We can construct $\mathfrak X$-definable strictly increasing homeomorphisms $\varphi_1:]0,v/2[ \rightarrow ]-\infty,0[$ and $\varphi_2:]v/2,v[ \rightarrow ]0, \infty[$ in the same manner as above.
The map $\psi:]0,v[ \rightarrow M$ given by $\psi(t)=\varphi_1(t)$ if $t<v/2$, $\psi(v/2)=0$ and $\psi(t)=\varphi_2(t)$ if $t>v/2$ is the desired map.
\end{proof}
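When a field structure is available, the homeomorphism in Lemma \ref{lem:xbij} admits a closed form.
For instance, over the ordered field of reals, the semialgebraic map
$$\psi:]0,1[ \rightarrow \mathbb R, \quad \psi(x)=\frac{2x-1}{x(1-x)}$$
is a strictly increasing homeomorphism with $\psi(1/2)=0$; indeed, $\psi'(x)=\frac{2x^2-2x+1}{x^2(1-x)^2}>0$.
In the purely group-theoretic setting of the lemma, no such closed form is available in general, and the piecewise construction given in the proof is necessary.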
Recall the definition of a piecewise linear map definable in an o-minimal expansion of an ordered group.
We also define a piecewise linear map definable in an $\mathfrak X$-expansion of an ordered divisible abelian group.
\begin{definition}
Consider an o-minimal expansion of an ordered group $\mathcal M=(M,<,+,0,\ldots)$.
A definable function $F:U \subseteq M^n \rightarrow M$ is \textit{piecewise linear}, if we can partition $U$ into finitely many definable sets $U_1, \ldots, U_k$ such that $F$ is linear on each of them, i.e., given $x,y \in U_i$ and $t \in M^n$, if $x+t,y+t \in U_i$, then $F(x+t)-F(x)=F(y+t)-F(y)$.
Consider an $\mathfrak X$-expansion of an ordered divisible abelian group whose underlying set is $M$.
An $\mathfrak X$-definable function $F:U \subseteq M^n \rightarrow M$ is \textit{piecewise linear} if, for any bounded open box $B$ in $M^n$ and any bounded open interval $I$, we can partition $B \cap U \cap F^{-1}(I)$ into finitely many $\mathfrak X$-definable sets $U_1, \ldots, U_k$ such that $F$ is linear on each of them, i.e., given $x,y \in U_i$ and $t \in M^n$, if $x+t,y+t \in U_i$, then $F(x+t)-F(x)=F(y+t)-F(y)$.
\end{definition}
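For example, in any o-minimal expansion of an ordered group, the definable function
$$F:M \rightarrow M, \quad F(x)=\max(x,-x)$$
is piecewise linear with respect to the partition $U_1=\{x \in M\;|\; x \geq 0\}$ and $U_2=\{x \in M\;|\; x<0\}$: if $x,y \in U_1$ and $x+t,y+t \in U_1$, then $F(x+t)-F(x)=t=F(y+t)-F(y)$, and similarly on $U_2$.
In contrast, the function $x \mapsto x^2$ on the ordered field of reals is not piecewise linear: on any definable set with a nonempty interior, the difference $F(x+t)-F(x)=2xt+t^2$ depends on $x$ when $t \neq 0$.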
The following is due to Loveys and Peterzil \cite{LP} and is summarized in \cite{E}.
See also \cite{PS2}.
\begin{proposition}\label{prop:lp2}
Consider an o-minimal expansion of an ordered group $\mathcal M=(M,<,+,0,\ldots)$.
The following are equivalent:
\begin{enumerate}
\item[(1)] Every definable function $F:U \subseteq M^n \rightarrow M$ is piecewise linear.
\item[(2)] There exist no definable binary operations $\oplus, \otimes:I^2 \rightarrow I$ on an interval $I=]-a,a[$, and a positive element $1 \in I$ such that $(I,<_I,0,1,\oplus,\otimes)$ is an ordered real closed field, where $<_I$ denotes the restriction of $<$ to $I$.
\end{enumerate}
\end{proposition}
\begin{proof}
\cite[Fact 1.12]{E}.
\end{proof}
We are now ready to prove the structure theorem.
\begin{theorem}\label{thm:xstr}
Consider an $\mathfrak X$-expansion of an ordered divisible abelian group whose underlying set is $M$.
Assume further that there exists an $\mathfrak X$-definable strictly monotone homeomorphism between a bounded open interval and an unbounded open interval.
Then, exactly one of the following holds true:
\begin{enumerate}
\item[(1)] Any $\mathfrak X$-definable function is piecewise linear.
\item[(2)] The structure is an $\mathfrak X$-expansion of an ordered real closed field in the following sense:
There exist an element $1' \in M$ and $\mathfrak X$-definable binary operations $\oplus,\otimes:M^2 \rightarrow M$ such that the tuple $(M,<,0,1',\oplus,\otimes)$ is an ordered real closed field.
\end{enumerate}
\end{theorem}
\begin{proof}
Take the o-minimal expansion of an ordered group $\mathcal R$ given in Theorem \ref{thm:in_omin}.
We have the following two cases by Proposition \ref{prop:lp2}.
\begin{enumerate}
\item[(1)] Every function $F:U \subseteq M^n \rightarrow M$ definable in $\mathcal R$ is piecewise linear.
\item[(2)] There exist binary operations $\oplus_I, \otimes_I:I^2 \rightarrow I$ definable in $\mathcal R$ on an interval $I=]-a,a[$, and a positive element $1_I \in I$ such that $(I,<_I,0,1_I,\oplus_I,\otimes_I)$ is an ordered real closed field, where $<_I$ denotes the restriction of $<$ to $I$.
\end{enumerate}
We first consider the case (1).
Take an arbitrary $\mathfrak X$-definable function $F:U \subseteq M^n \rightarrow M$.
Take a bounded open box $B$ in $M^n$ and a bounded open interval $I$.
The set $\Gamma(F) \cap (B \times I)$ is definable in $\mathcal R$, where $\Gamma(F)$ denotes the graph of the function $F$.
It is the graph of the restriction of $F$ to $U \cap B \cap F^{-1}(I)$.
Since the function $F|_{U \cap B \cap F^{-1}(I)}$ definable in $\mathcal R$ is piecewise linear, the $\mathfrak X$-definable function is also piecewise linear.
We next treat the case (2).
There exists a strictly increasing $\mathfrak X$-definable homeomorphism $\varphi:I \rightarrow M$ with $\varphi(0)=0$ by Lemma \ref{lem:xbij}.
Set $1'=\varphi(1_I)$, $x \oplus y= \varphi(\varphi^{-1}(x) \oplus_I \varphi^{-1}(y))$ and $x \otimes y= \varphi(\varphi^{-1}(x) \otimes_I \varphi^{-1}(y))$ for all $x, y \in M$.
The graph of $\oplus_I$ is a bounded set definable in $\mathcal R$.
In particular, it is $\mathfrak X$-definable.
The graph of $\oplus$ is the image of the graph of $\oplus_I$ under the homeomorphism between $I^3$ and $M^3$ given by $(x,y,z) \mapsto (\varphi(x),\varphi(y),\varphi(z))$.
It is $\mathfrak X$-definable by Lemma \ref{lem:image}.
The operator $\otimes$ is also $\mathfrak X$-definable for the same reason.
It is easy to check that the tuple $(M,<,0,1',\oplus,\otimes)$ is an ordered real closed field using \cite[Theorem 1.2.2(ii)]{BCR}.
We omit the details.
\end{proof}
\begin{remark}
Consider an o-minimal expansion $\widetilde{\mathbb R}$ of the ordered group of reals.
The $\mathfrak X$-structure of semi-definable sets in $\widetilde{\mathbb R}$ satisfies the assumption of Theorem \ref{thm:xstr}.
The map $\varphi:]0,1[ \rightarrow ]0,\infty[$ defined by $\varphi(x)=i+2^{i+1}(x-(1-1/2^i))$ for $1-1/2^i<x \leq 1-1/2^{i+1}$, where $i$ ranges over the non-negative integers, is a semi-definable homeomorphism between the interval $]0,1[$ and the interval $]0,\infty[$.
\end{remark}
\begin{remark}
An assertion for Shiota's $\mathfrak Y$-sets similar to but not identical to Theorem \ref{thm:xstr} is found in \cite[Theorem V.2.2]{Shiota}.
\end{remark}
\subsection{Topological results}
We summarize other basic topological properties of $\mathfrak X$-structures which were not treated in the previous subsections.
\begin{proposition}\label{prop:x_int_closure}
Consider an $\mathfrak X$-expansion of an ordered divisible abelian group.
The interior, closure and frontier of an $\mathfrak X$-definable set are $\mathfrak X$-definable.
\end{proposition}
\begin{proof}
Let $M$ be the underlying space of the given $\mathfrak X$-structure.
Let $X$ be an $\mathfrak X$-definable subset of $M^n$.
Fix a positive element $r \in M$.
Consider the $\mathfrak X$-definable set
$$ A=\{(x,y,s) \in M^n \times M^n \times M\;|\; 0<s<r,\ |x_i - y_i|<s (\forall i),\ x \in X,\ y \not\in X\} \text{,}$$
where $x_i$ and $y_i$ are the $i$-th coordinate of $x$ and $y$, respectively.
Set $$B=\{(x,s) \in X \times M\;|\; 0<s<r \text{ and } \exists y \not\in X \text{ such that } |x_i - y_i|<s\ (\forall i)\}\text{.}$$
The set $B$ is the image of the $\mathfrak X$-definable set $A$ under a proper projection.
The set $B$ is $\mathfrak X$-definable.
The interior $\operatorname{int}(X)$ of $X$ is the image of $(X \times \{s \in M\;|\;0<s<r\}) \setminus B$ under the projection forgetting the last coordinate.
Therefore, the interior is $\mathfrak X$-definable.
The closure of $X$ is given by $(\operatorname{int}(X^c))^c$.
Here, the notation $A^c$ denotes the complement of a set $A$.
The closure is $\mathfrak X$-definable.
The frontier is also $\mathfrak X$-definable by Definition \ref{def:x}(3).
\end{proof}
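As a simple illustration of the proof with $M=\mathbb R$ and $n=1$, suppose the set $X=[0,1[ \cup \{2\}$ is $\mathfrak X$-definable, as it is, for instance, in the $\mathfrak X$-structure of semi-definable sets in an o-minimal expansion of the ordered group of reals.
A pair $(x,s)$ with $x \in X$ and $0<s<r$ belongs to the set $B$ exactly when the interval $]x-s,x+s[$ meets the complement of $X$; removing $B$ and projecting yields
$$\operatorname{int}(X)=]0,1[ \quad \text{and} \quad \overline{X}=[0,1] \cup \{2\}\text{.}$$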
\begin{corollary}\label{cor:dim}
Consider an $\mathfrak X$-expansion of an ordered divisible abelian group.
Let $X$ be an $\mathfrak X$-definable set.
We get $\dim (\partial X) < \dim(X)$ and $\dim(\overline{X})=\dim X$.
\end{corollary}
\begin{proof}
Immediate from Proposition \ref{prop:x_int_closure} and Proposition \ref{prop:dim}(d).
\end{proof}
\begin{example}\label{ex:x1}
Consider a definably complete structure $\mathcal M=(M,<,\ldots)$.
A \textit{definable family} of subsets of $M^n$, parameterized by $A \subseteq M^m$, is an indexed family $\{Y_a\}_{a \in A}$ of fibers, where $Y \subseteq M^{m+n}$ and $A \subseteq M^m$ are definable.
A definable family $\{Y_a\}_{a \in A}$ is \textit{monotone} if $A \subseteq M$ and either $Y_r \supseteq Y_s$ for all $r,s \in A$ with $r \leq s$ or $Y_r \subseteq Y_s$ for all $r,s \in A$ with $r \leq s$.
We have $\bigcap_{r \in A} Y_r \neq \emptyset$ for all monotone definable families of nonempty definable closed and bounded sets $\{Y_r\}_{r \in A}$ by \cite[Lemma 1.9]{M}.
We can define an $\mathfrak X$-definable family similarly.
But the intersection of a monotone $\mathfrak X$-definable family of nonempty $\mathfrak X$-definable closed and bounded sets may be empty.
In fact, consider the o-minimal structure in Example \ref{ex:x2} and the $\mathfrak X$-structure of semi-definable sets in this o-minimal structure.
The set of positive integers $\mathbb N$ is semi-definable.
Take $a_n,b_n \in \mathbb Q$ as in Example \ref{ex:x2}.
Set $Y_n = [a_n,b_n] \subseteq \mathbb R_{\text{alg}}$.
The family $\{Y_n\}_{n \in \mathbb N}$ is a monotone $\mathfrak X$-definable family of nonempty definable closed and bounded sets.
We obviously have $\bigcap_{n \in \mathbb N} Y_n = \emptyset$.
\end{example}
We finally study when there exists an unbounded discrete $\mathfrak X$-definable set in $M$.
\begin{lemma}\label{lem:xoromin}
Consider an $\mathfrak X$-expansion of an ordered divisible abelian group whose underlying set is $M$.
Exactly one of the following conditions holds true:
\begin{enumerate}
\item[(1)] Any $\mathfrak X$-definable subset of $M$ is a finite union of points and open intervals.
\item[(2)] There exists an unbounded discrete $\mathfrak X$-definable set.
\end{enumerate}
\end{lemma}
\begin{proof}
Assume that the condition (1) is not satisfied.
There exists an $\mathfrak X$-definable subset $X$ of $M$ which is not a finite union of points and open intervals.
Set $Y=\overline{X} \setminus \operatorname{int}(X)$.
It is $\mathfrak X$-definable by Proposition \ref{prop:x_int_closure}.
We demonstrate that $Y$ is an unbounded discrete $\mathfrak X$-definable set.
We first demonstrate that $Y$ is discrete.
For any bounded open interval $I$ in $M$, the intersection $I \cap Y$ is a finite union of points and open intervals by the definition of an $\mathfrak X$-structure.
Therefore, $Y$ is discrete when it has an empty interior.
Assume that $Y$ has a nonempty interior.
We can take a bounded open interval $J$ contained in $Y$.
Since $X \cap J$ is a finite union of points and open intervals, $J = Y \cap J = (\overline{X} \cap J) \setminus (\operatorname{int}(X) \cap J)$ consists of finitely many points.
Contradiction.
We next show that $Y$ is unbounded.
The set $Y$ consists of infinitely many points.
In fact, assume that $Y$ is a finite set.
There exists a nonempty bounded open interval $I$ which contains $Y$.
The difference $M \setminus I$ consists of a closed interval $J_+$ unbounded above and a closed interval $J_-$ unbounded below.
We have $J_+ \cap X = \emptyset$ or $J_+ \subseteq X$.
Otherwise, we can take points $a \in J_+ \cap X$ and $b \in J_+ \setminus X$.
Take a bounded open interval $J \subseteq J_+$ containing the points $a$ and $b$.
We have $J \cap Y=\emptyset$.
The set $J \cap X$ is a finite union of points and open intervals.
We have $J \cap X \neq \emptyset$ because it contains the point $a$.
We also have $J \cap X \neq J$ because $J \cap X$ does not contain the point $b$.
By the definition of $Y$, we get $Y \cap J \neq \emptyset$ both in the case in which $X \cap J$ consists of finitely many points and in the case in which it contains an open interval.
We have demonstrated that $J_+ \cap X = \emptyset$ or $J_+ \subseteq X$.
We also obtain $J_- \cap X = \emptyset$ or $J_- \subseteq X$ similarly.
Since $I \cap X$ is a finite union of points and open intervals, the set $X$ is a finite union of points and open intervals.
Contradiction.
We have demonstrated that $Y$ consists of infinitely many points.
If $Y$ is bounded, we can take a bounded open interval $I$ containing the set $Y$.
The set $Y=Y \cap I$ consists of finitely many points because $Y \cap I$ is a finite union of points and open intervals and $Y$ is discrete.
This contradicts the fact that $Y$ is an infinite set.
\end{proof}
\section{Geometry of semi-definable sets}\label{sec:semi-definable}
We studied $\mathfrak X$-structures in the previous section.
In this section, we treat a special family of $\mathfrak X$-structures; that is, the $\mathfrak X$-structure of semi-definable sets in an o-minimal structure $\mathcal R=(M,<,\ldots)$.
\subsection{Frontier of semi-definable set}
We first consider the frontier, interior and closure of semi-definable sets.
\begin{lemma}\label{lem:frontier}
Consider an o-minimal structure.
The frontier, interior and closure of a semi-definable set are semi-definable.
\end{lemma}
\begin{proof}
Let $\mathcal R$ be an o-minimal structure and $M$ be its underlying set.
Let $X$ be a semi-definable subset of $M^n$.
We have $(\partial X) \cap U = (\partial (X \cap U)) \cap U$ for any bounded open box $U$.
The set $X \cap U$ is definable in $\mathcal R$ because $X$ is semi-definable.
The frontier $\partial (X \cap U)$ is also definable.
The intersection $(\partial X) \cap U $ is definable.
It means that $\partial X$ is semi-definable.
Once we know that the frontier is semi-definable, it is easy to demonstrate that the interior and the closure are semi-definable.
\end{proof}
\begin{remark}
When we assume that the o-minimal structure is an expansion of an ordered group, Lemma \ref{lem:frontier} immediately follows from Proposition \ref{prop:x_int_closure}.
\end{remark}
\subsection{Semi-definable connectedness}
We next introduce the notion of semi-definable connectedness.
\begin{definition}
Consider an o-minimal structure $\mathcal R=(M,<,\ldots)$.
A semi-definable subset $X$ of $M^n$ is \textit{semi-definably connected} if there are no non-empty proper semi-definable closed and open subsets $Y_1$ and $Y_2$ of $X$ such that $Y_1 \cap Y_2 = \emptyset$ and $X=Y_1 \cup Y_2$.
We define that a definable set is \textit{definably connected} in the same manner.
The semi-definable set $X$ is \textit{semi-definably pathwise connected} if, for any $x,y \in X$, there exist elements $c_1,c_2 \in M$ and a definable continuous map $\gamma:[c_1,c_2] \rightarrow X$ with $\gamma(c_1)=x$ and $\gamma(c_2)=y$.
We define that a definable set is \textit{definably pathwise connected} in the same manner.
\end{definition}
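For instance, consider an o-minimal expansion of the ordered group of reals.
The set $\mathbb Z$ is semi-definable but not semi-definably connected: the sets $Y_1=\{0\}$ and $Y_2=\mathbb Z \setminus \{0\}$ are nonempty, semi-definable, closed and open in $\mathbb Z$, disjoint and cover $\mathbb Z$.
On the other hand, $\mathbb R$ itself is semi-definably connected; this follows, for instance, from Theorem \ref{thm:connected}(2) below because any two points are contained in a bounded open interval.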
We easily get the following result:
\begin{lemma}[Intermediate value property]\label{lem:intermediate}
Consider an o-minimal structure $\mathcal M=(M,<,\ldots)$.
Let $D$ be a subset of $M^n$ and $f:D \rightarrow M$ be a function whose graph is semi-definable and semi-definably connected.
Take two points $y_1, y_2 \in f(D)$ with $y_1<y_2$.
For any $y \in M$ with $y_1<y<y_2$, there exists $x \in D$ such that $y=f(x)$.
\end{lemma}
\begin{proof}
Otherwise, the sets $\Gamma(f) \cap (M^n \times \{y' \in M\;|\; y'>y\})$ and $\Gamma(f) \cap (M^n \times \{y' \in M\;|\; y'<y\})$ are nonempty closed and open semi-definable subsets of $\Gamma(f)$.
Here, $\Gamma(f)$ denotes the graph of $f$.
\end{proof}
We next recall the following fact:
\begin{lemma}\label{lem:connected_omin}
Consider an o-minimal structure $\mathcal R=(M,<,\ldots)$.
Let $X$ be a definable subset of $M^n$ and $U_1 \subseteq U_2$ be open boxes in $M^n$.
Take a definably connected component $C$ of $X \cap U_2$.
The intersection $C \cap U_1$ is the union of the definably connected components of $X \cap U_1$ contained in $C$.
\end{lemma}
\begin{proof}
Immediate from the definable cell decomposition theorem for o-minimal structures \cite[Chapter 3, Theorem 2.11]{vdD}.
\end{proof}
We get the following theorem:
\begin{theorem}\label{thm:connected}
Consider an o-minimal expansion $\mathcal R=(M, <, +, 0, \ldots)$ of an ordered group.
Let $X$ be a nonempty semi-definable subset of $M^n$.
The following are equivalent:
\begin{enumerate}
\item[(1)] $X$ is semi-definably connected.
\item[(2)] For any $x,y \in X$, there exists a bounded open box $U$ in $M^n$ such that both the points $x$ and $y$ are contained in some definably connected component of $X \cap U$.
\item[(3)] $X$ is semi-definably pathwise connected.
\end{enumerate}
In addition, for any $x \in X$, there exists a maximal semi-definably connected semi-definable subset $Y$ of $X$ containing the point $x$.
The set $Y$ is called the semi-definably connected component of $X$ containing the point $x$.
A semi-definably connected component of $X$ is closed and open in $X$.
\end{theorem}
\begin{proof}
Fix a point $x \in X$.
We first define a semi-definable closed and open subset $C_x$ of $X$ containing the point $x$.
In this proof, exceptionally, a subscript as in $C_x$ does not denote the fiber of a set.
Let $\mathcal B_x$ be the set of bounded open boxes in $M^n$ containing the point $x$.
Take an arbitrary element $U \in \mathcal B_x$.
The intersection $U \cap X$ is definable by the definition of semi-definability and it has finitely many definably connected components by \cite[Chapter 3, Proposition 2.18]{vdD}.
Let $\mathcal C(U)$ be the set of the definably connected components of $U \cap X$.
The notation $\mathcal L_x(U)$ denotes the subset of $\mathcal C(U)$ of the elements $C$ satisfying that a definably connected component of the intersection $B \cap X$ contains both the point $x$ and $C$ for some bounded open box $B$ containing the open box $U$.
We set $$C_x(U)=\bigcup_{C \in \mathcal L_x(U)}C\text{.}$$
Let $U_1, U_2 \in \mathcal B_x$ with $U_1 \subseteq U_2$.
We show the following equality:
$$
C_x(U_2) \cap U_1 = C_x(U_1)\text{.}
$$
Since $C_x(U_2)$ is a finite union of definably connected components of $X \cap U_2$, the set $C_x(U_2) \cap U_1$ is also a finite union of definably connected components of $X \cap U_1$ by Lemma \ref{lem:connected_omin}.
Let $D_1$ be a definably connected component of $X \cap U_1$.
There exists a unique definably connected component $D_2$ of $X \cap U_2$ containing the set $D_1$ by Lemma \ref{lem:connected_omin}.
We have only to demonstrate that $D_1 \in \mathcal L_x(U_1)$ if and only if $D_2 \in \mathcal L_x(U_2)$.
We can easily demonstrate that $D_1 \in \mathcal L_x(U_1)$ when $D_2 \in \mathcal L_x(U_2)$.
We omit the proof.
We consider the opposite implication.
There exists a bounded open box $U$ such that $D_1 \subseteq U$ and a definably connected component of $X \cap U$ contains both $x$ and $D_1$.
Take a larger bounded open box $V$ containing both $U$ and $U_2$.
A definably connected component of $X \cap V$ still contains both $x$ and $D_1$.
The definably connected set $D_2$ is also contained in the same definably connected component of $X \cap V$ by Lemma \ref{lem:connected_omin} because $D_2$ contains $D_1$ by the assumption.
It means that $D_2 \in \mathcal L_x(U_2)$.
We are now ready to define the semi-definable closed and open subset $C_x$.
Set $$C_x=\bigcup_{U \in \mathcal B_x}C_x(U)\text{.}$$
We first show that $C_x \cap U = C_x(U)$ for any $U \in \mathcal B_x$.
In fact, the inclusion $C_x(U) \subseteq C_x \cap U$ is obvious from the definition.
We demonstrate the opposite inclusion.
Take an arbitrary element $y \in C_x \cap U$.
There exists $V \in \mathcal B_x$ such that $y \in C_x(V)$ by the definition.
Set $W=U \cap V$.
Since $C_x(V)$ is a subset of $V$, we get $y \in C_x(V) \cap U = C_x(V) \cap (V \cap U) = C_x(V) \cap W = C_x(W) = C_x(U) \cap W \subseteq C_x(U)$ by the above equality.
We have demonstrated the opposite inclusion.
We demonstrate that $C_x$ is semi-definable.
Take an arbitrary bounded open box $U$ in $M^n$.
We have only to prove that $C_x \cap U$ is definable.
Take a bounded open box $V$ larger than $U$.
If $C_x \cap V$ is definable then $C_x \cap U = (C_x \cap V) \cap U$ is also definable.
We may assume that $U$ contains the point $x$ for the above reason.
We get $C_x \cap U=C_x(U)$, which is a finite union of definably connected components of $X \cap U$.
Hence, it is definable.
The semi-definable set $C_x$ is closed and open in $X$.
In fact, take a point $y \in C_x$.
Take a bounded open box $U$ containing the points $x$ and $y$.
The intersection $C_x \cap U=C_x(U)$ is a finite union of definably connected components of $X \cap U$.
In particular, $C_x \cap U$ is closed and open in $X \cap U$ by \cite[Chapter 3, Proposition 2.18]{vdD}.
Take a sufficiently small open box $V \subseteq U$ containing the point $y$.
We have $X \cap V =(X \cap U) \cap V=(C_x \cap U) \cap V = C_x \cap V$ by the definition of definably connected components.
It means that $C_x$ is open in $X$.
We can demonstrate that $X \setminus C_x$ is open in the same manner.
We have finished the preparation.
We prove that (1) implies (2).
Assume that the condition (2) does not hold true.
There exist $x,y \in X$ such that, for any bounded open box $U$ containing the points $x$ and $y$, the points $x$ and $y$ are contained in different definably connected components of $X \cap U$.
It means that $\mathcal L_x(U)$ and $\mathcal L_y(U)$ have an empty intersection for any bounded open box $U$ containing $x$ and $y$.
We have $C_x(U) \cap C_y(U) = \emptyset$ by the definition for any bounded open box $U$ containing both $x$ and $y$.
We easily get $C_x \cap C_y = \emptyset$.
It means that $X$ is not semi-definably connected.
The next task is to prove that $(2) \Rightarrow (3)$.
Take arbitrary $x,y \in X$.
There exist a bounded open box $U$ containing the points $x$ and $y$ and a definably connected component $Y$ of $X \cap U$ containing the points $x$ and $y$.
It is well-known that a definably connected definable set is definably pathwise connected \cite[Chapter 6, Proposition 3.2]{vdD}.
There exists a definable continuous map $\gamma:[c_1,c_2] \rightarrow X \cap U$ with $\gamma(c_1)=x$ and $\gamma(c_2)=y$.
The implication $(3) \Rightarrow (1)$ is easy to prove.
Assume for contradiction that there exist disjoint nonempty semi-definable closed and open subsets $Y_1$ and $Y_2$ of $X$ with $X=Y_1 \cup Y_2$.
Take points $y_1, y_2 \in X$ with $y_i \in Y_i$ for $i=1,2$.
There exists a definable continuous map $\gamma:[c_1, c_2] \rightarrow X$ with $\gamma(c_i)=y_i$ for $i=1,2$.
The image of $\gamma$ is bounded by \cite[Proposition 1.10]{M} because an o-minimal structure is definably complete.
We can take a bounded open box $B$ in $M^n$ containing the image of $\gamma$.
The sets $B \cap X$, $B \cap Y_1$ and $B \cap Y_2$ are all definable.
The closed interval $[c_1,c_2]$ is decomposed into two disjoint definable closed and open subsets $\gamma^{-1}(Y_1 \cap B)$ and $\gamma^{-1}(Y_2 \cap B)$.
On the other hand, the closed interval is definably connected by \cite[Corollary 1.5]{M}.
It is a contradiction.
The last task is to prove the existence of a semi-definably connected component.
In fact, the semi-definable set $C_x$ is the semi-definably connected component containing the point $x \in X$.
Take arbitrary $y_1, y_2 \in C_x$.
We can take $U_1, U_2 \in \mathcal B_x$ with $y_i \in C_x(U_i)$ for $i=1,2$.
By the definition of $C_x(U_i)$, there exists $V_i \in \mathcal B_x$ such that $x$ and $y_i$ are contained in a definably connected component of $V_i \cap X$.
Take a bounded open box $W$ containing the open boxes $V_1$ and $V_2$.
Three points $x$, $y_1$ and $y_2$ are contained in a definably connected component of $X \cap W$.
This definably connected component is also the definably connected component of $C_x(W)=C_x \cap W$.
Hence, $C_x$ is semi-definably connected by the condition (2).
We finally show that $C_x$ is maximal.
Take an arbitrary semi-definably connected semi-definable subset $Y$ of $X$ with $x \in Y$.
We have only to demonstrate that $Y$ is contained in $C_x$.
Take an arbitrary point $y \in Y$.
Since $Y$ is semi-definably connected, there exists a definable continuous map $\gamma:[c_1,c_2] \rightarrow Y$ such that $\gamma(c_1)=x$ and $\gamma(c_2)=y$.
Since the image $\gamma([c_1,c_2])$ is bounded for the same reason as above, we can take a bounded open box $U$ containing the image.
It means that $x$ and $y$ are contained in the same definably connected component of $X \cap U$ because a definably pathwise connected definable set is definably connected.
We have $y \in C_x(U) \subseteq C_x$.
We have finished the proof.
\end{proof}
We introduce a corollary of Theorem \ref{thm:connected}.
\begin{corollary}\label{cor:connected}
Consider an o-minimal expansion of an ordered group.
The closure of a nonempty semi-definably connected semi-definable set is again semi-definably connected.
\end{corollary}
\begin{proof}
Immediately follows from Theorem \ref{thm:connected}(3) and Corollary \ref{cor:curve_selection}.
\end{proof}
\subsection{Good manifolds}
We introduce the notion of a good manifold necessary in Section \ref{sec:multi}.
\begin{definition}\label{def:manifold}
Consider an o-minimal structure $\mathcal R=(M,<,\ldots)$.
A subset $X$ of $M^n$ of dimension $d$ is \textit{locally a good submanifold} at $x \in X$ if there exist
\begin{itemize}
\item a bounded open box $B$ containing the point $x$,
\item a permutation $\sigma$ of $\{1,\ldots, n\}$ and
\item a definable continuous map $f: \pi_d(\widetilde{\sigma}(X \cap B)) \rightarrow M^{n-d}$
\end{itemize}
such that $\widetilde{\sigma}(X \cap B)$ is the graph of $f$.
Here, the notation $\widetilde{\sigma}$ denotes the map defined in Definition \ref{def:x} and $\pi_d:M^n \rightarrow M^d$ denotes the projection onto the first $d$ coordinates.
The notation $\operatorname{Reg}(X)$ denotes the set of points at which $X$ is locally a good submanifold.
The notation $\operatorname{Sing}(X)$ denotes the singular locus defined by $X \setminus \operatorname{Reg}(X)$.
A semi-definable set is called a \textit{good submanifold} if it is locally a good submanifold at every point in it.
Let $\pi:M^n \rightarrow M^{n-1}$ be the projection forgetting the last coordinate.
A semi-definable subset $X$ of $M^n$ is \textit{locally the graph of a continuous function} at $x \in X$ if there exists a bounded open box $U$ containing the point $x$ such that $\pi(X) \cap \pi(U)$ is a good submanifold and $X \cap U$ is the graph of a continuous function defined on $\pi(X) \cap \pi(U)$.
A semi-definable subset $X$ of $M^n$ is \textit{locally the graph of continuous functions everywhere} if it is locally the graph of a continuous function at every point in $X$.
\end{definition}
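For illustration, consider an o-minimal expansion of the ordered field of reals and the definable (hence semi-definable) set
$$X=\{(x,y) \in \mathbb R^2\;|\; xy=0\}$$
of dimension one.
At a point $(a,0)$ with $a \neq 0$, a sufficiently small bounded open box $B$ around the point satisfies $X \cap B=\{(x,0)\;|\;(x,0) \in B\}$, which is the graph of the zero function; hence $(a,0) \in \operatorname{Reg}(X)$, and similarly for $(0,b)$ with $b \neq 0$.
At the origin, $X \cap B$ contains segments of both axes for every bounded open box $B$ containing it, so $\widetilde{\sigma}(X \cap B)$ is the graph of no function over the first coordinate for any permutation $\sigma$.
Therefore $\operatorname{Sing}(X)=\{(0,0)\}$.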
\begin{lemma}\label{lem:open_mfd}
Consider an o-minimal structure whose underlying space is $M$ and a good submanifold $X$ of $M^n$.
Let $U$ be an open subset of $M^n$.
Then, $X \cap U$ is also a good submanifold.
\end{lemma}
\begin{proof}
Obvious.
\end{proof}
\begin{lemma}\label{lem:open_mfd2}
Consider an o-minimal structure whose underlying space is $M$.
Let $\pi:M^n \rightarrow M^{n-1}$ be the projection forgetting the last coordinate.
Take a semi-definable open subset $U$ of $M^{n-1}$.
If a semi-definable subset $X$ of $M^n$ is locally the graph of continuous functions everywhere, $X \cap (U \times M)$ is also locally the graph of continuous functions everywhere.
\end{lemma}
\begin{proof}
Obvious.
\end{proof}
\begin{lemma}\label{lem:reg_mfd}
Consider an o-minimal structure whose underlying space is $M$.
Let $X$ be a semi-definable set.
The set $\operatorname{Reg}(X)$ is an open semi-definable subset of $X$.
We also have $\dim(\operatorname{Sing}(X))<\dim (X)$.
\end{lemma}
\begin{proof}
It is obvious that $\operatorname{Reg}(X)$ is an open subset of $X$.
Take an arbitrary bounded open box $U$.
The set $\operatorname{Reg}(X) \cap U$ is obviously definable in the o-minimal structure because $X \cap U$ is definable.
It implies that $\operatorname{Reg}(X)$ is semi-definable.
Set $d=\dim(X)$.
Take an arbitrary bounded open box $B$.
We have only to show that $\dim (\operatorname{Sing}(X) \cap B) < d$ by Lemma \ref{lem:equiv_dim}.
Get a stratification of $\overline{B}$ partitioning $\partial B$, $X \cap B$ and $\operatorname{Sing}(X) \cap B$ by \cite[Chapter 4, Proposition 1.13]{vdD}.
Recall that a stratification of $\overline{B}$ is a partition of $\overline{B}$ into finitely many cells such that the frontier of a cell is a finite union of cells.
Let $C$ be an arbitrary cell contained in $X$ of dimension $d$.
The semi-definable set $X$ is locally a good submanifold at any point $x \in C$.
In fact, we have $C \cap U=X \cap U$ for any sufficiently small open box $U$ containing the point $x$.
Otherwise, there exists a cell $C'$ contained in $X$ such that $C' \cap U \neq \emptyset$ for any small open box $U$ containing the point $x$.
It means that $x \in \overline{C'}$.
We get $C \subseteq \partial C'$.
We have $\dim C'>d$ by \cite[Chapter 4, Theorem 1.8]{vdD}.
This contradicts the inclusion $C' \subseteq X$ because $\dim X=d$.
We have demonstrated that the cell $C$ is disjoint from $\operatorname{Sing}(X) \cap B$.
Since $\operatorname{Sing}(X) \cap B$ is a union of cells contained in $X$ and every such cell is of dimension at most $d$, it implies that $\dim (\operatorname{Sing}(X) \cap B) < d$.
\end{proof}
\begin{lemma}\label{lem:cont_mfd}
Consider an o-minimal structure whose underlying space is $M$.
Let $\pi:M^n \rightarrow M^{n-1}$ be the coordinate projection forgetting the last coordinate.
Let $X$ be a semi-definable subset of $M^n$ such that $\pi(X)$ is semi-definable and a good submanifold, and the fiber $X \cap \pi^{-1}(x)$ is of dimension zero for any $x \in \pi(X)$.
We further assume that $\dim \pi(X)=\dim X$.
Let $S$ be the set of points at which $X$ is locally the graph of a continuous function.
Then, $S$ is semi-definable and we have $\dim(X \setminus S)<\dim(X)$.
\end{lemma}
\begin{proof}
It is obvious that $S$ is semi-definable.
Let $\mathcal R$ be the given o-minimal structure.
Set $d=\dim(X)=\dim \pi(X)$.
Take an arbitrary bounded open box $B$.
We have only to show that $\dim (T \cap B) < d$ by Lemma \ref{lem:equiv_dim}, where $T=X \setminus S$.
The intersections $X \cap B$ and $(\pi(X) \times M) \cap B$ are definable in $\mathcal R$.
We first apply the definable cell decomposition theorem for o-minimal structures \cite[Chapter 3, Theorem 2.11]{vdD}.
There exists a partition $\{C_1, \ldots, C_N\}$ of $B$ into cells partitioning $X \cap B$ and $(\pi(X) \times M) \cap B$.
Any cell $C$ contained in $X$ is the graph of a continuous function defined on $\pi(C)$.
Get a stratification of $\overline{\pi(B)}$ partitioning $\pi(C_1), \ldots, \pi(C_N)$ and $\pi(X) \cap \pi(B)$ by \cite[Chapter 4, Proposition 1.13]{vdD}.
Let $D_1, \ldots, D_L$ be the partition.
The family $\mathcal C=\{C_i \cap (D_j \times M)\;|\;1 \leq i \leq N, 1 \leq j \leq L, D_j \subset \pi(C_i)\}$ is a partition of $B$ into cells.
Take an arbitrary cell $C \in \mathcal C$ of dimension $d$ contained in $X$.
For any $x \in \pi(C)$, $\pi(X)$ is locally a good submanifold of $M^{n-1}$ at $x$ for the same reason as the proof of Lemma \ref{lem:reg_mfd}.
Therefore, $X$ is locally the graph of a continuous function at every point in $C$ because $C$ is a cell.
Hence $T \cap B$ is contained in the union of the cells in $\mathcal C$ contained in $X$ of dimension smaller than $d$, and we have shown that $\dim (T \cap B)<d$.
\end{proof}
\section{Geometry of almost o-minimal structures}\label{sec:almost}
We finally investigate almost o-minimal structures.
\subsection{Definably complete locally o-minimal structures}
We first introduce several lemmas on definably complete structures.
\begin{lemma}\label{lem:local0}
Let $\mathcal M=(M,<,\ldots)$ be a definably complete structure and $X$ be a definable subset of $M$.
Any open interval contained in $X$ is contained in a maximal open interval contained in $X$.
\end{lemma}
\begin{proof}
Let $I$ be an open interval contained in $X$.
Take a point $c$ with $c \in I$.
Set
\begin{align*}
d &= \inf\{x \in M\;|\;(x<c) \wedge (\forall y, x<y<c \rightarrow y \in X)\} \in M \cup \{-\infty\}\text{ and }\\
e&= \sup\{x \in M\;|\;(x>c) \wedge (\forall y, c<y<x \rightarrow y \in X)\} \in M \cup \{\infty\}\text{.}
\end{align*}
They are well-defined because $\mathcal M$ is definably complete.
The open interval $]d,e[$ is obviously the maximal open interval containing the interval $I$ and contained in $X$.
\end{proof}
\begin{lemma}\label{lem:local1}
Consider a definably complete structure $\mathcal M=(M,<,\ldots)$.
The following are equivalent:
\begin{enumerate}
\item[(1)] The structure $\mathcal M$ is a locally o-minimal structure.
\item[(2)] Any definable set in $M$ either has a nonempty interior or it is closed and discrete.
\end{enumerate}
\end{lemma}
\begin{proof}
\cite[Lemma 2.3]{Fuji4}.
\end{proof}
We then consider definably complete locally o-minimal structures.
\begin{lemma}\label{lem:local2}
Let $\mathcal M=(M,<,\ldots)$ be a definably complete locally o-minimal structure and $X$ be a definable subset of $M$.
Any element $x$ in $M$ satisfies exactly one of the following conditions:
\begin{enumerate}
\item[(1)] The point $x$ is an element of an open interval contained in either $X$ or $M \setminus X$;
\item[(2)] The point $x$ is a discrete point of $X$;
\item[(3)] The point $x$ is an endpoint of a maximal open interval contained in $X$.
\end{enumerate}
The sets consisting of the discrete points of $X$ and consisting of the endpoints of maximal open intervals contained in $X$ are definable, discrete and closed.
\end{lemma}
\begin{proof}
Let $x$ be an arbitrary element in $M$.
There exists an open interval $I$ containing the point $x$ such that $X \cap I$ is a finite union of points and open intervals.
Therefore, by Lemma \ref{lem:local0}, exactly one of (1) through (3) is obviously satisfied.
Consider the definable set $$Y =\{x \in M \;|\;\forall a\, \forall b\, (a<x<b \rightarrow \exists y\, \exists z\, (a<y<b \wedge a<z<b \wedge y \in X \wedge z \not\in X))\}\text{.}$$
The formula defining the set $Y$ is obviously the negation of the condition (1).
Consider the sets
\begin{align*}
D &=\{ x \in M\;|\; x \text{ is a discrete point of } X\} \text{ and }\\
E &=Y \setminus D\text{.}
\end{align*}
The set $D$ is the set consisting of the discrete points of $X$.
The set $E$ is the set consisting of the endpoints of maximal open intervals contained in $X$.
They are both definable.
Since they do not contain an open interval, they are discrete and closed by Lemma \ref{lem:local1}.
\end{proof}
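As a toy illustration of Lemma \ref{lem:local2}, take an o-minimal structure on $M=\mathbb R$, which is in particular definably complete and locally o-minimal, and the definable set $X=\{0\} \cup ]1,2[$.
The point $0$ satisfies the condition (2), the points $1$ and $2$ satisfy the condition (3), and every other point of $M$ satisfies the condition (1).
In the notation of the proof, $D=\{0\}$ and $E=\{1,2\}$.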
Lemma \ref{lem:local2} provides tests for a definably complete locally o-minimal structure being o-minimal or almost o-minimal.
\begin{corollary}\label{cor:local1}
Let $\mathcal M=(M,<,\ldots)$ be a definably complete locally o-minimal structure.
The structure $\mathcal M$ is o-minimal if and only if any definable discrete subset of $M$ is a finite set.
\end{corollary}
\begin{proof}
We have only to show that, for any definable subset $X$ of $M$, the set of discrete points and the set consisting of the endpoints of maximal open intervals contained in $X$ are finite.
It is immediate from Lemma \ref{lem:local2}.
\end{proof}
\begin{corollary}\label{cor:local2}
Let $\mathcal M=(M,<,\ldots)$ be a definably complete locally o-minimal structure.
The structure $\mathcal M$ is almost o-minimal if and only if any bounded definable discrete subset of $M$ is a finite set.
\end{corollary}
\begin{proof}
We can prove it in the same manner as Corollary \ref{cor:local1}.
We omit the proof.
\end{proof}
\subsection{Basic properties of almost o-minimal structures}
We begin to study the basic properties of almost o-minimal structures.
\begin{lemma}\label{lem:almost1}
An almost o-minimal structure is definably complete.
\end{lemma}
\begin{proof}
Let $M$ be the universe of the considered structure and $X$ be a nonempty definable subset of $M$.
We demonstrate that $\sup(X)$ is well-defined and $\sup(X) \in M \cup \{\infty\}$.
Take a point $c \in X$ and consider the set $Y=\{x \in X\;|\;x \geq c\}$.
We may assume that $X$ is bounded from below by considering $Y$ instead of $X$.
When $X$ is unbounded, we have $\sup(X) =\infty$.
Otherwise, $X$ is bounded and it is a finite union of points and open intervals by the definition.
It is obvious that $\sup(X)$ is well-defined and $\sup(X) \in M$.
We can prove that $\inf(X)$ is well-defined and $\inf(X) \in M \cup \{-\infty\}$, similarly.
\end{proof}
\begin{corollary}\label{lem:almost2}
An almost o-minimal structure is an o-minimal structure or has an unbounded infinite discrete definable set.
\end{corollary}
\begin{proof}
Immediate from Lemma \ref{lem:almost1}, Corollary \ref{cor:local1} and Corollary \ref{cor:local2}.
\end{proof}
\begin{proposition}\label{prop:almost1}
A locally o-minimal structure is o-minimal if and only if it is almost o-minimal and satisfies the type-completeness property defined in \cite{S}.
\end{proposition}
\begin{proof}
It immediately follows from \cite[Theorem 2.10, Corollary 2.11]{S}, Corollary \ref{cor:local1}, Corollary \ref{cor:local2} and Lemma \ref{lem:almost1}.
\end{proof}
An almost o-minimal structure is a uniformly locally o-minimal structure of the second kind by Corollary \ref{cor:second}.
An almost o-minimal expansion of an ordered field is o-minimal by \cite[Proposition 2.1]{Fuji}.
We therefore cannot expect the multiplication to be definable in a non-o-minimal almost o-minimal structure.
However, when we study bounded definable sets, even the multiplication definable only in the bounded regions is useful.
Therefore, we propose the following definition:
\begin{definition}
An expansion of a dense linear order without endpoints $\mathcal M=(M,<,\ldots)$ \textit{has a bounded definable field structure} if there exist elements $0,1 \in M$ and binary maps $\oplus, \otimes: M \times M \rightarrow M$ such that the tuple $(M,<,0,1,\oplus,\otimes)$ is an ordered real closed field and, for any $a,b \in M$ with $a<b$, the restrictions $\oplus|_{]a,b[ \times ]a,b[}$ and $\otimes|_{]a,b[ \times ]a,b[}$ of the addition $\oplus$ and the multiplication $\otimes$ to $]a,b[ \times ]a,b[$ are definable in $\mathcal M$.
\end{definition}
We also need the following:
\begin{definition}
An ordered abelian group $(G, +, 0, <)$ is \textit{archimedean} if, for any positive $a,b \in G$, we have $na>b$ for some positive integer $n$.
Here, $na$ denotes the summation of $n$ copies of $a$.
\end{definition}
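For example, every ordered subgroup of $(\mathbb R,+,0,<)$ is archimedean.
In contrast, the group $\mathbb R \times \mathbb R$ equipped with the lexicographic order is not archimedean: for $a=(0,1)$ and $b=(1,0)$, we have $na=(0,n)<(1,0)=b$ for every positive integer $n$.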
We give examples of almost o-minimal structures.
\begin{proposition}\label{prop:almost2}
A locally o-minimal structure $\mathcal M =(M,<,\ldots)$ is almost o-minimal if one of the following conditions is satisfied:
\begin{enumerate}
\item[(1)] All closed bounded intervals are compact;
\item[(2)] It is a uniformly locally o-minimal expansion of an ordered group of the second kind having a bounded definable field structure;
\item[(3)] It is a definably complete locally o-minimal expansion of an archimedean ordered group, and the image of a nonempty definable discrete set under a coordinate projection is again discrete.
\end{enumerate}
\end{proposition}
\begin{proof}
(1)
Obvious. We omit the proof.
(2) Let $X$ be a bounded definable set in $M$.
By the assumption, we have the addition and the multiplication $\oplus, \otimes: M \times M \rightarrow M$ of a bounded definable field structure, whose restrictions to products of bounded open intervals are definable in $\mathcal M$.
We may assume that $X$ is contained in a bounded closed interval $[-N,N]$ for some positive $N \in M$.
Consider the set $Y=\{(t,x) \in [0,1] \times M\;|\; \exists y \in X,\ x=t \otimes y\}$.
The set $Y$ is definable because we assume the bounded definable multiplication.
Since $\mathcal M$ is uniformly locally o-minimal of the second kind, there exists an open interval $I=]-L,L[$ containing the origin and a small positive $\varepsilon>0$ such that, for any $0<t<\varepsilon$, the intersection $I \cap Y_t$ of $I$ with the fiber $Y_t = \{t \otimes y \in M\;|\; y \in X\}$ of $Y$ at $t$ is a finite union of points and open intervals.
Take $t>0$ smaller than $\varepsilon$ and satisfying $N \otimes t <L$.
We have $Y_t \subseteq I$.
Since $Y_t=Y_t \cap I$ is a finite union of points and open intervals, $X$ is also a finite union of points and open intervals.
(3) Let $D$ be a nonempty bounded definable discrete set.
We have only to show that it is a finite set by Corollary \ref{cor:local2}.
It is also closed by \cite[Lemma 2.4]{Fuji4}.
We may assume that $D$ has at least two points without loss of generality.
Set $m=\sup(D)$.
We get $m<\infty$ because $D$ is bounded.
We have $m \in D$ because $D$ is closed.
Set $E=D \setminus \{m\}$.
Consider the successor function $\operatorname{succ}:E \rightarrow M$ given by
$$\operatorname{succ}(x)=\inf\{y \in D\;|\; y>x\}\text{.}$$
Since $D$ is closed, we have $\operatorname{succ}(x) \in D$.
Consider the definable function $\rho:E \rightarrow M$ defined by $\rho(x)=\operatorname{succ}(x)-x$.
We have $\rho(x)>0$.
The graph of the map $\rho$ is definable and discrete.
The image $\rho(E)$ is the projection image of the graph and it is also discrete by the assumption.
Since $\rho(E)$ is discrete, it is closed by \cite[Lemma 2.4]{Fuji4} again.
Set $d=\inf \rho(E)$.
We have $d \in \rho(E)$ because $\rho(E)$ is closed.
In particular, we have $d>0$.
Set $l=\sup(D)-\inf(D)$.
Since the structure is archimedean, there exists a positive integer $n$ with $nd>l$.
By the definition of $\rho$, the cardinality of the set $D$ is not greater than $n$.
\end{proof}
\begin{example}
We can easily construct a locally o-minimal structure having bounded definable field structure which is not an expansion of an ordered field.
Consider an arbitrary o-minimal expansion of the real field $\widetilde{\mathbb R}$.
The structure $[0,1)_{\text{def}}$ is the structure whose universe is $[0,1)$ defined in \cite[Definition 2]{KTTT}.
The simple product of $\mathbb Z$ and $[0,1)_{\text{def}}$ has bounded definable field structure but it is not an expansion of an ordered field.
The definition of a simple product is found in \cite[Definition 14]{KTTT}.
\end{example}
\begin{remark}
The latter condition in Proposition \ref{prop:almost2}(3) is \cite[Definition 1.1(a)]{Fuji4}.
This condition is satisfied in a model of DCTC \cite[Corollary 4.3]{S} and in a definably complete uniformly locally o-minimal expansion of an ordered group of the second kind \cite[Proposition 2.12]{Fuji4}.
\end{remark}
Almost o-minimality is not preserved under elementary equivalence.
\begin{proposition}\label{prop:not_almost}
Let $\mathcal M=(M,<,\ldots)$ be an almost o-minimal structure which is not o-minimal.
An $\omega$-saturated elementary extension of $\mathcal M$ is not almost o-minimal.
\end{proposition}
\begin{proof}
There exists a definable discrete infinite subset $D$ of $M$ by Corollary \ref{lem:almost2}.
It is closed by Lemma \ref{lem:local1}.
Take an arbitrary element $c \in M$.
We assume that $D_{>c}=\{x \in D\;|\; x>c\}$ is infinite.
We can prove the proposition in the same manner when $D_{<c}=\{x \in D\;|\; x<c\}$ is infinite.
We consider, for any non-negative integer $n$, the formula $\Phi_n(x)$ expressing that the open interval $]c,x[$ contains at least $n$ elements of $D$.
The family $\{\Phi_n(x)\}$ is finitely satisfiable in $\mathcal M$ because $D_{>c}$ is infinite.
Let $\mathcal N=(N,<,\ldots)$ be an $\omega$-saturated elementary extension of $\mathcal M$.
We can find $d \in N$ such that $\mathcal N \models \Phi_n(d)$ for all $n$.
In particular, the definable set $]c,d[ \cap D^{\mathcal N}$ is not a finite union of points and open intervals.
Here, the notation $D^{\mathcal N}$ denotes the subset of $N$ defined by the same formula as $D$.
It implies that $\mathcal N$ is not almost o-minimal.
\end{proof}
The definition of a structure having a bounded definable field structure may seem technical.
However, as indicated in the following proposition, a sufficiently complex almost o-minimal structure has a bounded definable field structure.
\begin{proposition}\label{prop:field_str}
Consider an almost o-minimal expansion of an ordered group.
Assume further that there exists a strictly monotone homeomorphism from an unbounded open interval to a bounded open interval such that the graph of its restriction to any bounded open subinterval of the unbounded open interval is definable.
Then, any definable function is piecewise linear or the structure has bounded definable field structure.
\end{proposition}
\begin{proof}
An almost o-minimal expansion $\mathcal M$ of an ordered group is obviously an $\mathfrak X$-structure.
It is an $\mathfrak X$-expansion of an ordered divisible abelian group by Lemma \ref{lem:almost1} and \cite[Proposition 2.2]{M}.
Take an o-minimal expansion of an ordered group $\mathcal R$ given in Theorem \ref{thm:in_omin}.
Any set definable in $\mathcal R$ is definable in $\mathcal M$ and any bounded set definable in $\mathcal M$ is definable in $\mathcal R$.
Consider the $\mathfrak X$-structure $\mathfrak X(\mathcal R)$ of semi-definable sets in $\mathcal R$.
There exists a strictly monotone homeomorphism between a bounded interval and an unbounded interval which is $\mathfrak X$-definable in $\mathfrak X(\mathcal R)$ by the assumption of the proposition.
By Theorem \ref{thm:xstr}, either any function $\mathfrak X$-definable in $\mathfrak X(\mathcal R)$ is piecewise linear or there exist binary maps $\oplus, \otimes:M^2 \rightarrow M$ $\mathfrak X$-definable in $\mathfrak X(\mathcal R)$ such that $(M,<,0,1,\oplus,\otimes)$ is an ordered real closed field.
In the former case, any function definable in $\mathcal M$ is piecewise linear because it is also $\mathfrak X$-definable in $\mathfrak X(\mathcal R)$.
Consider the latter case.
The restriction of the addition $\oplus$ to the bounded open box $]a,b[ \times ]a,b[$ is definable in $\mathcal R$ and, therefore, definable in $\mathcal M$.
It is the same for the multiplication $\otimes$.
We have demonstrated that the structure $\mathcal M$ has bounded definable field structure.
\end{proof}
\subsection{Uniform local definable cell decomposition}
\subsubsection{Preliminary}
We begin to study uniform local definable cell decomposition for almost o-minimal structures.
The author developed the theory of uniformly locally o-minimal structures of the second kind and their dimension theory in \cite{Fuji,Fuji3,Fuji4}.
An almost o-minimal structure is a definably complete uniformly locally o-minimal structure of the second kind by Corollary \ref{cor:second} and Lemma \ref{lem:almost1}.
We use the following facts:
\begin{proposition}\label{prop:olddim}
Consider a DCULOAS structure $\mathcal M=(M, <, 0, +, \ldots)$.
The following assertions hold true:
\begin{enumerate}
\item[(1)] Let $f:X \rightarrow M^n$ be a definable map.
We have $\dim(f(X)) \leq \dim X$.
\item[(2)] Let $f:X \rightarrow M^n$ be a definable map.
The notation $\mathcal D(f)$ denotes the set of points at which the map $f$ is discontinuous.
The inequality $\dim(\mathcal D(f)) < \dim X$ holds true.
\item[(3)] (Addition Property)
Let $\varphi:X \rightarrow Y$ be a definable surjective map whose fibers are equi-dimensional; that is, the dimensions of the fibers $\varphi^{-1}(y)$ are constant.
We have $\dim X = \dim Y + \dim \varphi^{-1}(y)$ for all $y \in Y$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) \cite[Theorem 1.1]{Fuji3};
(2) \cite[Corollary 1.2]{Fuji3};
(3) \cite[Theorem 3.14]{Fuji4}.
\end{proof}
We consider an almost o-minimal expansion of an ordered group $\mathcal M=(M, <$, $0, +, \ldots)$ in Section \ref{sec:multi} and Section \ref{sec:udcd}.
An $\mathcal M$-definable set is simply called definable in these sections.
We introduce several notations.
The almost o-minimal structure $\mathcal M$ is simultaneously an $\mathfrak X$-expansion of an ordered divisible abelian group.
Applying Theorem \ref{thm:in_omin} to it, there exists an o-minimal expansion of an ordered group such that any set definable in this structure is definable in $\mathcal M$ and any bounded set definable in $\mathcal M$ is definable in the structure.
The notation $\mathcal R_{\text{ind}}(\mathcal M)$ denotes this o-minimal structure.
Sets semi-definable in $\mathcal R_{\text{ind}}(\mathcal M)$ are simply called semi-definable in these sections.
Any definable set is semi-definable by the definition of the o-minimal structure $\mathcal R_{\text{ind}}(\mathcal M)$.
\subsubsection{Partition into multi-cells}\label{sec:multi}
\begin{definition}
Consider an expansion of a dense linear order $\mathcal M=(M,<,\cdots)$.
A subset $X$ of $M^{n+1}$ is called \textit{bounded in the last coordinate} if there exists a bounded open interval $I$ such that $X \subseteq M^n \times I$.
\end{definition}
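For instance, in a structure in which $\mathbb Z$ is definable, such as $(\mathbb R,<,0,+,\mathbb Z)$, the set $\mathbb Z \times ]0,1[$ is bounded in the last coordinate because it is contained in $\mathbb R \times ]0,1[$, whereas the set $]0,1[ \times \mathbb Z$ is not; this small example is ours and is only meant to fix the notion.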
\begin{lemma}\label{lem:ld2}
Consider an almost o-minimal structure $\mathcal M=(M,<,\ldots)$.
Let $X$ be a semi-definable subset of $M^{n+1}$ which is bounded in the last coordinate.
The image $\pi(X)$ is also semi-definable, where $\pi:M^{n+1} \rightarrow M^n$ is the projection forgetting the last coordinate.
In addition, if $X$ is semi-definably connected, then the image $\pi(X)$ is also semi-definably connected.
\end{lemma}
\begin{proof}
Obvious.
\end{proof}
We next define multi-cells.
\begin{definition}
Consider an almost o-minimal expansion of an ordered group $\mathcal M=(M,<,0,+,\ldots)$.
We define a \textit{multi-cell} $X$ in $M^n$ inductively.
\begin{itemize}
\item If $n=1$, either $X$ is a discrete definable set or all semi-definably connected components of the definable set $X$ are open intervals.
\item When $n>1$, let $\pi:M^n \rightarrow M^{n-1}$ be the projection forgetting the last coordinate.
The projection image $\pi(X)$ is a multi-cell and, for any semi-definably connected component $Y$ of $X$, $\pi(Y)$ is a semi-definably connected component of $\pi(X)$ and $Y$ is one of the following forms:
\begin{align*}
Y&=\pi(Y) \times M \text{,}\\
Y&=\{(x,y) \in \pi(Y) \times M \;|\; y=f(x)\} \text{,}\\
Y &= \{(x,y) \in \pi(Y) \times M \;|\; y>f(x)\} \text{,}\\
Y &= \{(x,y) \in \pi(Y) \times M \;|\; y<g(x)\} \text{ and }\\
Y &= \{(x,y) \in \pi(Y) \times M \;|\; f(x)<y<g(x)\}
\end{align*}
for some continuous functions $f$ and $g$ defined on $\pi(Y)$ with $f<g$.
\end{itemize}
\end{definition}
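We give a simple example of our own to fix ideas, working in the structure $(\mathbb R,<,0,+,\mathbb Z)$, a standard example of an almost o-minimal structure that is not o-minimal. The definable set
\begin{equation*}
X=(\mathbb R \setminus \mathbb Z) \times ]0,1[ \;=\; \bigcup_{k \in \mathbb Z} \;]k,k+1[\; \times\; ]0,1[
\end{equation*}
is a multi-cell in $\mathbb R^2$: the projection $\pi(X)=\mathbb R \setminus \mathbb Z$ is a multi-cell whose semi-definably connected components are the open intervals $]k,k+1[$, and each semi-definably connected component $]k,k+1[ \times ]0,1[$ of $X$ projects onto one of them and is of the form $\{(x,y) \in \pi(Y) \times M\;|\; f(x)<y<g(x)\}$ with the constant functions $f=0$ and $g=1$.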
We will show that a definable set is partitioned into finitely many multi-cells (Theorem \ref{thm:multi-cell}).
Since its proof is long, we divide it into several lemmas.
\begin{lemma}\label{lem:multi-cell-pre}
Consider an almost o-minimal expansion of an ordered group $\mathcal M=(M,<,0,+,\ldots)$ which is not o-minimal.
Let $X$ be a definable subset of $M^n$ with $n>1$ and $\pi:M^n \rightarrow M^{n-1}$ be the projection forgetting the last coordinate.
Assume that, for any $x \in M^{n-1}$, the fiber $X \cap \pi^{-1}(x)$ is at most of dimension zero.
Then, there exists a definable closed subset $Z$ of $M^{n-1}$ satisfying the following conditions:
\begin{enumerate}
\item[(a)] $\dim(Z) < \dim(\pi(X))$;
\item[(b)] The definable set $X \setminus \pi^{-1}(Z)$ is closed in $M^n \setminus \pi^{-1}(Z)$;
\item[(c)] The definable set $X \setminus \pi^{-1}(Z)$ is locally the graph of continuous functions everywhere;
\item[(d)] Any semi-definably connected component $C$ of $X \setminus \pi^{-1}(Z)$ is bounded in the last coordinate.
\end{enumerate}
\end{lemma}
\begin{proof}
Set $d=\dim(X)$.
We have $\dim(\pi(X))=d$ by Proposition \ref{prop:olddim}(3).
Let $Z_1=\pi(\partial X)$.
We have $\dim Z_1 \leq \dim \partial X < \dim X=d$ by Corollary \ref{cor:dim} and Proposition \ref{prop:olddim}(1).
Set $X_1 =X \setminus \pi^{-1}(Z_1)$.
We have $\dim X_1=d$ by Proposition \ref{prop:dim}(b).
The definable set $X_1$ is closed in $M^n \setminus \pi^{-1}(Z_1)$.
We now set $Z_2 = {\operatorname{Sing}(\pi(X_1))}$ and $X_2 = X_1 \setminus \pi^{-1}(Z_2)$.
They are obviously definable in $\mathcal M$.
The definable set $\pi(X) \setminus (Z_1 \cup Z_2)=\pi(X_1) \setminus Z_2 = \pi(X_2)$ is a good submanifold by Lemma \ref{lem:open_mfd} and Lemma \ref{lem:reg_mfd}.
We get $\dim Z_2<d$ by Lemma \ref{lem:reg_mfd} and Corollary \ref{cor:dim}.
We also have $\dim X_2=d$ for the same reason as above.
We want to apply Lemma \ref{lem:cont_mfd} to $X_2$.
We have $\dim X_2=\dim \pi(X_2)$ by Proposition \ref{prop:olddim}(3).
The assumption in Lemma \ref{lem:cont_mfd} is satisfied.
Let $S$ be the set of points at which $X_2$ is locally the graph of a continuous function.
It is obviously definable in $\mathcal M$.
We have $\dim(X_2 \setminus S)<d$ by Lemma \ref{lem:cont_mfd}.
Set $Z_3=\overline{\pi(X_2 \setminus S)}$.
We have $\dim(Z_3)<d$ by Corollary \ref{cor:dim} and Proposition \ref{prop:olddim}(1).
The definable set $X_3=X_2 \setminus \pi^{-1}(Z_3)$ is locally the graph of a continuous function at any point in $X_3$.
We have $\dim X_3=d$.
There exists an unbounded discrete definable subset $D$ of $M$ by Corollary \ref{lem:almost2}.
We may assume that $\inf(D)=-\infty$ and $\sup(D)=\infty$ by considering $D \cup (-D)$ in place of $D$ because the group operation is definable.
Let $V_r$ be the boundary of $X_3 \cap (\pi(X_3) \times \{r\})$ in $\pi(X_3) \times \{r\}$ for any $r \in D$.
Set $W=\bigcup_{r \in D} V_r$ and $Z_4=\overline{\pi(W)}$.
The set $W$ is definable.
In fact, $W$ is given by $\{(x,r) \in M^{n-1} \times M\;|\; r \in D \text{ and the point } (x,r) \text{ is contained in the boundary of } X_3 \cap (\pi(X_3) \times \{r\}) \text{ in }\pi(X_3) \times \{r\}\}$, and it is definable.
We demonstrate that $\dim W<d$.
Take an arbitrary bounded open box $B$ in $M^n$.
We have only to demonstrate that $\dim(B \cap W)<d$ by Lemma \ref{lem:equiv_dim}.
Note that $B \cap W$ is definable in $\mathcal R_{\text{ind}}(\mathcal M)$.
Take an arbitrary cell $C$ contained in $B \cap W$.
We have to show that $\dim C<d$ by \cite[Chapter 3, (1.1)]{vdD}.
There exists $r \in D$ with $C \subseteq V_r$.
We have $\dim (C)< \dim (X_3 \cap (\pi(X_3) \times \{r\})) \leq \dim(X_3)$ by Corollary \ref{cor:dim}.
We get the inequality $\dim W<d$, and consequently, we obtain $\dim Z_4<d$ by Corollary \ref{cor:dim} and Proposition \ref{prop:olddim}(1).
Set $Z=Z_1 \cup Z_2 \cup Z_3 \cup Z_4$.
We are now ready to demonstrate that the conditions (a) through (d) are satisfied.
The condition (a) is now immediate by Proposition \ref{prop:dim}(b).
The condition (b) is satisfied because $\partial X$ is contained in $\pi^{-1}(Z)$.
The condition (c) follows from the definition of $Z_3$, Lemma \ref{lem:open_mfd} and Lemma \ref{lem:open_mfd2}.
The remaining task is to show that the condition (d) is satisfied.
Set $X_{\text{flat}}=X \cap \left((\pi(X) \setminus Z) \times D\right)$ and $X_{\text{flat},r}=X \cap ((\pi(X) \setminus Z) \times \{r\})$ for all $r \in D$ for simplicity of notation.
By the condition (c) and the definition of $Z_4$, we obtain the following assertion.
\medskip
$(*)$: For any $x \in X_{\text{flat}}$, there exists an open box $U$ containing the point $x$ such that $X \cap U= X_{\text{flat},r} \cap U$ for some $r \in D$, $\pi(X) \cap \pi(U)$ is a good submanifold of $M^{n-1}$ and $X \cap U$ is the graph of a constant function defined on $\pi(X) \cap \pi(U)$.
\medskip
In fact, by the definition of $Z_4$, we can take an open box $U$ containing the point $x$ such that $X_{\text{flat}} \cap U= X_{\text{flat},r} \cap U$ for some $r \in D$ and $U \cap \pi^{-1}(Z_4) = \emptyset$.
Shrinking $U$ if necessary, we may assume that $\pi(X) \cap \pi(U)$ is a good submanifold of $M^{n-1}$ and $X \cap U$ is the graph of a continuous function on $\pi(X) \cap \pi(U)$ by the condition (c).
If $X \cap U$ is not the graph of a constant function, it intersects $V_r$, which means that $U \cap \pi^{-1}(Z_4) \neq \emptyset$.
Contradiction.
\medskip
Fix an arbitrary semi-definably connected component $C$ of $X \setminus \pi^{-1}(Z)$.
We consider two cases, separately.
We first consider the case in which $X_{\text{flat}} \cap C \neq \emptyset$.
We demonstrate that $C \subseteq X_{\text{flat},r}$ for some $r \in D$.
Take a point $x\in X_{\text{flat}} \cap C $.
The point $x$ is contained in $X_{\text{flat},r}$ for some $r \in D$.
We derive a contradiction by assuming that $y \not\in X_{\text{flat},r}$ for some $y \in C$.
There exists an $\mathcal R_{\text{ind}}(\mathcal M)$-definable continuous curve $\gamma:[c_1,c_2] \rightarrow C$ such that $\gamma(c_1)=x$ and $\gamma(c_2)=y$ by Theorem \ref{thm:connected}.
The set $\gamma^{-1}(X_{\text{flat},r})=\gamma^{-1}(M^{n-1} \times \{r\})$ is a finite union of points and open intervals because it is definable in the o-minimal structure $\mathcal R_{\text{ind}}(\mathcal M)$.
It is closed and does not coincide with the closed interval $[c_1,c_2]$ by the assumption.
Therefore, there exists a point $d \in [c_1,c_2]$ such that $\gamma(d) \in X_{\text{flat},r}$ and, for any small $\varepsilon>0$, we can take points $s_1,s_2 \in [c_1,c_2]$ satisfying that $|s_i-d|<\varepsilon$ for $i=1,2$, $\gamma(s_1) \in X_{\text{flat},r}$ and $\gamma(s_2) \not\in X_{\text{flat},r}$.
It implies that $X \cap U \neq X_{\text{flat},r} \cap U$ for any open box $U$ containing the point $\gamma(d)$.
It contradicts the assertion (*).
We have demonstrated that $C$ is contained in $X_{\text{flat},r}$.
Now, the condition (d) is clearly satisfied.
We next consider the case in which $X_{\text{flat}} \cap C = \emptyset$.
We demonstrate that $C \subseteq M^{n-1} \times ]r_1,r_2[$ for some $r_1,r_2 \in M$.
Let $\pi_2:M^n \rightarrow M$ be the projection onto the last coordinate.
We demonstrate that there exists $r_1 \in M$ such that $\pi_2(C) > r_1$.
Assume the contrary.
Take a point $x_1 \in C$, then there exists an $R \in D$ with $\pi_2(x_1) > R$ because $\inf(D)=-\infty$.
We can get $x_2 \in C$ with $\pi_2(x_2)<R$ by the assumption.
Then $C = \{x \in C\;|\; \pi_2(x)>R\} \cup \{x \in C\;|\; \pi_2(x)<R\}$ is a partition into two non-empty open and closed subsets.
It contradicts the assumption that $C$ is semi-definably connected.
We have demonstrated the existence of $r_1$.
We can take $r_2 \in M$ with $\pi_2(C)<r_2$ in the same manner.
This proves the assertion (d).
\end{proof}
The following lemma is the major induction step of the proof of Theorem \ref{thm:multi-cell}.
\begin{lemma}\label{lem:multi-cell-pre2}
Consider an almost o-minimal expansion of an ordered group $\mathcal M=(M,<,0,+,\ldots)$ which is not o-minimal.
Let $X$ be a definable subset of $M^n$ and $\pi:M^n \rightarrow M^{n-1}$ be the projection forgetting the last coordinate.
Assume that, for any $x \in M^{n-1}$, the fiber $X \cap \pi^{-1}(x)$ is at most of dimension zero.
Assume further that any definable subset of $M^{n-1}$ is partitioned into finitely many multi-cells.
Then, the definable set $X$ is also partitioned into finitely many multi-cells.
Furthermore, the projection images of two distinct multi-cells are disjoint.
\end{lemma}
\begin{proof}
We prove the lemma by induction on $\dim(X)$.
When $\dim(X)=0$, $X$ is a discrete closed definable set and its projection images are also discrete and closed by Lemma \ref{lem:dim0} and Proposition \ref{prop:olddim}(1).
Therefore, $X$ itself is a multi-cell.
Next we consider the case in which $\dim(X)>0$.
We can find a definable closed subset $Z$ of $M^{n-1}$ satisfying the following conditions by Lemma \ref{lem:multi-cell-pre}.
\begin{enumerate}
\item[(a)] $\dim(Z) < \dim(\pi(X))$;
\item[(b)] The definable set $X \setminus \pi^{-1}(Z)$ is closed in $M^n \setminus \pi^{-1}(Z)$;
\item[(c)] The definable set $X \setminus \pi^{-1}(Z)$ is locally the graph of continuous functions everywhere;
\item[(d)] Any semi-definably connected component $C$ of $X \setminus \pi^{-1}(Z)$ is bounded in the last coordinate.
\end{enumerate}
The lemma holds true for $X \cap \pi^{-1}(Z)$ by the induction hypothesis because $\dim(X \cap \pi^{-1}(Z)) \leq \dim(Z) < \dim(\pi(X))=\dim(X)$ by Proposition \ref{prop:olddim}(3).
Replacing $X$ with $X \setminus \pi^{-1}(Z)$, we may further assume the following:
\begin{itemize}
\item $X$ is closed in $\pi^{-1}(\pi(X))$;
\item $X$ is locally the graph of continuous functions everywhere;
\item any semi-definably connected component of $X$ is bounded in the last coordinate.
\end{itemize}
We can partition $\pi(X)$ into finitely many multi-cells by the assumption.
Hence, we may assume that $\pi(X)$ is a multi-cell.
We demonstrate that $X$ is a multi-cell in this case.
Let $C$ be a semi-definably connected component of $X$.
We have only to show the following assertions:
\begin{itemize}
\item $\pi(C)$ is a semi-definably connected component of $\pi(X)$.
\item $C$ is the graph of a continuous function defined on $\pi(C)$.
\end{itemize}
We first demonstrate that $\pi(C)$ is a semi-definably connected component of $\pi(X)$.
The image $\pi(C)$ is semi-definable and semi-definably connected by Lemma \ref{lem:ld2} because $C$ is bounded in the last coordinate.
Therefore, we have only to show that $\pi(C)$ is open and closed in $\pi(X)$.
We first show that $\pi(C)$ is open.
Take an arbitrary point $x \in C$.
There exists an open box $U$ containing the point $x$ such that $X \cap U$ is the graph of a continuous function on $\pi(X) \cap \pi(U)$.
Shrinking $U$ if necessary, we may assume that $U$ is bounded and $X \cap U$ is definably connected as a set definable in $\mathcal R_{\text{ind}}(\mathcal M)$.
On the other hand, $C \cap U$ is a union of definably connected components of $X \cap U$ because $C$ is semi-definably connected.
It implies that $C \cap U=X \cap U$.
The intersection $C \cap U$ is the graph of a continuous function on $\pi(X) \cap \pi(U)$.
In particular, we have $\pi(X) \cap \pi(U)=\pi(C\cap U) \subseteq \pi(C) \cap \pi(U) \subseteq \pi(X) \cap \pi(U)$.
We get $\pi(C) \cap \pi(U) = \pi(X) \cap \pi(U)$.
We have demonstrated that $\pi(C)$ is open in $\pi(X)$.
We next demonstrate that $\pi(C)$ is closed in $\pi(X)$.
Assume for contradiction that we can take a point $x$ in the frontier of $\pi(C)$ in $\pi(X)$.
There exists a continuous curve $\gamma:]0,\varepsilon[ \rightarrow \pi(C)$ definable in $\mathcal R_{\text{ind}}(\mathcal M)$ with $\displaystyle\lim_{t \to 0}\gamma(t) = x$ by Corollary \ref{cor:curve_selection}.
Define $f_u:]0,\varepsilon[ \rightarrow M$ by $f_u(t)=\sup\{y \in M\;|\; (\gamma(t),y) \in C\}$.
The definable set $\{(t,y) \in ]0,\varepsilon[ \times M\;|\; (\gamma(t),y) \in C\}$ is definable in $\mathcal R_{\text{ind}}(\mathcal M)$ because $C$ is bounded in the last coordinate.
Therefore, the function $f_u$ is definable in $\mathcal R_{\text{ind}}(\mathcal M)$.
We may assume that $f_u$ is continuous and monotone by the monotonicity theorem for o-minimal structures \cite[Chapter 3, Theorem 1.2]{vdD} by taking a sufficiently small $\varepsilon>0$ if necessary.
The limit $y=\displaystyle\lim_{t \to 0} f_u(t)$ exists because the function $f_u$ definable in $\mathcal R_{\text{ind}}(\mathcal M)$ is bounded and monotone.
We have $(x,y) \in X$ because $X$ is closed in $\pi^{-1}(\pi(X))$.
We get $(x,y) \in C$ because $C$ is closed in $X$ by Theorem \ref{thm:connected}.
It means $x \in \pi(C)$.
Contradiction to the assumption that $x$ is a point in the frontier of $\pi(C)$ in $\pi(X)$.
We next demonstrate that $C$ is the graph of a continuous function defined on $\pi(C)$.
We have only to show that the restriction of $\pi$ to $C$ is injective by the condition (b).
Set
\begin{equation*}
T=\{x \in \pi(C)\;|\; |\pi^{-1}(x) \cap C| > 1\}\text{.}
\end{equation*}
We have only to demonstrate that $T$ is an empty set.
We first show that $T$ is semi-definable.
Consider the set $S=\{(x,y_1,y_2) \in M^{n-1} \times M \times M\;|\; (x,y_1) \in C \text{, } (x,y_2) \in C \text{ and } y_1 < y_2\}$.
The semi-definable set $S$ is bounded in the last coordinate, and the image $S'$ of $S$ under the projection forgetting the last coordinate is semi-definable by Lemma \ref{lem:ld2}.
It is obvious that $S'$ is also bounded in the last coordinate and $T=\pi(S')$.
The set $T$ is semi-definable using Lemma \ref{lem:ld2} again.
The set $T$ is open in $\pi(C)$.
In fact, take an arbitrary point $x \in T$.
There exist $y_1<y_2 \in M$ with $(x,y_1),(x,y_2) \in C$.
By the condition (b), there exists an open box $B$ with $x \in B \cap \pi(C)$ such that $X \cap \pi^{-1}(B)$ contains the graphs of two continuous functions whose values at $x$ are $y_1$ and $y_2$, respectively.
Therefore, $B \cap \pi(C)$ is contained in $T$, and $T$ is open in $\pi(C)$.
We next show that $T$ is closed in $\pi(C)$.
Assume the contrary.
Take a point $x \in \pi(C) \cap \partial T$.
We can take the unique $y \in M$ with $(x,y) \in C$ because $x \not\in T$.
There exists a continuous curve $\gamma:]0,\varepsilon[ \rightarrow \pi(C) \cap T$ definable in $\mathcal R_{\text{ind}}(\mathcal M)$ such that $\displaystyle\lim_{t \to 0}\gamma(t)=x$ by Corollary \ref{cor:curve_selection}.
We define the maps $\eta_u,\eta_l:]0,\varepsilon[ \rightarrow M$ by
\begin{align*}
&\eta_u(t)=\sup\{u \in M\;|\; (\gamma(t),u) \in C\} \text{ and }\\
&\eta_l(t)=\inf\{u \in M\;|\; (\gamma(t),u) \in C\} \text{.}
\end{align*}
They are well-defined because $C$ is bounded in the last coordinate.
Take a sufficiently small $\varepsilon>0$.
The two functions $\eta_u$ and $\eta_l$ are definable in $\mathcal R_{\text{ind}}(\mathcal M)$ and continuous and they have the limits $y_u = \displaystyle\lim_{t \to 0} \eta_u(t) \in M$ and $y_l = \displaystyle\lim_{t \to 0} \eta_l(t) \in M$ for the same reason as above.
We have $\eta_u(t) \not=\eta_l(t)$ because $\gamma(t) \in T$.
We have $(x,y_u) \in C$ and $(x, y_l) \in C$ because $C$ is closed in $\pi^{-1}(\pi(C))$.
We therefore get $y=y_l=y_u$ because $x \not\in T$.
The condition (b) fails at $(x,y)$ because $\eta_u(t) \not=\eta_l(t)$.
Contradiction.
We have shown that $T$ is closed in $\pi(C)$.
Since $\pi(C)$ is semi-definably connected and $T$ is open and closed in $\pi(C)$, we have either $T=\pi(C)$ or $T=\emptyset$.
We have only to lead to a contradiction assuming that $T=\pi(C)$.
Define the function $f_u:\pi(C) \rightarrow M$ by $f_u(x)=\sup\{t\;|\; (x,t) \in C\}$.
We can easily show that its graph is a semi-definable set because $C$ is bounded in the last coordinate.
It is a continuous function.
In fact, let $\mathcal D$ be the set of all the points at which $f_u$ is discontinuous.
Take a point $x \in \mathcal D$.
Let $V$ be the intersection of $\pi^{-1}(x)$ with the closure of the graph of $f_u|_{\pi(C) \setminus \{x\}}$, where $f_u|_{\pi(C) \setminus \{x\}}$ denotes the restriction of $f_u$ to $\pi(C) \setminus \{x\}$.
The closure of the graph of $f_u|_{\pi(C) \setminus \{x\}}$ is semi-definable by Lemma \ref{lem:frontier}.
The set $V$ is semi-definable and bounded.
Consequently, $V$ is definable in $\mathcal R_{\text{ind}}(\mathcal M)$ by the definition of semi-definable sets.
There exists a point $(x,y) \in V$ with $y \not= f_u(x)$ by the assumption.
Note that $(x,y) \in C$ because $C$ is closed in $\pi^{-1}(\pi(C))$.
Since $X$ is locally the graph of continuous functions everywhere, the set $C$ is, locally at $(x,y)$ and at $(x,f_u(x))$, the graph of continuous functions $g$ and $h$, respectively, both defined on a neighborhood of $x$ in $\pi(X)$.
Take a sufficiently small $\varepsilon >0$.
Since $g$ and $h$ are continuous and $g(x)<h(x)$, we have $g(x')+\varepsilon<h(x')$ if $x'$ is sufficiently close to $x$.
We also get $h(x') \leq f_u(x')$ by the definition of the function $f_u$.
We then have $g(x')+\varepsilon < f_u(x')$ for any $x'$ sufficiently close to $x$ and
we obtain $(x,y)=(x,g(x)) \not\in V$.
Contradiction.
We have demonstrated that the function $f_u$ is continuous.
Consider the graph $\{(x,y) \in C\;|\; y = f_u(x)\}$.
It is easy to prove that the graph is an open and closed proper subset of $C$ using the fact that $X$ is locally the graph of continuous functions everywhere.
Contradiction to the assumption that $C$ is semi-definably connected.
\end{proof}
The following theorem is the main theorem in this subsection.
\begin{theorem}\label{thm:multi-cell}
Consider an almost o-minimal expansion of an ordered group.
A definable set is partitioned into finitely many multi-cells.
\end{theorem}
\begin{proof}
Consider the case in which the structure in consideration is o-minimal.
A definable set is partitioned into finitely many cells by \cite[Chapter 3, Theorem 2.11]{vdD}.
It is also a partition into finitely many multi-cells because a cell is simultaneously a multi-cell.
We next consider the case in which the structure is not o-minimal.
Let $\mathcal M=(M,<,0,+,\ldots)$ be an almost o-minimal expansion of an ordered group.
Let $X$ be a definable subset of $M^n$.
We demonstrate that the set $X$ is partitioned into finitely many multi-cells.
We prove it by induction on $n$.
Consider the case in which $n=1$.
The theorem is clear when $X=\emptyset$ or $X=M$.
We consider the other cases.
Let $X_1$ be the union of all the maximal open intervals contained in $X$, which is definable.
In fact, the set $X_1$ is described as follows:
\begin{align*}
X_1 &=\{x \in X\;|\; \exists \varepsilon >0,\ \forall y \in M,\ |x-y| < \varepsilon \rightarrow y \in X\} \text{.}
\end{align*}
The set $X_2 = X \setminus X_1$ is the set of the isolated points and the endpoints of the maximal open intervals in $X$ and it is a discrete closed definable set by Lemma \ref{lem:almost1} and Lemma \ref{lem:local2}.
The decomposition $X=X_1 \cup X_2$ is a partition into multi-cells.
We next consider the case in which $n>1$.
Let $\pi:M^n \rightarrow M^{n-1}$ be the projection forgetting the last coordinate.
Consider the sets
\begin{align*}
&X_{\text{oi}}=\{(x,y) \in M^{n-1} \times M \;|\; \exists \varepsilon >0,\ \forall y' \in M,\ |y'-y|<\varepsilon \rightarrow (x,y') \in X\} \text{,}\\
&X_{\forall}=\{(x,y) \in M^{n-1} \times M \;|\; \forall y' \in M,\ (x,y') \in X\} \text{,}\\
&X'_{\infty}=\{(x,y) \in M^{n-1} \times M \;|\; \forall y' \in M,\ y'>y \rightarrow (x,y') \in X\} \text{ and }\\
&X'_{-\infty}=\{(x,y) \in M^{n-1} \times M \;|\; \forall y' \in M,\ y'<y \rightarrow (x,y') \in X\} \text{.}
\end{align*}
The subscript $\text{oi}$ of $X_{\text{oi}}$ is an acronym of open intervals.
Set
\begin{align*}
&X_{\text{boi}}=X_{\text{oi}} \setminus (X_{\forall} \cup X'_{\infty} \cup X'_{-\infty})\text{,}\\ &X_{\infty} = X'_{\infty} \setminus X_{\forall}\text{,}\\
&X_{-\infty} = X'_{-\infty} \setminus X_{\forall}\text{ and }\\
&X_{\text{pt}} = X \setminus (X_{\text{boi}} \cup X_{\infty} \cup X_{-\infty} \cup X_{\forall})\text{.}
\end{align*}
The subscripts $\text{boi}$ of $X_{\text{boi}}$ and $\text{pt}$ of $X_{\text{pt}}$ represent bounded open intervals and points, respectively.
The definable set $X$ is partitioned as follows:
\begin{equation*}
X = X_{\text{boi}} \cup X_{\infty} \cup X_{-\infty} \cup X_{\forall} \cup X_{\text{pt}}\text{.}
\end{equation*}
By the definition and Lemma \ref{lem:local2}, the semi-definably connected components of non-empty fibers of $X_{\text{boi}}$, $X_{\infty}$, $X_{-\infty}$, $X_{\forall}$ and $X_{\text{pt}}$ are bounded open intervals, open intervals unbounded above and bounded below, open intervals bounded above and unbounded below, the whole line $M$ and points, respectively.
We have only to show that the above five definable sets are partitioned into multi-cells.
The definable set $X_{\text{pt}}$ is partitioned into multi-cells by Lemma \ref{lem:multi-cell-pre2}.
As to $X_{\forall}$, there exists a partition into multi-cells $\pi(X_{\forall})=\bigcup_{i=1}^k Y_i$ by the induction hypothesis.
Set $X_{\forall,i}= Y_i \times M$, then the partition $X_{\forall}=\bigcup_{i=1}^k X_{\forall,i}$ is a partition into multi-cells.
Consider the set
\begin{equation*}
Y_{\infty}=\{(x,y) \in \pi(X_{\infty}) \times M\;|\; (x,y) \not\in X_{\infty},\ \forall y',\ y'>y \rightarrow (x,y') \in X_{\infty}\}\text{.}
\end{equation*}
The definable set $Y_{\infty}$ consists of the lower endpoints of the fibers of $X_{\infty}$.
In particular, $Y_\infty$ satisfies the assumption of Lemma \ref{lem:multi-cell-pre2}.
Let $Y_\infty=\bigcup_{i=1}^k Y_{\infty,i}$ be a partition into multi-cells given by Lemma \ref{lem:multi-cell-pre2}.
Set $X_{\infty,i}=X_{\infty} \cap \pi^{-1}(\pi(Y_{\infty,i}))$.
We claim that each definable set $X_{\infty, i}$, $1 \leq i \leq k$, is a multi-cell.
In fact, it is clear that the projection image $\pi(X_{\infty,i})$ is a multi-cell because $\pi(X_{\infty,i})=\pi(Y_{\infty,i})$.
Since $Y_{\infty,i}$ is a multi-cell, it is the graph of a continuous function $f$ defined on $\pi(Y_{\infty,i})$.
It is obvious that $X_{\infty,i}=\{(x,y) \in \pi(X_{\infty,i}) \times M\;|\; y>f(x)\}$ by the definition.
Hence, the definable set $X_{\infty,i}$ is a multi-cell, and $X_{\infty}=\bigcup_{i=1}^k X_{\infty,i}$ is a partition into multi-cells.
We can show that the definable set $X_{-\infty}$ is partitioned into multi-cells in the same way.
The remaining task is to demonstrate that $X_{\text{boi}}$ is partitioned into multi-cells.
We may assume the following:
\begin{enumerate}
\item[(i)] All the semi-definably connected components of non-empty fibers of $X$ are bounded open intervals;
\item[(ii)] For any $x \in \pi(X)$, the closures of two distinct semi-definably connected components of $X \cap \pi^{-1}(x)$ have an empty intersection.
\end{enumerate}
In fact, we can assume (i) by setting $X=X_{\text{boi}}$.
Let us explain why we can also assume (ii).
Consider the definable set
\begin{align*}
Y_{\text{both}} &= \{(x,y_1,y_2) \in \pi(X) \times M^2\;|\; (x,y_1) \not\in X, (x,y_2) \not\in X, y_1<y_2,\\
&\qquad \forall c,\ y_1 < c < y_2 \rightarrow (x,c) \in X\}\text{.}
\end{align*}
Set
\begin{align*}
X_{\text{upper}} &= \{(x,y) \in \pi(X) \times M\;|\; \exists y_1, y_2, \ (x,y_1,y_2) \in Y_{\text{both}},\ (y_1+y_2)/2 <y <y_2\}\text{,}\\
X_{\text{middle}} &= \{(x,y) \in \pi(X) \times M\;|\; \exists y_1, y_2, \ (x,y_1,y_2) \in Y_{\text{both}}, \ y=(y_1+y_2)/2\}\text{ and }\\
X_{\text{lower}} &= \{(x,y) \in \pi(X) \times M\;|\; \exists y_1, y_2, \ (x,y_1,y_2) \in Y_{\text{both}},\ y_1<y<(y_1+y_2)/2\}\text{.}
\end{align*}
The definable set $X_{\text{middle}}$ can be partitioned into finitely many multi-cells by Lemma \ref{lem:multi-cell-pre2}.
The closures of two distinct semi-definably connected components of $X_{\text{upper}} \cap \pi^{-1}(x)$ have empty intersections for all $x \in \pi(X)$.
The fiber $X_{\text{lower}} \cap \pi^{-1}(x)$ also enjoys the same property.
Therefore, we may assume that the definable set $X$ satisfies the assumption (ii) by treating the cases $X=X_{\text{upper}}$ and $X=X_{\text{lower}}$ separately.
\medskip
Consider the definable sets
\begin{align*}
Y_{\text{upper}} &= \{(x,y) \in \pi(X) \times M\;|\; (x,y) \not\in X,\ \exists \varepsilon >0,\ \forall c,\ y-\varepsilon < c < y\\
&\qquad \rightarrow (x,c) \in X\}\text{ and }\\
Y_{\text{lower}} &= \{(x,y) \in \pi(X) \times M\;|\; (x,y) \not\in X,\ \exists \varepsilon >0,\ \forall c,\ y < c < y+\varepsilon\\
&\qquad \rightarrow (x,c) \in X\}\text{.}
\end{align*}
For any $x \in \pi(X)$, the fiber $Y_{\text{upper}} \cap \pi^{-1}(x)$ is the set of the upper endpoints of the maximal open intervals contained in $X \cap \pi^{-1}(x)$ by the assumption (i).
The fiber $Y_{\text{lower}} \cap \pi^{-1}(x)$ is the set of the lower endpoints of the maximal open intervals.
By Lemma \ref{lem:multi-cell-pre2}, both $Y_{\text{upper}}$ and $Y_{\text{lower}}$ are partitioned into finitely many multi-cells.
Let $Y_{\text{upper}}=\bigcup_{i=1}^k Y_{\text{upper},i}$ and $Y_{\text{lower}}=\bigcup_{j=1}^l Y_{\text{lower},j}$ be these partitions, respectively.
We have $\pi(Y_{\text{upper},i_1}) \cap \pi(Y_{\text{upper},i_2})=\emptyset$ by Lemma \ref{lem:multi-cell-pre2} if $i_1 \not=i_2$.
We may further assume that, for all $1 \leq i \leq k$ and $1 \leq j \leq l$, we have either $\pi(Y_{\text{upper},i})=\pi(Y_{\text{lower},j})$ or $\pi(Y_{\text{upper},i}) \cap \pi(Y_{\text{lower},j}) = \emptyset$.
In fact, for all $1 \leq i \leq k$ and $1 \leq j \leq l$, the definable set $\pi(Y_{\text{upper},i}) \cap \pi(Y_{\text{lower},j})$ is partitioned as a finite union of multi-cells by the induction hypothesis.
Let $\pi(Y_{\text{upper},i}) \cap \pi(Y_{\text{lower},j})=\bigcup_{m=1}^{p(i,j)} Z_{ijm}$ be these partitions.
Set $Y_{\text{upper},ijm}=Y_{\text{upper},i} \cap \pi^{-1}(Z_{ijm})$ and $Y_{\text{lower},ijm}=Y_{\text{lower},j} \cap \pi^{-1}(Z_{ijm})$.
They are obviously multi-cells satisfying the requirement.
Set $X_i = X \cap \pi^{-1}(\pi(Y_{\text{upper},i}))$.
We have a partition $X=\bigcup_{i=1}^k X_i$.
The remaining task is to show that each $X_i$ is a multi-cell.
Take an arbitrary semi-definably connected component $C$ of $X_i$ and an arbitrary point $\hat{z} \in C$.
Set $\hat{x}=\pi(\hat{z})$ and $\hat{z}=(\hat{x},\hat{y})$ for some $\hat{y} \in M$.
Since semi-definably connected components of the fiber $X \cap \pi^{-1}(\hat{x})$ are bounded open intervals by the assumption (i), there exist $y_u, y_l \in M$, $1 \leq i' \leq k$ and $1 \leq j' \leq l$ with $y_l < \hat{y} < y_u$, $(\hat{x},y_u) \in Y_{\text{upper},i'}$, $(\hat{x},y_l) \in Y_{\text{lower},j'}$ and $(\hat{x},y) \in X$ for all $y_l<y<y_u$.
We have $\pi(Y_{\text{upper},i'})=\pi(Y_{\text{lower},j'})$ by the assumption.
Let $Z$ be the semi-definably connected component of $\pi(X_i)$ containing the point $\hat{x}$.
There are two continuous functions $f$ and $g$ defined on $Z$ such that $y_l=f(\hat{x})$, $y_u=g(\hat{x})$ and the graphs of $f$ and $g$ are semi-definably connected components of $Y_{\text{lower},j'}$ and $Y_{\text{upper},i'}$, respectively, because $Y_{\text{lower},j'}$ and $Y_{\text{upper},i'}$ are multi-cells.
We demonstrate that $f(x)<g(x)$ on $Z$ and $$C=\{(x,y) \in Z \times M\;|\; f(x)<y<g(x)\}\text{.}$$
We first show that the graph of $f$ does not intersect $Y_{\text{upper}}$.
In particular, we have $f(x)<g(x)$ on $Z$ by Lemma \ref{lem:intermediate}.
Assume the contrary.
Let $x' \in Z$ and $y'=f(x')$ with $(x',y') \in Y_{\text{upper}}$.
By the definition of $f$ and $Y_{\text{upper}}$, there exist $y_1,y_2 \in M$ with $y_1<y'<y_2$ such that $\{x'\} \times ]y_1,y'[$ and $\{x'\} \times ]y',y_2[$ are semi-definably connected components of the fiber $X \cap \pi^{-1}(x')$.
The intersection of their closures is not empty.
This contradicts (ii).
We next show that $C=\{(x,y) \in Z \times M\;|\; f(x)<y<g(x)\}$.
The set $C$ is contained in $\{(x,y) \in Z \times M\;|\; f(x)<y<g(x)\}$ because the latter set is closed and open in $X$ by the definition.
We demonstrate the opposite inclusion.
Assume the contrary.
Let $(x',y')$ be a point satisfying $x' \in Z$, $f(x')<y'<g(x')$ and $(x',y') \not\in C$.
By the assumption (i), there exists $\overline{y} \in M$ with $f(x')<\overline{y} \leq y'$ and
$(x',\overline{y}) \in Y_{\text{upper}}$.
Since we have $\pi(Y_{\text{upper},i_1}) \cap \pi(Y_{\text{upper},i_2})=\emptyset$ for all $i_1 \not= i_2$, we have $(x',\overline{y}) \in Y_{\text{upper},i'}$.
Since $Y_{\text{upper},i'}$ is a multi-cell, the semi-definably connected component of $Y_{\text{upper},i'}$ containing the point $(x',\overline{y})$ is the graph of some continuous function $g'$ defined on $Z$.
We have $f(x')<g'(x')<g(x')$.
The graph of $g'$ does not intersect the graph of $g$ because $Y_{\text{upper},i'}$ is a multi-cell.
The graph of $g'$ does not intersect the graph of $f$ because the graph of $f$ does not intersect $Y_{\text{upper}}$ as we demonstrated previously.
We get $y_l=f(\hat{x})<g'(\hat{x})<g(\hat{x})=y_u$ by Lemma \ref{lem:intermediate}.
We obtain $(\hat{x},g'(\hat{x})) \not\in X$, which contradicts the fact that $(\hat{x},y) \in X$ for all $y_l<y<y_u$.
\end{proof}
\begin{remark}
The notion of special submanifolds defined in \cite{F,M2,T,Fuji4} is similar to that of multi-cells.
Consider an expansion of a dense linear order without endpoints $\mathcal M=(M,<$,$\ldots)$.
Let $\pi:M^n \rightarrow M^d$ be a coordinate projection.
A definable subset $X$ of $M^n$ is a \textit{$\pi$-special submanifold} or simply a \textit{special submanifold} if $\pi(X)$ is a definable open set and, for every point $x \in \pi(X)$, there exists an open box $U$ in $M^d$ containing the point $x$ satisfying the following condition:
For any $y \in X \cap \pi^{-1}(x)$, there exist an open box $V$ in $M^n$ and a definable continuous map $\tau:U \rightarrow M^n$ such that $\pi(V)=U$, $\tau(U)=X \cap V$ and the composition $\pi \circ \tau$ is the identity map on $U$.
Let $\{X_i\}_{i=1}^m$ be a finite family of definable subsets of $M^n$.
A \textit{decomposition of $M^n$ into special submanifolds partitioning $\{X_i\}_{i=1}^m$} is a finite family of special submanifolds $\{C_i\}_{i=1}^N$ such that $\bigcup_{i=1}^NC_i =M^n$, $C_i \cap C_j=\emptyset$ when $i \not=j$ and either $C_i$ has an empty intersection with $X_j$ or is contained in $X_j$ for any $1 \leq i \leq N$ and $1 \leq j \leq m$.
For instance, a DCULOAS structure admits decomposition into special submanifolds \cite{Fuji4}.
A $d$-minimal expansion of an ordered field also admits decomposition into special submanifolds \cite{M2,T}.
A multi-cell is a special submanifold, but the converse is false.
The projection image of a multi-cell under the projection forgetting the last coordinate is again a multi-cell, but the corresponding statement is not true for a special submanifold.
We need a decomposition into multi-cells in order to prove Theorem \ref{thm:uniform}.
\end{remark}
\subsubsection{Uniform local definable cell decomposition}\label{sec:udcd}
In this subsection, we first show that an almost o-minimal expansion of an ordered group $\mathcal M=(M,<,0,+,\ldots)$ has a uniformity property.
We also prove the uniform local definable cell decomposition theorem introduced in Section \ref{sec:intro} using this uniformity property.
We need the following technical definition.
\begin{definition}
Consider an almost o-minimal expansion of an ordered group $\mathcal M=(M,<,0,+,\ldots)$.
Let $X \subseteq M^n$ be a multi-cell and $Y$ be a discrete definable subset of $X$.
Let $\pi_k:M^n \rightarrow M^k$ denote the projection onto the first $k$ coordinates for all $1 \leq k \leq n$.
Note that $\pi_n$ is the identity map.
The definable set $Y$ is a \textit{representative set of semi-definably connected components of $X$} if the intersection of $\pi_k(Y)$ with any semi-definably connected component of $\pi_k(X)$ is a singleton for any $1 \leq k \leq n$.
\end{definition}
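The following toy example, constructed by us in $(\mathbb R,<,0,+,\mathbb Z)$, may clarify the definition. Let $X=\mathbb R \setminus \mathbb Z \subseteq M$, a multi-cell whose semi-definably connected components are the intervals $]k,k+1[$ for $k \in \mathbb Z$. The set
\begin{equation*}
Y=\{y \in M\;|\; y+y \in \mathbb Z \text{ and } y \not\in \mathbb Z\}=\mathbb Z + 1/2
\end{equation*}
is a discrete definable subset of $X$ which meets each component $]k,k+1[$ in the singleton $\{k+1/2\}$; since $\pi_1$ is the identity map when $n=1$, the set $Y$ is a representative set of semi-definably connected components of $X$.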
\begin{lemma}\label{lem:onept}
Consider an almost o-minimal expansion of an ordered group $\mathcal M=(M,<,0,+,\ldots)$.
Let $X \subseteq M^{m+n}$ be a multi-cell and $\pi:M^{m+n} \rightarrow M^m$ be the projection onto the first $m$ coordinates.
There exists a definable subset $Y$ of $X$ such that $Y \cap \pi^{-1}(x)$ is a representative set of semi-definably connected components of $X \cap \pi^{-1}(x)$ for any $x \in \pi(X)$.
\end{lemma}
\begin{proof}
We demonstrate the lemma by induction on $n$.
We first consider the case in which $n=1$.
Consider the following definable sets:
\begin{align*}
&S_{\infty} = \{x \in \pi(X)\;|\; \forall y \in M,\ (x,y) \in X\}\text{,}\\
&S_u = \{x \in \pi(X)\;|\; \exists y \in M, \ \forall z, \ z > y \rightarrow (x,z) \in X\} \setminus S_\infty \text{ and }\\
&S_l = \{x \in \pi(X)\;|\; \exists y \in M, \ \forall z, \ z < y \rightarrow (x,z) \in X\} \setminus S_\infty\text{.}
\end{align*}
The definable functions $\rho_u:S_u \rightarrow M$ and $\rho_l:S_l \rightarrow M$ are given as follows:
\begin{align*}
\rho_u(x) &= \inf\{y \in M\;|\; \forall z, \ z > y \rightarrow (x,z) \in X\}\text{ and }\\
\rho_l(x) &= \sup\{y \in M\;|\; \forall z, \ z < y \rightarrow (x,z) \in X\}\text{.}
\end{align*}
They are well-defined by Lemma \ref{lem:almost1}.
We set
\begin{align*}
Y_c &= \{(x,y_1,y_2) \in \pi(X) \times M^2\;|\; (x,y_1) \not\in X, (x,y_2) \not\in X, y_1<y_2,\\
&\qquad \forall c,\ y_1 < c < y_2 \rightarrow (x,c) \in X\}\text{ and }\\
Y_p &= \{(x,y) \in X\;|\; \exists \varepsilon > 0,\ \forall c,\ 0<|y-c|<\varepsilon \rightarrow (x,c) \not\in X\}\text{.}
\end{align*}
We finally set
\begin{align*}
Y &= \{(x,\rho_u(x)+\varepsilon) \in M^{m+1}\;|\; x \in S_u\} \cup \{(x,\rho_l(x)-\varepsilon) \in M^{m+1}\;|\; x \in S_l\}\\
&\quad \cup \{(x,y) \in M^{m+1}\;|\; \exists y_1, y_2, \ (x,y_1,y_2) \in Y_c,\ y = (y_1+y_2)/2\}\\
&\quad \cup Y_p \cup (S_\infty \times \{0\})\text{,}
\end{align*}
where $\varepsilon$ is a fixed positive element in $M$.
The definable set $Y \cap \pi^{-1}(x)$ is obviously a representative set of semi-definably connected components of $X \cap \pi^{-1}(x)$ for any $x \in \pi(X)$ by the definition of multi-cells.
We consider the case in which $n>1$.
The notations $\pi_1:M^{m+n} \rightarrow M^{m+n-1}$ and $\pi_2: M^{m+n-1} \rightarrow M^{m}$ denote the projections forgetting the last coordinate and onto the first $m$ coordinates, respectively.
The projection image $\pi_1(X)$ is a multi-cell by the definition of multi-cells.
There exists a definable subset $Y_1 \subseteq \pi_1(X)$ such that the definable set $Y_1 \cap \pi_2^{-1}(x)$ is a representative set of semi-definably connected components of $\pi_1(X) \cap \pi_2^{-1}(x)$ for any $x \in \pi(X)$ by applying the induction hypothesis to $\pi_1(X)$ and $\pi_2$.
Set $X'=X \cap \pi_1^{-1}(Y_1)$, and apply the lemma for $n=1$ to $X'$ and $\pi_1$.
We can find a representative set $Y$ of semi-definably connected components of $X'$.
It is easy to demonstrate that $Y$ is also a representative set of semi-definably connected components of $X$.
\end{proof}
\begin{theorem}[Uniformity theorem]\label{thm:uniform}
Consider an almost o-minimal expansion of an ordered group $\mathcal M=(M,<,0,+,\ldots)$.
For any definable subset $X$ of $M^{n+1}$ and a positive element $R \in M$, there exists a positive integer $K$ such that, for any $a \in M^n$, the definable set $X \cap (\{a\} \times ]-R,R[)$ has at most $K$ semi-definably connected components.
\end{theorem}
\begin{proof}
Consider the set $X^{<R}:= X \cap (M^n \times ]-R,R[)$.
Apply Theorem \ref{thm:multi-cell} to $X^{<R}$.
We have a partition into multi-cells $X^{<R} = \bigcup_{i=1}^k X_i$.
Let $\pi_1:M^{n+1} \rightarrow M^n$ and $\pi_2:M^{n+1} \rightarrow M$ be the projections onto the first $n$ coordinates and onto the last coordinate, respectively.
We next apply Lemma \ref{lem:onept} to $X_i$ and $\pi_2$.
For any $1 \leq i \leq k$, we can take a definable discrete subset $Y_i$ of $X_i$ which is a representative set of semi-definably connected components of $X_i$.
Since $Y_i$ is discrete, we have $\dim (Y_i) \leq 0$ by Lemma \ref{lem:equiv_dim}.
Set $Z_i = \pi_2(Y_i)$ for all $1 \leq i \leq k$.
We get $\dim(Z_i) \leq 0$ by Proposition \ref{prop:olddim}(1).
It implies that the definable set $Z_i$ is discrete.
The definable set $Z_i$ is included in the bounded open interval $]-R,R[$ by the definition.
Hence, the definable set $Z_i$ is a finite set for any $1 \leq i \leq k$ because $\mathcal M$ is almost o-minimal.
Set $K=\sum_{i=1}^k |Z_i|$.
When $a \in \pi_1(X_i)$ for some $1 \leq i \leq k$, there exists a point $a'_i \in \pi_1(Y_i)$ contained in the semi-definably connected component of $\pi_1(X_i)$ containing the point $a$.
The definable set $X_i \cap \pi_1^{-1}(a)$ has the same number of semi-definably connected components as $X_i \cap \pi_1^{-1}(a'_i)$ which is equal to $|Y_i \cap \pi_1^{-1}(a'_i)|$
by the definitions of multi-cells and representative sets of their semi-definably connected components.
Let $\operatorname{NC}(S)$ denote the number of semi-definably connected components of a definable set $S$.
We therefore have
\begin{align*}
\operatorname{NC}(X \cap (\{a\} \times ]-R,R[)) &= \operatorname{NC}(X^{<R} \cap \pi_1^{-1}(a))
\leq \displaystyle\sum_{1 \leq i \leq k, a \in \pi_1(X_i)}\operatorname{NC}(X_i \cap \pi_1^{-1}(a))\\
&=\displaystyle\sum_{1 \leq i \leq k, a \in \pi_1(X_i)} \operatorname{NC}(X_i \cap \pi_1^{-1}(a'_i))\\
&=\displaystyle\sum_{1 \leq i \leq k, a \in \pi_1(X_i)} |Y_i \cap \pi_1^{-1}(a'_i)|\\
& \leq \displaystyle\sum_{i=1}^k |\pi_2(Y_i)| = \displaystyle\sum_{i=1}^k \left|Z_i\right|=K\text{.}
\end{align*}
We have finished the proof.
\end{proof}
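As a sanity check of our own, consider in $(\mathbb R,<,0,+,\mathbb Z)$ the definable set
\begin{equation*}
X=\{(a,y) \in \mathbb R \times \mathbb R\;|\; y-a \in \mathbb Z\} \subseteq M^{1+1}\text{.}
\end{equation*}
For a fixed positive $R$, the fiber $X \cap (\{a\} \times ]-R,R[)=\{a\} \times ((a+\mathbb Z)\, \cap\, ]-R,R[)$ is a discrete set with at most $2\lceil R \rceil+1$ points, each of which is a semi-definably connected component; hence $K=2\lceil R \rceil+1$ witnesses the theorem for this $X$. The point of the theorem is precisely that such a $K$ depends on $R$ but not on $a$.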
We now begin to demonstrate Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem \ref{thm:main}]
We first show the assertion for $n=1$.
For any definable set $S \subseteq M^{m+1}$, the notation $\operatorname{bd}_m(S)$ denotes the set $\{(x,y) \in M^m \times M\;|\; y \in \operatorname{bd}(S_x)\}$.
Set $I=]\!-\!R,R[$, then $S' \cap I$ is a finite union of points and open intervals for any definable subset $S'$ of $M$ by the definition of almost o-minimality.
Set $X= \bigcup_{\lambda \in \Lambda}\operatorname{bd}_m(A_\lambda \cap I)$.
The fibers $X_b$ are finite sets for all $b \in M^m$.
It is obvious that any definable cell decomposition of $I$ partitioning $X_b \cap I$ partitions $\{(A_\lambda)_b \cap I\}_{\lambda\in\Lambda}$ for any point $b \in M^m$.
There exists a positive integer $K$ such that $|X \cap(\{b\} \times I)| \leq K$ for any point $b \in M^m$ by Theorem \ref{thm:uniform}.
Set $S_i=\{b \in M^m\;|\; |X_b \cap I|=i\}$ for all $0 \leq i \leq K$.
The family $\{S_i\}_{i=0}^K$ partitions the parameter space $M^m$.
Let $y_j(b)$ be the $j$-th smallest point of $X_b \cap I$ for all $b \in S_i$ and $1 \leq j \leq i$.
Set $y_0(b)=-R$ and $y_{i+1}(b)=R$ for all $b \in S_i$.
Applying Proposition \ref{prop:olddim}(2) inductively, we can find a partition into definable sets
\begin{equation*}
S_i = S_{i0} \cup \ldots \cup S_{im}
\end{equation*}
such that either $S_{ik}=\emptyset$ or $\dim(S_{ik})=k$, and $y_j$ is continuous on $S_{ik}$ for any $0 \leq j \leq i$ and $0 \leq k \leq m$.
We set
\begin{align*}
&C_{ijk} =\{(x,y_j(x)) \in S_{ik} \times M\} \ \ (1 \leq j \leq i)\text{ and }\\
&D_{ijk} = \{(x,y) \in S_{ik} \times M\;|\; y_j(x) < y < y_{j+1}(x)\} \ \ (0 \leq j \leq i)
\end{align*}
for any $0 \leq i \leq K$ and $0 \leq k \leq m$.
Consider the family of maps $\mathcal F = \{\sigma: \Lambda \rightarrow \{0,1\}\}$.
Set
\begin{align*}
&T^0_{ijk\sigma} =\{x \in S_{ik}\;|\; C_{ijk} \cap (\{x\} \times M) \text{ is contained in } A_\lambda \text{ iff } \sigma(\lambda)=1\} \ (1 \leq j \leq i) \text{ and }\\
&T^1_{ijk\sigma} =\{x \in S_{ik}\;|\; D_{ijk} \cap (\{x\} \times M) \text{ is contained in } A_\lambda \text{ iff } \sigma(\lambda)=1\} \ (0 \leq j \leq i)
\end{align*}
for any $0 \leq i \leq K$, $0 \leq k \leq m$ and $\sigma \in \mathcal F$.
We finally set $C_{ijk\sigma}=C_{ijk} \cap (T^0_{ijk\sigma} \times M)$ and $D_{ijk\sigma}=D_{ijk} \cap (T^1_{ijk\sigma} \times M)$.
The partition
\begin{equation*}
M^m \times I = \bigcup_{i=0}^K \left(\bigcup_{k=0}^m \left(\bigcup_{\sigma \in \mathcal F}\left(\bigcup_{j=1}^i C_{ijk\sigma} \cup \bigcup_{j=0}^i D_{ijk\sigma}\right)\right)\right)
\end{equation*}
is the desired partition.
Furthermore, the above definable functions $y_j$ can be chosen as continuous functions on $p(C_{ijk\sigma})$ and $p(D_{ijk\sigma})$, where $p:M^{m+1} \rightarrow M^m$ is the projection forgetting the last coordinate.
It is clear that the type of the cell $(X_i)_b$ is independent of the choice of $b$ with $(X_i)_b \not= \emptyset$.
We consider the case in which $n>1$.
Let $\pi:M^{m+n} \rightarrow M^{m+n-1}$ be the projection forgetting the last coordinate.
Set $I=]-R,R[$.
Applying the theorem for $n=1$ to the family $\{A_\lambda\}_{\lambda\in\Lambda}$, there exists a partition $M^{m+n-1} \times I = Y_1 \cup \ldots \cup Y_l$ such that $I=(Y_1)_b \cup \ldots \cup (Y_l)_b$ is a definable cell decomposition of $I$ for any $b \in M^{m+n-1}$ and either $Y_i \subseteq A_\lambda$ or $Y_i \cap A_\lambda=\emptyset$ for any $1 \leq i \leq l$ and $\lambda \in \Lambda$.
We can further assume that $Y_i$ is one of the following forms:
\begin{align*}
Y_i &= \{(x,f(x)) \in \pi(Y_i) \times M\} \text{ and }\\
Y_i &= \{(x,y) \in \pi(Y_i) \times M\;|\; f(x)<y<g(x)\}\text{,}
\end{align*}
where $f$ and $g$ are definable continuous functions on $\pi(Y_i)$ with $f<g$.
Set $B'=]-R,R[^{n-1}$.
Apply the induction hypothesis to the family $\{\pi(Y_i)\}_{i=1}^l$.
There exists a partition $M^{m} \times B' = Z_1 \cup \ldots \cup Z_q$ such that $B'=(Z_1)_b \cup \ldots \cup (Z_q)_b$ is a definable cell decomposition of $B'$ for any $b \in M^{m}$, and either $\pi(Y_i) \cap Z_j = \emptyset$ or $Z_j \subseteq \pi(Y_i)$ and the type of the cell $(Z_j)_b$ is independent of the choice of $b$ with $(Z_j)_b \not= \emptyset$ for all $i$ and $j$.
Set $X_{ij}=Y_i \cap \pi^{-1}(Z_j)$ for all $1 \leq i \leq l$ and $1 \leq j \leq q$.
Let $\{X_i\}_{i=1}^k$ be the family of non-empty $X_{ij}$'s.
It is easy to demonstrate that the family $\{X_i\}_{i=1}^k$ satisfies the requirement of the theorem.
We omit the details.
\end{proof}
\begin{corollary}
Consider an almost o-minimal expansion of an ordered group $\mathcal M=(M,<,0,+,\ldots)$.
For any definable subset $X$ of $M^n$ and a positive element $R \in M$, there exists a positive integer $K$ such that the definable set $X \cap (b+B)$ has at most $K$ semi-definably connected components for all $b \in M^n$.
Here, $B=]\!-\!R,R[^n$ and $b+B$ denotes the set given by $\{x \in M^n\;|\; x-b \in B\}$.
\end{corollary}
\begin{proof}
Consider the definable set $Y$ defined by
\begin{equation*}
\{(y,x) \in M^n \times M^n\;|\; x-y \in X\} \text{.}
\end{equation*}
Applying Theorem \ref{thm:main}, there exists a partition $M^{n} \times B = X_1 \cup \ldots \cup X_K$ such that $B=(X_1)_b \cup \ldots \cup (X_K)_b$ is a definable cell decomposition of $B$ partitioning the definable set $Y_b \cap B$ for any $b \in M^{n}$.
It means that the definable set $X \cap (b+B)$ is the union of at most $K$ cells.
The set $X \cap (b+B)$ has at most $K$ semi-definably connected components because cells are semi-definably connected.
We have finished the proof.
\end{proof}
\subsection{Structures elementarily equivalent to an almost o-minimal structure}
A structure elementarily equivalent to an almost o-minimal structure is not necessarily almost o-minimal, as demonstrated in Proposition \ref{prop:not_almost}.
However, a weaker version of Theorem \ref{thm:main} holds true for such structures.
\begin{lemma}\label{lem:last}
Let $\mathcal M=(M,<,\ldots)$ be an expansion of a dense linear order.
Consider a definable set $C \subseteq M^n$ defined by a first-order formula with parameter $\overline{c}$.
There exists a first-order sentence with parameters $\overline{c}$ expressing the condition for $C$ being a definable cell of type $(j_1, \ldots, j_d)$.
\end{lemma}
\begin{proof}
We prove the lemma by induction on $n$.
When $n=1$, the definable set $C$ is a cell if and only if $C$ is a point or an open interval.
This condition is clearly expressed by a first-order sentence.
We next consider the case in which $n>1$.
The notation $\pi:M^n \rightarrow M^{n-1}$ denotes the projection forgetting the last coordinate.
The condition for $\pi(C)$ being a cell is represented by a first-order sentence with parameters $\overline{c}$ by the induction hypothesis.
We only prove the lemma in the case in which the definable set $C$ is of the form
\begin{equation*}
C=\{(x,y) \in M^{n-1} \times M\;|\; f(x) < y < g(x)\}\text{,}
\end{equation*}
where $f$ and $g$ are definable continuous functions defined on $\pi(C)$.
We can demonstrate the lemma in the other cases in a similar way.
The above condition is equivalent to the following conditions:
\begin{itemize}
\item For any $x \in \pi(C)$, the fiber $C_x=\{y \in M\;|\; (x,y) \in C\}$ is a bounded open interval.
\item Set $f(x)=\inf\{y \in M\;|\;(x,y) \in C\}$ and $g(x)=\sup\{y \in M\;|\;(x,y) \in C\}$ for any $x \in \pi(C)$, then $f$ and $g$ are continuous on $\pi(C)$.
\end{itemize}
The above conditions are obviously expressed by first-order sentences with parameters $\overline{c}$.
\end{proof}
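For concreteness, in the base case $n=1$ such a sentence can be written down explicitly. The following formulation is ours, where $\varphi(y,\overline{c})$ is the formula defining $C$ and where, for brevity, we only display the disjuncts for a point and for a bounded open interval:
\begin{equation*}
\exists a\, \forall y\, \bigl(\varphi(y,\overline{c}) \leftrightarrow y=a\bigr) \;\vee\; \exists a\, \exists b\, \Bigl(a<b \wedge \forall y\, \bigl(\varphi(y,\overline{c}) \leftrightarrow (a<y \wedge y<b)\bigr)\Bigr)\text{.}
\end{equation*}
The unbounded intervals $]a,\infty[$, $]-\infty,b[$ and $M$ itself are handled by entirely analogous disjuncts.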
\begin{theorem}\label{thm:udcd}
Consider a structure $\mathcal M = (M, <,0,+, \ldots)$ elementarily equivalent to an almost o-minimal expansion of an ordered group.
Let $\{A_\lambda\}_{\lambda\in\Lambda}$ be a finite family of definable subsets of $M^{m+n}$.
There exist an open box $B$ in $M^n$ containing the origin and a finite partition into definable sets
\begin{equation*}
M^m \times B = X_1 \cup \ldots \cup X_k
\end{equation*}
such that $B=(X_1)_b \cup \ldots \cup (X_k)_b$ is a definable cell decomposition of $B$ for any $b \in M^m$ and either $X_i \cap A_\lambda = \emptyset$ or $X_i \subseteq A_\lambda$ for any $1 \leq i \leq k$ and $\lambda \in \Lambda$.
Here, the notation $S_b$ denotes the fiber of a definable subset $S$ of $M^{m+n}$ at $b \in M^m$.
\end{theorem}
\begin{proof}
Consider a structure $\mathcal N=(N,+,0,<,\ldots)$ elementarily equivalent to an almost o-minimal expansion of an ordered group $\mathcal M=(M,+,0,<,\ldots)$.
We first reduce to the case in which the $A_\lambda$ are definable without parameters for all $\lambda \in \Lambda$.
There exist parameters $\overline{c} \in N^p$ and first-order formulae $\varphi_{\lambda}(x,y,\overline{c})$ with parameters $\overline{c}$ defining the definable sets $A_{\lambda}$ for all $\lambda \in \Lambda$.
Set $A'_\lambda =\{(z,x,y) \in N^p \times N^m \times N^n\;|\; \mathcal N \models \varphi_\lambda(x,y,z)\}$.
If the theorem holds true for the family $\{A'_\lambda\}_{\lambda \in \Lambda}$, it also holds true for the family $\{A_\lambda\}_{\lambda \in \Lambda}$ because $A_\lambda$ is the fiber $(A'_\lambda)_{\overline{c}}=\{(x,y) \in N^m \times N^n\;|\; (\overline{c},x,y) \in A'_\lambda\}$.
Hence, we may assume that the $A_\lambda$ are definable without parameters for all $\lambda \in \Lambda$.
Let $\varphi_\lambda(x,y)$ denote the first-order formulae without parameters defining the definable sets $A_\lambda$.
Let $A_\lambda^{\mathcal M}$ be the definable subset of $M^{m+n}$ defined by the formula $\varphi_\lambda(x,y)$ for each $\lambda \in \Lambda$.
By Theorem \ref{thm:main}, there exist an open box $B^{\mathcal M}$ in $M^n$ containing the origin and a partition into definable sets
\begin{equation*}
M^m \times B^{\mathcal M} = X_1^{\mathcal M} \cup \ldots \cup X_k^{\mathcal M}
\end{equation*}
such that the fibers $(X_i^{\mathcal M})_b$ are definable cells of a fixed type for all $b \in M^m$ with $(X_i^{\mathcal M})_b \not= \emptyset$ and either $X_i^{\mathcal M} \subseteq A_\lambda^{\mathcal M}$ or $X_i^{\mathcal M} \cap A_\lambda^{\mathcal M} = \emptyset$ for all $1 \leq i \leq k$.
There exist parameters $\overline{d} \in M^p$ and first-order formulas $\psi_i(x,y,\overline{d})$ with parameters $\overline{d}$ defining the definable sets $X_i^{\mathcal M}$ for all $1 \leq i \leq k$.
Using the first-order formulas $\psi_i(x,y,\overline{d})$, the condition that
\begin{itemize}
\item there exists an open box $B^{\mathcal M}$ in $M^n$ containing the origin and
\item $M^m \times B^{\mathcal M} = X_1^{\mathcal M} \cup \ldots \cup X_k^{\mathcal M}$
\end{itemize}
can be expressed by a first-order sentence $\Phi(\overline{d})$ with parameters $\overline{d}$.
Let $\Psi_i(\overline{d})$ be the sentence expressing the condition that, for each $\lambda \in \Lambda$, either $X_i^{\mathcal M} \subseteq A_\lambda^{\mathcal M}$ or $X_i^{\mathcal M} \cap A_\lambda^{\mathcal M} = \emptyset$, for any $1 \leq i \leq k$.
The condition for the fiber $(X_i^{\mathcal M})_b$ being either a cell or an empty set for any $b \in M^m$ is expressed by a first-order formula $\Pi_i(\overline{d})$ with parameters $\overline{d}$ by Lemma \ref{lem:last}.
We have
\begin{equation*}
\mathcal M \models \Phi(\overline{d}) \wedge \bigwedge_{i=1}^k \left( \Psi_i(\overline{d}) \wedge \Pi_i(\overline{d}) \right)
\end{equation*}
by the definitions of $\Phi(\overline{d})$, $\Psi_i(\overline{d})$ and $\Pi_i(\overline{d})$.
We therefore get
\begin{equation*}
\mathcal M \models \exists \overline{d}\ \Phi(\overline{d}) \wedge \bigwedge_{i=1}^k \left( \Psi_i(\overline{d}) \wedge \Pi_i(\overline{d}) \right) \text{.}
\end{equation*}
Since $\mathcal N$ is elementarily equivalent to $\mathcal M$, we finally obtain
\begin{equation*}
\mathcal N \models \exists \overline{d}\ \Phi(\overline{d}) \wedge \bigwedge_{i=1}^k \left( \Psi_i(\overline{d}) \wedge \Pi_i(\overline{d}) \right) \text{.}
\end{equation*}
Take $\overline{d'} \in N^p$ satisfying the above condition and set $X_i=\{(x,y) \in N^m \times N^n\;|\; \mathcal N \models \psi_i(x,y,\overline{d'})\}$ for all $1 \leq i \leq k$.
Then, there exists an open box $B$ in $N^n$ containing the origin such that the partition $N^m \times B=X_1 \cup \ldots \cup X_k$ is the desired partition.
\end{proof}
\begin{corollary}
A structure elementarily equivalent to an almost o-minimal expansion of an ordered group is a uniformly locally o-minimal structure of the first kind.
\end{corollary}
\begin{proof}
The corollary immediately follows from Theorem \ref{thm:udcd}.
\end{proof}
\section{Introduction}
The increased relevance of social media in our daily life has been accompanied by an exigent demand for a means to affirm the authenticity and authority of content sources. This challenge becomes even more apparent during the dissemination of real-time or breaking news, whose arrival on such platforms often precedes eventual traditional media reportage~\cite{b2,b3}. In line with this need, major social networks such as Twitter, Facebook and Instagram have incorporated a verification process to authenticate handles they deem important enough to be worth impersonating. Usually conferred to accounts of well-known public personalities and businesses, \textit{verified accounts}\footnote{The exact term varies by platform, with other social networks using the term ``Verified Profiles''. However, in the interest of consistency, all owner-authenticated accounts are referred to as \textit{verified accounts}, and their owners as \textit{verified users}.} are indicated with a badge next to the screen name (e.g., \scalerel*{\includegraphics{Figs/TwiVer.png}}{B} on Twitter and \scalerel*{\includegraphics{Figs/FaceVer.png}}{B} on Facebook). Twitter's verification policy~\cite{b1} states that an account is verified if it belongs to a personality or business deemed to be of sufficient public interest in diverse fields, such as journalism, politics, sports, etc. However, the exact decision-making process behind evaluating the strength of a user's case for verification remains a trade secret. This work attempts to unravel the likely factors that strengthen a user's case for verification by delving into the aspects of a user's Twitter presence that most reliably predict platform verification.
\subsection{Motivation}
Our motivation behind this work is two-fold, as elaborated below.
\textbf{Lack of procedural clarity and imputation of bias:} Despite repeated statements by Twitter that verification is not equivalent to endorsement, aspects of the process -- the rarity of the status and its prominent visual signalling~\cite{b14} -- have led users to conflate authenticity and credibility. This perception was confirmed in full public view when Twitter was backed into suspending public verification requests in response to being accused of granting verified status to political extremists~\footnote{\url{https://www.bbc.com/news/technology-41934831}}, with the insinuation being that the verified badge lent their otherwise extremist opinions a facade of mainstream credibility.
This, however, engendered accusations of Twitter's verification procedure harbouring a liberal bias. Multiple tweets imputing the same gave rise to the hashtag \#VerifiedHate. Similar insinuations have been made by right-leaning Indian users of the platform in the lead-up to the 2019 Indian General Elections under the hashtag \#ProtestAgainstTwitter. These hitherto unfounded allegations of bias prompted us to delve deeper into understanding what may be driving the process, and to infer whether these claims were justified or whether the difference in status could be explained away by less insidious factors relating to a user's profile and content.
\textbf{Positive perception and coveted nature:} Despite having its detractors, the fact remains that a verified badge is highly coveted amongst public figures and influencers. This is with good reason: despite being intended as a mark of authenticity, prior work in social sciences and psychology points to verified badges conferring additional credibility on a handle's posted tweets~\cite{b4,b5,b6}. Psychological testing~\cite{b10} has also revealed that the credibility of a message and its reception is influenced by its purported source and presentation rather than just its pertinence or veracity. Captology studies~\cite{b11} indicate that widely endorsed information originating from a well-known source is easier to perceive as trustworthy and back up the former claim. This is pertinent as owners of verified accounts are usually well-known and their content is on average more frequently liked and retweeted than that of the generic Twittersphere~\cite{b16,b15}.
Adding to the desirability of exclusive visual indicators is the demanding nature of credibility assessment on Twitter. The imposed character limit and a minimal scope of visually customizing content, coupled with the feverish rate at which content is consumed -- with users on average devoting a mere three seconds of attention per tweet~\cite{b13} -- make users resort to heuristics to judge online content. There is substantial work on heuristic based models for online credibility evaluation~\cite{b7,b12,b55}. Particularly relevant to this inquiry is the \textit{endorsement heuristic}, whereby credibility is conferred by an endorsement from an authority (e.g., a verified badge), and the \textit{consistency heuristic}, which stems from endorsements by several authorities (e.g., a user verified on one platform is likely to be verified on others).
Unsurprisingly, a verified status is highly sought after by preeminent entities, as evidenced by the prevalence of get-verified-quick schemes such as promoted tweets from the now suspended account `@verified845'~\cite{b18,b19}. Our work attempts to obtain actionable insights into the verification process, thus providing entities looking to get verified a means to strengthen their case.
\subsection{Research Questions}
The aforementioned motivating factors pose a few avenues of research enquiry, which we attempt to answer in this work and which are detailed below.
\begin{itemize}
\item [\textbf{RQ1:}] Can the verification status of a user be predicted from profile metadata and tweet contents? If so, what are the most reliably discriminative features?
\item [\textbf{RQ2:}] Do any inconsistencies exist between verified and non-verified users with respect to peripheral aspects like the choice and variety of topics they tweet about?
\end{itemize}
\subsection{Contributions}
Our contributions can be summarized as follows:
\begin{itemize}
\item We motivate and propose the problem of predicting verification status of a Twitter user.
\item We detail a framework extracting a substantial set of features from data and meta-data about social media users, including friends, tweet content and sentiment, activity time series, and profile trajectories. We plan to make this dataset of 407,165 users and 494 million tweets publicly available upon publication of the work.\footnote{\url{http://precog.iiitd.edu.in/requester.php?dataset=twitterVerified19}}
\item Additionally, we factor state-of-the-art bot detection analysis into our predictive model. We use these features to train highly-accurate models capable of discerning a user's verified status. For a general user, we are able to provide a zero to one score representing their likelihood of being verified on Twitter.
\item We report the most informative features in discriminating verified users from non-verified ones and also shed light on the manner in which the span and gamut of topic coverage between their tweets differs.
\end{itemize}
The rest of the paper is organized as follows. Section~\ref{sec:related} details relevant prior work, hence putting our work in perspective. Section~\ref{sec:dataset} elaborates our data acquisition methodology. In Sections~\ref{sec:resultsAndAnalysis} and~\ref{sec:topicAnalysis}, we conduct a comparative analysis between verified and non-verified users, addressing RQ1 and RQ2 respectively, and attempt to uncover features that can reliably classify them. We conclude with a brief summary in Section~\ref{sec:conclusions}.
\section{Related Work}
\label{sec:related}
Previous studies have focused on measuring user impact in social networks. As user impact might be a critical factor in deciding who gets verified on Twitter~\cite{b1}, it is important to study how certain users in particular networks have more impact/influence than others. Cha et al.~\cite{cha2010measuring} studied the dynamics of influence on Twitter based on three key measures: in-degree, retweets, and user-mentions. They show that in-degree alone is not sufficient to measure the influence of a user on Twitter. Bakshy et al.~\cite{bakshy2011everyone} demonstrate that URLs from users who have been influential in the past tend to generate larger cascades on the Twitter follower graph. They also show that URLs considered more interesting, and those that kindle positive emotions, spread more. Canali et al.~\cite{canali2012quantitative} identify key users on social networks who are important sources or targets for content disseminated online. They use a dimensionality-reduction based technique and conduct experiments with YouTube and Flickr datasets to obtain results which outperform the existing solutions by 15\%. The novelty of their approach is that they use attribute-rich user profiles rather than staying limited to network information. On the other hand, Lampos et al.~\cite{lampos2014predicting} predict user impact on Twitter using features, such as user statistics and tweet content, that are under the control of the user. They experiment with both linear and non-linear prediction techniques and find that Gaussian Processes based models perform the best for the prediction task. Klout~\cite{kloutservice} was a service that measured the influence of a person using information from multiple social networks. Their initial framework~\cite{rao2015klout} used long-lasting features (e.g., in-degree, pagerank centrality, recommendations, etc.) and dynamic features (reactions to a post such as retweets, upvotes, etc.) to estimate the influence of a person across nine different social networks.
Further studies have tried to classify users based on factors such as celebrity status, socioeconomic status, etc. Lampos et al.~\cite{lampos2016inferring} classify the socioeconomic status of users on Twitter as high, middle or lower socioeconomic, using features such as tweet content, topics of discussion, interaction behaviour, and user impact. They obtain an accuracy of 75\% using a nonlinear, generative learning approach with a composite Gaussian Process kernel. Preotiuc-Pietro et al.~\cite{preoctiuc2015studying} present a Gaussian Process regression model, which predicts the income of the user on Twitter. They examined factors that help characterize user income on Twitter and analyze their relation with emotions, sentiments, perceived psycho-demographics, and language used in posts. Further, Marwick et al.~\cite{marwick2011see} qualitatively study the behaviours of celebrities on Twitter and how they impact creation and sharing of content online. They aim to conceptualize ``celebrity as a practice'' in terms of personal information revelation, language usage, interactions, and affiliation with followers, among other things. There are also other studies that try to characterize usage patterns~\cite{al2015human} and personalities~\cite{tadesse2018personality} of varied users on Twitter.
Multiple existing studies attempt to detect and analyze automated activity on Twitter~\cite{chu2012detecting,zhang2011detecting,gilani2017depth,dickerson2014using,wang2010detecting,chavoshi2016identifying} and differentiate bot activity from human or partial-human activity. In particular, Chu et al.~\cite{chu2012detecting} identify users on Twitter that generate automated content, with the verification badge being a key feature used for the purpose. Holistically characterizing features that resemble automated activity, and the extent to which exhibiting the same can hurt a user's case for verification, is further explored in Section~\ref{cluster}.
Past studies on verified accounts have focused on elucidating their behaviors and properties on Twitter. Hentschel et al.~\cite{hentschel2014finding} analyze verified users on Twitter and further use this information to identify trustworthy ``regular'' (not fake or spam) Twitter users. Castillo et al.~\cite{b4} attempt to identify credible tweets based on a variety of profile features including whether the user was authenticated by the platform or not. Along similar lines, Morris et al.~\cite{b5} examined factors that influence profile credibility perceptions on Twitter. They found that possessing an authenticated status is one of the most robust predictors of positive credibility. Paul et al.~\cite{b17} performed multiple network analyses of the verified accounts present on Twitter, revealing how they diverge from earlier results on the network as a whole. Hence, to summarize, there exists a rich body of literature establishing the enhancement of credibility and perceived importance a verified badge endows a user with. However, no prior work, to the best of our knowledge, has attempted to characterize attributes that make the aforementioned status more attainable.
\section{Dataset}
\label{sec:dataset}
In this section, we present details of our dataset and the data collection process along with a summary of the diverse features.
\subsection{User Metadata}
The \href{https://twitter.com/verified}{`@verified'} handle on Twitter follows all accounts on the platform that are currently verified. We queried this handle on the 18\textsuperscript{th} of July 2018 and extracted the IDs of 297,776 users (of which 231,235 have their primary language set to English) who were verified at the time. In the interest of verifying Twitter's assertion that the likeliness of a handle's verification is commensurate with public interest in that handle and nothing else~\cite{b1,b30}, we sought to obtain a random controlled subset of non-verified users on the platform. Pursuant to this need, we leveraged Twitter's Firehose API -- a near real-time stream of public tweets and accompanying author metadata -- in order to acquire a random set of 284,312 non-verified users, controlling for a conventional measure of public interest, by ensuring that the number of followers of every non-verified user obtained was within 2\% that of a unique verified user that we had previously acquired.
Twitter provides a REST Application Programming Interface (API) with various endpoints that make data retrieval from the site in an organized manner easier. We used the REST API to acquire profile metadata of the user handles obtained previously, including account age, number of friends, followers and tweets. Additionally, we obtained the number of public Twitter lists a user was part of and the handle's profile description. Metadata features extracted from user profiles have previously been used for classifying users and inferring activity patterns on Twitter~\cite{b24,b25}. We further focused our work on the subset of users who had English listed as their profile language, thus enabling us to focus on the largest linguistic group on the platform~\cite{b31} and leaving us with 231,235 English verified users and 175,930 non-verified users.
\subsection{Content Features}
Utilizing Twitter's Firehose API, we acquired all tweets authored by the aforementioned users over a one year collection period spanning from 1\textsuperscript{st} June 2017 to 31\textsuperscript{st} May 2018. In total, our collection process acquired 494,452,786 tweets. The tweet texts were retained and any accompanying media such as GIFs were deemed surplus to requirements and discarded.
From the text we extracted linguistic and stylistic features such as the number and proportion of \textit{Part-Of-Speech} (POS) tags, effectively obtaining a user's breakdown of natural language component usage. Work demonstrating the importance of content features in location inference~\cite{b33}, tweet classification~\cite{b37}, and network characterization~\cite{b35} further led us to extract the frequency of hashtags, retweets, mentions and external links used by each user. Prompted by studies showing that the deceptiveness of tweets could be inferred from the length of sentences constituting them~\cite{b32}, we computed additional features including average words per sentence, average words per tweet, character level entropy and frequency and proportion of long words (word length greater than six letters) per user.
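To make these definitions concrete, the following sketch computes a representative subset of the stylistic features for a single user; the NLTK tokenizer and tagger are illustrative choices, as the exact tooling used in our pipeline is not specified here.
\begin{verbatim}
import math
from collections import Counter
import nltk  # assumes the punkt and POS tagger models are downloaded

def stylistic_features(tweets):
    text = " ".join(tweets)
    words = nltk.word_tokenize(text)
    sents = nltk.sent_tokenize(text)
    tags = Counter(tag for _, tag in nltk.pos_tag(words))
    # character-level entropy of the user's aggregate text
    counts = Counter(text)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in counts.values())
    long_words = [w for w in words if len(w) > 6]  # long-word rule
    return {
        "noun_count": tags["NN"] + tags["NNS"],
        "words_per_sentence": len(words) / max(len(sents), 1),
        "words_per_tweet": len(words) / max(len(tweets), 1),
        "char_entropy": entropy,
        "long_word_prop": len(long_words) / max(len(words), 1),
    }
\end{verbatim}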
In the interest of better discerning the emotions conveyed by the tweets authored by a user and responses they may evoke in the potential audience, sentiment analysis presented itself as an effective tool. Sentiment gleaned from Twitter conversations has been used to predict financial outcomes~\cite{b38}, electoral outcomes~\cite{b27} as well as the ease of content dissemination~\cite{b39}. We used Vader~\cite{b23}, a popular social media sentiment analysis lexicon, which has previously been widely used in a plethora of applications ranging from predicting elections~\cite{b27,b29} to forecasting cryptocurrency market fluctuations~\cite{b40}. We extracted positive, negative and neutral sentiment scores and an additional fourth compound score, which is a nonlinear normalized sum of valence computed based on established heuristics~\cite{b66} and a sentiment lexicon. All four scores are computed per user, weighted by tweet length.
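A minimal sketch of the length-weighted aggregation, assuming the \texttt{vaderSentiment} Python package; the weighting rule (weights proportional to tweet length) mirrors the description above, while the function name is illustrative.
\begin{verbatim}
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def user_sentiment(tweets):
    analyzer = SentimentIntensityAnalyzer()
    weights = [len(t) for t in tweets]      # weight by tweet length
    total = float(sum(weights))
    agg = {"pos": 0.0, "neg": 0.0, "neu": 0.0, "compound": 0.0}
    for t, w in zip(tweets, weights):
        scores = analyzer.polarity_scores(t)
        for key in agg:
            agg[key] += scores[key] * w / total
    return agg
\end{verbatim}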
\subsection{Temporal Features}
Existing research suggests that temporal features relating to content generation and activity levels on Twitter can be used to infer emergent trending topics~\cite{b41} as well as influential users~\cite{b42}.
\begin{table*}[t!]
\begin{threeparttable}
\centering
\begin{tabular}{ c l | c l }
\hline
\parbox[t]{2mm}{\multirow{10}{*}{\rotatebox[origin=c]{90}{\textbf{User Metadata}}}} & Number of followers & \parbox[t]{2mm}{\multirow{10}{*}{\rotatebox[origin=c]{90}{\textbf{Temporal Features}}}}
& Average number of followers last year \Tstrut\\
& Number of friends & & Average number of friends last year \\
& Number of statuses & & Average number of statuses last year \\
& Number of public list memberships & & Proportion of followers gained in last 3 months \\
& Account age & & Proportion of friends gained in last 3 months \\
& & & Proportion of statuses generated in last 3 months \\
& & & Proportion of followers gained in last 1 month \\
& & & Proportion of friends gained in last 1 month \\
& & & Proportion of statuses generated in last 1 month \\
& & & Average duration between statuses \Bstrut\\
\hline
\parbox[t]{2mm}{\multirow{14}{*}{\rotatebox[origin=c]{90}{\textbf{Content Features}}}} & Number of POS tags\tnote{1} & \parbox[t]{2mm}{\multirow{14}{*}{\rotatebox[origin=c]{90}{\textbf{Miscellaneous Features}}}} & LIWC analytic summary score \Tstrut\\
& Frequency of POS tags\tnote{1} & & LIWC authentic summary score \\
& Average number of words per sentence & & LIWC clout summary score \\
& Average number of words per tweet & & LIWC tone summary score \\
& Character level entropy & & Botometer complete automation probability \\
& Proportion of long words\tnote{2} & & Botometer network score \\
& Positive sentiment score\tnote{3} & & Botometer content score \\
& Negative sentiment score\tnote{3} & & Botometer temporal score \\
& Neutral sentiment score\tnote{3} & & Tweet topic distribution\tnote{4} \\
& Compound sentiment score\tnote{3} & & \\
& Frequency of hashtags & & \\
& Frequency of retweets & & \\
& Frequency of mentions & & \\
& Frequency of external links posted & & \Bstrut\\
\hline
\end{tabular}
\caption{List of features extracted per user by our framework.}\label{tab:1}
\begin{tablenotes}
\item[1] Part Of Speech (POS) tags include nouns, personal pronouns, impersonal pronouns, adjectives, adverbs, verbs, auxiliary verbs, prepositions and articles.
\item[2] Long words are defined as words longer than 6 letters.
\item[3] Sentiment scores are weighted over all tweets of a user by tweet length.
\item[4] Scores over 100 topics are extracted from the tweets.
\end{tablenotes}
\end{threeparttable}
\end{table*}
Leveraging the Twitter Firehose, we gathered fine-grained time series of user statistics including number of friends, followers and statuses, thus permitting us to compute their averages over our one year collection period. Furthermore, positing that a user's likelihood of verification may be predicated on how ascendant their reach in the platform is, we compute the proportion of friends and followers gained over the last one month and the last three months of our collection period. Additionally, similar trajectory encoding features are computed for tweet activity levels over the aforementioned one and three month windows, and the average time between statuses is extracted using the status count time series on a per user basis.
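For illustration, the trajectory features can be computed from a daily follower-count series as sketched below; the daily sampling over a 365-day window and the array layout are assumptions.
\begin{verbatim}
import numpy as np

def trajectory_features(followers):       # daily series, shape (365,)
    followers = np.asarray(followers, float)
    gained = followers[-1] - followers[0]  # total gain over the year
    return {
        "avg_followers": followers.mean(),
        "prop_gained_3m": (followers[-1] - followers[-90])
                          / max(gained, 1.0),
        "prop_gained_1m": (followers[-1] - followers[-30])
                          / max(gained, 1.0),
    }
\end{verbatim}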
\subsection{Miscellaneous Features}
Attempting to capture qualitative cognitive and emotional cues from a user's tweets, we acquired the four LIWC 2015~\cite{b46} summary statistics named Analytic, Clout, Authentic and Tone for each user in our dataset. The summary dimensions indicate the presence of logical and hierarchical thinking patterns, confidence and leadership, personal cues and emotional tone, respectively, in the tweets of a user. LIWC categories have been scientifically validated to perform well in determining affect on Twitter~\cite{b49,b26} and have been previously used to detect sarcasm~\cite{b48} and for mental health diagnoses from Twitter conversations~\cite{b47}.
Furthermore, positing that accounts perceived as being completely or partially automated may have a harder time getting verified, we leveraged Botometer -- a flagship bot detection solution~\cite{b20} that exposes a free public API. The system is trained on thousands of instances of social bots and the creators report AUC ROC scores between 0.89 and 0.95. Botometer utilizes features spanning the gamut from network attributes to temporal activity patterns. Additionally, it queries Twitter to extract 300 recent tweets and publicly available account metadata, and feeds these features to an ensemble of machine learning classifiers, which produce a Complete Automation Probability (CAP) score, which we acquire for every user in our dataset. We also augment our dataset with the temporal, network and content category automation scores for each user.
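A hedged sketch of the per-user Botometer lookup, assuming the \texttt{botometer-python} client and valid Twitter/RapidAPI credentials; the response fields shown here may differ across API versions.
\begin{verbatim}
import botometer  # assumes botometer-python and valid credentials

twitter_app_auth = {"consumer_key": "...", "consumer_secret": "...",
                    "access_token": "...", "access_token_secret": "..."}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="...", **twitter_app_auth)

result = bom.check_account("@example_handle")  # hypothetical handle
cap = result["cap"]["english"]                 # complete automation prob.
raw = result["raw_scores"]["english"]          # per-category scores
features = {"cap": cap, "network": raw["network"],
            "content": raw["content"], "temporal": raw["temporal"]}
\end{verbatim}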
Finally, we also look to glean into the topics that users tweet about. Topic modelling has been effectively used in categorizing trending topics on Twitter~\cite{b36} and inferring author attributes from tweet content~\cite{b28}. To this end, we ran the Gibbs sampling based Mallet implementation of Latent Dirichlet Allocation (LDA)~\cite{b50} setting the number of topics to 100 with 1000 iterations of sampling. Although such a topic model could be applied on a per tweet basis and subsequently aggregated by user, we find this approach does not work well, as most tweets are barely a sentence long. To overcome this difficulty, we follow the workaround adopted by previous studies by aggregating all
the tweets of a user into a single document~\cite{b44,b45}. In effect, this treatment can be regarded as an application of the author-topic model~\cite{b43} to tweets, where each document has a single author.
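The sketch below illustrates this per-user aggregation; we use gensim's variational \texttt{LdaModel} as a stand-in for the Mallet Gibbs sampler used in our pipeline (an assumption for illustration), and \texttt{user\_tweets} is a hypothetical mapping from user IDs to lists of tweet texts.
\begin{verbatim}
from gensim import corpora
from gensim.models import LdaModel

# one "document" per user, following the author-topic treatment
user_docs = {uid: " ".join(tws).lower().split()
             for uid, tws in user_tweets.items()}
dictionary = corpora.Dictionary(user_docs.values())
corpus = [dictionary.doc2bow(doc) for doc in user_docs.values()]
lda = LdaModel(corpus, num_topics=100, id2word=dictionary, passes=10)
# 100-dimensional topic mixture for the first user
topic_vec = lda.get_document_topics(corpus[0], minimum_probability=0.0)
\end{verbatim}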
\subsection{Rectifying Class Imbalance}
Focusing our analysis on the Twitter Anglosphere left us with a substantially skewed class distribution of 231,235 verified users and 175,930 non-verified users in our dataset. In keeping with existing research on imbalanced learning on Twitter data~\cite{b51,b52}, we used a two-pronged approach to rectify this -- a minority over-sampling technique named ADASYN~\cite{b21} which generates samples based on the feature space of the minority examples and a hybrid over and under-sampling technique called SMOTETomek which additionally also eliminates samples of the over-represented class~\cite{b22} and has been found to give exemplary results on imbalanced datasets\cite{b53}. Augmenting our classifier's training data in the aforementioned manner allowed us to attain near-perfect classification scores.
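Both rebalancing strategies are available in the \texttt{imbalanced-learn} package; a minimal sketch, where \texttt{X} and \texttt{y} denote the feature matrix and verification labels.
\begin{verbatim}
from imblearn.over_sampling import ADASYN
from imblearn.combine import SMOTETomek

# minority over-sampling only
X_ada, y_ada = ADASYN(random_state=0).fit_resample(X, y)
# hybrid over-sampling plus Tomek link removal
X_smt, y_smt = SMOTETomek(random_state=0).fit_resample(X, y)
\end{verbatim}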
The data collected is classified and summarized in Table~\ref{tab:1}. We intend to anonymize and make this dataset accessible to the public in a manner compliant with Twitter terms, once this work is published.
\section{Results and Analysis}
\label{sec:resultsAndAnalysis}
We commence our analysis by eliminating all features that could be deemed surplus to requirements. To this end, we employed an all-relevant feature selection model~\cite{b58}, which classifies features into three categories: confirmed, tentative and rejected. We only retain features that the model is able to confirm over 100 iterations.
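A minimal sketch of this all-relevant selection step using \texttt{BorutaPy}; the 100-iteration budget follows the text, while the random forest base estimator and its settings are assumptions.
\begin{verbatim}
from boruta import BorutaPy
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_jobs=-1, max_depth=5)
selector = BorutaPy(rf, n_estimators='auto', max_iter=100,
                    random_state=0)
selector.fit(X.values, y.values)         # expects numpy arrays
confirmed = X.columns[selector.support_]  # keep confirmed features only
\end{verbatim}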
To evaluate the effectiveness of our framework in discerning the verification status of users, we examine five classification performance metrics -- precision, recall, F1-score, accuracy and area under ROC curve -- for five classifiers. The first two methods, intended to establish baselines, were a Logistic Regressor and a Support Vector Classifier. Further, three methods were used to gauge how far the classification performance could be pushed using the features we collected. These were (1) a Generalized Additive Model trained by nested iterations, setting all terms to smooth, (2) a Multi Layered Perceptron with 3 hidden layers of 100, 30 and 10 neurons respectively, using Adam as an optimiser and ReLU as activation and (3) the state-of-the-art gradient boosting tool XGBoost with a maximum tree depth of 6 and a learning rate of 0.2. The results obtained are detailed in Table~\ref{tab:2}. The first batch of results are obtained by training on the original unadulterated training split. Even without rectifying class distribution biases, we are able to attain a high classification accuracy of 98.9\% on our most competitive classifier.
The second and third batches are trained on data rectified for class imbalance using the adaptive synthetic over-sampling method (ADASYN) and a hybrid over and under-sampling method (SMOTETomek), respectively. The ADASYN algorithm generates samples based on the feature space of the minority class data points and is a powerful method that has seen success across many domains~\cite{b59} in neutralizing the deleterious effects of class imbalance. The SMOTETomek algorithm combines the above over-sampling strategy with an under-sampling method called Tomek link removal~\cite{b60} to remove any bias introduced by over-sampling. This rectification did improve results, generally improving the performance of our two baseline choices and especially helping us inch closer to perfect performance with gradient boosting. However, particularly surprising was the detrimental effect of class re-balancing on the MLP classifier, which in all likelihood also learned the non-salient patterns in the re-balanced data. Also unexpectedly, the ADASYN re-balancing outperformed the more sophisticated SMOTETomek re-balancing in pushing the performance limits of the support vector (89.1\% accuracy) and gradient boosting (99.1\% accuracy) approaches. This might be owing to the fact that the Tomek link removal method omits informative samples close to the classification boundary, thus affecting the learned support vectors and decision tree splits.
Our results suggest that near perfect classification of the Twitter user verification status is possible without resorting to complex deep-learning pipelines that sacrifice interpretability.
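For reference, the two most competitive non-baseline models can be instantiated with the hyperparameters reported above, as in the minimal sketch below; data splitting and preprocessing are illustrative.
\begin{verbatim}
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

xgb = XGBClassifier(max_depth=6, learning_rate=0.2)
mlp = MLPClassifier(hidden_layer_sizes=(100, 30, 10),
                    activation='relu', solver='adam')
for clf in (xgb, mlp):
    clf.fit(X_train, y_train)
    proba = clf.predict_proba(X_test)[:, 1]
    print(accuracy_score(y_test, proba > 0.5),
          roc_auc_score(y_test, proba))
\end{verbatim}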
\begin{table*}[t!]
\begin{threeparttable}
\centering
\begin{tabular}{ | c | l | c | c | c | c | c | }
\hline
\textbf{Dataset} & \textbf{Classifier} & \textbf{Precision} & \textbf{Recall} & \textbf{F1-Score} & \textbf{Accuracy} & \textbf{ROC AUC Score}\Tstrut\Bstrut\\
\hline
&Logistic Regression &0.86 &0.86 &0.86 &0.859 &0.854\Tstrut\\
Original &Support Vector Classifier &0.89 &0.89 &0.89 &0.887 &0.883 \\
imbalanced &Generalized Additive Model\tnote{1} &0.97 &0.98 &0.98 &0.975 &0.976 \\
data &3-Hidden layer NN (100,30,10) ReLU+Adam &0.98 &0.98 &0.98 &0.983 &0.977 \\
&XGBoost Classifier &0.99 &0.99 &0.99 &\textbf{0.989} &\textbf{0.990}\Bstrut\\
\hline
&Logistic Regression &0.86 &0.86 &0.86 &0.856 &0.858\Tstrut\\
ADASYN &Support Vector Classifier &0.89 &0.89 &0.89 &0.891 &0.891 \\
class &Generalized Additive Model\tnote{1} &0.97 &0.97 &0.97 &0.974 &0.973 \\
rebalancing &3-Hidden layer NN (100,30,10) ReLU+Adam &0.96 &0.96 &0.96 &0.959 &0.957 \\
&XGBoost Classifier&0.99 &0.99 &0.99 &\textbf{0.991} &\textbf{0.991}\Bstrut\\
\hline
&Logistic Regression &0.86 &0.86 &0.86 &0.860 &0.856\Tstrut\\
SMOTETomek &Support Vector Classifier &0.90 &0.90 &0.90 &0.903 &0.901 \\
class &Generalized Additive Model\tnote{1} &0.98 &0.97 &0.98 &0.974 &0.974 \\
rebalancing &3-Hidden layer NN (100,30,10) ReLU+Adam &0.97 &0.97 &0.97 &0.966 &0.968 \\
&XGBoost Classifier&0.99 &0.99 &0.99 &\textbf{0.990} &\textbf{0.991}\Bstrut\\
\hline
\end{tabular}
\caption{Summary of classification performance of various approaches using metadata, temporal and contextual features on the original and balanced datasets.}\label{tab:2}
\begin{tablenotes}
\item[1] The generalized additive models were trained using all smooth terms.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\subsection{Feature Importance Analysis}
\begin{figure}[!hbt]
\centerline{\includegraphics[width=0.95\linewidth,height=9.5cm]{Figs/Features.jpg}}
\caption{\textbf{Normalized density estimations of the six most discriminative features for verified (blue) and non-verified users (red).}}
\label{fig:fig1}
\end{figure}
To compare the usefulness of various categories of features, we trained the gradient boosting classifier, our most competitive model, using each category of features alone. While we achieved the best performance with user metadata features, content features were not far behind. Evaluated on multiple randomized train-test splits of our dataset, user metadata and content features were both able to consistently surpass 0.88 AUC. Additionally, temporal features alone are able to consistently attain an AUC of over 0.79.
The individual feature importances were determined using the Gini impurity reduction metric output by the gradient boosting model trained on the unmodified dataset. To rank the most important features reliably, the model was trained 100 times with varying combinations of hyperparameters (column sub-sampling, data sub-sampling and tree child weight) and the features determined to be the most important were noted. The most reliably discriminative features and their normalized density distributions over the values they attain are detailed in Figure~\ref{fig:fig1}. These features generally exhibit intuitive patterns of separation based on which an informed prediction can be attempted, e.g., the very highest echelons of public list membership counts are populated exclusively by verified users while the very low extremes of propensity for authoritative speech as indicated by LIWC Clout summary scores are exclusively displayed by non-verified users.
The top 6 features are sufficient to reach a performance of 0.9 AUC in their own right, and the top 10 features are sufficient to further push those numbers up to 0.93. This is largely owing to the substantial redundancy observed among sets of highly correlated features, both linguistic (the tendency to use long words and impersonal pronouns highly correlates with high analytic LIWC summary scores) and temporal trajectory (most ascendant users score highly in both the 1 month and 3 month features in terms of tweets authored and followers gained).
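The ranking-stability procedure described above can be sketched as follows; the sampled hyperparameter ranges are our assumptions.
\begin{verbatim}
import numpy as np
from xgboost import XGBClassifier

rank_counts = np.zeros(X.shape[1])
rng = np.random.default_rng(0)
for _ in range(100):   # 100 perturbed refits, as in the text
    clf = XGBClassifier(max_depth=6, learning_rate=0.2,
                        colsample_bytree=rng.uniform(0.6, 1.0),
                        subsample=rng.uniform(0.6, 1.0),
                        min_child_weight=int(rng.integers(1, 6)))
    clf.fit(X, y)
    top = np.argsort(clf.feature_importances_)[::-1][:10]
    rank_counts[top] += 1          # tally top-10 appearances
reliable = np.argsort(rank_counts)[::-1][:10]
\end{verbatim}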
\subsection{Clustering and characterization}
\label{cluster}
In order to characterize accounts with a higher resolution than a binary verification status permits, we apply K-Means++ on the normalized user vectors, selecting the 30 most discriminative features indicated by the XGBoost model -- our most competitive classifier. We settle on 8 different clusters based on evaluations including the inflection point of the clustering inertia curve and the proportion of variance explained. In the interest of an intuitive visualization, two dimensional embeddings obtained using the t-SNE dimensionality reduction method~\cite{b54} are presented. Tuning the perplexity metric appropriately, the method considers the similarity of data points in our feature space and embeds them in a manner that reflects their proximity in the feature space. The embeddings are plotted and our classifier responses for members of the different clusters are detailed in Figure~\ref{fig:fig2}.
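A minimal sketch of this clustering-and-embedding pipeline; the perplexity value shown is an illustrative choice, and \texttt{X\_top30} denotes the matrix of the 30 selected features.
\begin{verbatim}
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

Xn = StandardScaler().fit_transform(X_top30)
labels = KMeans(n_clusters=8, init='k-means++',
                random_state=0).fit_predict(Xn)
# t-SNE is expensive; subsampling may be needed at this scale
emb = TSNE(n_components=2, perplexity=50,
           random_state=0).fit_transform(Xn)
\end{verbatim}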
\begin{figure}[!hbt]
\centerline{\includegraphics[width=0.95\linewidth,height=6cm]{Figs/Cluster.jpg}}
\caption{\textbf{t-SNE embeddings of accounts coloured by cluster. The distribution of verification probabilities by cluster, as predicted by our classifier, are faceted on the right.}}
\label{fig:fig2}
\end{figure}
Investigating these clusters allows us to further unravel combinations of attributes that strengthen a user's case for verification. Clusters C0 and C2 are composed nearly exclusively of non-verified users. Cluster C0 can largely be characterized as the Twitter layman with a high proportion of experiential tweets. This narrative further plays out in our collected features with members of this cluster on average having short tweets, high incidence of verb usage and scoring especially high in the LIWC Authenticity summary. Cluster C2 can be characterized as an amalgamation of accounts exhibiting bot-like behavior. Members of this cluster scored highly on the complete, network and content automation scores in our feature set. Furthermore, members in C2 possessed attributes previously linked to spammers such as copious usage of hashtags~\cite{b56} and external links~\cite{b57}. Manual inspection verified the substantial presence of automated content such as local weather updates in this cluster. Unsurprisingly, members of this cluster were predicted to possess the lowest verification probability by our classifier.
The composition of clusters C4 and C6 leans towards verified users, with members of C4 having a tendency to post longer tweets and retweet more frequently than author content, while members of C6 almost exclusively retweet on the platform, with slightly over 93\% of their content being such. Cluster C5 consists almost entirely of verified users and includes the elite Twitterati that comprise the core of verified users on the platform. These users have by far the highest list memberships on average while also scoring very highly on the LIWC Clout summary. Predictably, members of this cluster were predicted to possess the highest verification probability by our classifier.
\begin{table}[ht!]
\begin{threeparttable}
\centering
\begin{tabular}{ | c | c | c | c | }
\hline
\textbf{Cluster} & \textbf{Population} & \textbf{Accuracy} & \textbf{ROC AUC Score}\Tstrut\Bstrut\\
\hline
C0 & 19462 & 0.996 & 0.989\Tstrut\\
C1 & 26259 & 0.986 & 0.986\\
C2 & 19356 & 0.994 & 0.984\\
C3 & 46178 & 0.988 & 0.987\\
C4 & 90843 & 0.989 & 0.987\\
C5 & 105701 & 0.993 & 0.986\\
C6 & 39248 & 0.990 & 0.989\\
C7 & 60118 & 0.987 & 0.986\Bstrut\\
\hline
\end{tabular}
\caption{Classification performance of our most competitive model broken down by cluster.}\label{tab:3}
\end{threeparttable}
\end{table}
The remaining clusters C1, C3 and C7 are comprised of a mix of verified and non-verified users. However, further inspection revealed that they have very divergent trajectories. Members of cluster C1 are ascendant both in terms of reach and activity levels as evidenced by the proportion of their followers gained and statuses authored in the last one and three months of our collection period. These members can be said to constitute a nouveau-elite group of users. This is further backed up by the fact that these users are lacking in their presence in public lists as compared to the very established elite in cluster C5. Manual inspection also verifies that many of these users have attained verification during our collection period. This is in stark contrast with members of C3 and C7 who are either stagnant or declining in their reach and activity levels and show very low engagement with the rest of the platform in terms of retweets and mentions. Remarkably, our classifier is able to make this distinction and rates members of C1 as slightly better candidates for verification on average than members of C3 or C7. The relative difficulty of classifying users in these mixed clusters is demonstrated in the performance breakdown detailed in Table~\ref{tab:3}.
\section{Topic Analysis for Verified vs Non-Verified Users}
\label{sec:topicAnalysis}
Having deduced important predictive features present in a user's metadata, linguistic style and activity levels over time with respect to verification status, we next investigate the presence of similar predictive patterns in the choice and variety of tweet topic usage amongst users.
\subsection{Content Topics}
\begin{figure}[!bht]
\centerline{\includegraphics[width=0.95\linewidth,height=9.5cm]{Figs/Topics.jpg}}
\caption{\textbf{Normalized density estimations of usage for the six most discriminative topics for verified (blue) and non-verified users (red). Listed alongside are the top three most probable keywords for each topic.}}
\label{fig:fig3}
\end{figure}
\begin{table*}[t!]
\begin{threeparttable}
\centering
\begin{tabular}{ | r | c | c | c | c | c | }
\hline
\textbf{Classifier} & \textbf{Precision} & \textbf{Recall} & \textbf{F1-Score} & \textbf{Accuracy} & \textbf{ROC AUC Score}\Tstrut\Bstrut\\
\hline
Generalized Additive Model\tnote{1} &0.83 &0.83 &0.83 &0.832 &0.831 \Tstrut\\
3-Hidden layer NN (100,30,10) ReLU+Adam &0.88 &0.88 &0.88 &\textbf{0.882} &\textbf{0.880} \\
XGBoost Classifier &0.82 &0.82 &0.82 &0.824 &0.823 \Bstrut\\
\hline
\end{tabular}
\caption{Summary of classification performance of various approaches on inferred topics.}\label{tab:4}
\begin{tablenotes}
\item[1] The generalized additive models were trained using all smooth terms.
\end{tablenotes}
\end{threeparttable}
\end{table*}
In order to obtain a topical breakdown of a user's tweets in an unsupervised manner, we ran the Gibbs sampling based Mallet implementation of Latent Dirichlet Allocation (LDA)~\cite{b50} with 1000 iterations of sampling. Narrowing down on the correct number of topics $T$ required us to execute multiple runs of the model while varying our choices for the number of topics. The model was executed for 30, 50, 100, 150 and 300 topics and the likelihood estimates were noted. It must be mentioned that in all cases the likelihood estimates stabilized well within the 1000 iteration limit we set. The likelihood keeps rising in value up to $T = 100$ topics, after which it sees a decline. This kind of profile is often seen when varying the hyperparameter of a statistical model, with the optimal model being rich enough to fit the information available in the data, yet not complex enough to begin fitting noise. This led us to conclude that the tweets we collected over a year are best accounted for by incorporating 100 separate topics. We set $\alpha = 50/T$ and $\beta = 0.01$, which are the default settings recommended in prior studies~\cite{b62}, and maintain the sum of the Dirichlet hyperparameters, which can be interpreted as the number of virtual samples contributing to the smoothing of the topic distribution, as constant. The chosen value of $\beta$ is small enough to permit a fine-grained breakdown of tweet topics covering various conversational areas.
We again commenced the prediction by pruning down our topical feature set using the all-relevant feature selection method used earlier~\cite{b58} in Section~\ref{sec:resultsAndAnalysis}. This allowed us to home in on the 76 topics that were confirmed to be predictive of verification status. To evaluate the effectiveness of our framework in discerning the verification status of users from topic cues, we examine five classification performance metrics -- precision, recall, F1-score, accuracy and area under ROC curve -- for the three classifiers that were most competitive in our previous classification task. These were (1) a Generalized Additive Model trained by nested iterations, setting all terms to smooth, (2) a Multi Layered Perceptron with 3 hidden layers of 100, 30 and 10 neurons respectively, using Adam as an optimiser and ReLU as activation and (3) the gradient boosting tool XGBoost with a maximum tree depth of 5 and a learning rate of 0.3. The results obtained are detailed in Table \ref{tab:4}. The results demonstrate that it is eminently possible to infer the verification status of a user with high accuracy purely from the distribution of topics they tweet about. The MLP classifier was the most competitive in this task, reliably pushing past 88.2\% accuracy.
In the interest of interpretability, we evaluate the predictive power of each topic with respect to the classification target. To this end, we obtain individual topic importances using the ANOVA F-Scores output by GAM -- our second most competitive model on this task. In order to rank the features reliably, the procedure is run on 50 random train-test splits of the dataset, and each topic's lowest F-Score across runs is noted as a conservative measure of its reliability. The most reliably discriminative topics and the normalized density distributions of their usage are detailed in Figure~\ref{fig:fig3}. Owing to multiple topics largely belonging to popular broad conversational categories such as sports and politics, some redundancy was observed in the way of multi-collinearity. This is further backed up by the fact that the top 15 most important topics alone can discern verification status with an AUC of 0.76 while the top 25 topics can push those numbers up to an AUC of 0.8, nearly approximating the GAM performance on the whole feature set (AUC 0.83). These topics generally exhibit intuitive patterns of separation based on which an informed prediction can be made, e.g., the users who tweet most frequently about climate change are all verified, while controversial topics like middle-east geopolitics are something verified users prefer to devote limited attention to.
\subsection{Topical Span}
\begin{figure}[!bht]
\centerline{\includegraphics[width=0.95\linewidth,height=6cm]{Figs/TopicMN.jpeg}}
\caption{\textbf{Square-root scaled proportion of users by optimal number of topics.}}
\label{fig:fig4}
\end{figure}
Peripheral aspects of topics such as their geographical distribution~\cite{b64} and the viability of embeddings they induce for sentiment analysis~\cite{b64} tasks have been explored before. This prompted us to extend our inquiry into peripheral measures such as inconsistencies in the variety and number of topics the two classes of users tweet about. In order to obtain an optimal mix of the number of topics per user in an unsupervised manner, we leveraged a Hierarchical Dirichlet Process (HDP) model implementation~\cite{b63} for topic inference. This method streams our corpus of tweets and performs an online Variational Bayes estimation to converge at an optimal number of topics $T$ for each user. Once again, we set $\alpha = 50/T$ and $\beta = 0.01$, which are the default settings recommended in existing studies~\cite{b62}.
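A per-user sketch using gensim's \texttt{HdpModel}; counting a topic as active when its per-document weight exceeds a small threshold is our assumption, since the pruning rule is not specified above.
\begin{verbatim}
from gensim import corpora
from gensim.models import HdpModel

def n_topics(user_docs, thresh=0.05):   # user_docs: token lists
    dictionary = corpora.Dictionary(user_docs)
    corpus = [dictionary.doc2bow(d) for d in user_docs]
    hdp = HdpModel(corpus, id2word=dictionary)
    active = set()
    for bow in corpus:                   # topics actually used
        active.update(t for t, p in hdp[bow] if p > thresh)
    return len(active)
\end{verbatim}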
The distributions of topic-set cardinality by verification status are detailed in Figure~\ref{fig:fig4}. Inspection of the distribution uncovers a clear trend, with non-verified users clearly being over-represented in the lower reaches of the distribution (1--4 topics), while a comparatively substantial portion of verified users are situated in the middle of the distribution (5--10 topics). Also noteworthy is the fact that the very upper echelons of topical variety in tweets are occupied solely by verified users. We posit that this may be owing to the fact that news handles (e.g., \href{https://twitter.com/BBC}{`@BBC'}: 13 topics) and content aggregators (e.g., \href{https://twitter.com/gifs}{`@GIFs'}: 21 topics) are over represented in the set of verified users. The validation of this assertion is left for future work.
\section{Conclusion}
\label{sec:conclusions}
The coveted nature of platform verification on Twitter has led to the proliferation of verification scams and accusations of systemic bias against certain ideological demographics. Our work attempts to uncover actionable intelligence on the inner workings of the verification system, effectively formulating a checklist of profile attributes a user can work to improve upon to render verification more attainable.
This article presents a framework that computes the strength of a user's case for verification on Twitter. We introduce our machine learning system that extracts a multitude of features per user, belonging to different classes: user metadata, tweet content, temporal signatures, expressed sentiment, automation probabilities and preferred topics. We also categorize the users in our dataset into intuitive clusters and detail the reasons behind their likely divergent outcomes from the verification procedure. Additionally, we demonstrate the role that a user's choice and variety of conversational topics plays in precluding or effecting verification.
Our framework represents a first-of-its-kind attempt at discerning and characterizing verification-worthy users on Twitter and is able to attain a near-perfect classification performance of 99.1\% AUC. We believe this framework will empower the average Twitter user to significantly enhance the quality and reach of their online presence without resorting to prohibitively priced social media management solutions.
\bibliographystyle{ACM-Reference-Format}
\balance
|
1,116,691,500,930 | arxiv | \section{Introduction}
Terahertz (THz) communication is considered as a key wireless technology to alleviate the spectrum bottleneck and support high data rates in the future\cite{tt1}. The THz band, ranging from $0.1$ to $10$ THz, supports huge transmission bandwidth, and owns multiple appealing transmission windows separated by the attenuation absorption peaks\cite{tt2}. Despite huge bandwidth on unlicensed spectrum, the challenge of using THz band spectrum comes from the severe propagation loss due to both spreading path loss and molecular absorption\cite{tt3}.
To compensate for the propagation loss, various technologies, e.g., massive multiple-input multiple-output (MIMO) \cite{m1}, coordinated multi-point transmission \cite{cm}, and intelligent reflecting surface\cite{b1}, can be integrated in THz communications to provide effective spatial diversity gains. In the THz massive MIMO systems as we concern, the transmitter and receiver equipped with large-scale antenna arrays can realize directional communication with sufficient beam gains by dynamically controlling the amplitude and phase shifts on each antenna element\cite{mm}. Nevertheless, the conventional beamforming technologies usually require accurate channel state information (CSI) between the transmitter and receiver for optimizing data transmission, which is challenging in THz systems since the pilot signals, generally being transmitted without adequate beam gains, may not be effectively detected by the receiver owing to the severe propagation loss\cite{b2}.
This issue has already been encountered in millimeter-wave (mmWave) systems. In this context, a new approach, called \emph{beam training}, has been proposed for effective directional communication by testing beam pairs without requiring any CSI\cite{t1,t2,t3,ywang,Qsu,ones,parallel,wzhong,thzh}. A feasible beam training scheme should contain the designs of codebook and training protocol\cite{wide1,wide2,wide3,wide4,wide5}, in which the former determines the radiation pattern of the beams (i.e., codewords), while the latter focuses on how to use these predefined beams to realize beam alignment at the transmitter and receiver. After a successful beam alignment, beam tracking technologies can be applied for mobile transceivers, which assist to reduce the training overhead\cite{tr1,tr2,tr3,mk1,mk2,mk3,tr4,tr5}. It is worth noting that the existing beam tracking techniques are developed independently of the beam training techniques, which may rely on a certain antenna geometry, transceiver architecture, as well as form of channel information. To facilitate a generic system design, a unified beam tracking and training design is thus stringently needed. Besides, most existing schemes are tailored for the uniform linear array, i.e., 2D beamforming. However, due to the high directivity of THz wave, 3D beamforming by uniform planar array (UPA) has practical potential for emerging THz applications, e.g., efficient integrated networks of terrestrial links, unmanned aerial vehicles (UAV), and satellite communication systems\cite{wm,sk}.
To this end, we propose a unified 3D beam training and tracking procedure for THz communications in this paper. As a holistic design, this procedure contains many novel aspects, in terms of architecture, framework, 3D codebook, and training/tracking protocol. In particular, this procedure only needs to search the codewords according to our proposed protocol, instead of calculating the real-time beamforming according to the CSI, thus facilitating the low-complexity implementation of beam training and tracking in THz communications in practice. It is worth mentioning that our proposed scheme caters to the line-of-sight (LoS) propagation path, which is the dominant component of the THz channel. When it is applied to the multi-path lower-frequency channel, e.g., in the mmWave indoor scenario, the efficiency may be compromised since the received signals would suffer interference from the non-negligible non-line-of-sight (NLoS) components. The contributions of this paper are summarized as follows.
\begin{itemize}
\item We consider a novel quadruple-uniform planar array (QUPA) architecture that covers the full azimuth range and a $\frac{\pi }{2}$ range in the elevation domain, in which each UPA covers only a $\pm\frac{\pi }{4}$ range in both azimuth and elevation. Compared to the conventional single UPA architecture that covers a $\pm\frac{\pi }{2}$ range in both azimuth and elevation, each UPA in the QUPA has substantially less angular deflection and the beam squint loss can be reduced. Besides, since each UPA only serves a confined range, higher array gains can be achieved by using directional antenna elements tailored to this coverage.
\item We propose a holistic communication framework to build a unified 3D beam training and tracking procedure. Instead of performing beam training or tracking over a fixed frequency, our proposed communication framework adopts dynamic on-demand beam training/tracking depending on the real-time quality of service. This can effectively reduce potential outages that may occur in the fixed-frequency-based conventional schemes, owing to the narrow-beam transmission and fast movement of transceivers in the THz communications.
\item For realizing beam training, we first develop a new 3D hierarchical codebook that pre-defines some codewords for narrow beams and wide beams stage-by-stage. Although the 3D beams can be simply written as the Kronecker product of the beams in the 2D codebook\cite{wide5}, this approach yields irregular beam coverage since the beams' azimuth distribution varies across elevation angles\footnote{For our considered QUPA architecture, if straightforwardly using the Kronecker product of existing 2D codewords, the azimuth coverage expands when beams are above/below 90 degrees of elevation angle, which makes the total coverage of the QUPA unable to constitute an exact sphere.}. By contrast, our proposed approach specifies how to judiciously design the distribution of beams within a given 3D coverage requirement, which guarantees the maximum worst-case training performance. Then, we develop a new 3D training protocol to find the optimal narrow-beam pair based on our proposed codebook, which incurs significantly lower training complexity compared to the existing schemes. The codebook and the protocol are developed based on a 3D grid, and we call this scheme grid-based (GB) training.
\item For realizing beam tracking, we develop a protocol that searches the codewords in a fast and efficient manner, rather than estimating the channel variations via, e.g., location-based prediction, angular-based prediction, or Kalman filters, as done in high-complexity conventional schemes. The proposed protocol combines two tracking modes with different search times. The first mode needs to search the beams in the vicinity of the formerly used beam pair on our predefined grid, while the second one directly chooses a new beam pair for connection based on the changing trend of the previously used beam pairs on the grid. As there are two tracking modes jointly realizing the beam alignment, we call this scheme grid-based hybrid (GBH) tracking.
\item Numerical results demonstrate the superiority of our proposed beam training and tracking over the benchmark methods. Compared to the existing training codebooks, our proposed wide beams have a smaller dead zone with the lowest misalignment probability during the training, while our proposed narrow beams show no overlap between different UPAs and yield the highest received SNR after the training. Compared to the existing tracking schemes, our proposed beam tracking yields the highest worst-case performance. By combining the first and second tracking modes, no outage occurs with our proposed beam tracking over the entire test period.
\end{itemize}
The rest of this paper is organized as follows. Section II introduces the system and describes the problem. In Section III, we present the framework of our unified beam training and tracking procedure. Section IV develops the beam training and tracking approaches. Section V demonstrates the performance improvement of the proposed scheme through the numerical results. Finally, we conclude the paper in Section VI.
\indent \emph{Notation:} We use small normal face for scalars, small bold face for vectors, and capital bold face for matrices. The superscript ${{\rm{\{ }} \cdot \}^T}$ and ${{\rm{\{ }} \cdot \}^H}$ denote the transpose and Hermitian transpose, respectively. $\mathcal{CN}(\mu,\sigma^2)$ means circularly symmetric complex Gaussian (CSCG) distribution with mean of $\mu$ and variance of $\sigma^2$. $|\cdot|$ represents the modulus operator. $\bmod _N (i)$ returns the remainder after division of $i$ by $N$. ${\rm{ceil}}(\cdot)$ returns the nearest integer greater than or equal to its argument.
\section{System and Problem Descriptions}
In this section, we introduce the considered system model and formulate the problems of beam training and tracking.
\begin{figure}
\centering
\includegraphics[width=3in]{model.eps}
\caption{(a) QUPA geometry. (b) Transmit and receive ranges of QUPA on the $xy$-plane. (c) Transmit and receive ranges of QUPA on the $yz$-plane.}\label{mo}
\vspace{-12pt}
\end{figure}
\subsection{System Model}
We consider a point-to-point THz massive MIMO system with four half-wave spaced UPAs, i.e., QUPA, equipped at the transmitter and receiver, respectively. Without loss of generality, we assume that both the transmitter and receiver have the same architecture where four identical UPAs with $N_a$ elements are equipped around a cube. As shown in Fig. \ref{mo}(a), we use $x$, $y$, and $z$-axes to refer to the axes of the standard Cartesian coordinate system for the QUPA. In the case of the first UPA, with $N_y$ and $N_z$ elements on the $y$ and $z$-axes respectively ($N_a=N_y N_z$), the array response vector can be expressed in a conventional form\footnote{We assume the signal phase at the center of the UPA is zero.}, i.e.,
\begin{equation}
\begin{split}
&{{\bf{a}}_1}(\phi ,\theta ) \!=\! \frac{1}{{\sqrt {{N_a}} }}[{e^{j\pi [ - \frac{{({N_y} - 1)}}{2}\sin \phi \sin \theta - \frac{{({N_z} - 1)}}{2}\cos \theta ]}},...,\\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\qquad{e^{j\pi ({n_y}\sin \phi \sin \theta + {n_z}\cos \theta )}},...,\\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\qquad\qquad{e^{j\pi [\frac{{({N_y} - 1)}}{2}\sin \phi \sin \theta + \frac{{({N_z} - 1)}}{2}\cos \theta ]}}{]^T},
\end{split}
\end{equation}
where $\phi$ and $\theta$ are the azimuth angle to $x$-axis and the elevation angle to $z$-axis respectively, ${n_y} = - \frac{{({N_y} - 1)}}{2} + 1, - \frac{{({N_y} - 1)}}{2} + 2,...,\frac{{({N_y} - 1)}}{2} - 1$, ${n_z} = - \frac{{({N_z} - 1)}}{2} + 1, - \frac{{({N_z} - 1)}}{2} + 2,...,\frac{{({N_z} - 1)}}{2} - 1$. Given that the perpendicular direction of the $k^\mathrm{th}$ array is $(\frac{{(k - 1)\pi }}{2},\frac{\pi }{2})$, the response vector of the $k^\mathrm{th}$ array can be thereby written as
\begin{align}\label{array}
&{{\bf{a}}_k}(\phi ,\theta ) =\notag\\
&\quad\frac{1}{{\sqrt {{N_a}} }}[{e^{j\pi \left[ { - \frac{{({N_y} - 1)}}{2}\sin \left( {\phi - \frac{{(k - 1)\pi }}{2}} \right)\sin \theta - \frac{{({N_z} - 1)}}{2}\cos \theta } \right]}},...,\notag\\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;{e^{j\pi \left[ {{n_y}\sin \left( {\phi - \frac{{(k - 1)\pi }}{2}} \right)\sin \theta + {n_z}\cos \theta } \right]}},...,\\
&\qquad \qquad \qquad{e^{j\pi \left[ {\frac{{({N_y} - 1)}}{2}\sin \left( {\phi - \frac{{(k - 1)\pi }}{2}} \right)\sin \theta + \frac{{({N_z} - 1)}}{2}\cos \theta } \right]}}{]^T}.\notag
\end{align}
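For concreteness, the response vector in (\ref{array}) can be evaluated numerically as sketched below; half-wave element spacing yields the factor $\pi$ in the phase terms, and the function name is illustrative.
\begin{verbatim}
import numpy as np

def a_k(k, phi, theta, Ny, Nz):
    # symmetric element indices -(N-1)/2, ..., (N-1)/2
    ny = np.arange(Ny) - (Ny - 1) / 2
    nz = np.arange(Nz) - (Nz - 1) / 2
    u = np.sin(phi - (k - 1) * np.pi / 2) * np.sin(theta)
    v = np.cos(theta)
    # phase of each element: pi * (ny*u + nz*v)
    phase = np.pi * np.add.outer(ny * u, nz * v).ravel()
    return np.exp(1j * phase) / np.sqrt(Ny * Nz)
\end{verbatim}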
To provide omni-directional communication with adequate array gains, the four UPAs are tailored for beamforming in four different spatial ranges by using directional antenna elements. As shown in Fig. \ref{mo}(b) and (c), each array is dedicated to transmitting and receiving signals only in the range within $\pm \frac{\pi }{4}$ of the perpendicular direction of the array, in both azimuth and elevation domains. As such, the transmit/receive range of the $k^\mathrm{th}$ array is denoted by
\begin{equation}\label{omi}
{\Omega _k}=\left\{ {({\phi _k},{\theta _k})\left| {\begin{array}{*{20}{c}}
{ - \frac{\pi }{4} + (k - 1)\frac{\pi }{2} \le {\phi _k} \le \frac{\pi }{4} \!+\! (k - 1)\frac{\pi }{2},}\\
{\frac{\pi }{4} \le {\theta _k} \le \frac{{3\pi }}{4}.}
\end{array}} \right.} \right\}.
\end{equation}
Let $s$ denote a transmitted symbol with unit power to the $k^\mathrm{th}$ transmit UPA, the processed received signal from the $m^\mathrm{th}$ receive UPA can be expressed as
\begin{equation}\label{rec}
{y_{k,m}} = \sqrt P {\bf{w}}_m^H {{{\bf{H}}_{k,m}}{{\bf{f}}_k}s} + {\bf{w}}_m^H{\bf{n}},
\end{equation}
where $P$ represents the transmit power, ${{\bf{H}}_{k,m}}\in {\mathbb{C}^{N_a \times N_a}}$ is the channel matrix between the $k^\mathrm{th}$ transmit UPA and $m^\mathrm{th}$ receive UPA, $s$ is the data symbol, ${\bf{f}}_k \in {\mathbb{C}^{N_a \times 1}}$ (resp. ${\bf{w}}_m \in {\mathbb{C}^{N_a \times 1}}$) is the normalized beamforming precoder (resp. decoder) at $k^\mathrm{th}$ (resp. $m^\mathrm{th}$) UPA, and ${\bf{n}} \sim \mathcal{CN}({\bf{0}},\sigma^2{\bf{I}})$ is the zero-mean additive Gaussian noise with power $\sigma^2$. Hence, the decoding signal-to-noise ratio (SNR) of $s$ from the $k^\mathrm{th}$ transmit UPA to the $m^\mathrm{th}$ receive UPA is given by
\begin{equation}\label{rec2}
{\Gamma _{k,m}} = \frac{P}{{{\sigma ^2}}}{\left| {{\bf{w}}_m^H {{{\bf{H}}_{k,m}}{{\bf{f}}_k}} } \right|^2}.
\end{equation}
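The following sketch evaluates (\ref{rec2}) for a rank-one LoS channel of the form introduced in the next subsection, with matched-filter beams as an illustrative choice of ${\bf{f}}_k$ and ${\bf{w}}_m$; antenna element gains and the path-gain value are omitted or assumed for brevity.
\begin{verbatim}
import numpy as np  # a_k(.) is the response-vector helper above

alpha = 1e-4                                 # assumed LoS complex gain
at = a_k(1, 0.1, np.pi / 2, 16, 16)          # departure response (k = 1)
ar = a_k(3, 0.1 + np.pi, np.pi / 2, 16, 16)  # arrival response (m = 3)
H = alpha * np.outer(ar, at.conj())          # rank-one LoS channel
f, w = at, ar                                # matched-filter beams
P, sigma2 = 1.0, 1e-9
gamma = P / sigma2 * np.abs(w.conj() @ H @ f) ** 2
# for unit-norm response vectors this equals P*|alpha|^2/sigma2
\end{verbatim}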
\subsection{Channel Model}
THz massive MIMO channels generally consist of one LoS path and a few NLoS paths. Accordingly, we adopt the Saleh-Valenzuela
channel model for THz communications. As such, channel ${{\bf{H}}_{k,m}}$ in (\ref{rec}) and (\ref{rec2}) can be further specified as
\begin{subequations}
\begin{align}
&{{\bf{H}}_{k,m}} = {\bf{H}}_{k,m}^{{\rm{LoS}}} + {\bf{H}}_{k,m}^{{\rm{NLoS}}},\\
&{\bf{H}}_{k,m}^{{\rm{LoS}}} = \sqrt{{G_t^k}{G_r^m}{F_k}({\phi _t},{\theta _t}){F_m}({\phi _r},{\theta _r})}\notag\\
&\qquad\qquad\qquad\qquad\qquad\times\alpha _{\rm{L}}{{\bf{a}}_m}({\phi _r},{\theta _r}){{\bf{a}}_k}{({\phi _t},{\theta _t})^H}\label{chan},\\
&{\bf{H}}_{k,m}^{{\rm{NLoS}}}= \;\sum\limits_{l = 1}^{L{\rm{ - }}1} \sqrt{{{G_t^k}{G_r^m}{F_k}(\phi _t^l,\theta _t^l){F_m}(\phi _r^l,\theta _r^l)}}\notag\\
&\qquad\qquad\qquad\qquad\qquad\times\alpha _{\rm{N}}^l {{\bf{a}}_m}(\phi _r^l,\theta _r^l){{\bf{a}}_k}{(\phi _t^l,\theta _t^l)^H},
\end{align}
\end{subequations}
where ${\bf{H}}_{k,m}^{{\rm{LoS}}}$ and ${\bf{H}}_{k,m}^{{\rm{NLoS}}}$ are the LoS and NLoS components, respectively. $L$ denotes the number of propagation paths between the transmitter and receiver. ${\alpha}_{{\rm{L}}}$ describes the complex gain of the LoS path and ${\alpha _{{\rm{N}}}^l}$ is the complex gain of the $l^\mathrm{th}$ NLoS path. ${{\bf{a}}_k}({\phi _t},{\theta _t})$ and ${{\bf{a}}_m}({\phi _r},{\theta _r})$ are the normalized transmit and receive array response vectors, which follow the definition given in (\ref{array}). $ ({\phi _t},{\theta _t})$ and $({\phi _r},{\theta _r})$ (resp. $(\phi _t^l,\theta _t^l)$ and $(\phi _r^l,\theta _r^l)$) are the LoS path's (resp. $l^\mathrm{th}$ NLoS path's) azimuth and elevation angles of departure and arrival (AoDs/AoAs), respectively\footnote{We emphasize that, different from the convention that the AoDs/AoAs are defined for a single UPA, in this paper the path angles are defined for the QUPA, as shown in Fig. \ref{mo}.}. ${G_t^k}$ and ${G_r^m}$ are the transmit and receive antenna gains at the $k^\mathrm{th}$ transmit UPA and the $m^\mathrm{th}$ receive UPA, respectively, which can be written as
\begin{equation}\label{ff}
G_t^k \;({\mathrm{or}}\;G_r^m)=\frac{{4\pi N_a }}{{\int\limits_{\varphi = 0}^{2\pi } {\int\limits_{\theta = 0}^\pi {F_k(\varphi ,\theta )\sin \theta d\theta d\varphi } } }},
\end{equation}
where ${F_k}({\phi },{\theta })$ is the normalized power radiation pattern of the antenna elements at the $k^\mathrm{th}$ UPA. As each UPA only serves a confined range, higher array gains can be achieved by using directional antenna elements tailored to the designated coverage. To this end, an ideal power radiation pattern ${F_k}({\phi },{\theta })$ can be expressed as
\begin{equation}\label{fk}
F_k(\phi ,\theta ) = \left\{ {\begin{array}{*{20}{c}}
{1}, & {(\phi ,\theta )\in {\Omega _k}}
\\
0 , &{{\rm{otherwise}}}
\end{array} ,} \right.
\end{equation}
which yields the transmit and receive antenna gains of $G_t^k=G_r^m=4\sqrt{2}N_a$. Moreover, due to the limited angular deflection, i.e., $[ - \frac{\pi }{4},\frac{\pi }{4}]$, the beam squint loss can be effectively reduced in wideband beamforming. According to \cite{bs1}, the normalized wideband beam gain is the maximum value of the beam patterns' intersection over all frequencies of the signal band, which is given by
\begin{figure*}
\centering
\includegraphics[width=5.4in]{frame.eps}
\caption{The diagram of our proposed framework on unified procedure.}\label{fra}
\vspace{-12pt}
\end{figure*}
\begin{equation}
{A_f}(\varphi ) = \left| {\frac{{\sin (\frac{{\sqrt{N_a}\pi B}}{{4{f_c}}}\sin {\varphi})}}{{\sqrt{N_a}\sin (\frac{{\pi B}}{{4{f_c}}}\sin \varphi )}}} \right|,
\end{equation}
where $B$ is the baseband bandwidth, $\varphi$ is the beam's direction, and $f_c$ is the carrier frequency. Thus, the maximum beam squint loss can be expressed as $1 - {A_f}({\varphi _{\max }})$, in which $\varphi _{\max }$ is the maximum beam angular deflection. Hence, compared to the conventional UPA, the QUPA reduces the beam squint loss by the fraction
\begin{equation}
L = \frac{{{A_f}(\pi /4) - {A_f}(\pi /2)}}{{1 - {A_f}(\pi /2)}}.
\end{equation}
The reduction $L$ decreases as $\frac{B}{f_c}$ increases. For example, when $\frac{B}{f_c}=5\%$ (resp. $20\%$), the QUPA reduces the beam squint loss by $49.5\%$ (resp. $41.4\%$) when $N_a=256$.
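As a numerical sanity check of these figures, the following Python sketch (illustrative only) evaluates $A_f(\varphi)$ and the resulting reduction $L$ for $N_a=256$:
\begin{verbatim}
import numpy as np

def A_f(varphi, ratio, Na=256):      # ratio = B / f_c
    x = np.pi * ratio / 4 * np.sin(varphi)
    return abs(np.sin(np.sqrt(Na) * x) / (np.sqrt(Na) * np.sin(x)))

for ratio in (0.05, 0.20):
    L = ((A_f(np.pi / 4, ratio) - A_f(np.pi / 2, ratio))
         / (1 - A_f(np.pi / 2, ratio)))
    print(f"B/fc = {ratio:.0%}: reduction = {L:.1%}")  # 49.5%, 41.4%
\end{verbatim}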
\subsection{Problem Statement}
To enable reliable THz communication, the precoder ${\bf{f}}_k$
at the $k^\mathrm{th}$ transmit UPA and the decoder ${\bf{w}}_m$ at the $m^\mathrm{th}$ receive UPA need to be optimized under the normalized power constraint to maximize the decoding SNR specified in (\ref{rec2}), which is equivalent to solving the following problem
\begin{equation}\label{tran}
\begin{split}
\{ {\bf{w}}_m^{{\rm{opt}}},{\bf{f}}_k^{{\rm{opt}}}\} = \arg &\max {\left| {{\bf{w}}_m^H{{\bf{H}}_{k,m}}{{\bf{f}}_k}} \right|^2}\\
&{\rm{s}}{\rm{.t}}{\rm{.}}\;\;{\left\| {{{\bf{f}}_k}} \right\|^2} \le 1,\;\;{\left\| {{{\bf{w}}_m}} \right\|^2} \le 1.
\end{split}
\end{equation}
Provided that ${{\bf{H}}_{k,m}}$ is perfectly known at the transmitter and receiver, the optimal precoder ${\bf{f}}_k^{{\rm{opt}}}$ and decoder ${\bf{w}}_m^{{\rm{opt}}}$ can be easily derived by applying the singular value decomposition (SVD) on ${{\bf{H}}_{k,m}}$. However, pilot signals with omnidirectional radiation may not be effectively detected due to the severe path loss in THz channels. In light of this, we need to find ${\bf{f}}_k^{{\rm{opt}}}$ and ${\bf{w}}_m^{{\rm{opt}}}$ by testing the precoder-decoder pairs (i.e., beam pairs) predefined in a codebook, without any channel state information. This process is referred to as \emph{beam training.}
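For intuition, the following Python sketch (illustrative only; a random Gaussian matrix stands in for an actual channel realization) verifies that the SVD-based pair attains the largest achievable gain:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Na = 256
H = rng.standard_normal((Na, Na)) + 1j * rng.standard_normal((Na, Na))
U, S, Vh = np.linalg.svd(H)
w_opt = U[:, 0]                 # optimal decoder (unit norm)
f_opt = Vh[0, :].conj()         # optimal precoder (unit norm)
gain = abs(w_opt.conj() @ H @ f_opt) ** 2
print(np.isclose(gain, S[0] ** 2))   # True: largest singular value squared
\end{verbatim}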
After obtaining ${\bf{f}}_k^{{\rm{opt}}}$ and ${\bf{w}}_m^{{\rm{opt}}}$ for ${{\bf{H}}_{k,m}^{{\rm{LoS}}}}$ over a transmission interval $T$, the LoS channel might change in the next interval due to the movement (or rotation) of the transmitter and receiver, i.e., ${\bf{H}}_{k,m}^{{\rm{LoS}}}(T+1) \ne {\bf{H}}_{k,m}^{{\rm{LoS}}}(T)$. Thus, one strategy for maintaining the communication is to re-apply the beam training in transmission interval $T+1$. However, in practice, the positions of the transmitter and receiver vary gradually, which implies that ${\bf{H}}_{k,m}^{{\rm{LoS}}}(T + 1)$ is closely related to ${\bf{H}}_{k,m}^{{\rm{LoS}}}(T)$. In light of this, we can find ${\bf{f}}_k^{{\rm{opt}}}(T+1)$ and ${\bf{w}}_m^{{\rm{opt}}}(T+1)$ quickly by testing beam pairs in a reduced codebook based on the prediction of ${\bf{H}}_{k,m}^{{\rm{LoS}}}(T+1)$. This process is referred to as \emph{beam tracking.}
In the next sections, we aim to design a unified 3D beam training and tracking procedure for our considered system, with both low computational complexity and low time consumption.
\section{Framework on A Unified 3D Beam Training and Tracking Procedure}\label{framew}
In this section, we present a novel communication framework whose beam training/tracking frequency dynamically adapts to the real-time communication quality so as to reduce outages. Fig. {\ref{fra}} shows the block diagram of the framework for the unified training and tracking procedure.
This procedure starts with the beam training to find the optimal beam pair to establish reliable communication. Then, with the obtained beam pair, data is transmitted in the subsequent time blocks. When the decoding SNR of the data falls below a threshold, which indicates that the adopted beam pair is no longer the optimal one, beam tracking mode 1 is applied to find a new beam pair. Beam tracking mode 1 only needs the information of the formerly recorded beam pair. When a reliable communication link is established, data is transmitted in the subsequent time blocks until the decoding SNR declines below the threshold again. If the number of recorded beam pairs is no less than $2$, the procedure gives priority to beam tracking mode 2, which is faster than mode 1, to find a new beam pair. Once beam tracking mode 2 fails to find a reliable beam pair, beam tracking mode 1 is subsequently applied as a fallback. If both tracking modes are ineffective, the beam training is applied again in the unified procedure. It is worth mentioning that the frequency of applying beam training/tracking depends on the real-time SNR instead of being fixed. How to set the threshold value will be discussed in Section \ref{trackingpro}.
\section{Beam Training and Tracking}
In this section, we first introduce an exhaustive 3D beam training to show the basic approach of beam alignment. Next, we develop a more efficient GB beam training, including the hierarchical codebook design and the training protocol, to achieve a better performance-complexity trade-off. Finally, we develop a simple yet effective GBH beam tracking that contains two tracking modes for jointly realizing fast beam alignment.
\subsection{Beam Training}
\subsubsection{Exhaustive 3D Beam Training}\label{exhs}
Note that the small wavelength and severe path loss significantly limit scattering in THz communication, where the gain of the NLoS paths is much lower than that of the LoS counterpart\cite{ITUR}. Therefore, in this paper, we only consider the LoS component in the beam training. By substituting (\ref{chan})
into (\ref{tran}), the beam training problem is equivalent to
\begin{equation}\label{ori}
\begin{split}
\{ {\bf{w}}_m^{{\rm{opt}}},{\bf{f}}_k^{{\rm{opt}}}\} = \arg &\max {\left| {{\bf{w}}_m^H\underbrace {{{\bf{a}}_m}({\phi _r},{\theta _r}){{\bf{a}}_k}{{({\phi _t},{\theta _t})}^H}}_{{\rm{cannot}}\;{\rm{be}}\;{\rm{obtained}}}{{\bf{f}}_k}} \right|^2}\\
&{\rm{s}}{\rm{.t}}{\rm{.}}\;{{\bf{f}}_k} \in \mathcal{F}_k,{{\bf{w}}_m} \in \mathcal{W}_m.
\end{split}
\end{equation}
Without the codebook constraint, an optimal solution to (\ref{ori}) is given by $\{ {\bf{w}}_m^{{\rm{opt}}} = {{\bf{a}}_m}({\phi _r},{\theta _r}),{\bf{f}}_k^{{\rm{opt}}} = {{\bf{a}}_k}({\phi _t},{\theta _t})\} $. Since the optimal beam pair follows the form of the array response vector, a straightforward method to reach a desired solution is to traverse all beam pairs from codebooks composed of array response vectors with different angles\cite{ywang}. The codebook for the $i^\mathrm{th}$ UPA contains $N^2$ narrow beams (i.e., codewords), with $N$ azimuth angles times $N$ elevation angles uniformly distributed in the range $\Omega_i$ (which is specified in (\ref{omi})). This method is also referred to as \emph{exhaustive 3D beam training.} However, when applied to our considered system with four UPAs, the transmitter and receiver each have $4N^2$ narrow beams. Thus, there are $16N^4$ beam pairs to be tested in the exhaustive 3D beam training, which is quite time-consuming when $N$ is large. Next, we propose a low-complexity yet effective GB beam training, including the designs of the hierarchical codebook and the training protocol.
\begin{figure}
\centering
\includegraphics[width=3in]{2sample.eps}
\caption{(a) An example of the beam patterns of two narrow beams. (b) Comparison between uniformly and non-uniformly distributed beams.}\label{2sam}
\vspace{-12pt}
\end{figure}
\subsubsection{Hierarchical Codebook Design}\label{HCD}
Our proposed 3D hierarchical codebook pre-defines codewords for narrow beams and wide beams stage-by-stage. The narrow beams act as the solution candidates, which determine the overall training performance. The wide beams are used for identifying the direction of the best narrow beam, which helps reduce the training complexity. Firstly, we design ${N^2} = {2^S}$ narrow beams that cover $\Omega_k$ in union, where $S$ is the number of stages of our proposed hierarchical codebook. These narrow beams lie in the bottom stage, i.e., stage $S$, and one narrow beam among them will be selected as the optimal solution after the beam training. Based on (\ref{ori}), all the narrow beams ought to follow the form of the array response vector. As such, the design of the ${N^2}$ narrow beams reduces to determining their directions, i.e., $\phi$ and $\theta$ in ${{\bf{a}}_k}(\phi ,\theta )$. However, we would like to point out that the directions of these narrow beams should not be uniformly distributed, due to the following fact. Assuming that an optimal decoder is used in the beam training, based on (\ref{ori}), the received normalized decoding power is given by
\begin{equation}
\begin{split}
&\mathop {\max }\limits_{{{\bf{f}}_k}} {\left| {{\bf{a}}_k}({\phi _t},{\theta _t})^H{{\bf{f}}_k} \right|^2}\\
&{\rm{s}}.{\rm{t}}.\;{{\bf{f}}_k} \in \mathcal{C}_k^S,
\end{split}
\end{equation}
where $\mathcal{C}_k^S$ represents the $N^2$ narrow beams to be designed in stage $S$ of $\mathcal{C}_k$. Due to the randomness of the wireless channel (random ${\phi _t}$ and ${\theta _t}$), the quality of the codewords $\mathcal{C}_k^S$ can be judged by their one-side worst-case performance, i.e.,
\begin{equation}\label{worst}
\begin{split}
&{\eta _{{\rm{worst}}}} = \mathop {\min }\limits_{{\phi _t},{\theta _t}} \mathop {\max }\limits_{{{\bf{f}}_k}} {\left| {{\bf{a}}_k}({\phi _t},{\theta _t})^H{{\bf{f}}_k} \right|^2}\\
&{\rm{s}}.{\rm{t}}.\;{{\bf{f}}_k} \in \mathcal{C}_k^S.
\end{split}
\end{equation}
To analyze the one-side worst-case performance of these narrow beams, we define the normalized narrow beam gain of ${{\bf{a}}_k}(\phi ,\theta )$ in the direction of $({\phi _t},{\theta _t})$ as
\begin{equation}\label{narrowg}
\begin{split}
A[{{\bf{a}}_k}(\phi ,\theta ),({\phi _t},{\theta _t})] &= \left| {{{\bf{a}}_k}{{(\phi_t ,\theta_t )}^H}{{\bf{a}}_k}({\phi},{\theta})} \right|.
\end{split}
\end{equation}
By plotting the normalized narrow beam gain of ${{\bf{a}}_k}(\phi ,\theta )$ in all directions, we obtain its beam pattern. Interestingly, we notice that the narrow beam is thinner in the boresight direction, while it is wider in the directions toward the coverage edge. As shown in Fig. \ref{2sam}(a), for the first UPA, the pattern of ${{\bf{a}}_1}(0 , \pi/2)$ is thinner than that of ${{\bf{a}}_1}(\pi/4 , \pi/4)$. In light of this, to guarantee a high worst-case performance, the beams in the center of $\Omega _k$ should be distributed tightly while those around the edge of $\Omega _k$ should be distributed loosely. For example, Fig. \ref{2sam}(b) shows the radiation patterns of $20$ narrow beams with different colors, uniformly or non-uniformly distributed on the $xy$-plane. The non-uniformly distributed beams, which are distributed tightly around $\phi=0$, yield improved worst-case performance.
Motivated by this, we endeavor to design the directions for narrow beams of $\mathcal{C}_k^S$ to guarantee the highest worst-case performance. As a result, in the bottom stage of our hierarchical codebook, the $N^2$ narrow beams of $\mathcal{C}_k^S$ are given by
\begin{subequations}\label{15o}
\begin{align}
C_k^S &= \left\{ {{{\bf{a}}_k}\left( {{\phi _n},{\theta _p}} \right)|n,p = 1,2,...,N} \right\},\;\\
{\phi _n} &= \arcsin \left( {\frac{{\sqrt 2 (2n - 1 - N)}}{{2N}}} \right) + \frac{{(k - 1)\pi }}{2},\label{15b}\\
{\theta _p} &= \arccos \left( {\frac{{\sqrt 2 (2p - 1 - N)}}{{2N}}} \right). \label{15c}
\end{align}
\end{subequations}
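For concreteness, the directions in (\ref{15o}) can be generated by the following Python sketch (illustrative only; the function name is ours):
\begin{verbatim}
import numpy as np

def narrow_beam_angles(N, k=1):
    n = np.arange(1, N + 1)
    u = np.sqrt(2) * (2 * n - 1 - N) / (2 * N)
    phi = np.arcsin(u) + (k - 1) * np.pi / 2   # azimuth angles, (15b)
    theta = np.arccos(u)                       # elevation angles, (15c)
    return phi, theta    # N azimuth angles x N elevation angles

phi, _ = narrow_beam_angles(N=16)
print(np.degrees(phi[[0, 7, 15]]))
# about [-41.5, -2.5, 41.5] deg; the spacing widens toward the edges
\end{verbatim}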
\begin{proposition}
If $N^2$ narrow beams (with $N$ azimuth angles times $N$ elevation angles) are adopted to cover $\Omega_k$ in union, the codewords proposed in (\ref{15o}) guarantee the highest worst-case performance (defined in (\ref{worst})), which is given by (normalized by the best-case performance)
\begin{equation}\label{p1}
{\eta _{{\rm{worst}}}} = \frac{{\sin \big[ {\frac{{\sqrt 2 {N_z}\pi }}{{4N}}} \big]\sin \big[ {\frac{{\sqrt 2 \beta {N_y}\pi }}{{4N}}} \big]}}{{{N_y}{N_z}\sin \big[ {\frac{{\sqrt 2 \pi }}{{4N}}} \big]\sin \big[ {\frac{{\sqrt 2 \beta \pi }}{{4N}}} \big]}},
\end{equation}
where
\begin{equation}\label{p2}
\beta = \left\{ {\begin{aligned}
&{1,\quad\qquad\qquad\qquad{\rm{when}}\;N\;{\rm{is}}\;{\rm{odd}}}\\
&{\sin \Big( {\arccos \frac{{\sqrt 2 }}{{2N}}} \Big),\;\;{\rm{when}}\;N\;{\rm{is}}\;{\rm{even}}}
\end{aligned}} \right..
\end{equation}
\end{proposition}
The proof is relegated to Appendix A. We have mentioned that the design of the wide beams in the upper stages is to reduce the training complexity, while the training performance is determined by the $N^2$ narrow beams of (\ref{15o}) in the bottom stage. Thus, Proposition 1 guarantees the normalized worst-case performance of our proposed GB beam training.
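As a numerical companion to Proposition 1, the following Python sketch (illustrative only) evaluates the closed-form guarantee in (\ref{p1}) and (\ref{p2}):
\begin{verbatim}
import numpy as np

def eta_worst(N, Ny=16, Nz=16):
    beta = 1.0 if N % 2 else np.sin(np.arccos(np.sqrt(2) / (2 * N)))
    c = np.sqrt(2) * np.pi / (4 * N)
    num = np.sin(Nz * c) * np.sin(beta * Ny * c)
    den = Ny * Nz * np.sin(c) * np.sin(beta * c)
    return num / den

print(eta_worst(N=16))   # about 0.65 for a 16 x 16 UPA with 256 beams
\end{verbatim}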
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{unirange.eps}
\caption{(a) An illustration of the range of the narrow beams shown on $\Theta (\theta )$ and $\Phi (\phi )$ on a 3D grid, where $N^2=16$. (b) Beams' distribution and their coverage in different stages on a 3D grid, where $S=4$.}\label{uni}
\vspace{-12pt}
\end{figure}
Next, we introduce our approach to designing the wide beams in the upper stages, i.e., stages $0$ to $S-1$. Each wide beam in stage $s$ covers two beams in stage $s+1$, while the beam in stage $0$ covers the whole range of $\Omega_k$. As such, there are $2^s$ beams in stage $s$. The codewords for wide beams are no longer array response vectors, and we use $\bm{\omega} _i^{k,s}$ to represent the $i^\mathrm{th}$ codeword in stage $s$ of the hierarchical codebook $\mathcal{C}_k$. For ease of illustrating their 3D range, we define two functions as $\Theta (\theta ) = - \cos \theta$ and $\Phi (\phi ) = \sin (\phi - \frac{{\pi (k - 1)}}{2}) + \sqrt{2} (k - 1)$. In this way, as shown in Fig. \ref{uni}(a), the range of the narrow beams can be represented by the squares on a 3D grid, where the beam direction is at the center of the square. Based on this representation, Fig. \ref{uni}(b) shows our proposed beams' distribution as well as their coverage in different stages.
According to the beam index in Fig. \ref{uni}(b) and the beams' distribution in (\ref{15o}), the codewords of the narrow beams can be expressed as
\begin{equation}\label{narrow}
\begin{split}
&\bm{\omega} _i^{k,S} = {{\bf{a}}_k}({\phi _n},{\theta _p}),\;\;i = 1,2,...,{N^2},\\
&n = {\bmod _N}(i-1)+1,\;\;p = {\rm{ceil}}(i/N),\;\;(\ref{15b}),\;\;(\ref{15c}),
\end{split}
\end{equation}
where $\bmod _N (i)$ returns the remainder after division of $i$ by $N$, and ${\rm{ceil}}(\cdot)$ denotes the ceiling function. To develop the codewords of wide beams for $\mathcal{C}_k$, we have to construct a dense grid that represents all directions in front of the $k^\mathrm{th}$ UPA, i.e.,
\begin{equation}
\hat\Omega_k \!=\! \Big\{ {(\phi ,\theta )|\phi \in [ - \frac{\pi }{2} \!+\! \frac{{\pi (k \!-\! 1)}}{2},\frac{\pi }{2} \!+\! \frac{{\pi (k \!-\! 1)}}{2}],\theta \in [0,\pi ]} \Big\},
\end{equation}
which is larger than $\Omega_k$. As there are $N\times N$ narrow beams within the range $\Omega_k$, as shown in Fig. \ref{gridK}, we construct $2N\times 2N$ grid blocks within this range and $4N\times 4N$ grid blocks in total within $\hat\Omega_k$\footnote{The number of grid blocks can be larger than $4N\times 4N$, which however does not bring noticeable performance gain for the design of wide beams.}. If each of the remaining $4N\!\times\! 4N\!-\!2N\!\times\! 2N$ blocks had the same size as those within $\Omega_k$, the total coverage would extend beyond $\hat\Omega_k$. Thus, we set them smaller and uniformly distributed on $\Theta (\theta )$ and $\Phi (\phi )$ to exactly cover $\hat\Omega_k$. As such, the center directions of the grid blocks for the $k^\mathrm{th}$ UPA can be represented as
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{gridK.eps}
\caption{A dense grid that represents all directions in front of the $k^ {\mathrm{th}}$ UPA, where $N^2=16$.}\label{gridK}
\vspace{-12pt}
\end{figure}
\begin{figure*}[t]
\begin{equation}
\left\{ {4N(\beta \!+\! m) \!+\! \alpha \!+\! n\left| {\begin{aligned}
&\;{{\rm{when }}\;m = -w,...,-1\;{\rm{and}}\;\delta,...,\delta\!+\!w\!-\!1:\;\;n = 1\!-\!w,...,\nu \! +\! w}\\
&\;\;{{\rm{when }}\;m = 0,1,...,\delta\!-\!1:\;\;n = 1\!-\!w,...,0\;{\rm{and}}\;\nu \! +\! 1,...,\nu\!+\!w\;}
\end{aligned}} \right.} \right\} \tag{28}
\end{equation}
\centering
\includegraphics[width=6.7in]{phase1.eps}
\caption{A two-phase protocol for the GB beam training, wherein $A^*=2$ and $B^*=4$.}\label{ph1}
\end{figure*}
\begin{small}
\begin{equation*}
\begin{split}
&(\phi _{{\rm{grid}}}^{k,j},\theta _{{\rm{grid}}}^{k,l})\;\;{\rm{with}}\;j= 1,2,...,4N,\;{\rm{and}}\;l= 1,2,...,4N,\\
&\phi _{{\rm{grid}}}^{k,j} \!=\! \left\{ {\begin{split}
&{\arcsin \left( {\frac{{(1 - \sqrt 2 /2)(2j - 1)}}{{2N}} - 1} \right) + \frac{{\pi (k - 1)}}{2},}\\
&{\arcsin \left( {\frac{{\sqrt 2 \left[ {2(j - N) - 1} \right]}}{{4N}} - \frac{{\sqrt 2 }}{2}} \right) + \frac{{\pi (k - 1)}}{2},}\\
&{\arcsin \left( {\frac{{(1 \!-\! \sqrt 2 /2)[2(j\!-\!3N) \!-\! 1]}}{{2N}} \!+\! \frac{{\sqrt 2 }}{2}} \right) \!+\! \frac{{\pi (k \!-\! 1)}}{2},}
\end{split}} \right.
\end{split}
\end{equation*}
\end{small}with piecewise $j = 1,...,N$, $j =N \!+\! 1,...,3N$, and $j =3N \!+\! 1,...,4N$, respectively.
\begin{small}
\begin{equation*}
\begin{split}
&\theta _{{\rm{grid}}}^{k,l} = \left\{ {\begin{split}
&{\arccos \left( {\frac{{(1 - \sqrt 2 /2)(2l - 1)}}{{2N}} - 1} \right),}\\
&{\arccos \left( {\frac{{\sqrt 2 \left[ {2(l - N) - 1} \right]}}{{4N}} - \frac{{\sqrt 2 }}{2}} \right),}\\
&{\arccos \left( {\frac{{(1 - \sqrt 2 /2)[2(l-3N) - 1]}}{{2N}} + \frac{{\sqrt 2 }}{2}} \right),}
\end{split}} \right.\;\\
\end{split}
\end{equation*}
\end{small}with piecewise $l = 1,...,N$, $l =N \!+\! 1,...,3N$, and $l =3N \!+\! 1,...,4N$, respectively. According to the proposed beams' distribution as well as their coverage shown in Fig. {\ref{gridK}}, the set of grid directions/blocks covered by $\bm{\omega} _i^{k,s}$ can be expressed as
\begin{subequations}
\begin{align}
&\Upsilon _i^{k,s} = \left\{ {(\phi _{{\rm{grid}}}^{k,j},\theta _{{\rm{grid}}}^{k,l})\;\left| {j \in \;{J_{s,i}},l \in {L_{s,i}}} \right.} \right\},\\
&{J_{s,i}} \!=\! \left\{ {N \!+\! \nu\cdot {{\bmod }_\mu }(i\!-\!1) \!+\! 1,...,N \!+\! \nu ({{\bmod }_\mu }(i\!-\!1)\!+\!1)} \right\},\\
&{L_{s,i}} = \left\{ {N + \delta ({\rm{ceil}}\left( {\frac{i}{\mu }} \right) - 1) + 1,...,N + \delta \cdot {\rm{ceil}}\left( {\frac{i}{\mu }} \right)} \right\},\\
&\nu = {2^{{\rm{ceil}}\left( {\frac{{S - s}}{2}} \right) + 1}},\;\;\delta = {2^{{\rm{ceil}}\left( {\frac{{S - s - 1}}{2}} \right) + 1}},\;\;\mu = {2^{{\rm{ceil}}\left( {\frac{{s - 1}}{2}} \right)}},\label{20d}
\end{align}
\end{subequations}
where $\nu$ represents the number of the grid blocks at the same elevation covered by $\bm{\omega} _i^{k,s}$, $\delta$ represents the number of the grid blocks at the same azimuth covered by $\bm{\omega} _i^{k,s}$, and $\mu$ represents the number of beams in stage $s$ at the same elevation.
Regarding the wide beams in stage $s$, some prior works \cite{wide1,wide3,thzh,wzhong} expect that $\bm{\omega} _i^{k,s}$ only achieves beam gain within its coverage $\Upsilon _i^{k,s}$ and achieves no gain outside it, i.e.,
\begin{equation}\label{dc}
{{\bf{a}}_k}{(\phi _{{\rm{grid}}}^{k,j},\theta _{{\rm{grid}}}^{k,l})^H}{\bm{\omega}} _i^{k,s} = \left\{ {\begin{aligned}
&{1,\;\;(\phi _{{\rm{grid}}}^{k,j},\theta _{{\rm{grid}}}^{k,l}) \in \Upsilon _i^{k,s}}\\
&{0,\;\;\;\;\;\;{\rm{otherwise}}\;\;\;\;\;}
\end{aligned}} \right.,
\end{equation}
holds true for all $j=1,2,...,4N$ and $l=1,2,...,4N$.
However, it is worth mentioning that the feasible wide beam realized by the beamformer cannot exactly fit (\ref{dc}) and only results in an approximate pattern, which has notable trenches between adjacent beams. This is because the requirement of a drastic jump/drop between $0$ and $1$ in (\ref{dc}) may squeeze the resulting beam patterns when minimizing the approximation error. These trenches bring dead zones and impair the overall performance of beam training. To eliminate them, we modify the criterion given in (\ref{dc}) by adding a buffer zone, which is effective and will be validated in Section \ref{BPC}. The buffer zone ${\rm B}_i^{k,s}$ is the periphery of $\Upsilon _i^{k,s}$ with a width of $w$. To write it in mathematical form, we first define an enlarged zone of $\Upsilon _i^{k,s}$, denoted by $\hat\Upsilon _i^{k,s}$, as
\begin{equation}
\begin{split}
&\hat\Upsilon _i^{k,s} = \left\{ {(\phi _{{\rm{grid}}}^{k,j},\theta _{{\rm{grid}}}^{k,l})\;\left| {j \in \;{\hat J_{s,i}},l \in {\hat L_{s,i}}} \right.} \right\},\\
&{\hat J_{s,i}} = \{N + \nu\cdot {{\bmod }_\mu }(i-1)+1-w,...,\\
&\qquad\qquad\qquad\qquad\quad N + \nu ({{\bmod }_\mu }(i-1)+1)+w \},\\
&{\hat L_{s,i}} = \left\{ N + \delta ({\rm{ceil}}\left( {\frac{i}{\mu }} \right) - 1)+1-w ,...,\right.\\
&\left.\qquad\qquad\qquad\qquad \quad N + \delta \cdot {\rm{ceil}}\left( {\frac{i}{\mu }} \right)+w \right\},\;(\ref{20d}).\\
\end{split}
\end{equation}
Then, we have ${\rm B}_i^{k,s}=\hat\Upsilon _i^{k,s}-\Upsilon _i^{k,s}$. Thus, in our proposed criterion, we expect that
\begin{equation}\label{dc2}
{{\bf{a}}_k}{(\phi _{{\rm{grid}}}^{k,j},\theta _{{\rm{grid}}}^{k,l})^H}{\bm{\omega}} _i^{k,s} = \left\{ {\begin{aligned}
&{1,\;\;(\phi _{{\rm{grid}}}^{k,j},\theta _{{\rm{grid}}}^{k,l}) \in \Upsilon _i^{k,s}}\\
&{\chi ,\;\;(\phi _{{\rm{grid}}}^{k,j},\theta _{{\rm{grid}}}^{k,l}) \in {\rm B}_i^{k,s}}\\
&{0,\;\;\;\;\;\;{\rm{otherwise}}\;\;\;\;\;}
\end{aligned}} \right.,
\end{equation}
where $\chi \in (0,1)$ is the expected beam gain in the buffer zone. Define a matrix as
\begin{equation}
\begin{split}
&{\bf{A}} = \left[ {{{\bf{a}}_k}(\phi _{{\rm{grid}}}^{k,1},\theta _{{\rm{grid}}}^{k,1}),...,} \right.\\
&\qquad\qquad\left. {{{\bf{a}}_k}(\phi _{{\rm{grid}}}^{k,4N},\theta _{{\rm{grid}}}^{k,1}),...,{{\bf{a}}_k}(\phi _{{\rm{grid}}}^{k,4N},\theta _{{\rm{grid}}}^{k,4N})} \right].
\end{split}
\end{equation}
Then, we can rewrite (\ref{dc2}) in a more compact form as
\begin{equation}
{{\bf{A}}^H}[{\bm{\omega}} _1^{k,s},{\bm{\omega}} _2^{k,s},...,{\bm{\omega}} _{{2^s}}^{k,s}] = {\bm{\Xi} _s},
\end{equation}
where ${\bm{\Xi} _s}$ is a $16N^2 \times 2^s$ matrix. The $i^\mathrm{th}$ column of ${\bm{\Xi} _s}$ has an element of 1 in the rows
\begin{equation}
\left\{ {4N(\beta + m) + \alpha + n\left| {m = 0,1,...,\delta - 1;n = 1,2,...,\nu } \right.} \right\},
\end{equation}
in which $\alpha = N + \nu \cdot{\bmod _\mu }(i-1)$, $\beta = N + \delta ({\rm{ceil}}\left( {\frac{i}{\mu }} \right) - 1)$, $\delta$ and $\nu$ are defined in (\ref{20d}), whereas it has an element of $\chi$ in the rows of
(28) at the top of the next page, and has an element of $0$ in all other rows. As a result, the $i^\mathrm{th}$ codeword in stage $s=0,1,...,S-1$ of $\mathcal{C}_k$ can be obtained as
\setcounter{equation}{28}
\begin{equation}\label{wide}
{\bm{\omega}} _i^{k,s} = {({\bf{A}}{{\bf{A}}^H})^{ - 1}}{\bf{A}}{\bf{\Xi}} _s(:,i).
\end{equation}
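In implementation terms, (\ref{wide}) is a least-squares fit of the target pattern ${\bf{\Xi}} _s(:,i)$; a minimal Python sketch is given below (illustrative only; the final unit-norm scaling is a practical choice rather than part of (\ref{wide})):
\begin{verbatim}
import numpy as np

def wide_beam(A, xi):
    """A: Na x 16N^2 grid response matrix; xi: target gains
    (1 in the coverage, chi in the buffer zone, 0 elsewhere)."""
    omega = np.linalg.solve(A @ A.conj().T, A @ xi)  # (A A^H)^{-1} A xi
    return omega / np.linalg.norm(omega)             # unit transmit power

# The realized pattern |A^H omega| then approximates xi in the LS sense.
\end{verbatim}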
So far, we have obtained all the codewords in the hierarchical codebook $\mathcal{C}_k$. The narrow beams in stage $S$ are given by (\ref{narrow}) and the wide beams in stages $s=0,1,...,S-1$ are given by (\ref{wide}). Next, we propose a low-complexity training protocol for our considered system. For ease of exposition, we refer to the two communicating nodes as Alice and Bob, respectively.
\subsubsection{Beam Training Protocol}
As shown in Fig. {\ref{ph1}}, two phases are developed to obtain different groups of measurements. In Phase 1, we find the optimal UPA pair whose beam range covers the LoS path. Two similar steps are carried out to obtain the optimal UPA at Bob and Alice, respectively. In step 1, Alice simultaneously uses all UPAs to transmit wide beams, with the precoder ${\bm{\omega}} _1^{k,0}$ at the $k^\mathrm{th}$ UPA. Meanwhile, Bob simultaneously uses all UPAs to receive wide beams, with the decoder ${\bm{\omega}} _1^{m,0}$ at the $m^\mathrm{th}$ UPA. Then, Bob compares the power of the decoded signals from the four UPAs and selects the one (labeled as $B^*$) with the maximum received signal power. In step 2, Bob transmits the wide beam only from the selected UPA with the precoder ${\bm{\omega}} _1^{B^*,0}$. Meanwhile, Alice simultaneously uses all UPAs to receive wide beams, with the decoder ${\bm{\omega}} _1^{m,0}$ at the $m^\mathrm{th}$ UPA, and finds the UPA (labeled as $A^*$) with the maximum received signal power in the same way. After the two steps, the optimal UPA pair is obtained as the $A^*$th UPA of Alice and the $B^*$th UPA of Bob. In Phase 2, we aim to find the optimal narrow-beam pair between the $A^*$th UPA of Alice and the $B^*$th UPA of Bob. Two similar steps are carried out to obtain the optimal narrow beam at Bob and Alice, respectively. Step 1 of Phase 2 follows step 2 of Phase 1, in which Bob transmits a wide beam with the precoder ${\bm{\omega}} _1^{B^*,0}$ at the $B^*$th UPA. Meanwhile, Alice uses the $A^*$th UPA to receive wide beams by testing codewords in $\mathcal{C}_{A^*}$ from stage $1$ to stage $S$. In each stage, Alice tests two beams and selects the one with the larger detected power; in the next stage, Alice tests the two beams within the range of the selected beam. By recursively repeating this procedure, Alice reaches a narrow beam (labeled as $a^*$) in stage $S$. In step 2, Alice transmits a narrow beam with the precoder ${\bm{\omega}} _{a^*}^{A^*,S}$ at the $A^*$th UPA. Meanwhile, Bob uses the $B^*$th UPA to hierarchically test codewords in $\mathcal{C}_{B^*}$ in the same manner, and reaches a narrow beam (labeled as $b^*$) in stage $S$. After the two steps, the optimal narrow-beam pair is obtained as ${\bm{\omega}} _{a^*}^{A^*,S}$ of Alice and ${\bm{\omega}} _{b^*}^{B^*,S}$ of Bob.
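The stage-wise descent used in Phase 2 on one side can be summarized by the following Python sketch (illustrative only; \texttt{measure} is a hypothetical stand-in for one beam test, faked here so that the bottom-stage beam indexed \texttt{best} wins):
\begin{verbatim}
def hierarchical_search(S, measure):
    i = 1                                # the single stage-0 wide beam
    for s in range(1, S + 1):
        left, right = 2 * i - 1, 2 * i   # beams covered by beam i
        i = left if measure(s, left) >= measure(s, right) else right
    return i                             # 2S tests instead of 2^S

S, best = 8, 200                         # N^2 = 2^S = 256 narrow beams
cover = lambda s, i: (i - 1) * 2 ** (S - s) < best <= i * 2 ** (S - s)
measure = lambda s, i: 1.0 if cover(s, i) else 0.0
print(hierarchical_search(S, measure))   # 200
\end{verbatim}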
\subsection{Beam Tracking}\label{trackingpro}
In this subsection, we propose a low-complexity GBH beam tracking to find the optimal narrow-beam pair in a faster way. It combines two tracking modes with different search times. The first mode searches the beams in the vicinity of the formerly used beam pair, while the second one directly chooses a new beam pair based on the changing trend of the previously used beam pairs. Fig. \ref{epl} shows an example of our unified procedure operating on the timeline. When an aligned beam pair is adopted, we call the period of the subsequent data-transmission time blocks an interval. When the decoding SNR at the end of an interval falls below a threshold, a new beam tracking is triggered. Before developing the beam tracking, we first determine the decoding SNR threshold.
\begin{figure*}[t]
\centering
\includegraphics[width=6.2in]{trackepl.eps}
\caption{An example of operation by our unified training and tracking procedure on the timeline.}\label{epl}
\vspace{10pt}
\centering
\includegraphics[width=6.8in]{direc.eps}
\caption{The possible path variation between two intervals in four different cases. }\label{dir}
\vspace{-12pt}
\end{figure*}
\subsubsection{Decoding SNR Threshold}
With fixed channel and transmit power, we assume that the decoding SNR is merely determined by the beamforming, and thus the SNR can be used to identify the quality of the current beam pair. In practice, the SNR may fluctuate occasionally due to the instability of RF devices, co-channel interference, etc. In this case, we should consider the effective decoding SNR within a time window, rather than the instantaneous decoding SNR. Denote the optimal narrow-beam pair in interval $T_n$ by ${{{\bf{\bar w}}}_n}$ and ${{{\bf{\bar f}}}_n}$. Based on (\ref{rec2}), the maximum decoding SNR in interval $T_n$ can be expressed as
\begin{equation}
\begin{split}
&\Gamma ({T_n}) = \mathop {\max }\limits_t \frac{P}{{{\sigma ^2}}}{\left| {{\bf{\bar w}}_n^H{\bf{H}}(t){{{\bf{\bar f}}}_n}} \right|^2}\;\\
&\;\;\;\;\;\;\;\;\;\;\;\;{\rm{s}}{\rm{.t}}{\rm{.}}\;\;t \in \;{\rm{Interval }}\;{T_n}\;.
\end{split}
\end{equation}
Here, we propose a reasonable threshold via the following proposition, proved in Appendix B.
\begin{proposition}
The decoding SNR threshold in interval $T_n$ can be set as $\eta_{{\rm{worst}}}^4\Gamma ({T_n})$; once the decoding SNR falls below this threshold, the current beam pair is guaranteed to be no longer the optimal one. Here, $\eta _{\rm{worst}}$ is given by (\ref{p1}).
\end{proposition}
Proposition 2 provides a reasonable threshold for practical implementation. This threshold is not fixed but depends on the maximum decoding SNR in each interval. Since $\Gamma ({T_n})$ cannot be finalized before the end of the interval, the corresponding threshold may change during interval $T_n$. Next, we discuss the possible path directions in a new interval.
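In implementation terms, the trigger rule implied by Proposition 2 amounts to the following Python sketch (illustrative only; variable names are ours, and SNRs are in linear scale):
\begin{verbatim}
def needs_tracking(gamma_t, gamma_max, eta_w):
    """gamma_t: current decoding SNR; gamma_max: running max in the
    interval; eta_w: the worst-case guarantee from Proposition 1."""
    gamma_max = max(gamma_max, gamma_t)
    return gamma_t < (eta_w ** 4) * gamma_max, gamma_max

trigger, g = needs_tracking(gamma_t=40.0, gamma_max=100.0, eta_w=0.8)
print(trigger)   # True: 40 < 0.8**4 * 100 = 40.96, so tracking starts
\end{verbatim}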
\subsubsection{The Possible Path Directions in a New Interval}
When the decoding SNR is less than the threshold $\eta _{{\rm{worst}}}^4\Gamma ({T_n})$, the direction of the LoS path lies outside the range of ${{{\bf{\bar w}}}_n}$, of ${{{\bf{\bar f}}}_n}$, or of both. Fig. \ref{dir} presents four examples of different cases of possible path directions in a new interval.
\begin{itemize}
\item In case 1, the path directions at the maximum decoding SNR in interval $T_n$ are at the center of the narrow-beam pair. When the decoding SNR is $\eta _{{\rm{worst}}}^4\Gamma_1 ({T_n})$, both path directions are on the coverage edge.
\item In case 2, the path directions at the maximum decoding SNR in interval $T_n$ are at the center of the narrow-beam pair, i.e., $\Gamma_2 ({T_n})=\Gamma_1 ({T_n})$. When the decoding SNR is $\eta _{{\rm{worst}}}^4\Gamma_2 ({T_n})$, one direction is within the range and the other is out of the range.
\item In case 3, the path directions at the maximum decoding SNR in interval $T_n$ are not at the center of the narrow-beam pair, i.e., $\Gamma_3 ({T_n})<\Gamma_1 ({T_n})$. When the decoding SNR is $\eta _{{\rm{worst}}}^4\Gamma_3 ({T_n})$, both path directions are beyond the coverage edge.
\item In case 4, the path directions at the maximum decoding SNR in interval $T_n$ are not at the center of the narrow-beam pair, i.e., $\Gamma_4 ({T_n})<\Gamma_1 ({T_n})$. When the decoding SNR is $\eta _{{\rm{worst}}}^4\Gamma_4 ({T_n})$, one direction is within the range and the other is out of the range.
\end{itemize}
The four cases indicate that, in a new interval, the optimal narrow beam on each side must be either the original one from the last interval or one of its neighbors. As such, there are $9\times9$ candidate pairs in the new interval, one of which is the optimal narrow-beam pair. The optimal solution can be effectively found by our proposed GBH beam tracking, whose protocol with two tracking modes is presented in Fig. {\ref{tr1}}.
\subsubsection{The First Tracking Mode}
\begin{figure*}[t]
\centering
\includegraphics[width=6.8in]{track1.eps}
\caption{A protocol of the two tracking modes for the GBH beam tracking. }\label{tr1}
\vspace{-12pt}
\end{figure*}
Based on the optimal narrow-beam pair adopted in the previous interval, tracking mode 1 aims to find the new optimal one among the $9\times9$ neighboring alternatives via two steps. In step 1, Alice transmits a wide beam that covers its $9$ narrow-beam candidates. Meanwhile, Bob successively receives $3$ wide beams and selects the one with the largest received signal power, where each wide beam covers $3$ candidates with the same azimuth angle. Then, Bob successively receives the $3$ narrow beams within its range and selects the one with the largest received signal power as the optimal narrow beam. In step 2, Bob transmits the obtained narrow beam. Meanwhile, Alice successively receives beams in the same manner to acquire its optimal narrow beam. After the two steps, Alice and Bob relocate the optimal narrow-beam pair for the new interval via only $12$ tests. It is worth mentioning that the codewords of the wide beams used for beam tracking are selected from a dedicated codebook, which can be easily constructed according to the approach proposed in Section \ref{HCD}.
\subsubsection{The Second Tracking Mode}
In interval $T_n$ ($n\ge 3$), we can apply tracking mode 2, which is based on the optimal narrow-beam pairs adopted in the last two intervals, i.e., $T_{n-1}$ and $T_{n-2}$. Considering the narrow beam on one side (i.e., the Alice side or the Bob side), we assume that the transition of the beams between $T_{n}$ and $T_{n-1}$ is the same as that between $T_{n-1}$ and $T_{n-2}$. This assumption holds with high probability in practice since the movement or rotation of the transmitter/receiver usually has a strong correlation over a short period. In the second tracking mode, Alice and Bob respectively test their predicted optimal narrow-beam pair based on the prediction shown in Fig. {\ref{tr1}}, wherein the prediction in interval $T_n$ is shown for $9$ different cases. If the decoding SNR is above the threshold of the former interval, the tested narrow-beam pair is the optimal one, and the GBH tracking is completed directly. If not, Alice and Bob subsequently apply tracking mode 1, i.e., seek the optimal pair among the $9\times9$ candidates, to complete the GBH tracking.
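The mode-2 prediction can be summarized by the following Python sketch (illustrative only; it encodes the linear-extrapolation assumption above, with clipping at the grid boundary as one possible treatment of edge beams):
\begin{verbatim}
def predict_beam(idx_prev2, idx_prev1, N):
    """idx_* = (azimuth, elevation) narrow-beam indices on one side."""
    return tuple(min(max(2 * a - b, 1), N)   # a + (a - b), clipped
                 for a, b in zip(idx_prev1, idx_prev2))

print(predict_beam((5, 9), (6, 9), N=32))    # (7, 9): keeps drifting
\end{verbatim}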
\subsection{Complexity and Applicability Analysis}
\begin{table}[t]
\centering
\caption{Comparison of Complexity and Applicability to THz massive MIMO.} \label{ta_com}
\vspace{0pt}
\hspace{0.45cm}
\begin{tabular}{|c|c|c|}
\hline
Approaches & \tabincell{c}{Applicability} & \tabincell{c}{Search Complexity} \\ \hline
Exhaustive training & Yes & $16{N^4} $ \\ \hline
One-sided search\cite{ones} & No & $2N^2$\\ \hline
Parallel search\cite{parallel} & No & $\left. 16N^4 \middle/ N_{RF} \right.$ \\ \hline
Two-step training\cite{Qsu} & No & $8 N^2$ \\ \hline
MR training \cite{wzhong} & No & $8\log _2{N^2} + 16$ \\ \hline
Proposed training & Yes & $4\log _2{N^2} + 2$\\ \hline
Proposed tracking & Yes & $12$ or $1$ \\ \hline
\end{tabular}
\vspace{-10pt}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=6.5in]{3narrow.eps}
\caption{Comparison of the proposed narrow beams with the benchmarks, where each UPA has 256 antenna elements.}\label{nacom}
\vspace{-12pt}
\end{figure*}
In this subsection, we compare the search complexity, as well as the applicability to THz massive MIMO, of our proposed beam training and tracking with other 3D training schemes. As mentioned in Section \ref{exhs}, the exhaustive beam training needs to test $16N^4$ beam pairs for our considered system, which is quite time-consuming when $N$ is large.
To reduce the complexity, IEEE 802.11ad utilizes a one-sided beam search algorithm \cite{ones}, where each user exhaustively searches the narrow beams while the BS transmits the signal omni-directionally, which incurs a complexity of $2N^2$ for our considered system. The authors in \cite{parallel} proposed a parallel search that uses $N_{RF}$ RF chains at the BS to transmit multiple narrow beams simultaneously while all users use exhaustive training, which incurs a complexity of $\left. 16N^4 \middle/ N_{RF} \right.$ for our considered system. The authors in \cite{Qsu} proposed a two-step beam training that decomposes the 3D space into $N$ horizontal or vertical sectors (with different elevation/azimuth angles), which has a time complexity of $8N^2$ for our considered system. In \cite{wzhong}, a multi-resolution (MR) beam training is proposed, which searches the wide-beam pairs first and then the narrow-beam pairs over ${\log _4}(4{N^2})$ stages with $16$ pairs in each stage, incurring a time complexity of $16{\log _4}(4{N^2})$ for our considered system. However, the omnidirectional beam in \cite{ones} and the simultaneous transmission of multiple beams in \cite{parallel} and \cite{Qsu} are not practical in THz communication due to the unaffordable transmit power. Moreover, how to realize the wide beams, i.e., the design of the wide-beam codewords, has not been carefully studied in \cite{Qsu} and \cite{wzhong}.
In our proposed beam training, as shown in Fig. \ref{ph1}, there are $2$ tests in Phase 1 (one per step, since all UPAs are used simultaneously) and two tests in each stage of Phase 2, which contains $2{\log _2}{N^2}$ stages over its two steps. Thus, our proposed beam training has a time complexity of $4{\log _2}{N^2} + 2$. Besides, no feedback is needed in our scheme, while the scheme proposed in \cite{wzhong} needs feedback at every stage. In our proposed beam tracking, as shown in Fig. \ref{tr1}, the first tracking mode requires $12$ beam tests, whereas the second mode requires only one beam test. We summarize the complexity and the applicability of the above approaches in Table \ref{ta_com}.
\section{Numerical Results}\label{nu}
In this section, numerical results are provided to demonstrate the performance of our proposed beam training and tracking. The operating frequency is set to $0.26$ THz and the operating bandwidth is set to $20$ GHz. The communication distance is $100$ m and the noise power spectral density is $-174$ dBm/Hz. Referring to ITU-R P.676-9 \cite{ITUR} and the free-space loss formula, the propagation loss is taken as $124.6$ dB. According to the first standard at THz band, i.e., IEEE 802.15.3d \cite{vp}, the transmit power is set to $25$ dBm and antenna gains of $30$ dB are required for mitigating the propagation loss. Based on (\ref{ff}) and (\ref{fk}), we use UPAs with $16 \times 16$ elements, which yield an antenna gain of $31.6$ dB.
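As a quick consistency check of these constants, a one-line Python computation (illustrative only) reproduces the gain implied by (\ref{ff}) and (\ref{fk}):
\begin{verbatim}
import numpy as np
print(10 * np.log10(4 * np.sqrt(2) * 16 * 16))   # ~31.6 dB for Na = 256
\end{verbatim}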
\subsection{Beam Patterns of the Narrow Beams}\label{BPN}
\begin{figure*}[t]
\centering
\includegraphics[width=6.6in]{3wide.eps}
\caption{Comparison of the proposed wide beams with the benchmarks, where each UPA has 256 antenna elements.}\label{wicom}
\vspace{-12pt}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=7in]{narrowgain.eps}
\caption{(a) Received SNR versus number of narrow beams. (b) Worst-case performance versus test number. (c) Successful alignment rate versus SNR.}\label{ng}
\vspace{-12pt}
\end{figure*}
We first consider the beam patterns of our proposed narrow beams, which are in the bottom stage of the hierarchical codebook. For comparison, we present two benchmarks as follows.
\begin{itemize}
\item {\bf{Uniform Real Angles \cite{ywang}:}} For the $k^\mathrm{th}$ UPA, we extend the codebook in \cite{ywang} to our considered coverage by setting $N$ azimuth angles uniformly distributed in $[ - \frac{\pi }{4} + (k - 1)\frac{\pi }{2},\frac{\pi }{4} + (k - 1)\frac{\pi }{2}]$ and $N$ elevation angles uniformly distributed in $[\frac{\pi }{4},\frac{3\pi }{4}]$.
\item {\bf{Uniform Virtual Angles\cite{wide5}:}} For the first UPA, we consider the simplified array response vector with virtual angles (also known as spatial angles), i.e.,
\begin{equation}
\begin{split}
&{{\bf{a}}_1}(\widetilde \phi ,\widetilde \theta ) = \frac{1}{{\sqrt {{N_a}} }}[1,...,{e^{j\pi ({n_y}\widetilde \phi + {n_z}\widetilde \theta )}},...,\\
&\qquad\qquad\qquad\qquad\qquad{e^{j\pi [({N_y} - 1)\widetilde \phi + ({N_z}- 1)\widetilde \theta ]}}]^T,
\end{split}
\end{equation}
where $\widetilde \phi$ and $\widetilde \theta$ are the virtual azimuth and elevation angles within $[-1,1]$. We set $N$ virtual azimuth angles and $N$ virtual elevation angles uniformly distributed in $[ - \frac{{\sqrt 2 }}{2},\frac{{\sqrt 2 }}{2}]$. For the $k^\mathrm{th}$ UPA ($k=2,3,4$), we rotate the beam patterns of the first UPA by $(k - 1)\frac{\pi }{2}$ in azimuth. As the uniform virtual angles are the optimal distribution for a ULA, this benchmark can also be regarded as the Kronecker-product scheme extended from the existing 2D codebook\cite{wide5}.
\end{itemize}
Fig. \ref{nacom} plots the narrow-beam patterns of our proposed codewords in (\ref{15o}) and the benchmarks, where each UPA uses $16 \times 16$ narrow beams to cover its range. For each scheme, three views, i.e., the first UPA's front view (FV), the first UPA's left view (LV), and the QUPA's total view (TV), are presented to distinguish their differences. It can be easily observed from the LV that the trenches of the narrow beams with uniform real angles are the deepest, which indicates the lowest worst-case performance. It is interesting to point out that, from the FV, although the beams with uniform virtual angles form a square, the beams at their left and right edges are not vertical. This can be noticed from the LV: the azimuth coverage range expands when beams are above/below an elevation angle of $\frac{\pi}{2}$. Thus, the total coverage cannot exactly constitute a sphere, as seen from the TV, where there are some overlaps between adjacent UPAs. Compared to the benchmarks, the beams with our proposed distribution yield the highest worst-case performance, as seen from the LV. Moreover, with our proposed distribution, there is no coverage overlap between different UPAs.
\subsection{Beam Patterns of the Wide Beams}\label{BPC}
Next, we consider the beam patterns of our proposed wide beams $\{\bm{\omega} _i^{k,s}\}_{s=0}^{S-1}$ in (\ref{wide}), which lie in stages $0$ to $S-1$ of the codebook. For comparison, we extend the inverse approach adopted in \cite{wide1,wide3,thzh,wzhong} to the 3D scenario as a benchmark.
Fig. \ref{wicom} plots the wide-beam patterns in stages $0$ to $3$ realized by our proposed approach and the benchmark, respectively, where the adopted hierarchical codebook has $16\times 16$ narrow beams in the bottom stage. It is observed that, in stage $0$, the wide beams realized by the two approaches both have notable trenches between adjacent UPAs. However, the trenches of our proposed wide beams are relatively smaller. In stages $1$ to $3$, the wide beams realized by the benchmark all have remarkable trenches even within the coverage range of each UPA. By comparison, there are no trenches within this range in the patterns of our proposed wide beams. The beam patterns imply that using our proposed wide beams leads to fewer dead zones during the beam training, and thus they are expected to achieve better performance, i.e., a higher successful alignment rate.
\subsection{Performance of Beam Training}
\begin{figure*}[t]
\centering
\includegraphics[width=6.8in]{trackingp.eps}
\caption{The beam tracking performance under four different schemes:
(a) Performance Upperbound.
(b) Angle-based Tracking \cite{tr4,tr5}.
(c) Location-based tracking \cite{tr1,tr3}.
(d) Proposed GBH tracking.}\label{tkp}
\vspace{-12pt}
\end{figure*}
To validate this point, we evaluate the average/worst-case received SNR after the beam training and the successful alignment rate during the beam training. With a successful alignment, the received SNR is determined by the narrow beams at the bottom stage. Fig. {\ref{ng}}(a) shows the received SNR obtained with different narrow beams, in which the worst-case guarantee is presented as a baseline. As can be seen, the received SNR of all schemes increases with the number of narrow beams. Our proposed narrow beams outperform the benchmark schemes in \cite{ywang} and \cite{wide5}, where the scheme using uniform real angles yields the worst performance. As the number of narrow beams increases, the gap between the average performance and the worst-case performance decreases.
Next, we validate the theoretical worst-case performance $\eta_{\rm{worst}}$ provided in Proposition 1. To this end, we generate $N_{\rm{test}}$ incoming narrow beams with random AoAs and use the proposed beam training to find the maximum achievable beam gain. Then, we select the lowest one as the worst-case performance over the $N_{\rm{test}}$ tests. Fig. {\ref{ng}}(b) plots the worst-case performance versus the number of tests. It can be observed that, for all three different setups, the worst-case performance gradually approaches $\eta_{\rm{worst}}$ as the test number increases. Fig. {\ref{ng}}(c) shows three sets of results (denoted by different colors, respectively) of the successful alignment rate obtained by our proposed GB beam training as well as the benchmark scheme\cite{wide1,wide3,thzh,wzhong}. We observe that both schemes can achieve a 100\% successful alignment rate at sufficiently high SNR, and our proposed codebooks outperform the benchmark codebook in different setups.
\subsection{Performance of Beam Tracking}
We now evaluate the performance of our proposed beam tracking. As the tracking has much lower search complexity than the training, we consider more narrow beams, i.e., $32 \times 32$ narrow beams, for each UPA to cover its range. To simulate the relative motion between Alice and Bob, we assume that Alice is stationary at $(0,0,0)$ m, whereas Bob moves from the point $(100,0,0)$ m. During the motion, we regard Bob as a UAV that randomly changes its moving direction and flies with a maximum speed of $100$ km/h. The total simulation time is $30$ s and each beam test costs $1$ ms. The tracking performance is evaluated by the changes of the normalized double-side beam gain.
Fig. \ref{tkp} shows the performance upper bound and the performance of different schemes. Specifically, we consider the angle-based tracking \cite{tr4,tr5} and the location-based tracking \cite{tr1,tr3} as benchmarks. It is worth mentioning that the benchmarks apply the tracking periodically every second, while our proposed GBH beam tracking dynamically applies the procedure based on the SNR threshold given in Proposition 2. The performance upper bound provides an ideal baseline, as we assume that there is a way to accurately obtain the best narrow-beam pair over the whole test time. As shown in Fig. \ref{tkp}(a), due to the finite number of beams and the background noise, the double-side beam gain cannot be maintained at $1$ even for the performance upper bound. In Fig. \ref{tkp}(b) and (c), both the angle-based tracking and the location-based tracking suffer from several inaccurate predictions. This is because Bob's trajectory is composed of multiple segments of linear motion, and the above tracking approaches cannot cope with the swerves. It can be observed from Fig. \ref{tkp}(d) that our proposed GBH beam tracking yields the highest worst-case performance. Assume that a communication outage occurs when the normalized double-side beam gain falls below $0.2$. In this case, the angle-based and location-based tracking suffer $6$ and $4$ outages, respectively. By contrast, by combining the first and second tracking modes, our proposed beam tracking incurs no outage over the whole test time.
\section{Conclusions}
We developed a unified 3D beam training and tracking procedure based on a QUPA architecture. Specifically, we first proposed a novel framework to realize on-demand beam training and tracking with dynamic frequency for THz communication. For beam training, we developed a new hierarchical codebook, in which the narrow beams guarantee the highest worst-case performance and the wide beams have a smaller dead zone. Then, we proposed a low-complexity training protocol to find the optimal narrow-beam pair. As for beam tracking, we developed two tracking modes to jointly realize fast beam alignment for mobile transceivers. Numerical results on the 3D beam patterns of the codewords in our proposed codebook visually verify its effectiveness and superiority over benchmark codebooks. Besides, the results show that our proposed GB beam training has advantages in both beam gain and successful alignment rate. Our proposed GBH tracking was shown to effectively reduce outages and maintain adequate beam gain over the whole test time. The core of our unified procedure is the proposed framework and training/tracking protocol, based on which the beam codebook can be redesigned to cater for various requirements, e.g., beam coverage\cite{wnrong} and wideband effects\cite{fgao1}. It is also interesting to extend the 3D training and tracking procedure to THz IRS-assisted systems in the future\cite{TIRS1,TIRS2}.
\begin{appendices}
\section{Proof of Proposition 1}
Note that the directions of beam intersections have relatively lower beam gain. If all the directions of intersections have the same beam gain, the worst-case performance is the highest. Without loss of generality, we discuss the proposed narrow beams for the first UPA, i.e., $\mathcal{C}_1^S$. Note that the normalized narrow beam gain of ${{\bf{a}}_1}(\phi ,\theta )$ defined in (\ref{narrowg}) can be further expressed as
\begin{equation}
\begin{split}
&A[{{\bf{a}}_1}(\phi ,\theta ),({\phi _t},{\theta _t})]\\
&=\Bigg| \frac{1}{{{N_a}}}\sum\limits_{{n_z} = 0}^{{N_z} - 1} \sum\limits_{{n_y} = 0}^{{N_y} - 1} {e^{j\pi \left[ {\left( {{n_z} - \frac{{({N_z} - 1)}}{2}} \right)\left( {\cos \theta - \cos {\theta _t}} \right)} \right]}}\\
&\qquad\qquad\qquad\times{e^{j\pi \left[ {\left( {{n_y} - \frac{{({N_y} - 1)}}{2}} \right)\left( {\sin \theta \sin \phi - \sin {\theta _t}\sin {\phi _t}} \right)} \right]}} \Bigg|\\
& = \frac{1}{{{N_a}}}\Bigg| \left[ {\sum\limits_{{n_z} = 0}^{{N_z} - 1} {{e^{j\pi \left[ {\left( {{n_z} - \frac{{({N_z} - 1)}}{2}} \right)\left( {\cos \theta - \cos {\theta _t}} \right)} \right]}}} } \right]\\
&\qquad\qquad\times\left[ {\sum\limits_{{n_y} = 0}^{{N_y} - 1} {{e^{j\pi \left[ {\left( {{n_y} - \frac{{({N_y} - 1)}}{2}} \right)\left( {\sin \theta \sin \phi - \sin {\theta _t}\sin {\phi _t}} \right)} \right]}}} } \right] \Bigg|\\
& = \frac{1}{{{N_a}}}\Bigg| {e^{ - j\frac{{\pi ({N_z} - 1){m_1}}}{2}}}\frac{{\left( {1 - {e^{j\pi {N_z}{m_1}}}} \right)}}{{1 - {e^{j\pi {m_1}}}}}\\
&\qquad\qquad\qquad\qquad\qquad\quad\times{e^{ - j\frac{{\pi ({N_y} - 1){m_2}}}{2}}}\frac{{\left( {1 - {e^{j\pi {N_y}{m_2}}}} \right)}}{{1 - {e^{j\pi {m_2}}}}} \Bigg|\\
& = \frac{1}{{{N_a}}}\left| {\frac{{\left( {{e^{j\frac{{\pi {N_z}{m_1}}}{2}}} - {e^{ - j\frac{{\pi {N_z}{m_1}}}{2}}}} \right)}}{{\left( {{e^{j\frac{{\pi {m_1}}}{2}}} - {e^{ - j\frac{{\pi {m_1}}}{2}}}} \right)}}\frac{{\left( {{e^{j\frac{{\pi {N_y}{m_2}}}{2}}} - {e^{ - j\frac{{\pi {N_y}{m_2}}}{2}}}} \right)}}{{\left( {{e^{j\frac{{\pi {m_2}}}{2}}} - {e^{ - j\frac{{\pi {m_2}}}{2}}}} \right)}}} \right|\\
& = \frac{1}{{{N_a}}}\left| {\frac{{\sin \left[ {({N_z}\pi {m_1})/2} \right]}}{{\sin \left[ {(\pi {m_1})/2} \right]}}} \right| \cdot \left| {\frac{{\sin \left[ {({N_y}\pi {m_2})/2} \right]}}{{\sin \left[ {(\pi {m_2})/2} \right]}}} \right|,
\end{split}
\end{equation}
where ${m_1} = \cos \theta - \cos {\theta _t}$ and ${m_2} = \sin \theta \sin \phi - \sin {\theta _t}\sin {\phi _t}$. Define a two-dimensional transformation as
\begin{equation}
\left\{ {\begin{aligned}
&{V\left( \theta \right) = \cos \theta, \;}\\
&{H(\theta ,\phi ) = \sin \theta \sin \phi, }
\end{aligned}} \right.
\end{equation}
The narrow-beam response vector can then be expressed as a new vector function that depends on $V$ and $H$, i.e., ${{\bf{a}}_1}(\phi ,\theta ) \Rightarrow {{{\bf{\hat a}}}_1}(V,H)$. As such, the normalized narrow beam gain of ${{\bf{a}}_1}(\phi ,\theta )$ in the direction of $({\phi _t},{\theta _t})$ can be rewritten as that of ${{{\bf{\hat a}}}_1}(V,H)$ in the direction of $({V_t},{H_t})$, i.e.,
\begin{equation}
\begin{split}
A[{{\bf{a}}_1}(\phi ,\theta ),({\phi _t},{\theta _t})] &= A[{{{\bf{\hat a}}}_1}(V,H),({V_t},{H_t})] \\
&= \frac{1}{{{N_a}}}{f_z}(V - {V_t}){f_y}(H - {H_t}),
\end{split}
\end{equation}
where
\begin{equation}
{f_z}(x) = \left| {\frac{{\sin \left[ {{{\left( {{N_z}\pi x} \right)} \mathord{\left/
{\vphantom {{\left( {{N_z}\pi x} \right)} 2}} \right.
\kern-\nulldelimiterspace} 2}} \right]}}{{\sin \left[ {{{\pi x} \mathord{\left/
{\vphantom {{\pi x} 2}} \right.
\kern-\nulldelimiterspace} 2}} \right]}}} \right|,{f_y}(x) = \left| {\frac{{\sin \left[ {{{\left( {{N_y}\pi x} \right)} \mathord{\left/
{\vphantom {{\left( {{N_y}\pi x} \right)} 2}} \right.
\kern-\nulldelimiterspace} 2}} \right]}}{{\sin \left[ {{{\pi x} \mathord{\left/
{\vphantom {{\pi x} 2}} \right.
\kern-\nulldelimiterspace} 2}} \right]}}} \right|.
\end{equation}
Fig. {\ref{four}} illustrates four beams with codewords ${{{\bf{\hat a}}}_k}(V_1,H_1)$, ${{{\bf{\hat a}}}_k}(V_1,H_2)$, ${{{\bf{\hat a}}}_k}(V_2,H_1)$, and ${{{\bf{\hat a}}}_k}(V_2,H_2)$. Assuming that $V_t$ is the direction of the intersection between beam 1 and beam 3 (or between beam 2 and beam 4), the two beams should yield the same beam gain in the direction $V_t$, i.e.,
\begin{figure}[t]
\centering
\includegraphics[width=2.2in]{edge.eps}
\caption{The directions of four narrow beams w.r.t. $V$ and $H$.}\label{four}
\vspace{-12pt}
\end{figure}
\begin{subequations}
\begin{align}
& A[{{{\bf{\hat a}}}_k}({V_1},H),({V_t},H)] = A[{{{\bf{\hat a}}}_k}({V_2},H),({V_t},H)]\\
&\Rightarrow {f_z}({V_1} - {V_t}) = {f_z}({V_2} - {V_t}) \Rightarrow \left| {{V_1} - {V_t}} \right| = \left| {{V_2} - {V_t}} \right|\notag\\
&\Rightarrow \;{V_t} = \frac{{{V_1} + {V_2}}}{2}.\label{vt}
\end{align}
\end{subequations}
Based on (\ref{vt}), the normalized narrow beam gain in the direction of the intersection between ${{{\bf{\hat a}}}_k}({V_p},H)$ and ${{{\bf{\hat a}}}_k}({V_{p+1}},H)$ can be written as
\begin{align}
A\Big[{{{\bf{\hat a}}}_k}({V_p},H),(\frac{{{V_p} + {V_{p + 1}}}}{2},H)\Big]
= \frac{1}{{{N_a}}}{f_z}(\frac{{{V_p} - {V_{p + 1}}}}{2}){f_y}(0).\label{ann}
\end{align}
To ensure that all the directions of the intersections have the same beam gain, based on (\ref{ann}), ${f_z}(\frac{{{V_p} - {V_{p + 1}}}}{2})$ must be the same for all $p$, which is equivalent to requiring $V_{p + 1}-V_p$ to be the same for all $p$. According to the range of $V$, we have
\begin{equation}
\theta \in [\frac{\pi }{4},\frac{{3\pi }}{4}] \Rightarrow V(\theta ) \in [ - \frac{{\sqrt 2 }}{2},\frac{{\sqrt 2 }}{2}].
\end{equation}
Since there are $N$ beams in elevation, we set $\{V_p\}_{p=1}^N$ with equal spacing $V_{p + 1}-V_p$ within $[ - \frac{{\sqrt 2 }}{2},\frac{{\sqrt 2 }}{2}]$ and regard $\pm \frac{{\sqrt 2 }}{2}$ as directions of intersections. Consequently, we obtain
\begin{equation}\label{vv}
{V_p} = - \frac{{\sqrt 2 }}{2} + \frac{{\sqrt 2 (2p - 1)}}{{2N}},p = 1,2,...,N.
\end{equation}
Similarly, the normalized narrow beam gain in the direction of the intersection between ${{{\bf{\hat a}}}_k}(V,H_n)$ and ${{{\bf{\hat a}}}_k}({V},H_{n+1})$ is determined by $H_{n+1}-H_n$. According to the range of $H$ with fixed $V(\theta)$, we have
\begin{equation}
\phi \in [ - \frac{\pi }{4},\frac{\pi }{4}] \Rightarrow H(\theta ,\phi ) \in [ - \frac{{\sqrt 2 }}{2}\sin \theta ,\frac{{\sqrt 2 }}{2}\sin \theta ].
\end{equation}
Since there are $N$ beams along the azimuth, we set $\{H_n\}_{n=1}^N$ with equal spacing $H_{n + 1}-H_n$ within $[ - \frac{{\sqrt 2 }}{2}\sin \theta ,\frac{{\sqrt 2 }}{2}\sin \theta ]$ and regard $\pm \frac{{\sqrt 2 }\sin \theta}{2}$ as directions of intersections. Consequently, we obtain
\begin{equation}\label{hh}
{H_n}(\theta) = - \frac{{\sqrt 2 }}{2}\sin \theta + \frac{{\sqrt 2 (2n - 1)\sin \theta }}{{2N}},n = 1,2,...,N.
\end{equation}
By leveraging the following inverse transformation,
\begin{equation}
\left\{ {\begin{aligned}
&{{\theta _p} = \arccos {V_p}},\qquad p=1,2,...,N\\
&{{\phi _n} = \arcsin \frac{{{H_n}}}{{\sin \theta }}},\quad \;n=1,2,...,N
\end{aligned}} \right. ,
\end{equation}
we get the $N \times N$ narrow beams shown in (\ref{15o}). By this design, for any $N$ beams distributed with the same elevation angle $\theta$, all the directions of intersections have the same gain, denoted by $\eta_{\theta}$. However, it is interesting to point out that the same gain of intersections can only be guaranteed within the 2D azimuth plane, since $\eta_{\theta}$ changes with $\theta$.
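To make the construction concrete, the following is a minimal sketch in Python/NumPy of how the $N \times N$ beam directions can be computed from (\ref{vv}), (\ref{hh}) and the inverse transformation (the function name and returned angle grids are our own illustrative choices, not part of the original design):
\begin{verbatim}
# Illustrative sketch: N x N narrow-beam directions
# from Eq. (vv), Eq. (hh) and the inverse transform.
import numpy as np

def beam_directions(N):
    p = np.arange(1, N + 1)
    # Eq. (vv): equally spaced V_p in [-sqrt(2)/2, sqrt(2)/2]
    V = -np.sqrt(2) / 2 + np.sqrt(2) * (2 * p - 1) / (2 * N)
    theta = np.arccos(V)                  # elevation angles
    # Eq. (hh): for each theta_p, N equally spaced H_n(theta)
    n = np.arange(1, N + 1)
    H = np.sin(theta)[:, None] * (
        -np.sqrt(2) / 2 + np.sqrt(2) * (2 * n - 1)[None, :] / (2 * N))
    # inverse transformation: phi_n = arcsin(H_n / sin(theta))
    phi = np.arcsin(H / np.sin(theta)[:, None])
    return theta, phi                     # theta[p], phi[p, n]
\end{verbatim}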
The corresponding worst-case performance is the normalized narrow beam gain in the direction of the intersection between ${{{\bf{\hat a}}}_k}({V_n},H_n(\theta _n))$ and ${{{\bf{\hat a}}}_k}({V_{n+1}},H_{n+1}(\theta _{n+1}))$, i.e.,
\begin{equation}\label{2277}
\begin{split}
&A\Big[{{{\bf{\hat a}}}_k}({V_n},{H_n}),(\frac{{{V_n} + {V_{n + 1}}}}{2},\frac{{{H_n} + {H_{n + 1}}}}{2})\Big]\\
&\qquad\qquad\quad= \frac{1}{{{N_a}}}{f_z}(\frac{{{V_n} - {V_{n + 1}}}}{2}){f_y}(\frac{{{H_n} - {H_{n + 1}}}}{2}).
\end{split}
\end{equation}
Define a function $G(v,h)=\frac{1}{{{N_a}}}{f_z}(v){f_y}(h)$. By substituting (\ref{vv}) and (\ref{hh}) into (\ref{2277}), we have
\begin{equation}\label{worstt}
\begin{split}
&A\Big[{{{\bf{\hat a}}}_k}({V_n},{H_n}),(\frac{{{V_n} + {V_{n + 1}}}}{2},\frac{{{H_n} + {H_{n + 1}}}}{2})\Big] \\
&\qquad\qquad\qquad\qquad\qquad\qquad= G\Big( {\frac{{\sqrt 2 }}{{2N}},\frac{{\sqrt 2 \sin \theta }}{{2N}}} \Big).
\end{split}
\end{equation}
Equation (\ref{worstt}) indicates that if $N$ beams are distributed with the same elevation angle $\theta$, the worst-case performance of these beams decreases as $\theta$ approaches $\pi/2$. Based on (\ref{vv}) and (\ref{2277}), the elevation angles $\{\theta_p\}_{p=1}^N$ are given by (\ref{15c}). Thus, when $N$ is odd, the $\theta_p$ closest to $\pi/2$ is the one with $p=\frac{N+1}{2}$. When $N$ is even, the closest $\theta_p$ is the one with $p=\frac{N}{2}$ or $p=\frac{N}{2}+1$. Thereby, we obtain the worst-case performance of all the $N^2$ beams as given in (\ref{p1}) and (\ref{p2}).
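As a numerical sketch (Python/NumPy; the helper name and the handling of the $x=0$ limit are our own illustrative choices), the worst-case gain in (\ref{worstt}) can be evaluated as follows:
\begin{verbatim}
# Illustrative sketch: evaluate the worst-case gain
# G(sqrt(2)/(2N), sqrt(2)*sin(theta)/(2N)) of Eq. (worstt).
import numpy as np

def f_ratio(x, M):
    # |sin(M*pi*x/2) / sin(pi*x/2)|, with limit M at x = 0
    num = np.sin(M * np.pi * x / 2)
    den = np.sin(np.pi * x / 2)
    return float(M) if np.isclose(den, 0.0) else abs(num / den)

def worst_case_gain(N, Nz, Ny, theta):
    Na = Nz * Ny
    v = np.sqrt(2) / (2 * N)
    h = np.sqrt(2) * np.sin(theta) / (2 * N)
    return f_ratio(v, Nz) * f_ratio(h, Ny) / Na

# e.g., the gain shrinks as theta approaches pi/2:
# worst_case_gain(4, 8, 8, np.pi/3) > worst_case_gain(4, 8, 8, np.pi/2)
\end{verbatim}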
\section{Proof of Proposition 2}
Assume that the directions of the LoS path at the maximum decoding SNR in the interval $T_n$ are exactly in the center of the range of the optimal narrow-beam pair. In this case, the decoding SNR is the highest, i.e.,
\begin{equation}\label{e27}
{\Gamma ^*}({T_n}){\rm{ = }}\frac{P}{{{\sigma ^2}}}{\left| {{\bf{\bar w}}_n^H{{\bf{H}}^{\rm{*}}}{{{\bf{\bar f}}}_n}} \right|^2} \ge \Gamma ({T_n}){\rm{ = }}\frac{P}{{{\sigma ^2}}}{\left| {{\bf{\bar w}}_n^H{\bf{H}}{{{\bf{\bar f}}}_n}} \right|^2}.
\end{equation}
When the directions of the LoS path are on the coverage edges of both ${{{\bf{\bar w}}}_n}$ and ${{{\bf{\bar f}}}_n}$, the decoding SNR satisfies
\begin{equation}\label{e28}
\Gamma {\rm{ = }}\frac{P}{{{\sigma ^2}}}{\left| {({\eta _{{\rm{worst}}}}{\bf{\bar w}}_n^H){{\bf{H}}^{\rm{*}}}({\eta _{{\rm{worst}}}}{{{\bf{\bar f}}}_n})} \right|^2}{\rm{ = }}\eta _{{\rm{worst}}}^4{\Gamma ^{\rm{*}}}({T_n}).
\end{equation}
Thus, if ${{{\bf{\bar w}}}_n}$ and ${{{\bf{\bar f}}}_n}$ are not the optimal narrow-beam pair, the decoding SNR should be less than $\eta _{{\rm{worst}}}^4{\Gamma ^{\rm{*}}}({T_n})$. However, ${\Gamma ^{\rm{*}}}({T_n})$ is unavailable in practice. As a result, we choose a lower bound, i.e., $\eta _{{\rm{worst}}}^4\Gamma ({T_n})$, as an alternative based on the inequality in ({\ref{e27}}).
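A minimal sketch of the resulting beam re-selection test is given below (illustrative; the function and variable names are our own, and \texttt{snr\_Tn} stands for the measured $\Gamma(T_n)$):
\begin{verbatim}
# Illustrative sketch of the test implied by Proposition 2:
# declare the current narrow-beam pair suboptimal once the
# measured SNR drops below eta_worst**4 * Gamma(T_n).
def needs_beam_reselection(snr_now, snr_Tn, eta_worst):
    threshold = (eta_worst ** 4) * snr_Tn  # lower bound via (e27)-(e28)
    return snr_now < threshold
\end{verbatim}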
\end{appendices}
\section{Introduction}
\label{sec:intro}
\begin{figure}[]
\centering
\includegraphics[width=1\linewidth]{RGBGrayRetrieval.jpg}
\caption{The retrieval results of the model trained with visible (RGB) image and the model trained with grayscale image on Market1501\cite{market1501} are displayed. It shows that the color deviation between the query image and gallery image will affect the retrieval results, and the retrieval results of some samples will be better after ignoring the color information. The numbers on the images indicate the rank of similarity in the retrieval results, the red and green numbers denote the wrong and correct results, respectively.}
\label{fig:onecol}
\end{figure}
Person re-identification (ReID) aims to match the same person across different cameras and scenes\cite{survey,market1501,baseline,Li_2021_CVPR}. This technology has been widely applied to video surveillance\cite{Hou_2021_CVPR,Tian_2021_CVPR,Liu_2021_CVPR}, image retrieval\cite{sketch-criminal,sketch-based}, criminal investigation\cite{sketch-criminal}, target tracking\cite{Beyer_2017_CVPR_Workshops}, and others. The challenge of this task is that images captured by different cameras often contain significant intra-class variation caused by variations in viewpoint, human pose changes, occlusions, and color deviation under variable camera conditions, etc. As a result, the appearance of the same pedestrian varies greatly across images, making the intra-class (same pedestrian) metric distance larger than the inter-class (different pedestrians) one. ReID usually combines representation learning\cite{zheng2017discriminatively,matsukawa2016person,fan2019spherereid,lin2019improving,zheng2016person,yao2019deep} with metric learning\cite{wang2019ranked,varior2016gated,varior2016siamese,schroff2015facenet,liu2017end,cheng2016person,hermans2017defense}, and combines classification loss\cite{yao2019deep,fan2019spherereid,zheng2017discriminatively} with triplet loss\cite{hermans2017defense,cheng2016person} in the training stage to optimize the neural network. In the inference stage, it is only necessary to use Cosine distance or Euclidean distance to measure the similarity between the query image and the gallery image, then rank the gallery images according to the similarity, and finally use the re-ranking technique\cite{re_ranking} to further refine the search results.
The complexity of the inherent challenge of ReID means that its demand for data has the same complexity, and this complex data demand is difficult to meet and balance with the training set, which is accompanied by the potential problem that the model overfits the available training data and lacks robustness. A dataset can hardly cover different camera environments and all their variations at different times, so trained models tend to overfit the given training set and lack robustness to additional scenarios. There is no doubt that color features are important discriminative features, but color features instead prevent the model from making correct predictions in some cases. For example, because white and gray, black and dark blue, and brown and yellow are similar under some lighting conditions, it is difficult for the model to make correct predictions for negative samples that are similar to the target after overfitting the color deviation variations. As shown in (a) to (c) of Figure 1, the model trained with grayscale images makes better predictions in such cases after discarding the color bias.
Realistically, color deviations cause domain gaps both between datasets and within datasets\cite{zheng2019joint,wei2018person}. These color biases are practically inexhaustible. Instead of generating a variety of data to let the model "see" these variations (especially intra-class variations)\cite{zheng2019joint} during training to enhance the robustness to input variations, it is better to implicitly balance the weight between color features and other important discriminative features. Aiming at the inherent color deviation problem of images obtained under different shooting conditions, this paper proposes a strategy to eliminate deviation with deviation, based on the assumption that the retrieval results of some samples will be better when discarding color information, which is named random color dropout (RCD). RCD balances the weight between color features and other important discriminative features in the neural network by discarding part of the color information in the training data, so as to overcome the influence of color deviation. This strategy exists in various forms. For example, color deviation can be overcome with biased grayscale information or sketch information (or contour information). Taking grayscale as an example, a rectangular area can be randomly selected in the RGB image and its pixels replaced with those of the same rectangular area in the corresponding grayscale image, thus generating training images with differently biased areas during ReID model training. Compared with existing methods based on generative adversarial networks (GANs)\cite{goodfellow2014generative,zheng2019joint,wei2018person,zhong2018camera,deng2018image,liu2018pose,qian2018pose}, the proposed method is more lightweight and effective because it not only introduces no new noise but also saves a large amount of computational resources. At the same time, this strategy naturally equips the model with cross-modal retrieval\cite{sketch-criminal,sketch-based,zhu2020hetero,huang2022modality,ye2018visible,li2020infrared} capabilities. For example, when taking contour information as the intermediary to overcome color deviation, cross-modal retrieval between sketches and RGB visible images can be realized.
In addition, this paper analyzes the relationship between RCD and the generalization ability of neural networks from the perspective of classification, and reveals the intrinsic reasons why networks trained with RCD may outperform ordinary networks. Experiments show that the proposed method not only increases the robustness of the model to color deviation but also bridges the domain gap between different datasets, which gives it significant advantages over existing state-of-the-art methods. Taking grayscale as an example, the RCD strategy proposed in this paper includes global grayscale transformation, local grayscale transformation, and a combination of the two. The method has the following advantages:
It is a lightweight approach which does not require any additional parameter learning or memory consumption. It can be combined with various CNN models without changing the learning strategy. It is a complementary approach to existing data augmentation.
The main contributions of this paper are summarized as follows:
$\bullet$ This paper proposes a learning strategy which against color deviation with information deviation, which decreases the overfitting and increases generalization ability of the model.
$\bullet$ A simple and effective cross-modal retrieval method is proposed, which does not need complex network design.
$\bullet$ This paper proves that the network trained with RCD may be better than the ordinary network from the perspective of classification.
$\bullet$ The strategy proposed in this paper is proved to be effective in improving ReID performance through extensive experiments and analysis. The effectiveness of the proposed method is verified on several baselines and representative datasets.
This work was previously published as a preprint on arXiv and is extended here, including the related derivation and cross-modal retrieval.
\begin{figure*}[htbp]
\setlength{\abovecaptionskip}{0.1cm}
\setlength{\belowcaptionskip}{-0.4cm}
\centering
\includegraphics[width=0.8\linewidth]{framework3.jpg}
\caption{Framework diagram of our Random Color Dropout (RCD): The application of global grayscale transformation and local grayscale transformation in the framework.}
\end{figure*}
\section{Related Work}
The complexity of the inherent challenge of ReID means that its demand for data has the same complexity, and the failure to fully meet this complex data demand is the source of overfitting and insufficient generalization of the model to the training data. Improving generalization ability is a focus of research on convolutional neural networks (CNNs), and data augmentation is an effective way to improve the generalization ability of a model.
\subsection{Classic Data Augmentation}
Many data augmentation\cite{krizhevsky2012imagenet} methods have been proposed, such as random cropping\cite{krizhevsky2012imagenet}, flipping\cite{simonyan2014very}, which are well known to play an important role in classification, detection and ReID. CutMix\cite{yun2019cutmix} replaces one patch
of an image with a patch from another image. Random erasing or cutout\cite{zhong2020random,devries2017improved} adds a noise block to the image to regularize the network, while also helping to address the occlusion problem in ReID. The above methods are regarded as indispensable and are applied to various baselines\cite{luo2019bag,he2020fastreid,zheng2020vehiclenet,zheng2018discriminatively}. These techniques have been proved effective in improving prediction accuracy, and they are complementary to each other\cite{luo2019bag}.
In solving the problem of color deviation, early work\cite{li2014deepreid} used filters and a max grouping layer to learn illumination transformations, divided the pedestrian image into smaller patches to calculate similarity, and uniformly handled misalignment, occlusion, and illumination variation within a deep neural network; \cite{liao2015person} performed pre-processing before feature extraction and used the multiscale Retinex algorithm to enhance the color information of shaded regions, mitigating the color changes caused by varying lighting conditions.
With the increasing maturity of GANs, GANs-based approaches for data augmentation have become an active research field.
\subsection{Data Augmentation Based on GANs}
The goal of these methods is to mitigate the effect of color deviation or human-pose variation, and to improve the robustness of the model by learning invariant features from the variation of the input. The appearance details and the emphases generated by different GANs-based methods also differ, but their common goal is to compensate for the difference between the source and target domains. For example, CamStyle\cite{zhong2018camera} generates new data by transferring different camera styles to learn invariant features between different cameras, increasing the robustness of the model to camera style changes; CycleGAN\cite{zhu2017unpaired} was applied in\cite{deng2018image,zhong2019invariance} to transfer pedestrian image styles from one dataset to another; StarGAN\cite{choi2018stargan} was used by\cite{zhong2018generalizing} to generate pedestrian images with different camera styles. Wei et al.\cite{wei2018person} proposed PTGAN to achieve pedestrian image transfer across different ReID datasets. It uses semantic segmentation to extract foreground masks to assist style transfer, and converts the background into the desired style of the dataset while keeping the foreground unchanged. Different from global style transfer, DGNet\cite{zheng2019joint} utilizes GANs to transfer clothing among different pedestrians by manipulating appearance and structural details to generate more diverse data to reduce the impact of color changes on the model, which effectively improves the generalization ability of the model. In addition,\cite{bak2018domain} uses a 3D engine and environment rendering technology to build a virtual pedestrian dataset with multiple lighting conditions, which is combined with other large real datasets to jointly pre-train a model.
The method proposed in this paper has been partially validated in other works. \cite{Gong_2022_CVPR} proved that the lack of robustness to color deviation is one of the main reasons why the model is vulnerable to adversarial metric attacks\cite{bouniot2020vulnerability,bai2019metric,wang2020transferable}, and enhanced the model's adversarial defense using the method proposed in this paper; It is adopted in the new baseline proposed in\cite{ni2021flipreid,chen2021benchmarks} to help increase the generalization of the model. In addition, \cite{ryu2021detection} showed that the method proposed in this paper is also suitable for object detection.
\section{Proposed Methods}
\label{sec:}
The RCD strategy proposed in this paper includes a global transformation, a local transformation, and a combination of the two. Taking grayscale as an example, it includes global grayscale transformation, local grayscale transformation, and combinations of the two. At the end of this section, we give the corresponding analysis of the proposed method. The framework of this method is shown in Figure $\color{red}2$.
\subsection{Global Grayscale Transformation}
During data loading, we randomly sample $K$ identities and $M$ images per person to constitute a training batch whose size equals $B=K\times M$. The set is denoted as $x^{v}=\{x_i^{v}|i=1,2,...,M\times K\}$, where $x_i^{v}=\{x_i^{v}|y_i \}$ represents the $i$-th sample image of the training batch, and $y_i$ represents the class label of the pedestrian.
Taking grayscale as an example, this method randomly performs a global grayscale transformation on the training batch with a given probability, and then inputs the batch into the model for training. This process can be defined as:
\begin{equation}
I_{g} = t(R,G,B)
\end{equation}
where $t(\bullet)$ is the grayscale image conversion function, which is implemented by a pixel-wise weighted accumulation over the R, G, and B channels of the original visible RGB image; $y$ is the label of the sample, and the label of the converted grayscale image $x^g$ is the same as the original one:
\begin{equation}
(x^{g}|y) = (x^v|y)
\end{equation}
The procedure of GGT is shown in Algorithm.$\color{red}1$.
\begin{algorithm}[t]
\SetAlgoLined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetKwInput{Initialization}{Initialization}
\caption{Global Grayscale Transformation}\label{algorithm 1}
\Input{Input image $I$; \\
Grayscale transformation probability $p$.}
\Output{Grayscale images $I^{\ast}$.}
\Initialization{$p_1 \leftarrow $ Rand (0, 1).}
\eIf{$p_1 \geq p$}{
$I^{\ast} \leftarrow I$; \\
\Return{$I^{\ast}$}.
}{
$I^{\ast} \leftarrow $ t($I$);\\
\Return{$I^{\ast}$}.
}
\end{algorithm}
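A minimal Python sketch of Algorithm.$\color{red}1$ is given below (illustrative; the function name is our own, and the standard luminance-weighted \texttt{convert("L")} conversion is assumed for $t(\bullet)$):
\begin{verbatim}
# Illustrative sketch of GGT: with probability p, replace the
# RGB image by its 3-channel grayscale version.
import random
from PIL import Image

def global_grayscale_transform(img, p=0.05):
    if random.random() >= p:
        return img                      # keep the RGB image
    # t(R, G, B): weighted sum over channels, replicated to 3 channels
    return img.convert("L").convert("RGB")
\end{verbatim}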
\subsection{Local Grayscale Transformation}
In addition to transforming the data globally, we also consider transforming the data locally, so that the model adapts better to the significant, locally varying bias caused by color dropout.
The local grayscale transformation (LGT) for each visible image $x^v$ can be achieved by the following equations:
\begin{equation}
x^g = t(x^v),
\end{equation}
\begin{equation}
rect = RandPosition(x^v),
\end{equation}
\begin{equation}
x^{lg} = LGT(x^v,x^g,rect)
\end{equation}
and
\begin{equation}
(x^{lg}|y) = (x^v|y)
\end{equation}
where $x^g$ is the grayscale image and $t(\bullet)$ is the grayscale transformation function; $RandPosition(\bullet)$ is used to generate a random rectangle in the image, and the function of $LGT(\bullet)$ is to copy the pixels inside the rectangle from the grayscale image $x^g$ into the visible image $x^v$; $x^{lg}$ is the sample after the local grayscale transformation, and $y$ is the label of the transformed image.
\begin{algorithm}[t]
\SetAlgoLined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetKwInput{Initialization}{Initialization}
\caption{Local Grayscale Transformation}\label{algorithm 2}
\Input{Input image $I$; \\
Image size $W$ and $H$; \\
Area of image $S$; \\
Transformation probability $p$; \\
Area ratio range $s_l$ and $s_h$; \\
Aspect ratio range $r_1$ and $r_2$.}
\Output{Transformed image $I^{\ast}$.}
\Initialization{$p_1 \leftarrow $ Rand (0, 1).}
\eIf{$p_1 \geq p$}{
$I^{\ast} \leftarrow I$; \\
\Return{$I^{\ast}$}.
}{
\While {True}{
$S_t\leftarrow $ Rand $(s_l, s_h)$$\times S$;\\
$r_t \leftarrow $ Rand $(r_1, r_2)$;\\
$H_t \leftarrow \sqrt{S_t \times r_t}$,~ $W_t \leftarrow \sqrt{\frac{S_t}{r_t}}$;\\
$x_t \leftarrow $ Rand $(0, W)$,~ $y_t \leftarrow $ Rand $(0, H)$;\\
\If{$x_t + W_t \le W$ and $y_t + H_t \le H$}{
$Position \leftarrow (x_t, y_t, x_t+W_t, y_t+H_t)$;\\
$I(Position) \leftarrow $ $t(Position)$;\\
$I^{\ast} \leftarrow I$;\\
\Return{$I^{\ast}$}.
}
}
}
\end{algorithm}
In the process of model training, we randomly apply LGT to the training batch with a given probability. For an image $I$ in a batch, let the probability of it undergoing LGT be $p_r$, and the probability of it being kept unchanged be $1-p_r$. In this process, a rectangular region in the image is randomly selected and replaced with the pixels of the same rectangular region in the corresponding grayscale image. Among the parameters, $s_l$ and $s_h$ are the minimum and maximum values of the ratio of the randomly generated rectangle area to the image area, and the rectangle area $S_t$, limited between the minimum and maximum ratios, is obtained by $S_t \leftarrow Rand(s_l ,s_h) \times S$; $r_t$ is a coefficient used to determine the shape of the rectangle, limited to the interval ($r_1$, $r_2$); $x_t$ and $y_t$ are the randomly generated coordinates of the upper left corner of the rectangle. If the coordinates of the rectangle exceed the scope of the image, the area and position of the rectangle are re-determined. When a rectangle that meets the above requirements is found, the pixel values of the selected region are replaced by those of the corresponding rectangular region in the grayscale image converted from the RGB image. As a result, training images which include regions with different levels of grayscale are generated, and the object structure is not damaged. The procedure of LGT is shown in Algorithm.$\color{red}2$.
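For reference, a minimal Python sketch of Algorithm.$\color{red}2$ follows (illustrative; the default ranges for $s_l$, $s_h$, $r_1$, $r_2$ mirror common random-erasing settings and are assumptions, not values prescribed by this paper):
\begin{verbatim}
# Illustrative sketch of LGT: replace a random rectangle of the
# RGB image with the corresponding grayscale pixels.
import math
import random
import numpy as np
from PIL import Image

def local_grayscale_transform(img, p=0.4, sl=0.02, sh=0.4,
                              r1=0.3, r2=3.33):
    if random.random() >= p:
        return img
    rgb = np.asarray(img).copy()
    gray = np.asarray(img.convert("L").convert("RGB"))
    H, W = rgb.shape[:2]
    S = H * W
    while True:
        St = random.uniform(sl, sh) * S          # rectangle area
        rt = random.uniform(r1, r2)              # aspect ratio
        Ht = int(round(math.sqrt(St * rt)))
        Wt = int(round(math.sqrt(St / rt)))
        xt, yt = random.randint(0, W), random.randint(0, H)
        if xt + Wt <= W and yt + Ht <= H:        # rectangle fits
            rgb[yt:yt + Ht, xt:xt + Wt] = gray[yt:yt + Ht, xt:xt + Wt]
            return Image.fromarray(rgb)
\end{verbatim}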
\subsection{Loss function}
ReID usually combines classification loss and triplet loss to train the model\cite{stong_baseline,FastReID}. Here, we use $x_i^{v}$ to denote the $i$-th RGB image in a training batch, and $x_i^{g}$ to denote the image obtained after the GGT or LGT conversion. Then the features of $x_i^{v}$ and $x_i^{g}$ can be expressed as:
\begin{equation}
\left\{
\begin{array}{ll}
f_i^{v}=f(x_i^{v})\\
f_i^{g}=f(x_i^{g})
\end{array}
\right.
\end{equation}
The Euclidean distance between two samples $x_i^{g}$ and $x_j^{g}$ is denoted as $D(x_i^{g},x_j^{g})$, where the subscripts $\{ i, j \}$ denote the image indices in the training batch. Formally, let $x_i^{g}$ be the anchor sample; the triplet $\{ x_i^{g},x_j^{g}, x_k^{g}\}$ is selected in the following way:
\begin{equation}
P_{i,j}^{g} = \max_{\forall y_i=y_j}D(x_i^{g},x_j^{g})
\end{equation}
\begin{equation}
N_{i,k}^{g} = \min_{\forall y_i\ne y_j}D(x_i^{g},x_k^{g})
\end{equation}
For each anchor $x_i^{g}$, the above strategy selects the most distant positive sample with the same pedestrian identity label and the nearest negative sample with a different label, forming a triplet $\{ x_i^{g},x_j^{g^{+}}, x_k^{g^{-}} \}$ for mining grayscale information. The boundary parameter $\varepsilon$ is used to control the margin between the positive and negative sample pairs. In summary, we define the following triplet loss for training:
\begin{equation}
L_{g} = \frac{1}{n}\sum_{i=1}^{n}\max[\varepsilon+D(x_i^{g},x_j^{g^{+}})-D(x_i^{g},x_k^{g^{-}}),0]
\end{equation}
In addition, $x^v$ and $x^g$ are trained using a shared identity classifier $\phi$. The predicted probability of the identity label $y_i$ is defined as $p(y_i|x_i^{g};\phi)$. The ID loss is represented as follows:
\begin{equation}
L_{ide} = -\frac{1}{n}\sum_{i=1}^{n}log(p(y_i|x_i^{g};\phi))
\end{equation}
Therefore, the overall loss during random grayscale transformation is:
\begin{equation}
L_{total}=L_{g}+L_{ide}
\end{equation}
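A minimal PyTorch sketch of this overall loss is given below (illustrative; batch-hard mining follows Eqs. (8)--(9), and \texttt{margin} plays the role of $\varepsilon$):
\begin{verbatim}
# Illustrative sketch: batch-hard triplet loss plus ID loss.
import torch
import torch.nn.functional as F

def total_loss(features, logits, labels, margin=0.3):
    # pairwise Euclidean distances D(x_i, x_j)
    dist = torch.cdist(features, features, p=2)
    same = labels[:, None] == labels[None, :]
    # hardest positive (farthest same ID) per anchor
    d_pos = (dist * same.float()).max(dim=1).values
    # hardest negative (nearest different ID) per anchor
    d_neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    l_g = F.relu(margin + d_pos - d_neg).mean()   # Eq. (10)
    l_ide = F.cross_entropy(logits, labels)       # Eq. (11)
    return l_g + l_ide                            # Eq. (12)
\end{verbatim}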
\subsection{Analysis of the Random Color Dropout Strategy}
Suppose there are $m$ instances. Let the expected output be $D = [d_1, d_2, \ldots, d_m]^T$, where $d_j$ denotes the expected output on the $j$-th instance, and let the actual output of the $i$-th component neural network be $F_i=[f_{i1}, f_{i2}, \ldots, f_{im}]^T$, where $f_{ij}$ denotes the actual output of the $i$-th component network on the $j$-th instance. $D$ and $F_i$ satisfy $d_j \in \{-1, +1\} (j = 1, 2, \ldots, m)$ and $f_{ij}\in\{-1, +1 \} (i = 1, 2, \ldots, N; j = 1, 2, \ldots, m )$, respectively. It is obvious that if the actual output of the $i$-th component network on the $j$-th instance is correct according to the expected output, then $f_{ij}d_j = +1$; otherwise $f_{ij}d_j = -1$. Thus the generalization error of the $i$-th component neural network on those $m$ instances is:
\begin{equation}
E_i = \frac{1}{m}\sum_{j=1}^m {Error(f_{ij}d_j)}
\end{equation}
where $Error(x)$ is a function defined as:
\begin{equation}
Error(x)=\left\{
\begin{array}{ll}
1, \quad\quad if \quad x=-1\\
0.5, \quad if \quad x=0\\
0, \quad\quad if \quad x=1
\end{array}
\right.
\end{equation}
Here we introduce a vector $Sum = [Sum_1, Sum_2, …, Sum_m]^T$ where $Sum_j$ denotes the sum of the actual output of all the component neural networks on the $j$-th instance, i.e.
\begin{equation}
Sum_j = \sum_{i=1}^N f_{ij}
\end{equation}
Then the output of the neural network ensemble on the j-th instance is:
\begin{equation}
\hat{f_j} = Sgn(Sum_j)
\end{equation}
where $Sgn(x)$ is a function defined as:
\begin{equation}
Sgn(x)=\left\{
\begin{array}{ll}
1, \quad\quad if \quad x>0\\
0, \quad\quad if \quad x=0\\
-1, \quad if \quad x<0
\end{array}
\right.
\end{equation}
It is obvious that $\hat{f}_j\in \{-1, 0, +1 \} (j = 1, 2, \ldots, m)$. If the actual output of the ensemble on the $j$-th instance is correct according to the expected output, then $ \hat{f_j}d_j = +1$; if it is wrong, then $ \hat{f}_jd_j = -1$; otherwise $ \hat{f}_jd_j = 0$, which means that there is a tie on the $j$-th instance, e.g. three component networks vote for $+1$ while the other three vote for $-1$. Thus the generalization error of the ensemble is:
\begin{equation}
\hat{E} = \frac{1}{m}\sum_{j=1}^m {Error(\hat{f_{j}}d_j)}
\end{equation}
Now suppose that an additional component neural network, indexed by $k$, is trained using grayscale images. Then the output of the new ensemble on the $j$-th instance is:
\begin{equation}
\hat{f_j^{'}} = Sgn(Sum_j + f_{kj})
\end{equation}
and the generalization error of the new ensemble is:
\begin{equation}
\hat{E^{'}} = \frac{1}{m}\sum_{j=1}^m {Error(\hat{f_{j}^{'}}d_j)}
\end{equation}
It is assumed that a certain number of networks with such deviations will not hurt the performance of the overall ensemble, and that the retrieval results of some examples will be better after ignoring the color information, i.e.,
\begin{equation}
Error(\hat{f_j^{'}}d_j) \leqslant Error(\hat{f_j}d_j)
\end{equation}
From Eq.$\color{red}(18)$ and Eq.$\color{red}(20)$ we can derive that if Eq.$\color{red}(22)$ is satisfied, then $\hat{E}$ is not smaller than $\hat{E^{'}}$, which means that the ensemble including the $k$-th component neural network trained with grayscale images is better than the one not including it:
\begin{equation}
\begin{split}
\sum_{j=1}^m\{Error(Sgn(Sum_j)d_j)- \\ Error(Sgn(Sum_j + f_{kj})d_j) \} \geq 0
\end{split}
\end{equation}
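The argument can be checked numerically with a toy example (illustrative; the $\pm 1$ votes below are fabricated for demonstration, not experimental outputs): adding a component that is correct on the color-deviated instances lowers the majority-vote error here from $0.2$ to $0.1$.
\begin{verbatim}
# Illustrative toy check of Eq. (18)-(22): majority-vote error
# of an ensemble with vs. without an extra grayscale component.
import numpy as np

def ensemble_error(F, d):
    # F: (N, m) matrix of +/-1 outputs; d: (m,) labels in {-1, +1}
    s = np.sign(F.sum(axis=0))                    # Sgn(Sum_j)
    return np.where(s * d == -1, 1.0,
                    np.where(s * d == 0, 0.5, 0.0)).mean()

d  = np.array([ 1,  1, -1, -1,  1])
F0 = np.array([[ 1, -1, -1,  1,  1],
               [ 1,  1,  1, -1,  1],
               [-1,  1, -1,  1, -1]])
fk = np.array([ 1,  1, -1, -1,  1])   # robust on deviated samples
print(ensemble_error(F0, d))                  # 0.2
print(ensemble_error(np.vstack([F0, fk]), d)) # 0.1
\end{verbatim}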
\section{Comparison and Analysis}
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.1cm}
\setlength{\belowcaptionskip}{-0.4cm}
\centering
\includegraphics[width=1\linewidth]{GGT.jpg}
\caption{Performance of GGT under different hyperparameters on Market1501.}
\end{figure}
\subsection{Datasets and Evaluation criteria}
\textbf{Datasets}. Market-1501~\cite{market1501} includes 1,501 pedestrians captured by six cameras (five HD cameras and one low-definition camera). DukeMTMC~\cite{duke} is a large-scale multi-target, multi-camera tracking dataset, an HD video dataset recorded by 8 synchronized cameras, with more than 2,700 individual pedestrians. The above two datasets are widely used in ReID studies. MSMT17\cite{wei2018person}, created in winter, was presented in 2018 as a new, larger dataset closer to real-life scenes, containing a total of 4,101 individuals and covering multiple scenes and time periods.
These three datasets are currently the largest datasets of ReID, and they are also the most representative because they collectively contain multi-season, multi-time, HD, and low-definition cameras with rich scenes and backgrounds as well as complex lighting variations.
The Sketch ReID dataset\cite{sketch-criminal} contains 200 persons, each of which has one sketch and two photos. Photos of each person were captured during daytime by two cross-view cameras. The raw images (or video frames) were cropped manually to make sure that every photo contains one specific person. A total of 5 artists drew all the persons' sketches, and every artist has his own painting style.
\textbf{Evaluation criteria}. Following existing works~\cite{market1501}, Rank-k precision and mean Average Precision (mAP) are adopted as evaluation metrics. Rank-1 denotes the average accuracy of the first returned result corresponding to each query image. mAP denotes the mean of average precision: the query results are sorted according to similarity, and the closer the correct results are to the top of the list, the higher the score.
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.1cm}
\setlength{\belowcaptionskip}{-0.4cm}
\centering
\includegraphics[width=1\linewidth]{LGT.jpg}
\caption{Performance of LGT under different hyperparameters on Market1501.}
\end{figure}
\subsection{Hyper-Parameter Setting}
During CNN training, two hyper-parameters need to be evaluated. One of them is the GGT probability $p$. First, we take the hyper-parameter $p$ as 0.01, 0.03, 0.05, 0.07, 0.1, 0.2, 0.3,..., 1 for the GGT experiments. We then run three independent repetitions of the experiments for each parameter value and calculate the average of the final results. The results for different $p$ are shown in Fig.$\color{red}3$. We can see that when $p=0.05$, the performance of the model reaches its maximum in Rank-1 and mAP. Unless otherwise specified, the hyper-parameter is set to $p=0.05$ in the following experiments.
Another hyper-parameter is the LGT probability $p_r$. We take the hyper-parameter $p_r$ over the same values as above for the LGT experiments, with a selection process similar to that of $p$. The results for different $p_r$ are shown in Fig.$\color{red}4$.
Obviously, when $p_r=0.4$ or $p_r=0.7$, the model achieves better performance, and the best performance is achieved when $p_r=0.4$. Unless otherwise specified, the hyper-parameter is set to $p_r=0.4$ in the subsequent experiments.
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.1cm}
\setlength{\belowcaptionskip}{-0.4cm}
\centering
\includegraphics[width=1\linewidth]{GGT-LGT.jpg}
\caption{Performance of combining GGT with LGT under different hyperparameters on Market1501.}
\end{figure}
\textbf{Evaluation of GGT and LGT}. Compared with the best results of GGT on baseline\cite{zheng2018discriminatively}, the accuracy of LGT is improved by 0.5\% and 1.4\% on Rank-1 and mAP, respectively. Under the same conditions using re-Ranking\cite{re_ranking}, the accuracy of Rank-1 and mAP is improved by 0\% and 0.4\%, respectively. Therefore, the advantages of LGT are more obvious when re-Ranking is not used. However, Fig.$\color{red}4$ also shows that the performance improvement brought by LGT is not stable enough because of the obvious fluctuation in LGT, while the performance improvement brought by GGT is very stable. Therefore, we improve the stability of the method by combining GGT with LGT.
\textbf{Evaluation by Combining GGT with LGT}. First, we fix the hyper-parameter value of GGT to $p=0.05$, then keep this control variable unchanged to further determine the hyper-parameter of LGT. Finally, we take the hyper-parameter $p_r$ of LGT to be 0.1, 0.2, ···, 0.7 to conduct combination experiments of GGT and LGT, and conduct 3 independent repeated experiments for each parameter $p_r$ to get the average value. The result is shown in Fig.$\color{red}5$. It can be seen that the performance improvement brought by the combination of GGT and LGT is more stable, with less fluctuation, and the comprehensive performance of the model is best when the hyper-parameter value of LGT is $p_r=0.4$.
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.1cm}
\setlength{\belowcaptionskip}{-0.4cm}
\centering
\includegraphics[width=1\linewidth]{GST.jpg}
\caption{Diagram of Global Sketch Transformation (GST) and Local Sketch Transformation (LST).}
\end{figure}
\subsection{Comparison Experiments}
\textbf{Performance comparison and analysis}. We first evaluate the baseline \cite{zheng2018discriminatively} on the Market-1501 dataset\cite{market1501}. To be consistent with recent works, we follow the new training/testing protocol and conduct our experiments with k-reciprocal re-ranking (RK)~\cite{re_ranking}. As can be seen from Fig.$\color{red}3$ and Fig.$\color{red}4$, our method improves the baseline by 1.2\% on Rank-1 and 3.3\% on mAP, and by 1.5\% on Rank-1 and 2.1\% on mAP under the same conditions using re-Ranking\cite{re_ranking}.
Secondly, we further test the method in this paper on the better-performing baselines\cite{stong_baseline,FastReID}. As we can see from Table.$\color{red}1$ to Table.$\color{red}3$, the best results of our method improve Rank-1 and mAP by 0.6\% and 1.3\% on the strong baseline\cite{stong_baseline}, respectively, and by 0.8\% and 0.5\% above the baseline under the same conditions using re-Ranking\cite{re_ranking}. On FastReID\cite{FastReID}, our method is 0.2\% and 0.9\% higher than the baseline in Rank-1 and mAP, respectively, and 0.1\% and 0.3\% higher than the baseline when using re-Ranking.
The default configuration on the Strong Baseline\cite{stong_baseline} and FastReID\cite{FastReID} uses data augmentation such as random flipping\cite{simonyan2014very}, cropping\cite{krizhevsky2012imagenet}, and erasing\cite{zhong2020random}. The method proposed in this paper further improves the model accuracy on the basis of using them, which shows that our method can be combined with other data augmentation methods.
\begin{table}[]
\centering
\setlength\tabcolsep{3pt
\caption{Performance comparison on Market1501 dataset.}
\begin{tabular}{ccc}
\hline
\multirow{2}{*}{Methods} & \multicolumn{2}{c}{Market1501} \\ \cline{2-3}
& Rank-1(\%) & mAP(\%) \\ \hline
IANet\cite{IANet} & 94.4 & 83.1 \\
DGNet\cite{zheng2019joint} & 94.8 & 86.0 \\
SCAL\cite{SCAL} & 95.8 & 89.3 \\
Circle Loss\cite{Circle} & 96.1 & 87.4 \\
SB\cite{stong_baseline} & 94.5 & 85.9 \\ \hline
SB\cite{stong_baseline} + RK\cite{re_ranking} & 95.4 & 94.2 \\
SB + GGT(ours) & \textbf{94.6} & 85.7 \\
SB + GGT+ RK(ours) & \textbf{96.2} & \textbf{94.7} \\
SB + LGT(ours) & \textbf{95.1} & \textbf{87.2} \\
SB + LGT + RK(ours) & \textbf{95.9} & \textbf{94.4} \\ \hline
FastReID\cite{FastReID} & 96.3 & 90.3 \\
FastReID + RK & 96.8 & 95.3 \\
FastReID + GGT(ours) & \textbf{96.5} & \textbf{91.2} \\
FastReID + GGT + RK(ours) & \textbf{96.9} & \textbf{95.6} \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\setlength\tabcolsep{3pt
\caption{Performance comparison on DukeMTMC dataset.}
\begin{tabular}{ccc}
\hline
\multirow{2}{*}{Methods} & \multicolumn{2}{c}{DukeMTMC} \\ \cline{2-3}
& Rank-1(\%) & mAP(\%) \\ \hline
IANet\cite{IANet} & 87.1 & 73.4 \\
DGNet\cite{zheng2019joint} & 86.6 & 74.8 \\
SCAL\cite{SCAL} & 89.0 & 79.6 \\ \hline
SB\cite{stong_baseline} & 86.4 & 76.4 \\
SB + RK\cite{re_ranking} & 90.3 & 89.1 \\
SB + GGT(ours) & \textbf{87.8} & \textbf{77.3} \\
SB + GGT+ RK(ours) & \textbf{90.9} & \textbf{89.2} \\
SB + LGT(ours) & \textbf{87.3} & \textbf{77.3} \\
SB + LGT + RK(ours) & \textbf{91.0} & \textbf{89.4} \\ \hline
FastReID\cite{FastReID} & {92.4} & {83.2} \\
FastReID + RK & 94.4 & 92.2 \\
FastReID + LGT(ours) & \textbf{92.8} & \textbf{84.2} \\
FastReID + LGT + RK(ours) & \textbf{94.3} & \textbf{92.7} \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\setlength\tabcolsep{3pt
\caption{Performance comparison on MSMT17 dataset.}
\begin{tabular}{ccc}
\hline
\multirow{2}{*}{Methods} & \multicolumn{2}{c}{MSMT17} \\ \cline{2-3}
& Rank-1(\%) & mAP(\%) \\ \hline
IANet\cite{IANet} & 75.5 & 46.8 \\
DGNet\cite{zheng2019joint} & 77.2 & 52.3 \\
RGA-SC\cite{zhang2020relation} & 80.3 & 57.5 \\
SCSN\cite{chen2020salience} & 83.8 & 58.5 \\
AdaptiveReID\cite{ni2021adaptive} & 81.7 & 62.2 \\ \hline
FastReID\cite{FastReID} & 85.1 & 63.3 \\
FastReID + GGT(ours) & \textbf{86.2} & \textbf{65.3} \\
FastReID + GGT\&LGT(ours) & \textbf{86.2} & \textbf{65.9} \\ \hline
\end{tabular}
\end{table}
\textbf{Cross-domain tests}. Cross-domain person re-identification aims at adapting a model trained on a labeled source-domain dataset to another target-domain dataset without any annotation. It is pointed out by\cite{stong_baseline} that higher accuracy of a model does not mean that it has better generalization capacity. In response to this potential problem, we use cross-domain tests to verify the robustness of the model. Experiments show that the proposed method effectively enhances the generalization capacity of the model. To explore the effectiveness of the proposed method in the cross-domain setting, we use GGT to conduct cross-domain experiments between the two datasets Market-1501\cite{market1501} and DukeMTMC\cite{duke} on the strong baseline\cite{stong_baseline}. The experiments are shown in Table.$\color{red}4$.
In Table 4, +REA means that the trick of Random Erasing is used in model training, -REA means turning it off. Experimental results show that random erasing\cite{zhong2020random} can also significantly improve the performance of the ReID model, but it will cause a significant drop in cross-domain performance. The proposed method can not only significantly improve the cross-domain performance of the ReID model, but also be more robust because of learning more discriminative features.
\begin{table}[]\small
\centering
\setlength\tabcolsep{1pt
\caption{The performance of different models is evaluated on cross-domain dataset. M→D means that we train the model on Market1501\cite{market1501} and evaluate it on DukeMTMC\cite{duke}.}
\begin{tabular}{cclcl}
\hline
\multirow{3}{*}{Methods} & \multicolumn{4}{c}{Cross-Domain} \\ \cline{2-5}
& \multicolumn{2}{c}{M→D} & \multicolumn{2}{c}{D→M} \\ \cline{2-5}
& \multicolumn{2}{c}{Rank-1/mAP(\%)} & \multicolumn{2}{c}{Rank-1/mAP(\%)} \\ \hline
SB\cite{stong_baseline}+REA\cite{zhong2020random}+RK & \multicolumn{2}{c}{33.6/24.3} & \multicolumn{2}{c}{51.6/32.3} \\
SB+REA+GGT+RK(ours) & \multicolumn{2}{c}{\textbf{37.8/27.8}} & \multicolumn{2}{c}{\textbf{55.4/35.7}} \\
SB-REA+RK & \multicolumn{2}{c}{45.5/37.0} & \multicolumn{2}{c}{58.2/37.8} \\
SB-REA+GGT+RK(ours) & \multicolumn{2}{c}{\textbf{48.2/37.9}} & \multicolumn{2}{c}{\textbf{65.0/43.7}} \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\setlength\tabcolsep{8pt
\caption{Performance comparison between our LGT conversion and DGNet data augmentation on Market1501.}
\begin{tabular}{cclcl}
\hline
\multirow{2}{*}{Methods} & \multicolumn{4}{c}{Market1501} \\ \cline{2-5}
& \multicolumn{2}{c}{Rank-1} & \multicolumn{2}{c}{mAP(\%)} \\ \hline
Baseline\cite{zheng2018discriminatively} & \multicolumn{2}{c}{88.8} & \multicolumn{2}{c}{71.6} \\
Baseline + DGNet\cite{zheng2019joint} & \multicolumn{2}{c}{{88.9}} & \multicolumn{2}{c}{{72.1}} \\
Baseline+ LGT(ours) & \multicolumn{2}{c}{\textbf{90.0}} & \multicolumn{2}{c}{\textbf{74.9}} \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\setlength\tabcolsep{3pt
\caption{Cross-domain performance comparison between our LGT and DGNet on Market1501.}
\begin{tabular}{cclcl}
\hline
\multirow{2}{*}{Methods} & \multicolumn{4}{c}{Market1501→DukeMTMC} \\ \cline{2-5}
& \multicolumn{2}{c}{Rank-1} & \multicolumn{2}{c}{mAP} \\ \hline
Baseline\cite{zheng2018discriminatively} & \multicolumn{2}{c}{37.8} & \multicolumn{2}{c}{27.0} \\
Baseline + DGNet\cite{zheng2019joint} & \multicolumn{2}{c}{{36.7}} & \multicolumn{2}{c}{{25.6}} \\
Baseline+ LGT(ours) & \multicolumn{2}{c}{\textbf{39.7}} & \multicolumn{2}{c}{\textbf{27.9}} \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\setlength\tabcolsep{3pt
\caption{Performance comparison on Market1501 dataset.}
\begin{tabular}{cclcl}
\hline
\multirow{2}{*}{Methods} & \multicolumn{4}{c}{Market1501} \\ \cline{2-5}
& \multicolumn{2}{c}{Rank-1(\%)} & \multicolumn{2}{c}{mAP(\%)} \\ \hline
Baseline\cite{zheng2018discriminatively} & \multicolumn{2}{c}{88.8} & \multicolumn{2}{c}{71.6} \\
Baseline + RK\cite{re_ranking} & \multicolumn{2}{c}{\textbf{90.5}} & \multicolumn{2}{c}{\textbf{85.2}} \\
Baseline+ GST+LST(ours) & \multicolumn{2}{c}{\textbf{88.9}} & \multicolumn{2}{c}{\textbf{72.6}} \\
\multicolumn{1}{l}{Baseline+ GST+LST+RK(ours)} & \multicolumn{2}{c}{\textbf{91.2}} & \multicolumn{2}{c}{\textbf{86.8}} \\ \hline
\end{tabular}
\end{table}
\begin{table}[]\small
\centering
\setlength\tabcolsep{2pt
\caption{Performance comparison between our RCD and Adversarial Feature Learning on Sketch ReID dataset.}
\begin{tabular}{cccc}
\hline
Sketch ReID dataset & Rank-1(\%) & Rank-5(\%) & Rank-10(\%) \\ \hline
AFL\cite{sketch-criminal} & 34.0 & 56.3 & 72.5 \\
GST+LST(ours) & \textbf{42.5} & \textbf{70.0} & \textbf{87.5} \\ \hline
\end{tabular}
\end{table}
\begin{figure}[]
\setlength{\abovecaptionskip}{0.1cm}
\setlength{\belowcaptionskip}{-0.4cm}
\centering
\includegraphics[width=0.8\linewidth]{tsne.jpg}
\caption{t-SNE~\cite{tsne} visualization of six randomly selected images with different identities on Market1501\cite{market1501}. Each image corresponds to randomly generated variants with color deviation. The same color means that the features are obtained by transformations of the same image. Dots denote the original examples.}
\end{figure}
\textbf{Comparison with the state of the art}. A comparison of the performance of our method with the state-of-the-art method DGNet\cite{zheng2019joint} on Market1501\cite{market1501} is shown in Table.$\color{red}5$ and Table.$\color{red}6$. As can be seen from Table.$\color{red}5$, our method delivers a performance improvement that far exceeds that of DGNet, the state-of-the-art GAN-based method, by more than 2.7 percentage points on mAP, which suggests that the proposed method is superior to existing data augmentation.
As can be seen from Table.$\color{red}6$, the generalization ability of the proposed method in cross-domain tests is improved by 1.9 percentage points in Rank-1 compared with the baseline\cite{zheng2018discriminatively}, which further shows that the proposed method is better than existing data augmentation based on generative models. It is worth noting that when the data generated by DGNet is used for model training, the cross-domain performance of the model is poor, which confirms the point of this paper that color deviations are difficult to exhaust, and that instead of enhancing robustness to input changes by generating a variety of data for the model to "see" during training, it is better to implicitly reduce the weight the model places on color information as a discriminative feature.
\textbf{Cross-modal retrieval}. Another form of the strategy proposed in this paper takes the sketch image as the intermediary for balancing the weights. By applying the proposed Global Sketch Transformation (GST) and Local Sketch Transformation (LST), the image is transformed into a homogeneous sketch-style image, as shown in Figure.$\color{red}6$. This can not only improve the robustness of the model, but also realize sketch-based ReID, as we can see clearly from Table.$\color{red}7$ and Table.$\color{red}8$.
In terms of cross-modal retrieval\cite{song2017deep,sketch-criminal,basaran2020efficient}, in order to match images of different modalities, existing approaches usually achieve cross-modal retrieval with the help of attention mechanisms\cite{song2017deep}, multi-stream networks\cite{basaran2020efficient}, and generative adversarial networks\cite{sketch-criminal}. Lu et al. proposed a cross-domain adversarial feature learning (AFL) method for sketch re-identification and contributed the sketch character re-identification dataset\cite{sketch-criminal}.
In order to make a fair comparison, the same as AFL\cite{sketch-criminal}, the method proposed in this paper is first trained on the Market-1501 dataset and then fine-tuned on the Sketch ReID dataset. In terms of parameter setting, this paper uses a 5\% Global Sketch Transformation probability and a 70\% Local Sketch Transformation probability. The experimental results show that the performance improvement in sketch re-identification is more than 8\%. This experiment also shows the generality of the proposed method.
\begin{figure}[]
\setlength{\abovecaptionskip}{0.1cm}
\setlength{\belowcaptionskip}{-0.4cm}
\centering
\includegraphics[width=1\linewidth]{GradCam.jpg}
\caption{Comparison of Grad-CAM~\cite{Grad-CAM} activation maps between the normally trained model and the model trained with RCD.}
\end{figure}
\textbf{Visualization analysis}. As shown in Figure $\color{red}7$, the model trained with RCD is robust to color variations. Therefore, we can observe that the features of examples with color deviation exhibit better clustering effects.
Grad-CAM~\cite{Grad-CAM} uses the gradient information flowing into the last convolutional layer of the CNN to visualize the importance of each neuron in the output layer for the final prediction, by which it is possible to visualize which regions of the image have a significant impact on the prediction of a model. As shown in Figure $\color{red}8$, we can see that the normally trained model activates irrelevant parts in the case of severe color deviation, while the model trained with RCD still effectively activates the important parts.
\section{Conclusion}
In this paper, we propose a simple, effective, and general strategy that can be applied in computer vision to overcome color deviation. The method neither requires large-scale training like GANs nor introduces any noise. It uses a random homogeneous transformation to model the relationships between different modalities. The model balances the weights between color features and discriminative non-color features by fitting differentiated homogeneous information in a mixed domain with information bias during the training process, thus reducing the negative impact of color deviation on ReID. In addition, this paper reveals the intrinsic reasons why networks trained with RCD can outperform ordinary networks from a classification perspective. Experiments on several datasets and baselines show that the proposed method is effective and outperforms the state of the art, and additional experiments show that the proposed strategy has natural cross-modal properties.
{\small
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
\textit{Where should we send the police? Who should we give housing to? How should we educate our children? Who should we give unemployment benefits to? Which families should we investigate for child abuse?} AI-based predictive algorithms are being used or are being considered for use across all of these everyday public sector decisions \cite{holstein2018student,chouldechova2018case,brayne2017policing,toros2018homeless,panoptykon2015unemployed}. Many of these technologies have faced public scrutiny and opposition. For example, in St. Paul, Minnesota, an algorithm intended to assess which children were at risk of getting involved in the juvenile justice system was blocked by a group of impacted parents and teachers who organized to oppose it \cite{pomeroy2019community}. While some government agencies have established track records of community engagement around the deployment of new technologies, the perspectives of stakeholders who will be most impacted by algorithms are not always adequately considered \cite{brown2019toward,holtenmoller2020shifting,robertson2020if,zhu2018value}.
In this paper, we aim to address the following research question: \textit{What do impacted stakeholders think about data-driven technologies in the child welfare system?} To do so, we held seven workshops with 35 expert stakeholders who are personally impacted by child protective services (CPS) and/or work in CPS. We first explained to our participants how current data-driven predictive risk models (henceforth \textit{PRMs}) are designed and used. We then talked with participants about their perspectives on these technologies. We also encouraged participants to weigh in on whether current PRMs address the main problems they see in CPS, and to imagine other possibilities for data and data-driven tools beyond current PRMs. Prior work with impacted stakeholders has explored the design and use of PRMs \cite{brown2019toward}. Our study is the first in academic ML and HCI to ask stakeholders whether these technologies should be used at all and to imagine new futures beyond them. Yet, these conversations have been ongoing outside these academic disciplines \cite{webeimagining2020mothers,endup2021}.\footnote{See, e.g., the 2021 upEND Movement Convening keynote with Derecka Purnell and Dorothy Roberts: \url{https://youtu.be/udIq9oRDcDQ}.}
Our participants brought up several important themes: In Section~\ref{sec:participant-concerns}, we note that most participants opposed current PRMs because they saw them as exacerbating existing problems in CPS. These findings are consistent with, yet more specific and more critical than, prior work \cite{brown2019toward}. We present these first as a primer to more novel, constructive suggestions in Sections~\ref{sec:new-uses}, ~\ref{sec:guidelines}, and \ref{sec:low-tech-alternatives}. In Section~\ref{sec:new-uses}, we present participants' suggestions for new data-driven tools beyond PRMs which better support impacted communities, e.g. to evaluate the child welfare system and the people who work in it, to recommend mandated reporters when \textit{not} to make a report, and to allocate resources to families to prevent child maltreatment. In Section~\ref{sec:guidelines}, participants recommended guidelines to mitigate possible harms of PRMs if they must be used in the future. In Section~\ref{sec:low-tech-alternatives}, participants suggested low-tech and no-tech alternatives better address the problems that motivate the use of PRMs. Overall, our work advances ongoing discussions around data-driven tools in CPS. We argue against current PRMs, and give new avenues to work in solidarity with impacted communities, beyond just designing algorithms for CPS agencies.
\section{Related Work}
\label{sec:related-work}
\subsection{Algorithms in child welfare}
CPS agencies have been using checklist-style actuarial risk assessments (henceforth \textit{diagnostic checklists}), such as Structured decision-making (SDM) \cite{sdm}, for decades to assess how likely they think a family is to harm their children. Many agencies also use \textit{practice models} such as Signs of Safety (SofS) and Safety Organized Practice (SOP) \cite{turnell1997aspiring} as decision-making guides, often in conjunction with diagnostic checklists \cite{mickelson2017assessing}. For a case study of diagnostic checklists, see \cite{saxena2021framework}. \citet{saxena2020human} note that predictive risk models (PRMs) which apply machine learning to administrative data have grown in popularity since around 2015. Some PRMs have been developed by private companies \cite{eckerdrsf,mindshare,sas}. However, due to high error rates and proprietary opacity, many have been dropped \cite{nccpr2017losangeles,nash2017losangeles,jackson2017illinois}. Other PRMs have been developed through public-academic partnerships \cite{vaithianathan2017,chouldechova2018case,riley2018can,douglascounty}. PRMs are currently being used or deployed in (at least) Pennsylvania, New York, Florida, Washington, Oregon, Colorado, and California \cite{aclu2021family}. For an extensive list of algorithms used in the U.S. child welfare system, see \cite{aclu2021family} or \cite{saxena2020human}. PRMs have been deployed in response to racial biases and disparities \cite{dettlaff2011disentangling,Kim2017lifetime}, inaccurate and inconsistent decisions, child fatalities \cite{netflix2020gabrielfernandez}, etc. Proponents of PRMs argue they make more accurate decisions than both workers and diagnostic checklists; and that they make more consistent, objective, and equitable decisions \cite{dare2016ethical,chouldechova2018case,hurley2018algorithm,stack2018cyf,dhs2019impactsummary}. Some critics disagree with these points, arguing that PRMs are still discriminatory and still too inaccurate \cite{eubanks2018automating,nccpr2018predictive,church2017silver}. Others argue that PRMs risk ``coding over the cracks'' without addressing the foundational flaws in child welfare, and that communities should instead organize around systemic improvements to address these flaws \cite{glaberson2019coding}. Others still argue that CPS is not a flawed system but a carceral one that plays a dual, paradoxical role \cite{pelton1994,roberts2002shattered,roberts2007paradox, copeland2021only} to police families while supporting them --- and that the supportive, ``welfare'' side is an over-stated veneer to cover up the real carceral side \cite{roberts2022torn}. These critics argue that PRMs introduce new ways for CPS to police Black, Indigenous, and poor families \cite{abdurahman2021calculating,roberts2019digitizing,roberts2022torn}.
\subsection{Participatory algorithm design}
Influenced by action research and the work of Paulo Freire \cite{freire1972pedagogy}, \textit{participatory design} was developed around the 1970s by Scandinavian researchers working to gain workers more power over the design of technologies they use on the job \cite{kyng1979systems,sandberg1979computers,bjerknes1987computers,gregory2003scandinavian}. Participatory methods have since become a mainstay in HCI and CSCW \cite{muller1993participatory,kensing1998participatory}, but have been broadened beyond their Marxist roots \cite{spinuzzi2002scandinavian,bjorgvinsson2010participatory}. More recently, many have called for increased participation to ensure that diverse stakeholders' perspectives, needs, and values are reflected in the design of AI systems \cite{paml,loi2018pd,varshney2021participatory,pair2020boundary,zhu2018value,wong2020democratizing}. Yet, without clear political motivations beyond ``democratization'' of AI governance, participatory work in ML differs widely based on ``which stakeholders are involved'' and ``what is on the table'' \cite{delgado2021stakeholder,wolf2018changing,sloane2020participation}. Some propose consulting ``the public'' or broadly-defined ``stakeholders'' on their preferences around specific, technical design decisions \cite{lee2019webuildai,awad2018moral,noothigattu2018voting,kahng2019statistical,grgichlaca2018,ilvento2019metric,bechavod2020metric,jung2021algorithmic,johnston2020preference,robertson2020if}. Others intentionally work with specific groups who are most impacted by these technologies, yet still do not empower impacted stakeholders to engage in broader design decisions \cite{brown2019toward,holtenmoller2020shifting,saxena2020participatory,cheng2021soliciting,smith2020community,aragon2022human,halfaker2020ores}. While more common across HCI and CSCW, less work in participatory ML empowers stakeholders to decide on the ``scope and purpose for AI, including whether it should be built or not'' \cite{delgado2021stakeholder}. Specifically around the design of algorithms in child welfare,\footnote{This point might be broadened to public algorithms in general, e.g. \cite{holtenmoller2020shifting}. Though, forthcoming work centers people seeking government services \cite{scott2022algorithmic}.} prior participatory work has either collaborated with government agencies or solely engaged with government workers in their studies \cite{brown2019toward,saxena2020participatory,kawakami2022partnerships,cheng2022disparities,kawakami2022exploring}.\footnote{Harding argues ``value-neutral'' sciences side with the powerful, e.g. ``the welfare department instead of the people who were receiving welfare'' \cite{harding2016standpoint}.} Most similar to our work, \citet{brown2019toward} partnered with a CPS agency to aid the development of a PRM by conducting participatory design workshops where they asked workers and community stakeholders about scenarios related to specific design choices. Our work differs from \citet{brown2019toward} in that we: 1) worked independently of a CPS agency, 2) asked whether PRMs should be used in the first place, and 3) asked open-ended questions about other technologies or non-technical changes beyond just designing algorithms for CPS agencies. Our approach can be seen as human-centered \cite{chancellor2019who,aragon2022human,chancellor2022practices}: where the humans that we center are impacted communities, not government agencies.
Drawing from standpoint theory \cite{collins1997standpoint,harding2004feminist} and the Marxist roots of participatory design \cite{gregory2003scandinavian},\footnote{As Ehn describes: ``In the interest of emancipation, we deliberately made the choice of siding with workers and their organisations'' \cite{ehn1993scandinavian}.} we engaged with parents and workers who were most impacted by, but most disempowered around, decisions on data and technologies in CPS to better understand a ``view of technology \textit{from below}'' \cite{abdurahman2021body}.\footnote{While frontline CPS workers have power over families, they have little say around their working conditions nor the technologies they use \cite{cheng2022disparities,kawakami2022partnerships}.} These methodological differences may have led to novel suggestions in Sections~\ref{sec:new-uses}, \ref{sec:low-tech-alternatives}, and \ref{sec:guidelines}, which go beyond those uncovered in prior community-engaged research.
\section{Background}
\label{sec:background}
Figure~\ref{fig:intro-explanations-overview} demonstrates how data-driven predictive risk models (PRMs) work and how they are used currently in child welfare. Many U.S. child welfare agencies currently use PRMs, mostly to assist workers in ``front end'' decisions, such as which families to investigate or how to investigate them \cite{eckerdrsf,vaithianathan2017,aclu2021family}. A few agencies are starting to use PRMs to allocate services to families before they are reported or to prevent foster care placement \cite{dana2019predictive,abdurahman2021calculating,hellobabyfaq}. No agencies currently use PRMs in decisions after investigation, e.g. in court; however, there are currently no regulations around how PRMs can or cannot be used. Figure~\ref{fig:data-algorithm-intro} demonstrates how a typical PRM is developed and used in CPS. Different agencies or PRMs can use different kinds of data. However, most algorithms use family demographics (excluding race) and past CPS data, e.g. about prior reports on the family \cite{chouldechova2018case,goldhaber2019impact,saxena2020human}; some use other governmental data, e.g. criminal, public health, or public benefits data \cite{vaithianathan2017}. Many PRMs are designed to predict the likelihood of some observable proxy for abuse or neglect, since abuse and neglect themselves are vague and rarely directly observable \cite{saxena2020human}. A machine learning (ML) algorithm then uses this data to train a model (the PRM). Finally, this PRM is applied to new case data and the PRM's assessment ---interpreted as the likelihood of some proxy for abuse or neglect--- is shown to CPS workers, who use it when making decisions \cite{saxena2020human,kawakami2022partnerships}. Although no PRM is currently used to fully automate decisions, some suggest this is possible \cite{eubanks2018automating,nccpr2019racial,cheng2022disparities,De-Arteaga2020case}. Others note that automation is a spectrum: CPS agencies can pressure workers to conform to PRMs' recommendations in some cases more than others \cite{cheng2022disparities,kawakami2022partnerships}.
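To make the training-and-scoring pipeline in Figure~\ref{fig:data-algorithm-intro} concrete, below is a minimal, hypothetical sketch in Python. The feature names, the proxy label, the choice of logistic regression, and the 0.5 threshold are illustrative assumptions on our part and do not describe any deployed PRM.
\begin{verbatim}
# Minimal, hypothetical PRM-style pipeline (illustration only;
# features, label, model, and threshold are assumed, not real).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "historical case data": one row per past case.
# Assumed columns: prior_reports, parent_age, months_on_benefits.
X_train = rng.random((1000, 3))
# Proxy outcome label, e.g. re-referral within two years.
y_train = rng.integers(0, 2, size=1000)

prm = LogisticRegression().fit(X_train, y_train)

# Score a new case; this score (often binned into "low"/"high
# risk") is what would be shown to a worker.
new_case = np.array([[0.2, 0.7, 0.1]])
score = prm.predict_proba(new_case)[0, 1]
label = "High risk" if score >= 0.5 else "Low risk"
print(f"risk score = {score:.2f} ({label})")
\end{verbatim}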
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/cps-decision-making-overview}
\caption{Front-end child welfare decision-making where PRMs are currently used, and the workers involved.}
\label{fig:decision-making-intro}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/data-algorithm-overview.png}
\caption{A simplified diagram of how PRMs are trained and used on specific cases.}
\label{fig:data-algorithm-intro}
\end{subfigure}
\caption{Diagrams shown to participants in Activity 1 of the workshops to explain how current PRMs work and where they are used.}
\label{fig:intro-explanations-overview}
\Description{Diagrams shown to participants in Activity 1 of the workshops. There are two subfigures in this figure: one on the left and one on the right. The subfigure on the left is a flowchart diagram of the steps of front-end decisions in child welfare. The diagram contains light blue figures of first a person reporting a case to CPS, then a worker taking that call, then a caseworker and a supervisor who investigate the cases, then two arrows stemming from the caseworker saying whether the case was substantiated or not. The subfigure on the right is a simplified diagram of how PRMs are trained and used. It contains a grid with numerical and categorical variables which represents historical case data. Then there is an arrow next to that grid pointing right towards three columns of data. Each column of data represents a new case to apply the PRM to. Under each column is a red box that says High risk or a green box that says Low risk, which represents the PRM's label based on patterns in the data in that column.}
\end{figure}
\section{Methods}
\label{sec:methods}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{figures/workshop-protocol}
\caption{Outline of the study protocol, including the three workshop activities.}
\label{fig:protocol}
\Description{Outline of the workshop protocol, which contains three activities, each represented as a box in the figure. The three boxes are separated with right-pointing arrows. The left box is captioned ``Activity 1: Background.'' The image is the same as Figure~\ref{fig:intro-explanations-overview}, a simplified visual explanation of how PRMs are trained using historical county data and how they are applied on new case data. The middle box is captioned ``Activity 2: Questions.'' It depicts a Zoom call where workshop participants are answering questions, which are then written down on a shared Google Doc. The right box is captioned ``Activity 3: Co-design.'' It depicts a Zoom call where workshop all participants are organized in a grid talking to each other.}
\end{figure}
Our work takes a human-centered, participatory approach to the design and use of predictive risk models (PRMs) and data-driven technologies in child welfare. We conducted 7 workshops with 35 participants total. Workshops were conducted over Zoom, each with 4 to 7 participants who were impacted by or worked in CPS.
\begin{table*}[h]
\resizebox{16cm}{!}{%
\begin{tabular}{|p{0.45cm}|p{6cm}|p{0.35cm}|p{0.35cm}||p{0.45cm}|p{6cm}|p{0.35cm}|p{0.35cm}|}
\hline
\textbf{ID} & \centering{\textbf{CPS Personal or Job Experience}} & \textbf{Q1} & \textbf{Q2} & \textbf{ID} & \centering{\textbf{CPS Personal or Job Experience}} & \textbf{Q1} & \textbf{Q2}\\ \hline
\textbf{P1} & ED of private CPS agency; former foster youth & NA & Yes & \textbf{P19} & CPS worker & No & Yes\\ \hline
\textbf{P2} & PhD student studying public algorithms & NA & No & \textbf{P20} & Impacted parent; ED of parent advocacy group & Yes & No\\ \hline
\textbf{P3} & Attorney (family law \& ICWA) & NA & No & \textbf{P21} & Impacted parent; assistant editor & Yes & No\\\hline
\textbf{P4} & DHS licensor & Yes & Yes & \textbf{P22} & Impacted parent; parent trainer & Yes & No\\\hline
\textbf{P5} & CPS worker & NA & Yes & \textbf{P23} & Impacted parent; parent advocate & Yes & Yes\\\hline
\textbf{P6} & PhD student studying public algorithms & NA & No & \textbf{P24} & Impacted parent; parent advocate & Yes & Yes\\\hline
\textbf{P7} & CPS management & NA & Yes & \textbf{P25} & Impacted parent; parent advocate & Yes & Yes\\\hline
\textbf{P8} & Teacher (mandated reporter) & NA & No & \textbf{P26} & Impacted parent & NA & NA\\\hline
\textbf{P9} & CPS worker; lecturer & NA & Yes & \textbf{P27} & Impacted parent; parent advocate & Yes & Yes\\\hline
\textbf{P10} & Attorney & NA & No & \textbf{P28} & Impacted parent; parent advocate trainer & Yes & Yes\\\hline
\textbf{P11} & Psychologist; attorney & NA & No & \textbf{P29} & Perinatal social worker & No & Yes\\\hline
\textbf{P12} & Impacted parent & Yes & No & \textbf{P30} & CPS field director; academic faculty & Yes & Yes\\\hline
\textbf{P13} & ED of private services agency & NA & Yes & \textbf{P31} & CPS project manager & No & Yes\\\hline
\textbf{P14} & CPS administrator & Yes & Yes & \textbf{P32} & CPS worker & Yes & Yes\\\hline
\textbf{P15} & CPS worker & No & Yes & \textbf{P33} & Impacted parent & NA & NA\\\hline
\textbf{P16} & CPS administrator & No & Yes & \textbf{P34} & Impacted parent; parent advocate & Yes & Yes\\\hline
\textbf{P17} & MFT therapist; transracial adoptee & No & No & \textbf{P35} & Impacted parent; parent advocate & Yes & No\\\hline
\textbf{P18} & - & - & - & \textbf{P36} & Impacted parent; parent advocate & NA & NA\\\hline
\end{tabular}
}
\caption{Participants' personal or job experiences with CPS. Q1 was: \textit{Have you ever been investigated by a child welfare agency?} Q2 was: \textit{Do you have child social work experience or education?} We did not explicitly ask participants about more personal experiences to avoid harmful disclosure (see Appendix~\ref{sec:voluntary-disclosure}) \cite{harrington2019deconstructing}. We omit P18's responses, since they participated in only part of a workshop.}
\label{tab:participant-experience-short}
\Description[Participant child welfare experience]{Participant child welfare experience}
\end{table*}
\xhdr{Recruitment \& Demographics.}
Our participants were mostly impacted parents and caseworkers, plus a few private service providers, psychologists, attorneys, students, one former foster youth, and one adoptee. Table~\ref{tab:participant-experience-short} describes participants' personal and job experiences in CPS. See Table~\ref{tab:participant-demographics} in Appendix~\ref{sec:demographics-responses} for participants' demographics. The majority of participants were Black and/or Latina women in New York or California; however, there was a mix of racial/ethnic backgrounds, genders, and locations represented. 14 participants said they were impacted parents. 20 said they worked for a CPS agency or had an education in child social work --- of these, at least 8 worked in public agencies. Only 2 participants had significant technical knowledge about PRMs. Children under 18 were excluded. We recruited 23 participants through an online recruitment form distributed via email using a snowball sampling approach. We reached out to multiple existing contacts who work in CPS or teach in schools of social work in the U.S. to distribute our recruitment form. We also recruited 13 participants through an existing contact in an organization for impacted parents in the northeastern region of the U.S. This organization also trained parent advocates, which is likely why many parents in our study also said they worked in CPS. Many participants had a mix of CPS experiences, e.g. workers who had been investigated. Thus, each participant reflects deep knowledge of multiple aspects of CPS and impacted communities.
\xhdr{Protocol.}
See Figure~\ref{fig:protocol} for an illustration of our study protocol. Participants were given an almost identical short survey before and after the workshop to gauge their opinions on CPS and PRMs. See Appendix~\ref{sec:survey} for a full list of survey questions and a description of responses.\footnote{As discussed in Appendix~\ref{sec:survey}, post-survey responses showed that the workshops did not significantly change participants' perspectives.} Workshops were semi-structured, starting with 10 minutes of background on PRMs (similar to Section~\ref{sec:background}) including showing Figures~\ref{fig:decision-making-intro} and \ref{fig:data-algorithm-intro}, followed by a 60-minute conversation led by three questions about CPS and PRMs (see below), ending with a 20-minute design activity to elicit ideas about how to use and design PRMs, and how to improve child welfare beyond PRMs. Throughout each of these study activities, we tried to present information about PRMs neutrally and to avoid value judgments of participants' responses, so as not to sway participants.
In the Questions phase (Activity 2) of the workshop, we asked participants three questions to center conversations:
\begin{enumerate}
\item What do you think are the goals or outcomes of an ideal system for protecting children?
\item What are some pros and cons of PRMs?
\item How should workers and interventions look in an ideal system for protecting children?
\end{enumerate}
Although the workshops were centered around PRMs, the first and third questions did not specifically mention PRMs in order to give space for participants to bring up comments or concerns about CPS in general. For each of these questions, we shared a document with participants to add their comments to. Our team of facilitators also took notes in real time. We did not erase the documents between workshops, so that participants in later workshops could comment on past participants' thoughts.
In the Co-design activity (Activity 3), we asked participants to write down at least 4 ideas to change PRMs or CPS. Then, we asked each participant to share 2 of their ideas and write those on a shared document. Finally, we asked participants whether they agreed or disagreed with other participants' ideas, and asked the group to make one collective list of ideas (without mandating consensus). Our design activity was based on Crazy Eights \cite{knapp2016sprint,designkit2021}. Though there may be drawbacks to these kinds of open-ended design activities \cite{harrington2019deconstructing}, we draw inspiration from abolitionists in ``imagining a safer world'' for Black and other minoritized people \cite{roberts2022torn}.
\xhdr{Ethics \& Institutional Review.}
To minimize the risk of unintended harms to participants, we consulted domain experts and impacted parents when designing our study \cite{pierre2021getting,harrington2019deconstructing}. Two workshops included only impacted parents to reduce the risk of conflicts or power imbalances with other kinds of stakeholders. We also asked leaders of the parent organization, who had helped with recruiting, to assist in facilitating these two workshops. We did not ask participants to disclose personal experiences with CPS (besides whether they had been investigated), due to potential harms of such disclosures \cite{harrington2019deconstructing}. However, as a result, participants may have had additional relevant personal experiences that they did not disclose to us. This study, including all questions, study materials, and recruitment methods, was approved by the Institutional Review Board of Carnegie Mellon University.
\xhdr{Qualitative Analysis.}
We transcribed all 10.5 hours of online workshop recordings into text, then used thematic analysis~\cite{braun2006thematic} to analyze our data. We conducted open coding on the data, generating over 1000 codes. We performed an affinity mapping process, comparing and clustering alike codes, then identified themes that emerged from this affinity mapping. Examples of themes include: problems with diagnostic checklists, labels and stigmatization, and decisions PRMs should not be used for. In Section~\ref{sec:results}, we present a subset of these themes which are most relevant to FAccT readers, leaving out some themes specific to child welfare which did not pertain to PRMs or future design work.
\xhdr{Positionality.}
Most of the authors of this paper are academic ML and HCI researchers, white or Asian, and have little personal CPS experience (although one author is Black and Latina and works in CPS). Our participants are mostly frontline caseworkers or Black and Latina mothers who have been in the system. The lead author, who ran all workshops, is a white man, which may have influenced participants' responses \cite{ogbonnaya-ogburu2020critical}. We anonymized participants' responses so that they could speak freely (especially workers who may be retaliated against). At the same time, we acknowledge that this may mean we quote and get academic credit for the ideas of participants with different lived experiences than most of us. Yet, we also consider ``the researcher as an active participant throughout the research process'' \cite{copeland2021only}. We see this work as depicting a conversation between us ``socially-minded'' technological researchers and our participants, who are impacted by the technologies that our field has (or we have) created.
\xhdr{Limitations.}
One limitation of our work is that we recruited few foster youth and adoptees, who may have views that differ from those of the parents and workers who made up most of our sample. Another is that our study was not geographically restricted. Because CPS differs by location, our participants' responses do not necessarily reflect a specific community (nor do we claim them to). Future work may include qualitative studies focused on former foster youth and adoptees, or focused on a specific locale (e.g. one county or city).
\section{Results}
\label{sec:results}
In this section, we present prominent themes that emerged from the workshops, which we believe to be most interesting to FAccT readers. See Table~\ref{tab:suggestion-summary} for a summary of suggestions. Section~\ref{sec:participant-concerns} outlines participants' concerns with PRMs, which many viewed as exacerbating existing problems in CPS. In Section~\ref{sec:new-uses}, participants offer suggestions for new work that researchers can do for impacted communities, beyond creating PRMs for CPS agencies. Section~\ref{sec:guidelines} includes suggestions on how to mitigate potential harms caused by PRMs if they continue to be used. Section~\ref{sec:low-tech-alternatives} offers no-tech or low-tech alternatives which may better address many of the problems that have motivated the use of PRMs.
\begin{table*}[h]
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|p{3cm}|p{11.5cm}|}
\hline
\textbf{Section} & \textbf{Participants' Suggestion}\\ \hline
\multirow{2}{3cm}{Harms of PRMs (Section~\ref{sec:participant-concerns})} & PRMs reinforce CPS' punishment, undersupport, disempowerment of families \\ \cline{2-2}
& PRMs perpetuate existing biases and racism in CPS \\ \hline
\multirow{4}{3cm}{Data \& design beyond PRMs (Section~\ref{sec:new-uses})} & Researchers \& designers should work in solidarity with impacted families (to oppose CPS) \\ \cline{2-2}
& Use data to evaluate CPS, workers, reporters, interventions, etc \\ \cline{2-2}
& Technology to recommend mandated reporters when not to report \& where to reroute calls \\ \cline{2-2}
& Use PRMs to allocate resources (but not if this expands surveillance) \\ \hline
\multirow{6}{3cm}{Mitigating PRM harms (Section~\ref{sec:guidelines})} & Strict regulations on how data \& PRMs can \& cannot be used \\ \cline{2-2}
& Regular evaluation of PRMs before \& after deployment, especially on racial biases \\ \cline{2-2}
& Give impacted families more control over CPS policy, data, \& technology decisions \\ \cline{2-2}
& Include data on CPS, workers, reporters, interventions, etc in PRMs \\ \cline{2-2}
& Do not use demographics nor zip codes in PRMs \\ \cline{2-2}
& PRMs (and CPS more broadly) should focus on strengths, rather than deficits\\ \cline{2-2}
& Do not fully automate CPS decisions \\ \hline
\multirow{5}{3cm}{Low- \& no-tech alternatives to PRMs (Section~\ref{sec:low-tech-alternatives})} & Improve hiring, training, working conditions, \& team-based decision-making \\ \cline{2-2}
& Make policy \& legislative changes to address systemic harms \\ \cline{2-2}
& Give money directly to families instead of spending on CPS or PRMs \\ \cline{2-2}
& (Maybe) use diagnostic checklists \& practice models instead of PRMs \\ \cline{2-2}
& Abolish the child welfare system \\ \hline
\end{tabular}
}
\caption{Summary of participants' suggestions presented in Section~\ref{sec:results}.}
\label{tab:suggestion-summary}
\end{table*}
\subsection{Concerns that PRMs reinforce systemic problems in CPS}
\label{sec:participant-concerns} 19 of 32 participants who responded to our survey disagreed that current PRMs would lead to better outcomes in CPS; only 5 agreed (8 were neutral). Personal experiences led many participants to hold negative views of CPS, e.g. P1, who explained her views simply: \textit{``31 years working in the system.''} Like \citet{brown2019toward}, our participants disliked PRMs due to ``system-level concerns''; yet our participants gave more pointed criticisms.
\textbf{Participants disliked PRMs for further entrenching CPS in what they saw as punishment, undersupport, and disempowerment.} Participants (both parents and workers) said that CPS often punishes families instead of supporting them. P24, a parent, said, \textit{``so many people have been treated badly... when they were on a good foot, but because they don't have enough support, certain things got out of hand, and they wasn't given the opportunity to pick up the pieces... They just automatically get scolded and child removed.''} P9, a caseworker, echoed this, using the disparate treatment of foster families versus original families as an example: \textit{``we punish the [original] parents for not doing all the right things''} while \textit{``our foster family agencies have a plethora of resources and support and funding to ensure that that child's needs are met.''}\footnote{For example, P9 said, \textit{``on welfare, a mother of one or a few children is only going to receive between \$300 to \$400 a month, and that is now in the state of California capped out... For a foster parent... the least amount that I've seen in my county is \$1,000 a month.''}}
Participants said current PRMs would not help support families. P12 said that CPS' \textit{``goal is really to support families, and I just don't think this tool plays any role in actually supporting families.''} Rather, participants said PRMs widen surveillance by encouraging CPS to process more cases and intervene more (P1,P13,P20,P30), getting more families involved in CPS (P7,P11,P12,P13), and getting more families stuck in the system for too long (P7,P14,P27,P29,P33). P12 said that she worried PRMs would cause \textit{``more monitoring, more surveillance, more intervention in Black and Brown and poor communities.''} P27 said, \textit{``I don't trust the algorithm, because it's... been set up to just surveil Brown and Blacks.''} McMillan explains: ``It's surveillance:... Coming into someone's home, checking their drawers, cabinets, and strip searching their children, how is that support?'' \cite{webeimagining2020mothers}. A number of our participants said this exact scenario happened to them or happens regularly to people they work with, e.g. P26 said CPS came to \textit{``strip my kid butt naked and go through my cabinets and uproar and turn my whole house upside down.''} P12 described calling a domestic violence hotline for help, and instead getting investigated by CPS and having her child removed. PRMs claim to improve efficiency: Participants said this could be helpful if it got families out of the system quicker, but harmful if it got more families investigated (P7,P11,P12,P13). Participants saw similarities between PRMs in CPS and the criminal system (e.g. \cite{propublica2016compas,albright2019judge,stevenson2021algorithmic,stevenson2018assessing}) and the use of criminal data in PRMs in CPS (e.g. \cite{vaithianathan2017}) as further solidifying CPS as a carceral institution (P33,P35,P36). P33 worried PRMs would embolden CPS workers and police, allowing them to act like \textit{``attack dogs''} on families with high risk scores. P12 said PRMs would add an \textit{``extra layer''} for parents to fight through: \textit{``not only are you fighting the... agency, now you're gonna have to fight this computer system.''} Workers also worried about an extra layer of blame if they disagreed with a PRM, reinforcing a ``Cover Your Ass'' mentality (P3,P7,P9,P19).\footnote{P19 said, \textit{``If we still do have a child fatality... then it's another [reason] to be like `Well, you had this tool and this tool told you that this family needed X, Y, Z.'''}}\footnote{This is similar to treating workers in the loop as ``moral crumple zones'' \cite{elish2019crumple}.} Participants said PRMs reinforce caseworkers' power over families and their role as gatekeepers. P29 said, \textit{``the power holder is the caseworker that's inputting the information and so it's already starting from a standpoint of they're the end-all be-all.''} Finally, participants said PRMs allow designers and CPS leadership to control on-the-ground decisions and justify harms. P12 said, \textit{``Computers don't make decisions; people make decisions and program the computers to carry out those decisions. So we're not going to turn around and say, `Wow. Oh, it's the computer that's creating this decision and this is why 80\% of the children who go into foster care are from the Black communities.'''}
\textbf{Participants said PRMs perpetuate existing biases and racism in CPS.} 26 participants said they did not trust CPS to make unbiased decisions; only 1 participant said they did. P12, whose child was placed in foster care, said, \textit{``I've been through the system, I know how harmful it is and how racist it is and how destructive it is to Black and Brown and marginalized communities and even poor people.''} Participants thought PRMs would not address the most prominent causes of biases based on race or class, such as laws and policies which justify differential treatment of poor and Black families within CPS, or biased reporting outside CPS. Participants also thought that PRMs would not eliminate workers' biases because they still allowed for worker discretion (P1,P5,P6,P7,P9,P15,P16),\footnote{As we discuss further in Section~\ref{sec:guidelines}, many also did not want to fully automate decisions with algorithms (P2,P6,P10,P14,P17).} and would even exacerbate racial biases because of biased or ``dirty'' data. \Citet{brown2019toward} heard similar worries about biased workers and data; yet our participants go further, saying that PRMs reinforce racism and classism in CPS. Participants said CPS stigmatizes poverty (P1,P2,P5,P10,P14,P33,P36) and PRMs which use governmental records and demographics justify and exacerbate this (P1,P2,P5,P33). Overall, most participants suggested that PRMs were at best ineffectual, and at worst counterproductive, at mitigating existing discrimination and disparities based on race and class (see \cite{dettlaff2020racial}). Some participants thought PRMs would perpetuate or exacerbate other existing biases, e.g. against former foster youth (P1,P5,P36) or people with mental illnesses (P36).
There were some exceptions to these overall sentiments. Even those who liked PRMs, though, said only that they might reduce individual workers' biases and improve decision-making, not that they would address systemic issues. P4 said larger reforms were needed to address systemic discrimination, but that these changes would not happen overnight. \textit{``In the meantime,''} P4 thought PRMs could help day-to-day decisions now, especially if they used the \textit{``right data''}: \textit{``If you put the correct data points in... maybe we can take some of that subjective bias out of it.''} A few other participants echoed similar sentiments about incremental benefits of PRMs coupled with systemic changes (P13,P14,P31). This sentiment of PRMs helping ``in the meantime'' has been echoed by proponents of PRMs, including CPS agencies defending their use \cite{DHSResponse}. Beyond these exceptions, most participants saw current PRMs in CPS as exacerbating what they saw as CPS' tendency to punish instead of support families, particularly poor, Black, Brown, and Indigenous ones.
\subsection{Beyond PRMs: New directions to work in solidarity with impacted communities}
\label{sec:new-uses}
Although most participants opposed current PRMs, many gave constructive suggestions on how researchers and designers can use data and technologies to support impacted communities, beyond just designing PRMs for CPS agencies.
\textbf{Participants suggested that researchers and designers should work in solidarity with impacted families and communities to use data to oppose CPS} (P19,P24,P25,P36). For a number of participants, the desire for researchers to work with communities manifested through suspicion that we, the authors, were working with CPS agencies or did not have communities' interests at heart.\footnote{To reiterate, we told participants that our study was being conducted and funded independently from any agency.} P11 speculated that our study was being conducted by the \textit{``inventors of [PRMs]''} in order to \textit{``anticipate... the objections... of potentially skeptical people [so that] the sponsors will be [better equipped]... to resist the objections''} in order \textit{``to further develop their tools and sell them, and thus become prominent in their academic fields, or make money, or both.''} In another workshop, P24 asked, \textit{``What is the point of all this data-driven focus mess?''} then asked the lead author to consider whether they were doing this work to publish a paper and further their academic career or whether it was work which could actually benefit families harmed by CPS.\footnote{The lead author especially appreciates this personal confrontation to push his thinking and work in the right direction.} Given that most prior work on CPS in ML and HCI has been conducted to help develop algorithms to assess families or in partnership with CPS agencies \cite{brown2019toward,cheng2022disparities,kawakami2022partnerships,saxena2020participatory,saxena2021framework}, these suspicions seem justified. Instead, participants suggested specific ways researchers could better work in solidarity with communities. For example, participants suggested using data about families who have successfully fought CPS to produce strategies and suggestions for other impacted families to do the same (P13,P20,P24). Others suggested using data to help parent advocates verify or disprove negative and/or erroneous claims that CPS agencies make about parents (P25,P36).
\textbf{Participants suggested using data to evaluate the child welfare system and the people who work in it}, including reporters of alleged abuse, foster parents and homes, CPS workers, agencies, interventions, services, etc (P1,P2,P4,P6,P9,P12,P25,P28,P29,P30,P31,P33). Participants said administrative data collected on families reflect more on CPS and other governmental systems than they do on individual parents (P1,P3,P4,P12). P1 said, \textit{``if you've had 6 open cases, that means [CPS has] had 6 times where we weren't helpful to a family. It's measuring the system... It doesn't tell us anything about the people.''} Participants thought that data and data-driven tools (such as PRMs) should be used to assess harms caused by CPS and help communities push for change (P1,P4,P10,P23,P28). P10 said, \textit{``it doesn't make sense at all to me, why high or low risk is even what anyone thinks is being predicted... [PRMs] could just as easily be measuring the extent of racism, the extent of surveillance.''} This hearkens back to Roberts' \cite{roberts2002shattered} call to ``measure the extent of community damage caused by the child welfare system.'' For example, data-driven tools could be used to evaluate CPS workers, like they have been used on other street-level bureaucrats \cite{carton2016identifying}.
\textbf{Participants suggested designing an algorithm for mandated reporters to recommend whether and where to make a report} (P14,P19). P19 said such an algorithm should address the following questions: \textit{``Is this something I should make a call on? Is this something I should reach out to a prevention agency or agency that could possibly service the family prior to just calling it into [CPS]?''} P14 and P19 said the goal here is to reduce the number of families in the system, either by recommending not to report or rerouting calls somewhere else.
\textbf{Participants suggested using PRMs to allocate resources, but some worried this would expand surveillance and stigmatization} (P1,P2,P4,P14,P31). Although many participants said PRMs should not be used for coercive interventions, e.g. investigations or home removals, some suggested using PRMs to connect families to resources and services. P2 said, \textit{``what I would want to see in the future is using these tools to decide on resource allocation, like who should have priority for access to services; instead of starting an investigation, more framing it from a more positive and supportive side.''} Specifically, participants wanted more direct assistance to help with childcare or alleviate poverty, which many viewed as a common root cause of neglect and abuse (which is backed by prior work \cite{drake2014poverty}). P7 said, \textit{``the goal would be to... have finances available to support families in need as a preventative measure, or housing, or employment, or... medical services''} or even something like \textit{``Supernanny [to] go into homes and be there to help the family.''} Beyond individual assistance, some participants suggested community- or neighborhood-based approaches \cite{casey2000neighborhood,roberts2005community}. P1 suggested to \textit{``use data to find the top 3 zip codes where child protection is involved and get some of our local Fortune 500 companies to create living wage jobs in those zip codes.''}
However, participants also worried that expanding services provided by CPS or connected to CPS through mandated reporters would expand surveillance and place a stigma on families.\footnote{Some participants advocated for getting rid of anonymous (or all) mandated reporting to decrease the chances that assistance would lead to CPS intervention.} P15, a caseworker, said, \textit{``those who... have more contact with systems... are the ones who get reported on constantly.''} P1, a private CPS worker, said, \textit{``people can't ask for help without a report.''} P33, a parent, said, \textit{``[PRMs] put a stigma on people themselves... You know, it's not like anybody's saying, `Well, I want my significant other to run out and leave me with the child by myself and I struggle, so I had to get on welfare.' ... Basically to survive, I get a stigma.''}\footnote{P14 suggested that this may be a problem of semantics, suggesting that replacing `high risk' with `high need' might make communities more comfortable. However, other participants said they would be uncomfortable with any label from a PRM.} P20, a parent, said this leads \textit{``communities [to] hide in their struggles [rather] than say they need support or reach out for needed resources.''} Prior work describes this tension where families want more supportive resources but fear more CPS intervention \cite{roberts2008paradox,burton2021toward,roberts2022torn}. Recent work shares our participants' fear that PRMs used to allocate services will ``[sweep] into the carceral net low-risk individuals who previously would not have been on the government's punitive radar at all'' \cite{roberts2019digitizing,abdurahman2021calculating}. Empirical work suggests that PRMs which use data on public services may lead to over-surveillance of Black families \cite{cheng2022disparities}. Some participants (P14, P19) worried about using PRMs for ``preventive services,'' which are services CPS agencies offer to prevent child maltreatment or future CPS involvement \cite{antle2009prevention,casey2021prevention,testa2020evolution,waldfogel2009prevention}. Recent work \cite{abdurahman2021calculating} suggests that PRMs will increasingly be used for preventive services, due to funding from the newly-enacted Family First Prevention Services Act (FFPSA), early examples in New York and Pittsburgh to look to \cite{hellobabyfaq,abdurahman2021calculating,dana2019predictive}, and to avoid criticism like that of PRMs used for screening or investigations \cite{eubanks2018automating}. P14, an administrator, confirmed their agency is doing exactly this: \textit{``The [FFPSA] is... requiring a lot more evidence-based preventive services... One of the things that [our agency is] looking at is `What about primary or secondary prevention?' In Allegheny County, they have another... preventive risk modeling tool called Hello Baby''} \cite{hellobabyfaq}.
\subsection{Guidelines for mitigating harms of PRMs}
\label{sec:guidelines}
As stated in Section~\ref{sec:participant-concerns}, most participants opposed current PRMs. Yet, many said that if these tools were to continue being used, they would like more guidelines around their use and design to reduce harms.
\textbf{Participants wanted stricter rules on how data and PRMs can and cannot be used}, so that data collected, or tools designed, for one purpose do not end up being used for another purpose (P2,P13,P33). P2 said they \textit{``would like to see some sort of policies to be put in place that would prevent tools like this being misused in the future... [and] really strict guidelines about how we can use these tools.''} For example, local governments could implement legislation like Community Control Over Police Surveillance (CCOPS), which requires elected representatives to approve any government data or surveillance technologies (including PRMs) \cite{ccops}. Some participants said PRMs should not be used for placement decisions (P10,P12,P26,P33) nor day-to-day decisions in general (P12,P17).
\textbf{Participants said PRMs should be evaluated before and regularly after deployment} (P4,P14,P30,P35). One big reason agencies have said they use PRMs is to mitigate workers' biases and address racial disparities in the system. Our participants suggested evaluating PRMs on whether they actually do this. Recent work suggests this, as well \cite{drake2020practical,green2021flaws}. See, for example, prior work auditing PRMs \cite{goldhaber2019impact,cheng2022disparities}. However, P6 thought that evaluating whether algorithms help or harm may be difficult, especially if overall group effects such as racial disparities are improved, but individual families are harmed more. Future work on auditing algorithms should clarify how best to measure group and individual impacts.
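As one deliberately simplified illustration of the kind of pre- and post-deployment evaluation participants asked for, the sketch below computes a screen-in rate gap between two groups on synthetic scores. The threshold, the group coding, and the choice of metric are our own assumptions; per P6's point, such group-level summaries would also need to be paired with individual-level analyses.
\begin{verbatim}
# Illustrative group-disparity check on synthetic PRM scores.
# Threshold, groups, and metric are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random(2000)               # PRM risk scores
group = rng.integers(0, 2, size=2000)   # two demographic groups
screened_in = scores >= 0.7             # assumed screen-in cutoff

rate_a = screened_in[group == 0].mean()
rate_b = screened_in[group == 1].mean()
print(f"screen-in rates: {rate_a:.3f} vs {rate_b:.3f}; "
      f"gap = {abs(rate_a - rate_b):.3f}")
\end{verbatim}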
\textbf{Participants wanted more impacted families involved in CPS policy and technology decisions.} Participants said that involving communities in making decisions and setting policies can help mitigate biases at the unit or agency level (P5,P10,P14,P17,P24,P26,P29,P30). P14, an administrator, said, \textit{``I think that families are the experts of their own, particularly even youth.''} P23, a parent, said, \textit{``we have to be part of the language that's controlling and setting the laws and that's... happening at every level of engagement for our families.''} \citet{roberts2002shattered} argues for shifting control of CPS to Black families specifically. Participants said families should be more involved in how new technologies are used and created (P1,P2,P5,P7,P10,P13,P14,P15,P22,P29). P14 said, \textit{``if you're developing anything, [it] needs to be community-led and... family led.''} P29 said, \textit{``those that are creating [PRMs] should also be diverse and really reflect the communities that will be impacted by it, so that they're thinking... intentionally.''} P22 said that PRMs might help make more equitable decisions \textit{``if they understood parents more.''}
\textbf{Participants said PRMs evaluating families should at least include data on CPS, reporters, workers, foster parents or homes, agencies, interventions, services, etc} (P1,P4,P7,P10,P30). P1 said, \textit{``I don't know how you create these tools to measure the right thing if the data that goes in doesn't include specifically who the child protection social worker is, what intervention they received, and at what dosage;... unless you're measuring the other half of the equation, it's hard for me to imagine that you can get a good assessment.''} Prior work also suggests including intervention data in PRMs \cite{coston2020counterfactual}. This is feasible, since data is already collected on all parts of the system except for anonymous reporters. However, JMacForFamilies is campaigning for NY State Senate Bill S7326 to require data collection on reporters in New York \cite{jmacforfamilies}.
\textbf{Participants said PRMs (and CPS more broadly) should focus on strengths, rather than deficits of families} (P1,P3,P4,P12,P13,P16,P17,P29,P31,P35). By focusing primarily on risk factors and predicting negative outcomes, P29 worried PRMs put families \textit{``at a deficit''}. P35 worried that PRMs do not adequately \textit{``take into account the... things [families] may have done or are doing to keep [their] child safe.''} Instead, P13 said PRMs should predict \textit{``strengths and success.''} More broadly in CPS, P14 said, \textit{``the narrative that we think about families needs to shift, as well, to one of a strength-based... interaction.''} This sentiment is echoed in prior work, as well \cite{saxena2020human,holtenmoller2020shifting}.
\textbf{Participants said PRMs should not use demographics or zip codes} (P3,P12,P15,P26,P27,P28,P29,P30,P33,P36). Participants worried PRMs using zip codes and demographics (which are correlated with race and class) would justify discrimination against poor and Black families (P1,P2,P5,P33). Participants said using demographics was not new to CPS. For example, P12 said, \textit{``right now without using data analytics, they're still looking at your age, they're still looking at your zip code.''} Yet, PRMs justify this practice. Participants said using zip codes and demographics was discriminatory because these factors were irrelevant to parental (un)fitness. P28 said, \textit{``it is unfair to say `because I live in this neighborhood, that must mean I'm a shitty parent'... It's unfair... to say `6 out of 10 of my neighbors had had [a CPS] case, so it's most likely I'm gonna have [a CPS] case'.''} P26 said it \textit{``makes no sense''} to use \textit{``your demographics, or past somebody else's history to determine whether you're a fit parent... because life is unpredictable.''} Prior work argues people are unpredictable \cite{birhane2021impossibility}.
\textbf{Participants said PRMs should not automate CPS decisions} (P2,P6,P10,P14,P17). P17 said, \textit{``[full automation is] too much power, it's too much impact, and 99.9\% of the time, [the PRM] fails.''} P6 pointed out a tension between automation versus worker bias: \textit{``I don't... believe that we should just hand the entire decision-making process over to a tool... [But] if we allow a caseworker to override the tool's guidance... then is that just sort of a form of bias in itself?''}\footnote{Automation is not all or nothing: forms of `soft automation' include agencies mandating or pressuring workers to follow PRMs \cite{cheng2022disparities,kawakami2022partnerships}.} Prior work has also grappled with this tension: some argue humans in the loop often make biased decisions \cite{albright2019judge,green2021flaws}. Others argue more automation can worsen disparities and decision quality \cite{De-Arteaga2020case,eubanks2018automating,cheng2022disparities}.
\subsection{No-tech and Low-tech Alternatives to PRMs}
\label{sec:low-tech-alternatives}
Participants suggested changes they thought would better address many of the problems motivating the use of PRMs, particularly which do not require AI-based technology (low-tech) or require no technology at all (no-tech).\footnote{We borrow ``low-tech'' and ``no-tech'' from \citet{baumer2011implication}.}
\textbf{Participants suggested improving hiring, training, working conditions, and team-based decision-making instead of PRMs.} First, participants said improved hiring practices would improve decision-making and alleviate biases, instead of using PRMs (P17,P19,P23,P24,P26,P32). Some said agencies should be more selective about who they hire; P24, a parent, said, \textit{``they need to stop hiring workers who just come out of college that don't have no children or have real life experience.''} Some also thought hiring more diverse workers could decrease racial biases (P29,P31,P32).\footnote{Though, some prior work argues that diverse or ``culturally-sensitive'' workers do not resolve racialized harms or discrimination \cite{roberts2002shattered}.} Second, participants said CPS agencies should improve supervision, especially of young or inexperienced workers (which is common in CPS \cite{edwards2020characteristics}) (P10,P16). Third, participants said team-based decision-making (especially diverse teams) could alleviate workers' individual biases (P7,P9,P15,P17,P31,P32). Fourth, participants said agencies should improve worker training (P10,P14,P16,P17,P19,P24). Finally, participants said agencies should improve working conditions, such as giving workers more time to make decisions, reducing caseloads, and increasing pay (P16,P17,P26,P32). This is important, since high case volumes have been a motivation for PRM use.\footnote{For example, Emily Putnam-Hornstein said Allegheny County created the Family Screening Tool \cite{vaithianathan2017} because they ``were fielding significant volumes of calls... and they were trying to figure out whether they could use data'' to address this \cite{netflix2020gabrielfernandez}.} Participants also suggested smaller caseloads would reduce turnover, which would help retain workers who were hired and trained properly and reduce the number of new, inexperienced workers. P16, a retired administrator, said, \textit{``I have always found that workers that were well-supported ---and whatever that means to them, not as the administration defines--- can be very helpful in the longevity and the decreasing of turnover.''} P19 said PRMs should be unnecessary: \textit{``if you're a good social worker, you already know which one of your cases are more high risk and how to prioritize those cases.''} P26, a parent, said, \textit{``[CPS] staff needs to be trained better, paid better, and maybe if they had happy workers, they care about their job and what they do.''}
\textbf{Participants wanted policy and legislative changes instead of PRMs} (P3,P4,P9,P19,P24,P26,P29). Participants said a lot of systemic biases in CPS are caused by laws and policies. For example, P20 said many old laws \textit{``harm families, or target low-income Brown and Black families.''} In order to address systemic biases, participants recommended changing these laws. Participants suggested changing mandated reporting laws (P13,P19,P24). P26 suggested repealing laws and policies, like the Adoption and Safe Families Act (ASFA) \cite{adler2001asfa}. These echo growing movements to repeal ASFA \cite{repealasfa} and change mandated reporting laws \cite{jmacforfamilies}. P4 also said they want new funded mandates to get resources to communities and address systemic problems.
\textbf{Participants suggested giving money directly to communities instead of spending it on CPS services or PRMs} (P1,P2,P7,P11,P12,P13,P24,P26,P33). P12 said, \textit{``the people making [PRMs]... financially benefit,... where this money could be set to pay for housing and other basic needs.''} For example, PRM developers in Allegheny County were paid over \$1 million \cite{afstfaq}. Beyond development costs, participants also noted ongoing training and maintenance costs. P13 said, \textit{``How much it's gonna cost to train... the child protection workers [to use PRMs]... is also money that's being taken away from families.''} Allegheny County also hired specific employees (``Data Entry Specialists'') to help with data entry for their PRM \cite{vaithianathan2017}.
\textbf{Participants proposed using diagnostic checklists and practice models instead of PRMs, but others said these low-tech tools had their own problems.} Some suggested using diagnostic checklists (e.g. SDM \cite{sdm}) or practice models (e.g. SofS, SOP \cite{turnell1997aspiring}) instead of, or alongside, PRMs to alleviate workers' individual biases and improve decision-making (P1,P2,P17,P19). P1 suggested \textit{``integrating things like Signs of Safety. There are practice models... that help [workers] explore some very concrete, specific questions that help force them not to just make decisions based on their own hunch.''} However, other participants said these low-tech tools had built-in biases (see \cite{saxena2022how}) and workers frequently manipulate them (against their training) to produce any desired output (P1,P5,P6,P7,P9,P15,P16).\footnote{P5 said, \textit{``I was trained... using Signs of Safety and SOP and saying, `Well I may see this risk but I'm seeing protective factors that I think mitigate that, so I'm going to override [SDM] and not do that.'... But in practice, that's bullshit. I will override to make a 10-day an IR [Investigative Response] all the time.''}} Some said diagnostic checklists could be used better if workers were better trained and held accountable to follow the training (P5,P10,P14). Other participants thought tools should spur thought and nudge workers towards good decisions, not predict bad outcomes or give specific recommendations. P17 praised the Columbia-Suicide Severity Rating Scale (C-SSRS) \cite{posner2008columbiasuicide,posner2011columbiasuicide}: \textit{``There are some yes or no answers and it's not about ‘Oh, I want to get this kid 5150ed,'\footnotemark it's seeing what is the next step with a *thought*. So if I have information, then I use my *brain*, if I'm a human behind it. And I'm not the only one making this decision: I'm with a team.''} Finally, some participants saw PRMs as a repeat of diagnostic checklists. When presented with a list of pros and cons to PRMs, P16, a retired administrator, said, \textit{``all these things you have up here are just the same sort of precursor work they did for [SDM] before it came into play. It's no different... and in child welfare things tend to cycle back, probably, you know, decade on, decade off, decade on. So I'm just very curious about what's bringing this up again.''}
\footnotetext{5150 is involuntary hospitalization of someone with suicidal behavior. P17 uses this as an example of a label or recommendation a tool could give.}
\textbf{Some participants suggested abolishing the child welfare system and starting anew.} However, participants' thoughts on abolition were varied. At the end of one workshop, all four participants (all CPS workers) agreed that abolition would be the best solution (P29,P30,P31,P32). At the same time, a number of impacted parents who were very critical of the system said they did not think it should be abolished, but that it should be heavily reduced and reformed. Views on PRMs and abolition were also interestingly varied. P33 suggested that CPS should be reformed, but that PRMs should be abolished completely. P30 and P31 said to abolish CPS, but not PRMs: \textit{``I agree with tearing the system down. I just think that there's a place for the tools.''} P4 said that regardless of whether or not CPS is reformed or abolished, these are longer term changes and PRMs could help in the short term. See \cite{roberts2002shattered} or \cite{roberts2022torn} for more on child welfare abolition.
\section{Discussion}
\label{sec:discussion}
Here, we review novel suggestions and broader themes in Section~\ref{sec:results}, argue against the use of PRMs in child welfare, compare our study's approach with prior work, and highlight the suggestion to work in solidarity with impacted communities in the future.
\xhdr{Against predictive algorithms in CPS.} Our participants gave more novel suggestions and critical feedback than in prior participatory work with impacted communities and workers in CPS \cite{brown2019toward,saxena2020participatory,cheng2021soliciting,kawakami2022partnerships,cheng2022disparities,kawakami2022exploring}. For example, \citet{brown2019toward} suggest their participants' ``general distrust in the existing
system'' (which they somewhat vaguely describe as ``system-level concerns'') led to ``low comfort in algorithmic decision-making,'' and suggested these problems could be improved through ``greater transparency and improved communication strategies.'' Most of our participants also had ``low comfort'' in PRMs: They did not want them to be used. In Section~\ref{sec:participant-concerns}, our participants said PRMs would reify existing tendencies to punish instead of support poor, Black, and other marginalized families, and solidify existing power imbalances in CPS. Even if there are problems with PRMs, proponents argue for their use because they are better than any alternative, i.e. diagnostic checklists or nothing \cite{dare2016ethical}. In Section~\ref{sec:low-tech-alternatives}, however, participants gave low- and no-tech alternatives to address the problems motivating the use of PRMs: improved hiring, training, and working conditions; law and policy changes; giving money to families instead of CPS; and giving communities control of CPS. Overall, our participants thought PRMs are ``doing more harm than good'' and could be ``replaced by an equally viable low-tech or non-technological approach;'' thus, we argue that PRMs should not be used at all \cite{baumer2011implication}.
\xhdr{Mitigating harms of PRMs.} Our participants also gave suggestions to mitigate the harms of PRMs (likely because they knew the above arguments are unlikely to stop agencies from using them). These suggestions largely differ from standard approaches to ``trustworthy'' AI. For example, participants did not ask for ``greater transparency'' around PRMs: They asked for regulations around how PRMs can and cannot be used, better evaluations of PRMs' impacts on communities, and more decisions about PRMs being made by impacted communities. Participants suggested that ``improved communication'' would not help either: although P14 suggested calling PRM labels \textit{``high need''} instead of \textit{``high risk''}, many participants said that it matters more who is giving the labels (CPS agencies) and what they are doing with them, e.g. surveillance instead of support.
\xhdr{Agreement between workers and parents.} Critiques of PRMs and CPS did not come only from parents, but from workers as well. This is surprising, because some described conflict between parents and workers. P32 said, \textit{``many of the white social workers have no knowledge of the suffering that goes on in the lives of the individuals they serve and cannot relate to their struggle.''} However, our worker and parent participants often agreed, and workers criticized CPS more than we expected. While this may be a result of self-selection bias, we believe it reveals a subset of CPS workers (not all of them) who work in CPS despite seeing how harmful it is to families (cf. \cite{copeland2021only}). These workers may be important accomplices for impacted communities organizing for change.
\xhdr{Why is it important to work with impacted stakeholders in child welfare?} For one, impacted stakeholders may generate ideas which researchers may not, due to lack of contextual knowledge or differing lived experiences. Many suggestions in Section~\ref{sec:new-uses} include these kinds of new design ideas. For another, impacted community perspectives are important in their own right, regardless of their value for novel research. Even when participants' suggestions are at odds with academic work ---e.g. participants suggesting PRMs not use demographics, while prior work \cite{dwork2012fairness} suggests using demographics to mitigate disparities in PRMs--- these suggestions are important because they reflect impacted stakeholders' perspectives. The general call to incorporate perspectives of impacted stakeholders into the design process \cite{zhu2018value,bjorgvinsson2010participatory} is heightened by the fact that the algorithms we focus on are used by governments which are accountable to the public \cite{lee2019webuildai,brown2019toward,holtenmoller2020shifting,saxena2020participatory,cheng2021soliciting}. If governments do not engage with impacted communities before they implement new technologies, they risk harming these communities, facing public scrutiny, or losing legitimacy \cite{pomeroy2019community,whittaker2018ai,propublica2016compas,lee2019webuildai}. Arnstein's Ladder of Civic Participation \cite{arnstein1969ladder} organizes participatory governance into levels of community involvement and empowerment. Lower levels involve consulting impacted communities on specific choices in later stages of development, but restricting communities' power to control whether public projects are implemented at all (which may verge on ``pseudo-participation'' \cite{palacin2020pseudoparticipation} or even ``participation-washing'' \cite{sloane2020participation}). Higher levels include empowering communities to negotiate the scope of public projects. Our work lies higher than prior work on Arnstein's Ladder \cite{arnstein1969ladder} in terms of scope, because we asked participants whether PRMs should be used in the first place, whereas prior work did not \cite{brown2019toward,saxena2020participatory,cheng2021soliciting,kawakami2022exploring,kawakami2022partnerships,cheng2022disparities}. However, prior work may have been limited in what kinds of choices they put ``on the table'' for stakeholders, because they worked with CPS agencies, which are either mandated to use, or have already chosen to use, algorithms \cite{saxena2021framework,delgado2021stakeholder}. Yet, by working with CPS agencies, prior work may have more influence over the design and use of algorithms (albeit in constrained ways). In our work, by contrast, we had more freedom to ask participants more basic questions about PRMs because we worked independently from a CPS agency. Yet, CPS agencies have no reason to listen to our suggestions. Thus, by Arnstein's measure \cite{arnstein1969ladder}, our work may not redistribute power to communities as much as prior work, because (by not working with a CPS agency) we do not have much power to change CPS policy on our own. This highlights not only tradeoffs in working with government agencies, but also the importance for researchers to collaborate with workers' and community groups who can apply power to influence agencies, while maintaining independence from agencies.
\xhdr{Work in solidarity with impacted communities.} Finally, our participants also suggested that researchers work in solidarity with impacted communities, even to oppose CPS agencies. This may have been overlooked in prior work because they centered public agencies. For example, \citet{brown2019toward} ask ``What can researchers and designers working in partnership with public service agencies do... to raise comfort levels among affected communities?'' then answer: ``Facilitate... positive relationships between child welfare workers and families.'' Yet, if researchers only encourage positive relationships, we may alienate people who have been harmed by CPS and do not want to stay positive. We should follow our participants' suggestion and work with impacted communities as ``academic accomplices'' \cite{asad2019academic}, whether that means evaluating CPS and workers, getting data in the hands of impacted communities (which is not always easy \cite{abdurahman2021calculating,sapien2016foil}), designing tools to recommend mandated reporters \textit{not} to report, joining with parents and advocates to fight against CPS agencies, or advocating for (non-technical) systemic changes. As groups like JMacForFamilies \cite{jmacforfamilies}, Movement for Family Power \cite{familypower}, the upEND Movement \cite{upEND}, and Rise \cite{rise} exemplify, impacted communities have been organizing themselves. Our participants suggest we work with them.
\begin{acks}
We thank Dr. Stevie Chancellor, Leah Ajmani, and our reviewers for their feedback, as well as Anushka Saxena and Janet Li for early work on this project. This work was supported by the National Science Foundation (NSF) under Award Nos. 2001851, 2000782, 1952085, the NSF Program on Fairness in AI in collaboration with Amazon under Award No.1939606, and Carnegie Mellon University Block Center for Technology and Society Award No. 53680.1.5007718.
\end{acks}
\newpage
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\label{sec:intro}
In the spectator-quark model,
the inclusive weak decays of the $B$-meson can be pictured as
QCD corrected $b$-quark decays.
Since in perturbative QCD, virtual light quarks
appear in loop diagrams,
their presence might suggest
an enhancement of the rate through
terms involving powers of $\ln m_{loop}$
(we denote by $m_{loop}$
the mass of any light quark circulating in the
penguin loops). The possible effects of such
``long-distance'' contributions were
explored, for example, in Ref. \cite{Deshpande:LDPenguins}.
On the other hand, the finiteness for
$m_{loop} \rightarrow 0$ has been invoked in other papers, such as
\cite{ali:bdg_1,Soares,ali:bdg_2}.
In this paper, we address this problem
in the context of effective field theory.
Our analysis is perturbative and therefore relates to current,
rather than constituent quark masses \cite{soni-vtd}.
We refer to a particular flavor changing neutral current process,
$b \rightarrow s \gamma $ decay, but our results can easily be extended
to other interesting FCNC processes,
such as $b \rightarrow d\gamma$.
We argue that, to any order in perturbative QCD,
the limit $m_{loop} \rightarrow 0$ in gauge invariant penguin amplitudes is
finite, and the presence of virtual light
quarks in these internal loops does not result in
logarithmic divergences as $m_{loop} \rightarrow 0$.
To be specific we argue that powers of $\ln m_{loop}$ are always accompanied
by positive powers of $m_{loop}$ \cite{GREUB}.
For example, in $b \rightarrow (s,d) \gamma $ decays there is
a contribution from penguin loops with a virtual $u$ quark
($m_{loop}=m_u$), but the amplitude
does not diverge as $m_u \rightarrow 0$.
For definiteness, we discuss the exclusive $b\rightarrow s\gamma$ final state.
Of course, penguin loops are not the only
source of dependence on light quark masses.
In the partonic $b\rightarrow s\gamma$ amplitude,
subdiagrams
involving gluons (photons) attached to the outgoing $s$-quark
as well as soft gluons connecting the $b$-quark to the
$s$-quark jet, are collinear divergent in perturbation theory in general.
In practical calculations, these regions give logarithms of $m_s$
that are not suppressed by powers of $m_s$.
In addition, the amplitude for any
exclusive final state contains purely infrared divergences associated with
the masslessness of the gluon.
These kinds of divergence have already been
treated in the literature
(see for instance \cite{GREUB,ali:radiative,Kapustinetal}).
As part of
our analysis on penguin loops, we shall show that
no additional logarithms arise
from light quark loops collinear to the
outgoing photon
in the particular case of
the two-body $b\rightarrow s\gamma$ decay amplitude.
We derive our results by building the effective hamiltonian step by step,
showing at each stage which kinds of dependence on light masses are to be expected,
and by applying analyticity arguments and IR power
counting techniques developed in perturbative QCD \cite{sterman}.
\section{The Effective Field Theory}
\label{short-distance}
The effective field theory for $B$-decays
describes the physics at the scale $\mu \simeq m_b
\ll m_W$, after the top quark and the $W$ boson
have been integrated out. The
effective hamiltonian
at the lowest order in $\alpha_{em}$
is defined by a sum of local operators whose matrix
elements between initial and final states reproduce the amplitude at
low energies
\begin{equation}
\label{sd-eff-ham}H_{eff}=\frac{G_F}{\sqrt{2}}{\cal V}_{CKM}\sum_iC_i(\mu
)O_i(\mu )\, ,
\end{equation}
where ${\cal V}_{CKM}$ denotes the appropriate factor (typically quadratic
in CKM matrix elements).
The utility of the effective hamiltonian formalism is that it
separates short distance contributions, described by the coefficients,
which can be calculated perturbatively,
from long distance contributions,
incorporated in the matrix elements of the local operators.
We can summarize the steps to build the effective hamiltonian
for $b \rightarrow s \gamma$ as:
(i) calculating
the coefficients
by matching the full theory onto the effective theory at
high scale;
(ii) evolving the coefficients
down to the scale $O(m_b)$ by the renormalization group (RG) equations;
(iii) evaluating the matrix elements of the operators.
In the same spirit of Ref.~\cite{giulia:bdg},
we will analyze the role of the light masses during these three steps.
\subsection{Matching}
The first step consists in matching the effective theory onto the full
theory. To match means to extract the coefficients $C_i$
by comparing the amplitude in the full theory and
in the effective theory at the same order in $\alpha_s$.
At the matching scale, all IR
behavior cancels and logarithms
of light masses are
then eliminated in the coefficients.
As an example,
we shall discuss weak penguin diagrams in $b\rightarrow s\gamma$, like the one
shown in Fig.\ 1.
The effective flavor changing gauge invariant couplings of the photon
are of the type $F_{\mu \nu } \partial^\nu
\overline{s}_L\gamma ^\mu
b_L$ and $\overline{s}_L F_{\mu \nu }\sigma^{\mu\nu} b_R$.
In a renormalizable theory like the standard model,
the corresponding amplitudes must be UV finite, because
the above operators have dimension higher than 4
and cannot arise as counterterms. The
amplitude of the full theory, to which the effective theory is matched,
can be calculated by expanding in the ratio ${q}/{m_W}$, where $q$
is the photon momentum.
This expansion can introduce infrared
divergences.
Since the external photon is taken on shell,
the mass of the quark in the loop of Fig.\ 1 acts as an IR regulator.
The expansion in photon momentum results in terms of the form
$F_1\,\,(q^2\gamma ^\mu -q^\mu \gamma \cdot q)$
and $F_2\,\, i\sigma ^{\mu \nu }q_\nu $.
The form factor $F_1$ includes an IR-sensitive term
$\frac 23({x_i-1})\ln x_i$,
where $x_{i}={m_{i}^2}/{m_W^2}$, with $m_i$ the mass of
quark $i$, circulating in the loop, while in
$F_2$ $\ln x_i$ appears only multiplied by powers of $x_i$
(see, for instance,
Appendix B of Ref~\cite{InamiLim}).\footnote{Note, however, that
the $F_1$ term decouples from an on-shell photon with physical
polarization.}
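To make the IR sensitivity explicit, note that as $x_i \to 0$,
\begin{equation*}
\frac 23(x_i-1)\ln x_i = \frac 43\ln \frac{m_W}{m_i} + O(x_i \ln x_i)\, ,
\end{equation*}
which diverges logarithmically for $m_i \to 0$; by contrast, in $F_2$ every logarithm is multiplied by a positive power of $x_i$ and therefore vanishes in this limit.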
In the effective hamiltonian,
the local operators contain only light quark
fields; the heavy quark fields have been integrated out.
In the corresponding diagrams
for $b\rightarrow s\gamma$,
there is an analogous term of the
type $\frac 23\ln ({m_i}^2/\mu^2)$, where $\mu $ is
introduced to fix
the scale where the operators are renormalized,
through the introduction
of a counterterm proportional to
$F_{\mu \nu } \partial^\nu
\overline{s}_L\gamma ^\mu b_L$.
The coefficients of the effective
hamiltonian are found precisely from the difference of the diagrams
of the full theory and the corresponding diagrams in the effective theory.
If the internal quark is heavy,
logarithms of heavy masses will be included in the matching coefficients of
the hamiltonian. If the internal quark is light, $m_i=m_c$ or $m_u$, by
performing the matching at $\mu =m_W$ we have an exact
cancellation of the
term $\frac 23\ln x_i$ in the coefficients.
In other words, at the matching scale all IR
behavior cancels, and the coefficients are manifestly finite for $
m_i\rightarrow 0$.
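Schematically, since the coefficient is the difference between the full and effective calculations, the light-mass logarithms drop out:
\begin{equation*}
\frac 23\ln \frac{m_i^2}{m_W^2}\; -\; \frac 23\ln \frac{m_i^2}{\mu ^2}\Big|_{\mu =m_W}=0\, ,
\end{equation*}
leaving only terms suppressed by powers of $x_i$, such as $x_i\ln x_i$.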
\subsection{RG rescaling}
Perturbative QCD corrections
introduce logarithms
$\alpha _s^n(\mu )\ln ^m(\mu /m_W)$, with $m\le n$.
The RG rescales the coefficients of the effective hamiltonian to scales
lower than the matching scale $m_W$,
and resums such logarithms. After the RG group rescaling,
we are left with a residual dependence on the scale $\mu $,
due to the truncation of the perturbative series.
In $B$-decays, the first
threshold is at $\mu=m_b$, and we can stop rescaling at that point;
in $D$ and $K$ decays one can do a new matching and then use the RG once
again to go to a still lower scale. In any case, if we are to work with
perturbation theory, the final scale must be greater than 1
GeV or so. It is self-evident that this step cannot introduce logarithms of
light masses.
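As a schematic illustration, for a single multiplicatively renormalized operator with one-loop anomalous dimension $\gamma_0$, and with $\mu\, d\alpha_s/d\mu = -\beta_0 \alpha_s^2/2\pi$, the RG equation $\mu\, dC/d\mu = (\gamma_0 \alpha_s/4\pi)\, C$ is solved by
\begin{equation*}
C(\mu) = \left[ \frac{\alpha_s(m_W)}{\alpha_s(\mu)} \right]^{\gamma_0/(2\beta_0)} C(m_W)\, ,
\end{equation*}
which resums the leading logarithms $\alpha_s^n \ln^n(\mu/m_W)$.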
\subsection{Matrix elements}
The final step consists in calculating the matrix elements of the operators
in the low energy theory.
In general, matrix elements include
loops of virtual light quarks. We will study the limit
where the penguin loop quark mass $m_{loop}$
goes to zero. We want to show that for the
specially interesting case of $b\rightarrow s\gamma $ this limit also does
not introduce IR divergences. This implies that, in the final result, any
term of the type $\ln m_{loop}$
will always appear multiplied by powers of $m_{loop}$
(that is, as $m_{loop}^a\ln^b m_{loop}$ with $a > 0$).
A number of explicit calculations
corroborate this expectation;
see particularly Ref.~\cite{GREUB}, where
several matrix elements at two loops are
calculated.
Let us consider the Feynman diagrams that describe the matrix elements in
the effective theory at arbitrary order in QCD perturbation theory. An
arbitrary Feynman diagram with $N$ lines may be written in terms of Feynman parameters as
\begin{eqnarray}
& & G = \prod_{lines \,\, i} \int^1_0 d\alpha_i\; \delta\left(\sum_i
\alpha_i-1\right)
\prod_{loops \,\,r}
\int d^4 k_r\; D(\alpha_i,k_r,p_s)^{-N}
F(\alpha_i,k_r,p_s) \nonumber \\
& &D(\alpha_i,k_r,p_s) =
\sum_j \alpha_j \left(l^2_j(k_r,p_s) -m_j^2\right)+i \epsilon\, ,
\end{eqnarray}
where $l_j$ is the momentum of the $j$th line and $\alpha_j$ its Feynman
parameter ($l_j$ is
linear in the loop momenta $k_r$ and
in the external momenta $p_s$).
The function $F$ contains overall factors that do not enter the arguments
at this point.
This integral can be viewed as a contour integral in a
multidimensional complex $\alpha_j,k_r$ space. In order to find possible
logarithms, we have to look for the regions of non-analyticity of the
integral. The points of the contour where $D$ vanishes are called ``singular
points'' (SP's), and possible singularities of this integral must arise from
zeros of the denominator $D$. In fact, only certain SP's, referred to as
pinch SP's, give singularities that cannot be avoided by deforming the
integration contours. Necessary conditions for a pinch SP are given by
the so-called Landau equations~\cite{landau}.
With each SP is associated a reduced diagram, constructed from the complete
graph by simply contracting all lines which are off-shell at the SP. The
reduced diagrams of pinch SP's have a direct physical
interpretation~\cite{sterman,coleman}.
They can be interpreted as a
picture of a classical, energy- and momentum-conserving process occurring in
space-time, with all internal particles real, on the mass-shell, and moving
forward in time. We may turn this interpretation around, in order to
identify SP's that are pinched. We select the reduced diagrams associated with
an arbitrary Feynman graph that admit such a physical interpretation; these
diagrams identify pinch SP's.
Once we have all the reduced diagrams relevant to a particular Feynman
graph, we know its sources of non-analyticity. At this stage, it becomes
important to have criteria for determining which pinch SP's may actually
introduce infrared divergences in the diagram. The presence of
pinch SP's reveals the presence of non-analytic terms, such as logarithms of
light masses. If these logarithms are suppressed by powers of the light
masses themselves, however, the corresponding amplitude will not diverge
in the zero-mass limit.
In order to identify possible IR divergences we use IR power counting. An
obvious complication for IR power counting in Minkowski space is that $k^2=0$
does not imply $k=0$, so that a naive dimensional counting will not necessarily
express the real behavior of the integral in the IR limit. A method for
dealing with this problem is to change variables, and approximate the
integral near each pinch SP, so that every denominator is a homogeneous
function of a set of variables that vanish there~\cite{sterman,stbook}. This
integral will be referred to as the ``homogeneous'' integral and these
variables as the ``normal'' variables; the remaining variables, called
``intrinsic'', parametrize the relevant surface of SP's, and do not
contribute directly to the singular behavior. The IR behavior of the
homogeneous
integral will be determined by dimensional power counting involving only the
normal variables.
In the following section, we apply the power counting procedure just sketched
to $b\rightarrow s\gamma$, and verify that the amplitude
is free of unsuppressed logarithms of $m_{loop}$ for $m_{loop}\rightarrow 0$.
\section{The decay $b \rightarrow s \gamma$}
The decay $b \rightarrow s \gamma$ has been extensively studied in the
framework of effective field theory~\cite{bsg:inclusive}. Beyond leading
order, the matrix elements of
the
operators in the effective
theory include light quark loops in general.
Let us consider one of these diagrams:
precisely the penguin diagram in Fig.\ 2, without QCD corrections. There is
a light quark ($u$-quark or $c$-quark) running in the loop. We consider the
zero mass limit for this quark.
The pinch SP corresponding to the diagram in Fig.\ 2 is associated with a
reduced diagram that coincides with the original one. At the pinch SP, the
reduced diagram can be interpreted as a process occurring in space-time,
with all internal particles real, on the mass-shell, and moving forward in
time. Then it is easy to see that the two light quarks and the photon belong
to the same jet, where a jet is defined as a connected set of massless
lines, which are on shell with finite energy, and have momenta proportional
to a single lightlike momentum ($p^\mu_\gamma$ in this case). Therefore, the
reduced diagram in Fig.\ 2 can be viewed as a massive $b$ quark decaying
into two
jets, one consisting of the light quark loop and the photon, the other
consisting of the $s$-quark line only. We use the term ``hard" for any
vertex of a reduced diagram where lines from two or more jets are attached.
We will also refer to on-shell massless lines with zero 4-momentum as soft
lines; a ``soft subdiagram" is one consisting of only soft lines.
Let us first analyze arbitrary reduced diagrams that contribute to the
four-quark operator with two jets, $J_\gamma$ and $J_s$, a single hard part
and a soft subdiagram. Fig.\ 3 shows a typical diagrammatic structure
for $J_\gamma$; Fig.\ 4 shows the general form of these
diagrams. Afterwards, we shall treat
the remaining, relevant reduced diagrams. The diagrams of Fig.\ 4 admit a rather
simple IR power counting, analogous to the treatment of form factors for
quark-antiquark production in ${\rm e}^+{\rm e}^-$ annihilation \cite
{sterman,stbook}. An appropriate choice of normal variables for all these
processes
is the four components $k_s^\mu,\ \mu=0\dots 3$, of loops $k_s$ that pass
through the soft subdiagram, $S$, and the {\em squares} $k_j^2$ and
$(k_{s,\perp}^j){}^2$ for each loop $k_j$ that is internal to a jet. For the
latter, the transverse momentum is defined relative to the jet's direction.
The superficial IR degree of divergence related to any reduced diagram of
this sort is
\begin{equation}
D =\sum_{i=\{\gamma ,s\}}\,\left( 2L_i-N_i+b_i+\frac
32f_i+t_i\right) ,
\label{pceq}
\end{equation}
where $L_i$ and $N_i$ are the numbers of loops and lines in $J_i$, while
$b_i$ and $f_i$ are the numbers of soft gluons and soft photons attached to
$J_i$ at the pinch SP. The factor $t_i$ comes from the numerator momenta in
$J_i$ and is bounded from below\footnote{For this
argument, we ignore unphysically-polarized gluon lines that attach the jet
to the hard scattering in covariant gauges; they do not affect the
overall power counting
discussed here {\protect \cite{sterman,stbook}}.},
\begin{equation}
t_i\ge {\rm max}\left\{ \frac 12[u_3^i-v_i],0\right\} ,
\label{tinequal}
\end{equation}
where $u_3^i$
and $v_i$ are, respectively,
the number of three-point vertices in $J_i$
and the
number of soft vector particles attached to $J_i$. The Euler identity and counting relations
between the numbers of lines and vertices of various orders may then be used
to bound the IR degree of divergence by
\begin{equation}
D \ge \sum_{\{i=\gamma ,s\}}\left\{ {\frac 12}(h_i-1)+f_i+%
{\frac 12}(v_i-u_3^i)\theta (v_i-u_3^i)\right\} \,,
\label{pcinequal}
\end{equation}
where $h_i$ is the number of lines attaching jet $i$ to the decay vertex.
Clearly, the lower limit on $D $ is found by taking $h_i=1$ and $f_i=0$.
We can easily check that for the lowest-order diagram, Fig.\ 2, $D >0$,
because in this case $h_\gamma =2$ for the photon jet, which immediately
gives $D =1/2$. We may trace the positive value of $D $ to $u^\gamma_3=1
$ in Eq.\ (\ref{tinequal}). This suppression is a direct result of the
transversality of the emitted photon, which requires
at least one power of the transverse momentum of the soft quark loop.
Therefore, the diagram of Fig.\ 2 is IR convergent,
when the photon is on shell. The all-order power counting
expression, Eq.\ (\ref{pcinequal}) shows that this reasoning holds to any
order for diagrams of the form of Fig.\ 4, since $h_\gamma =2$ for all of
them when the hard vertex is a four-quark operator.
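Explicitly, for the lowest-order diagram the photon jet has $L_\gamma =1$, $N_\gamma =2$, $b_\gamma =f_\gamma =0$, and $u_3^\gamma =1$, $v_\gamma =0$, so that Eq.\ (\ref{tinequal}) gives $t_\gamma \geq 1/2$ and Eq.\ (\ref{pceq}) yields
\begin{equation*}
D \geq 2(1)-2+0+0+\tfrac 12 = \tfrac 12 > 0
\end{equation*}
(the $s$-quark jet contributes nothing at this order), in agreement with the bound (\ref{pcinequal}).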
We can readily extend this reasoning to the remaining pinch SP's that are
relevant to $b\rightarrow s\gamma $. Further corrections to the four-quark
operators are shown in Fig.\ 5. The only singular points involving an
on-shell light quark penguin loop that we have not yet considered are those in which
soft gluons attach to the $b$ quark (Fig.\ 5a) and those in which the
penguin loop itself is soft (Fig.\ 5b).
First, consider additional gluons attached to the $b$ quark.
The propagator of a quark radiating a soft gluon
behaves as $1/2 p \cdot k$
(where $k$ is the momentum of the gluon) and so contributes $-1$ to the power
counting. For power counting purposes, the $b$ quark
then acts like a third jet in Eq.\ \ref{pceq}, with no internal
loops ($L_i=0$), no soft external fermions ($f_i=0$), no
numerator suppression ($t_i=0$), and with the number of its lines
equal to the number of soft gluons attached to the $b$ line
($N_i=b_i$). It is easy to verify that pinch SP's of this sort
leave Eq.\ (\ref{pcinequal}) unchanged.
Second, consider the class of pinch SP's for
the four-quark operator in which the photon is radiated by the $s$ (or $b$)
quark, and the light quark penguin loop appears as part of the soft subdiagram
(Fig.\ 5b).
This diagram is highly suppressed in the IR compared to those
of Fig.\ 5a, because a fermion propagator with momentum $k^\mu$ diverges only linearly
at $k^\mu=0$, in contrast to a boson propagator, which diverges
quadratically. Thus, penguin loops, whether connected directly
to the photon or not, are finite in the zero-mass limit.
So far, we have restricted ourselves to light quarks in penguin
loops only, connected directly to the operators of the effective theory.
Mixing of operators in the effective theory, however,
gives rise to diagrams in which there is no penguin loop.
We will now show that in the two-body amplitude there are
no additional logarithms of light quark masses associated with
loops in the photon jet, that is, collinear to the outgoing photon.
Consider Fig.\ 6a, in which the gluon emerges from the
hard subdiagram (effectively, the operator
$O_8\sim m_b\,\bar s_L \sigma^{\mu\nu} T_a \, b_R \,G^a_{\mu\nu}$
in the standard classification). These diagrams contain yet another
set of pinch SP's, as shown, in which this gluon changes into a
photon due to rescattering of a loop of
virtual light quarks with soft gluons.
We may once again use the power
counting of Eq.\ (\ref{pcinequal}), but this time $h_\gamma =1$, and on a
diagram-by-diagram basis, the amplitude produces logarithms of the light
quark mass. Note that charge conjugation invariance requires at least two
soft gluons attached to the quark loop (two C-odd gluons cannot produce a
C-odd photon), so that this effect appears first at three loops in the
perturbative matrix element.
Nevertheless, the contribution from any SP of this sort cancels in a gauge
invariant set of diagrams, because the photon does not carry color. This
result follows from the factorization of soft gluons from jets, an important
ingredient in factorization proofs for inclusive cross sections \cite
{CoSt81,CSSrv}.
Leaving technicalities aside, soft gluons can couple only to the total color
charge of a jet. In fact, the contribution of the SP of Fig.\ 6a may also be
pictured as in Fig.\ 6b, which shows that the total effect of the soft
gluons is to insert a nonabelian phase factor in the color product between
the decay vertex and the gluon line. The double line represents the
nonabelian phase; the relevant Feynman rules are described, for example, in
\cite{CSSrv}. Here, however, it is only the topology of the figure that is
important. The remaining jet in Fig.\ 6b consists of a gluon and a photon,
connected by the light quark loop, whose color trace vanishes identically.
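In formulas: once the soft gluons are factored into the phase, the color structure of the remaining gluon-photon jet is proportional to ${\rm tr}\,(T_a)=0$, since the photon carries no color charge.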
Let us note that this factorization does not require
a sum over final states; indeed, the cancellation of soft gluons
in inclusive processes in Refs.\ \cite{CoSt81,CSSrv} requires the
factorization as a first step, for each final state individually.
Thus, logarithms of $m_{loop}$
associated with these processes are absent, since, as we have
just seen, a gluon cannot
fragment by interacting only with soft quanta.
Note that this reasoning does not
apply to final states
describing the collinear splitting of
the gluon into a photon plus other
gluons.
In summary, we have shown, to all orders in $\alpha_s$ and to the lowest order
in $\alpha_{ew}$, that penguin-like diagrams relevant for the process
$b \rightarrow s (d) \gamma$ can be safely calculated by taking the
massless limit for the $u$-quark circulating in the penguin loop.
This implies, for instance, that no large ``enhancements''
(i.e.\ IR-unsafe contributions) in the amplitudes
involving the CKM matrix elements $V_{ub} V^\star_{ud}$
are expected from this source at any fixed order in perturbation theory.
We have also observed that in the perturbative expansion of the two-body
final state, no logarithms of light-quark masses arise from
loops collinear to the outgoing photon.
\subsection*{Acknowledgements}
We wish to thank Howard Georgi,
Tobias Hurth, Michelangelo Mangano and A.\ Soni for helpful
conversations. We would also like to thank Z.\ Ligeti and J.\ Soares
for useful communications.
G.R.\ thanks the Institute for Theoretical Physics at
Stony Brook for its hospitality.
This work was supported in part by the National Science Foundation,
under grants PHY9722101 and PHY9218167.
\section{Introduction}
There has been a renewed interest in the study of black hole shadows since the release of the first reconstructed image of the surroundings of the supermassive black hole M87* by the Event Horizon Telescope (EHT) collaboration \cite{eht19I,eht19V}. While the shadow of a non-rotating black hole is a circle, a rotating black hole has an asymmetric shadow\cite{bardeen} whose shape depends on the black hole mass and spin, as well as the inclination angle of the observer. In alternative theories of gravity the shadow also presents characteristics that depend on the parameters of the specific model.\cite{perlick21}
Anisotropic fluids in general relativity have been studied in the context of compact objects such as stars or black holes, and in particular spherically symmetric black holes surrounded by an anisotropic fluid have recently been introduced\cite{cho18}. The anisotropy allows the matter around the black hole to remain static by means of a negative radial pressure, and the resulting metric has a very general form, also being found in some alternative theories of gravity\cite{kumar20}. In this work we study the shadows of the charged and rotating version\cite{kim20} of these black holes. We adopt units such that $G=c=1$.
\section{Black hole solution}
The energy-momentum tensor corresponding to the anisotropic fluid adopted by Cho \& Kim\cite{cho18} has the form
\begin{equation}
T_\mu{}^\nu = \mathrm{diag}(-\rho, p_1, p_2, p_2)
\end{equation}
in spherical coordinates, with a radial pressure $p_1$ and an angular pressure $p_2$. A barotropic equation of state
\begin{equation}
p_i = w_i \rho
\end{equation}
is assumed. It can be shown that in order to have a static black hole solution the value $w_1 = -1$ must be chosen, leaving $w_2$ as the only free parameter, which we will simply name $w$. It is also possible to add to the black hole an electric charge\cite{kiselev03}, leading to the solution
\begin{equation}\label{eq:metrica-esf}
ds^2 = -f(r)dt^2 + f(r)^{-1}dr^2 + r^2(d\theta^2 + \sin^2\theta\, d\varphi^2),
\end{equation}
with
\begin{gather}
f(r) = 1 - \frac{2m(r)}{r}, \\
m(r) = M - \frac{Q^2}{2r} + \frac{K}{2 r^{2w-1}},
\end{gather}
where $M$, $Q$, and $K$ are integration constants. $M$ and $Q$ are the mass and the electric charge of the black hole, respectively, while $K$ is related to the energy density of the anisotropic fluid. It can be seen that by changing the equation of state of the fluid\textemdash that is, by changing $w$\textemdash one arrives at a different power of $r$ in the last term of $f(r)$. The squared charge $Q^2$ is clearly positive; however, there is no impediment to continuing the solution to negative values, replacing $Q^2$ by a parameter $q$ which may take either sign, and we will do so in the following. When $q$ is negative the second term in $m(r)$ can no longer be interpreted as arising from an electric charge, but it can be found for example in some braneworld models\cite{aliev05}.
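As a simple check, for $w=1$ the fluid term has the same radial dependence as the charge term, and (with $Q^2$ replaced by $q$)
\begin{equation*}
f(r) = 1 - \frac{2M}{r} + \frac{q-K}{r^2},
\end{equation*}
so the solution is of the Reissner-Nordstr\"om type with effective charge parameter $q-K$; any other value of $w$ contributes a genuinely different power of $r$.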
Applying the Newman-Janis algorithm to the spherically symmetric solution \eqref{eq:metrica-esf} leads to the corresponding axisymmetric metric
\begin{equation}\label{eq:metrica}
ds^2 = - \frac{\rho^2 \Delta}{\Sigma} dt^2 + \frac{\Sigma \sin^2\theta}{\rho^2} (d\varphi - \Omega\, dt)^2 + \frac{\rho^2}{\Delta} dr^2 + \rho^2 d\theta^2,
\end{equation}
where
\begin{gather}
\rho^2 = r^2 + a^2 \cos^2\theta, \\
\Delta = r^2 - 2 m(r) r + a^2, \\
\Sigma = (r^2 + a^2)^2 - a^2 \Delta \sin^2\theta, \\
\Omega = \frac{2 a m(r) r}{\Sigma},
\end{gather}
and
\begin{equation}
m(r) = M - \frac{q}{2r} + \frac{K}{2 r^{2w-1}},
\end{equation}
as in the spherically symmetric case. When $K=0$ the Kerr-Newman metric is recovered, with $q$ replaced by $Q^2$. It is important to keep in mind that outside of general relativity, the metric obtained through the Newman-Janis algorithm may correspond to a different energy-momentum tensor than that of the original spherically symmetric metric\cite{hansen13}. We will assume throughout this work that $w > 0$ so that the spacetime is asymptotically flat. The weak, strong and dominant energy conditions impose various restrictions\cite{cho18,kim20} on the allowed values of $w$ and $K$, but since they do not affect the calculation of the shadow we will not take them into account.
We require the presence of an event horizon, so that the spacetime does not contain a naked singularity. The event horizon is located at the largest solution of $\Delta(r) = 0$, so that its disappearance corresponds to parameter values for which $\Delta(r)$ has a double root. The values of $w$ and $K$ for which this happens can be found parametrically as functions of the radius $r$ of the double root\cite{badia20}, and are given by
\begin{equation}\label{eq:w-crit}
w = \frac{a^2 + q - Mr}{\Delta_\text{KN}}
\end{equation}
and
\begin{equation}\label{eq:k-crit}
K = \frac{\Delta_\text{KN}}{r^{2r(r-M)/\Delta_\text{KN}}},
\end{equation}
where $\Delta_\text{KN} = r^2 - 2Mr + a^2 + q$ is the functional form of $\Delta$ in a Kerr-Newman-like spacetime. Plotting these curves for various values of $a^2 + q$ shows the regions in parameter space for which an event horizon exists, as seen in Figure \ref{fig:eh}, in which we have set $M=1$ for simplicity. It can be seen from the figure that unlike for the Kerr-Newman spacetime, where $a^2+q \leq M^2$ is a necessary condition for the existence of an event horizon, the presence of the fluid allows for black holes with $a^2+q > M^2$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{f1-wk.eps}
\caption{Presence of an event horizon for our adopted metric. The shaded regions indicate the parameter values for which the spacetime contains a naked singularity, while in the white regions there is an event horizon. The solid curves separating them are described parametrically by Eqs. \eqref{eq:w-crit} and \eqref{eq:k-crit}. We have set $M=1$, so that all parameters are dimensionless.}
\label{fig:eh}
\end{figure}
\section{The black hole shadow}
We will briefly review the standard method for finding the null geodesics for photons and the shadow, adapted to our selected spacetime\cite{bardeen,badia20}. The metric \eqref{eq:metrica} is independent of the $t$ and $\varphi$ coordinates, leading to the conserved quantities $E = -p_t$ and $L=p_\varphi$; however, there is an additional hidden symmetry with an associated conserved quantity, the Carter constant $\mathcal{Q}$. This constant can be found by assuming a separable solution for the Hamilton-Jacobi equation, leading to the first-order equations of motion
\begin{gather}
\rho^2 \dot{t} = \frac{r^2+a^2}{\Delta} P(r) + aL - a^2 \sin^2\theta\, E, \\
\rho^2 \dot{\varphi} = \frac{aP(r)}{\Delta} + \frac{L}{\sin^2\theta} -aE, \\
(\rho^2 \dot{r})^2 = \mathcal{R}(r), \\
(\rho^2 \dot{\theta})^2 = \Theta(\theta),
\end{gather}
where
\begin{gather}
P(r) = E(r^2+a^2) - aL, \\
\mathcal{R}(r) = P(r)^2 - \Delta [(L-aE)^2 + \mathcal{Q}], \\
\Theta(\theta) = \mathcal{Q} + \cos^2\theta \left(a^2 E^2 - \frac{L^2}{\sin^2\theta}\right).
\end{gather}
For convenience, we define the impact parameters $\xi = L/E$ and $\eta = \mathcal{Q}/E^2$. The trajectories that make up the shadow contour have the same impact parameters as the spherical photon orbits\textemdash that is, the solutions of the equations of motion that stay at a constant value of $r$. These can be found by solving $\mathcal{R}(r) = 0 = \mathcal{R}'(r)$, and the corresponding impact parameters are given parametrically by
\begin{gather}
\xi = \frac{4 m(r) r^2 - (r + m(r) + m'(r)r)(r^2 + a^2)}{a(r - m(r) - m'(r)r)}\\
\eta = r^3 \frac{4a^2 (m(r)-m'(r)r) - r(r - 3m(r) + m'(r)r)^2}{a^2 (r - m(r) - m'(r)r)^2}.
\end{gather}
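As a consistency check, setting $m(r)=M$ and $m'(r)=0$ reduces these expressions to the well-known ones for the Kerr spacetime,
\begin{gather*}
\xi = \frac{M(r^2-a^2) - r(r^2-2Mr+a^2)}{a(r-M)}, \\
\eta = \frac{r^3\left[4Ma^2 - r(r-3M)^2\right]}{a^2(r-M)^2}.
\end{gather*}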
A distant observer at an inclination angle $\theta = \theta_\text{o}$ can use the celestial coordinates $(\alpha, \beta)$, which are given by
\begin{gather}
\alpha = - \frac{\xi}{\sin\theta_\text{o}}, \\
\beta = \pm \sqrt{\eta + \cos^2\theta_\text{o} \left(a^2 - \frac{\xi^2}{\sin^2\theta_\text{o}}\right)}; \label{eq:beta}
\end{gather}
they correspond to horizontal and vertical displacement in the image plane, respectively.
Fig. \ref{fig:sombras} shows the shadows produced by a black hole with spin $a/M = 0.9$ as seen by a distant equatorial observer, for various values of the parameters $w$, $K$ and $q$. We find the expected behaviors from the Kerr and Kerr-Newman spacetimes: the shadow is asymmetrical and displaced as a consequence of the spin of the black hole, and its size decreases as the charge increases.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{f2-sombras.eps}
\caption{The shadows of a black hole with $a/M = 0.9$ for an observer at $\theta_\text{o} = \pi/2$, for various values of the parameters. All quantities have been made dimensionless by setting $M=1$.}
\label{fig:sombras}
\end{figure}
To better characterize the size and shape of the shadow, we define three observables\cite{badia20,kumar20b} that can be calculated from a given shadow contour: the area of the shadow, its oblateness, and its horizontal displacement. The area is simply
\begin{equation}
A = 2 \int \beta\, d\alpha,
\end{equation}
where the plus sign in Eq. \eqref{eq:beta} has been chosen; the oblateness measures the deviation of the shadow from circularity and is defined as
\begin{equation}
D = \frac{\Delta\alpha}{\Delta\beta},
\end{equation}
where $\Delta\alpha$ and $\Delta\beta$ are the extent of the shadow in the horizontal and vertical directions respectively; finally, the horizontal displacement is more properly described as the $\alpha$-coordinate of the centroid, given by
\begin{equation}
\alpha_c = \frac{1}{A} \int 2\alpha\beta\, d\alpha.
\end{equation}
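In practice, these integrals can be evaluated numerically from a sampled contour. A minimal sketch of such a computation (the helper below is our own illustration, not part of any published code) takes arrays tracing the upper half of the contour, with $\beta \geq 0$ and $\alpha$ increasing, and exploits the symmetry of the contour under $\beta \to -\beta$, so that $\Delta\beta = 2\max\beta$ and the area is twice the integral over the upper branch:
\begin{verbatim}
import numpy as np

def shadow_observables(alpha, beta):
    # alpha, beta: upper half of the shadow contour (beta >= 0),
    # sampled with alpha strictly increasing
    A = 2.0 * np.trapz(beta, alpha)                       # area
    D = (alpha.max() - alpha.min()) / (2.0 * beta.max())  # oblateness
    ac = np.trapz(2.0 * alpha * beta, alpha) / A          # centroid
    return A, D, ac
\end{verbatim}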
Working with observables has the benefit of making it easier to explore larger areas of parameter space at once; this can be seen in Figs. \ref{fig:area}, \ref{fig:elipt} and \ref{fig:cent}, showing the values of $A$, $D$ and $\alpha_c$ for $a/M = 0.9$ and various values of the other parameters. All plots are shown as functions of $q$, whose maximum value is determined by the requirement that there exist an event horizon.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{f3-areas-rec.eps}
\caption{The area of the black hole shadow for some values of the parameters. We set $M=1$, so that all quantities are dimensionless.}
\label{fig:area}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{f4-elipt-rec.eps}
\caption{The oblateness of the black hole shadow for some values of the parameters. We set $M=1$.}
\label{fig:elipt}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{f5-cent-rec.eps}
\caption{The horizontal position of the shadow centroid for some values of the parameters. We set $M=1$.}
\label{fig:cent}
\end{figure}
It is clear from Fig. \ref{fig:area} that the shadow becomes smaller as $q$ increases or as $K$ decreases. This is to be expected, since both parameters appear with opposite signs in the metric. The oblateness and horizontal displacement show more interesting features. For $w < 1$ we see that the shadow becomes more compressed and more displaced as the charge increases. For $w = 2$ and $K > 0$, though, the charge can become large enough to reach a region where the behavior reverses, with $D$ becoming greater than one and $\alpha_c$ becoming greater than zero, reflecting a shadow that is wider than it is tall\textemdash unlike the Kerr shadow\textemdash and displaced in the opposite direction. However, the energy-momentum tensor of the fluid can have the physically undesirable property of having negative energy\cite{kim20,badia20} in this region of parameter space, so the corresponding shadows should be interpreted with care.
\section{Discussion}
In this work we have considered the effect of a fluid with anisotropic pressure on the shadow of a charged and rotating black hole. The fluid makes a contribution to the gravitational field of the black hole, and the resulting Hamilton-Jacobi equation for the geodesics is separable, so that a standard method for finding the shadow contour can be used. We have produced plots of the shadow for various values of the charge as well as the fluid anisotropy and density, and we have found that for many values of the parameters the shadow is qualitatively similar to the well known Kerr-Newman case, exhibiting the typical deviation from circularity due to the spin and shrinking as the charge increases or the energy density of the fluid decreases. However, in some regions of parameter space where the fluid has negative energy, the opposite behavior is seen, with the shadow becoming narrow and displaced in the opposite direction.
\section*{Acknowledgments}
This work has been supported by CONICET and Universidad de Buenos Aires.
\section{Introduction}
In a 2016 paper \cite{Ros}, Rosenberg commented ``Over the last thirty years ... it has become apparent that real $C^*$-algebras have a lot of extra structure not evident from their complexifications. At the same time, interest in real $C^*$-algebras has been driven by a number of compelling applications." He goes on to mention
the classification of manifolds of positive scalar curvature, representation theory, the study of orientifold string theories, and the real Baum-Connes conjecture.
Even the problem of whether a complex $C^*$-algebra is the complexification of a real $C^*$-algebra is a difficult one, with contributions
by Connes \cite{C}, Jones, St{\o}rmer, etc.
On the other hand,
B\"ottcher and Pietsch comment in
\cite{BP} that the impression sometimes given that results about complex
Hilbert spaces carry over mutatis mutandis to the real case
should not be taken too literally. They go on to point out that
searching in
Mathematical Reviews for publications whose title contains the term ‘Hilbert space’ yields an output of approximately 10000, while asking for ‘real Hilbert space’ reduces the result to
100 (we have changed their numbers to the current ones). They suggest that this indicator, while
very rough, nonetheless
displays the shameful treatment of the real case.
Similarly there has been
very little work done on real operator algebras relatively speaking, and there is a frequent
lack of appreciation of the considerable difficulties that can arise.
In the present paper we present some foundations for a theory of real
operator algebras and real Jordan operator algebras, and the various morphisms
between these (see also \cite{Sharma,WTT}). A common theme is the ingredient of {\em real positivity} from
papers of the first author with Read
in \cite{BRI,BRII,BRord} (see also e.g.\ \cite{BBS,BSan,BNp,BWj,BNj,BNjp})
in the complex case, which we import to the real scalar setting here.
An operator on a Hilbert space $H$ is real positive if $T + T^* \geq 0$. Real positivity
is intended to be, for nonselfadjoint operator algebras or unital operator spaces, a substitute for positivity in $C^*$-algebras.
One indication that real positivity will be useful in the real theory is the fact (which follows from e.g.\ Lemma \ref{sfun}), that states on a real
$C^*$-algebra are the norm 1 real positive functionals, but they are not the norm 1 positive functionals (see the
example above Proposition 4.1 in \cite{ROnr}).
An associative real (resp.\ complex) {\em operator algebra} is a possibly nonselfadjoint closed real (resp.\ complex) subalgebra of $B(H)$, for a
real (resp.\ complex) Hilbert space $H$. See e.g.\ \cite{BLM} for the complex case of this theory, and
\cite{Sharma} for a preliminary study of the real case. By a real (resp.\ complex) {\em Jordan operator algebra} we
mean a norm-closed real (resp.\ complex) {\em Jordan subalgebra} $A$ of a real (resp.\ complex) $C^*$-algebra,
namely a norm-closed real (resp.\ complex) subspace closed under the
`Jordan product' $a \circ b = \frac{1}{2}(ab+ba)$. Or equivalently,
with $a^2 \in A$ for all $a \in A$ (this follows since $a \circ b = \frac{1}{2} ((a+b)^2 -a^2 -b^2)$). A characterization of these algebras is given in
\cite[Section 4.3]{WTT} and in Theorem 2.1 in \cite{BNj,BWj}.
The selfadjoint case, that is, closed selfadjoint real (resp.\ complex) subspaces of a
real (resp.\ complex) $C^*$-algebra which are closed under squares,
will be called real (resp.\ complex) {\em $JC^*$-algebras}.
Complex $JC^*$-algebras have a large literature, see e.g.\ \cite{Rod, HS,Top} for references.
The theory of (possibly nonselfadjoint) Jordan operator algebras over the complex field was initiated in \cite{BWj,BNj}
(see also \cite{BWj2,ZWthes} for some additional results, complements, etc).
Of course every operator algebra is a Jordan operator algebra.
Jordan operator algebras are the most general setting for many of the results below.
Thus we state such results in this setting;
the reader who does not care about nonassociative algebras should simply
restrict to the associative case. Indeed the statement of some of our results
contain the phrase `(Jordan) operator algebra', this invites the reader to simply ignore the word `Jordan'.
Another reason to consider real Jordan operator algebras is that
there are many examples, for example in quantum physics. In addition to the complex examples there are the {\em real $JC$-algebras}
(cf.\ \cite{HS,Top}), namely Jordan algebras of selfadjoint operators on a real Hilbert space.
We mention for example spin systems: families of selfadjoint unitaries $\{ u_i : i \in I \}$ on
a real Hilbert space $H$ with
$u_i \circ u_j = 0$ if $i \neq j$. For example the Pauli matrices, and the usual appropriate tensor product of these, are good examples of these.
The closed real span of such a family, and the identity $I$, is a real unital Jordan operator subalgebra of $B(H)$, which
is isomorphic to a Hilbert space (see e.g.\ p.\ 175 in \cite{Pisbk}).
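For a concrete instance take $H = \Rdb^2$ and the real Pauli matrices
\begin{equation*}
u_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad u_2 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix},
\end{equation*}
selfadjoint unitaries with $u_1 \circ u_2 = 0$. Since $(a u_1 + b u_2)^2 = (a^2+b^2) I$ and $a u_1 + b u_2$ is selfadjoint, we have $\| a u_1 + b u_2 \| = (a^2+b^2)^{1/2}$, which exhibits the Hilbertian norm on the span.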
We are able to include a rather large number of results
in a relatively short manuscript since many proofs are similar to their
complex counterparts, and thus we often need only discuss the
new points that arise.
Note that real (Jordan) operator algebras can be treated
to some extent in the framework of complex (Jordan) operator algebras with an involution
\cite{BWinv}. Indeed the study of these real algebras is
`the same as' (in some sense--but do not take this too literally) the study of
complex (Jordan) operator algebras with a period 2 conjugate linear (Jordan) automorphism,
which is isometric or completely isometric. Indeed the set of fixed points of such an automorphism is a
real (Jordan) operator algebra, and conversely the complexification of a real (Jordan) operator algebra
has such an automorphism $x + iy \mapsto x - iy$. Similarly for dual real (Jordan) operator algebras,
except now the automorphism is weak* continuous.
Thus often a result about real spaces is proved by applying the complex variant of the result to the complexification.
We will rarely spend much time on such results, focusing rather on situations where complexification fails.
In the latter case one may try to copy the proof of the complex variant, or invent a completely new
argument, or attempt a mixture of these two. Indeed there are a few pitfalls that one must
beware of, and standard tricks in the complex case that fail for real spaces. For example, for an operator $T$ on a real Hilbert space $H$
the condition $\langle T\xi,\xi \rangle \geq 0$ for every $\xi \in H$ does not imply that $T \geq 0$. Indeed
arguments involving states or numerical range often do not work in the real case. E.g.\ the matrix $x = E_{12} - E_{21}$
takes value $0$ at every real state on $M_2(\Rdb)$ (indeed $\langle x \xi , \xi \rangle = 0$ for every vector $\xi$, since $x^* = -x$), but it is not Hermitian (selfadjoint). We know of no
simple test for $T \in B(H)$ to be positive or selfadjoint in the real case. Thus some aspects and directions in the theory
do not generalize very well to the real case, as we shall see e.g.\ in parts of Section 5.
We do not attempt to be exhaustive; rather we are content to prove enough results here so that the reader who is familiar with
the complex theory from the literature listed in the third paragraph
will feel, after finishing our paper, that they have a good grasp of how the real case works out.
In Section 2 we give some basic results on real completely positive maps on real operator algebras and real Jordan operator algebras, and completely positive maps on real operator systems.
In Section 3 we establish the variant of Meyer's unitization theorem for contractive homomorphisms on a
real Jordan operator algebra. Namely for a real Jordan subalgebra $A$ of $B(H)$ not containing $I_H$,
any contractive Jordan homomorphism from $A$ to $B(K)$
extends to a unital contractive homomorphism from $A + \Rdb I_H$ to $B(K)$. This result does not just follow by complexification--we have to exploit
the Cayley transform as in Meyer's remarkable original proof.
This implies in particular that the unitization of a real Jordan operator algebra is uniquely defined up to
isometric algebra isomorphism.
In Section 4 we study contractive approximate identities (or {\em cais}), generalizing to the real case some of the main results
from \cite{BWj} (and from the sequence of papers mentioned above) about cais and about
{\em approximately unital} algebras.
In Section 5 we study real positivity and real positive maps, generalizing to the real case some of the main results
from \cite[Section 2]{BNjp}.
Thus Section 5 is the appropriate variant for real Jordan operator algebras and real unital operator spaces, of
the theory of positive (but not completely positive maps) on complex $C^*$-algebras or operator systems.
It turns out that if we are interested in real positive maps as opposed to RCP maps (see Section 2)
then an extra condition is usually needed for the real theory, namely the notion of {\em systematic real positive}.
However even then some obstacles emerge that do not exist in the complex case. We give several examples of bad behavior.
For example unital contractions on a real operator system need not be selfadjoint, nor positive, and need not extend
to a contraction on the unitization.
Also real positive maps (resp.\ positive selfadjoint maps) need not extend to a real positive map (resp.\ positive selfadjoint map) on a complexification.
We also list some questions that we do not know the answer to but probably are also evidence of malaise in the real case at this
level of generality.
In the remaining part of Section 1 we give some background
and notation. The underlying scalar field is usually
$\Rdb$, and all spaces, maps or operators in this paper
are usually $\Rdb$-linear. For background on completely bounded maps, complex operator spaces, and associative operator algebras, we refer the reader
to \cite{BLM,ER,Pau, Pisbk}. For complex $C^*$-algebras the reader could consult e.g.\
\cite{P}. We will assume that the reader is familiar with some basics from the theory of
real $C^*$-algebras \cite{Li}.
We will often use facts from that theory that will be evident to readers
in the complex case, and whose real versions may all be found in \cite{Li}.
For the theory of complex Jordan operator algebras the reader will also want to consult \cite{BWj} frequently
for background, notation, etc, and will often be referred
there for various results that are used here. We write $M_n(X)$ for the space of $n \times n$ matrices
over a space $X$.
Ruan initiated the study of real operator spaces in \cite{ROnr,RComp}, and this study was continued in
\cite{Sharma}. A real operator space may either be viewed as a real subspace of $B(H)$, or abstractly as
a vector space with a norm $\| \cdot \|_n$ on $M_n(X)$ for each $n \in \Ndb$ (satisfying
{\em Ruan's axioms} \cite{ROnr}). Sometimes the sequence of norms $(\| \cdot \|_n)$ is
called the {\em operator space structure}. All spaces in the present paper are such operator spaces, although
sometimes we will not care about the higher matrix norms.
We will say that an operator space complexification $X_c = X + iX$ of an operator space $X$
is {\em completely reasonable} if the map
$\iota_X : x+iy \mapsto x - iy$ is a `complete isometry', for $x, y \in X$. Ruan proved that a real operator space
$X$ possesses a completely reasonable operator space complexification, which is unique up to complete isometry.
This unique complexification may be identified up to real complete isometry with the operator subspace of $M_2(X)$
of matrices of form
\begin{equation} \label{ofr} \begin{bmatrix}
x & -y \\
y & x
\end{bmatrix}
\end{equation}
for $x, y \in X$.
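In this picture the map $\iota_X$ is implemented by conjugation with the symmetry $u = {\rm diag}(1,-1)$:
\begin{equation*}
u \begin{bmatrix}
x & -y \\
y & x
\end{bmatrix} u = \begin{bmatrix}
x & y \\
-y & x
\end{bmatrix} ,
\end{equation*}
which makes it transparent that $\iota_X$ is a complete isometry.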
If $A$ is a real subalgebra (resp.\ Jordan subalgebra) of $B(H)$ then $A + i A$ is a complex subalgebra (resp.\ Jordan subalgebra) of $B(H)_c = B(H_c)$.
However the operator algebra (resp.\ Jordan operator algebra) complexification $A_c$ is not uniquely defined up to isometric isomorphism or isometric
Jordan isomorphism. That is, the
complexification of a real operator algebra $A$ is not uniquely defined at the Banach algebra level.
This is clear for example from Proposition \ref{wtex}.
If however we also take the operator space structure of $A$ into account then $A_c$ is uniquely defined up to
completely isometric algebra (resp.\ Jordan algebra) isomorphism.
This follows for example by \cite[Theorem 2.1]{RComp}. If $A \subset B(H)$ is a
real Jordan operator algebra then $A_c$ may be identified up to real complete isometric isomorphism with
the Jordan subalgebra of matrices of the form (\ref{ofr}) in $M_2(A) \cap M_2(B(H))$.
Note that $A \cap (A_c)^{-1} = A^{-1}$, since e.g.\ if $a(b+ i c) = 1$ then
$iac = 0$ and $ab = 1$.
The letters $H, K$ are reserved for real Hilbert spaces.
A {\em projection} in an algebra
is always an orthogonal projection (so $p = p^2 = p^*$). A (possibly nonassociative) normed algebra $A$ is {\em unital} if it has an identity $1$ of norm $1$,
and a map $T$
is unital if $T(1) = 1$. We write $X_+$ for the positive operators (in the usual sense) that happen to
belong to $X$. If $X$ is a subspace of a (real or complex) $C^*$-algebra $B$ then we write $C^*(X)$ or $C^*_B(X)$ for the
$C^*$-subalgebra of $B$ generated by $X$.
A {\em Jordan homomorphism} $T : A \to B$ between Jordan algebras
is a linear map satisfying $T(ab+ba) = T(a) T(b) + T(b) T(a)$ for $a, b \in A$, or equivalently,
that $T(a^2) = T(a)^2$ for all $a \in A$ (the equivalence follows by applying $T$ to $(a+b)^2$).
If $A$ is a Jordan operator subalgebra of $B(H)$, then the {\em diagonal}
$\Delta(A) = A \cap A^*$ is a $JC^*$-algebra. An element $q$ in a Jordan operator algebra $A$
is a projection if $q^2 = q$ and $\| q \| = 1$
(so these are just the orthogonal projections on the
Hilbert space which $A$ acts on, and which are in $A$). Clearly $q \in \Delta(A)$.
A {\em Jordan contractive approximate identity}
(or {\em J-cai} for short) for a Jordan operator algebra $A$, is a net
$(e_t)$ of contractions (i.e.\ elements of norm $\leq 1$) with $e_t \circ a \to a$ for all $a \in A$.
If a J-cai for $A$ exists then $A$ is called {\em approximately unital}. See Lemma \ref{jcai} for the
main result on such approximate identities. One consequence of this result is that an associative
operator algebra has a cai if and only if it has a J-cai.
We say that an operator $x$ in $B(H)$ is {\em real positive} if $x + x^* \geq 0$. (Sometimes this is called being {\em accretive}.)
This is the same as $x$ being real positive in $B(H_c)$, and hence it follows
that the characterizations mentioned in \cite[Lemma 2.4]{BSan}
of real positive elements in $B(H)$ are still valid. For example, $x + x^* \geq 0$ if and only
$\| I - tx \| \leq 1 + t^2 \| x \|^2$ for all $t > 0$. If $A$ is a unital subspace or
(Jordan) subalgebra of $B(H)$ then we define the real positive elements in
$A$ to be ${\mathfrak r}_{A} = \{ x \in A : x + x^* \geq 0 \}$. If $A$ is unital then it follows that the characterizations in \cite[Lemma 2.4]{BSan}
are true for $A$, and that the definition of `real positive' or ${\mathfrak r}_{A}$ does not depend on the
particular $B(H)$ that $A$ sits in (isometrically and unitally).
For a (Jordan) operator algebra or unital operator space $A$, because of the uniqueness of unitization up to isometric isomorphism (see Section 3),
we can define unambiguously ${\mathfrak F}_A = \{ a \in A : \Vert 1 - a \Vert \leq 1 \}$. Then
$$\frac{1}{2} {\mathfrak F}_A = \{ a \in A : \Vert 1 - 2 a \Vert \leq 1 \} \subset {\rm Ball}(A).$$
Note that $x \in {\mathfrak F}_A$ if and only if
$x^* x \leq x + x^*$. This is because $\Vert 1 - a \Vert \leq 1$ if and only if $(1 - a)^* (1 - a) \leq 1$.
It follows that ${\mathfrak F}_A \subset {\mathfrak r}_A$.
If $T : X \to Y$ we write $T_n$ for the canonical `entrywise' amplification taking $M_n(X)$ to $M_n(Y)$.
The completely bounded norm is $\| T \|_{\rm cb} = \sup_n \, \| T_n \|$, and $T$ is contractive (resp.\
completely contractive) if $\| T \| \leq 1$ (resp.\ $\| T \|_{\rm cb} \leq 1$).
A map $T$ is said to be {\em real positive} if it takes real positive elements to real positive elements.
We say that it is {\em real completely positive} or {\em RCP} if $T_n$ is real positive for all $n \in \Ndb$.
If $A$ is a real Jordan subalgebra of a real $C^*$-algebra $B$ then $A^{**}$ with its Arens product
is a weak* closed real Jordan subalgebra of the real von Neumann algebra $B^{**}$.
This follows by routine techniques as in the complex case (see \cite[Section 4.2]{WTT}
and \cite[Section 1]{BWj}). Since the diagonal $\Delta(A^{**})$ is
a real JW*-algebra (that is a weak* closed real $JC^*$-algebra), it follows that
$A^{**}$ is closed under meets and joins of projections.
States on a unital real Jordan operator algebra $A$ (that is, unital contractive
functionals) extend to states on any Jordan operator algebra complexification
$A_c$ by the real Hahn-Banach theorem. We will discuss states on approximately unital algebras in Section \ref{ai}.
\section{Real completely positive maps on real operator algebras} \label{Sec2}
\begin{lemma} Let $A$ be a real $C^*$-algebra and let $A_c$ be its complexification. If $x,y \in A$ then $x+iy\geq 0$ in $A_c$ if and only if $\begin{bmatrix}
x & -y \\
y & x
\end{bmatrix}$ is positive in $M_2(A).$
\end{lemma}
\begin{proof} For any complex $C^*$-algebra $B$ the map taking the $C^*$-subalgebra of $M_2(B)$ consisting of $2 \times 2$ matrices of the above form
(with $x, y$ replaced by $a, b \in B$), to $(a+ib,a-ib)$, is
a faithful $*$-homomorphism
onto $B \oplus B$. Now set $B = A_c$. The matrix in the lemma is positive in $M_2(A)$ if and only if it
is positive in $M_2(A)_c = M_2(A_c)$, hence if and only if $(x+iy , x-iy)$ is positive in $B \oplus B$.
Also the map $x+iy \mapsto x-iy$ is a $*$-automorphism of $B= A_c$, hence is positive.
Thus the matrix in the lemma is positive if and only if both $x \pm i y \geq 0$, and if and only if $x + iy \geq 0$.
\end{proof}
Another proof of the last result may be found in \cite[Lemma 3.3.1]{WTT}. This uses the spectrum and the fact, which is easy to see, that if $x,y \in A$ then $x+iy$ is selfadjoint in $A_c$ if and only if $x$ is selfadjoint and $y$ is antisymmetric (i.e.\ $y^* = -y$).
Let $X \subset B(H)$ be an operator space which is selfadjoint (that is
$x^* \in X$ if $x \in X$).
Write $X_{\rm sa}$ (resp.\ $X_{\rm as}$) for the selfadjoint (resp.\ antisymmetric) elements in $X$.
Note that $X = X_{\rm sa} \oplus X_{\rm as}$ (using the relation $x = \frac{1}{2} (x+ x^*) + \frac{1}{2} (x- x^*)$).
\begin{lemma} \label{Tsyjo} If $X, Y$ are real selfadjoint operator spaces and $T : X \to Y$ is real linear then $T$ is selfadjoint if and only if
$T(X_{\rm sa}) \subset Y_{\rm sa}$ and $T( X_{\rm as}) \subset Y_{\rm as}$. \end{lemma}
\begin{proof} The one direction is obvious. If $T(X_{\rm sa}) \subset Y_{\rm sa}$ and $T( X_{\rm as}) \subset Y_{\rm as}$
and $x = \frac{1}{2} (x+ x^*) + \frac{1}{2} (x- x^*)$, then $T(\frac{1}{2} (x+ x^*)) \in Y_{\rm sa}$ and
$T(\frac{1}{2} (x- x^*)) \in Y_{\rm as}$. Thus
$$T(x)^* = T(\frac{1}{2} (x+ x^*)) - T( \frac{1}{2} (x- x^*)) = T(x^*)$$
as desired. \end{proof}
A {\em real operator system} is a selfadjoint unital real subspace $X$ of $B(H)$ for a real Hilbert space $H$ (or of a real unital $C^*$-algebra).
We say that $x \in X$ is positive if and only if $x$ is positive in $B(H)$; so $X_+ = X \cap B(H)_+$.
One usually considers operator systems together with their matrix structure, with morphisms the completely positive selfadjoint maps,
or sometimes the completely positive unital selfadjoint maps. The matrix structure
consists usually of the positive cones $M_n(X)_+ \subset M_n(B(H))_+ = B(H^{(n)})_+$, for all
$n \in \Ndb$. A {\em real unital operator space} is a unital subspace $X$ of a real unital $C^*$-algebra (or of $B(H)$).
Again one usually considers these together with their operator space (i.e.\ matrix norm) structure, with morphisms the unital completely contractive maps.
A unital selfadjoint map defined on a real operator system $X$ is completely contractive if and only if it is completely positive, by \cite[Proposition 4.1]{ROnr}.
From this it is easily seen that the positive cone $X_+$ (and $M_n(X)_+$ for all $n \in \Ndb$) is independent of a choice of representation.
If $T : X \to Y$ then $T_c:X_c\to Y_c$ is defined as $T_c(x+iy)=T(x)+iT(y)$ for
$x,y \in X$.
\begin{lemma} \label{lemos} Let $X$ be a real operator system and $T:X\to B(H)$ be a completely positive map. Then $T_c:X_c\to B(H)_c$ is completely positive.
Also $T$ and $T_c$ are selfadjoint
and $\| T \| = \| T \|_{\rm cb} = \| T (1) \|$.
\end{lemma}
\begin{proof} We will temporarily write $\iota$ for $i \in \Cdb$, to avoid confusion with usual matrix subscripting.
Let $w = [x_{ij}+\iota \, y_{ij}]\geq 0$ in $M_n(X_c)=M_n(X)_c$, and set $z = \begin{bmatrix}
x_{ij} & -y_{ij}\\
y_{ij} & x_{ij}
\end{bmatrix}$.
Then by the Lemma above, $z \geq 0$ in $M_{2n}(X)$. Since $T$ is completely positive, then
$$T_{2n}(z) = \begin{bmatrix}
T(x_{ij}) & -T(y_{ij})\\
T(y_{ij}) & T(x_{ij})
\end{bmatrix}\geq 0$$ in $M_{2n}(B(H))$. Thus by the same lemma above, $(T_c)_n([x_{ij}+\iota \, y_{ij}]) = [T(x_{ij})+\iota \, T(y_{ij})]\geq 0$.
Thus $T_c$ is completely positive.
Since a positive map between complex operator systems is selfadjoint, $T_c$ is selfadjoint. Thus $T$ is also selfadjoint. The norm and completely bounded norm of $T_c$ equal $\| T (1) \|$ by e.g.\ \cite[Proposition 3.6]{Pau}, hence
$\| T \| = \| T \|_{\rm cb} = \| T (1) \|$.
\end{proof}
\begin{proposition} \label{rcps} Let $X$ be a real unital operator space or approximately unital real Jordan operator algebra
and $T:X\to B(H)$ real completely positive. Then $T_c$ is real completely positive.
Also $T$ has a well defined
extension $\tilde{T}$ to $X + X^*$ which is selfadjoint and completely positive.
Also $\| T \| = \| T \|_{\rm cb} = \| \tilde{T} \|_{\rm cb}$.
This equals $\| T (1) \|$ in the unital case, and in the nonunital case is
$\sup_t \, \| T(e_t) \|$ ($= \lim_t \, \| T(e_t) \|$) for any J-cai $(e_t)$ for $X$.
\end{proposition}
\begin{proof} We follow the idea and notation of Lemma \ref{lemos}. Let $w = [x_{ij}+\iota \, y_{ij}]$.
If $w + w^* \geq 0$ then $$\begin{bmatrix}
x_{ij} & -y_{ij}\\
y_{ij} & x_{ij}
\end{bmatrix}+\begin{bmatrix}
x^*_{ji} & y^*_{ji}\\
-y^*_{ji} & x^*_{ji}
\end{bmatrix} = \begin{bmatrix}
x_{ij} & -y_{ij}\\
y_{ij} & x_{ij}
\end{bmatrix}+ \begin{bmatrix}
x_{ij} & -y_{ij}\\
y_{ij} & x_{ij}
\end{bmatrix}^* \geq 0$$ in $M_{2n}(X)$. Since $T$ is real completely positive, $$\begin{bmatrix}
T(x_{ij}) & -T(y_{ij})\\
T(y_{ij}) & T(x_{ij})
\end{bmatrix}+\begin{bmatrix}
T(x_{ij}) & -T(y_{ij})\\
T(y_{ij}) & T(x_{ij})
\end{bmatrix}^* \geq 0 .$$
Reversing the steps at the start of the proof, but applied to the last matrix, we see that
$[T(x_{ij})+ \iota \, T(y_{ij})]+[T(x_{ij}) + \iota \, T(y_{ij})]^* \geq 0$. Hence $T_c$ is real completely positive.
In the unital case, by the complex theory (see e.g.\ \cite[Theorem 2.5]{BBS}) $T_c$ is bounded and extends to a selfadjoint
completely positive map $X_c + (X_c)^* \to B(H_c)$.
This restricts to a selfadjoint completely positive map $X + X^* \to B(H)$.
By Lemma \ref{lemos} we have $\| \tilde{T} \|_{\rm cb} = \| T (1) \| = \| \tilde{T} \|$, which implies
that the norm and completely bounded norm of $T$ equals $\| T (1) \|$ too.
In the approximately unital real Jordan operator algebra case, by the complex theory (see
\cite[Lemma 2.1]{BNjp} and \cite[Proposition 4.9]{BWj}) $T_c$ is bounded, so $T$ is bounded,
and $\| T_c \| = \| T \| = \| T_c^{**}(1) \| = \| T^{**}(1) \|$. Alternatively, let $\widehat{T_c} : X_c^{**} \to B(H_c)$ be the
canonical weak* continuous extension. This is RCP, e.g.\ by an argument in the proof of
\cite[Theorem 2.6]{BBS}. The restriction to $W = X^{**}$ is the
canonical weak* continuous extension $\hat{T} : X^{**} \to B(H)$. Then $$\| \widehat{T_c} \|_{\rm cb} =
\| \widehat{T_c} \| = \| \widehat{T_c} (1) \| = \| \hat{T} (1) \|.$$
We may then apply the unital case to $T^{**}$ (resp.\ $\hat{T}$).
For example the canonical extension of this weak* continuous map to $W + W^*$ is (selfadjoint and) completely positive.
Restricting to $X + X^*$ we see that the same is true for the canonical extension $\tilde{T}$ to $X + X^*$.
We have $$\| T \| \leq \| \tilde{T} \| \leq \| \tilde{T} \|_{\rm cb} \leq \| \widehat{T_c} \|_{\rm cb}
= \| \hat{T} (1) \| \leq \sup_t \, \| T(e_t) \| \leq \| T \| ,$$ for any J-cai $(e_t)$ for $X$.
Thus these are all equal. A similar argument works with $\hat{T}$ replaced by $T^{**}$.
We show that this number equals $\lim_t \, \| T(e_t) \|$. Indeed if a subnet
of $( \| T(e_t) \| )$ had limit $< \| T \|$, then by a further replacement we may suppose all terms in this subnet
were bounded above by $\alpha < \| T \|$. Replacing $(e_t)$ by the appropriate subnet
in the last centered equation would yield the contradiction $\| T \| \leq \alpha < \| T \|$.
\end{proof}
\begin{corollary} \label{uccrcp} If $T : X \to B(H)$ is a unital map on a unital
real operator space then $T$ is completely contractive if and only if $T$ is real completely positive. \end{corollary}
\begin{proof} The $(\Leftarrow$) direction follows from Proposition \ref{rcps}. The other direction can be seen for example by going to
the complexification and then using the complex case of the present result (see e.g.\ \cite{BBS}).
\end{proof}
By Proposition \ref{rcps} if $X$ is a real unital operator space then $X + X^*$ is well defined as a real operator system.
Indeed if $T : X \to Y$ is a surjective unital complete isometry
between unital operator spaces $X \subset B(H)$ and $Y \subset B(K)$, then $T$ is real completely positive.
The canonical extension $\tilde{T}: X + X^*\to B(K): x+y^*\mapsto T(x)+T(y)^*$ is selfadjoint and is a completely isometric complete order embedding
onto $Y + Y^*$.
As in the proof this may also be seen by extending to the complexification.
The operator space $X + X^*$ in $B(H)$ has the operator space complexification
$(X + X^*)_c \subset B(H_c)$. In addition, $X_c + X^*_c$ is a completely reasonable operator space complexification of
$X + X^*$.
By the uniqueness of the operator space complexification (see the introduction), $X_c + X^*_c = (X + X^*)_c.$
\begin{theorem} \label{contractive-hom} If $\pi:A\to B$ is a homomorphism between real $C^*$-algebras,
or a Jordan homomorphism between real $JC^*$-algebras, then
$\pi$ is contractive if and only if
$\pi$ is selfadjoint (hence is a $*$-homomorphism or Jordan $*$-homomorphism).
In this case $\pi$ is positive, indeed completely positive and
completely contractive in the real $C^*$-algebra case.
\end{theorem}
\begin{proof}
Assume that $\pi$ is contractive. By taking biduals we may assume that $A$ is a real $W^*$-algebra and $B = B(H)$.
For any projection $p \in A$ we have that $\pi(p)$ is a contractive idempotent, hence is an orthogonal projection. In particular, $\pi(1)$
is a projection. Replacing
$H$ by $\pi(1) H$, $\pi$ becomes unital.
If $x = x^*$ then by the spectral theorem we may approximate $x$ by real linear combinations of projections.
Using this and the fact about projections proved at the start of the proof, we see that
$\pi(x)$ is selfadjoint. Suppose that $x^* = -x$ and that $\varphi$ is a real state on $B$.
Then $\varphi \circ \pi$ is a real state on $A$, and so $\varphi (\pi(x)) = 0$. Thus $\pi(x)$ is antisymmetric by
\cite[Exercise 14A]{Good}
(see also Lemma 2.1.19 in \cite{WTT}). Thus $\pi$ is selfadjoint by Lemma \ref{Tsyjo}.
If $\pi$ is a Jordan $*$-homomorphism then $\pi_c : A_c\to B_c$ is a Jordan $*$-homomorphism.
By the corresponding fact for complex Jordan $C^*$-algebras, $\pi_c$ is contractive and positive. Thus $(\pi_c)_{|A}=\pi$ is contractive and positive.
(The positivity can also be seen more directly by the spectral theorem as in the last paragraph.)
Let $A$ and $B$ be real $C^*$-algebras. If $\pi:A\to B$ is a contractive homomorphism, then by the above $\pi$ is a $*$-homomorphism. Then $\pi_c:A_c\to B_c$ is a $*$-homomorphism. Thus, $\pi_c$ is completely positive and completely contractive by a fact in complex $C^*$-algebras. Since $\pi_c|_A=\pi$, $\pi$ is
completely positive and completely contractive.
\end{proof}
The following is an analogue of the Stinespring dilation theorem and the Arveson extension theorem for real completely positive maps on unital real operator spaces or approximately unital real Jordan operator algebras.
\begin{theorem} \label{Sti} Let $A$ be a unital subspace or real approximately unital Jordan subalgebra of a real $C^*$-algebra $B$ and let
$T:A\to B(H)$ be a real completely positive map. Then $T$ has a completely positive extension $\tilde{T}:B\to B(H)$. In addition there is a $*$-representation $\pi:B\to B(K)$ for a real Hilbert space $K$, and a bounded operator $V\in B(H,K)$, such that $$\tilde{T}(a)=V^*\pi(a)V , \qquad a\in B.$$ Moreover, this can be done with $\norm{T}=\norm{T}_{\rm cb}= \| \tilde{T} \|_{\rm cb}
= \norm{V}^2$, and this equals $\norm{T(1)}$ if $A$ is unital. \end{theorem}
\begin{proof} By Proposition \ref{rcps} $T$ is completely bounded, with $\| T \|_{\rm cb} = \| T \|$. If $A$ is unital then by Proposition \ref{rcps}
$T$ has a unital completely positive extension to $A+ A^*$, and we may extend further by \cite[Proposition 4.2]{ROnr} to a
selfadjoint completely positive map $\tilde{T} : B \to B(H)$, of cb norm $\norm{T(1)}$.
If $A$ is nonunital let $W = A^{**}$.
By the proof of Proposition \ref{rcps}
the canonical extension $u = \widetilde{\hat{T}}$ of $\hat{T}$ to $W + W^*$ is selfadjoint and completely positive, and has the same cb norm.
We may extend further by \cite[Proposition 4.2]{ROnr} to a
selfadjoint completely positive map $B^{**} \to B(H)$, of cb norm $\norm{\hat{T}(1)}$. Let $\tilde{T}$ be the restriction to $B$.
By Theorem 4.3 in \cite{ROnr}, there is a $*$-representation $\pi:B\to B(K)$, where $K$ is a real Hilbert space, and a bounded operator $V\in B(H,K)$ such that
$\tilde{T}(a)=V^* \pi(a) V$
for all $a\in B$, and $\norm{V}^2 = \norm{\hat{T}(1)} = \| T \|_{\rm cb} = \| T \|$. \end{proof}
\begin{corollary} \label{nrp2} A real positive linear functional on a unital real subspace or approximately unital real
Jordan subalgebra of a real $C^*$-algebra
$B$, extends to a positive selfadjoint functional on $B$ with the same norm. \end{corollary}
This follows e.g.\ from the last theorem and Lemma \ref{sfun} or can be seen more directly e.g.\ as in the proof of Lemma \ref{sfun}.
Indeed the functionals in the last result are just the positive multiples of {\em states}.
Let $X$ and $Y$ be operator spaces. If $T:X\to Y$ is completely bounded, then $\| T \|_{\rm cb} = \| T_c \|_{\rm cb}$ by
\cite[Theorem 2.1]{RComp}.
However this is not true at the `Banach level': if $T : X\to Y$ is contractive, then $T_c$ need not be contractive. This depends on the operator space structures that are given to $X$ and $Y$, as we shall now see.
\begin{example} \label{wl21} Let $X$ and $Y$ be $l^1_2(\Rdb)$ with the maximal and minimal operator space structures from \cite{Sharma} respectively,
let $T : X \to Y$ be the identity map, a complete contraction. One obtains a complete contraction $T_c : X_c \to Y_c$. One can easily show that
$Y_c$ may be identified completely isometrically with the two dimensional complex $C^*$-algebra
$l^\infty_2(\Cdb)$ (since $l^1_2(\Rdb) \cong l^\infty_2(\Rdb)$ isometrically).
On the other hand,
$X_c$ is $l^1_2(\Cdb)$ with the maximal operator space structure.
To see this note that by Propositions 2.6 and 2.3 in \cite{Sharma}, and by the fact above, we have
$$({\rm Max}(l^1_2(\Rdb)))_c = (({\rm Min}(l^\infty_2(\Rdb)))^*)_c = (({\rm Min}(l^\infty_2(\Rdb)))_c)^* = l^\infty_2(\Cdb)^* = {\rm Max}(l^1_2(\Cdb)).$$
We also used the duality of Min and Max for complex operator spaces \cite[Section 1.4]{BLM}.
Thus $T_c$ cannot be an isometry or complete isometry, since it is well known that
$l^1_2(\Rdb)$ and $l^\infty_2(\Rdb)$ are not isometrically isomorphic. Indeed $(T^{-1})_c$ cannot be a contraction, even though $u = T^{-1}$ is an isometry.
\end{example}
\begin{proposition} \label{wtex} There exist real unital operator algebras $A$ and $B$ with operator algebra complexifications $A_c$ and $B_c$, and a contractive (even isometric) unital homomorphism $\theta : A \to B$ whose complexification $\theta_c: A_c \to B_c$ is not contractive.
\end{proposition}
\begin{proof} Let $X$ and $Y$ be as in Example \ref{wl21} above. We may view $X \subset B(H)$ and let $B = {\mathcal U}(X)$
be the set of `upper triangular' matrices
$$a=\begin{bmatrix}
\alpha \, I_H & x \\
0 & \beta \, I_H
\end{bmatrix}$$
where $\alpha,\beta \in \Rdb$ and $x\in X\subseteq B(H)$.
Note that ${\mathcal U}(X)$ is a real unital operator algebra, and is a subspace of the
real Paulsen system ${\mathcal S}(X) =
{\mathcal U}(X) + {\mathcal U}(X)^*$ (see \cite[Lemma 4.12]{Sharma} and the lines above it).
We claim that ${\mathcal U}(X)_c = {\mathcal U}(X_c)$ and ${\mathcal S}(X)_c = {\mathcal S}(X_c)$.
Indeed these follow easily from the facts that
$M_2(B(H))_c = M_2(B(H)_c) = M_2(B(H_c))$,
and ${\mathcal S}(X) \subset {\mathcal S}(X)_c \subset M_2(B(H))_c$,
and ${\mathcal S}(X) \subset {\mathcal S}(X_c) \subset M_2(B(H)_c)$.
Following the proof of Proposition 2.2.11 in \cite{BLM}, we obtain that
\begin{equation}\label{eq3}
\|a\|^2=\sup\{(|\alpha|\sqrt{1-t^2}+\|x\|t)^2+|\beta t|^2 : t\in[0,1] \}.\end{equation}
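As a quick consistency check of \eqref{eq3}: if $x = 0$ then the supremum is $\sup \, \{ \alpha^2 (1-t^2) + \beta^2 t^2 : t \in [0,1] \} = \max \{ \alpha^2, \beta^2 \}$, attained at an endpoint, which is the expected value of $\|a\|^2$ for the diagonal operator $a = {\rm diag}(\alpha \, I_H, \beta \, I_H)$.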
From this equation, we can easily see that
$$\bigg\| \begin{bmatrix}
\alpha & x \\
0 & \beta
\end{bmatrix}\bigg\|=
\bigg\|\begin{bmatrix}
| \alpha | & \|x\| \\
0 & | \beta |
\end{bmatrix}\bigg\|.$$
Similarly
$A = {\mathcal U}(Y)$ is an operator algebra. By the last norm formula the isometry $u : Y \to X$ in Example \ref{wl21} extends to an isometric unital
homomorphism $\theta_u : A \to B$. However suppose that
$\theta_u$ extended to a contractive unital map $r$ on ${\mathcal U}(Y)_c = {\mathcal U}(Y_c)$.
Then $r$ would be real positive, and hence by e.g.\ the proof of \cite[Lemma 2.1]{BNj} it would extend further
to a positive selfadjoint unital map on ${\mathcal S}(Y_c)$. By e.g.\ (1.25) in \cite{BLM}
this forces the $1$-$2$-corner map $Y_c \to X_c$ to be contractive. However this map is $u_c$, giving a contradiction.
\end{proof}
Many results in the theory of complex operator algebras involving completely contractive maps will be almost identical in the real
case. For example, Corollary 2.3 and Corollary 4.18 of \cite{BNj} or
Theorem 2.5 of \cite{BNp}
concerning completely contractive projections $P : A \to A$ on an
operator algebra or Jordan operator algebra, will be true in the real case.
This follows quickly by applying the complex case of these results to $P_c$.
Similarly for Banach-Stone theorems characterizing complete isometries between operator algebras or Jordan operator algebras
(such as \cite[Theorem 3.5]{BNjp} (2) (note $C = B$ there if $B$ is also an operator algebra, by (1)) or
\cite[Proposition 6.5]{BNp} or \cite[Theorem 4.5.13]{BLM}). See e.g.\ \cite[Theorem 4.4]{RComp}.
In passing we mention the
Kadison-Banach-Stone theorem for real $JC^*$-algebras (see e.g.\ \cite[Theorem 4.8]{IKR} and \cite{CDRV}): A surjective linear map $T : A \to B$
between real $JC^*$-algebras is
an isometry if and only if $T$ is a `triple morphism' (that is, preserves the natural `triple product').
In addition if these hold then $T$ is
a Jordan homomorphism if and only if it is positive. We sketch a proof of the last assertion: note that
by Theorem \ref{contractive-hom}, a contractive Jordan homomorphism is selfadjoint and positive. Conversely, if $T$ is
a
positive isometry then so is $T^{**}$. This uses the Kaplansky density theorem for real $JC^*$-algebras, which
may be proved following a standard proof for the complex case of that result. Then $u = T^{**}(1)$ is positive and
also is a partial isometry, indeed is a unitary in $B^{**}$ in the $JW^*$-algebra sense, since
$T^{**}$ is a triple morphism.
Hence $u = (u^2)^{\frac{1}{2}} = 1$. The triple morphism property then implies that $T$ is a Jordan homomorphism.
We do not know if there is a variant of the Banach-Stone theorems above or in \cite{BNjp} for surjective isometries
or surjective real positive isometries between e.g.\ unital real Jordan operator algebras.
Finally we mention some results on the $C^*$-envelope and injective envelope, some of which
benefitted from discussions with Mehrdad Kalantar, and which
we hope to present elsewhere. There is a difficulty here that we overcome which is
related to injective envelopes of dynamical systems.
For the $C^*$-envelope and injective envelope in the complex case we refer to
\cite[Chapter 15]{Pau} or \cite[Chapter 4]{BLM}, or the
papers of Hamana and Ruan referenced there.
A preliminary study of the injective envelope and $C^*$-envelope in the real case may be found in \cite{Sharma}. Using notation from those
sources we are able to prove:
\begin{theorem} \label{ijco} Let $A$ be a unital real operator space or operator system, or an approximately unital real operator algebra (or Jordan operator algebra).
Then $I(A)_c = I(A_c)$. Also, $I(A)$ is a unital real $C^*$-subalgebra of $I(A_c)$,
and if $C^*_e(A)$ is the $C^*$-subalgebra of $I(A)$ generated by $A$
then $C^*_e(A)_c = C^*_e(A_c)$.
\end{theorem}
In this result, $C^*_e({\mathcal S})$ has the universal property of the $C^*$-envelope: given any unital complete isometry
$j : {\mathcal S} \to D$ into a real $C^*$-algebra $D$ such that $j({\mathcal S})$ generates $D$ as a real $C^*$-algebra,
there exists a $*$-epimorphism $\pi : D \to C^*_e({\mathcal S})$ such that $\pi \circ j$ is the canonical inclusion of ${\mathcal S}$ in
$C^*_e({\mathcal S})$.
There is a result analogous to $I({\mathcal S})_c = I({\mathcal S}_c)$ in the context of Hamana's $G$-injective envelope \cite{Hamiecds,Hamiods}.
Namely that $I_G({\mathcal S}) = I({\mathcal S})$ for a finite group
$G$ and an operator system ${\mathcal S}$ which is a $G$-module in the sense of Hamana \cite{Hamiecds,Hamiods}.
A similar result holds for the $G$-$C^*$-envelope. The case of this where $G = \Zdb_2$ was the inspiration
for the last proof. We hope to present this elsewhere in work with Mehrdad Kalantar and a graduate student.
\section{Unitization (Meyer's theorem)} \label{uanf}
In \cite[Theorem 3.5]{Sharma} a real variant of Meyer's unitization theorem was proved for completely contractive homomorphisms
on real operator algebras.
Namely any completely contractive real linear homomorphism $A \to B(K)$ on a subalgebra $A$ of $B(H)$ not containing $I_H$,
extends to a unital completely contractive real linear homomorphism $A + \Rdb I_H \to B(K)$.
This implies that the unitization of a real operator algebra is uniquely defined up to
completely isometric algebra isomorphism.
The variant of Meyer's theorem for contractive $\Cdb$-linear homomorphisms on complex Jordan operator algebras
was noted in \cite{BWj}. However it is more difficult to prove the real version of the latter result, and we turn to this next.
\begin{lemma}\label{aboveMeyer1} Let $A\subseteq B(H)$ be a real (Jordan) operator algebra
and $A_c\subseteq B(H_c)$ be its complexification where $H$ is a real Hilbert space. Assume that $I_{H}\notin A$.
Then for $a,b\in A$ and $\lambda\in \Cdb$, we have
$$|\lambda|\leq \|(a+ib)+\lambda I_H\|.$$
\end{lemma}
\begin{proof} We may replace $A_c$ by the closed algebra generated by $a+ib$. Then
this follows from \cite[Lemma 2.1.12]{BLM}. \end{proof}
\begin{theorem}[Meyer type unitization]\label{Meyer-Real-Unique} Let $A$ be a real subalgebra (resp.\ Jordan subalgebra)
of $B(H)$, and assume that $I_H \notin A$. Let $\pi: A\to B(K)$ be a contractive homomorphism
(resp.\ Jordan homomorphism) for a real Hilbert space $K$. Let $A^1= {\rm span}_{\Rdb}\{A, I_H\} \subseteq B(H)$ and define $\pi^\circ: A^1\to B(K)$ by $\pi^\circ(a+\lambda I_H)=\pi(a)+\lambda I_K$. Then $\pi^\circ$ is a contractive homomorphism (resp.\ contractive Jordan homomorphism).
\end{theorem}
\begin{proof} We follow the proof of Meyer's theorem for a complex operator algebra (see Theorem 2.1.13 in \cite{BLM}) using the fact that $A$ has a complexification which is a complex operator algebra.
It is easy to see that $\pi^\circ$ is a homomorphism (resp.\ Jordan homomorphism). To show that it is contractive,
let $T=a+\lambda I_H \in A^1$ for some $a\in A$ and $\lambda\in \Rdb$, with $\|T\|< 1$. We may effectively replace $A$
by the closed algebra generated by $a$, which is an operator algebra.
We claim that $\|\pi^\circ(T)\|< 1$.
We will regard everything as objects inside $B(H_c)$. In particular we view $A, A^1 = A + \Rdb I_H$, and $B(H)$ as
real subalgebras of
$B(H_c)$, and we view
$T$ as an operator in $B(H_c)$. By Lemma \ref{aboveMeyer1}, $|\lambda|<1$. Since $T$ is strictly contractive, by item (2)
in 2.1.14 in \cite{BLM} we have that $(I+T)(I-T)^{-1}$ is strictly accretive. Set $\alpha=(1+\lambda)/(1-\lambda)$. Then $\alpha>0$ and
$$\theta = \frac{1}{\alpha}(I+T)(I-T)^{-1} = I+\frac{1}{\alpha}\Big( (I+T)(I-T)^{-1}- (I+\lambda)(I-\lambda)^{-1} \Big)$$
is also strictly accretive. Note that by the Neumann lemma, $(I-T)^{-1}=\sum_{k=0}^\infty T^k \in A + \Rdb I$. We may write
$(I+T)(I-T)^{-1}- (I+\lambda)(I-\lambda)^{-1}$ as
\begin{equation*}
\begin{split}
(I-T)^{-1} \Big( (I+T)(I-\lambda)-(I-T)(I+\lambda) \Big)(I-\lambda)^{-1}
&= 2(I-T)^{-1} a (I-\lambda)^{-1}\\
&=\frac{2}{1-\lambda}(I-T)^{-1} a.
\end{split}
\end{equation*}
Since $A$ is an ideal in $A + \Rdb I$, $\theta-I=\alpha^{-1}\big((I+T)(I-T)^{-1}- (I+\lambda)(I-\lambda)^{-1}\big)\in A$.
Since $\theta$ is accretive, $\theta+I$ is invertible.
By the principle $A \cap (A_c)^{-1} = A^{-1}$ mentioned in the introduction, $(\theta+I)^{-1}\in A + \Rdb 1$. So $(\theta-I)(\theta+I)^{-1}\in A$,
again since $A$ is an ideal of $A + \Rdb 1$.
Since $\pi_c^\circ$ is a unital homomorphism and $\theta+I$ is invertible, $\pi_c^\circ(\theta+I)=\pi_c^\circ(\theta)+I$ is invertible and $\pi_c^\circ((\theta+I)^{-1})= (\pi_c^\circ(\theta)+I)^{-1}.$ Thus,
$$\pi_c^\circ((\theta-I)(\theta+I)^{-1})=(\pi_c^\circ(\theta)-I)(\pi_c^\circ(\theta)+I)^{-1}.$$ We will use items (1) and (2)
in 2.1.14 in \cite{BLM} several times.
We know that $\theta$ is strictly accretive, thus $(\theta-I)(\theta+I)^{-1}$ is strictly contractive and is an element of $A \subseteq A_c$. Since $\pi_c^\circ|_A=\pi$, $\pi_c^\circ((\theta-I)(\theta+I)^{-1})\in B(K).$ Since $\pi$ is a contraction,
$$\| (\pi_c^\circ(\theta)-I)(\pi_c^\circ(\theta)+I)^{-1}\|_{B(K_c)} = \| \pi_c^\circ((\theta-I)(\theta+I)^{-1})\|_{B(K_c)}= \|\pi ((\theta-I)(\theta+I)^{-1})\|_{B(K)} < 1.$$
Thus, $\pi_c^\circ(\theta)$ is strictly accretive in $B(K_c)$. Thus,
$$\alpha \pi_c^\circ(\theta)=\pi_c^\circ((I+T)(I-T)^{-1})=(I+\pi_c^\circ(T))(I-\pi_c^\circ(T))^{-1}$$
is strictly accretive. Therefore $\pi_c^\circ(T) = \pi^\circ(T)$ is strictly contractive, as desired.
\end{proof}
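In the scalar case the transforms used in this proof reduce to classical ones: for $T = \lambda \in (-1,1)$ the map $T \mapsto (I+T)(I-T)^{-1}$ becomes the M\"obius function $\lambda \mapsto (1+\lambda)/(1-\lambda)$, which maps the open unit disk onto the open right half plane (and $(-1,1)$ onto $(0,\infty)$). The proof exploits precisely this correspondence between strict contractivity and strict accretivity, in both directions.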
It now follows that the unitization of a real operator algebra (or Jordan operator algebra) is unique up to isometric isomorphism:
\begin{theorem} \label{Meyer-Real-Algebra}
Let $A$ be a real subalgebra (resp.\ Jordan subalgebra)
of $B(H)$, and assume that $I_H \notin A$. Let $\pi: A\to B(K)$ be an isometric homomorphism
(resp.\ isometric Jordan homomorphism) for a real Hilbert space $K$.
Then the unital homomorphism $\pi^\circ : A^1\to B(K)$, where $\pi^\circ(a+\lambda I_H)=\pi(a)+\lambda I_K$, is an isometric isomorphism
onto $\pi(A) + \Rdb \, I_K$.
\end{theorem}
\begin{proof} This follows from Theorem \ref{Meyer-Real-Unique} as in \cite[Corollary 2.1.15]{BLM}. \end{proof}
\begin{corollary} \label{uniqueJoaunitization} The unitization $A^1$ of a Jordan operator algebra
is unique up to isometric Jordan isomorphism. In addition, $(A^1)_c=(A_c)^1$ isometrically isomorphically.
\end{corollary}
\begin{proof} We follow the proof of Corollary 2.5 in \cite{BWj}. If $A$ is nonunital then we may assume that $A$ is represented on a Hilbert space $H$ and
the Jordan operator algebra unitization $A^1$ of $A$ is identified with $A + \Rdb \, I_H$. Then
the first assertion
follows from Theorem \ref{Meyer-Real-Algebra}. If $A$ is unital and $e$ is the identity of $A$, then $e$ is a
central projection of a unitization $A^1$. Also $e$ commutes with $A^*$ (adjoints with respect to a fixed unital
isometric representation of $A^1$; this follows for example since $e$ is selfadjoint in that representation by the statement
about $\Delta(A)$ in Lemma \ref{ApA}).
So $e$ is central in $C^*(A^1)$. From this it is easy to see that $$\|a+\lambda 1\|=\max\{\|e(a+\lambda 1) \|, \|(1-e)(a+\lambda 1)\|\}=\max\{\|a+\lambda e\|, |\lambda|\} , \qquad a \in A, \lambda \in \Rdb .$$
Since a unitization of a complex Jordan operator algebra is unique up to isometric Jordan isomorphism (see \cite[Corollary 2.5]{BWj}), $(A^1)_c=(A_c)^1$.
\end{proof}
\section{Approximate identities} \label{ai} If $A$ is a Jordan subalgebra of a $C^*$-algebra $B$ (either real or complex case)
then we say that a net $(e_t)$ in Ball$(A)$ is
a $B$-{\em relative partial cai} for $A$ if
$e_t a \to a$ and $a e_t \to a$ for all $a \in A$. Here we are using the usual
product on $B$,
which may not give an element in $A$, and may depend on $B$.
We say that a net $(e_t)$ in Ball$(A)$ is
a {\em partial cai} for $A$ if
for every $C^*$-algebra $B$ containing $A$ as
a Jordan subalgebra, $e_t a \to a$ and $a e_t \to a$ for all $a \in A$,
using the product
on $B$. Note that partial cais are the same as cais if $A$ is an associative operator algebra.
We say that
$A$ is {\em approximately unital} if it has a partial cai.
If $A$ is an operator algebra or Jordan operator algebra then we recall that a net $(e_t)$ in Ball$(A)$ is
a {\em Jordan cai} or {\em J-cai} for $A$ if $e_t a + a e_t \to 2a$ for all $a \in A$.
\begin{lemma} \label{JOA_aprox_unital} Let $A$ be a real Jordan subalgebra of a real $C^*$-algebra $B$. Then
\begin{enumerate}
\item $A$ has a $B$-relative partial cai if and only if $A_c$ has a $B_c$-relative partial cai.
\item $A$ has a J-cai if and only if $A_c$ has a J-cai.
\item If $A_c$ has a partial cai then $A$ has a partial cai.
\end{enumerate}
\end{lemma}
\begin{proof} Note that $A_c$ is a complex Jordan subalgebra of the $C^*$-algebra $B_c$.
Any $B$-relative partial cai for $A$ is a $B_c$-relative partial cai for $A_c$. Conversely, if $(e_t+i \, f_t)$ is a $B_c$-relative partial cai for $A_c$, then $(e_t)$ is a $B$-relative partial cai for $A$. (See the fact in the
proof of \cite[Proposition 5.2.4]{Li}.)
Similarly for J-cai's. Item (3) follows from the ideas in (1).
\end{proof}
\begin{lemma} \label{jcai} If $A$ is a real Jordan subalgebra
of a real $C^*$-algebra $B$, then the following are equivalent:
\begin{itemize} \item [(i)] $A$ has a partial cai.
\item [(ii)] $A$ has a $B$-relative partial cai.
\item [(iii)]
$A$ has a J-cai.
\item [(iv)] $A^{**}$ has an identity $p$ of norm 1 with respect to the Jordan Arens product on $A^{**}$,
which coincides on $A^{**}$ with the restriction of the usual product
in $B^{**}$. Indeed $p$ is the identity of the von Neumann algebra
$C^*_B(A)^{**}$.
\end{itemize} If these
hold then any partial cai $(e_t)$ for $A$
is a cai for $C^*_B(A)$ (and for the associative operator algebra generated by $A$), and every J-cai for $A$ converges weak* to $p$.
\end{lemma}
\begin{proof} This holds almost exactly as in the complex case \cite[Lemma 2.6]{BWj}. We just indicate a proof that (iv) implies (i). Suppose that
$p$ is an identity for $A^{**}$. Viewing $A$ with its operator space structure we have that $(A_c)^{**} = (A^{**})_c$ by \cite{RComp}.
Since the canonical map $A_c \to (A^{**})_c$ is a Jordan homomorphism, so is
its weak* continuous extension $(A_c)^{**} \to (A^{**})_c$. Thus $(A_c)^{**} = (A^{**})_c$ as dual real Jordan operator algebras.
Thus $p$ is the identity of $(A_c)^{**}$. By the complex case of the present result,
$A_c$ has a partial cai $(e_t + i f_t)$, with $e_t, f_t \in A$. Therefore by the proof of the last lemma $(e_t)$ is a partial cai for $A$.
\end{proof}
\begin{proposition} \label{coj} Let $A$ be an approximately unital real Jordan operator algebra and let $\pi:A\to B(H)$ be a contractive Jordan homomorphism. We let $P$ be the projection onto $K=[\pi(A)H]$. Then $\pi(e_t)\to P$ in the weak* (and WOT) topology of $B(H)$ for any J-cai $(e_t)$ for $A$. Moreover, for $a\in A$, we have $\pi(a)=P\pi(a)P,$ and the compression of $\pi$ to $K$ is a contractive Jordan homomorphism. Also, if $(e_t)$ is a partial cai for $A$, then $\pi(e_t)\pi(a)\to \pi(a)$ and $\pi(a)\pi(e_t)\to \pi(a)$. In particular, $\pi(e_t)|_K\to I_K$ SOT in $B(K)$.
\end{proposition}
\begin{proof} As in the proof of Lemma 2.19 in \cite{BWj}. \end{proof}
We will see in the proof of the next theorem that if $(a_t + i b_t)$ is a partial cai for $A_c$ in $\frac{1}{2} {\mathfrak F}_{A_c}$ (resp.\ ${\mathfrak r}_{A_c}$) then $(a_t)$
is a partial cai for $A$ in $\frac{1}{2} {\mathfrak F}_{A}$ (resp.\ ${\mathfrak r}_{A}$).
\begin{theorem}[Real case of Theorem 2.8 of \cite{BWj}] \label{frden} If $A$ is an approximately unital real Jordan operator algebra then $\mathfrak{F}_A$ is weak$^*$ dense in $\mathfrak{F}_{A^{**}}$ and $\mathfrak{r}_A$ is weak$^*$ dense in $\mathfrak{r}_{A^{**}}$. Finally, $A$ has a partial cai in $\frac{1}{2}\mathfrak{F}_A$.
\end{theorem}
\begin{proof}
Let $(A_c,\|\cdot\|_c)$ be the operator space complexification of $A$.
Then $\mathfrak{F}_{A_c}$ is weak$^*$ dense in $\mathfrak{F}_{A_c^{**}}$. Let $x\in \mathfrak{F}_{A^{**}} \subset
\mathfrak{F}_{A_c^{**}}$. By the density in the complex case, there is a net $(a_t+ib_t)$ in $\mathfrak{F}_{A_c}$ weak$^*$ converging to $x$, which implies
that $a_t$ weak$^*$ converges to $x$. Since $\|a_t-1\|\leq \|a_t+ib_t-1\|_c\leq 1$, we have $a_t\in \mathfrak{F}_A$. This shows that $\mathfrak{F}_A$ is weak$^*$ dense in $\mathfrak{F}_{A^{**}}$.
Similarly, if $x\in \mathfrak{r}_{A^{**}} \subset \mathfrak{r}_{A_c^{**}}$ then there is a net $(a_t+ib_t)$ in $\mathfrak{r}_{A_c}$ weak* converging to $x$. Since $(a_t+ib_t)+(a_t+ib_t)^*\geq 0$, we have $a_t+a_t^*\geq 0$. Moreover, $a_t$ weak* converges to $x$.
Finally, by the corresponding fact in the complex case, $A_c$ has a partial cai $(e_t+i \, f_t)$ in $\frac{1}{2}\mathfrak{F}_{A_c}$. Thus, $(e_t)$ is a partial cai for $A$. Since $\|1-2e_t\|\leq \|1-2(e_t+i \, f_t)\|\leq 1$, we have that $e_t\in \frac{1}{2}\mathfrak{F}_{A}$.
\end{proof}
A similar proof gives the analogue of Proposition 2.10 and Corollary 2.11 of \cite{BWj}:
\begin{proposition} \label{corde} Let $A$ be an approximately unital real Jordan operator algebra. Then the set of contractions in $\mathfrak{r}_A$ is weak* dense in the set of contractions in $\mathfrak{r}_{A^{**}}$.
\end{proposition}
\begin{proposition} If $A$ is a real Jordan operator algebra with a countable Jordan cai, then $A$ has a countable partial cai in $\frac{1}{2}\mathfrak{F}_{A}$.
\end{proposition}
In the following result, which generalizes \cite[Theorem 4.1]{BWj} (which in turn derives from \cite[Theorem 2.1]{BRord}),
we write $x \preccurlyeq y$ to denote Re$(x) \leq {\rm Re}(y)$. Here Re$(x) = (x+x^*)/2$. Also ${\mathfrak c}_A = \Rdb^+ \, {\mathfrak F}_A$.
\begin{theorem} \label{brord} Let $A$ be a real Jordan operator algebra which generates a real $C^*$-algebra $B$, and let
${\mathcal U}_A$ denote the open unit ball $\{ a \in A : \Vert a \Vert < 1 \}$. The following are equivalent:
\begin{itemize} \item [(1)] $A$ is approximately unital.
\item [(2)] For any positive $b \in {\mathcal U}_B$ there exists $a \in {\mathfrak r}_A$
with $b \preccurlyeq a$.
\item [(2')] Same as {\rm (2)}, but $a \in \frac{1}{2} {\mathfrak F}_A$.
\item [(3)] For any pair
$x, y \in {\mathcal U}_A$ there exists $a \in \frac{1}{2} {\mathfrak F}_A$
with $x \preccurlyeq a$ and $y \preccurlyeq a$.
\item [(4)] For any $b \in {\mathcal U}_A$ there exists
$a \in \frac{1}{2} {\mathfrak F}_A$
with $-a \preccurlyeq b \preccurlyeq a$.
\item [(5)] For any $b \in {\mathcal U}_A$ there exist
$x, y \in \frac{1}{2} {\mathfrak F}_A$
with $b = x-y$.
\item [(6)] ${\mathfrak r}_A$ is a generating cone (that is, $A = {\mathfrak r}_A - {\mathfrak r}_A$).
\item [(7)] $A = {\mathfrak c}_A - {\mathfrak c}_A$.
\end{itemize}
\end{theorem}
\begin{proof} (1) $\Rightarrow$ (2') \
Let $A$ be an approximately unital real Jordan operator algebra, and let $b \in B_+$ with
$\| b \| < 1$.
Then $A_c$ is approximately unital, and by \cite[Theorem 4.1 (2)]{BWj} there exists $x+iy \in \frac{1}{2} {\mathfrak F}_{A_c}$
such that $b \leq {\rm Re} (x+iy)$. This is easily seen to imply that $b \leq {\rm Re} (x)$.
Since $\| 1 - 2x \| \leq \| 1 - 2x - 2 iy \| \leq 1$ we have $x \in \frac{1}{2} {\mathfrak F}_{A}$.
The other implications are as in \cite[Theorem 2.1]{BRord} and \cite[Theorem 4.1]{BWj}, however some of the results invoked in those proofs
need to be replaced by their real variants from the present paper. Also we note that the implications (2') $\Rightarrow$ (3) and (2) $\Rightarrow$ (6) follow
from a fact from $C^*$-algebra theory. Namely, from the Claim: if $x$ and $y$ (with $y = -x$ in the second implication) are selfadjoint and in ${\mathcal U}_B$
then there is a positive element in ${\mathcal U}_B$ which is
greater than both. This is true in the real case too. To see this we may assume that $x$ and $y$ are also in $B_+$,
by replacing them by $x_+$ and $y_+$. In this case the Claim is usually an ingredient in standard proofs that $B$ has an increasing cai.
However this case of the Claim also follows from the same fact but in the complex case. Indeed there exists a positive $a + ib \in B_c$ with (i)\
$\| a + ib \| < 1$, and (ii)\ $x$ and $y$ dominated by $a + ib$. However (i) implies that $\| a \| < 1$ and (ii) implies that
$x$ and $y$ are dominated by $a$.
\end{proof}
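A commutative illustration of some of these conditions: let $A = C_0((0,1], \Rdb)$, a nonunital but approximately unital real function algebra. Here ${\mathfrak F}_A = \{ f : \| 1 - f \| \leq 1 \} = \{ f : 0 \leq f \leq 2 \}$ and ${\mathfrak r}_A = \{ f : f \geq 0 \}$, and for example {\rm (6)} is transparent: every $f \in A$ is the difference $f_+ - f_-$ of its positive and negative parts, both of which lie in ${\mathfrak r}_A$.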
A Jordan ideal in a Jordan operator algebra $A$ is a subspace $J$ of $A$ with $J \circ A \subset J$.
\begin{proposition} \label{Mid} Let $A$ be an approximately unital real operator algebra (resp.\ Jordan operator algebra).
\begin{itemize}
\item [(1)] If $A$ is weak* closed then the weak* closed ideals (resp.\ Jordan ideals) in $A$ which possess an identity
are in a bijective correspondence with the central projections $e \in A$, via $e \mapsto Ae$ (resp.\ $e \mapsto e \circ A$).
These ideals are $M$-summands of $A$.
\item [(2)] The closed ideals (resp.\ Jordan ideals) in $A$ which possess a cai (resp.\ J-cai)
are the subspaces of $A$ whose weak* closure in $A^{**}$ is of the form in {\rm (1)} for a central projection $e \in A^{**}$. Thus they
are of form $\{ x \in A : x = exe \}$ for some such $e \in A^{**}$.
\end{itemize} \end{proposition}
\begin{proof} This follows just as in the complex case in e.g.\ \cite[Theorem 3.25]{BWj}. \end{proof}
We will do a more thorough study elsewhere of the $M$-ideals in real Jordan operator algebras (following on from \cite[Section 5]{Sharma}).
\begin{corollary} \label{quoi} If $J$ is an approximately unital closed two-sided ideal (resp.\ Jordan ideal) in a
real operator algebra (resp.\ Jordan operator algebra) $A,$
then $A/J$ is (completely isometrically isomorphic to)
a real operator algebra (resp.\ Jordan operator algebra). \end{corollary}
\begin{proof} This follows from Proposition \ref{Mid} just as in the complex case in e.g.\ \cite[Theorem 3.27]{BWj}. \end{proof}
Turning to one-sided ideals, Sharma showed in \cite[Section 5]{Sharma} that the closed right ideals in an approximately unital
real operator algebra $A$ which possess a left cai
are the subspaces of $A$ whose weak* closure in $A^{**}$ is of the form $eA^{**}$ for a projection $e \in A^{**}$. Thus they
are of form $\{ x \in A : x = ex \}$ for some such $e \in A^{**}$. The projections $e$ occurring here are called {\em open projections}.
The corresponding subspace $\{ x \in A : x = exe \}$ is called a {\em hereditary subalgebra} of $A$.
For a real Jordan operator algebra $A$ we define a hereditary subalgebra to be a subspace
of $A$ whose weak* closure in $A^{**}$ is of the form $eA^{**}e$ for a projection $e \in A^{**}$. Thus they
are of form $\{ x \in A : x = exe \}$ for such $e \in A^{**}$.
We will discuss elsewhere the noncommutative topology and hereditary subalgebras of real Jordan operator algebras, in the spirit of
\cite{BNj}.
\begin{lemma} \label{juni} If $A$ is a nonunital approximately unital real Jordan operator algebra then the unitization $A^1$ is well defined up to completely isometric Jordan isomorphism, and the matrix norms are
$$\|[a_{ij}+\lambda_{ij} 1]\|=\sup\{\|[a_{ij}\circ c +\lambda_{ij} \, c]\|_{M_n(A)} : c \in {\rm Ball}(A)\}, \quad a_{ij}\in A, \lambda_{ij}\in \Rdb.$$
\end{lemma}
\begin{proof} The proof is the same as for the complex case in Proposition 2.12 in \cite{BWj}. \end{proof}
{\bf Remark.} A unitization of a real Jordan operator algebra $A$ need not be well defined/unique up to completely
isometric Jordan isomorphism. This may be seen by modifying the argument as in Proposition 2.1 in \cite{BWj2} (see
\cite[Remark 4.4.3]{WTT}). This means that some results about real Jordan operator algebras with no kind of approximate
identity may not be treatable
in the operator space category (as opposed to the Banach space category). If however $A$ is approximately unital then the last result
shows that this problem does not exist.
As in the complex case \cite[Theorem 2.8]{BWj}, a real approximately unital Jordan operator algebra $A$ is an $M$-ideal in $A^1$.
\begin{lemma}[Real case of Lemma 2.20 in \cite{BWj}] \label{blmf} Let $A$ be a real approximately unital Jordan operator algebra with a partial cai $(e_t)$. Denote the identity of $A^1$ by $1$. The following facts hold.
\begin{enumerate} \item If $\psi:A^1\to \Rdb$ is a functional on $A^1$, then $\lim_t\psi(e_t)=\psi(1)$ if and only if $\|\psi\|=\|\psi|_{A}\|$.
\item Let $\varphi \in A^*$. Then $\varphi$ uniquely extends to a functional on $A^1$ of the same norm.
\end{enumerate}
\end{lemma}
\begin{proof} (1) \ We have $(A^1)_c=(A_c)^1$ by Corollary \ref{uniqueJoaunitization}.
If $\psi:A^1\to \Rdb$ is a functional on $A^1$, then $\|\psi\|=\|\psi_c\|$ by Proposition 1.4.1 in \cite{Li}. By Lemma \ref{JOA_aprox_unital},
$(e_t)$ is a partial cai for $A_c$.
By Lemma 2.20 in \cite{BWj}, $\lim_t\psi(e_t)= \psi(1)$ if and only if $\|\psi_c\| =\|(\psi_c)_{|A_c} \|$. Now $(\psi_c)_{|A_c} = (\psi_{|A})_c$,
and so $$\|(\psi_c)_{|A_c} \| = \| (\psi_{|A})_c \| = \| \psi_{|A} \|$$ by Proposition 1.4.1 in \cite{Li}.
Thus $\lim_t\psi(e_t)=\psi(1)$ if and only if $\|\psi\|=\|\psi|_{A} \|$.
(2) \ This uses a similar idea: if $\psi, \rho$ are two Hahn-Banach extensions of $\varphi$ to $A^1$,
then $\psi_c, \rho_c$ are two Hahn-Banach extensions of $\varphi_c$ to $A_c^1$, by Proposition 1.4.1 in \cite{Li}.
\end{proof}
\begin{lemma}[Real case of Lemma 2.21 in \cite{BWj}] For a norm $1$ functional $\varphi$ on an approximately unital real Jordan operator algebra $A$, the following are equivalent:
\begin{enumerate}
\item $\varphi$ extends to a state on $A^1$.
\item $\varphi(e_t)\to 1$ for every partial cai $(e_t)$ for $A$.
\item $\varphi(e_t)\to 1$ for some partial cai $(e_t)$ for $A$.
\item $\varphi(e)=1$ where $e$ is the identity of $A^{**}$.
\item $\varphi(e_t)\to 1$ for every Jordan cai $(e_t)$ for $A$.
\item $\varphi(e_t)\to 1$ for some Jordan cai $(e_t)$ for $A$.
\end{enumerate}
\end{lemma}
\begin{proof} The proof is the same as for the complex case in Lemma 2.21 in \cite{BWj}. \end{proof}
The functionals on $A$ characterized in the last lemma are the {\em states} of $A$.
\section{Real positive elements and real positive maps}
As we said in the introduction, if $A$ is a unital subspace or unital
(Jordan) subalgebra of $B(H)$ then ${\mathfrak r}_{A}$ does not depend on the
particular $B(H)$ that $A$ sits in (isometrically and unitally). A similar statement holds
if $A$ is any (Jordan) subalgebra of $B(H)$.
Thus if $A$ is a Jordan operator algebra and $\pi : A \to B(K)$ is an isometric Jordan homomorphism
then, for example, $\pi(x) + \pi(x)^* \geq 0$ if and only if $\| 1 - tx \| \leq 1 + t^2 \| x \|^2$ for all $t > 0$. Here $1$ is the identity of $A^1$, or the
identity operator on $H$.
This is a simple consequence of Meyer's theorem
for Jordan operator algebras above. Hence $A \cap {\mathfrak r}_{A_c} = {\mathfrak r}_{A}$
if $A_c$ is any Jordan operator algebra complexification of $A$. Similarly it is clear from Meyer's theorem above
for Jordan operator algebras that ${\mathfrak F}_A = A \cap {\mathfrak F}_{A_c}$.
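As a quick check of the metric criterion above in the scalar case $x = c \in \Rdb$: if $c \geq 0$ and $t > 0$ then either $tc \leq 1$, whence $|1 - tc| \leq 1 \leq 1 + t^2 c^2$, or $tc > 1$, whence $|1 - tc| = tc - 1 \leq 1 + t^2 c^2$ since $t^2 c^2 - tc + 2 = (tc - \frac{1}{2})^2 + \frac{7}{4} > 0$. On the other hand, if $c < 0$ then $|1 - tc| = 1 + t|c| > 1 + t^2 c^2$ for $0 < t < 1/|c|$, so the criterion fails, as it should.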
In the following proof we write ${\rm Re}(x)$ for $\frac{1}{2} (x+ x^*)$.
\begin{lemma} \label{sprf} If $X$ is a real
unital operator space or Jordan operator algebra then
${\mathfrak r}_X = \overline{\Rdb^+ {\mathfrak F}_X}$.
\end{lemma}
\begin{proof} If $\| 1 - x \| \leq 1$ then $\| 1 - {\rm Re}(x) \| \leq 1$. Therefore
$-1 \leq 1 - {\rm Re}(x) \leq 1$, and
so ${\rm Re}(x) \geq 0$. Thus $\overline{\Rdb^+ {\mathfrak F}_X}
\subset {\mathfrak r}_X$. The reverse inclusion in the unital operator space case can be
proved as is done in the complex case early in \cite[Section 2]{BBS}.
If $X$ is a Jordan operator algebra then
${\mathfrak r}_X \subset {\mathfrak r}_{X_c}
= \overline{\Rdb^+ {\mathfrak F}_{X_c}}$. Suppose that $x \in {\mathfrak r}_X$ and
$c_t \, x_t \to x$ with $c_t \in \Rdb^+$ and $x_t = a_t + i b_t \in {\mathfrak F}_{X_c}$.
Then $a_t \in X \cap {\mathfrak F}_{X_c} = {\mathfrak F}_{X},$ and $c_t \, a_t \to x$. So $x \in \overline{\Rdb^+ {\mathfrak F}_X}$.
\end{proof}
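In the simplest case $X = \Rdb$ the lemma reads $[0,\infty) = \overline{\Rdb^+ \, [0,2]}$ (no closure even being needed here), since ${\mathfrak F}_\Rdb = \{ x : |1-x| \leq 1 \} = [0,2]$ and ${\mathfrak r}_\Rdb = [0,\infty)$.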
{\bf Remark.} As in \cite{BRord,BWj} we may consider the ${\mathfrak F}$-transform: By \cite[Lemma 2.5]{BRord},
if $x \in {\mathfrak r}_A$ for a real Jordan operator algebra $A$ then $${\mathfrak F}(x) = x(x+1)^{-1}
\in A \cap \frac{1}{2} {\mathfrak F}_{A_c} = \frac{1}{2} {\mathfrak F}_{A}.$$
Indeed let $D$ be the real operator algebra generated by $1$ and $x$. Since $x + x^* \geq 0$, the numerical range of $x$ is in the right half plane. Hence the spectrum in $D_c$ of $x$ is in the right half plane. Hence $-1$ is not in that spectrum, so $1+x$ has an inverse in $D_c$, in fact in $D$ (since e.g.\ if $(1+x)(a+ib) = 1$ with $a, b \in D$ then $(1+x)a = 1$).
Then ${\mathfrak F}(x) = x(1+x)^{-1} \in AD \subset A$. Also, $1 - 2 x(1+x)^{-1} = (1-x)(1+x)^{-1}$; the latter is essentially the Cayley transform of $x$, and has norm $\leq 1$.
In fact this map has range $U_A \cap \frac{1}{2} {\mathfrak F}_{A}$, where $U_A = \{ a \in A : \| a \| < 1 \}$, as in \cite[Lemma 2.5]{BRord}.
Indeed suppose that $w\in U_A \cap \frac{1}{2} {\mathfrak F}_{A} \subset U_{A_c} \cap \frac{1}{2} {\mathfrak F}_{A_c}$.
We may suppose that $A$ is the closed algebra generated by $w$, which is an operator algebra. Then there exists
$x \in {\mathfrak r}_{A_c}$ with ${\mathfrak F}(x) = w$. However $x = w (1-w)^{-1} \in A$.
Using the ${\mathfrak F}$-transform one may give another proof of Lemma \ref{sprf} in the spirit of
e.g.\ \cite[Theorem 3.3]{BRII}.
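For instance, for a scalar $c \geq 0$ we have ${\mathfrak F}(c) = c/(1+c)$, which runs over $[0,1) = U_\Rdb \cap \frac{1}{2} {\mathfrak F}_\Rdb$ as $c$ runs over $[0,\infty)$, while $1 - 2 {\mathfrak F}(c) = (1-c)/(1+c) \in (-1,1]$; this is the scalar case of the Cayley transform computation above.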
\begin{lemma} \label{antis} Let $A$ be an operator system or real $JC^*$-algebra. Then $x \in A$ is antisymmetric if and only if
$x \in {\mathfrak r}_A \cap (-{\mathfrak r}_A)$.
\end{lemma}
This is useful because there are many nice metric characterizations of ${\mathfrak r}_A$, as we said earlier
when we defined that set (for example the conditions in \cite[Lemma 2.4]{BSan}).
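For instance, $x = \begin{bmatrix}
0 & 1 \\
-1 & 0
\end{bmatrix} \in M_2(\Rdb)$ is antisymmetric and $x + x^* = 0$, so that both $\pm x \in {\mathfrak r}_{M_2(\Rdb)}$. Conversely, for any $x \in {\mathfrak r}_A \cap (-{\mathfrak r}_A)$ we have $x + x^* \geq 0$ and $x + x^* \leq 0$, so $x^* = -x$; this is essentially the proof of the lemma.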
\begin{lemma} \label{syjo} If $X$ is a real operator system then $X = X_{\rm sa} \oplus X_{\rm as}$. Also, $X_{\rm sa} = X_+ - X_+$, and
$X = {\mathfrak r}_X - {\mathfrak r}_X = \Rdb^+ ( {\mathfrak F}_X - {\mathfrak F}_X)$. \end{lemma}
\begin{proof} The first identity was established above Lemma \ref{Tsyjo}.
Let $x \in X_{\rm sa}$; then $$x = \frac{1}{2} (\| x \| 1 + x - ( \| x \| 1 - x ))
\in X_+ - X_+.$$ Similarly, since $\| x \| 1 - (\| x \| 1 \pm x)$ has norm $\leq \| x \|$, we have that
$\| x \| 1 \pm x \in \| x \| {\mathfrak F}_X \subset \Rdb^+ {\mathfrak F}_X$. The rest is clear.
\end{proof}
If $X$ is a real operator system and $T : X \to B(H)$ we say that $T$ is {\em systematically real positive} if
$x,y \in X$ with $x + y^* \in X_+$ implies that $T(x) + T(y)^* \geq 0$.
\begin{theorem} \label{27} Let $X$ be a real selfadjoint operator space and let $T : X \to B(H)$ be real linear. The following are equivalent:
\begin{itemize} \item [(i)] $T$ is systematically real positive.
\item [(ii)] $T$ is both positive and selfadjoint.
\item [(iii)] $T$ is real positive and selfadjoint.
\end{itemize} \end{theorem}
\begin{proof} Clearly a selfadjoint map is positive if and only if it is real positive, and if these hold then
$T : X \to B(H)$ is systematically real positive. Conversely suppose that $T$ is systematically real positive.
Let $x \in X_+$. Then $T(x) = T(x) + T(0)^* \geq 0$, so $T$ is positive.
By Lemma \ref{syjo} we have $T(X_{\rm sa}) = T(X_+) - T(X_+) \subset B(H)_{\rm sa}$.
If $x^* = -x$ then $x + x^* = 0$, so that $T(x) + T(x)^*$ is both positive
and negative. Hence $T(x)^* = - T(x)$. That is, $T( X_{\rm as}) \subset B(H)_{\rm as}$.
So $T$ is selfadjoint by Lemma \ref{Tsyjo}.
\end{proof}
\begin{theorem} \label{db8}
Let $T: A\to B(H)$ be a real positive linear map on a unital operator space $A$ whose restriction to
$\Delta(A) = A \cap A^*$ is selfadjoint (or systematically real positive).
Then the canonical extension $\tilde{T}: A+A^*\to B(H): x+y^*\mapsto T(x)+T(y)^*$ is well defined, selfadjoint, and positive.
\end{theorem}
\begin{proof}
Let $T : A \to B(H)$ be real positive with $T$ restricted to $\Delta(A)$ being
selfadjoint.
Define $\tilde{T}(a + b^*) = T(a) +
T(b)^*$
for $a, b \in A$.
To see that $\tilde{T}$ is well defined, suppose $a + b^* = x+ y^*$, for $a, b, x, y \in A$. Then $a-x = (y-b)^* \in \Delta(A)$, and
so $$T(a-x) = T((y-b)^*) = (T(y) - T(b))^* .$$ Thus, $\tilde{T}$ is well defined.
If $z = a + b^*$ is positive (usual sense), then $$z = z^* = b + a^* = \frac{1}{2} (a +
b^* + b + a^*)
= \frac{1}{2} (a + b) + (\frac{1}{2} (a + b))^*,$$ and $\frac{1}{2} (a + b) \in {\mathfrak r}_A$. Since $T$ is real positive
we have
$$\tilde{T}(z) = T(\frac{1}{2}(a + b)) + T(\frac{1}{2}(a + b))^* \geq 0.$$ So $\tilde{T}$ is positive.
\end{proof}
The converse of the theorem is true:
if $\tilde{T}$ is well defined, selfadjoint, and positive, then $T$ is real positive
and its restriction to
$\Delta(A) = A \cap A^*$ is selfadjoint and systematically real positive.
If $X$ is a unital real operator space then we say that a real linear map $T : X \to B(H)$ is {\em systematically real positive} if
$T$ extends to a positive selfadjoint map on $X + X^*$. This is equivalent, by the theorem, to
$T$ being real positive with $T$ restricted to $\Delta(X)$ being
selfadjoint. It is also equivalent to the condition that $T(x) + T(y)^* \geq 0$ whenever $x,y \in X$ with $x + y^* \geq 0$.
One way to see the last equivalence is to note that this condition implies that $T$ restricted to $\Delta(X)$
is systematically real positive, hence selfadjoint by Theorem \ref{27}. Then apply the last theorem.
Note that a real positive linear map from an operator system $X$ into $B(H)_{\rm sa}$ is
systematically real positive. Indeed in this case $T(X_{\rm sa}) \subset B(H)_{\rm sa}$,
and as in the proof of Theorem \ref{27} above $T(X_{\rm as}) \subset (0) \subset B(H)_{\rm sa}$.
So $T$ is selfadjoint by Lemma \ref{Tsyjo}.
We discuss now the meaning of $A+A^*$ for a real operator algebra or real Jordan operator algebra $A$. If $A$ is also equipped with a compatible
operator space structure (that is, if $A$ is a Jordan subalgebra of $B(H)$ and has the inherited matrix norms), then this is relatively unproblematic.
This is essentially the case treated in \cite{WTT}. The point is that a completely isometric (Jordan) homomorphism $\theta : A \to B$ between
real (Jordan) operator algebras extends to a completely isometric (Jordan) homomorphism between their complexifications.
By the complex theory this extends to a completely isometric (Jordan) homomorphism between the unitizations of the complexifications,
and then to a completely isometric UCP map $(A_c)^1 + ((A_c)^1)^* \to (B_c)^1 + ((B_c)^1)^*$. This restricts to a
completely isometric selfadjoint complete order isomorphism $A+A^* \to B + B^*$.
On the other hand, if we treat $A$ without using matrix norms, that is, use morphisms that are isometric (Jordan) homomorphisms,
then it seems that the meaning of $A+A^*$ is more problematic. That is, we do not know at present if
an isometric (Jordan) homomorphism $\theta : A \to B$ extends to an isometric selfadjoint order isomorphism $A+A^* \to B + B^*$.
However we will know shortly from Lemma \ref{ApA} that it extends to a selfadjoint order isomorphism $\tilde{\theta}
: A+A^* \to B + B^*$. There is a canonical norm on $A+A^*$
for which the latter map is an isometry (namely the norm inherited from $C^*_{\rm max}(A)$, the universal $C^*$-algebra for contractive
(Jordan) homomorphisms from $A$, see \cite{WTT}), but we do not know yet if this norm always agrees with the one mentioned above in the last paragraph
using a suitable operator space structure on $A$. (It does follow from the later result Lemma \ref{rsu} that $\tilde{\theta}$ is isometric
on the selfadjoint part of $A+A^*$.) In any case, if we concern ourselves only with
the order structure on $A+A^*$ there are no problems.
\begin{lemma} \label{ApA} Let $\theta : A \to B$ be a contractive (Jordan) homomorphism
between real (Jordan) operator algebras. Then $\theta$ extends uniquely to a selfadjoint positive map $\tilde{\theta}
: A+A^* \to B + B^*$. Also, the restriction of $\theta$ to $\Delta(A)$ is a (Jordan) $*$-homomorphism into $\Delta(B)$.
If $\theta$ is also an isometric isomorphism onto $B$ then
$\tilde{\theta}$ is a selfadjoint order isomorphism onto $B+B^*$.
\end{lemma}
\begin{proof} We know that $\Delta(A)$ is a real $JC^*$-algebra. Suppose that $B$ is a Jordan subalgebra of $B(H)$.
By Theorem \ref{contractive-hom},
the restriction of $\theta$ to $\Delta(A)$ is a Jordan $*$-homomorphism into $B(H)$.
In particular it is
selfadjoint, and maps into $\Delta(B)$. The proof of Theorem \ref{db8} now gives the first assertion. The second assertion follows from the first applied to
$\theta$ and $\theta^{-1}$.
\end{proof}
For a real operator algebra or real Jordan operator algebra $A$ we say that a real linear map $T : A \to B(H)$ is {\em systematically real positive} if
$T$ is real positive with $T$ restricted to $\Delta(A) = A \cap A^*$ being
selfadjoint. It is also equivalent to the canonical extension $\tilde{T}: A+A^* \to B(H): x+y^*\mapsto T(x)+T(y)^*$ being well defined, selfadjoint, and positive.
Indeed the proof of Theorem \ref{db8} shows that if $T$ is real positive with $T$ restricted to $\Delta(A)$
selfadjoint then $\tilde{T}$ is well defined, selfadjoint, and positive.
Conversely, if the last condition holds then clearly $T$ is real positive and $T$ restricted to $\Delta(A) = A \cap A^*$ is
selfadjoint.
The latter implies that $T(x) + T(y)^* \geq 0$ whenever $x + y^* \geq 0$, but we are not sure if this condition is equivalent.
\bigskip
{\bf Remark.} Another class of maps that one could consider are the maps $T : A \to B$ that extend to a real positive
map on a (Jordan) operator algebra complexification. Then of course there are various variants of this class, such as
contractions that extend to a real positive contraction on such a complexification.
\begin{lemma} \label{ispos} Let $T : A \to B$ be a systematically real positive
map between real Jordan operator algebras. Then $T(\Delta(A)) \subset \Delta(B)$, and $T$ restricts to a positive
selfadjoint linear map
from $\Delta(A)$ to $\Delta(B)$. Thus $0 \leq T(1) \leq 1$ if $A$ is unital and $T$ is contractive.
\end{lemma}
\begin{proof} As above, $\tilde{T}: A+A^* \to B(H)$
is positive and selfadjoint. Hence so is its restriction to $\Delta(A)$.
\end{proof}
Let $T : X \to B(H)$ be a unital linear contraction on a unital operator space. Then $T$ is real positive in the sense that
$T$ takes ${\mathfrak r}_X$ to real positive operators. This follows from the fact that
${\mathfrak r}_X = \overline{\Rdb^+ {\mathfrak F}_X}$ (see Lemma \ref{sprf}).
However we shall see that $T$ need not be systematically real positive. Indeed if $X$ is a real operator system then
$T$ need not be selfadjoint, although $T$ is antisymmetric (that is, $T( X_{\rm as}) \subset B(H)_{\rm as}$). Indeed if $x^* = -x$ then $x + x^* = 0$, so that $T(x) + T(x)^*$ is both positive and negative. Hence $T(x)^* = - T(x)$.
\begin{example} \label{expoly} Let $X$ be the polynomials with real coefficients of degree $\leq 1$ in $C([0,1])$.
Let $x(t) = t$, and let $z$ be the matching monomial in the disk algebra. Let $g = \frac{1+z}{2}$, a
contraction in the disk algebra, and define $T(s + tx) = s + t g$ for $s, t \in \Rdb$. We claim that
$T$ is a unital contraction on $X$ which is not selfadjoint, nor systematically real positive, nor positive.
Indeed the norm of $s + t g$ in the disk algebra
is easily seen to be $|s+t/2| + |t|/2$, whereas it is an exercise that the norm of $s + t x$ in $X$ is $\max \{ |s|, |s+t| \}$.
Finally, writing $a = s$ and $b = s+t$, so that $s + t/2 = (a+b)/2$ and $t/2 = (b-a)/2$, we have
$$|s+t/2| + |t|/2 \, = \, \frac{|a+b| + |a-b|}{2} \, = \, \max \{ |s|, |s+t| \} \, , \qquad s, t \in \Rdb.$$
That is, $T$ is in fact a unital isometry on $X$, and in particular a unital contraction. It clearly is not selfadjoint, hence is not systematically real positive. Nor is $T$ positive: $x \geq 0$ in $X$, but $T(x) = g$ is not even selfadjoint.
This example illustrates some other points. First, although $T$ is a unital contraction on $X$, and although
$X$ is so very simple, $T$ is not a complete contraction. Indeed if it were then by \cite[Theorem 2.1]{RComp}
it extends to a unital complete contraction on the complexification. This extension would have to be selfadjoint
by the well known complex case \cite{Arv,Pau},
giving the contradiction that $T$ is selfadjoint. Also, there is not a real version of the
completely contractive version of von Neumann's inequality (sometimes attributed to Sz-Nagy,
and following from the Sz-Nagy dilation), else $T$ would be completely contractive. There is
a real form of von Neumann's inequality. (Indeed if $T$ is a contraction in $B(H) \subset B(H)_c$ then the map
$p + \bar{q} \mapsto p(T) + q(T)^*$ is a positive (hence completely positive, by \cite[Theorem 3.11]{Pau}) unital
contraction from a dense selfadjoint subspace of $C(\Tdb,\Cdb)$ into $B(H)_c$.
The restriction to the set of $p + \bar{q}$ for polynomials $p, q$ with real coefficients, is a
positive selfadjoint contraction into $B(H)$.)
Note that the map $R$ given by $R(s + tx) = T(s + tx) \oplus (s + tx) \in A(\Ddb) \oplus^\infty X$ is a unital isometry
which is not systematically real positive, nor selfadjoint, nor positive.
Thus if $X$ is a unital operator space, $X + X^*$ need not be `well defined' as an ordered Banach space. Indeed in the
above example $X$ and $T(X)$ are `isometrically the same' as unital operator spaces via the unital isometry $T$.
However $X + X^*$ has dimension 2, while $T(X) + T(X)^*$ has dimension 3, so $T$
certainly does not extend to a faithful map on $X + X^*$, let alone a positive selfadjoint isometric one, nor an order isomorphism onto its range.
On the other hand $X + X^*$ is `well defined' as an ordered linear space if we use morphisms on a unital operator space $X$ that are
real positive unital isometries which are selfadjoint on $\Delta(X)$. Indeed if $T : X \to Y$ is a surjective unital isometry
between unital operator spaces $X$ and $Y$, and if $T$ and $T^{-1}$ are systematically real positive, then
the canonical extension $\tilde{T}: X+X^*\to Y + Y^*: x+y^*\mapsto T(x)+T(y)^*$ is selfadjoint and an order embedding.
Also, $X + X^*$ is `well defined' as an operator system, if we use morphisms on a unital operator space $X$ that are
unital complete isometries. See the remark above Theorem \ref{contractive-hom}.
\end{example}
\begin{example} \label{rpne} A real positive map, or systematically real positive map, need not extend to a real positive map on a complexification.
A positive selfadjoint map on a real operator system need not extend to a positive map on a complexification.
Also, a unital linear contraction need not extend to a real positive map on a fixed complexification.
For an example of these, we proceed as in Proposition \ref{wtex}.
Let $u : Y \to X$ be a linear contraction that does not extend to a
contraction from $Y_c$ to $X_c$. Let $\theta_u$ be the canonical extension
of $u$ to a unital contractive homomorphism ${\mathcal U}(Y) \to {\mathcal U}(X)$.
As we said in Proposition \ref{wtex}, by Theorem \ref{db8}, or by the real variant of the Paulsen lemma in \cite[Lemma 4.12]{Sharma} (see also p.\ 492
in \cite{ROnr}),
$\theta_u$ extends to a positive selfadjoint map $\theta_u + (\theta_u)^*$ on the Paulsen system ${\mathcal S}(Y) =
{\mathcal U}(Y) + {\mathcal U}(Y)^*$ (which is even
real contractive by Lemma \ref{rsu}). Thus $\theta_u$ is systematically real positive.
But it does not extend to a real positive map on the complexification, by the argument in
Proposition \ref{wtex}.
\end{example}
\begin{example} A positive functional even on a real $C^*$-algebra need not be selfadjoint nor real positive.
The example above Proposition 4.1 in \cite{ROnr} shows this: apply that functional
to the real positive matrix with
all entries $1$ except for a $-3$ in the $1$-$2$ corner.
Note that if we scale this example to be a unital functional $\psi$, then $\Vert \psi \Vert \geq 1$.
But in fact $\Vert \psi \Vert > 1$ since if $\Vert \psi \Vert = 1$ then $\psi$ would be selfadjoint
by the next result. \end{example}
\begin{example} \label{f} A positive unital selfadjoint real linear
map on a complex operator system need not be a contraction. There is a $2 \times 2$ matrix counterexample due to Arveson;
see e.g.\ A.2 in \cite{Arv}. Viewing this as a
real operator system, this map is unital and real positive, but not a contraction.
Indeed a positive unital selfadjoint real linear
map on a real $JC^*$-algebra need not be bounded. For an example of this, let $E$ be an infinite-dimensional
space of selfadjoint operators on a Hilbert space $H$ with $x^2 \in \Rdb I_H$ for all $x \in E$. Let $A$
be the set of matrices
in $M_2(B(H))$ with diagonal entries in $\Rdb I_H$, and off-diagonal entries $x$ and $-x$ for $x \in E$.
Note that $A$ is a real $JC^*$-algebra. We have $A_{\rm sa} = (\Rdb I_H) \oplus (\Rdb I_H)$,
the positive elements in $A$ are $(\Rdb_+ \, I_H) \oplus (\Rdb_+ \, I_H)$, and
$A_{\rm as}$ consists of the matrices in $A$ with zero main diagonal entries.
If $T : E \to B(K)_{\rm sa}$ is an unbounded real linear map let $\theta_T$ be the map on $A$
taking $$\begin{bmatrix}
\lambda \, I & x\\
-x & \mu \, I
\end{bmatrix} \; \mapsto \; \begin{bmatrix}
\lambda \, I & Tx\\
-Tx & \mu \, I
\end{bmatrix} .$$ This is a positive unital selfadjoint real linear
map on a real $JC^*$-algebra which is not bounded. If $K = \Rdb$ in this construction then $\theta_T : A \to M_2$.
This positive map is not $2$-positive, otherwise by Lemma \ref{inMn} it would be completely positive, and hence
completely contractive by Lemma \ref{lemos}. \end{example}
\begin{lemma} \label{sfun} For a functional $\varphi$ on a unital real operator space or
approximately unital real Jordan operator algebra $X$ the following are equivalent:
\begin{itemize} \item [(i)] $\varphi$ is real positive.
\item [(ii)] $\varphi$ is systematically real positive.
\item [(iii)] $\varphi$ is real completely positive (RCP).
\item [(iv)] $\varphi$ is a nonnegative multiple of a state.
\end{itemize}
Such functionals are bounded with $\| \varphi \| = \| \varphi \|_{\rm cb}$.
This equals $\varphi(1)$, or $\lim_t \, \varphi(e_t)$ in the case of
a cai $(e_t)$.
If $\varphi$ is unital then the above equivalent conditions
hold iff $\varphi$ is contractive.
If $X$ is a real operator system then a unital functional $\varphi$ on $X$ is contractive if and only if it is positive and selfadjoint; such a functional is completely positive.
\end{lemma}
\begin{proof} Suppose that $\varphi$ is a real positive functional on a unital real operator space $X$. Its restriction to
$\Delta(X) = X \cap X^*$ is a
real positive functional on an operator system. If $x^* = -x$ in $\Delta(X)$
then as in the proof of Theorem \ref{27} we have that $\varphi(x) = 0$.
So $\varphi$ is selfadjoint on $\Delta(X)$ by Lemma \ref{Tsyjo}.
Hence $\varphi$ is systematically real positive by Theorem \ref{db8}.
Such maps are real completely positive, as in the complex case. One way to see this
is as follows: if $\tilde{\varphi}$ is the canonical extension to $X + X^*$,
and if $x = [x_{ij}] \geq 0$ in $M_n(X + X^*)$ then $[\tilde{\varphi}(x_{ij})]$ is a selfadjoint matrix.
We have $$\langle [\tilde{\varphi}(x_{ij})] \xi , \xi \rangle
= \tilde{\varphi}(\xi^T x \xi) \geq 0 .$$ So $[\tilde{\varphi}(x_{ij})] \geq 0$.
So $\tilde{\varphi}$ is completely positive and hence $\varphi$ is real completely positive.
Since $\tilde{\varphi}$ is completely positive it extends to a
completely positive, hence completely bounded, map on the complexification (see e.g.\ Proposition \ref{rcps}).
Indeed the norm and completely bounded norm of this extension is
$\varphi(1)$ by e.g.\ \cite[Proposition 3.6]{Pau}, hence $\| \varphi \| = \| \varphi \|_{\rm cb} = \varphi(1)$.
If $A$ is an approximately unital real Jordan operator algebra then
$A = {\mathfrak r}_A - {\mathfrak r}_A$ by Theorem \ref{brord}.
If $\varphi$ is a real positive functional then the argument of \cite[Corollary 2.8]{BRord} shows that $\varphi$ is bounded.
Hence $\varphi^{**}$ is real positive, and by the above $\varphi^{**}$ is systematically real positive and
real completely positive (RCP),
with $\| \varphi^{**} \| = \| \varphi^{**} \|_{\rm cb} = \varphi^{**}(1)$.
Hence $\| \varphi \| = \| \varphi \|_{\rm cb} = \lim_t \, \varphi(e_t)$, the
latter since $e_t \to 1$ weak* for a cai $(e_t)$.
A contractive unital functional on a unital real operator space $X$ extends to a
contractive unital functional on a
real $C^*$-algebra. This is positive and selfadjoint by \cite[Proposition 5.2.6 (3)]{Li},
and so its restriction is systematically real positive, and indeed positive
if $X$ is an operator system.
In particular any state on a unital real operator space is real positive.
Similarly on an approximately unital real Jordan operator algebra (e.g.\ by taking the bidual and using the same
argument). Conversely, if $\varphi$ is real positive and nontrivial then $\frac{1}{\alpha} \, \varphi$ is a state, where
$\alpha$ is $\varphi(1)$, or $\lim_t \, \varphi(e_t)$ in the case of
a cai $(e_t)$.
A selfadjoint functional on an operator system is clearly real positive if and only if it is positive,
and is a positive multiple of a state. So if it is unital it is a state and has norm $1$.
Such maps are real completely positive and completely positive, as in the complex case--see e.g.\
the argument a few paragraphs above.
\end{proof}
{\bf Remark.} If $A$ is a real $JC^*$-algebra then a positive and selfadjoint
functional is real positive, systematically real positive, and completely positive, and
is RCP.
\bigskip
A unital linear contraction on a unital operator space is real positive
as was stated above. However the converse is false in general, see Example \ref{f}.
Nonetheless, we have:
\begin{lemma} \label{sfunfun} For a real linear map $T : X \to B$ from a unital real operator space or approximately unital real Jordan operator algebra
into a
commutative real $C^*$-algebra the following are equivalent:
\begin{itemize} \item [(i)] $T$ is real positive.
\item [(ii)] $T$ is systematically real positive.
\item [(iii)] $T$ is real completely positive.
\end{itemize}
Such maps are bounded with $\| T \| = \| T \|_{\rm cb}$.
This equals $\| T(1) \|$, or $\lim_t \, \| T(e_t) \|$ in the case of a J-cai $(e_t)$.
If $T$ is unital then the above hold iff $T$ is contractive. If $X$ is a real operator system and $T$ is unital
then $T$ is contractive if and only if it is positive and selfadjoint;
such a map is completely positive.
\end{lemma}
\begin{proof} This follows from Lemma \ref{sfun} and the usual trick for this
in the complex case. Indeed the commutative real $C^*$-algebra
may be replaced by $C(K,\Cdb)$ viewed as a real $*$-algebra by basic facts about commutative real $C^*$-algebras
(see \cite[Theorem 1.9]{Ros} or \cite{Li}).
Then apply Lemma \ref{sfun} to the linear functional $\psi_w = T(\cdot)(w)$ for fixed $w \in K$. This will be real positive
if $T$ is real positive, and so $\| \psi_w \| = \psi_w(1)$ by Lemma \ref{sfun}.
Thus $\| T \|$ equals
$$\sup \{ |T(f)(w) | : w \in K, f \in {\rm Ball}(X) \} = \sup \{ |T(1)(w) | : w \in K \} =
\| T(1) \|.$$ In particular $T$ is bounded.
For an approximately unital real Jordan operator algebra $A$ we can take a weak* continuous extension on $A^{**}$ to
reduce to the unital case as in e.g.\ Proposition \ref{rcps}.
If $T$ is a unital contraction then $T$ is real positive as we said above Example \ref{expoly}. If further $X$ is a real operator system then
$T$ is positive and selfadjoint by (ii). It is also completely contractive since $\| T \| = \| T \|_{\rm cb}$, so applying the above to each
$T_n$ we see that it is completely positive.
\end{proof}
\begin{lemma} \label{inMn}
Let $T : A \to M_n$ be a linear map on a unital operator space, or a bounded linear map on an approximately unital real Jordan operator algebra,
which is real $n$-positive (that is, $T_n$ is real positive).
Then $T$ is RCP and systematically real positive,
and $\| T \|_{\rm cb} = \| T \|$. The latter equals $\| T (1) \|$ if $A$ is unital, otherwise equals
$\| T^{**}(1) \| = \lim_t \, \| T(e_t) \|$, if $(e_t)$ is a J-cai for $A$. \end{lemma}
\begin{proof} Suppose that $x = [x_{ij}]$ and $x + x^* \geq 0$ in $M_m(A + A^*)$.
Then $[T(x_{ij})] + [T(x_{ji})^*] = T_m(x) + T_m(x)^*$ is certainly selfadjoint. So to test if this is positive it is enough to check that
$\langle (T_m(x) + T_m(x)^*) \eta , \eta \rangle \geq 0$ for $\eta \in (\Rdb^n)^m$ and $m > n$.
However in a real Hilbert space $\langle (z+z^*) \eta , \eta \rangle = 2 \langle z \eta , \eta \rangle$.
Hence it is enough to check that $\langle T_m(x) \eta , \eta \rangle \geq 0$.
As in the proof of Proposition 2.2.2 in
\cite{ER} there is an isometry $\alpha : \Rdb^n \to \Rdb^m$ and $\xi \in (\Rdb^n)^n$ such that
$\eta = (\alpha \otimes I_n) (\xi)$.
Set $y = \alpha^* x \alpha$; then $y \in M_n(A)$ and
$$2 \langle T_n(y) \xi , \xi \rangle = \langle (T_n(y) + T_n(y)^*) \xi , \xi \rangle \geq 0$$
since $T_n$ is real positive. Hence as in Proposition 2.2.2 in \cite{ER},
$$ \langle T_m(x) \eta , \eta \rangle = \langle T_n( \alpha^* [x_{ij}] \alpha) \xi , \xi \rangle = \langle T_n(y) \xi , \xi \rangle \geq 0.$$
Thus $T$ is RCP.
The rest follows from Proposition \ref{rcps}. \end{proof}
{\bf Remark.} There is a {\em Schwarz inequality} for 2-positive real linear maps on real $C^*$-algebras, proved identically to
e.g.\ \cite[Proposition 3.3]{Pau}. See p.\ 492 in \cite{ROnr} for the Schwarz inequality for real UCP maps.
\bigskip
The map ${\rm Re} : B(H) \to B(H)_{\rm sa}$ is real positive, positive, unital, contractive
and selfadjoint.
We say that a map $u : X \to Y$ is {\em real contractive} if $\| {\rm Re} \, u(x) \| \leq \| {\rm Re} \, x \|$ for $x \in X$.
We say that $u$ is {\em real bounded} if there is a constant $c \geq 0$ with $\| {\rm Re} \, u(x) \| \leq c \, \| {\rm Re} \, x \|$ for $x \in X$,
and then we write $\| u \|_r$ for the least $c$ in this inequality. This is called the
{\em real bounded norm} (actually it is a seminorm).
\begin{lemma} \label{rsu} Let $A$ be a real unital operator space.
If $u : A \to B(H)$ is real positive and restricts to a selfadjoint map on $\Delta(A)$ then
$u$ is
real bounded with $\| u \|_r = \| u(1) \|$.
Indeed $u$ extends to a positive selfadjoint map
$\tilde{u} : {\mathcal S} = A + A^* \to B(H)$, with $\| \tilde{u} \|_r = \| u(1) \|$.
If $A$ is an approximately unital real Jordan operator algebra
and $u : A \to B(H)$ is real positive and restricts to a selfadjoint map on $\Delta(A)$ then
$u$ is
real bounded with $\| u \|_r = \sup_t \, \| u(e_t) \|$, where
$(e_t)$ is any Jordan cai for $A$. \end{lemma}
\begin{proof} First assume $A$ is unital.
The restriction of $u$ to $\Delta(A)$ is real positive, so by Theorem \ref{db8}
we have that $u$ is systematically real positive, $u(1) \geq 0$,
and $u$ extends to a positive selfadjoint
$\tilde{u} : {\mathcal S} = A + A^* \to B(H)$.
For any unit vector $\xi \in H$, by Lemma \ref{sfun}
we have that $\varphi_\xi = \langle u (\cdot) \xi , \xi \rangle$ is systematically real positive and bounded with norm
$\langle u(1) \xi , \xi \rangle$. By the proof of Lemma \ref{sfun}, $\varphi_\xi$ extends to
a positive selfadjoint functional $\psi_\xi$ on ${\mathcal S}$ of norm $\langle u(1) \xi , \xi \rangle$.
For $x \in {\rm Ball}({\mathcal S})$ we have $\| {\rm Re} \, x \| \leq \| x \| \leq 1$.
We have ${\rm Re} \, \tilde{u} (x) = \tilde{u} ({\rm Re} \, x)$ and
$\| {\rm Re} \, \tilde{u} (x) \|$ equals $$\sup \, | \langle \tilde{u} ({\rm Re} \, x) \xi , \xi \rangle |
= \sup \, | \psi_\xi ({\rm Re} \, x) |$$
(suprema over $\xi \in H , \| \xi \| = 1$). This is dominated by $\sup \, \langle u(1) \xi , \xi \rangle = \| u(1) \|$.
Next, if $A$ is an approximately unital real Jordan operator algebra then the same argument gives that
$\varphi_\xi$ is systematically real positive and bounded with norm
$\sup_t \, |\langle u(e_t) \xi , \xi \rangle|$. Then
$$\| {\rm Re} \, \tilde{u} (x) \| \leq \sup \, | \langle u(e_t) \xi , \xi \rangle | \leq
\sup_t \, \| u(e_t) \| ,$$
where the first supremum is over $t$ and $\xi \in H$ with $\| \xi \| = 1$.
\end{proof}
\begin{corollary} Let $A$ be a unital real $JC^*$-algebra.
If $u : A \to B(H)$ is selfadjoint and positive then
$u$ is real bounded with $\| u \|_r = \| ({\rm Re} \; u)^{**}(1) \| = \lim_t \, \| u(e_t) \|$.
Here $(e_t)$ is an (increasing, if one wishes) cai for $A$. \end{corollary}
\begin{proof} Since $u$ is real positive we may appeal to Lemma \ref{rsu}. \end{proof}
\begin{corollary} \label{trivj} Let $A, B$ be approximately unital real Jordan operator algebras,
and let $T : A \to B$ be a contraction which is approximately unital (that is, takes some Jordan cai to a Jordan cai), or more generally for which $T^{**}$ is unital. Then $T$ is real positive. If in addition
$T$ is selfadjoint on $\Delta(A)$ then $T$ is systematically real positive.
If $\theta : A \to B$ is a contractive Jordan homomorphism then $\theta$ is systematically real positive.
\end{corollary}
\begin{proof} By taking the second dual we may assume that $A, B$ are unital, and that $T(1) = 1$.
Then the first assertion follows from the lines before Example \ref{expoly}. The `in addition' statement then
follows from Theorem \ref{db8} or the paragraphs after that.
The last assertion follows easily from the first after replacing $B$ with $\overline{\theta(A)}$, and noting that the restriction of $\theta$ to the real JC*-algebra $\Delta(A)$ is
selfadjoint by Theorem \ref{contractive-hom}. It also follows easily from Lemma \ref{ApA}.
\end{proof}
{\bf Remark.} If $\theta : A \to B$ is a contractive (resp.\ isometric) Jordan homomorphism between unital or approximately unital real Jordan operator algebras
then we are not certain if $\tilde{\theta} : A + A^* \to B + B^*$ is contractive
(resp.\ isometric).
\begin{theorem} \label{cepro} Let $A$ and $B$ be approximately unital real Jordan operator algebras, and write
$A^1$ for a real Jordan operator algebra unitization of $A$ with $A \neq A^1$. Let $C$ be a unital real Jordan operator
algebra containing $B$ as a closed Jordan subalgebra.
\begin{itemize} \item [(1)]
A real positive real contractive linear map $T : A \to B$ extends to a unital real
positive linear map from $A^1$ to $C$, which is systematically real
positive and real contractive if $T$ is also selfadjoint on
$\Delta(A)$.
\item [(2)] A real completely positive completely contractive linear map $T : A \to B$ extends to a unital real
completely positive completely contractive linear map from $A^1$ to $C$.
\end{itemize}
\end{theorem}
\begin{proof} (1) \ We follow the proof of \cite[Theorem 2.3]{BNjp}, with a few tweaks.
Clearly (2) follows from (1). Let
$\tilde{T} : A^1 \to C$ be the canonical unital extension of $T$ in (1), and write $e, f$ for the units of $A^1$ and $C$.
So $\tilde{T}(a + s e) = T(a) + s f$ for $s \in \Rdb, a \in A$.
Suppose that $A$ is a Jordan subalgebra of some real $C^*$-algebra $D$. Since $e \notin A$ we may assume that $e = 1_{D^1} \notin D$.
Suppose that Re $(x + \lambda e) \geq 0$ for $x \in A$ and scalar $\lambda$. We need to prove that
Re $(T(x) + \lambda f) \geq 0$. This is clear if Re $(\lambda) = 0$, so suppose the contrary.
Now Re $(\lambda) > 0$ (by considering the character
$\chi$ on $D^1$ that annihilates $D$; this
is a state so that Re$(\chi(x) + \lambda) = {\rm Re}(\lambda) \geq 0$).
Since Re$(x + \lambda e) \geq 0$ we have $-\frac{1}{{\rm Re} (\lambda)} \, {\rm Re} (x) \leq e$.
Let $$x_n = - \frac{n-1}{n \, {\rm Re} (\lambda)} \, x \, , \; \; \;
y = {\rm Re} (x_n) \leq \frac{n-1}{n} \, e$$ and $z = y_+
\leq \frac{n-1}{n} \, e$. By Theorem \ref{brord} there exists
a contraction $a \in A$ with $0 \leq z \leq {\rm Re} (a) \leq e$. Now
Re $(a - x_n) \geq 0$, since Re $(x_n) = y \leq y_+ = z \leq {\rm Re} (a)$.
Also
$\| {\rm Re} \, (T(a)) \| \leq 1$ since $a$ and ${\rm Re} \, T$ are contractions, and therefore $0 \leq {\rm Re} \, (T(a)) \leq f$. Also, Re $T(a - x_n) \geq 0$,
so that Re $(T(x_n)) \leq {\rm Re} \, (T(a)) \leq f$. That is,
$$ - \frac{n-1}{n \, {\rm Re} (\lambda)} \, {\rm Re} \, (T(x)) \leq f.$$
Letting $n \to \infty$ we have that ${\rm Re} \, (T(x) + \lambda f) \geq 0$ as desired.
Hence $\tilde{T}$ is a unital real positive map, which is selfadjoint on $\Delta(A^1)$,
and thus is real contractive by Lemma \ref{rsu}.
(2) \ We have that $T_c : A_c \to B_c$ is completely positive and completely contractive. Then apply
\cite[Proposition 2.2]{BNp}, and finally restrict to $A^1$ (since $(A^1)_c = (A_c)^1$).
\end{proof}
{\bf Remarks.} Of course the extensions in the previous result are unique. As in the complex case \cite{BNp,BNjp} one may apply these results to extend
projections $P : A \to A$ to unital projections on $A^1$.
In the complex scalar case one gets a better result \cite[Theorem 2.3]{BNjp}: A real positive contractive linear map $T : A \to B$ extends to a unital real
positive contractive linear map on $A^1$. We do not know if this is true in the real case, even if $T$ is systematically real
positive. If it were one would obtain the corollary that
a bounded real linear map $T : A \to B$ between approximately unital Jordan operator algebras which is selfadjoint on $\Delta(A)$,
is real positive and contractive if and only if $T({\mathfrak F}_A) \subset {\mathfrak F}_B$. We do not know if this is true either, although certainly
$T({\mathfrak F}_A) \subset {\mathfrak F}_B$ implies that $T$ is real positive.
\bigskip
{\em Acknowledgements.} We thank M. Kalantar for several helpful discussions, which we mention in more detail
in and around Theorem
\ref{ijco}.
Several results here (in particular, many in Sections 2--4) are from the May 2020 PhD thesis of author W.\ Tepsan \cite{WTT}.
Other complementary facts, alternative proofs, and additional theory may be found there, and in a forthcoming paper by him.
In terms of future
directions, the noncommutative topology of real Jordan operator algebras in the spirit of
\cite{BNj}
looks like a fruitful topic that should be pursued elsewhere, as well as some other
features of the real positive cone that have not been explored for the real case here or in
\cite{WTT}. There are no doubt some interesting `operator space
aspects' of the theory of real associative operator algebras that are worth developing.
|
1,116,691,500,936 | arxiv | \section{Introduction}\label{aba:sec1}
Basis light-front quantization (BLFQ) is a nonperturbative approach developed for solving bound-state problems in quantum field theories\cite{Vary:2009gt,Wiecki:2014ola,Honkanen:2010rc,Li:2017mlw,Li:2015zda,Chakrabarti:2014cwa,Lan:2019vui,Tang:2018myz,Xu:2019xhk,Du:2019ips,Adhikari:2016idg,Adhikari:2018umb,Li:2019kpr}. This approach has been successfully applied to QED\cite{Wiecki:2014ola,Chakrabarti:2014cwa} and QCD\cite{Li:2017mlw,Li:2015zda,Lan:2019vui,Tang:2018myz,Xu:2019xhk,Du:2019ips,Adhikari:2016idg,Adhikari:2018umb,Li:2019kpr} systems. In our work, we apply the BLFQ approach to the nucleon and study its electromagnetic form factors. Within this Hamiltonian formalism, we adopt a light-front effective Hamiltonian, which includes the holographic QCD confinement potential supplemented by longitudinal confinement\cite{Li:2017mlw,Li:2015zda,Brodsky:2014yha}, along with the one-gluon exchange interaction with a fixed coupling constant. The light-front wave functions (LFWFs) are obtained by diagonalizing the effective Hamiltonian and are used to calculate the electromagnetic form factors.
Electromagnetic form factors are crucial for probing the structure of the nucleon. In the light-front formalism, the Dirac and Pauli form factors, $F_1(Q^2)$ and $F_2(Q^2)$, are defined through matrix elements of the longitudinal vector current ($J^+$)\cite{Brodsky:1980zm,Cates:2011pz}
\begin{eqnarray}
\braket{P+q,\uparrow|\frac{J^+(0)}{2P^+}|P,\uparrow} &=& F_1(Q^2), \\
\braket{P+q,\uparrow|\frac{J^+(0)}{2P^+}|P,\downarrow} &=& -(q^1-iq^2)\frac{F_2(Q^2)}{2M},
\end{eqnarray}
where $Q^2=-q^2=q_{\perp}^2$ is the square of the momentum transfer, and $M$ is the nucleon mass. The ket $\ket{P,S_z}$ represents the physical state, which can be expanded in terms of the wave functions\cite{Brodsky:2014yha},
\begin{eqnarray}
\ket{P,S_z} = &&\!\!\!\! \int \prod_{i=1}^{3} \frac{dx_id^2k_{i\perp}}{\sqrt{x_i}16\pi^3} 16\pi^3\delta \left(1-\sum_{i=1}^{3} x_i\right) \delta^2 \left(\sum_{i=1}^{3}k_{i\perp}\right) \nonumber \\
&&\!\!\!\! \times \Psi^{\Lambda}(x_i,k_{i\perp},\lambda_i) \ket{x_iP^+_i,k_{i\perp}+x_iP_{\perp},\lambda_i}.\label{wavefunction_expansion}
\end{eqnarray}
Here, $S_z$ and $\lambda_i$ are the helicities of the nucleon and the quarks, respectively, and $x_i=\frac{k_i^+}{P^+}$ is the longitudinal momentum fraction of quark $i$.
Thus, the flavor form factors can be written as the overlap of light-front wave functions.
The nucleon Sachs form factors are written in terms of the Dirac and Pauli form factors,
\begin{eqnarray}
G_E^i(Q^2)= F_1^{i}(Q^2) - \frac{Q^2}{4M_i^2} F_2^{i}(Q^2), ~~~~
G_M^i(Q^2)= F_1^{i}(Q^2) + F_2^i(Q^2).
\end{eqnarray}
Here, $i = \rm{P}$ or $\rm{N}$ labels the proton or the neutron, and $F_{1/2}^i=\sum_f e_f F_{1/2}^{f/i}$ are the Dirac (Pauli) form factors of the nucleon~\cite{Beringer:1900zz}. The electromagnetic radii of the nucleon can be obtained from
\begin{eqnarray}
\braket{r^2_E}^i=-6 \frac{dG^i_E(Q^2)}{dQ^2}\bigg|_{Q^2=0}, ~~~~
\braket{r^2_M}^i=-\frac{6}{G^i_M(0)}\frac{dG^i_M(Q^2)}{dQ^2}\bigg|_{Q^2=0}.
\end{eqnarray}
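These derivative formulas translate directly into a short numerical recipe: sample the form factor near $Q^2 = 0$ and take a finite-difference derivative. The sketch below is illustrative only and is not the code behind our results; the dipole parametrization and the step size \texttt{dQ2} are stand-in assumptions, and \texttt{normalize=True} implements the division by $G_M(0)$ appearing in the magnetic-radius formula.
\begin{verbatim}
import numpy as np

def dipole(Q2, Lambda2=0.71):
    # illustrative stand-in form factor (standard dipole, Lambda^2 in GeV^2)
    return (1.0 + Q2 / Lambda2) ** -2

def radius_sq(G, normalize=False, dQ2=1e-4):
    # <r^2> = -6 dG/dQ^2 at Q^2 = 0 (one-sided finite difference);
    # normalize=True divides by G(0), as in the magnetic-radius formula
    dG = (G(dQ2) - G(0.0)) / dQ2
    return -6.0 * dG / (G(0.0) if normalize else 1.0)

GEV2_TO_FM2 = 0.0389379  # (hbar*c)^2, converts GeV^-2 to fm^2

r2 = radius_sq(dipole) * GEV2_TO_FM2
print(f"<r^2> = {r2:.3f} fm^2, r = {np.sqrt(r2):.3f} fm")  # ~0.81 fm
\end{verbatim}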
\section{Hamiltonian Formalism}
BLFQ solves the eigenvalue equation of the light-front Hamiltonian
$P^- \ket{\beta}= P^-_{\beta} \ket{\beta}$,
which leads to the eigenvalue $P^-_{\beta}$ and the associated eigenvectors of the bound state.
In our work, we consider only the lowest Fock sector for the expansion of the nucleon, and employ an effective Hamiltonian $P^-_{\rm{eff}}$ given by
\begin{eqnarray}
P^-_{\rm{eff}} =&&\sum_{i} \frac{\rm{m}_i^2+p_{i\perp}^2}{x_i}+\frac{1}{2}\sum_{i,j} \big(\kappa_T^4 x_{i} x_{j}r_{ij\perp}^2+\frac{\kappa_L^4}{(\rm{m}_i+\rm{m}_j)^2}\partial_{x_i}(x_ix_j\partial_{x_j}) \big) \nonumber\\
&& + \frac{1}{2}\sum_{i,j} \frac{C_F4\pi \alpha_s}{Q^2} \bar{u}_{s^{\prime}_i}(k^{\prime}_i)\gamma^{\mu}u_{s_i}(k_i)\bar{u}_{s^{\prime}_j}(k^{\prime}_j)\gamma_{\mu}u_{s_j}(k_j),
\end{eqnarray}
where $\rm{m}_{i/j}$ is the constituent quark mass and $i,j=1,2,3$ label the Fock particles.
For each single-particle basis state, we employ the discrete plane-wave basis ($k$) in the longitudinal direction and the 2D harmonic oscillator (2DHO) basis ($n$ and $m$) in the transverse direction. In addition, a single quantum number ($\lambda$) represents the helicity degree of freedom.
For the nucleon, the proton (or neutron) is the lowest eigenstate, denoted by $\ket{P^{\Lambda}}$, where $\Lambda$ indicates the helicity of the nucleon. In momentum space, the LFWFs are written as
\begin{eqnarray}
\Psi^{\Lambda}&&\!\!\!(x_i,k_{i\perp},\lambda_i)=\sum_{\substack{n_1,m_1,n_2 \\ m_2,n_3,m_3}} \big( \psi^{\Lambda}(k_{i},n_{i},m_{i},\lambda_i) \nonumber \\&&
\times \prod_i \frac{\sqrt{2}}{b(2\pi)^{\frac{3}{2}}}\sqrt{\frac{n!}{(n+|m|)!}}e^{-p_{\perp}^2/(2b^2)}
\left(\frac{|p_{\perp}|}{b}\right)^{|m|}L^{|m|}_{n}(\frac{p_{\perp}^2}{b^2})e^{im\theta}\big).
\end{eqnarray}
Here, $b$ is the HO basis scale parameter with the dimension of mass, and $L^{|m|}_{n}(\frac{p_{\perp}^2}{b^2})$ is the generalized Laguerre polynomial.
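For concreteness, the transverse basis function appearing in the expansion above can be coded directly. The following sketch is illustrative only (the value of $b$ is a placeholder rather than one of our model parameters) and mirrors the formula term by term.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import genlaguerre

def phi_2dho(n, m, p_perp, theta, b=0.6):
    # 2DHO basis function in momentum space; b is the HO scale in GeV
    rho = (p_perp / b) ** 2
    norm = (np.sqrt(2.0) / (b * (2.0 * np.pi) ** 1.5)
            * np.sqrt(factorial(n) / factorial(n + abs(m))))
    return (norm * np.exp(-rho / 2.0) * (p_perp / b) ** abs(m)
            * genlaguerre(n, abs(m))(rho) * np.exp(1j * m * theta))
\end{verbatim}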
\section{Numerical Results}
In this paper, we set the model parameters $m_{q/\rm{OGE}}=0.2~\rm{GeV}$, $m_{q/k}=0.3~\rm{GeV}$, ~$\kappa_T=0.284~\rm{GeV}$, $\kappa_L=0.373~\rm{GeV}$ and $\alpha_s=1.0 \sim 1.2$.
\begin{figure*}[htbp]
\centering
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=\columnwidth]{sach_combine.pdf}
\end{minipage}
\label{sach_proton}
}
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=\columnwidth]{sach_combine_neutron.pdf}
\end{minipage}
\label{sach_neutron}
}
\caption{The Sachs form factors for the proton (a) and neutron (b). Nucleon Sachs FFs $G^{P/N}_E(Q^2)$ (upper panel) and $G^{P/N}_M(Q^2)$ (lower panel) are shown as functions of $Q^2$. The bands are BLFQ results reflecting our $\alpha_s$ uncertainty of $10\%$. The experimental data are taken from Ref.~\cite{Chakrabarti:2013dda}.}
\label{sach}
\end{figure*}
In Fig~\ref{sach_proton}, the Sachs form factors of the proton show agreement with the experimental data, except for $G^{\rm{P}}_M$ in the low $Q^2$ region. At $Q^2=0$, $G_M(0)$ gives the magnetic moment. Our calculated magnetic moment of the proton ($G^{\rm{P}}_M(0)=2.443\pm0.027$) differs somewhat from the experimental measurement ($G^{\rm{P}}_M(0)=2.79$). In Fig~\ref{sach_neutron}, we show the Sachs form factors of the neutron and compare them with the experimental data, revealing a significant difference. In particular, at $Q^2= 0$, our $G_M^{\rm{N}}(0)= -1.405\pm0.026$ disagrees with the experimental value ($G_M^{\rm{N}}(0)=-1.91$).
\begin{table}
\tbl{Electromagnetic radii of the nucleon. Our results are compared with the experimental data~\cite{Beringer:1900zz}.}
{\begin{tabular}{@{}ccccc@{}}\toprule
~&~$\braket{r_E^P}/(\rm{fm})$~&~$\braket{r_M^P}/(\rm{fm})$~&~$\braket{(r_E^N)^2}/(\rm{fm}^2)$~&~$\braket{r_M^N}/(\rm{fm})$ \\
\colrule
BLFQ ~&~ $0.85\pm0.05$ ~&~ $0.88\pm0.03$ ~&~$-0.09\pm0.17$ ~&~$0.90\pm0.03$ \\
Exp. Data~&~ $0.833\pm 0.010$ ~&~ $0.777\pm 0.016$ ~&~$-0.1161\pm 0.0022$ ~&~$0.862^{+0.009}_{-0.008}$\\
\botrule
\end{tabular}
}
\label{tab:radii}
\end{table}
We also calculate the electromagnetic radii of the nucleons, which we show in Table~\ref{tab:radii}. The BLFQ results are in good agreement with the experimental data\cite{Beringer:1900zz}.
\section{Conclusion}
In our work, we obtain the light-front wave functions by solving the eigenvalue equation of the light-front Hamiltonian, and evaluate the electromagnetic form factors of the nucleon. We observe that the proton form factors are in reasonable agreement with the experimental data, while the neutron form factors show significant deviations in the low $Q^2$ region. We also compare the electromagnetic radii of the nucleon with the experimental data.
\section{Acknowledgment}
We thank Henry Lamm, Wei Zhu for many useful discussions. CM is supported by the National Natural Science Foundation of China (NSFC) under the Grant No. 11850410436. XZ is supported by new faculty startup funding by the Institute of Modern Physics, Chinese Academy of Sciences and by Key Research Program of Frontier Sciences, CAS, Grant No ZDBS-LY-7020. JPV is supported by the Department of Energy under Grants No. DE-FG02-87ER40371, and No. DE-SC0018223 (SciDAC4/NUCLEI). A portion of the computational resources were provided by the National Energy Research Scientific Computing Center (NERSC), which is supported by the Office of Science of the U.S. Department of Energy under Contract No.DE-AC02-05CH11231.
|
1,116,691,500,937 | arxiv | \section{Introduction}\label{sec:intro}
\section{Introduction}
Mechanism design (MD) \cite[e.g.,][]{Myerson81} is a corner stone of economic theory, with major successes in both theory and practice.
MD studies mechanisms for resource allocation among agents with private information, known as \emph{hidden types}, with the goal of maximizing social welfare or revenue.
The past two decades have seen a surge of interest in the study of mechanism design through an algorithmic and computational lens.
Major success stories range from computational advertising and spectrum auctions to applications in internet routing and load balancing.
An equally important and central branch in economics is contract theory (CT) \cite[e.g.,][]{GrossmanHart83}. While the natural focus of mechanism design is on the allocation of goods, contract theory has a natural focus on the \emph{allocation of effort}. Contracts are a main tool for effort allocation since they use payments (monetary or other) to determine which actions agents will take.
While the computer science community has largely ignored contract theory, there is growing momentum, driven by increased practical demand, and a recent set of papers has started to explore applications of contract theory from a computational viewpoint \cite[e.g.,][]{babaioff2006combinatorial,DuttingRT19,DuttingRT20,guruganesh2020contracts}.
The increased practical demand for a computational and algorithmic approach to contracts is caused by an accelerating movement of contract-based markets from the analog/pen-and-paper world to the digital/electronic world. This includes online markets for crowdsourcing, sponsored content creation, affiliate marketing, freelancing and more.
The economic value of these markets is substantial.\footnote{For example, according to Statista, influencer marketing on Instagram was worth 5.67 billion U.S.~dollars in 2018. See \url{https://www.statista.com/statistics/950920/global-instagram-influencer-marketing-spending/}.}
In such applications, platforms serve as a bridge between the two sides of the market (e.g., advertising brands and content creators), and are thus well-situated to play the role of market makers, applying their position and data to design better contracts.
\paragraph{\bf The need for CT $\times$ MD.}
Our work is motivated by the fact that in many of the applications that motivate the surge of interest in contracts, we actually see features of both --- contract theory and mechanism design; and while there is some work on this in economics (which we survey in Section~\ref{sec:related-work}) --- with the exception of \cite{guruganesh2020contracts} --- we are not aware of any prior work on the combination of the two from the computer science perspective.
The starting point of our work is that many of the applications at the intersection of CT and MD naturally involve an agent who can exert different levels of effort (his actions), and as in classic contract theory this leads to a stochastic outcome/reward to the principal, but the cost per unit-of-effort may differ between agents and is naturally modelled as private information. For example, a brand (the principal) may approach an influencer on a social media platform (the agent) to create branded or sponsored content on their behalf; and the opportunity cost of different influencers for different amounts of effort and outcome levels may differ.
The point is that this naturally leads to a model that combines the classic-principal agent problem with hidden action with features of \emph{single-dimensional} mechanism design that we propose and study in this paper.
\paragraph{\bf Single vs.~multi-parameter.}
To concretely discuss agent types and explain where our contribution diverges from \cite{guruganesh2020contracts}, we briefly introduce the classic model for a principal-agent contractual relation \cite[e.g.,][]{GrossmanHart83,carroll2015robustness}. A basic principal-agent setting is described by $n$ (hidden) actions the agent can choose among, $m$ (observable) possible outcomes the actions can lead to, and an $n\times m$ matrix~$F$ whose $i$th row maps action $i$ to a distribution over the outcomes. In addition, there is a cost vector with $n$ costs, one per action, specifying how much costly effort the agent must invest to take that action. Finally there is a reward vector with $m$ rewards, one per outcome, specifying how much the principal gains when the agent achieves that outcome. Given a contract (payment per outcome), the agent picks the utility-maximizing action (which maximizes his expected payment from the contract minus his cost of effort), and the principal gets as revenue the expected reward minus payment.
In our proposed model, there are $n$ different effort levels $i$ (actions). Action $i$ costs $\gamma_i$ units of effort. As before, taking an action triggers a stochastic outcome according to some probability distribution $F_i$ (the $i$-th row of $F$). A new feature relative to \cite{guruganesh2020contracts} is that agents have a \emph{single-dimensional} private type $c$ --- their cost per unit-of-effort. So their cost for taking action $i$ is $c \cdot \gamma_i$.
The agent's hidden type in \cite{guruganesh2020contracts}, in contrast, is his mapping from actions to outcomes, as modeled by the matrix~$F$ (and is therefore naturally \emph{multi-dimensional}). The cost vector is assumed to be public knowledge. The (discrete) distribution of types is also publicly known. In this model, \cite{guruganesh2020contracts} establish the hardness of computing an optimal truthful contract menu (where optimal refers to the principal's revenue).
We study a complementary model, where $F$ is known but costs are hidden. The fundamental difference from a computational viewpoint is that in the new model, the type of an agent can be represented by a \emph{single parameter} --- his cost per unit of effort.
It is well-known from auction design that single-parameter types may allow positive results even when hardness results hold for multi-parameter types.
Table~\ref{tab:vision} positions our model with respect to the classic work in both mechanism design and contract theory, and on the intersection of the two.
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c|c|c|}
\toprule
& Known action & Hidden action \\ \hline
Known type & Trivial & \makecell{Classic contract theory\\
\citep{GrossmanHart83}}\\
&&\\
\hline
Hidden type & \makecell{Myerson's theory\\
\citep{Myerson81}} & Our model\\
(single-dim) & & \\
\hline
Hidden type & E.g., \citet{CaiDW13} & \cite{guruganesh2020contracts} \\
(multi-dim) & & \\
\bottomrule
\end{tabular}
\end{center}
\caption{Our model's relations to other settings in the literature}
\label{tab:vision}
\end{table*}
\subsection{Our Results}
\label{sub:our-results}
One of the most useful tools that Myerson's theory (see \cite[][]{Myerson81}) provides for single-parameter mechanism design is a characterization of implementable allocation rules. Known as Myerson's lemma, this characterization
is useful since it defines the design space --- the designer only needs to consider monotone allocation rules
\footnote{In the context of mechanism design, an allocation rule maps an agent's (reported) type to his allocation under the mechanism; a payment rule maps his type to what he pays (or in procurement is paid).}
and any such rule is guaranteed to have a corresponding payment rule such that the resulting mechanism is truthful.
Our main result is a ``Myerson's lemma'' for the more general model of contracts with private costs. Such a lemma should incorporate both the original characterization of Myerson, and the characterization of implementable actions from principal-agent theory of \cite{GrossmanHart83}.
These two previously-known characterizations seem quite different: The first characterizes implementable allocation rules as monotone, which means (in the procurement variant) that agents (service providers) with high reported costs are not assigned by the mechanism to provide service.%
\footnote{Or, if there are multiple levels of service, agents with high costs are assigned to provide lower, less costly such levels.} The second characterizes implementable actions --- those for which the principal has some contractual payment scheme over the outcomes that incentivizes the agent to choose this action. This characterization says that for an action $i$ to be implementable, there can be no convex combination of the other actions that achieves the same distribution as $i$ over outcomes, at a lower combined cost.
To contrast with our unified characterization, it is useful to restate monotonicity (in the sense of Myerson's theory) as follows: Assume for simplicity a discrete type space, and consider all agent types and whether they are assigned by the allocation rule to provide service or not. Consider the aggregate
service by the allocated agents and its overall cost. An allocation rule is monotone (and thus implementable) if and only if there is no combination of agent types that would together provide the same aggregate service at a lower combined cost.
Our unified characterization result is:
\begin{theorem*}[Informal characterization --- see Theorem~\ref{thm:charac-disc}]
Consider an allocation rule mapping agent types to assigned actions. The rule is implementable if and only if there is no weighted combination of agent types and actions that together achieve the same distribution over outcomes at a lower combined cost.
\end{theorem*}
Our characterization provides a computationally tractable way of checking (in time polynomial in the number of actions, outcomes and agent types) whether a given allocation rule is implementable (see Corollary~\ref{cor:poly-LP}).
For a continuous type space, the characterization is similar in spirit but slightly more involved and appears in Theorem~\ref{thm:char-cont}.
One implication of our characterization is that monotonicity of the allocation rule is no longer sufficient for implementability (as we demonstrate in Proposition~\ref{prop:non-monotone}). Intuitively this happens because the overall cost can now be improved not only by assigning the same actions to agents with lower costs, but also by taking an altogether different combination of actions.
We show two additional applications of our characterization results. The first stands in stark contrast to the APX-hardness of optimal contract design for multi-parameter types demonstrated by \cite{guruganesh2020contracts} (which holds even with only constantly-many actions):
\begin{theorem*}[Tractability for constantly-many actions --- see Theorem~\ref{thm:const-n}]
Finding the optimal contract for single-parameter types is solvable in polynomial time for a constant number of actions.
\end{theorem*}
Our second application (Theorem~\ref{thm:uniform}) uses the simple but powerful observation that if an allocation rule is implementable, the rest of Myerson's theory holds for it despite the general setting of contracts with private types. In particular, Myerson's payment identity holds and the expected revenue is equal to the expected virtual welfare (up to a constant).
It is thus tempting to consider mechanisms that maximize virtual welfare --- would they turn out to be implementable in our general setting, just like virtual welfare maximizers turn out to be monotone (and thus implementable) in Myerson's original setting? We establish that this is indeed the case when the agent's private cost is distributed uniformly:
\begin{theorem*}[Optimal contract for uniform costs --- see Theorem~\ref{thm:uniform}]
For uniform costs, the virtual welfare maximizing allocation rule is implementable.
\end{theorem*}
We leave as our main open question whether implementability holds for the candidate rule (the optimal monotone rule \`a la Myerson) beyond uniform distributions, and if not what should the mechanism be.
\subsection{Related Work}\label{sec:related-work}
Contract theory is an important and well-studied sub-field of microeconomics with many practical implications; for leading textbooks see \cite{bolton2005contract,salanie,Laffont}.
Many works on contract design study it entirely separately from mechanism design (screening) --- the former deals with hidden actions of the agent (moral hazard), and the latter with private types of the agent (adverse selection).
An analysis of the basic single principal, single agent setting (with no types) is found in the seminal work of \citet{GrossmanHart83}, and \citet{Holmstrom80} summarizes some of the classic foundations (see also \cite[]{Nobel16} for the scientific background on the Nobel prize shared by Hart and Holmstr\"om). One of the main take-aways is that the optimal contract can be found in this setting by solving (polynomially-many) linear programs.
A much smaller collection of works studies the combination of moral hazard and adverse selection.
One classic such work is by \citet{Myerson82}, who studies ``generalized'' principal-agent problems where agents have both private information and ``private decision domains'' (hidden action). The key insight is that the principal may, without loss of generality, restrict herself to incentive compatible direct mechanisms.
This extends the revelation principle to situations where there are moral hazard factors. We apply this insight in our results.
\citet{GottliebM13} study a simple setting with an effort/no effort binary choice for the agent and two possible outcomes, so their types are two-dimensional vectors. \citet{GottliebM15} study a more general setting where types and efforts are multi-dimensional (possibly
infinite-dimensional). Their work identifies assumptions under which ``optimal contracts are simple'' in the sense that the optimal (menu of) contracts consists of a single contract for all types. Our work focuses on solving for optimal ``menus'', i.e., contracts with separate payments per type.
\citet{ChadeS19} study a setup similar to ours in which there is a single-parameter type for the agent and the goal is to design the optimal (menu of) contracts. They make different assumptions than us, such as continuous action space or the simplifying MLRP assumption (a strengthening of first-order stochastic dominance among any pair of distributions associated with the actions). They obtain two different sufficient conditions under which they are able to optimally solve the design problem. Their solution involves minimizing the cost of implementing any given action at any given surplus for any given type in a pure moral hazard
problem.
Our focus is on a necessary and sufficient condition for implementability, and on computationally efficient solutions.
More recent work has started to explore contract design through the computational lens (but without private types) \cite[][]{babaioff2006combinatorial,HoSV16,DuttingRT19,KR19,DuttingRT20,DEFK21}.
\citet{guruganesh2020contracts} are the first to consider contract design with typed agents from a computational point of view. Importantly, their agent types are multi-dimensional, namely, an agent's type determines the outcome distributions corresponding to every available effort level.
They establish computational hardness of optimal contract design under moral hazard and adverse selection. Their hardness results motivate their exploration of the approximation power of simple classes of contracts (such as linear, fixed-provision contracts).
\citet{CastiglioniM021} study a similar multi-dimensional problem in which a principal seeks to design a single (non-type dependent) contract for a Bayesian agent, whose costs and distributions over outcomes are drawn from known distributions. They establish hardness results for computing the optimal contract, and argue that simple linear contracts provide optimal approximation guarantees subject to poly-time computability.
Further afield, some works consider hidden types of principals rather than agents \cite[e.g.,][]{BernheimW86}.
\section{Model}
\label{sec:model}
In this section we introduce our model in this paper: a contract design setting with hidden action and a single-parameter private type.
Section~\ref{sub:instance} describes a problem instance, Section~\ref{sub:contract-def} defines contractual solutions, Section~\ref{sub:running-example} presents a running example and Section~\ref{sec:comparison} positions our model relative to classic contract and mechanism design problems.
\begin{notation}
Let $[n]=\{0,\dots,n\}$ for every $n\in\mathbb{N}$ (i.e., zero is included).
\end{notation}
\subsection{Single-Parameter Principal-Agent Instance}
\label{sub:instance}
An instance (a.k.a.~setting) of our model consists of two players, a \emph{principal} and an \emph{agent}. An \emph{action} set~$[n]$ is available to the agent. The agent's chosen action leads to an \emph{outcome} $j\in [m]$, with \emph{reward} $r_j\geq 0$ for the principal. We assume without loss of generality that outcomes are ordered in a non-decreasing order by their \emph{rewards} $r_0\leq...\leq r_m$, and that there is an outcome with no reward for the principal, i.e., $r_0=0$. In what follows we do not distinguish between the outcomes and their rewards.
Action $i\in[n]$ requires $\gamma_i \ge 0$ \emph{units of effort} from the agent, and induces a distribution (probability mass function) $F_{i}$ over the $m$ outcomes/rewards. Let $F_{i,j}$ denote the probability of outcome~$j$ when the agent takes action $i$. We assume that only the first action requires no effort from the agent, i.e., $\gamma_0=0$, and that actions are ordered by the amount of effort they require, i.e., $\gamma_0<\gamma_1\leq ...\leq \gamma_n$. We also assume that the first outcome occurs if and only if the agent takes the first action, i.e., $F_{0,0}=1$ and $F_{i,0}=0$ for $ 1\leq i\leq n$. Thus, the principal can monitor whether or not the agent takes the zero-cost action. This assumption is a simple way to model the agent's opportunity to opt out of the contract when individual rationality (i.e., a guarantee of non-negative utility) is not satisfied \cite[e.g.,][]{HoSV16}.
Let
\begin{equation}
R_i=\mathbb{E}_{j\sim F_i}[r_j]=\sum_{j\in [m]} F_{i,j} r_j
\label{eq:expected-reward}
\end{equation}
be the \emph{expected reward} of action $i\in[n]$. We assume (as in \cite[][]{DuttingRT19}) that there are no ``dominated'' actions: every two actions $i<i'$ have distinct expected rewards $R_i\ne R_{i'}$, and the action that requires more units of effort has the higher expected reward, i.e., $R_{i}<R_{i'}$. Thus actions are also ordered by expected reward $0=R_0<\dots< R_n$.
\paragraph{Agent's type.} The agent has a single-parameter \emph{type} $c$. The type is drawn from a distribution~$G$ supported over the set of all possible types $C\subseteq \mathbb{R}_{\ge 0}$.
An agent's type captures his \emph{cost per unit-of-effort}, or in other words, his \emph{marginal} cost for effort.
When an agent of type $c$ takes action $i$, the principal gains a reward $r_j$ drawn according to distribution $F_i$, and the agent loses $\gamma_i c$ (the number of effort units that action $i$ requires multiplied by the agent's cost per unit).
We distinguish between instances with a \emph{discrete} type space $C$ and those with a \emph{continuous} one. We denote the former by ${\mathrm{Dis}}(F,\gamma,r,G,C)$, in which case $G$ is a discrete density function. We denote the latter by ${\mathrm{Con}}(F,\gamma,r,G,C)$, in which case $g$ denotes the probability density function, and $G$ denotes the cumulative distribution function. When clear from the context, we sometimes omit some parameters from ${\mathrm{Dis}}$ and ${\mathrm{Con}}$.
For the continuous case we assume (similarly to~\cite{Myerson81}) that $C=[0,\bar{c}]$ for $0<\bar{c}<\infty$.%
\footnote{Our results hold more generally for $C=[\underline{c},\bar{c}]\subseteq \mathbb{R}_{\ge 0}$.}
\paragraph{\bf Summary of an instance and who knows what.}
To summarize, an instance is described by
distributions $F=(F_0,\dots,F_n)$ and corresponding ``effort levels'' $\gamma=(\gamma_0,\dots,\gamma_n)$ for the actions,
rewards $r=(r_0,\dots,r_m)$ for the outcomes, and a type distribution $G$ over support $C$ for the agent.
We omit from the notation components of the setting that are clear from the context.
The instance itself is publicly known.
The action $i$ which the agent actually takes is \emph{hidden} from the principal, who only observes its stochastic reward $r_j$. The agent's realized type $c$ is \emph{privately-known} only to the agent himself.
\subsection{Contracts}
\label{sub:contract-def}
Our notion of a contract is a generalization of the standard one (see, e.g., \cite[][]{DuttingRT19}). The generalization is to accommodate for agent types, and is the direct revelation version of the ``menu of contracts'' notion studied by~\cite{guruganesh2020contracts}.
More formally, a \emph{contract} $(x,t)$ is composed of an \emph{allocation rule $x:C\to[n]$} and a \emph{payment rule $t:C\to \mathbb{R}^{m+1}_{\ge 0}$}.
A type report $c' \in C$ is solicited from the agent (where $c'$ may differ from the true type $c$), and the allocation rule maps $c'$ to an action
$x(c')$.
An important difference from mechanism design is that the action can only be \emph{recommended} to the agent by the contract, rather than \emph{enforced} like an allocation by an auction.
Whether or not the agent adopts the recommendation depends on the payment rule. A payment rule $t$ maps $c'$ to $m+1$~non-negative payments or \emph{transfers} $(t^{c'}_0,...,t^{c'}_m)$, one for each outcome.
The transfers are associated with outcomes rather than actions since the actions are hidden from the principal.
For action $i\in[n]$ let
\begin{equation}
T^{c'}_i=\mathbb{E}_{j\in F_i}[t^{c'}_j]=\sum_{j\in [m]} F_{i,j} t^{c'}_j\label{eq:expected-transfer}
\end{equation}
denote the expected payment from principal to agent with reported type $c'$ for taking action $i$.
All transfers are required to be non-negative; this guarantees the standard \emph{limited liability} property for the agent, who is never required to pay out-of-pocket (see, e.g., \cite[][]{innes1990limited, gollier1997risk,carroll2015robustness})
\begin{remark}
It is possible for the allocation rule $x$ to be randomized in the sense of mapping type~$c'$ to a distribution over actions.
In Appendix~\ref{appx:rand} we show that such randomized allocation rules have no extra power in the settings we consider. Thus we restrict attention to deterministic rules unless stated otherwise. More complex contract formats in which the randomization is also over payment rules are beyond the scope of our work, and they constitute an interesting avenue for future research. Such contracts are formally presented in Appendix~\ref{appx:rand}, along with a short discussion of their power.
\end{remark}
\paragraph{\bf The game.}
A contract $(x,t)$ induces the following two-stage game:
\begin{itemize}[leftmargin=0.63in]
\item[Stage 1:] The agent submits a type report $c'$, fixing the contractual payment vector $t^{c'}=(t^{c'}_0,...,t^{c'}_m)$.
\item[Stage 2:] The agent chooses an action $i$ (which is not necessarily the action $x(c')$ prescribed to it by the contract), and incurs a cost of $\gamma_ic$ (where~$c$ is his true type). An outcome $j$ is realized according to distribution $F_i$. The principal is rewarded~$r_j$ and pays $t^{c'}_j$ to the agent.
\end{itemize}
\begin{remark}
Once the game reaches Stage 2, we are back in a standard principal-agent setting with no types, in which the agent faces a standard contract (simply a vector of $m$ payments), and chooses accordingly a costly action that rewards the principal.
\end{remark}
\paragraph{\bf Utilities.}
Let $c$ be the true type, $c'$ the reported type, $i$ the action chosen by the agent and $j$ the realized outcome. The players' utilities are as follows: $t^{c'}_j-\gamma_i c$ for the agent, and $r_j-t^{c'}_j$ for the principal. In expectation over the random outcome $j\sim F_i$ these are $T^{c'}_i-\gamma_i c$ for the agent and $R_i-T^{c'}_i$ for the principal. The sum of the players' expected utilities is the expected \emph{welfare} from action $i$, namely $R_i-\gamma_i c$.
Let us now consider the agent's rational behavior given his expected utility. When facing payment vector $t^{c'}$ in Stage~2, an agent whose true type is $c$ will choose the action $i^*(c',c)$ that maximizes his expected utility:
$$
i^*(c',c)\in \arg\max_{i\in [n]}\{T^{c'}_i-\gamma_i c\}.
$$
As is standard in the contract design literature, if there are several actions with the same maximum expected utility for the agent, we assume consistent tie-breaking in favor of the principal (see, e.g., \cite[][]{guruganesh2020contracts}).
Thus $i^*(c',c)$ is well-defined.
When reporting his type in Stage~1, the agent will report $c'$ that maximizes $T^{c'}_{i^*(c',c)}-\gamma_{i^*(c',c)} c$, that is, his expected utility given his anticipated choice of action in Stage~2.
\paragraph{\bf Incentive compatibility.}
We say that a contract $(x,t)$ is \emph{incentive compatible (IC)} if for every type $c\in C$, it is in the best interest of an agent of type $c$ to both truthfully report $c$ in Stage~1 and to take the prescribed action $x(c)$ in Stage~2. Formally:
\begin{definition}
A contract $(x,t)$ is \emph{IC} if \;$\forall c\in C$,
\begin{enumerate}
\item $x(c)=i^*(c,c)$; and
\item $c\in \arg\max_{c'} \{T^{c'}_{i^*(c',c)}-\gamma_{i^*(c',c)} c\}$.
\end{enumerate}
\end{definition}
\noindent
Condition (1) in the above definition ensures that the prescribed action for type $c$ maximizes the agent's expected utility when he truthfully reports his type~$c$.%
\footnote{The contracts we consider are designed in favor of the principal and so the action recommended by $x$ can be compatible with the tie-breaking rule determining $i^*$.}
Condition (2) ensures that reporting truthfully is a (weakly) dominant strategy.
Note that focusing on IC contracts is without loss of generality by the revelation principle (see \cite[][]{Myerson82}).
Also, incentive compatibility usually goes hand in hand with individual rationality (IR), which in our context requires that the utility of an agent who reports truthfully
is non-negative.
Here the assumption that $\gamma_0=0$ comes in handy, as it means that the agent can always choose an action that requires zero effort. Together with limited liability this ensures non-negative utility.
We mention that classic principal-agent theory also assumes individual rationality for the principal. In our model, the principal can always guarantee herself non-negative utility by paying zero for all actions, incentivizing no-effort for all types.
\paragraph{\bf Objective.}
The principal's goal is to design an \emph{optimal} contract $(x,t)$, i.e., a contract that satisfies IC and maximizes her expected utility, where the expectation is over the agent's random type $c$ drawn from $G$, as well as over the random outcome of the agent's prescribed action $x(c)$:
$$
\mathbb{E}_{c\sim G}[R_{x(c)}-T^{c}_{x(c)}] = \mathbb{E}_{c \sim G} \left[\mathbb{E}_{j \sim F_{x(c)}}[r_j - t^c_j] \right].
$$
\subsection{An Example}
\label{sub:running-example}
The following principal-agent instance will serve as the running example of this paper.
\begin{example}\label{ex:runing}
There are two agent types, four actions with required effort levels $\gamma_0=0$, $\gamma_1=1$, $\gamma_2=3$, $\gamma_3=10$, and three outcomes with rewards $r_1=0$, $r_2=10$, $r_3=30$ (that is, $n=m=3$). The distributions over outcomes are $F_0=(1,0,0)$, $F_1=(0,1,0)$, $F_2=(0,0.5,0.5)$, and $F_3=(0,0,1)$. The two types are $c = 1$ and $c = 4$, and they occur with equal probability.
\end{example}
A possible contract for Example~\ref{ex:runing} is $x(1) = 3$ with payments $t^1 = (0,0,14)$, and $x(4) = 1$ with payments $t^4 = (0,4,0)$, as depicted in Figure~\ref{fig:ex}. Intuitively, the ``stronger'' type who incurs less cost per unit-of-effort ($1$ rather than $4$) is recommended a more strenuous action (the fourth rather than second action).
\begin{figure}[h!]
\begin{minipage}{0.49\textwidth}
\centering
\begin{tabular}{lll|l}
\toprule
$t_1(1) = 0$ & $t_2(1) = 0$ & $t_3(1) = 14$\\
$r_1 = 0$ & $r_2 = 10$ & $r_3 = 30$ \\
\midrule
1 & 0 & 0 & $\gamma_0 = 0$ \\
0 & 1 & 0 & $\gamma_1 = 1$ \\
0 & 0.5 & 0.5 & $\gamma_2 = 3$ \\
0 & 0 & 1 & $\gamma_3 = 10$\\
\bottomrule
\end{tabular}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\centering
\begin{tabular}{lll|l}
\toprule
$t_1(4) = 0$ & $t_2(4) = 4$ & $t_3(4) = 0$\\
$r_1 = 0$ & $r_2 = 10$ & $r_3 = 30$ \\
\midrule
1 & 0 & 0 & $\gamma_0 = 0$ \\
0 & 1 & 0 & $\gamma_1 = 1$ \\
0 & 0.5 & 0.5 & $\gamma_2 = 3$ \\
0 & 0 & 1 & $\gamma_3 = 10$\\
\bottomrule
\end{tabular}
\end{minipage}
\caption{A contract for Example~\ref{ex:runing}. The left tableau shows the payments for reported type $c' = 1$ (the recommended action is the fourth one), and the right tableau those for reported type $c' = 4$ (the recommended action is the second one).}\label{fig:ex}
\end{figure}
For this contract to be IC, we would like the agent to choose the tableau (read payments) that corresponds to his true type, as well as the recommended action for that type. For an agent with type $c = 1$ this means choosing the left tableau and fourth action. This gives him expected utility of $1 \cdot 14 - 10 \cdot 1 = 4$. An agent with type $c = 4$ should choose the right tableau and second action. This choice yields expected utility of $1 \cdot 4 - 1 \cdot 4 = 0$.
For neither type should the agent wish to pretend that he is of a different type and/or choose a different action. So, for example, the agent with type $c = 1$ should not be incentivized to pretend to be of type $c' = 4$ and take action $1$. Indeed, in this case his expected utility would be $1 \cdot 4 - 1 \cdot 1 = 3$, which is smaller than the expected utility he gets for truthfulness.
This is in fact true for all types and possible deviations, and so this contract is indeed IC. The principal's expected utility is $0.5 \cdot (30-14) + 0.5 \cdot (10-4) = 11$.
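To make this verification concrete, the following snippet (a minimal sketch in Python, not part of the formal model) enumerates every combination of true type, reported type, and action, and checks that truthfulness is (weakly) utility-maximizing for the contract above:
\begin{verbatim}
import numpy as np

gamma = np.array([0, 1, 3, 10])          # effort levels gamma_0..gamma_3
F = np.array([[1, 0,   0  ],             # outcome distributions,
              [0, 1,   0  ],             # one row per action
              [0, 0.5, 0.5],
              [0, 0,   1  ]])
x = {1: 3, 4: 1}                         # recommended action per type
t = {1: np.array([0, 0, 14]),            # payment vector per reported type
     4: np.array([0, 4,  0])}

def utility(c, report, action):          # expected transfer minus cost
    return F[action] @ t[report] - gamma[action] * c

for c in x:                              # truthfulness is (weakly) best
    best = utility(c, c, x[c])
    assert all(best >= utility(c, r, k) - 1e-9
               for r in x for k in range(4))
\end{verbatim}
The check passes for both types; note that type $c=1$ is exactly indifferent between the recommended fourth action and reporting truthfully while taking the third action, which is why the comparison is weak.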
\subsection{Relation to Classic Settings}
\label{sec:comparison}
Consider a simple instance of our model in which there is only a single type in $C$, so that the type of the agent is publicly known. For such instances our model reduces to the classic principal-agent model of \citet{GrossmanHart83}.%
\footnote{As in \cite[][]{carroll2015robustness}, we couple the classic model with the particular form of agent risk-aversion captured by limited liability.}
The goal in that model is to design a standard contract, composed of a single payment vector and an (implicit) action recommendation, such that the agent takes the recommended action and the expected utility of the principal is maximized.
At the other extreme, consider an instance of our model in which every action $i$ deterministically leads to a distinct reward $r_i$ (i.e., the distributions $F$ are point mass; without loss of generality, when viewed as a tableau as in Figure~\ref{fig:ex} they constitute the identity matrix). In this case, the agent's action is not hidden. For such instances our model reduces to a reverse (procurement) variant of the single-parameter mechanism design setting of \citet{Myerson81}. In this variant, the agent acts as a seller of a service whose cost for providing the service is private. The service can be supplied at different (known) levels $\gamma_0,\dots,\gamma_n$, and the values $\{r_i\}_i$ of the principal/buyer for these levels are publicly known. The goal is to design a truthful procurement auction that maximizes the buyer's expected revenue.
The special case in which there are two actions ($n=1$) corresponds to the standard Bayesian procurement setting in which the auction's outcome is whether or not to buy from the agent/seller depending on his declared cost (drawn from a known distribution). In this special case, a decision to buy corresponds to a recommendation of the costly action ($x(c)=1$), and not buying corresponds to the zero-cost action ($x(c)=0$).
\section{Characterization of Contract Implementability}
\label{sec:charac}
A main driver of mechanism design with single-dimensional types has been Myerson's theory (see \cite[][]{Myerson81}). It characterizes the auction allocation rules that can be turned into incentive-compatible auctions --- i.e., that are \emph{implementable} --- as those which are monotone. Furthermore, for each implementable allocation rule it provides an essentially unique payment rule that turns this rule into an IC mechanism.\footnote{Technically, \citet{Myerson81} only considered the continuous type case. A similar characterization, however, applies also in the discrete type case \cite[e.g.,][]{BergemannP07,Elkind07}. Allocation rules still have to be monotone, and while the payment identity no longer applies, it can be shown that payments have to be in a certain range, and hence there is still a revenue-optimal choice for any given allocation rule.}
In this section we develop such a theory for the single-parameter principal-agent model with hidden action and private cost. The main question is to find necessary and sufficient conditions for an allocation rule to be implementable as an IC contract.
\begin{definition}
\label{def:implementable}
An allocation rule $x:C\to[n]$ is \emph{implementable} if there exists a payment rule $t:C\to \mathbb{R}^{m+1}_{\ge 0}$ such that contract $(x,t)$ is IC.
\end{definition}
In Definition~\ref{def:implementable} the payment vector $t$ is required to be non-negative, thus imposing limited liability.
We note that this requirement is without loss of generality since every allocation rule that is implementable with arbitrary, possibly negative payments is also implementable with non-negative payments (by adding some offset to all payments). So, in principle, equivalent characterizations can be obtained without requiring payments to be non-negative. For our results in Section~\ref{sec:applications}, however, the constant offset and its interplay with limited liability will play a role, and for this it will be useful to develop the general machinery while requiring non-negative payments.
In Section~\ref{sec:properties} we define relevant properties of allocation rules, in particular monotonicity in the context of contracts. Sections~\ref{sec:discrete} and \ref{sec:continuous} give our characterization for discrete and continuous types, respectively, and show monotonicity is necessary but not sufficient for implementability. Throughout this section and unless stated otherwise, by an ``allocation rule'' we refer to such a rule in the context of contracts.
\subsection{Properties of Allocation Rules} \label{sec:properties}
Two properties of allocation rules that will play a role in our characterization results are monotonicity and the special case of piecewise constant monotonicity.
A monotone allocation rule recommends actions that are (weakly) more costly in terms of effort --- and hence also more rewarding in expectation for the principal --- to agents whose cost per unit-of-effort is lower. Intuitively, such agents are better-suited to take on effort-intensive tasks.
\begin{definition}\label{def:monotone}
An allocation rule $x:C\to[n]$ is \emph{monotone} if for every $c,c'\in C$,
$$
c<c'\implies x(c)\ge x(c').
$$
\end{definition}
A special class of monotone allocation rules are piecewise constant allocation rules (see Figure~\ref{fig:Fig2}). Informally, these are allocation rules such that: (i) the allocation function $x(\cdot)$ is constant on each of a finite number of intervals partitioning $[0,\bar{c})$; and (ii) the recommended action weakly decreases from one interval to the next.
\begin{definition}
\label{def:piecewise-constant}
An allocation rule $x:C\to[n]$ is \emph{monotone piecewise constant} if there exist $\ell+2$ \emph{breakpoints} $0=z_0< ...< z_{\ell}< z_{\ell+1}=\bar{c}$ where $\ell\leq n$, such that $x(z_{i})\geq x(z_{i+1})$ $\forall i\in [\ell]$ and for every $c\in C$,
\begin{eqnarray*}
c \in (z_{i},z_{i+1}) \implies x(c)=x(z_i);
\end{eqnarray*}
and without loss of generality $x(\bar{c})=x(z_\ell)$.%
\footnote{Note that in the continuous model, changing the allocation $x(\cdot)$ for a finite number of types does not change the objective. This is also the reason that the allocation of an interval $(z_i,z_{i+1})$ can be imposed without loss of generality on its left endpoint $z_i$ but not on its right one $z_{i+1}$.
}
\end{definition}
Note that in our setting, since the image of $x$ is finite, every monotone allocation rule can be viewed as corresponding to a monotone piecewise constant function.
\subsection{Characterization for Discrete Types}
\label{sec:discrete}
We now present our characterization of implementable allocation rules for discrete types.
Our approach must encompass the standard LP-based argument in contract theory for establishing whether or not a given action is implementable (see, e.g., Appendix A.2 of~\citep{DuttingRT19}).%
\footnote{In the classic principal-agent model with no private type, an action is \emph{implementable} if there exists a payment rule such that this action maximizes the agent's expected utility.}
At the same time, it must generalize Myerson's characterization of implementable allocation rules in single-parameter mechanism design problems (for discrete types). Our characterization thus shares features of Myerson's theory but also departs from it in significant ways.
\begin{theorem}[Discrete Characterization]\label{thm:charac-disc}
An allocation rule $x: C \rightarrow [n]$ is implementable if and only if there exist no weights $\lambda_{(c,c',k)} \geq 0$ for pairs of types $c,c' \in C$ and actions $k \in [n]$ which satisfy $\sum_{c'\in C}\sum_{k\in [n]}\lambda_{(c,c',k)}=1$ $\forall c\in C$ and the following conditions:
\begin{enumerate}
\item {\sc Weakly dominant distributions:}\label{itm:distrib-dicrete-char}
\begin{eqnarray*}
\sum_{{c'\in C}}\sum_{k\in [n]} F_{k,j} \lambda_{(c',c,k)} \geq F_{x(c),j} &\forall c \in C,j \in [m].
\end{eqnarray*}
\item {\sc Strictly lower joint cost:}\label{itm:costs-dicrete-char}
\begin{eqnarray*}
\sum_{c,c'\in C}\sum_{k\in [n]} \lambda_{(c,c',k)}\gamma_k c <
\sum_{c\in C} \gamma_{x(c)} c.
\end{eqnarray*}
\end{enumerate}
\end{theorem}
One can think of the weights $\lambda_{(c,c',k)}$ in Theorem~\ref{thm:charac-disc} as a \emph{deviation plan}, by which whenever the true type of the agent is $c$ then with probability $\lambda_{(c,c',k)}$ he deviates to type $c'$ and to action~$k$ (i.e., he pretends to be of type $c'$ and takes action $k$).\footnote{Note that $\lambda_{(c,c,x(c))}$ might be strictly positive. In this case, part of type $c$'s deviation plan is to report truthfully and take the allocated action.} Then, the first condition says that for every type~$c$, the combination of distributions over outcomes resulting from deviations to $c$ dominates the distribution of a non-deviating agent whose true type is $c$.
{The second condition compares the total expected cost of the deviation plan to the total cost of a non-deviating, truthful agent; it says that in summation over all agent types, the total cost of the deviation plan is strictly lower than that of truthfulness.}
Taking the two conditions together, Theorem~\ref{thm:charac-disc} states that an allocation rule is implementable precisely when there is no deviation plan for the agent which costs less than being truthful, and enables the agent to dominate the distribution over outcomes achievable by a truthful agent of any type (by pretending to be of that type).
\begin{proof}[Proof of Theorem \ref{thm:charac-disc}]
We take the following LP-based approach to determine whether or not $x$ is implementable. The LP for finding an IC contract that implements $x$ has $|C|m$ payment variables $\{t^c_j\}$ which must be non-negative by limited liability,
and $|C|^2(n+1)$ constraints ensuring that type~$c$'s expected utility from action $x(c)$ given contract $t^c$ is at least the expected utility from any other action $k\in [n]$ given any other contract $t^{c'}$.
Recall from Eq.~\eqref{eq:expected-transfer} that $T^{c}_{k}=\sum_{j=1}^{m} F_{k,j} t^{c}_j$ is the expected transfer to an agent for reporting type $c$ and taking action $k$.
The LP is:
\begin{equation*}
\begin{array}{ll@{}ll}\tag{LP1}\label{LP1}
\text{max} & 0 &\\
\text{s.t.} & T_{x(c)}^{c} -\gamma_{x(c)} c \geq T^{c'}_{k} -\gamma_{k} c & &\forall c,c' \in C,k \in [n],\\
& t^{c}_j\geq 0 & & \forall c\in C, j\in [m].
\end{array}
\end{equation*}
The dual of the above linear program has $|C|^2 (n+1)$ non-negative variables $\lambda_{(c,c',k)}$ indexed by $c,c'\in C$ and $k\in [n]$, and $|C|m$ constraints.
The dual is:
\begin{equation*}
\begin{array}{ll@{}ll}
\tag{D1}
\label{Dual1}
\text{min} & \sum_{c,c'\in C}\sum_{k\in [n]} \lambda_{(c,c',k)} (\gamma_k-\gamma_{x(c)})c \text{\space} &\\
\text{s.t.}& \sum_{c'\in C,k\in [n]}F_{k,j} \lambda_{(c',c,k)} \geq
F_{x(c),j} (\sum_{c''\in C,k\in [n]} \lambda_{(c,c'',k)}) & & \forall c \in C,j\in [m], \\
& \lambda_{(c,c',k)}\geq 0 & & \forall c,c'\in C, k\in [n].
\end{array}
\end{equation*}
By strong duality, \ref{LP1} is feasible (i.e., $x$ is implementable) if and only if its dual has an optimal solution with objective value of zero.
Notice that the dual is feasible (e.g., setting $\lambda_{(c,c,x(c))}=1$ for every type $c$ and all other $\lambda$s to zero yields a feasible solution).
Thus \ref{LP1} is \emph{not} feasible if and only if there exists a feasible solution $\lambda$ to the dual which has objective value strictly below zero. Lemma~\ref{lemma:normalize-dis-char} in Appendix~\ref{appx:Char} shows that the existence of such a $\lambda$ implies the existence of a feasible solution $\lambda'$ with strictly negative objective in which $\sum_{c''\in C}\sum_{k\in [n]}\lambda'_{(c,c'',k)}=1$ $\forall c\in C$. It can be verified that when $\sum_{c''\in C}\sum_{k\in [n]}\lambda'_{(c,c'',k)}=1$ $\forall c\in C$, the strictly negative objective value simplifies to
$\sum_{c,c'\in C}\sum_{k\in [n]} \lambda'_{(c,c',k)} \gamma_k c<\sum_{c\in C}\gamma_{x(c)}c,$ and the first set of constraints simplifies to $\sum_{c'\in C}\sum_{k\in [n]}F_{k,j} \lambda'_{(c',c,k)}\geq F_{x(c),j}$ $\forall c\in C, j\in [m].$ This completes the proof.
\end{proof}
In Appendix~\ref{appx:relation-lit} we elaborate on the relation between Theorem~\ref{thm:charac-disc} and classic characterizations. Specifically, we show how it reduces to the classic characterizations when, as specified in Section~\ref{sec:comparison}, there is either no hidden type (standard contract setting) or no hidden action (standard procurement setting).
\paragraph{\bf Computational implications.}
An immediate corollary of the LP-based proof of the characterization is that we can tractably: (i)~verify whether a given allocation rule is implementable; and (ii)~compute optimal payments for this allocation rule if it is.
For (i) we verify whether \ref{LP1} has a feasible solution, and for (ii) we minimize the expected payment $T_{x(c)}^c= \sum_{j} F_{x(c),j} t^c_j$ over the feasibility region of \ref{LP1}.
\begin{corollary}
\label{cor:poly-LP}
For a discrete type space $C$, the problem of determining whether or not an allocation rule is implementable is solvable in time $O(poly(n,m,|C|^2))$. Moreover, if an allocation rule is implementable, then we can find optimal payments for this allocation rule in time $O(poly(n,m,|C|^2))$.
\end{corollary}
We remark that if the type space is large ($|C|\gg n$) but the allocation rule is represented succinctly (e.g.,~by specifying which ranges of types map to which of the $n$ actions), the running time stays polynomial in $n,m$ (see Corollary~\ref{cor:poly-LP-cont} for the extreme case where $C$ is infinite).
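To illustrate the corollary, here is a hedged sketch (in Python, assuming SciPy's LP solver; the function \texttt{implementable} and its interface are ours, not part of the model) that tests feasibility of \ref{LP1} directly:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def implementable(F, gamma, types, x):
    # F: (n+1) x m outcome matrix; gamma: effort level per action;
    # types: list of costs c; x: dict mapping cost -> recommended action.
    n1, m = F.shape
    col = {c: i * m for i, c in enumerate(types)}  # variable block per type
    A, b = [], []
    for c in types:        # T^{c'}_k - T^c_{x(c)} <= (g_k - g_{x(c)}) * c
        for cp in types:
            for k in range(n1):
                row = np.zeros(len(types) * m)
                row[col[cp]:col[cp] + m] += F[k]
                row[col[c]:col[c] + m] -= F[x[c]]
                A.append(row)
                b.append((gamma[k] - gamma[x[c]]) * c)
    res = linprog(np.zeros(len(types) * m), A_ub=np.array(A),
                  b_ub=np.array(b), bounds=(0, None))   # t >= 0
    return res.status == 0   # 0 iff the solver found a feasible point
\end{verbatim}
On Example~\ref{ex:runing}, this check accepts the allocation rule of Figure~\ref{fig:ex} and rejects the monotone rule $x(1)=x(4)=2$ discussed in Proposition~\ref{prop:non-monotone} below; replacing the zero objective with the expected payments $\sum_{j} F_{x(c),j} t^c_j$ weighted by the type distribution yields the optimal payments of~(ii).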
\paragraph{\bf Monotonicity is necessary but not sufficient.}
We next observe that beyond the two extremes with just hidden type or just hidden action, the conditions in Theorem~\ref{thm:charac-disc} imply that any implementable allocation rule for a setting with private type and hidden action has to be monotone in the sense of Definition~\ref{def:monotone}.
\begin{proposition}
\label{prop:impl-is-mon-dis}
For discrete types, every implementable allocation rule is monotone.
\end{proposition}
The proof idea is simple ---
consider a ``swap'' between two types $\ell$ and $h$ that certify a violation of monotonicity, and use it to derive a deviation plan that satisfies the non-implementability conditions of Theorem~\ref{thm:charac-disc}.
\begin{proof}[Proof of Proposition~\ref{prop:impl-is-mon-dis}]
Suppose towards contradiction that there exists an implementable allocation rule~$x:C\to [n]$ which is non-monotone. By definition, there are two types $\ell<h$ such that $\gamma_{x(\ell)}<\gamma_{x(h)}$. We specify weights (a deviation plan) that satisfy the conditions for non-implementability in Theorem~\ref{thm:charac-disc}, contradicting our assumption that $x$ is implementable.
Let $\lambda_{(\ell,h,x(h))}=\lambda_{(h,\ell,x(\ell))}=1$,
and $\lambda_{(c,c,x(c))}=1$ $\forall c \in C \setminus \{h,\ell\}$, and $\lambda_{(c,c',k)}=0$ for all other entries. Simply put, $\lambda$ swaps the allocations of types $\ell$ and $h$.
We first show that $\lambda$ gives weakly dominant distributions (Condition~\eqref{itm:distrib-dicrete-char} in Theorem~\ref{thm:charac-disc}). For $c\notin \{\ell,h\}$, the inequality
$\sum_{{c'\in C}}\sum_{k\in [n]} F_{k,j} \lambda_{(c',c,k)} \geq F_{x(c),j}$ holds with equality $\forall j\in [m]$ by the definition of $\lambda$. For $c=\ell$, since the only non-zero $\lambda$ on the left-hand side is $\lambda_{(h,\ell,x(\ell))}=1$, the left-hand side is equal to $F_{x(\ell),j}$. Thus, the inequality holds with equality for $c=\ell$. The same arguments apply for $c=h$.
Next, we show strictly lower joint cost, i.e., $\sum_{c,c'\in C}\sum_{k\in [n]} \lambda_{(c,c',k)}\gamma_k c < \sum_{c\in C} \gamma_{x(c)} c$.
By definition of $\lambda$, this simplifies to $\gamma_{x(\ell)} h+\gamma_{x(h)} \ell<\gamma_{x(\ell)} \ell+\gamma_{x(h)} h$. It can be verified that the last inequality holds since $\ell<h$ and $\gamma_{x(\ell)}<\gamma_{x(h)}$. This completes the proof.
\end{proof}
Interestingly, however, we show in Proposition~\ref{prop:non-monotone} below that monotonicity is \emph{not} a sufficient condition for implementability in our model with private type and hidden action --- a clear distinction from Myerson's theory \citep{Myerson81}.
This distinction is not merely due to the fact that in contract theory certain actions are sometimes not implementable at all
(at least not in a way that ensures the principal non-negative utility).
In fact, our running example that we use to establish Proposition~\ref{prop:non-monotone} satisfies a condition for the implementability of any action (see \citep[Appendix A.2]{DuttingRT19}).
That is, for every type $c$ and every action, there is a payment vector that incentivizes an agent of type $c$ to take that action, but there is no payment rule that incentivizes truthful reporting of every type while also ensuring that
the agent prefers to take the prescribed action for that type.
\begin{proposition}\label{prop:non-monotone}
Consider Example~\ref{ex:runing}. The monotone allocation rule $x(1)=x(4)=2$ is unimplementable.
\end{proposition}
\begin{proof}
The proof relies on the characterization in Theorem~\ref{thm:charac-disc}.
Suppose towards a contradiction that $x$ is implementable. We specify weights that satisfy the conditions for unimplementability in Theorem~\ref{thm:charac-disc}.
Let $\lambda_{(1,1,3)}=\lambda_{(1,4,3)}=\lambda_{(4,1,1)}=\lambda_{(4,4,1)}=0.5$, and $\lambda_{(c,c',k)}=0$ otherwise. One may interpret $\lambda$ as the deviation where type $1$ takes action $3$, type $4$ takes action $1$, and both types report either $c'=1$ or $c'=4$ with equal probability.
We first show that $\lambda$ gives weakly dominant distributions (Condition~\eqref{itm:distrib-dicrete-char} in Theorem~\ref{thm:charac-disc}).
For $c=1$, since the only non-zero $\lambda$s on the left-hand side are $\lambda_{(4,1,1)}=\lambda_{(1,1,3)}=0.5$, the left-hand side (the generated distribution over outcomes) is equal to $F_{2,j}=F_{x(1),j}$. Thus, the inequality holds with equality for $c=1$. The same arguments apply for $c=4$.
Next, we show $\sum_{c,c'\in C}\sum_{k\in [n]} \lambda_{(c,c',k)}\gamma_k c < \sum_{c\in C} \gamma_{x(c)} c$, i.e., strictly lower joint cost (Condition~\eqref{itm:costs-dicrete-char} in Theorem~\ref{thm:charac-disc}).
By definition of $\lambda$ and recalling that $\gamma_1=1,\gamma_2=3,\gamma_3=10$, this simplifies to $14=\gamma_{1}\cdot 4+\gamma_{3}\cdot 1<\gamma_{2}\cdot 1+\gamma_{2}\cdot 4=15$.
That is, the initial joint cost is $15$, while the joint cost after deviation is $14$. This completes the proof.
\end{proof}
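The two conditions can also be verified numerically; the following throwaway sketch (Python) checks the deviation plan used in the proof against the data of Example~\ref{ex:runing}:
\begin{verbatim}
import numpy as np

F = np.array([[1, 0, 0], [0, 1, 0], [0, 0.5, 0.5], [0, 0, 1]])
gamma = np.array([0, 1, 3, 10])
x = {1: 2, 4: 2}                          # the monotone rule in question
lam = {(1, 1, 3): 0.5, (1, 4, 3): 0.5,    # deviation plan lambda_(c,c',k)
       (4, 1, 1): 0.5, (4, 4, 1): 0.5}

# Condition 1: deviations into each reported type dominate F_{x(c)}.
for c in x:
    mix = sum(w * F[k] for (ct, cr, k), w in lam.items() if cr == c)
    assert np.all(mix >= F[x[c]] - 1e-9)

# Condition 2: total cost of the plan (14) is below truthfulness (15).
dev_cost = sum(w * gamma[k] * ct for (ct, cr, k), w in lam.items())
assert dev_cost < sum(gamma[x[c]] * c for c in x)
\end{verbatim}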
We note that while at first glance Proposition~\ref{prop:non-monotone} seems to indicate an inherent incompatibility between monotonicity and implementability, it could still be the case that the \emph{optimal} monotone allocation rule is always implementable. We will return to this question in Section~\ref{sec:applications}.
\subsection{Characterization for Continuous Types}
\label{sec:continuous}
We next tackle the case of continuous types. We establish our characterization result (Theorem~\ref{thm:char-cont}) by reduction to the discrete case with an extra condition (Lemma~\ref{lemma:reduction}). The key idea is as follows: We can argue --- independently of the LP approach to characterization --- that any implementable allocation rule must be monotone (Proposition~\ref{prop:impl-is-mon-cont}). Because the image of $x$ is finite, this means $x$ must be monotone piecewise constant (see Figure~\ref{fig:Fig2} on the left). The characterization can then be phrased for the $O(n)$ breakpoints $\{z_i\}$ of $x$.
Like in Theorem~\ref{thm:charac-disc}, the characterization states that there should be no ``profitable'' deviation plan, where this time deviations refer to breakpoints~$\{z_i\}$.
For a technical reason explained below, the deviation plan involves two copies of every breakpoint
that we refer to as $R$ (Right) and $L$ (Left).
\begin{theorem}[Continuous Characterization]
\label{thm:char-cont}
An allocation rule $x: \mathbb{R} \rightarrow [n]$ is implementable if and only if it is monotone piecewise constant and there exist no weights $\lambda_{(R,i,{i'},k)},\lambda_{(L,i+1,{i'},k)}\geq 0$ for all $i\in [\ell],i'\in [\ell+1],k \in [n]$ which satisfy $\sum_{i'\in [\ell+1],k\in [n]}\lambda_{(L,i+1,i',k)}=\sum_{i'\in [\ell+1],k\in [n]}\lambda_{(R,i,i',k)}=1$ $\forall i\in [\ell]$ and the following conditions:
\begin{enumerate}
\item {\sc Weakly dominant distributions:}
\begin{eqnarray*}
\sum_{i'\in [\ell],k\in [n]}\frac{1}{2}\left(\lambda_{(R,i',i,k)}+ \lambda_{(L,i'+1,i,k)}\right)F_{k,j}\geq
F_{x(z_i),j} && \forall j \in [m], i\in [\ell].
\end{eqnarray*}
\item {\sc Strictly lower joint cost:}\label{item:cont-dist-const}
\begin{eqnarray*}
\sum_{i\in [\ell]}\left(\gamma_{x(z_i)} z_i+\gamma_{x(z_{i})} z_{i+1}\right) >\sum_{i\in [\ell],i'\in [\ell+1],k\in [n]}\left(\lambda_{(R,i,i',k)}\gamma_{k} z_i+\lambda_{(L,i+1,i',k)}\gamma_{k} z_{i+1}\right).
\end{eqnarray*}
\end{enumerate}
\end{theorem}
\paragraph{\bf Our discretization.}
The reason we index the $\lambda$s with $R$ or $L$ is
as follows. The set of breakpoints $z_0,...,z_{\ell+1}$ of the monotone piecewise constant allocation rule $x$ form a natural continuous-to-discrete reduction. However, ensuring only that type $z_i$ has no incentive to deviate from $x(z_i)$ is not sufficient, since this does not maintain IC for non-breakpoints (i.e., types inside the interval $(z_i,z_{i+1})$).
A helpful observation is that ensuring IC for both ``endpoints'' of $[z_i,z_{i+1})$ \emph{does} suffice to guarantee that all types in the interval do not deviate from $x(z_i)$, as these types share the same allocation and related incentives. We thus introduce the following discretization: We ``split'' the $i$th breakpoint (save $z_0$ and $z_{\ell+1}$) into two discrete types, $(L,i)$ and $(R,i)$. Then, consider an interval $[z_{i-1},z_i)$. For the left breakpoint, with cost $z_{i-1}$, we have type $(R,i-1)$, and for the right breakpoint, with cost that tends to $z_{i}$ from the left, we have type $(L,i)$.
\begin{figure}[h]
\centering
\captionsetup{justification=centering,margin=1.5cm}
\includegraphics[scale=1,width=140mm]{Fig2.pdf}
\caption{{\bf Left}: A monotone piecewise constant allocation rule $x(\cdot)$ for $4$ actions; the $x$-axis is the continuous type space and the $y$-axis is $\gamma_{x(c)}$. \\{\bf Right}: The discretization of $x(\cdot)$. Every breakpoint $z_i$ (save $z_0$ and $z_4$) is duplicated; the left copy of $z_i$ is allocated according to $x(z_{i-1})$ and the right copy is allocated according to $x(z_i)$ (the copies are plotted as if separated by a gap, but the distance between $(L,i)$ and $(R,i)$ is vanishing).}
\label{fig:Fig2}
\end{figure}
\paragraph{\bf Proof of Theorem~\ref{thm:char-cont}.}
Our proof of the characterization result for the continuous case relies on the following proposition (a slight generalization of a classic result by \cite{Myerson81})
and on
Lemma~\ref{lemma:reduction} below.
\begin{proposition}\label{prop:impl-is-mon-cont}
For continuous types, every implementable allocation rule is monotone piecewise constant.
\end{proposition}
\begin{proof}
We apply Myerson's argument to show that every implementable allocation rule $x$ mapping types to actions is monotone. Since there are only finitely many actions to map to, monotonicity then implies that $x$ is monotone piecewise constant.
To show that $x$ is monotone,
let $c<c'$ be two costs; it suffices to show that $\gamma_{x{(c)}}\geq \gamma_{x{(c')}}$. Let $t$ be a payment rule that implements $x$, and recall $T^c_i$ is the expected payment to the agent for reporting type $c$ and taking action $i$. If the agent's true type is $c$, by IC it must hold that the expected payment from truthfulness is at least the expected payment from reporting $c'$ and deviating to the action recommended to type $c'$:
\begin{equation}\label{eq:proof-menumonotonicity-eq1}
T^{c}_{x(c)}-\gamma_{x(c)} c\geq T^{c'}_{x(c')}-\gamma_{x(c')} c.
\end{equation}
Similarly, if the agent's true type is $c'$ then by IC it must hold that
\begin{equation}\label{eq:proof-menumonotonicity-eq2}
T^{c'}_{x(c')}-\gamma_{x(c')} c'\geq T^{c}_{x(c)}-\gamma_{x(c)} c'.
\end{equation}
Rearranging \eqref{eq:proof-menumonotonicity-eq1} and \eqref{eq:proof-menumonotonicity-eq2} we have
\begin{equation}\label{eq:proof-menumonotonicity-eq3}
(\gamma_{x(c)} -\gamma_{x(c')}) c' \geq T^{c}_{x(c)}-T^{c'}_{x(c')} \geq (\gamma_{x(c)} -\gamma_{x(c')}) c.
\end{equation}
Suppose towards a contradiction that $\gamma_{x(c)} <\gamma_{x(c')}$. Dividing \eqref{eq:proof-menumonotonicity-eq3} by $\gamma_{x(c)} -\gamma_{x(c')}$, which is negative (thus flipping the inequalities), we get that $c'\leq c$, a contradiction. This completes the proof.
\end{proof}
The following lemma reduces the implementability of a monotone piecewise constant allocation rule to that of the discretized version with an extra condition.
\begin{lemma}\label{lemma:reduction}
Consider a principal-agent instance ${\mathrm{Con}}(F,\gamma,C)$ and a monotone piecewise constant allocation rule $x:C\to[n]$ with a set $Z$ of breakpoints.
The following statements are equivalent for every payment rule $t: C \rightarrow \mathbb{R}^{m+1}_{\geq 0}$:
\begin{enumerate}
\item Payment rule $t$ implements $x$ with respect to ${\mathrm{Con}}(F,\gamma,C)$;
\item Payment rule $t$ implements $x$ with respect to ${\mathrm{Dis}}(F,\gamma,Z)$, such that
\begin{equation}
\textstyle
\forall 1\le i\le \ell : T^{z_i}_{x(z_i)} - \gamma_{x(z_i)} z_i = T^{z_{i-1}}_{x(z_{i-1})} - \gamma_{x(z_{i-1})} z_i.
\label{item:disc-constraint}
\end{equation}
\end{enumerate}
\end{lemma}
By Condition~\eqref{item:disc-constraint} in Lemma~\ref{lemma:reduction}, type $z_i$ is equally incentivized to take action $x(z_{i})$, which is allocated to types in the interval $[z_i,z_{i+1})$ on its right, and to deviate to $z_{i-1}$ and take action $x(z_{i-1})$, which is allocated to types in the interval $[z_{i-1},z_{i})$ on its left. Equivalently, in the discretization described above, type $(R,i)$ is incentivized to take its prescribed action $x(z_{i})$, and type $(L,i)$ (in the limit as its cost $c \rightarrow z_i$) is incentivized to take its prescribed action $x(z_{i-1})$.
\begin{proof}[Proof of Lemma~\ref{lemma:reduction}]
For the forward direction, suppose there exists a payment rule $t$ that implements $x$ with respect to ${\mathrm{Con}}(F,\gamma,C)$. We show that $t$ restricted to $Z$ both (i)~implements $x$ with respect to ${\mathrm{Dis}}(F,\gamma,Z)$; and (ii)~satisfies Condition~\eqref{item:disc-constraint}.
For (i), since $t$ implements $x$ with respect to ${\mathrm{Con}}(F,\gamma,C)$ and $Z\subseteq C$, type $z_i$ cannot increase its expected utility by reporting $z_{i'}$ or by taking an action other than $x(z_i)$. For (ii), suppose towards a contradiction that for some $1\le i\le \ell$ it holds that $\textstyle T^{z_i}_{x(z_i)}-\gamma_{x(z_i)}z_i\neq T^{z_{i-1}}_{x(z_{i-1})}-\gamma_{x(z_{i-1})}z_i$. That is, without loss of generality, $\textstyle (\gamma_{x(z_{i-1})}-\gamma_{x(z_i)}) z_i< T^{z_{i-1}}_{x(z_{i-1})}- T^{z_i}_{x(z_i)}$. It follows that for a small enough value of $\epsilon$, $\textstyle (\gamma_{x(z_{i-1})}-\gamma_{x(z_i)}) (z_i+\epsilon)< T^{z_{i-1}}_{x(z_{i-1})}- T^{z_i}_{x(z_i)}$. This contradicts the assumption that $t$ implements $x$ with respect to ${\mathrm{Con}}(F,\gamma,C)$, since type $z_i+\epsilon\in C$ could increase its payoff by reporting $c'={z_{i-1}}$ and by taking action $x(z_{i-1})$.
For the backward direction, suppose $t$ implements $x$ with respect to ${\mathrm{Dis}}(F,\gamma,Z)$ under Condition~\eqref{item:disc-constraint}. Define $\hat{t}$ such that $\hat{t}^c={t}^{z_{i}}$ for every $c\in [z_{i},z_{i+1})$, where $i\in [\ell]$, and $t^{z_{\ell+1}}=t^{z_{\ell}}$.\footnote{Note that $T^{z_{\ell}}_{x(z_{\ell})}=T^{z_{\ell+1}}_{x(z_{\ell+1})}$ since $x(z_{\ell})=x(z_{\ell+1})$ and none of types $z_{\ell},z_{\ell+1}$ deviates to the other.} Since $t^{z_{i}}$ and action $x(z_i)$ maximize type $z_{i}$'s expected utility among all types $Z$ and actions,
\begin{eqnarray}\label{eq:dis-to-cont-1}
\textstyle (\gamma_{k}-\gamma_{x(z_{i})}) z_{i} \geq T^{z_{i'}}_{k}-T^{z_i}_{x(z_i)} & \forall i\in [\ell],k\in [n], z_{i'} \in Z.
\end{eqnarray}
By Condition~\eqref{item:disc-constraint} and the fact that $t^{z_{i+1}}$ and action $x(z_{i+1})$ maximize type $z_{i+1}$'s expected utility $\forall i \in [\ell]$, we have that $t^{z_i}$ and action $x(z_i)$ also maximize type $z_{i+1}$'s expected utility. Thus,
\begin{eqnarray}\label{eq:dis-to-cont-2}
\textstyle T^{z_i}_{x(z_i)} - T^{z_{i'}}_{k} \geq (\gamma_{x(z_i)}-\gamma_{k}) z_{i+1} & \forall i\in [\ell],k\in [n], z_{i'}\in Z.
\end{eqnarray}
The above holds also for $i=\ell$ since ${x(z_{\ell})}={x(z_{\ell+1})}$, and we chose $t^{z_{\ell+1}}=t^{z_{\ell}}$ which also incentivizes $z_{\ell+1}$ to take $x(z_{\ell+1})$ as explained above.
Using Inequalities \eqref{eq:dis-to-cont-1}-\eqref{eq:dis-to-cont-2} we now show that type $c$'s expected utility is maximized by choosing payments $\hat{t}^{c}$ and action $x(c)$ for every $c\in [z_i,z_{i+1}), i\in [\ell]$. First, if $\gamma_{x(z_i)}>\gamma_{k}$, then by \eqref{eq:dis-to-cont-2} and using $\hat{t}^{c}={t}^{z_i}$, $x(c)=x(z_i)$, and $z_{i+1}> c$, we have that $\hat{T}^{c}_{x(c)} -\hat{T}^{c'}_{k} \geq (\gamma_{x(c)}-\gamma_{k}) c$ for every $k\in [n], c'\in C$. Otherwise, if $\gamma_{x(z_i)}\leq \gamma_{k}$, then by~\eqref{eq:dis-to-cont-1} and using that $\hat{t}^c=t^{z_i}$, $x(c)=x(z_i)$, and $c\geq z_{i}$, we have that $(\gamma_{k}-\gamma_{x(c)}) c\geq \hat{T}^{c'}_k-\hat{T}^c_{x(c)}$ for every $k\in [n], c'\in C$. This completes the proof.
\end{proof}
We are now ready to prove our characterization result for continuous types.
\begin{proof}[Proof of Theorem~\ref{thm:char-cont}]
By combining Proposition~\ref{prop:impl-is-mon-cont} and Lemma~\ref{lemma:reduction}, to determine whether $x$ is implementable one must determine whether there exists a payment rule $t$ that implements $x$ for ${\mathrm{Dis}}(F,\gamma,Z)$ under Condition~(\ref{item:disc-constraint}).
As in the proof of Theorem~\ref{thm:charac-disc} we adopt the following LP-based approach. The LP for finding an IC contract that implements $x$ under Condition~(\ref{item:disc-constraint}) (if one exists) has $|Z| m=O(nm)$ payment variables $\{t^{z_i}_j\}$, which must be non-negative by limited liability, and two sets of $O(|Z|^2n)=O(n^3)$ constraints referred to as the ``right'' set and the ``left'' set.
The right set of constraints ensures that reporting $z_i$ and taking action $x(z_i)$ maximizes type $z_i$'s expected utility among all type reports and actions. This set of constraints is equivalent to the constraints in \eqref{LP1}, ensuring that $t$ implements $x$.
The left set of constraints ensures that reporting $z_{i-1}$ and taking action $x(z_{i-1})$ also maximizes type $z_i$'s expected utility (i.e., ensuring Condition~(\ref{item:disc-constraint})).%
\footnote{It is also possible to replace this set of constraints by an equality constraint, which leads to a ``less symmetric'' dual.}
\begin{equation}\label{LP:cont}\tag{LP2}
\begin{array}{ll@{}ll}
\text{min} & 0 &\\
\text{s.t.} & T^{z_i}_{x(z_i)} -\gamma_{x(z_i)} z_i \geq
T^{z_{i'}}_k-\gamma_{k} z_i && \forall i,i' \in [\ell+1],k \in [n],\\
&T_{x(z_{i-1})}^{z_{i-1}} - \gamma_{x(z_{i-1})} z_i \geq T^{z_{i'}}_k- \gamma_{k} z_i && \forall 1\le i\le \ell ,i'\in [\ell+1],k \in [n],\\
&t^{z_i}_j\geq 0 && \forall i\in [\ell+1], j\in [m].
\end{array}
\end{equation}
The dual of \eqref{LP:cont} has two sets of $O(n^3)$ non-negative variables: ``right'' variables $\lambda_{(R,i,i',k)}$ and ``left'' variables $\lambda_{(L,i+1,i',k)}$ for $i\in [\ell],i'\in [\ell+1],k\in [n]$.
In our interpretation, the indexing $(\{R,L\},i,i',k)$ corresponds to duplicate side, true type, reported type, and action. The dual is:
\begin{equation}\label{Dual2}\tag{D2}
\begin{array}{ll@{}ll}
\text{max} & \sum_{i\in [\ell],i'\in [\ell+1],k\in [n]}\lambda_{(R,i,i',k)}(\gamma_{x(z_i)}-\gamma_{k}) z_i+\\
&\sum_{i\in [\ell],i'\in [\ell+1],k \in [n]}\lambda_{(L,i+1,i',k)} (\gamma_{x(z_{i})} - \gamma_{k}) z_{i+1}\\
\text{s.t.} & \sum_{i'\in [\ell],k\in [n]}\left(\lambda_{(R,i',i,k)}+\lambda_{(L,i'+1,i,k)}\right)F_{k,j}\geq &\\
&F_{x(z_i),j}\left(\sum_{i'\in [\ell+1],k\in [n]}\lambda_{(R,i,i',k)}+\sum_{i'\in [\ell+1],k\in [n]}\lambda_{(L,i+1,i',k)}\right) && \forall j \in [m], i\in [\ell],\\
& \lambda_{(R,i,i',k)},\,\lambda_{(L,i+1,i',k)}\geq 0 && \forall i\in [\ell],i'\in [\ell+1],k\in [n].
\end{array}
\end{equation}
By strong duality, \eqref{LP:cont} is feasible if and only if there is no assignment $\lambda$ that satisfies the constraints of \eqref{Dual2} and attains a strictly positive objective. Lemma~\ref{lemma:normalize-cont-char} in Appendix~\ref{appx:Char} shows that the existence of such a $\lambda$ implies the existence of a feasible solution $\lambda'$ with strictly positive objective in which $\sum_{i'\in [\ell+1],k\in [n]}\lambda'_{(R,i,i',k)}=\sum_{i'\in [\ell+1],k\in [n]}\lambda'_{(L,i+1,i',k)}=1$ $\forall i \in [\ell]$.
It can be verified that when the above holds, a strictly positive objective value simplifies to $\sum_{i\in [\ell]}(\gamma_{x(z_i)} z_i+\gamma_{x(z_{i})} z_{i+1}) >\sum_{i\in [\ell],i'\in [\ell+1],k\in [n]}\left(\lambda_{(R,i,i',k)}\gamma_{k} z_i+\lambda_{(L,i+1,i',k)}\gamma_{k} z_{i+1}\right)$, and the constraints simplify to $\sum_{i'\in [\ell],k\in [n]}\left(\lambda_{(R,i',i,k)}+ \lambda_{(L,i'+1,i,k)}\right)F_{k,j}\geq
2F_{x(z_i),j}$ $\forall j \in [m], i\in [\ell]$.
\end{proof}
\paragraph{\bf Algorithmic aspects.}
The LP-based proof of the characterization for the continuous setting again leads to efficient algorithms for checking implementability and finding optimal payments for an implementable allocation rule.
\begin{corollary}\label{cor:poly-LP-cont}
For continuous types, the problem of determining whether or not a piecewise constant allocation rule is implementable is solvable in time $O(poly(n,m))$. Moreover, if an allocation rule is implementable, then we can find optimal payments for this allocation rule in time $O(poly(n,m))$.
\end{corollary}
\section{Applying the Characterization}\label{sec:applications}
In this section we present two applications of the characterization to optimal contracts. First, we obtain a polynomial-time algorithm for computing the optimal contract when the number of actions is constant. Second, we give a non-trivial example of when a ``divide-and-conquer'' approach to computing the optimal contract can succeed: We show that separate treatment of the two types of IC constraints --- those that address private types and those that take care of hidden action --- works for the case of uniformly distributed costs. The idea is to initially ignore the IC constraints introduced by hidden action; this brings us into Myerson territory and we can optimize over monotone allocation rules. In a second step we then verify that for uniform distributions, this optimal monotone rule happens to satisfy the IC constraints due to hidden action.
\subsection{A Poly-Time Algorithm for a Fixed Number of Actions}
\label{sub:application1}
We give a polynomial-time algorithm for computing the optimal contract given discrete types $C$ and a constant number of actions $n$.
This positive result is in sharp contrast to the multi-parameter model, where computing the best contract for a constant number of actions is APX-hard \cite[e.g.,][]{guruganesh2020contracts,CastiglioniM021}.
In our single-parameter model, Proposition~\ref{prop:impl-is-mon-dis} allows us to search over \emph{monotone} allocation rules, reducing the complexity from $n^{|C|}$ (all allocations of actions to the $|C|$ types) to $|C|^n$.
We evaluate the rules by finding the optimal payments for each rule via Corollary~\ref{cor:poly-LP}.
\begin{theorem}
\label{thm:const-n}
The problem of computing the optimal contract for discrete types is solvable in polynomial time for a constant number of actions $n$.
\end{theorem}
\begin{proof}
According to Proposition~\ref{prop:impl-is-mon-dis}, to find the optimal implementable allocation rule it suffices to optimize over monotone rules. We bound the number of monotone allocation rules by the number of combinations of $|C|$ actions from $[n]$ (with repetition), by showing that each monotone allocation rule uniquely maps to such a multiset. Let $x$ be a monotone allocation rule, and let $S_x$ be the multiset of actions allocated in $x$. Any different monotone allocation rule $x'$ induces $S_{x}\neq S_{x'}$. To see this, let $c$ be the smallest cost for which $x(c)\neq x'(c)$, assuming $x(c)> x'(c)$ without loss of generality. Observe that $x(c)$ will not be allocated to costs higher than $c$ in $x'$, by monotonicity of $x'$. This implies that the number of instances of $x(c)$ in $S_{x}$ is strictly higher than in $S_{x'}$, so $S_x\neq S_{x'}$.
It is known that the number of combinations with repetition of $|C|$ not-necessarily distinct elements from a set of size $n+1$ is $\binom{|C|+n}{|C|}$ (see e.g.~\citep{combi-wiki}). By definition, and then by reorganizing,
$$
\textstyle \binom{|C|+n}{|C|}=\frac{(|C|+n)!}{|C|!\cdot n!}=\frac{\prod_{i=1}^{n}(|C|+i)}{n!}=O(|C|^n),
$$
for constant $n$. Thus, the number of monotone allocation rules is polynomial in $|C|$. Recall Corollary~\ref{cor:poly-LP}, stating that it is possible to determine in polynomial time whether or not an allocation rule is implementable, and to compute optimal payments if so. Thus, the brute-force approach of computing the expected revenue (expected reward minus optimal payments) of every monotone allocation rule yields a polynomial-time procedure for computing the optimal contract. This completes the proof.
\end{proof}
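For concreteness, the enumeration step can be sketched as follows (Python; the helper \texttt{monotone\_rules} is ours, and each generated rule would then be fed to the feasibility/payment LP of Corollary~\ref{cor:poly-LP}):
\begin{verbatim}
from itertools import combinations_with_replacement

def monotone_rules(types, n):
    # Yield every monotone map from costs to actions 0..n.
    costs = sorted(types)
    for combo in combinations_with_replacement(range(n + 1), len(costs)):
        # combo is non-decreasing, so pairing it reversed assigns
        # (weakly) higher-indexed actions to lower costs
        yield dict(zip(costs, reversed(combo)))
\end{verbatim}
The generator produces exactly $\binom{|C|+n}{|C|}$ rules, matching the count in the proof.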
\subsection{Optimal Contract for Uniformly-Distributed Costs}
\label{sub:application2}
In this section we explore a Myersonian approach to optimal contract design for continuous-type settings: optimizing the expected revenue over all monotone allocation rules, in hopes that for the \emph{revenue-optimal} such rule, monotonicity turns out to be sufficient for implementability.
In Theorem~\ref{thm:uniform} we show this to be the case for uniformly distributed costs.
\begin{assumption}\label{assumption:high-cost-type}
Consider the highest-cost type $\bar{c}$. For every action $ 1\leq i\le n$, the expected reward is strictly less than this type's cost, i.e., $\gamma_i \bar{c}>R_i$.
\end{assumption}
Intuitively, the above assumption implies that when facing type~$\bar{c}$, the only action the principal can incentivize without incurring a loss herself is the zero-cost action.
This assumption will come in handy in Section~\ref{sub:opt-cont-uniform}, where we further discuss it.
\subsubsection{Generalization of the Myerson Toolbox to Hidden Action}
We start by generalizing several classic results by \citet{Myerson81} to our hidden action model with continuous types. Like Myerson's theory, the results of this section apply to \emph{randomized} allocation rules, defined as follows:
\begin{definition}\label{def:rand-alloc}
A \emph{randomized} allocation rule $x:C\to\Delta([n])$ is a mapping from a cost $c$ to a distribution over recommended actions, where $x_k(c)$ denotes the probability
of recommending action $k\in [n]$.
\end{definition}
Overloading notation to accommodate randomization, denote by $R_{x(c)}=\sum_{k\in [n]}x_k(c) R_{k}$ the expected reward of $x$ given type $c$, and by $\gamma_{x(c)}=\sum_{k\in [n]}x_k(c) \gamma_k$ the expected effort. As before, the payment rule maps from cost $c$ to a vector of payments, which can now be seen as expected payments over the random coin tosses. Formally, let $T_{x(c)}^c = \sum_{i \in [n]}\sum_{j \in [m]} x_i(c) \cdot F_{i,j} \cdot t^c_j$ denote the expected payment over both random actions and random outcomes.
\begin{lemma}[Payment identity]
\label{lemma:myerson}
Let $C=[0,\bar{c}]$ be a continuous type space. Let $x:C\to\Delta([n])$ be a randomized allocation rule. Denote by $\gamma'_{x(c)}$ the derivative of $\gamma_{x(c)}$ with respect to $c$. Then if $x$ is implementable, for every payment rule $t$ that implements $x$,
\begin{eqnarray*}
\textstyle T^c_{x(c)}=T^{\bar{c}}_{x(\bar{c})}-\int_{c}^{\bar{c}} z\gamma'_{x(z)} \,dz & \forall c\geq 0.
\end{eqnarray*}
\end{lemma}
\begin{proof} Let $x$ be an implementable allocation rule. By arguments similar to those in the proof of Proposition~\ref{prop:impl-is-mon-cont}, we have that for every contract $t$ that implements $x$,
\begin{eqnarray*}\label{eq:proof-menumonotonicity-eq4}
\textstyle(\gamma_{{x}(c)} -\gamma_{{x}(c')}) c' \geq T^{c}_{x(c)}-T^{c'}_{x(c')}\geq (\gamma_{{x}(c)} -\gamma_{{x}(c')}) c & \forall c<c'\in C.
\end{eqnarray*}
Dividing the above by $c'-c$ we get
\begin{eqnarray*}
\textstyle-\frac{[\gamma_{{x}(c')} -\gamma_{{x}(c)}]}{c'-c}\cdot c' \geq -\frac{T^{c'}_{x(c')}-T^{c}_{x(c)}}{c'-c}\geq -\frac{[\gamma_{{x}(c')} -\gamma_{{x}(c)}]}{c'-c}\cdot c & \forall c<c'.
\end{eqnarray*}
Taking the limit as $c' \downarrow c$ yields $\frac{d}{dc}T^c_{x(c)} =\gamma'_{x(c)}\cdot c$ for every $c\in C$.
Integrating both sides from $c$ to $\bar{c}$ gives $T^{\bar{c}}_{x(\bar{c})}-T^{c}_{x(c)}=\int_{c}^{\bar{c}} z\gamma'_{x(z)}\,dz$ for every $c\geq 0$, which rearranges to $T^{c}_{x(c)}=T^{\bar{c}}_{x(\bar{c})}-\int_{c}^{\bar{c}} z\gamma'_{x(z)} \,dz$ $\forall c\geq 0.$
\end{proof}
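For intuition, consider the special case of a deterministic monotone piecewise constant rule with breakpoints $z_1<\dots<z_{\ell}$ (stated informally, as a direct consequence of the identity): $\gamma'_{x(\cdot)}$ is then a sum of point masses at the breakpoints, and the payment identity reduces to the discrete sum
$$
T^c_{x(c)} = T^{\bar c}_{x(\bar c)} + \sum_{i \colon z_i > c} z_i\left(\gamma_{x(z_{i-1})} - \gamma_{x(z_i)}\right),
$$
so the expected payment to type $c$ exceeds that of the highest type by the breakpoint-weighted drops in effort above $c$.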
An appropriate variant of the classic result that expected revenue equals expected virtual welfare applies in our model.
We first define for completeness the notion of virtual costs (the ``reverse'' version of virtual values by \cite{Myerson81}):
\begin{definition}
Let $G$ be a distribution over a continuous type set $C$ with density $g$. Given cost $c\in C$, the \emph{virtual cost} is $$\varphi(c)=c+\frac{G(c)}{g(c)}.$$
\end{definition}
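For example, for the uniform distribution $G=U[0,\bar{c}]$ we have $G(c)/g(c)=c$ and hence $\varphi(c)=2c$; for the exponential distribution with parameter $1$, $\varphi(c)=c+e^c-1$, as used in the example closing this section.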
We can then define the expected revenue of a contract as $\mathbb{E}_{c \sim G}[R_{x(c)}-T^c_{x(c)}]$, and its expected virtual welfare as $\mathbb{E}_{c \sim G}[R_{x(c)} - \varphi(c) \gamma_{x(c)}]$.
\begin{proposition}
\label{prop:rev-wel}
For continuous types, the expected revenue of an IC contract $(x,t)$ is equal to its expected virtual welfare minus the expected utility of type $\bar{c}$:
$$
\mathbb{E}_{c\sim G}[R_{x{(c)}}-T^c_{x(c)}]=\mathbb{E}_{c\sim G}[R_{x{(c)}}-\varphi(c) \gamma_{x{(c)}}]-(T^{\bar{c}}_{x(\bar{c})}-\gamma_{x(\bar{c})}\bar{c}).
$$
\end{proposition}
\begin{proof}
We show that the expected payment equals the expected virtual cost plus the utility of the highest type, i.e., $\mathbb{E}_{c\sim G}[T^c_{x(c)}]=T^{\bar{c}}_{x(\bar{c})}-\gamma_{x(\bar{c})}\bar{c}+\mathbb{E}_{c\sim G}[\varphi(c) \gamma_{x(c)}]$. By linearity of expectation this suffices to prove the proposition. By the expected payment formula in Lemma~\ref{lemma:myerson},
\begin{eqnarray*}
\textstyle\mathbb{E}_{c\sim G}[T^c_{x(c)}]=\int_{0}^{\bar{c}}T^c_{x(c)} g(c) \dd c =T^{\bar{c}}_{x(\bar{c})}G(\bar{c}) + \int_{0}^{\bar{c}}[-\int_{c}^{\bar{c}} z\gamma'_{x(z)} \dd z] g(c) \dd c.
\end{eqnarray*}
Reversing the integration order,
\begin{eqnarray*}
{T^{\bar{c}}_{x(\bar{c})}G(\bar{c}) + \textstyle\int_{0}^{\bar{c}}-[\int_{0}^{z}g(c) \dd c] z\gamma'_{x(z)} \dd z =
T^{\bar{c}}_{x(\bar{c})}G(\bar{c}) + \int_{0}^{\bar{c}}-G(z) \gamma'_{x(z)} z \dd z.}
\end{eqnarray*}
Using integration by parts, where $G(z)z=u(z)$ and $\gamma'_{x(z)}=v'(z)$
\begin{eqnarray*}
T^{\bar{c}}_{x(\bar{c})}G(\bar{c}) - \textstyle\int_{0}^{\bar{c}}{G(z) z}\cdot {\gamma'_{x(z)}} \dd z = T^{\bar{c}}_{x(\bar{c})}G(\bar{c}) -G(z) z \gamma_{x(z)}\mid^{\bar{c}}_{0}+\int_{0}^{\bar{c}}(g(z) z+G(z)) \gamma_{x(z)} \dd z.
\end{eqnarray*}
Since $G(\bar{c}) = 1$ and $G(0) = 0$, the above equals $T^{\bar{c}}_{x(\bar{c})}-\gamma_{x(\bar{c})}\bar{c}+\int_{0}^{\bar{c}}\big( z+\frac{G(z)}{g(z)}\big) \gamma_{x(z)} g(z) \dd z$. That is,
\begin{eqnarray*}
T^{\bar{c}}_{x(\bar{c})}-\gamma_{x(\bar{c})}\bar{c}+\mathbb{E}_{c\sim G}[\varphi(c)\gamma_{x(c)}].
\end{eqnarray*}
This completes the proof.
\end{proof}
\subsubsection{Finding the Optimal Contract for Uniform Costs}\label{sub:opt-cont-uniform}
According to Proposition \ref{prop:rev-wel}, the expected revenue of an IC contract is equal to the expected virtual welfare minus the (non-negative) expected utility of the highest type.
Focusing on the first term and ignoring momentarily the second one, a natural candidate for the revenue-maximizing allocation rule is the one that maximizes virtual welfare.
\begin{definition}\label{def:welfare-max-alloc}
The \emph{virtual welfare-maximizing allocation rule} $x^*$ (among all randomized such rules) is given by the deterministic rule that chooses
\begin{eqnarray*}
x^*(c)=\arg\max_{i\in [n]}\{R_i-\varphi(c)\cdot \gamma_i\} && \forall c\in C.
\end{eqnarray*}
\end{definition}
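As a quick illustration, in the uniform case (where $\varphi(c)=2c$ as computed above) the rule $x^*$ is the upper envelope of the lines $c\mapsto R_i-2c\,\gamma_i$, and can be sketched on a cost grid as follows (Python; the helper name and grid resolution are our choices):
\begin{verbatim}
import numpy as np

def x_star_uniform(R, gamma, cbar, grid=10000):
    # R, gamma: numpy arrays indexed by action 0..n; G = U[0, cbar].
    cs = np.linspace(0.0, cbar, grid)
    vals = R[None, :] - 2.0 * cs[:, None] * gamma[None, :]
    return cs, vals.argmax(axis=1)   # recommended action per cost
\end{verbatim}
Since each action contributes a line whose slope $-2\gamma_i$ is steeper for higher-effort actions, the resulting argmax is monotone piecewise constant, consistent with Proposition~\ref{prop:impl-is-mon-cont}.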
The following theorem shows that when the distribution over types is uniform, the virtual welfare maximizing allocation rule is implementable --- a non-trivial result in light of Proposition~\ref{prop:non-monotone}, which shows that monotonicity alone is insufficient.
Furthermore, using Assumption~\ref{assumption:high-cost-type} this theorem shows that $x^*$ is implementable by a contract for which type $\bar{c}$'s expected utility is exactly $0$ (this is the reason we could ignore the second term above). Then, Proposition~\ref{prop:rev-wel} implies optimality of $x^*$.
\begin{theorem}
\label{thm:uniform}
Under Assumption~\ref{assumption:high-cost-type} and a uniform distribution over types $G=U[0,\bar{c}]$, the virtual welfare-maximizing allocation rule $x^*$ is implementable by a payment rule $t$ for which $T^{\bar{c}}_{x^*(\bar{c})}-\gamma_{x^*(\bar{c})}\bar{c}=0$.
Contract $(x^*,t)$ thus maximizes expected revenue among all IC, limited liability contracts.
\end{theorem}
\begin{proof}
Let $x^*$ be the virtual-welfare maximizing allocation rule. Note that by the assumption that $R_i< \gamma_i\bar{c}$ for $1\le i\le n$ and since $\bar{c} \le \varphi(\bar{c})$, we have that $x^*(\bar{c})=0.$ Thus, to show that $x^*$ is implementable by a contract $t$ for which $T^{\bar{c}}_{x^*(\bar{c})}-\gamma_{x^*(\bar{c})}\bar{c}=0$, it suffices to show that $x^*$ is implementable by a contract in which $t^c_0=0$ $\forall c\in C$. To test whether there exists such a contract, we test whether~\eqref{LP:cont} is feasible for $x^*$ when restricting $t^{z_i}_0=0$ $\forall i\in [\ell+1]$. Specifically, we test the feasibility of the following linear program.
\begin{equation*}
\begin{array}{ll@{}ll}
\min & 0 &\\
\text{s.t.} & \sum_{j=1}^m F_{x^*(z_i),j}t^{z_i}_j-\gamma_{x^*(z_i)} z_i \geq
\sum_{j=1}^m F_{k,j}t^{z_{i'}}_j-\gamma_{k} z_i && \forall i,i' \in [\ell+1],k \in [n],\\
& \sum_{j=1}^m F_{x^*(z_{i-1}),j}t^{z_{i-1}}_j - \gamma_{x^*(z_{i-1})} z_i \geq \sum_{j=1}^m F_{k,j}t^{z_{i'}}_j - \gamma_{k} z_i && \forall 1\le i\le \ell ,i'\in [\ell+1],k \in [n],\\
&t^{z_i}_j\geq 0 && \forall i\in [\ell+1], 1\le j\le m.
\end{array}
\end{equation*}
Using the same techniques as in the proof of Theorem~\ref{thm:char-cont} we have that $x^*$ is implementable by a contract for which $t^{z_i}_0=0$ $\forall i\in [\ell+1]$ if and only if there exist no weights $\lambda_{(L,i+1,{i'},k)},$ $\lambda_{(R,i,{i'},k)} \geq 0$ for all $i\in [\ell],i'\in[\ell+1],k \in [n]$ which satisfy $\sum_{i'\in [\ell+1],k\in [n]}\lambda_{(L,i+1,i',k)}= \sum_{i'\in [\ell+1],k\in [n]}\lambda_{(R,i,i',k)}=1$ $\forall i\in [\ell]$ and the following conditions: (1) $\sum_{i'\in [\ell],k\in [n]}\frac{1}{2}(\lambda_{(R,i',i,k)}F_{k,j}+ \lambda
_{(L,i'+1,i,k)}F_{k,j})\geq
F_{x^*(z_i),j}$ $\forall 1\le j \le m, i\in [\ell],$ and (2) $\sum_{i\in [\ell]}(\gamma_{x^*(z_i)} z_i+\gamma_{x^*(z_{i})} z_{i+1}) >\sum_{i\in [\ell],i'\in [\ell+1],k\in [n]}\lambda_{(R,i,i',k)}\gamma_{k} z_i+\lambda_{(L,i+1,i',k)}\gamma_{k} z_{i+1}.$
Suppose towards a contradiction that $x^*$ is unimplementable by such a contract. Fix a sufficiently small value of $\epsilon$ to be determined and define the following randomized allocation rule.
\begin{eqnarray}\label{eq:new-alloc}
x_k(c)=\begin{cases}
\sum_{i'\in [\ell+1]}\lambda_{(R,i,i',k)} & c\in [z_i,z_i+\epsilon),i\in [\ell],\\
\sum_{i'\in [\ell+1]}\lambda_{(L,i+1,i',k)} & c\in [z_{i+1}-\epsilon,z_{i+1}),i\in [\ell],\\
x_k^*(c) & \text{otherwise,}
\end{cases} & \forall k\in [n].
\end{eqnarray}
In Appendix~\ref{appx:uni} we show that for sufficiently small value of $\epsilon$, the expected virtual welfare of $x$ is strictly higher than that of $x^*$, contradicting our hypothesis that $x^*$ is the virtual welfare maximizer.
\end{proof}
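As an aside, the feasibility test invoked in the proof can be carried out mechanically with any LP solver. The following Python sketch (ours; the matrix $A$ and vector $b$ form a hypothetical toy system, not the constraints above) tests feasibility of $\{At\geq b,\ t\geq 0\}$ via a linear program with zero objective.
\begin{verbatim}
# Sketch: feasibility of { A t >= b, t >= 0 } via an LP with zero
# objective. A and b are a hypothetical toy instance.
import numpy as np
from scipy.optimize import linprog

def is_feasible(A, b):
    n = A.shape[1]
    # linprog expects A_ub @ t <= b_ub, so negate both sides of A t >= b
    res = linprog(c=np.zeros(n), A_ub=-A, b_ub=-b,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0   # status 0 = solved, hence feasible

A = np.array([[1.0, -1.0],
              [0.0,  1.0]])
b = np.array([0.5, 1.0])
print(is_feasible(A, b))     # True: e.g. t = (1.5, 1.0) satisfies both rows
\end{verbatim}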
A challenge in generalizing the result in Theorem~\ref{thm:uniform} to other distributions is that our proof uses the fact that for uniform distributions, a strictly lower joint cost implies a strictly lower joint \emph{virtual} cost. This is not the case for other distributions, e.g., exponential, as the following example shows.
\begin{example} Consider a setting with $5$ actions with $\gamma_0=0$, $\gamma_1=1,\gamma_2=2,\gamma_3=3,\gamma_4=7$ and two types $c_1=1$, $c_2=2$. The joint cost of $x(1)=2$, $x(2)=3$ is: $\gamma_3\cdot 2+\gamma_2\cdot 1=8$. The joint cost of $x'(1)=4, x'(2)=1$ is: $\gamma_1\cdot 2+\gamma_4\cdot 1=9$. Thus the joint cost of $x$ is lower than the joint cost of $x'$. The virtual cost function of the exponential distribution with parameter $1$ is $\varphi(c)=c+e^c-1$. Thus, the joint virtual cost of $x(1)=2$, $x(2)=3$ is: $\gamma_3\cdot (2+e^2-1)+\gamma_2\cdot (1+e-1)\approx 30.6$. The joint virtual cost of $x'(1)=4, x'(2)=1$ is: $\gamma_1\cdot (2+e^2-1)+\gamma_4\cdot (1+e-1)\approx 27.41$. Thus, the joint virtual cost of $x$ is higher than the joint virtual cost of $x'$.
\end{example}
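The arithmetic in the example is easy to reproduce numerically; the short check below (ours) prints the two joint costs and the two joint virtual costs.
\begin{verbatim}
# Numerical check of the example: plain joint costs vs. joint virtual
# costs under the exponential virtual cost phi(c) = c + e^c - 1.
import math

gamma = [0, 1, 2, 3, 7]                  # gamma_0, ..., gamma_4
phi = lambda c: c + math.exp(c) - 1.0

cost_x  = gamma[3] * 2 + gamma[2] * 1    # x(1)=2,  x(2)=3   -> 8
cost_xp = gamma[1] * 2 + gamma[4] * 1    # x'(1)=4, x'(2)=1  -> 9

vcost_x  = gamma[3] * phi(2) + gamma[2] * phi(1)   # ~ 30.6
vcost_xp = gamma[1] * phi(2) + gamma[4] * phi(1)   # ~ 27.4
print(cost_x, cost_xp, round(vcost_x, 2), round(vcost_xp, 2))
\end{verbatim}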
In Appendix~\ref{appx:beyond-uni} we discuss additional computational results and structural insights for other natural classes of distributions.
\section{Conclusion}
In this work we propose a natural principal-agent problem with hidden action and single-dimensional private types, which seems to strike a balance between applicability and tractability. We provide analogs to Myerson's result --- two characterizations, one for discrete and one for continuous types --- which we believe the agenda of CT $\times$ MD can build upon.
We provide two proofs of concept to show this direction is promising: (1)~a ``more positive'' computational result than the corresponding result in the multi-parameter model of \citet{guruganesh2020contracts}; and (2)~a non-trivial ``divide and conquer'' result, which suggests that the optimal monotone rule could actually be implementable (at least somewhat) generally, despite the fact that monotonicity is not a sufficient condition in our model.
An interesting question going forward is to verify or disprove that this approach works for all regular distribution functions, or all regular distributions under some additional regularity assumption on the contract setting. A natural candidate for the latter is to assume diminishing marginal returns, which we show results in a ``nice'' allocation rule (Theorem~\ref{thm:regular-implies-monotone}), and ensures all actions can be incentivized absent hidden types.
\section*{Acknowledgements}
The authors would like to thank Yingkai Li for his helpful comments, which contributed to the paper in general and to Section~\ref{sub:application2} in particular.
|
1,116,691,500,938 | arxiv | \section{Limits and Indications on $\nu$ Oscillations}
The discovery of neutrino masses or oscillations would take particle physics
beyond its Standard Model, and therefore requires very stringent standards of
proof and verification. Moreover, neutrino experiments are difficult, and their
history is littered with unconfirmed claims. Therefore, one must be cautious in
accepting new experimental results, and should demand that they fulfil stringent
credibility criteria. In my personal view~\cite{Helsinki}, these should
include confirmation by
more than one experiment, using more than one technique.
These criteria are obeyed by solar neutrino experiments, since five
experiments
(Homestake, Kamiokande, SAGE, GALLEX, Super-Kamiokande) see a deficit
using 3${1\over 2}$ different techniques (Cl, H$_2$O, Ga
with two extraction schemes)~\cite{Conrad}.
Now they are also obeyed by atmospheric neutrino experiments: five experiments
(Kamiokande, IMB, Super-Kamiokande, Soudan II, MACRO) see anomalies using two
different classes of technique (H$_2$O, tracking
calorimetry)~\cite{Conrad}. Therefore, I take
these results very seriously as evidence for new physics.
On the other hand, only one accelerator experiment (LSND) sees an
anomaly~\cite{LSND}, using
{\it a fortiori} just one technique (liquid scintillator).
Therefore, I prefer to adopt a wait-and-see attitude to this result,
eagerly awaiting its confirmation by another experiment such as
KARMEN or MiniBooNE~\cite{Conrad}.
In the general perception, the case for atmospheric neutrino oscillations
has recently leap-frogged over that of solar neutrinos.
This is largely because, in
addition to the sheer number of experiments reporting $\nu_\mu$ deficits, the
Super-Kamiokande Collaboration has reported dramatic effects in the
zenith-angle
distributions~\cite{SuperK}, where many systematic errors
cancel~\cite{EW}.
Moreover, both low- and high-energy data show compatible
effects, indicating that the $\nu_\mu/\nu_e$ ratio
decreases as $L/E$ increases, just as
expected if
$\nu_\mu$ oscillate into $\nu_\tau$ or perhaps a sterile neutrino $\nu_s$.
In the case of solar neutrinos, the overall deficit has been confirmed by
Super-Kamiokande with higher statistics~\cite{Totsuka}, but no comparable
``smoking gun" for
neutrino oscillations has yet appeared. There is a hint
of a day-night difference~\cite{Totsuka},
but its significance remains below two standard deviations, and there is also a
hint of a distortion of the energy spectrum~\cite{Totsuka}, but a
constant suppression is still
compatible with the data at the few-percent confidence level.
Before launching into the theory of neutrino masses, it is useful to review why
the oscillation hypothesis is being pursued to the exclusion of other possible
explanations. In the case of atmospheric neutrinos, most neutrino decay
scenarios are excluded~\cite{nodecay},
flavour-changing interactions with matter are highly
disfavoured~\cite{noFCNC}, and violations
of Lorentz invariance and the Principle of Equivalence
are disfavoured by the pattern of
zenith-angle distributions at low and high energies~\cite{LIPEOK}. In the
case of solar
neutrinos, the standard solar model is strongly supported by the
helioseismological data~\cite{helioseismo}, which do not allow substantial
changes in the solar
equation of state, and previous claims of a time dependence associated
with the solar cycle have not been established.
\section{Neutrino Masses}
If these are non-zero, they must be much smaller than those of the corresponding
charged leptons~\cite{PDG}:
\begin{equation}
m_{\nu_e} \lesssim 2.5~{\rm eV}~, \quad\quad
m_{\nu_\mu} \lesssim 160~{\rm keV}~, \quad\quad
m_{\nu_\tau} \lesssim 15~{\rm MeV}~,
\label{one}
\end{equation}
so one might think naively that they should vanish entirely. However, theorists
believe that particle masses can be strictly zero only
if there is a corresponding
conserved charge associated with an exact gauge symmetry, which is not the case
for lepton number. Indeed, non-zero neutrino masses appear generically in Grand
Unified Theories (GUTs)~\cite{Peccei}. However, it is not necessary to
postulate new particles
to get $m_\nu \not= 0$: these could be generated by a non-renormalizable
interaction among Standard Model particles~\cite{BEG}:
\begin{equation}
{(\nu_LH)~(\nu_LH)\over M}
\label{two}
\end{equation}
where $M \gg m_W$ is some new, heavy mass scale.
The most plausible guess, though,
is that this heavy mass is that of some heavy particle, perhaps a right-handed
neutrino $\nu_R$ with mass $M \sim M_{GUT}$.
In this case, one expects to find the characteristic see-saw~\cite{seesaw}
form of neutrino mass matrix:
\begin{equation}
(\nu_L , \nu_R)~~\left(\matrix{0&m\cr
m&M}\right)~~\left(\matrix{\nu_L\cr\nu_R}\right)
\label{three}
\end{equation}
where the off-diagonal matrix entries
in (\ref{three}) break SU(2) and have the form of
Dirac mass terms, so that one expects $m = {\cal O}(m_{\ell,q})$. Diagonalizing
(\ref{three}), one finds a light neutrino mass
\begin{equation}
m_\nu \simeq {m^2\over M}
\label{four}
\end{equation}
Choosing representative numbers $m \sim$ 10 GeV,
$m_\nu \sim 10^{-2}$ eV one finds
$M \sim 10^{13}$ GeV, in the general ballpark of the grand unification scale.
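This estimate is a one-line computation; the following snippet (ours, using the representative numbers just quoted) reproduces (\ref{four}).
\begin{verbatim}
# See-saw estimate (4): M ~ m^2 / m_nu with m ~ 10 GeV, m_nu ~ 1e-2 eV.
m_dirac_eV = 10.0e9                  # 10 GeV expressed in eV
m_nu_eV    = 1.0e-2                  # light neutrino mass in eV
M_eV = m_dirac_eV**2 / m_nu_eV
print(f"M ~ {M_eV / 1e9:.1e} GeV")   # ~ 1.0e+13 GeV
\end{verbatim}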
The past year has witnessed tremendous activity in the theoretical study of
neutrino masses~\cite{vast}, of which I now pick out just a few key
features: \\
\\
{\it Other light neutrinos?}: we know from the LEP neutrino-counting constraint:
$N_\nu = 2.994 \pm 0.011$~\cite{LEPEWWG}, that any additional neutrinos
must be sterile $\nu_s$,
with no electroweak interactions or quantum numbers.
But if so, what is to prevent
them from acquiring large masses: $m_s\nu_s\nu_s$ with $m_s \gg m_W$, as for the
$\nu_R$ discussed above? In the absence of some new theoretical superstructure,
this is an important objection to simply postulating light $\nu_s$ or $\nu_R$.\\
\\
{\it Majorana masses?}: most theorists expect the light neutrinos to be
essentially pure $\nu_L$, with only a small admixture ${\cal O}(m/M)$ of
$\nu_R$. In this
case, one expects the dominant effective neutrino mass
term to be of Majorana type
$m_{eff}\nu_L\nu_L$, as given by (\ref{two}) or (\ref{three}). \\
\\
{\it Large mixing?}: small neutrino mixing used perhaps to be favoured, by
analogy with the Cabibbo-Kobayashi-Maskawa mixing of quarks. However, theorists
now realize that this is by no means necessary. For one thing, the off-diagonal
entries in (\ref{three}) (now considered as a 3$\times$3 matrix) need not be
$\propto m_q$ or $m_\ell$~\cite{ELLN}. Then, even if $m\propto m_\ell$, we
have no independent
evidence that mixing is small in the lepton sector. Finally, even if $m$ were to
be approximately diagonal in the same flavour basis as
the charged leptons $e, \mu, \tau$,
why should this also be the same case for the heavy Majorana matrix
$M$~\cite{ELLN}?
Since $\sqrt{\Delta m^2_{atmo}} \sim 10^{-1}$ to $10^{-1{1\over 2}}$ eV $\gg
\sqrt{\Delta m^2_{solar}} \sim 10^{-2}$ to $10^{-2{1\over 2}}$ eV
(MSW solution~\cite{MSW}) or
$10^{-5}$~eV (vacuum solution), one may ask whether large neutrino mixing is
compatible with a hierarchy of neutrino masses. To feel more comfortable about
this possibility, consider the following very simple parametrization of the
inverse of a 2$\times$2 neutrino mass matrix~\cite{ELLN}:
\begin{equation}
m^{-1}_\nu \equiv \left(\matrix{b&d\cr d&c}\right) = d \left(\matrix{b/d & 1 \cr
1 & c/d}\right)
\label{five}
\end{equation}
Diagonalizing this, one finds mixing:
\begin{equation}
\sin^2 2\theta = {4 d^2\over (b-c)^2 + 4d^2}
\label{six}
\end{equation}
which is large if $\vert d\vert \gtrsim \vert b-c\vert$. However, this does not
require degeneracy of the two mass eigenvalues:
\begin{equation}
m_\pm = {2\over (b+c) \pm \sqrt{(b-c)^2+4d^2}}~,
\label{seven}
\end{equation}
since a large hierarchy can be obtained if $d^2 \sim bc$.
We see in Figs.~\ref{fig:1},~\ref{fig:2}~\cite{ELLN} that
large mixing $\sin^2\theta \gtrsim 0.8$ and a hierarchy $m_+/m_- \gtrsim 10$ of
neutrino masses can be reconciled for ``reasonable" values of the dimensionless
ratios in (\ref{five}), e.g., $b/d \sim $ 0.5, $c/d \sim$ 1.5. However, it would
be difficult to accommodate the extreme hierarchy
required by the vacuum solution
to the solar neutrino deficit in such a na\"\i ve approach.
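The quoted numbers are easily checked: the short computation below (ours) evaluates (\ref{six}) and (\ref{seven}) for $b/d\sim 0.5$, $c/d\sim 1.5$, taking $d=1$.
\begin{verbatim}
# Numerical illustration of eqs. (6)-(7): large mixing together with a
# mass hierarchy, for the values b/d = 0.5, c/d = 1.5 quoted above.
import math

b, c, d = 0.5, 1.5, 1.0
disc = math.sqrt((b - c)**2 + 4 * d**2)

sin2_2theta = 4 * d**2 / ((b - c)**2 + 4 * d**2)   # eq. (6): 0.8
m_plus  = 2 / ((b + c) + disc)                     # eq. (7)
m_minus = 2 / ((b + c) - disc)
# the relative sign of the eigenvalues can be absorbed in a Majorana phase
print(sin2_2theta, abs(m_minus / m_plus))          # 0.8 and ~ 18
\end{verbatim}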
\begin{figure}[htb]
\hglue4cm
\includegraphics[width=8.5cm]{JohnPanic99Fig1.eps}
\caption{\it Dependence of the neutrino mixing angle in the simple
two-flavour model (\ref{five}): note that one may find $\sin^2 \theta >
0.8$ for generic values of the matrix elements~\cite{ELLN}.}
\label{fig:1}
\end{figure}
\begin{figure}[htb]
\hglue4cm
\includegraphics[width=8.5cm]{JohnPanic99Fig2.eps}
\caption{\it Dependence of the ratio of neutrino mass
eigenvalues on the simple model (\ref{five}): note that a hierarchy
of more than an order of magnitude may be found for generic values
of the matrix elements, that may also give large $\sin^2
\theta$~\cite{ELLN}.}
\label{fig:2}
\end{figure}
There may also be significant enhancement of neutrino mixing by
renormalization-group effects between the GUT scale and the electroweak
scale~\cite{ELLN,RGE}. The
renormalization-group equation for the 2$\times$2 mixing angle $\theta$ is
\begin{equation}
16 \pi^2~{d\over dt}~(\sin^2 2\theta) = -2 (\sin^2 2\theta)~(\cos^2
2\theta)~(\lambda^2_3 - \lambda^2_2)~~{m_++m_-\over m_+ - m_-}
\label{eight}
\end{equation}
We see that $\theta$ can be enhanced if either the combination of Yukawa
couplings $(\lambda^2_3
- \lambda^2_2)$ is large or $(m_+-m_-)$ is small.
Fig.~\ref{fig:3}~\cite{ELLN} shows an example with
large Yukawa couplings corresponding to a large
value of the ratio of Higgs vev's
$\tan\beta$ in a supersymmetric model. We see that a renormalization-group
enhancement of $\sin^22\theta$ from $\lesssim 0.2$ at the GUT scale to $\gtrsim
0.9$ at the electroweak scale is quite possible.
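The mechanism is readily visualized with a toy numerical integration of (\ref{eight}). In the sketch below (ours) the whole combination $(\lambda^2_3-\lambda^2_2)(m_++m_-)/(m_+-m_-)$ is frozen to a constant $\kappa$; this is a crude assumption, and $\kappa$ and the running range are illustrative values, not fits.
\begin{verbatim}
# Toy Euler integration of the RG equation (8), with the bracket
# (lambda_3^2 - lambda_2^2)(m_+ + m_-)/(m_+ - m_-) frozen to kappa.
import math

def run_mixing(s2_gut, kappa, t_span=25.0, steps=100000):
    s2, dt = s2_gut, t_span / steps
    for _ in range(steps):
        # d(sin^2 2theta)/dt, using cos^2 2theta = 1 - sin^2 2theta
        s2 += dt * (-2.0 * s2 * (1.0 - s2) * kappa) / (16.0 * math.pi**2)
    return s2

print(run_mixing(0.2, kappa=-12.0))   # ~0.92: enhanced to near-maximal
\end{verbatim}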
\begin{figure}[htb]
\hglue3.5cm
\includegraphics[width=8.5cm]{JohnPanic99Fig3.eps}
\caption{\it Example of the possible renormalization (\ref{eight}) of the
neutrino mixing angle: note that it may be enhanced to $\sin^2 \theta >
0.8$ even if it is small at the GUT scale~\cite{ELLN}.}
\label{fig:3}
\end{figure}
Many theoretical models of neutrino masses are circulating, often based on
specific GUT models~\cite{GUTmodels} and/or global U(1) flavour
symmetries, which illustrate some
of the points made earlier. For example, in a flipped SU(5)
model~\cite{ELLN}, the Dirac neutrino mass matrix
\begin{equation}
m^D_\nu \propto \left(\matrix{ \epsilon & {\cal O}(1) & 0 \cr
\epsilon & {\cal O}(1) & 0 \cr
0 & 0 & {\cal O}(1) }\right)
\label{nine}
\end{equation}
in a first approximation, where $\epsilon$ is small,
so that $m^D_\nu$ is not $\propto m_q$ or
$m_\ell$. There are also SO(10) models~\cite{CELW} in which entries in the
quark and lepton mass matrices
have very different U(1) weightings, so that
lepton mixing does not parallel quark
mixing. Moreover, in U(1) models it is very
natural to find a heavy Majorana mass
matrix that is off-diagonal in the $e,\mu,\tau$ basis.
For example, in a 2$\times$2 model, if
the $\nu_R^{(i)}$ have U(1) charges $n_i$, then the heavy Majorana matrix
\begin{equation}
M_{ij} \sim\epsilon^{n_i+n_j}
\label{ten}
\end{equation}
where $\epsilon \ll 1$ is a U(1) hierarchy factor. Then, if $\vert n_1-n_2\vert
\ll \vert n_{1,2}\vert$, one finds
\begin{equation}
M_{ij} \propto \left(\matrix{0 & {\cal O}(1)\cr {\cal O}(1) & 0}\right)
\label{eleven}
\end{equation}
which is a potential source of large neutrino mixing.
In these GUT and U(1) frameworks, near-degeneracy of neutrino masses:
$\vert m_i -
m_j\vert \ll m_{i,j}$ looks rather implausible, so that one might expect
\begin{equation}
m_3 \sim \sqrt{\Delta m^2_{atmo}} \gg m_2 \sim
\sqrt{\Delta m^2_{solar}} \gg m_1
\label{twelve}
\end{equation}
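An order-of-magnitude check (ours; the $\Delta m^2$ values are representative choices within the windows quoted earlier, not fits) makes the hierarchy in (\ref{twelve}) explicit.
\begin{verbatim}
# Hierarchy (12): m_3 ~ sqrt(Dm2_atmo), m_2 ~ sqrt(Dm2_solar).
import math

m3 = math.sqrt(3e-3)     # ~ 0.055 eV, within the atmospheric window
m2 = math.sqrt(1e-5)     # ~ 0.0032 eV, within the MSW solar window
print(m3, m2, m3 / m2)   # ratio ~ 17: a genuine hierarchy
\end{verbatim}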
However, there are also models with non-Abelian symmetries~\cite{nonAb}
which predict
degenerate or near-degenerate neutrino masses.
Should one expect more than one large neutrino mixing angle? This seems very
likely: for example, in the flipped SU(5)
model~\cite{ELLN} that yields (\ref{nine}) for the Dirac
neutrino mass matrix, one also finds
\begin{equation}
M \sim \left(\matrix{ X & X & 0 \cr X & 0 & X \cr 0 & X & X}\right)
\label{thirteen}
\end{equation}
for the heavy Majorana mass matrix, where all the non-zero entries $X$ could be
comparable, and plausibly of order $10^{13\pm 1}$ GeV,
as required by the see-saw
mechanism~\cite{seesaw}. The small-angle MSW solution would then appear,
possibly, to be disfavoured.
Before leaving this section, it is useful to record the general form of the
3$\times$3 neutrino mixing matrix~\cite{numix}:
\begin{equation}
\left(\matrix{\nu_e\cr\nu_\mu\cr\nu_\tau}\right) =
\left(\matrix{ c_{12}c_{13} & c_{13}s_{12} & s_{13} \cr \cr
-c_{23}s_{12}e^{i\delta } - c_{12} s_{13}s_{23} & c_{12}c_{23} e^{i\delta} -
s_{12}s_{13}s_{23} & c_{13} s_{23} \cr\cr
s_{23}s_{12}e^{i\delta} - c_{12} c_{23} s_{13} & -c_{12} s_{23} e^{i\delta} - c_{23} s_{12}
s_{13} & c_{13} c_{23} }\right)
\left(\matrix{e^{i\alpha} & 0 & 0 \cr \cr
0 & e^{i\beta} & 0 \cr\cr 0 & 0 & 1}\right)
\left(\matrix{\nu_1\cr\cr\nu_2\cr\cr\nu_3}\right)
\label{forteen}
\end{equation}
which includes two CP-violating Majorana phases
$\alpha , \beta$ as well as three
mixing angles $\theta_{12}, \theta_{23}, \theta_{13}$ and one CP-violating phase
$\delta$ as in the quark case. Thus, a complete programme of neutrino physics
should aim at three masses, three mixing
angles and three phases. So far, we have
experimental hints about the possible magnitudes of two mass-squared differences
$\Delta m^2$, but not the overall neutrino mass scale. One mixing angle seems to
be large: $\theta_{23} \sim 45^\circ \pm 15^\circ$ (?)~\cite{SuperK} and
one small $\theta_{13}
\sim 0^\circ \pm 20^\circ$(?)~\cite{Chooz}, but the magnitude of
$\theta_{12}$ is still
unclear, and we have no information about
any of the phases. Indeed, the two Majorana
phases are essentially unobservable in
experiments at energies $E \gg m_\nu$,
though they do play a role in neutrinoless
double-$\beta$ ($\beta\beta_{0\nu})$ decay, as we discuss later.
\section{Neutrinos as Dark Matter?}
Let us set this possibility in context by first reviewing the density budget of
the Universe, in units $\Omega_? \equiv \rho_? /\rho_c$ of the critical density
$\rho_c\sim 10^{-29}$ gcm$^{-3}$.
Generic inflation models predict $\Omega_{total}
= 1 + {\cal O}(10^{-4})$, whereas the visible baryons
in stars, dust, etc., yield
$\Omega_{VB} \lesssim 0.01$. The success of Big-Bang Nucleosynthesis
calculations~\cite{BBN}
suggests that the overall baryon density $\Omega_B \sim 0.05$. This is not only
$\ll\Omega_{total}$ but even $\ll\Omega_m \sim 0.3$, the total mass density
inferred from observations of clusters of galaxies~\cite{clusters}.
Therefore the Universe must
contain plenty of invisible non-baryonic dark matter.
The astrophysical theory of structure formation suggests that most of the dark
matter is in the form of cold non-relativistic particles: $\Omega_{CDM} \gtrsim
0.2$~\cite{whyCDM}. However, this theory does not fit perfectly the combined data on large-scale
combined data on large-scale
structure and the fluctuations observed in the cosmic microwave background
radiation, as seen in Fig.~\ref{fig:4}~\cite{GS}. One possibility is to
supplement cold dark matter
with hot dark matter in the form of neutrinos:
\begin{equation}
\Omega_\nu \sim \sum_\nu \left({m_\nu\over 98~{\rm eV}}\right) h^{-2}
\label{fifteen}
\end{equation}
where $h$ parametrizes the present Hubble expansion rate: $H \equiv 100\,h$~km\,s$^{-1}$
Mpc$^{-1}$, $h \sim 0.7 \pm 0.1$. However, alternative modifications of the
minimal cold dark matter model are possible, such as one with a cosmological
constant: $\Omega_\Lambda\sim 0.7$, which would be consistent with inflation:
$\Omega_{total}\simeq 1$, the age of the Universe, and the new data on
high-redshift supernovae~\cite{SN}.
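For orientation, the following evaluation (ours) of (\ref{fifteen}) with $h=0.7$ shows the neutrino contribution for the mass scales discussed below.
\begin{verbatim}
# Rough evaluation of eq. (15) for h = 0.7.
h = 0.7
omega_nu = lambda sum_m_eV: (sum_m_eV / 98.0) / h**2
for m in (0.03, 0.3, 3.0):             # mass sum in eV
    print(m, round(omega_nu(m), 5))    # ~ 6e-4, 6e-3, 6e-2
\end{verbatim}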
\begin{figure}[htb]
\hglue3.5cm
\includegraphics[width=8.5cm]{JohnPanic99Fig4.eps}
\caption{\it Comparison of the available data on the
power $P(k)$ in the cosmic microwave
background (parallelograms) and on large-scale structure, compared
with the standard cold dark matter model (SCDM, solid line) with
$\Omega_m = 1$: although SCDM reproduces qualitatively the trends seen
in the data, it fails at large wave number $k$~\cite{GS}.}
\label{fig:4}
\end{figure}
The best one can probably say on the basis of present astrophysical and
cosmological data is that
\begin{equation}
m_\nu \lesssim 3~{\rm eV},
\label{sixteen}
\end{equation}
which is comparable to the direct limit (\ref{one}) on $m_{\nu_e}$. The next
generation of astrophysical and cosmological data will probably be sensitive to
$m_\nu \gtrsim 0.3$ eV~\cite{huetal}. Even $m_\nu \gtrsim 0.03$ eV may be
of cosmological
importance, but one would need to be very brave to claim astrophysical evidence
for a neutrino in the atmospheric neutrino mass range.
Could neutrinos be degenerate, with masses $\overline{m} \gtrsim 2$ eV and close
to the direct and astrophysical limits (\ref{one}),
(\ref{sixteen})~\cite{EL}? Any such
scenario would need to respect the stringent constraint imposed by the absence of
$\beta\beta_{0\nu}$ decay~\cite{betabeta}:
\begin{equation}
\langle m_\nu \rangle_e ~\simeq~ \overline{m} ~\vert c^2_{12} c^2_{13} e^{i \alpha} + s^2_{12}c^2_{13}
e^{i\beta} + s^2_{13}\vert \lesssim 0.2~{\rm eV}
\label{seventeen}
\end{equation}
In view of the upper limit on $\nu_\mu - \nu_e$ mixing from the Chooz
experiment~\cite{Chooz},
let us neglect provisionally the last term in (\ref{seventeen}). In this case,
there must be a cancellation between the first two terms, requiring
$\alpha\simeq\beta + \pi$, and
\begin{equation}
c^2_{12}-s^2_{12} = \cos 2\theta_{12} \lesssim 0.1 \Rightarrow \sin^2 2\theta_{12}
\gtrsim 0.99
\label{eighteen}
\end{equation}
Thus maximal $\nu_e-\nu_\mu$ mixing is necessary. This certainly excludes the
small-mixing-angle MSW solution and possibly even the large-mixing-angle MSW
solution, since this is not compatible with $\sin^2 2\theta = 1$ (which would
yield a constant energy-independent suppression of the solar neutrino flux), and
global fits typically indicate that $\sin^2 \theta_{12} \lesssim 0.97$, as
seen in Fig.~\ref{fig:15}~\cite{BKS}. Global fits before the new
Super-Kamiokande data on the energy
spectrum indicated that $\sin^2 2\theta \sim 1$ was possible for
vacuum-oscillation solutions. However, the new Super-Kamiokande analysis of the
energy spectrum now indicates~\cite{Totsuka} that, if there is any
consistent vacuum-oscillation
solution at all, it must have $\sin^2 2\theta$ considerably below 1, providing
another potential nail in the coffin of degenerate neutrinos.
\begin{figure}[htb]
\hglue4cm
\includegraphics[width=8.5cm]{JohnPanic99Fig5.eps}
\caption{\it Preferred region of $\sin^2 \theta$ and $\Delta m^2$
for the large-mixing-angle MSW solution to the solar neutrino
problem, both with (dashed contours) and without (grey contours)
the measured day-night asymmetry: note that $\sin^2 \theta <
0.97$~\cite{BKS}. }
\label{fig:15}
\end{figure}
The vacuum-oscillation solution would require extreme degeneracy:
$\Delta m \sim 10^{-10} \overline{m}$, which is impossible to reconcile with a
simple calculation of neutrino mass renormalization in models with degenerate
masses at the $m_{\nu_R}$ scale~\cite{EL}, as seen in Fig.~\ref{fig:6}.
Mass-renormalization effects
also endanger the
large-angle MSW solution (which would require $\Delta m \sim 10^{-4}
\overline{m}$), and, in the context of bimaximal mixing models, also generate
unacceptable values of the neutrino mixing angles.
These renormalization problems may not be insurmountable~\cite{otherRGE},
but
they do raise
non-trivial issues that must be addressed in
models of (near-) degenerate neutrino
masses~\cite{BRS}.
\begin{figure}[htb]
\hglue3.5cm
\includegraphics[width=8.5cm]{JohnPanic99Fig6.eps}
\caption{\it Renormalization of degenerate neutrino masses as
a function of the assumed Yukawa coupling $h$: note that the degeneracy
breaking is too large, except for very small values of $h$~\cite{EL}.}
\label{fig:6}
\end{figure}
\section{How to Discriminate Between Oscillation Scenarios?}
In the case of atmospheric neutrinos, one should consider {\it a priori} the
possibilities of $\nu_\mu\rightarrow\nu_e , \nu_\mu\rightarrow\nu_\tau$ and
$\nu_\mu\rightarrow\nu_s$ oscillations. The first of these is certainly not
dominant, as we have learnt from the Chooz~\cite{Chooz} and
Super-Kamiokande~\cite{SuperK,Totsuka} data. However,
$\nu_\mu\rightarrow\nu_e$ oscillations could be present at a subdominant level.
Future analyses should use a complete three-flavour framework
(\ref{forteen})~\cite{threeflavour}, in
which both $\nu_\mu\rightarrow\nu_e$ and $\nu_\mu\rightarrow\nu_\tau$
oscillations are allowed. As seen in
Fig.~\ref{fig:17}~\cite{threeflavour},
the proportion of $\nu_\mu \to
\nu_e$ oscillations could be quite substantial, particularly for
$3 \times 10^{-3}\, eV^2 \gtrsim \Delta \, m^2
\gtrsim 1 \times 10^{-3} \, eV^2$.
\begin{figure}[htb]
\hglue3cm
\includegraphics[width=8.5cm]{JohnPanic99Fig7.eps}
\caption{\it Three-flavour analysis of atmospheric neutrino data:
note that a 10 \% admixture of $\nu_\mu - \nu_e$ mixing cannot be
excluded~\cite{threeflavour}.}
\label{fig:17}
\end{figure}
Several tools to discriminate between dominant $\nu_\mu \to \nu_\tau$ and
$\nu_\mu \to \nu_s$ oscillations are available. One is $\pi^0$
production, which
is present in $\nu_\tau$ interactions, but absent for $\nu_\mu \to \nu_s$
oscillations. The present data from Super-Kamiokande
yield~\cite{Totsuka}:
\begin{equation}
(\pi^0 / e)_{obs} / (\pi^0 / e)_{MC} = 1.11 \pm 0.06 \pm 0.26
\label{twenty}
\end{equation}
where the Monte Carlo (MC) assumes oscillations
into neutrinos with conventional weak
interactions. This ratio would be $\lesssim 0.7$ for $\nu_\mu \to \nu_s$
oscillations. As seen in (\ref{twenty}), the data prefer $\nu_\mu \to \nu_\tau$
oscillations, and the statistical measurement error is relatively small, but
it is not possible to draw any definite conclusion at this
stage~\cite{Totsuka},
because of the large systematic error. This arises
from uncertainties in the $\pi^0$ production
cross section and the detector acceptance, which should soon
be reduced by data from the nearby detector in the K2K beamline, hopefully
enabling some definitive conclusion to be drawn.
A second tool is provided by the zenith-angle distributions for atmospheric
neutrino events, which differ between $\nu_\mu \to \nu_\tau$ and $\nu_\mu \to
\nu_s$ oscillations, because of matter effects in the latter case. As we heard
here~\cite{Totsuka}, preliminary measurements from Super-Kamiokande tend
to disfavour dominant
$\nu_\mu \to \nu_s$ at the $2-\sigma$ level, and it will be interesting to see
whether this trend is confirmed.
In the longer run, a third tool will be provided
by the neutral-current/charged-current event ratio in long-baseline neutrino
experiments, as discussed in the next section.
In the case of solar neutrinos, there are again three main analysis tools
available to Super-Kamiokande to help discriminate between the small- and
large-angle MSW and vacuum-oscillation solutions. One is provided by the
distortion of the energy spectrum.
Even without including the possibility of a big $hep$
contribution~\cite{bighep}, the large-angle MSW solution is very
consistent with the latest
Super-Kamiokande data, whereas the small-angle MSW solution is somewhat
restricted, and the vacuum-oscillation solution
appears almost excluded~\cite{Totsuka}. This is
because the range of $\sin^2 2\theta$ and $\Delta \, m^2$ favoured by the energy
spectrum has very little overlap with that
favoured by the overall suppression in
the rate.
The second tool is the day-night effect,
which may also now be showing up close to
the $2-\sigma$ level~\cite{Totsuka}. This also restricts the parameter
space of both the small-
and large-angle MSW solutions. In the former case, a possible signature is an
enhancement as neutrinos pass through the Earth's core, which is not apparent in
the data. No day-night effect is expected in the case of vacuum oscillations,
which may eventually turn into a problem if the current trend is confirmed.
A third tool that may soon supply some discriminating power is the seasonal
variation. In the case of the small-angle MSW solution, there should only be a
geometric effect, whereas a larger effect could appear in the other two cases,
particularly at high energies.
Currently there is a hint of a seasonal variation
in the Super-Kamiokande data~\cite{Totsuka}, but this is not yet ready to
discriminate between the different scenarios.
In the near future, important insight into the solar-neutrino problem will be
provided by the SNO measurement of the neutral-current/charged-current
ratio.
BOREXINO will also provide important input concerning the suppression of
intermediate-energy solar neutrinos. Another exciting possibility is
offered by
the KamLAND experiment, which can probe the
large-angle MSW solution directly in a
long-baseline reactor experiment.
Within a few years, we should find a definitive
resolution of the solar neutrino problem. In the case of atmospheric neutrinos,
this may require the input from the long-baseline
accelerator-neutrino experiments that we now discuss.
\section{Possible Long-Baseline Accelerator Neutrino Experiments}
In the previous sections, we have reviewed the various strong pieces of evidence
for possible new neutrino physics beyond the Standard Model, which are certainly
highly indicative of neutrino masses and oscillations. However, in the views of
many, it is necessary to use the controlled beams
provided by accelerators - whose
fluxes, energy spectra and flavour contents
are known and adjustable - to pin down
the interpretation of (in particular) the atmospheric-neutrino data, and to make
accurate measurements.
Two long-baseline accelerator-neutrino beams
have already been approved. The K2K
project extends over 250 km between KEK and the Kamioka
mine~\cite{K2K},
and has just
announced its first event in the Super-Kamiokande detector. This will be joined
in 2002 by the 730 km NuMI project sending a beam from Fermilab to the new
MINOS~\cite{MINOS}
detector in the Soudan mine. Under active discussion
in Europe is the NGS project~\cite{NGS} to send a
neutrino beam from CERN to the Gran Sasso laboratory, also some 730 km distant.
This has been recommended by CERN's Scientific Policy Committee,
and is likely to
be viewed favourably by the CERN Council if sufficient external resources can be
found. It could start taking data in 2005.
There is a substantial programme of work for these long-baseline
experiments. This
includes disappearance experiments, comparing the rates in nearby
and far detectors, as planned by K2K and MINOS. Also important are measurements
of the neutral-current to charged-current ratio, as also planned by K2K
and MINOS.
These should provide accurate measurements of $\Delta \, m^2$ and $\sin^2 \,
2\theta$ for $\nu_\mu \to \nu_e$ or $\nu_\mu \to \nu_s$ oscillations. The K2K
experiment is sensitive to about half of the region parameter space suggested by
Super-Kamiokande, and MINOS should cover essentially all of it.
MINOS should also
provide some information on $\nu_e$ appearance, though it is not optimized
for $e$ detection.
In my personal view, a key measurement will be that of $\nu_\tau$
appearance via $\tau$
production. Even if one accumulates many indirect indications that $\nu_\mu$
oscillate into $\nu_\tau$, direct proof is surely essential: ``If you have not
discovered the body, you have not proven the crime".
Remember Jimmy Hoffa: in the
absence of a body, it was impossible to
prove he had been murdered, let alone who
did it. Remember also the gluon: although there were prior indirect
arguments,
everybody remembers the observation of gluon jets~\cite{threejets} as the
``discovery" of the gluon.
The CERN-NGS beam is being optimized for $\tau$
production in a far detector~\cite{NGS}. The
$\tau$ event rate $\propto \sin^2 \, 2\theta (\Delta \, m^2)^2$, and should be
${\cal O}(10)$ per year in a kiloton detector if $\Delta m^2 \sim 3 \times 10^{-3} \,
eV^2$ as suggested by the Super-Kamiokande data. As seen in
Fig.~\ref{fig:8}~\cite{NGS}, either OPERA
or ICARUS should comfortably be able to detect $\tau$ production over all the
range of $\sin^2 \, 2\theta$ and $\Delta m^2$ indicated by Super-Kamiokande,
providing closure on the physics of atmospheric neutrinos~\cite{tauapp}.
\begin{figure}[htb]
\hglue4cm
\includegraphics[width=8.5cm]{JohnPanic99Fig8.eps}
\caption{\it Possible sensitivity of $\tau$-appearance experiments in the
proposed CERN-Gran Sasso long-baseline neutrino beam
(NGS)~\cite{NGS,tauapp}.}
\label{fig:8}
\end{figure}
\section{Possible Future Options}
What are the possibilities for the longer-term future?
Accelerator options under
consideration at CERN and elsewhere include linear $e^+ e^-$
colliders - a
first generation with $\lesssim 1$ TeV in the centre of mass~\cite{LC}, and
a
possible second
generation in the range of 2 to 5 TeV~\cite{CLIC} - a $\mu^+ \mu^-$
collider~\cite{MC,Yellow} - aiming
eventually at several TeV in the centre of mass, but with intermediate
lower-energy Higgs factory options -
and a possible future larger hadron collider
with $\gtrsim 100$ TeV in the centre of mass.
The most relevant option for this talk may be the other physics
possibilities of an
intense $\mu$ source.
How about stopped-$\mu$ physics with $\sim 10^{14} \mu \,
s^{-1}$? The present limits on $\mu \to e \gamma$ and $\mu N \to eN$ could be
improved by many orders of magnitude.
Or how about $\mu N$ scattering with $\sim$
20 GeV muons on a fixed target:
how would this `MULFE' compare with ELFE? Also, the rates
for $\nu N$ scattering with a nearby (polarized?)
target `NULFE' would be prodigious. At
CERN one could also envisage a $\mu p$
collider using the LHC beam. However, the
most interesting option might be (very-)long-baseline
neutrino physics using the
neutrinos produced by the decays of stored muons~\cite{Geer}, which need
not be brought into
collision. The $\mu$-decay neutrino beams are separated entirely in flavour and
charge, have a spectrum that is calculable to high precision, include equal
numbers of $\nu_\mu$ and $\nu_e$, and can easily be switched in
charge~\cite{DGH}.
We have therefore been led to propose a three-step scenario for muon storage
rings~\cite{Yellow}. The first would be a {\it $\nu$ factory}, using
$\mu$-decay neutrino beams
as the ``ultimate weapons" for $\nu$-oscillation studies. The second step would
comprise one or more {\it Higgs factories},
capable of producing Higgs resonances
directly in the $s$ channel, measuring their
total widths, restricting drastically, e.g., the MSSM parameter space,
and providing
a new window on CP violation in the
Higgs sector: the ``ultimate weapon" for Higgs
studies. The third step could be a {\it multi-TeV
$\mu^+ \mu^-$ collider}. This has advantages
over an $e^+ e^-$ collider in the same energy range,
provided by its reduced energy spread and
its more precise energy calibration. However, the
centre-of-mass energy may ultimately be
limited by the neutrino-induced radiation hazard~\cite{MC,King,Yellow}.
Any such programme of muon storage rings must
face many technical problems related to the proton
driver, the target, and capturing produced pions and muons.
In addition, muon colliders require
a large amount of beam cooling, and the $\nu$
radiation problem must be addressed before
progressing to a high-energy $\mu^+ \mu^-$ collider.
However, the physics of the first-step
$\nu$ factory is already very enticing, as we now discuss.
One might envisage $10^{14} \, p$ per cycle at
a rate of 15 Hz, producing close to $10^{21} \,
\mu^+ (\mu^-)$ per year, leading to $\nu_\mu +
\bar{\nu}_e (\bar{\nu}_\mu + \nu_e)$ beams with
fluxes of $\sim 2 \times 10^{20}$ per year.
These fluxes are so large that one could consider
very-long-baseline experiments with beams travelling several thousand
km~\cite{Geer,DGH,BCR,Yellow,Barger}: Fermilab to Gran
Sasso? CERN to Soudan? either or both to Kamioka or Beijing?
The sensitivities to $\Delta m^2$ and
$\sin^2 2\theta$ of such (very-)long-baseline experiments
have recently been studied in~\cite{DGH}. They vary as follows with
baseline $L$ and energy $E$:
\begin{eqnarray}
& \mbox{appearance} & \mbox{disappearance} \nonumber \\
\Delta m^2 : & E_{\mu}^{-1/2} & E_{\mu}^{-1/4} L^{-1/2} \nonumber \\
\sin^2 \, 2\theta: & LE_{\mu}^{-3/2} & L^{1/2}E_{\mu}^{-3/4}
\label{twentyone}
\end{eqnarray}
As seen here and in Fig.~\ref{fig:9}, very-long-baseline experiments may
actually not confer any
benefits for appearance and disappearance studies~\cite{DGH}.
However, the long-baseline experiments
already offer considerable improvements over the
sensitivities of current atmospheric-neutrino
experiments. Moreover, as seen in Fig.~\ref{fig:10}, very-long-baseline
experiments may offer a better
window on CP-violation effects in $\nu$-oscillation studies~\cite{DGH}.
Beams from $\mu$ storage rings
could be used to compare $\nu_\mu \to \nu_e$ oscillations with the $T$-reversed
$\nu_e \to \nu_\mu$ process as well as the
CP-conjugate process $\bar{\nu}_\mu \to \bar{\nu}_e$
(not to mention $\bar{\nu}_e \to \bar{\nu}_\mu$).
Thus, one may begin to dream of the Holy
Grail of $\nu$-oscillation studies,
the exploration of CP violation in the neutrino sector~\cite{Tanimoto}.
This could be connected indirectly with the baryon asymmetry of the
Universe via a leptogenesis scenario~\cite{leptogen}.
It used to be thought that neutrinos could constitute
the dark matter: it would be ironic if
they gave birth to the visible matter.
\begin{figure}[htb]
\hglue1.5cm
\includegraphics[width=12cm]{JohnPanic99Fig9.eps}
\caption{\it The sensitivities of long-baseline neutrino experiments
using beams from a muon storage ring used as a neutrino
factory~\cite{DGH}: (a)
to search for mixing between the first- and third-generation
neutrinos via appearance (left lines) and disappearance (right lines) for
$\theta_{23} = 45^o$ (solid lines) and $30^o$ (dashed lines),
assuming a baseline of 732~km, and (b)
to search for mixing between the second- and third-generation neutrinos
via appearance (dashed lines) and disappearance (solid lines), assuming
the indicated beam lengths. The boxes represent current indications and
limits.}
\label{fig:9}
\end{figure}
\begin{figure}[htb]
\hglue4cm
\includegraphics[width=8.5cm]{JohnPanic99Fig10.eps}
\caption{\it Sensitivity of (very-)long-baseline experiments
to CP violating effects in neutrino oscillations~\cite{DGH}.}
\label{fig:10}
\end{figure}
\section{Prospects}
Neutrino physics appears finally to be
leading particle physics beyond the straitjacket of the
Standard Model. The wealth of new data --
particularly from Super-Kamiokande~\cite{SuperK,Totsuka} -- is highly
suggestive
of neutrino masses and oscillations, for both
solar and atmospheric neutrinos. In both cases,
some definitive experiments are at hand.
In the case of solar neutrinos, these include SNO (to
see if B neutrinos have oscillated into some other flavour), BOREXINO (to see
if Be neutrinos have oscillated strongly), and KamLAND (to test the
large-mixing-angle MSW hypothesis using the
known flux of reactor neutrinos). Meanwhile, Super-Kamiokande is
progressing towards decisive
measurements of the spectrum distortion,
the day-night effect and the seasonal variation of the
solar neutrino flux. In the case of atmospheric neutrinos,
$\pi^0$ production and the
zenith-angle distribution may soon provide decisive discrimination between the
$\nu_\mu\rightarrow\nu_\tau$ and $\nu_\mu\rightarrow\nu_s$
scenarios. In this case, the definitive
measurements will be made by long-baseline neutrino
beams from accelerators, starting with K2K.
These have an extensive programme of work ahead of
them, including measurements of $\nu_\mu$
disappearance and the neutral current/charged current
ratio, as well as $\nu_e$ and $\nu_\tau$
appearance experiments. The detailed measurements
possible with controlled accelerator beams will
dissipate any remaining doubts about the
interpretation of the atmospheric neutrino experiments.
In the longer run, the concept of a neutrino
factory based on a muon storage ring offers the
prospect of a complete set of oscillation
measurements with separated neutrino flavours and
charges, including the possibility of very-long-baseline experiments and a
quest for CP
violation. This option also offers other
exciting opportunities in $\mu$ and $\nu$ physics, as
well as serving as a stepping-stone towards
Higgs factories and a high-energy $\mu^+\mu^-$
collider. As never before, neutrino physics is
entering, and perhaps diverting, the mainstream
of particle physics.
|
1,116,691,500,939 | arxiv | \section*{Introduction}
The von Renesse-Sturm theorem (see \cite{sturm-vonrenesse}) ensures that a Wasserstein distance contraction property between solutions to the heat equation on a Riemannian manifold is equivalent to a lower curvature condition. This result is one of the first equivalence results relating the Wasserstein distance and a curvature condition. Recent works have been devoted to a more precise curvature-dimension condition instead of a sole curvature condition. In this work, and in a fairly general framework, we derive new {\it dimensional} contraction properties under a curvature-dimension condition and we show that they are all equivalent to it.
\medskip
Let $\Delta$ be the Laplace-Beltrami operator on a smooth Riemannian manifold $(\ensuremath{\mathbf{M}}, \mathcal G)$ and let $(P_t f)_{t \geq 0}$ be the solution to the heat equation $\partial_tu=\Delta u$ with $f$ as the initial condition. Many of the coming notions and results have been considered in a more general setting, but for simplicity in the introduction we focus on this case. The Bochner identity states that
$$
\frac{1}{2}\Delta|\nabla f|^2-\nabla f\cdot\nabla \Delta f=|\nabla\nabla f|^2+\mathrm{Ric}(\nabla f,\nabla f)
$$
where $\mathrm{Ric}$ is the Ricci curvature of $(\ensuremath{\mathbf{M}},\mathcal G)$. The manifold associated with its Laplacian is said to satisfy the $CD(R,m)$ curvature-dimension condition if its Ricci curvature is uniformly bounded from below by $R \in \mathbb R$ and its dimension is smaller than $m \in (0, + \infty]$. In this case
\begin{equation}
\label{eq-bochner}
\frac{1}{2}\Delta|\nabla f|^2-\nabla f\cdot\nabla \Delta f\geq \frac1m(\Delta f)^2+R|\nabla f|^2
\end{equation}
by the Cauchy-Schwarz inequality.
The $CD(R,m)$ condition and~\eqref{eq-bochner} are the starting point of many comparison theorems, functional and geometrical inequalities, bounds on the heat kernel, etc. (see e.g.~\cite{bgl-book,EKS13,villani-book2,wang-book}).
\medskip
In this work we focus on the link between the curvature-dimension condition and Wasserstein distance contraction properties of the heat semigroup.
The von Renesse-Sturm theorem \cite{sturm-vonrenesse} states that: the $CD(R,\infty)$ condition holds if and only if
\begin{equation}
\label{eq-premiere}
W_2^2(P_tfdx,P_tgdx)\leq e^{-2Rt}W_2^2(fdx,gdx)
\end{equation}
for all $t \geq 0$ and probability densities $f,g$ with respect to the Riemannian measure $dx$. Here $W_2$ is the Wasserstein distance with quadratic cost.
There are many proofs of this result as well as extensions to more general evolutions and spaces, see for instance~\cite{ambrosio-gigli-savare,bgl-15,bgl-book,gko13,kuwada10,otto05,wang-book,wang11}. Following the seminal papers~\cite{LV,sturm}, attention has been drawn to taking the {\it dimension} of the manifold into account.
A first way of including the dimension is to use {\it two different times} $s$ and $t$ in the inequality~\eqref{eq-premiere}. It is proved in~\cite{bgl-15,kuwada15} that the $CD(0,m)$ condition implies
\begin{equation}
\label{eq-cas-simple}
W_2^2(P_sfdx,P_tgdx)\leq W_2^2(fdx,gdx)+2m(\sqrt{t}-\sqrt{s})^2
\end{equation}
for all $s,t\geq0$ and all probability densities $f,g$.
A non zero lower bound on the curvature and the equivalence have been further considered in~\cite{EKS13,kuwada15}:
\begin{itemize}
\item In~\cite{kuwada15}, the fourth author proved that the $CD(R,m)$ condition holds if and only if
\begin{equation}
\label{eq-last}
W_2^2(P_t fdx,P_sgdx)\leq A(s,t,R,m) W_2^2( fdx,gdx)+B(s,t,m,R)
\end{equation}
for all $s,t\geq0$ and all probability densities $f,g$, and for appropriate positive functions $A, B.$
\item In~\cite{EKS13}, the authors proved that the $CD(R,m)$ condition holds if and only if
\begin{multline}
\label{eq-eks}
s_{\frac Rm}\left(\frac 12 W_2(P_tf dx,P_sg dx)\right)^2
\leq e^{-R(t+s)}\,s_{\frac Rm}\left(\frac 12 W_2(fdx,gdx)\right)^2
\\
+\frac mR(1-e^{-R(s+t)})\frac{(\sqrt{t}-\sqrt{s})^2}{2(t+s)}
\end{multline}
for all $s,t\geq0$ and all probability densities $f, g$. Here $s_r(x)=\sin(\sqrt{r}x) / \sqrt{r}$ if $r>0$, $s_r(x)=\sinh(\sqrt{\vert r \vert}x) / \sqrt{\vert r \vert}$ if $r<0$ and $s_0(x)=x$, hence recovering~\eqref{eq-cas-simple} when $R=0$. Both inequalities~\eqref{eq-last} and~\eqref{eq-eks} are extensions of~\eqref{eq-premiere} and~\eqref{eq-cas-simple}, taking the dimension into account.
\end{itemize}
\medskip
Contraction properties with the {\it same time} have been derived in~\cite{bgg28} for the Euclidean heat equation in $\dR^m$, and then extended by the third author in~\cite{G15} to a compact Riemannian manifold. Let $\ent{dx}{h} = \int h \, \log h \, dx$ be the entropy of a probability density $h$. Then the $CD(R,m)$ condition implies
$$
W_2^2(P_t fdx,P_t gdx)\leq e^{-2R t}\,W_2^2(fdx,gdx)\\-\frac{2}{m}\int_0^t \! e^{-2R(t-u)}\PAR{\ent{dx}{P_u g}-\ent{dx}{P_u f}}^2du
$$
for all $t \geq 0$ and all $f$, $g$ probability densities.
This bound has also been proved in~\cite{bgg28} for the Markov transportation distance instead of the $W_2$ distance. This distance differs from $W_2$ and has actually been tailored to Markov semigroups and the Bakry-\'Emery $\Gamma_2$ calculus. Dimensional contraction properties for a Wasserstein distance defined with an adapted cost have also been derived in~\cite{wang11}.
\bigskip
In this paper we derive diverse {\it same time} contraction inequalities under a general $CD(R,m)$ curvature-dimension condition, and in fact prove that they are all {\it equivalent} to this condition. The results and the proof will be given in the two settings of a smooth Riemannian manifold and of a more general metric measure space, more precisely in the setting introduced in~\cite{AGS_BE} of a Riemannian energy measure space.
The paper is organized as follows. In Section~\ref{sec-main-result}, we state and explain the context of our main result, Theorem~\ref{thm-legros}. In Section~\ref{sec-proofdebut}, we prove the easier implications, leaving the main issue aside: from the weakest contraction to the curvature-dimension condition. Some arguments in this section require a detailed formulation given in Section~\ref{sec-mms} below : thus they are only outlined there and complemented in Section~\ref{subsec:3-4}. In Section~\ref{sec-strategy}, we present the strategy of our proof, motivated by the elementary gradient flow approach in Euclidean space. The result is proved on a Riemannian manifold in Section~\ref{sec-riemannian}, and on a Riemannian energy measure space in Section~\ref{sec-mms}. The general strategy is the same in both settings, and it could seem redundant to give both proofs. However the proof in the Riemannian setting is rather simpler, presents the most important steps of the argument and thus gives a way to get it in a more general space. We believe that it is an opportunity to emphasize, in our example, the main issues arising in transferring a proof in the Riemannian setting to the abstract measure space setting. Indeed, there, regularity is no more available ``for free'', and our proof will crucially use a whole panel of powerful tools developed by L.~Ambrosio, N.~Gigli, G.~Savar\'e, K.-T.~Sturm and coauthors to overcome this difficulty, in particular localization and mollification by semigroup.
The last section gives a new and simple derivation of a classical entropy-energy inequality, as well as dimensional HWI inequalities: for this we start from our contraction inequalities instead of the curvature-dimension condition, as in earlier works.
\section{Main result}
\label{sec-main-result}
Our main theorem states that, in a quite general framework, a curvature-dimension condition is equivalent to same time Wasserstein distance contraction inequalities.
\medskip
Let $(\ensuremath{\mathbf{X}},d)$ be a Polish metric space, $\mathcal P(\ensuremath{\mathbf{X}})$ be the set of Borel probability measures on $\ensuremath{\mathbf{X}}$ and
$\mathcal P_2(\ensuremath{\mathbf{X}})$ be the set of all $\mu\in\mathcal P(\ensuremath{\mathbf{X}})$ such that $\int d(x_0,x)^2 \, d\mu(x)<\infty$ for some $x_0\in \ensuremath{\mathbf{X}}.$
The (quadratic) Wasserstein distance between $\nu_1$ and $\nu_2$ in $\mathcal P_2(\ensuremath{\mathbf{X}})$ is defined by
$$
W_2(\nu_1,\nu_2)=\inf_{\pi} \sqrt{\iint d(x,y)^2 \, d\pi(x,y)}
$$
where the infimum runs over all probability measures $\pi$ on $\ensuremath{\mathbf{X}}\times \ensuremath{\mathbf{X}}$ with marginals $\nu_1$ and $\nu_2$.
A fundamental tool is the Kantorovich dual representation :
for $\nu_1,\nu_2\in\mathcal P_2(\ensuremath{\mathbf{X}})$,
\begin{equation}
\label{eq-kanto}
\frac{W_2^2(\nu_1,\nu_2)}{2}=\sup_{\psi} \Big\{ \int Q\psi \, d\nu_1-\int\psi \, d\nu_2\Big\}.
\end{equation}
Here the supremum runs over all bounded Lipschitz functions $\psi$ (in this case Theorem~5.10 in~\cite{villani-book2} can be extended to Lipschitz instead of continuous functions, see~\cite[Rmk.~3.6]{kuwada10}) and $Q\psi$ is the inf-convolution of $\psi,$ defined on $\ensuremath{\mathbf{X}}$ by
$$
Q\psi(x)=\inf_{y\in \ensuremath{\mathbf{X}}}\Big\{\psi(y)+\frac{d(x,y)^2 }{2}\Big\}.
$$
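For intuition, the inf-convolution is straightforward to evaluate numerically; the following minimal Python sketch (ours, under the simplifying assumption $\ensuremath{\mathbf{X}}=[0,1]$ with the Euclidean distance, discretized on a grid) computes $Q\psi$ by brute force.
\begin{verbatim}
# Sketch: Q psi(x) = inf_y { psi(y) + d(x,y)^2 / 2 } on a 1-D grid
# (assumption: X = [0,1] with Euclidean distance; O(n^2) brute force).
import numpy as np

def inf_convolution(psi, grid):
    diff2 = (grid[:, None] - grid[None, :]) ** 2       # d(x, y)^2
    return np.min(psi[None, :] + diff2 / 2.0, axis=1)  # inf over y

grid = np.linspace(0.0, 1.0, 201)
psi  = np.abs(grid - 0.5)              # a bounded Lipschitz test function
Qpsi = inf_convolution(psi, grid)
print(float(Qpsi.min()), float(Qpsi.max()))  # Q psi <= psi pointwise
\end{verbatim}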
The Wasserstein space $(\mathcal P_2 (\ensuremath{\mathbf{X}}), W_2)$ is described in the reference books~\cite{ambrosio-gigli-savare} and~\cite{villani-book2}.
We shall define the entropy $\ent{\mu}{f}$ of a probability density with respect to a (finite or not) measure $\mu$ by $\ent{\mu}{f} = \int f \, \log f \, d\mu$ if $f \vert \log f \vert \in \mathbb{L}^1 (\mu)$ and $\infty$ otherwise.
\medskip
Our result will be stated in the two settings of a Riemannian Markov triple $(\ensuremath{\mathbf{M}},\mu,\Gamma)$ ($RMT$ in short), and a Riemannian energy measure space $(\ensuremath{\mathbf{X}},\tau,\mu, \mathcal{E} )$ ($REM$ in short). These settings will be described in detail in Sections~\ref{sec-riemannian} and~\ref{sec-mms} respectively. A $REM$ space is a particular metric measure space, developed in~\cite{AGS_BE}. A $RMT$ is a smooth Riemannian manifold equipped with a weighted Laplacian (see~\cite{bgl-book}) and is a particular example of $REM$ space.
Even if a $RMT$ is a $REM$ space we prefer to state and prove our result in both settings since the argument is a little simpler in the Riemannian case. We also believe that it emphasizes the main difficulties when generalizing a result from a smooth setting to an abstract metric measure space. In both spaces, $(P_t)_{t\geq0}$ denotes the associated Markov semigroup. It is defined through the weighted Laplacian in the $RMT$ case, and through the Dirichlet form in the $REM$ case.
The $CD(R,m)$ curvature-dimension condition is defined using the Bochner inequality~\eqref{eq-bochner} in a Riemannian manifold and in a weak form in a metric measure space (see Definitions~\ref{def-cd} and \ref{def-weak-cd}).
Recall finally that for $r\in\dR$ the map $s_{r}$ is defined on $\mathbb R$ by
$$
s_{r}(x)=
\left\{
\begin{array}{ll}
\displaystyle \sin(\sqrt{r}\,x) / \sqrt{r} \,\,& {\rm if}\,\, r>0\\
\displaystyle \sinh(\sqrt{|r|}\,x) / \sqrt{|r|} \,\,& {\rm if}\,\, r<0\\
x\,\,& {\rm if}\,\, r=0.
\end{array}
\right.
$$
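The map $s_r$ is elementary to evaluate, and a direct transcription (ours) may help the reader experiment numerically with the bounds of Theorem~\ref{thm-legros} below.
\begin{verbatim}
# Direct transcription of the map s_r above (scalar input).
import math

def s(r, x):
    if r > 0:
        return math.sin(math.sqrt(r) * x) / math.sqrt(r)
    if r < 0:
        return math.sinh(math.sqrt(-r) * x) / math.sqrt(-r)
    return x

# sanity check: s_r(x) = x - r x^3/6 + O(x^5), hence s_r(x) ~ x near 0
for r in (-1.0, 0.0, 1.0):
    print(r, s(r, 1e-3))
\end{verbatim}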
\begin{ethm}[Equivalence between contractions and $CD(R,m)$ condition]
\label{thm-legros}
~
Consider a $RMT$ or $REM$ space as in Sections~\ref{sec-riemannian} and~\ref{sec-mms}, with (finite or not) reference measure~$\mu$ and associated semigroup $(P_t)_{t\geq 0}$. Let $R\in\dR$ and $m>0$. Then the following properties are equivalent:
\begin{enumerate}
\item the $CD(R,m)$ (or weak $CD(R,m)$ in a $REM$ space) curvature-dimension condition holds;
\item for any $t\geq0$ and any probability densities $f,g$ with respect to $\mu$,
\begin{multline}
\label{eq-contraction-sh}
s_{\frac Rm}\left(\frac 12 W_2(P_tf\mu,P_tg\mu)\right)^2\leq e^{-2Rt}\,s_{\frac {R}{m}}\left(\frac {1}{2} W_2(f\mu,g\mu)\right)^2
\\
- 2 m \int_0^t e^{-2R(t-u)}\sinh^2 \Big( \frac{\ent{\mu}{P_uf} - \ent{\mu}{P_ug}}{2m}\Big) du;
\end{multline}
\item for any $t\geq0$ and any probability densities $f,g$ with respect to $\mu$,
\begin{equation}
\label{eq-contraction-square-general-2}
W_2^2(P_t f\mu,P_tg\mu) \leq e^{-2Rt}W_2^2(f\mu, g\mu)
- \frac{2}{m}\int_0^te^{-2R(t-u)} \left( \ent{\mu}{P_uf} - \ent{\mu}{P_ug} \right)^2 du.
\end{equation}
\end{enumerate}
\end{ethm}
See Theorems~\ref{thm-main} and~\ref{thm:main_mms}
for a more precise framework of Theorem~\ref{thm-legros}.
\smallskip
A bound with the same additional term as in (ii) has also been derived in~\cite{BGG15} for some specific instances of symmetric Fokker-Planck equations in $\dR^m$, for which the generator only satisfies a $CD(R,\infty)$ condition. Combined with a deficit in the Talagrand inequality, it has led to refined convergence estimates on the solutions.
The next section presents the easiest part of the proof of Theorem~\ref{thm-legros}; more precisely, $(\rm i)\Rightarrow (\rm ii)$ and an outline of $(\rm ii) \Rightarrow (\rm iii)$, including the key Proposition~\ref{prop-3-4}. The full proof of $(\rm ii) \Rightarrow (\rm iii)$ requires some knowledge of the spaces and will be completed in Section~\ref{subsec:3-4}. Sections~\ref{sec-proofdebut} to~\ref{sec-mms} (except Section~\ref{subsec:3-4}) are dedicated to the more difficult implication $(\rm iii)\Rightarrow (\rm i)$, in both $RMT$ and $REM$ spaces.
\section{Proof of Theorem~\ref{thm-legros}: first implications}
\label{sec-proofdebut}
\subsubsection*{Proof of $(\rm i)\Rightarrow (ii)$}
In~\cite{EKS13} M. Erbar, K.-T. Sturm and the fourth author of this paper have proved an {\it Evolutional variational inequality} (EVI in short) in the $REM$ spaces. Let $f,g$ be probability densities with respect to $\mu$ and let $U_m = \exp ( - \ent{\mu}{\cdot}/m )$. Then, under the weak $CD(R,m)$ condition,
\begin{equation}\label{eq-evi-eks}
\frac{d}{dt} s_{\frac Rm}\left(\frac12 W_2(P_tf\mu,g\mu) \right)^2 + R \, s_{\frac Rm}\left(\frac12 W_2(P_tf\mu, g\mu) \right)^2 \leq \frac{m}{2} \left( 1 - \frac{U_m(g)}{U_m(P_t f)} \right).
\end{equation}
But it is classical, see e.g.~\cite{ambrosio-gigli-savare}, how to deduce a contraction property in $W_2$ distance between solutions $(P_tf)_{t \geq 0}$ and $(P_tg)_{t \geq 0}$ from an EVI: one applies the EVI to the curve $(P_tf)_{t \geq 0}$ and $P_sg$ for a given $s$, and then (with the time variable $s$) to the curve $(P_sg)_{s \geq 0}$ and $P_tf$ for a given $t$; then one adds both inequalities,
takes $t=s$ and integrates in time. Then one obtains (ii). To sum up, it turns out that the EVI~\eqref{eq-evi-eks} not only leads to the property~\eqref{eq-eks}, as observed in~\cite{EKS13}, but also to the {\it same-time} contraction property~(ii).
\subsubsection*{Outline of the proof of $(\rm ii)\Rightarrow (iii)$}
We first observe that $\sinh^2(x)\geq x^2$ for any $x$, so (ii) implies the same bound with $\sinh^2(x)$ replaced by $x^2$ in the integral. Then the
implication $(\rm ii)\Rightarrow (iii)$ is essentially a consequence of the following result, which we prove in the general context of a geodesic space.
\begin{eprop}
\label{prop-3-4}
Let $( Y, d_Y )$ be a geodesic metric space, $U : Y \to ( - \infty , \infty ]$ and
$\varphi_t : Y \to Y$ $(t \ge 0)$ a one-parameter family of maps.
Suppose that $t \mapsto \varphi_t (y)$ is continuous for all $y \in Y$ and
$U ( \varphi_t (y) ) \in \dR$ for all $t > 0$ and $y \in Y.$
Suppose also that for $y_0 , y_1 \in Y$ and $t > 0$,
\begin{multline}
\label{contrtt2}
s_{\frac Rm} \left(
\frac12 d_Y (\varphi_t (y_0) , \varphi_t (y_1) )
\right)^2
\leq
e^{-2Rt}\,
s_{\frac {R}{m}} \left(
\frac {1}{2} d_Y ( y_0 , y_1 )
\right)^2
\\
- \frac{1}{2 m} \int_0^t
e^{-2R(t-u)}
\PAR{
U ( \varphi_u (y_0) ) - U ( \varphi_u (y_1) )
} ^2
du.
\end{multline}
Then
\begin{equation*}
d_Y (\varphi_t (y_0) , \varphi_t (y_1) )^2
\leq
e^{-2Rt}\, d_Y ( y_0 , y_1 )^2
\\
- \frac{2}{m} \int_0^t
e^{-2R(t-u)}
\PAR{
U ( \varphi_u (y_0) ) - U ( \varphi_u (y_1) )
} ^2
du.
\end{equation*}
\end{eprop}
\begin{Proof}
We adapt the argument of \cite[Prop.~2.22]{EKS13}. Let $( y_s )_{s \in [ 0 , 1 ]}$ be a geodesic from $y_0$ to $y_1$ in $Y$, and let $t>0$ be fixed. For any $n$ and $1 \leq i \leq n, $ let $x_i^n = d_Y ( \varphi_t ( y_{(i-1)/n} ), \varphi_t ( y_{i/n} ) )$. Then
$$
d_Y ( \varphi_t (y_0) , \varphi_t (y_1) )^2
\leq
\Big( \sum_{i=1}^n x_ i^n \Big)^2
\leq
n \sum_{i=1}^n (x_i ^n)^2
$$
for any $n.$ In particular
$$
d_Y ( \varphi_t (y_0) , \varphi_t (y_1) )^2
\leq \limsup_{n \to \infty} n \sum_{i=1}^n (x_i ^n)^2.
$$
Now, by neglecting the second term on the right-hand side of~\eqref{contrtt2} and by the geodesic property,
$$
s_{\frac{R}{m}} \Big(\frac{x_i ^n}{2} \Big) \leq e^{-Rt} \, s_{\frac{R}{m}} \Big( \frac{1}{2} d_Y (y_{(i-1)/n} , y_{i/n}) \Big) = e^{-Rt} \, s_{\frac{R}{m}} \Big( \frac{1}{2n} d_Y ( y_0, y_1) \Big).
$$
It follows, as in \cite[(2.32)]{EKS13}, that there exists a constant $c$ such that $x_i^n \leq c/n$ for large $n$ and any $1 \leq i \leq n.$ Moreover $ s_{\frac{R}{m}} (x)^2 = x^2- R x^4/(3m)+O(x^6)$ as $x$ tends to $0$, so that
\begin{equation}\label{lemmaDLs}
\limsup_{n \to \infty}n\sum_{i=1}^n (x_i^n)^2= 4 \limsup_{n \to \infty}n\sum_{i=1}^n s_{\frac{R}{m}}(x_i^n/2)^2.
\end{equation}
Indeed, by the bound $x_i^n \leq c/n$, each term satisfies $4 \, s_{\frac{R}{m}}(x_i^n/2)^2=(x_i^n)^2+O(n^{-4})$ uniformly in $1\leq i\leq n$, so that the two sums above differ by at most $n\sum_{i=1}^n O(n^{-4})=O(n^{-2})$.
As a consequence
\begin{align*}
d_Y ( \varphi_t ( y_0 ) , \varphi_t ( y_1 ) )^2
& \leq
4 \limsup_{n \to \infty}
n \sum_{i=1}^{n} s_{\frac{R}{m}}
\left(
\frac12 d_Y ( \varphi_t ( y_{(i-1)/n} ) , \varphi_t ( y_{i/n} ) )
\right)^2
\\
& \leq
4 \limsup_{n \to \infty} \Bigg(
n \sum_{i=1}^n e^{-2Rt}\,
s_{\frac {R}{m}} \left(
\frac {1}{2} d_Y ( y_{(i-1)/n} , y_{i/n} )
\right)^2
\\
& \hspace{4em}
- \frac{1}{2m} \int_0^t
e^{-2R(t-u)}
n \sum_{i=1}^n
\PAR{
U ( \varphi_u ( y_{(i-1)/n} ) ) - U ( \varphi_u ( y_{i/n} ) )
}^2
du
\Bigg)
\end{align*}
by assumption~\eqref{contrtt2}.
Then the conclusion follows from this estimate
by using~\eqref{lemmaDLs} with $d_Y ( y_{(i-1)/n} , y_{i/n})$ in place of $x_i^n$ in the first term, and, in the second term, the Cauchy-Schwarz inequality $n\sum_{i=1}^n a_i^2\geq\big(\sum_{i=1}^n a_i\big)^2$ applied to $a_i = U ( \varphi_u ( y_{(i-1)/n} ) ) - U ( \varphi_u ( y_{i/n} ) )$, whose telescoping sum equals $U ( \varphi_u ( y_0 ) ) - U ( \varphi_u ( y_1 ) )$.
\end{Proof}
Let us return to our case.
As stated before, we give an outline and
leave the rigorous argument to Section~\ref{subsec:3-4}.
We apply Proposition~\ref{prop-3-4} for $( Y , d_Y ) = ( \mathcal{P}_2 (\ensuremath{\mathbf{X}}) , W_2 )$.
Under (ii), we can extend the action of the semigroup $P_t$ to probability measures.
Then, $\varphi_t = P_t$ fulfills all the assumptions of Proposition~\ref{prop-3-4}
with $U = \entf{\mu}$.
This ensures that (ii) implies~(iii).
\medskip
To sum up, (i) implies (ii), and (ii) implies (iii)
(with the aid of an additional argument in Section~\ref{subsec:3-4}).
Thus, it remains to prove that, conversely, (iii) implies (i).
\section{Strategy of the converse proof}
\label{sec-strategy}
The proof of (iii) $\Rightarrow$ (i) will be given in the two cases of a $RMT$ (in Section~\ref{sec-riemannian}) and a $REM$ space (in Section~\ref{sec-mms}). In this section we present its strategy, in a formal way.
\subsection{Example of a gradient flow in $\dR^d$}
\label{sec-radient-flow}
Let us first present the easiest case of a smooth gradient flow in $\dR^d$. There we shall see that the equivalence between the contraction inequality~\eqref{eq-contraction-square-general-2} and the $CD(R,m)$ curvature-dimension condition is natural. It gives a way to understand the general case.
\medskip
Let $F : \dR^d \to \dR$ be a $\mathcal C^2$ smooth function, and let $(X_t)_{t\geq0}$ be a gradient flow for the function $F$, that is, a solution to the differential equation
\begin{equation}
\label{eq-fg}
\frac{dX_t}{dt}=-\nabla F(X_t).
\end{equation}
Following~\cite{EKS13}, the function $F$ satisfies a $CD(R,m)$ curvature-dimension condition for $R\in\dR$ and $m>0$ if for any $x,h\in\dR^d$, the map $[0,1]\ni s\mapsto \phi(s)=F(x+sh)$ satisfies the convexity inequality
\begin{equation}
\label{eq-cd-R}
\phi''(s)\geq R || h ||^2 +\frac{1}{m}(\phi'(s))^2.
\end{equation}
Here $||\cdot||$ is the Euclidean norm in $\dR^d$. Since the path $(x+sh)_{s\in[0,1]}$ is a geodesic between $x$ and $x+h$, this means that
$F$ satisfies a $(R,m)$-convexity condition along geodesics.
Let now $(X_t)_{t\geq0}$ and $(Y_t)_{t\geq0}$ be two solutions to~\eqref{eq-fg} with initial conditions $X_0$ and $Y_0$ respectively. The function
$\Lambda(t)=||X_t-Y_t||^2$ satisfies
$$
\Lambda'(u)=-2\int_0^1\phi_u''(s)ds
$$
where $\phi_u(s)=F(X_u+s(Y_u-X_u))$.
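In detail, this identity follows from~\eqref{eq-fg} and the fundamental theorem of calculus:
$$
\Lambda'(u)=-2\,(X_u-Y_u)\cdot\PAR{\nabla F(X_u)-\nabla F(Y_u)}=-2\PAR{\phi_u'(1)-\phi_u'(0)}=-2\int_0^1\phi_u''(s)\,ds
$$
since $\phi_u'(s)=\nabla F\big(X_u+s(Y_u-X_u)\big)\cdot(Y_u-X_u)$.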
If now the function $F$ satisfies the above $CD(R,m)$ condition~\eqref{eq-cd-R}, then
$$
\Lambda'(u)
\leq
-2R||X_u-Y_u||^2
-\frac{2}{m}\int_0^1 (\phi_u'(s))^2 ds
\leq
-2R\Lambda(u)-\frac{2}{m} (\phi_u(1)-\phi_u(0))^2,
$$
where the last inequality uses the Cauchy-Schwarz inequality $\int_0^1 (\phi_u'(s))^2 \, ds \geq \big( \int_0^1 \phi_u'(s) \, ds \big)^2 = (\phi_u(1)-\phi_u(0))^2$.
Integrating over the interval $[0,t]$, we get
\begin{equation}
\label{eq-contraction-R}
||X_t-Y_t||^2\leq e^{-2Rt}||X_0-Y_0||^2-\frac{2}{m}\int _0^te^{-2R(t-u)}(F(X_u)-F(Y_u))^2du.
\end{equation}
Conversely, let us assume that the gradient flow driven by $F$ satisfies the property~\eqref{eq-contraction-R} for any $t\geq0$ and any initial conditions $X_0$ and $Y_0.$ Then $F$ satisfies the $CD(R,m)$ condition~\eqref{eq-cd-R}. Indeed, taking the time derivative of~\eqref{eq-contraction-R} at $t=0$ gives
$$
- (X_0-Y_0)\cdot(\nabla F(X_0)-\nabla F(Y_0))\leq -R||X_0-Y_0||^2-\frac{1}{m}(F(X_0)-F(Y_0))^2.
$$
Let then $x, h$ in $\dR^d$ and $s \in [0,1]$ be fixed. A Taylor expansion for $Y_0 = x+(s+\varepsilon) h$ tending to $X_0 = x + sh$ (along a geodesic), so for $\varepsilon \to 0,$ implies back the $CD(R,m)$ condition~\eqref{eq-cd-R}.
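Let us spell out this expansion. With $X_0 = x+sh$ and $Y_0 = x+(s+\varepsilon) h$, the smoothness of $F$ gives
$$
(X_0-Y_0)\cdot\PAR{\nabla F(X_0)-\nabla F(Y_0)}=\varepsilon^2\,\phi''(s)+o(\varepsilon^2),
\qquad
F(X_0)-F(Y_0)=-\varepsilon\,\phi'(s)+o(\varepsilon),
$$
while $||X_0-Y_0||^2=\varepsilon^2 ||h||^2$. Plugging these expansions into the above inequality, dividing by $-\varepsilon^2$ and letting $\varepsilon\to0$ yield~\eqref{eq-cd-R} at $s$, hence the $CD(R,m)$ condition.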
\medskip
Let us observe that inequality~\eqref{eq-contraction-R} is exactly~\eqref{eq-contraction-square-general-2} when replacing $\dR^d$ with the space of probability densities, the Euclidean norm with the Wasserstein distance, $F$ with the entropy, $(X_t)_{t\geq0}$ with the semigroup $(P_t)_{t \geq 0}$ and the $CD(R,m)$ condition~\eqref{eq-cd-R} with the corresponding Bakry-\'Emery condition, which is equivalent to the $(R,m)$-convexity of the entropy (see \cite{EKS13}).
Of course, this computation is natural since
the considered evolution is the gradient flow of the entropy with respect to the Wasserstein distance, see~\cite{ambrosio-gigli-savare,jko}.
\smallskip
We now want to mimic the above proof for a smooth gradient flow on $\dR^d$ to the setting of a general semigroup on $(\mathcal P_2 (\ensuremath{\mathbf{X}}), W_2)$.
As here in the smooth case, we shall see in the coming section that geodesics play a fundamental role.
\subsection{How to adapt the gradient flow proof to the general case?}
\label{sec-how}
The most natural method to prove that a contraction inequality in Wasserstein distance, as in~\eqref{eq-premiere}, implies a curvature condition is to use close Dirac measures as initial data (see e.g.~\cite{bgl-15}). In our case, this cannot be done since the entropy of a Dirac measure is infinite. One could hope to remedy this by considering the entropy of the heat kernel
at positive time, when it becomes finite.
However, this again fails on a homogeneous space.
For instance, on $\mathbb{R}^d$, the entropy of the heat kernel
$p_t (x,\cdot)$ does not depend on $x$ and the dimensional corrective terms
in Theorem~\ref{thm-legros} vanish
if we consider two Dirac measures as initial data.
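For concreteness, let us recall this classical computation. On $\dR^d$ with $L=\Delta$ and $\mu=dx$, the heat kernel is $p_t(x,y)=(4\pi t)^{-d/2}e^{-|x-y|^2/(4t)}$ and
$$
\ent{dx}{p_t(x,\cdot)}=\int p_t(x,y)\log p_t(x,y)\,dy=-\frac{d}{2}\log(4\pi t)-\frac{d}{2},
$$
whatever the point $x$: the entropies of two such kernels coincide, and the corrective integral terms in Theorem~\ref{thm-legros} vanish.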
To solve this issue we shall consider as initial data a probability density $g$ (with respect to $\mu$) and a perturbation of it, both in sufficiently wide classes of functions. The perturbation will be built by means of a geodesic in the Wasserstein space $(\mathcal P_2 (\ensuremath{\mathbf{X}}), W_2)$. More precisely, given such a $g$,
we are looking for a path $(g_s)_{s \geq0}$ of probability densities whose Taylor expansion for small $s$ is a geodesic in $\mathcal P_2 (\ensuremath{\mathbf{X}})$ with a direction given by a function~$f$.
We explain the idea on a $RMT$.
For that, consider the generator $L^g=L + \Gamma(\log g,\cdot)$ (see~\eqref{eq-Gamma} for the definition of $\Gamma$) with associated semigroup $(P_t^g)_{t\geq0}$. Given a direction function $f$, there are two ways of defining the path $(g_s)_{s \geq 0}$, both admitting the same Taylor expansion for small $s$:
\begin{itemize}
\item One can first consider the path $g_s=g(1-sL^g f)$ for small $s$ and a smooth and compactly supported function $f$. The function $g_s$ is a smooth, bounded and compactly supported perturbation of $g$. This path will be used on a $RMT$ since such functions are adapted to the Riemannian setting.
\item One can also consider the path $\tilde{g}_s=g(1+f-P_s^gf)$, again for $s$ small and ``nice'' $f\in\mathbb L^\infty(\mu)$. The path $(\tilde{g}_s)$ has the same Taylor expansion as $(g_s)$ since $f-P_s^gf=-sL^gf+o(s)$.
This path will be used on $REM$ spaces. Indeed, regularity of functions (such as $g_s$ above) is clearly a difficult issue in the setting of metric measure spaces, and $\mathbb L^\infty(\mu)$ functions are much more adapted to them. By using the semigroup $( P_s^g )_{s \ge 0}$ instead of the generator $L^g$, we can apply the maximum principle which preserves (essential) boundedness of functions.
\end{itemize}
\begin{erem} \label{rem:Approximation_geodesic}
Let us see, formally and in the Euclidean space $\dR^d$, why the probability measure $ g_s dx$ has the same first-order Taylor expansion as the geodesic in the Wasserstein space. Let $\nu_0$ be a probability measure in $\dR^d$ being absolutely continuous with respect to the Lebesgue measure, $\psi:\dR^d\rightarrow\dR$ be a convex map, and
$$
\nu_s=((1-s){\rm Id}+s\nabla\psi)_{\#}\nu_0
$$
for $s\in[0,1]$. The path $(\nu_s)_{s\in[0,1]}$ is a geodesic path between $\nu_0$ and $\nu_1$ in the Wasserstein space, that is for any $s,t\in[0,1]$,
$$
W_2(\nu_s,\nu_t)=|t-s| \, W_2(\nu_0,\nu_1).
$$
Moreover, for any test function $H:\dR^d\rightarrow \dR$, and by a formal Taylor expansion when $s$ goes to~0,
$$
\int Hd\nu_s=\int H((1-s)x+s\nabla\psi(x))d\nu_0(x)=\int\SBRA{ H(x)+s\nabla H(x)\cdot (\nabla\psi(x)-x)+o(s)}d\nu_0(x).
$$
Assume now that $d\nu_0=gdx$ for a function $g$. Then, by integration by parts as in~\eqref{eq-ipp} below,
$$
\int Hd\nu_s=\int Hd\nu_0-s\int H \, L^g(f) \, d\nu_0+o(s) = \int H \, g_s dx + o(s)
$$
where $f(x)=\psi(x)-|x|^2/2$.
In conclusion, the path $(g_s)_{s \geq 0}$ appears as a (smooth) first-order Taylor expansion of the $W_2$-geodesic path $(\nu_s)_{s \geq 0}$.
\end{erem}
\section{The Riemannian Markov triple context}
\label{sec-riemannian}
In this section we prove the implication (iii) $\Rightarrow$ (i) of Theorem~\ref{thm-legros} in the context of a Riemannian manifold, in the form of Theorem~\ref{thm-main} below.
\subsection{Framework and results}
Let $(\ensuremath{\mathbf{M}}, \mathcal G)$ be a connected complete $\mathcal C^\infty$-Riemannian manifold. Let $V$ be a $\mathcal C^{\infty}$ function on $\ensuremath{\mathbf{M}}$ and consider the Markov semigroup $(P_t)_{t\geq 0}$ with generator $L=\Delta-\nabla V\cdot\nabla$, where $\Delta$ is the Laplace-Beltrami operator. Let also $ d\mu=e^{-V}dx$ where $dx$ is the Riemannian measure and
$\Gamma$ be the carr\'e du champ operator, defined by
\begin{equation} \label{eq-Gamma}
\Gamma(f,g)=\frac{1}{2}(L(fg)-fLg-gLf)
\end{equation}
for any smooth $f,g.$ We let $\Gamma(f)=\Gamma(f,f)=|\nabla f|^2$ where $|\nabla f|$ stands for the length of $\nabla f$ with respect to the Riemannian metric $\mathcal G$.
\medskip
Then $(\ensuremath{\mathbf{M}},\mu,\Gamma)$ is a full Markov triple in a Riemannian manifold, as in~\cite[Chap.~3]{bgl-book}, and in this work we call it a {\it Riemannian Markov triple $(RMT)$}.
\medskip
The measure $\mu$ is reversible with respect to the semigroup,
that is, for any $t\geq0$, $P_t$ is a self-adjoint operator in $\ensuremath{\mathbb{L}}^2(\mu)$.
Moreover the integration by parts formula
$$
\int fLg \, d\mu=-\int \Gamma(f,g)d\mu
$$
holds for all $f, g$ in the set $\mathcal{C}^\infty_c (\ensuremath{\mathbf{M}})$ of infinitely differentiable and compactly supported functions on~$\ensuremath{\mathbf{M}}$.
The generator $L$ satisfies the diffusion property, that is, for any smooth functions $\phi, f,g$,
$$
L(\phi(f))=\phi'(f)Lf+\phi''(f)\Gamma(f),
$$
or equivalently
\begin{equation} \label{eq:diffusion2}
\Gamma(\phi(f),g)=\phi'(f)\Gamma(f,g).
\end{equation}
In other words, the carr\'e du champ operator is a derivation operator for each component.
\medskip
The map $(x,t)\mapsto P_tf(x)$ is simply the solution to the parabolic equation $\partial_t u=Lu$ with $f$ as the initial condition.
\begin{edefi}[$CD(R,m)$ condition]
\label{def-cd}
Let $R\in\dR$ and $m\in (0,\infty]$. We say that the $RMT$ $(\ensuremath{\mathbf{M}},\mu,\Gamma)$ satisfies a $CD(R,m)$ curvature-dimension condition if
$$
\Gamma_2(f)\geq R\Gamma(f)+\frac{1}{m}(Lf)^2
$$
for any smooth function $f,$ say in $\mathcal C^\infty_c(\ensuremath{\mathbf{M}}),$
where
\begin{equation} \label{eq:Gamma2}
\Gamma_2(f)=\frac{1}{2}(L\Gamma(f)-2\Gamma(f,Lf)).
\end{equation}
\end{edefi}
Let us notice that $m$ can be different from the dimension of the manifold $\ensuremath{\mathbf{M}}$.
The $CD(R,m)$ curvature-dimension condition is called the Bakry-\'Emery or $\Gamma_2$ condition
and has been introduced in~\cite{bakryemery}
(see also~the recent~\cite{bgl-book}).
\begin{eex}
On a $d$-dimensional Riemannian manifold $(\ensuremath{\mathbf{M}}, \mathcal G)$
\begin{itemize}
\item the operator $L=\Delta$ satisfies a $CD(R,m)$ condition if $m\geq d$ and the Ricci curvature of the manifold is bounded from below by $R$;
\item more generally, the operator $L=\Delta-\nabla V\cdot\nabla$ satisfies a $CD(R,m)$ condition if $m \geq d$ and
$$
\mathrm{Ric}+{\rm Hess}(V)\geq R \, \mathcal G+\frac{1}{m-d}\nabla V\otimes\nabla V,
$$
where $\mathrm{Ric}$ is the Ricci tensor of $(\ensuremath{\mathbf{M}}, \mathcal G)$, see for instance~\cite[Sec.~C6]{bgl-book} (when $m=d$, one needs $V=0$).
\end{itemize}
\end{eex}
In a $RMT$, the following result gives the implication (iii) $\Rightarrow$ (i) in Theorem~\ref{thm-legros} :
\begin{ethm}
\label{thm-main} Let $(\ensuremath{\mathbf{M}},\mu,\Gamma)$ be a Riemannian Markov triple and $(P_t)_{t\geq0}$ its associated Markov semigroup. Let $R\in\dR$ and $m>0$.
If the inequality~\eqref{eq-contraction-square-general-2} holds for any $t\geq0$ and any smooth functions $f,g$ on $\ensuremath{\mathbf{M}}$ with $f \mu, g\mu$ in $\mathcal P_2(\ensuremath{\mathbf{M}})$,
then the $CD(R,m)$ condition of Definition~\ref{def-cd} holds.
\end{ethm}
\subsection{Proof of Theorem~\ref{thm-main}}
\label{sec-proof-riemannian}
It is based on the approximation of geodesics introduced in Section~\ref{sec-how} (see Remark~\ref{rem:Approximation_geodesic}), properties of the Hopf-Lax solution of the Hamilton-Jacobi equation, and an adapted class of test functions.
\medskip
Let $f$ be in $\mathcal C^\infty_c(\ensuremath{\mathbf{M}}).$
Let also $g$ be a smooth and positive function on $\ensuremath{\mathbf{M}}$ such that $g\mu\in\mathcal P_2(\ensuremath{\mathbf{M}}),$
$$
\int g \, |\log g| \, d\mu<\infty\quad{\rm and}\quad\int \frac{\Gamma(g)}{g}d\mu<\infty.
$$
Let us define the generator $L^g$ by
$$
L^g h =Lh+\Gamma(\log g,h)
$$
on smooth functions $h$. Since $g>0$, then $L^g$ is well defined on the set $\mathcal C^\infty_c(\ensuremath{\mathbf{M}})$ and $L^g h \in \mathcal C_c^\infty(\ensuremath{\mathbf{M}})$ for any $h\in\mathcal C^\infty_c(\ensuremath{\mathbf{M}})$. Moreover, the generator $L^g$ satisfies an integration by parts formula with respect to the probability measure $g\mu$ : for $h,k \in \mathcal{C}^\infty_c (\ensuremath{\mathbf{M}})$ (one of them can be with non compact support)
\begin{equation}
\label{eq-ipp}
\int h \, L^g k \, gd\mu=-\int \Gamma(h,k) \, gd\mu.
\end{equation}
For any $s\geq 0$, let us define $g_s=g(1- sL^gf)$. The function $L^g f$ is in $\mathcal C^\infty_c(\ensuremath{\mathbf{M}})$, so bounded, and we can let $N=||L^gf||_\infty.$ We shall frequently use the bounds $(1-sN) g \leq g_s \leq (1+sN)g$. In particular $g_s>0$ for $s< 1/N.$ Moreover $\int g_sd\mu=1$. Hence, for $s$ small enough, which we now assume, $g_s\mu$ is in $\mathcal P_2(\ensuremath{\mathbf{M}})$ with a smooth and positive density.
The proof of Theorem~\ref{thm-main} consists in applying~\eqref{eq-contraction-square-general-2} with $g_s$ instead of $f,$ dividing by $2s^2$ and letting $s$ go to $0$. For this we shall estimate the three terms in the inequality.
\medskip
A key tool is the Hopf-Lax semigroup defined on bounded Lipschitz functions $\psi$ by
\begin{equation} \label{eq:Hopf-Lax}
Q_s\psi(x):=\inf_{y\in \ensuremath{\mathbf{M}}}\BRA{\psi(y)+\frac{d(x,y)^2}{2s}}, \quad s>0, \,\,x \in \ensuremath{\mathbf{M}}.
\end{equation}
The map $x\mapsto Q_s\psi(x)$ is Lipschitz for every $s\geq0$, and the map $(s,x)\mapsto Q_s\psi(x)$
satisfies the Hamilton-Jacobi equation
$$
\partial_s Q_s\psi+\frac{1}{2}|\nabla Q_s\psi|^2=0,\quad \lim_{s\rightarrow 0}Q_s\psi=\psi
$$
in a sense given in~\cite[Thms. 22.46 and~30.30]{villani-book2} for instance.
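As an elementary illustration of this semigroup, take $\psi(y)=|y|$ on $\dR$ (Lipschitz, though unbounded; the infimum is still finite). One computes
$$
Q_s\psi(x)=\left\{
\begin{array}{ll}
|x|-s/2 & {\rm if}\ |x|\geq s,\\[1mm]
x^2/(2s) & {\rm if}\ |x|< s,
\end{array}
\right.
$$
and one can check the Hamilton-Jacobi equation $\partial_s Q_s\psi+\frac{1}{2}|\nabla Q_s\psi|^2=0$ directly on each region.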
We observe that $s Q_s(\psi) = Q_1(s\psi) = Q(s\psi)$, so for $s>0$ the Kantorovich duality~\eqref{eq-kanto} can be written as
\begin{equation}
\label{eq-23}
\frac{W_2^2(\nu_1, \nu_2)}{2s^2} = \frac{1}{s} \sup_{\psi} \SBRA{\int Q_s\psi \, d\nu_1 -\int \psi \, d\nu_2}.
\end{equation}
\medskip
\noindent{\bf Estimate on the term on the left-hand side of~\eqref{eq-contraction-square-general-2}.}
Letting $\psi=f$ in~\eqref{eq-23}, we obtain
\begin{equation} \label{eq:first-estimate0}
\frac{W_2^2(P_tg_s\mu,P_tg\mu)}{2s^2}\geq \int \frac{Q_s f P_tg_s- f P_tg}{s} d\mu.
\end{equation}
Since $f$ is Lipschitz, almost everywhere in $\ensuremath{\mathbf{M}}$ we have
$$
\lim_{s\rightarrow0}\frac{Q_s f P_tg_s- f P_tg}{s} = -\frac{1}{2}\Gamma(f)P_tg-fP_t(gL^gf)
$$
by (vii') in~\cite[Thm.~30.30]{villani-book2}. But, by the definition of $Q_s f$ and since $f$ is bounded,
\[
Q_s f (x)
=
\inf_{ y \in B ( x, \sqrt{ 4s \| f \|_\infty } )}
\left\{
f (y) + \frac{ d (x,y)^2 }{ 2s }
\right\}.
\]
Thus, for the Lipschitz seminorm $\Vert \cdot \Vert_{Lip}$,
\begin{eqnarray}\label{minoqsf}
0 \geq \frac{Q_s f(x) - f (x)}{s}
& \geq
\displaystyle \inf_{ y \in B ( x, \sqrt{ 4s \| f \|_\infty } ) \setminus \{ x \} }
\left\{
\frac{ f (y) - f (x) }{ d(x,y) } \frac{d (x,y)}{s} + \frac{ d (x,y)^2 }{ 2s^2 }
\right\} \nonumber
\\
& \geq
\displaystyle - \frac{1}{2} \sup_{ y \in B ( x, \sqrt{ 4s \| f \|_\infty } ) \setminus \{ x \} }
\left(
\frac{ f (y) - f (x) }{ d(x,y) }
\right)^2
\geq
- \frac12 \| f \|_{Lip}^2
\end{eqnarray}
(see also~\cite[page 585]{villani-book2}). Moreover $|| Q_s f||_\infty \leq ||f||_\infty,$ so, adding and subtracting $Q_s f P_t g$,
$$
\Big|\frac{Q_s f P_tg_s- f P_tg}{s}\Big|\leq || Q_s f||_\infty \; \vert P_t(g L^gf) \vert +P_tg \; \frac{f-Q_sf}{s}\leq \Big( ||f||_\infty \, ||L^g f||_\infty +\frac{||f||^2_{Lip}}{2} \Big) P_tg.
$$
The right-hand side is in $\mathbb L^1(\mu)$, so by the Lebesgue dominated convergence theorem
$$
\liminf_{s\rightarrow0}\frac{W_2^2(P_tg_s\mu,P_tg\mu)}{2s^2}\geq \int \PAR{-\frac{1}{2}\Gamma(f)P_tg-fP_t(gL^gf)}d\mu.
$$
Now, by reversibility of the measure $\mu$ and the integration by parts formula~\eqref{eq-ipp},
$$
\int fP_t(gL^gf)d\mu=\int P_tf L^g(f) \, gd\mu=-\int\Gamma(f, P_tf) \, gd\mu.
$$
Thus we obtain our first estimate:
\begin{equation}
\label{eq-first-estimate}
\liminf_{s\rightarrow 0} \frac{W_2^2(P_tg_s\mu,P_tg\mu)}{2s^2}\geq -\frac{1}{2}\int P_t(\Gamma(f)) gd\mu+\int \Gamma(f,P_tf)gd\mu.
\end{equation}
{\bf Estimate on the first term on the right-hand side}. According to~\eqref{eq-23} we need an upper bound on the quantities $\int Q_s(\psi) g_sd\mu-\int \psi gd\mu$, independent of the bounded Lipschitz function~$\psi$.
\medskip
First of all, for $0 < t < s$,
\begin{equation} \label{eq-hl}
\frac{d}{dt}\int Q_t \psi \, g_t \, d\mu=\int \SBRA{-\frac{1}{2}\Gamma(Q_t\psi)(1-tL^gf)-Q_t\psi L^g f}gd\mu.
\end{equation}
This is justified by item (vii) in~\cite[Thms. 22.46 and~30.30]{villani-book2} and the properties that $g\mu\in\mathcal P(\ensuremath{\mathbf{M}}),$ $L^g f$ is bounded, $||Q_t \psi ||_{\infty}\leq ||\psi||_{\infty}$ and $||Q_t \psi ||_{Lip}\leq ||\psi||_{Lip}$ for any $t$.
Now the integration by parts formula~\eqref{eq-ipp} gives $-\int Q_t\psi \,L^g f \,gd\mu=\int \Gamma(Q_t\psi,f)gd\mu$. Recall that $L^gf$ is bounded
and that we have let $N=||L^gf||_\infty$. For $t<s< 1/N$ we obtain
\begin{multline*}
\frac{d}{dt} \int Q_t \psi \, g_t \, d\mu \leq \int \SBRA{-\frac{1}{2}\Gamma(Q_t\psi)(1-sN)+\Gamma(Q_t\psi,f)}gd\mu
\\
=
\int \SBRA{-\frac{1-sN}{2}\Gamma\PAR{Q_t\psi-\frac{1}{1-sN}f}+\frac{1}{2(1-sN)}\Gamma(f)}gd\mu\leq
\frac{1}{2(1-sN)}\int\Gamma(f)gd\mu.
\end{multline*}
Integrating over the interval $t\in[0,s]$ :
$$
\int Q_s \psi \, g_sd\mu-\int \psi gd\mu \leq \frac{s}{2(1-sN)}\int\Gamma(f)gd\mu.
$$
Finally the Kantorovich duality~\eqref{eq-23} gives our second estimate:
\begin{equation}
\label{eq-second-estimate}
\limsup_{s\rightarrow 0}\frac{W_2^2(g_s\mu,g\mu)}{2s^2}\leq\frac{1}{2}\int\Gamma(f)gd\mu.
\end{equation}
{\bf Estimate on the second term on the right-hand side}. Let $u > 0$ and let us compute the limit of $\frac{1}{s}\PAR{\ent{\mu}{P_ug_s}-\ent{\mu}{P_ug}}$ when $s$ goes to 0.
First, for any $s > 0$,
$$
\frac{d}{ds}\SBRA{P_u(g_s)\log P_u(g_s)}=-(1+\log P_ug_s) \; P_u \big( g L^g f \big).
$$
Then, for $0 < s < 1/N$,
$$
|(1+\log P_ug_s)P_u(g L^g f)|\leq NP_ug \; (1 + \log(1+N) + |\log P_u(g)|).
$$
Forgetting the dimensional corrective term in~\eqref{eq-contraction-square-general-2}, by the von Renesse-Sturm theorem~\cite{sturm-vonrenesse} the $RMT$ satisfies a $CD(R,\infty)$ condition. In particular, and since $\int \Gamma(g) / g \, d\mu<\infty,$ one can use a local logarithmic Sobolev inequality~\cite[Thm.~5.5.2]{bgl-book} to deduce
$\int P_ug \, |\log P_u g| \, d\mu<\infty$. In particular the right-hand side in the last inequality is in $\mathbb L^1(\mu).$
Then, by the Lebesgue convergence theorem and~\eqref{eq-ipp},
\begin{eqnarray*}
\lim_{s\rightarrow0}\frac{{\ent{\mu}{P_ug_s}-\ent{\mu}{P_ug}}}{s}
&=&
-\int(1+\log P_ug)P_u \big(g L^g f \big)d\mu
\\
&=&
-\int P_u(\log P_ug)L^g f \; gd\mu =\int \Gamma(P_u(\log P_ug),f)gd\mu.
\end{eqnarray*}
By the Fatou lemma we obtain the third estimate :
\begin{multline}
\label{eq-third-estimate}
\limsup_{s\rightarrow0}-\frac{1}{m}\int_0^te^{-2R(t-u)}\SBRA{\frac{\ent{\mu}{P_ug_s}-\ent{\mu}{P_ug}}{s}}^2du
\\
\leq
-\frac{1}{m}\int_0^te^{-2R(t-u)}\PAR{\int \Gamma(P_u(\log P_ug),f)gd\mu}^2du .
\end{multline}
{\bf Conclusion.} Dividing the inequality~\eqref{eq-contraction-square-general-2} by $2s^2$, letting $s$ go to $0$ and using the three estimates~\eqref{eq-first-estimate},~\eqref{eq-second-estimate} and~\eqref{eq-third-estimate} we get
\begin{multline*}
-\frac{1}{2}\int P_t \Gamma(f) \, gd\mu+\int \Gamma(f,P_tf)gd\mu
\\
\leq
\frac{e^{-2Rt}}{2}\int\Gamma(f)gd\mu-\frac{1}{m}\int_0^te^{-2R(t-u)}\PAR{\int\Gamma(P_u(\log P_ug),f)gd\mu}^2du .
\end{multline*}
This inequality is an equality when $t=0$, so, since $f\in\mathcal C_c^\infty(\ensuremath{\mathbf{M}})$, taking its derivative at $t=0$ yields
$$
-\frac{1}{2}\int L \Gamma(f) \, gd\mu+\int \Gamma(f,Lf)gd\mu\leq -R\int\Gamma(f)gd\mu-\frac{1}{m}\PAR{\int\Gamma(\log g,f)gd\mu}^2.
$$
Since $\int\Gamma(\log g ,f)gd\mu=\int\Gamma(g,f)d\mu=-\int gLfd\mu$ and by definition of the $\Gamma_2$ operator we get
\begin{equation} \label{eq:preCD}
\int \Gamma_2(f)gd\mu\geq R\int\Gamma(f) \, gd\mu+\frac{1}{m}\PAR{\int Lf \, g d\mu}^2
\end{equation}
for any $f\in\mathcal C_c^\infty(\ensuremath{\mathbf{M}})$ and any positive smooth probability density $g$ with $\ent{\mu}{g}<\infty$ and $\int \Gamma(g)/g \, d\mu<\infty$.
\medskip
Inequality~\eqref{eq:preCD} appears as a weak form of the $CD(R,m)$ condition. Again from the $CD(R,\infty)$ condition, it is a consequence of Wang's Harnack inequality (see~\cite[Thm.~5.6.1]{bgl-book} and~\cite{wang-book}) that there exist $\alpha_0 >0$ and $o \in\ensuremath{\mathbf{M}}$ such that
\begin{equation} \label{eq:integrable}
\int \exp ( - \alpha_0 d ( o, x )^2 ) \, d \mu (x) < \infty.
\end{equation}
Then, in~\eqref{eq:preCD} we can replace $g$ by a sequence $(g_p)_p$ of such densities for which $g_p\mu$ converges to
the Dirac measure $\delta_x$ at $x\in\ensuremath{\mathbf{M}}$; we get
$$
\Gamma_2(f)\geq R\Gamma(f)+\frac{1}{m}(Lf)^2
$$
at any $x\in\ensuremath{\mathbf{M}}$ and for any function $f\in\mathcal C_c^\infty(\ensuremath{\mathbf{M}})$. This is the $CD(R,m)$ condition as in Definition~\ref{def-cd}, and this finishes the proof of Theorem~\ref{thm-main}.
\section{The Riemannian energy measure space context}
\label{sec-mms}
In this section we prove the implication (iii) $\Rightarrow$ (i) of Theorem~\ref{thm-legros} in the context of a Riemannian energy measure ($REM$) space, a particular case of metric measure spaces, see Theorem~\ref{thm:main_mms} below.
The proof goes along the same overall strategy as in the manifold case of Section~\ref{sec-proof-riemannian}. However, to overcome the lack of differentiability, it will require several tools and results from optimal transport and heat distributions on metric measure spaces.
The framework and the main Theorem~\ref{thm:main_mms} are stated in Section~\ref{subsec:frame_mms}. As an intermezzo, in Section~\ref{subsec:3-4} we complement the proof of (ii) $\Rightarrow$ (iii) in Theorem~\ref{thm-legros}.
The path $(\tilde{g}_s)_{s\geq 0}$ is constructed in Section~\ref{subsec-path}, the three key estimates are given in Section~\ref{sec-3-estimates}, finally the main proof is given in Section~\ref{subsec:pf_mms}.
\subsection{Framework and results}
\label{subsec:frame_mms}
As a natural framework, we will state our result
on a Riemannian energy measure space, as introduced in~\cite{AGS_BE}.
Let $( \ensuremath{\mathbf{X}} , \tau )$ be a Polish topological space
and $\mu$ a locally finite Borel measure with a full support.
Let $( \mathcal{E} , \mathcal{D} (\mathcal{E} ) )$ be
a strongly local symmetric Dirichlet form on $\mathbb{L}^2 ( \mu ).$ Let finally
$(P_t)_{t \geq 0}$ be its associated semigroup and $L$ its generator, with domain $\mathcal D(L)\subset\mathbb{L}^2 ( \mu )$.
As for a Markov triple (see~\cite{bgl-book}), since $P_t$ is symmetric and sub-Markovian, we can extend $P_t$ to a semigroup of contractions
on $\mathbb{L}^p (\mu)$ for $p \in [ 1, \infty ]$.
We also let $\mathcal{E} (f) : = \mathcal{E} (f,f)$ and
$$
\| f \|_{\mathcal{E}}^2 : = \| f \|_{\mathbb{L}^2 (\mu)}^2 + \mathcal{E} (f)
$$
for $f \in \mathcal{D} (\mathcal{E})$.
We assume that $( \ensuremath{\mathbf{X}} , \tau , \mu, \mathcal{E} )$ is a Riemannian
energy measure space in the sense of~\cite[Def.~3.16]{AGS_BE}, denoted $REM$ in this work.
A basic example of a $REM$ space is a Riemannian Markov triple as in Section~\ref{sec-riemannian}.
In this case, $( \mathcal{E} , \mathcal{D} (\mathcal{E}) )$ is canonically defined
by completion of $( f, f ) \mapsto \int | \nabla f |^2 \, d \mu$.
\textsf{RCD} spaces introduced in \cite{AGMR,AGS14}
are another important class of $REM$ spaces.
In this case, $\mathcal{E}/2$ is given by the $\mathbb{L}^2$-Cheeger energy functional.
To make this presentation concise, we prefer to state the crucial properties of a $REM$ space
instead of its precise definition. Indeed the definition consists of several notions, which will be used
only indirectly through these properties:
\begin{itemize}
\item
The intrinsic distance $d_ {\mathcal{E}}$ associated with $( \mathcal{E} , \mathcal{D} ( \mathcal{E} ) )$, in the sense of \cite[Sec.~3.3]{AGS_BE},
becomes a distance function, further denoted $d$.
It is compatible with the topology $\tau$
and the space $(\ensuremath{\mathbf{X}}, d)$ is complete \cite[Def.~3.6]{AGS_BE} and a length space \cite[Thm.~3.10]{AGS_BE}.
\end{itemize}
We let $\mathrm{Lip}_b (\ensuremath{\mathbf{X}})$ denote the set of bounded Lipschitz functions on $\ensuremath{\mathbf{X}}$
(with respect to $d$).
Let $|\nabla f| : \ensuremath{\mathbf{X}}\to \mathbb{R}$ be
the local Lipschitz constant of a Lipschitz function $f$ on $\ensuremath{\mathbf{X}}$:
\[
| \nabla f | (x)
: =
\limsup_{y \to x} \frac{ | f (y) - f (x) | }{ d (x,y) } \cdot
\]
\begin{itemize}
\item
$\mathcal{E}/2$ coincides with
the $\mathbb{L}^2$-Cheeger energy associated with $d$, defined for $f\in \mathbb{L}^2 (\mu)$ by
$${\rm Ch}(f):=\inf\left\{\liminf_{n\to\infty}\frac 12\int |\nabla f_n|^2d\mu\,;\, f_n\in \mathrm{Lip}_b (\ensuremath{\mathbf{X}}),\, f_n\to f\, \mathrm{in} \, \mathbb{L}^2(\mu)\right\}.$$
As a result, $( \mathcal{E} , \mathcal{D} ( \mathcal{E} ) )$
admits a carr\'e du champ, i.e.\ there is a symmetric bilinear map
$\Gamma :
\mathcal{D} ( \mathcal{E} ) \times \mathcal{D} ( \mathcal{E} )
\to \mathbb{L}^1 ( \mu )
$
such that
\[
\mathcal{E} ( f, g ) = \int \Gamma ( f , g ) \, d \mu.
\]
As on smooth spaces, $L$ and $\Gamma$ satisfy the diffusion property
\eqref{eq:diffusion2}.
The coincidence of $\mathcal{E}/2$ with
the Cheeger energy yields many connections between $d$ and $\Gamma$.
For instance,
$\mathcal{D} ( \mathcal{E} ) \cap \mathrm{Lip}_b (\ensuremath{\mathbf{X}})$
is dense in $\mathcal{D} (\mathcal{E})$ with respect to
$\| \cdot \|_{\mathcal{E}}$.
In addition,
\begin{equation} \label{eq:Gamma-nabla}
\Gamma (f) \leq | \nabla f |^2 \quad \mbox{$\mu$-a.e.}
\end{equation}
for any Lipschitz $f \in \mathcal{D} (\mathcal{E})$.
See \cite[Thm.~3.12]{AGS_BE} and \cite[Thm.~3.14]{AGS_BE} for all these facts.
\end{itemize}
Note that $\mathcal{D} ( \mathcal{E} ) \cap \mathbb{L}^\infty ( \mu )$
is an algebra and $\Gamma$ satisfies the Leibniz rule:
\[
\Gamma ( f g , h ) = f \Gamma ( g , h ) + g \Gamma ( f , h )
\quad
\mbox{for
$f,g \in \mathcal{D} ( \mathcal{E} ) \cap \mathbb{L}^\infty ( \mu )$
and
$h \in \mathcal{D} (\mathcal{E})$.
}
\]
We state further assumptions for our main theorem.
Fix a reference point $o \in \ensuremath{\mathbf{X}}$.
\begin{eass}
~
\begin{itemize}
\item[{\rm(Reg1)}]
There is $\alpha_0 > 0$ such that~\eqref{eq:integrable} holds.
\item[{\rm(Reg2)}]
$( \ensuremath{\mathbf{X}}, \tau )$ is locally compact.
\end{itemize}
\end{eass}
Assumption~(Reg1)
is equivalent to the condition (MD.exp) in \cite{AGS_BE}
(see e.g.\ the comments after Equation (3.13) in \cite{AGS_BE}).
This integrability condition yields the conservativity of $P_t$,~i.e.
\[
\int P_t f \, d \mu = \int f \, d \mu
\]
for $f \in \mathbb{L}^1 (\mu)$ (see~\cite[Thm.~3.14]{AGS_BE}).
This is equivalent to $P_t 1 = 1$ $\mu$-a.e., that is, the semigroup is Markovian (instead of merely sub-Markovian).
In fact~\eqref{eq:integrable} is a nearly optimal condition
to ensure that the semigroup is conservative
(see \cite[Rmk.~4.21]{AGS13}). Thus it is not restrictive.
Assumption~(Reg2) implies
that any closed bounded set in $\ensuremath{\mathbf{X}}$ is compact
(see e.g.~\cite[Prop.~2.5.22]{BBI}).
Moreover, $(\ensuremath{\mathbf{X}},d)$ is a geodesic space
(see e.g.~\cite[Thm.~2.5.23]{BBI}).
As a result, $(\mathcal{P}_2 (\ensuremath{\mathbf{X}}), W_2 )$ is also a geodesic space
(see e.g.~\cite[Cor.~1 and Prop.~1]{Lisini:2006be}).
\smallskip
In this framework, we should be careful when defining the operator $\Gamma_2$ in~\eqref{eq:Gamma2}
since $\Gamma (f)$ may not belong to $\mathcal{D} ( L )$
even for a sufficiently nice $f$.
To avoid such a technical difficulty, and following~\cite[Def.~2.4]{AGS_BE}, we employ a weak form of the $CD (R,m)$ condition :
\begin{edefi}[Weak $CD(R,m)$ condition]
\label{def-weak-cd}
Let $R\in\dR$ and $m>0$.
We say that the $REM$ space $(\ensuremath{\mathbf{X}} , \tau , \mu , \mathcal{E} )$ satisfies
a weak $CD (R,m)$ condition if, for all $f \in \mathcal{D} (L)$
with $L f \in \mathcal{D} (\mathcal{E})$ and all
$g \in \mathcal{D} (L) \cap \mathbb{L}^\infty (\mu)$
with $g \geq 0$ and $L g \in \mathbb{L}^\infty (\mu)$,
\begin{equation} \label{eq:weak-BE}
\frac12 \int \Gamma (f) L g \, d \mu
- \int \Gamma ( f , L f ) g \, d \mu
\geq R \int \Gamma (f) g \, d \mu
+ \frac{1}{m} \int ( L f )^2 g \, d \mu .
\end{equation}
\end{edefi}
Now we are ready to state our main theorem in this framework.
\begin{ethm}
\label{thm:main_mms}
Let $( \ensuremath{\mathbf{X}} , \tau , \mu , \mathcal{E} )$ be a Riemannian energy measure space
satisfying the above regularity assumptions (Reg1) and (Reg2). Let $R \in \mathbb{R}$ and
$m > 0$.
If inequality~\eqref{eq-contraction-square-general-2} holds for any $t \geq 0$ and probability densities $f,g \in \mathbb{L}^1 (\mu)$
with $f \mu , g \mu \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$, then the weak $CD ( R, m )$ condition of Definition~\ref{def-weak-cd} holds.
In particular, the conditions (ii) and (iii) in Theorem~\ref{thm-legros}
are equivalent to the weak $CD (R,m)$ condition.
\end{ethm}
Note that~\eqref{eq-contraction-square-general-2}
yields a $W_2$-contraction
\begin{equation}
\label{eq-contraction-easy2}
W_2^2(P_tf d\mu,P_tg d\mu)\leq e^{-2Rt} W_2^2(f d\mu,g d\mu)
\end{equation}
by neglecting the term involving $m$.
Then, by \cite[Cor.~3.18]{AGS_BE}, \eqref{eq-contraction-easy2} implies a $CD (R,\infty)$ condition
in the sense of~\eqref{eq:weak-BE}.
This fact is very helpful for further discussion in the sequel
since it ensures regularity of the space in many respects.
As a regularization property of $P_t$, we have
\begin{equation} \label{eq:Lip-infinity}
\mbox{
$P_t f \in \mathrm{Lip}_b (\ensuremath{\mathbf{X}})$
for $f \in \mathbb{L}^2 (\mu) \cap \mathbb{L}^\infty (\mu), \; t>0$
}
\end{equation}
(see \cite[Thm.~3.17]{AGS_BE}; more precisely,
$P_t f$ has a version which belongs to $\mathrm{Lip}_b (\ensuremath{\mathbf{X}})$).
In addition, $(\ensuremath{\mathbf{X}}, d, \mu )$ becomes an $\mathsf{RCD} (R, \infty)$ space
(see \cite[Thm.~4.17]{AGS_BE}).
Then, for a probability density $f$ with respect to $\mu$,
$( ( P_t f ) \mu )_{t \geq 0}$ is a gradient flow of $\mathrm{Ent}_\mu$
in the sense of the $R$-evolution variational inequality \cite[Thm.~6.1]{AGMR}.
As a consequence, we obtain the following properties:
\begin{itemize}
\item\label{page-dual}
We can extend the action of $P_t$ to $\nu \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$
in the sense that $P_t \nu$ is a solution to the $R$-evolution
variational inequality and that $P_t \nu = ( P_t f ) \mu$ if $\nu = f \mu$.
In particular, $( P_t \nu )_{t \geq 0}$ becomes a continuous curve
in $( \mathcal{P}_2 (\ensuremath{\mathbf{X}}), W_2 )$, see \cite[Thm.~6.1]{AGMR}.
In addition, $\nu \mapsto P_t \nu$ is a continuous map from
$( \mathcal{P}_2 (\ensuremath{\mathbf{X}}), W_2 )$ to itself, see \cite[Eq. (7.2)]{AGMR}.
\item
$P_t \nu \ll \mu$ for $\nu \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$ and $t > 0$, and its density $\rho_t$ satisfies
$\ent{\mu}{\rho_t} \in \mathbb{R}$.
This property is included
in the definition of the $R$-evolution variational inequality, see e.g.\ \cite[Def.~2.5]{AGMR}.
Recall that, under~\eqref{eq:integrable}, $\ent{\mu}{\rho}$ is well-defined
and $\ent{\mu}{\rho} \in ( - \infty , \infty ]$
for $\rho : \ensuremath{\mathbf{X}} \to [ 0, \infty ]$
with $\rho \mu \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$, see e.g.~\cite[Sec.~7]{AGS13}.
\item
There is a positive symmetric measurable function $p_t (x,y)$ such that
$P_t$ coincides with the integral operator associated with $p_t$, see \cite[Thm.~7.1]{AGMR}.
\item
For any bounded measurable $h$ and $\nu \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$,
we have
\begin{equation}
\label{eq-dual}
\int h \, d P_t \nu = \int P_t h \, d \nu,
\end{equation}
see \cite[Prop.~3.2]{AGS_BE}.
By the monotone convergence theorem, we can extend this identity
to those $h$ which are bounded only from below (or above).
\item For any $f \in \mathcal{D} (L)$ and $h \in \mathcal D({\mathcal E})$
we have the integration by parts formula
\begin{equation}
\label{eq-ipp-mms}
\int \Gamma (h , f) \, d \mu =- \int h \, Lf \, d\mu.
\end{equation}
\end{itemize}
\subsection{Proof of (ii) $\Rightarrow$ (iii) in Theorem~\ref{thm-legros}} \label{subsec:3-4}
As announced, before entering the proof of Theorem~\ref{thm:main_mms}, we complete the proof of (ii) $\Rightarrow$ (iii) in
Theorem~\ref{thm-legros}, with the aid of the preparations in Section~\ref{subsec:frame_mms}.
We first check that~\eqref{eq-contraction-sh} yields~\eqref{eq-contraction-easy2}.
Proceeding as in Section~\ref{subsec:frame_mms}, but with~\eqref{eq-contraction-sh}
in place of~\eqref{eq-contraction-square-general-2},~we~get
\begin{equation} \label{eq:s-contraction}
s_{\frac Rm} \left(
\frac12 W_2 ( P_t f \mu , P_t g \mu )
\right)^2
\leq
e^{-2Rt}\,
s_{\frac {R}{m}} \left(
\frac {1}{2} W_2 ( f \mu , g \mu )
\right)^2
\end{equation}
by neglecting the term involving $m$.
From this inequality, we can extend $P_t$ to a map
from $\mathcal{P}_2 (\ensuremath{\mathbf{X}})$ to itself, in a canonical way.
Moreover, in~\eqref{eq:s-contraction}
we can replace $f \mu$ and $g \mu$
with any $\nu_0 , \nu_1 \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$ respectively.
Then we obtain~\eqref{eq-contraction-easy2} by a similar argument as in Proposition~\ref{prop-3-4}.
Thus, as discussed in Section~\ref{subsec:frame_mms},
$(\ensuremath{\mathbf{X}} , d, \mu )$ is an $\mathsf{RCD} ( R, \infty )$ space and
all properties at the end of Section~\ref{subsec:frame_mms} become available.
We remark that the extension of $P_t$ given on the basis of
\eqref{eq:s-contraction} coincides with the one given
by the $\mathsf{RCD} ( R, \infty )$ property.
In Section~\ref{sec-proofdebut}, we already pointed out that
we only need to show that $P_t$ fulfills all the assumptions for $\varphi_t$ in
Proposition~\ref{prop-3-4} with $(Y , d_Y ) = ( \mathcal{P}_2 (\ensuremath{\mathbf{X}}) , W_2 )$ and
$U = \entf{\mu}$. Here we are extending the definition of $\entf{\mu}$
so that, for $\nu \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$,
$\ent{\mu}{\nu} = \ent{\mu}{{ d \nu }/{ d \mu } }$ if $\nu \ll \mu$
and $\ent{\mu}{\nu} = \infty$ otherwise.
Taking the observations at the beginning of this section into account,
it suffices to prove that~\eqref{eq-contraction-sh} implies
\begin{multline*}
s_{\frac Rm} \left(
\frac12 W_2 ( P_t \nu_0 , P_t \nu_1 )
\right)^2
\leq
e^{-2Rt}\,
s_{\frac {R}{m}} \left(
\frac {1}{2} W_2 ( \nu_0 , \nu_1 )
\right)^2
\\
- \frac{1}{2 m} \int_0^t
e^{-2R(t-u)}
\PAR{
\ent{\mu}{ P_u \nu_0 } - \ent{\mu}{ P_u \nu_1 }
} ^2
du
\end{multline*}
for $\nu_0 , \nu_1 \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$ and $t > 0$.
But this is true since $P_\delta \nu_0 , P_\delta \nu_1 \ll \mu$ for any $\delta \in (0,t)$,
so that
\begin{multline*}
s_{\frac Rm} \left(
\frac12 W_2 ( P_t \nu_0 , P_t \nu_1 )
\right)^2
\leq
e^{-2R ( t - \delta )}\,
s_{\frac {R}{m}} \left(
\frac {1}{2} W_2 ( P_\delta \nu_0 , P_\delta \nu_1 )
\right)^2
\\
- \frac{1}{2 m} \int_{\delta}^{t}
e^{-2R(t-u)}
\PAR{
\ent{\mu}{ P_{u} \nu_0 } - \ent{\mu}{ P_{u} \nu_1 }
} ^2
du
\end{multline*}
by~\eqref{eq-contraction-sh} and the bound $\sinh^2(x) \geq x^2$; moreover $P_\delta \nu_i \to \nu_i$ in $W_2$ as $\delta \downarrow 0$ for $i=0,1$: this gives
the assertion.
Hence the proof of (ii) $\Rightarrow$ (iii) in Theorem~\ref{thm-legros} is completed
and thus it is sufficient to show the main assertion of Theorem~\ref{thm:main_mms}
to complete the proof of our equivalence result.
\subsection{Construction of the path $(\tilde{g}_s)_{s\geq 0}$}
\label{subsec-path}
In this section, we build the path $\tilde{g}_s$ mentioned in Section~\ref{sec-how},
under~\eqref{eq-contraction-square-general-2}.
Recall that $(\ensuremath{\mathbf{X}} , d, \mu )$ is now an $\mathsf{RCD} (R, \infty)$ space
as remarked at the end of Section~\ref{subsec:frame_mms}.
For $x \in \ensuremath{\mathbf{X}}$ and $r > 0$, we denote the open ball of
radius $r$ centered at $x$ by $B_r (x)$.
For this we first define $g (=\tilde{g}_0)$. We take $g$ in a more tractable (but large enough) class than the full class of Definition~\ref{def-weak-cd}.
Fix $\alpha > \alpha_0$ with $\alpha_0$ as in~\eqref{eq:integrable}, $\lambda \in ( 0 , 1 )$ and a nonnegative Lipschitz function $g_0 : \ensuremath{\mathbf{X}} \to [ 0, \infty )$
with compact support.
Let us define $g$ as follows:
\begin{equation} \label{eq:g}
g (x) := \frac{1}{Z} \left(
( 1 - \lambda ) g_0
+
\lambda \exp ( - \alpha d ( x , o )^2 )
\right)
\end{equation}
where $Z > 0$ is a normalizing constant
such that $g \mu \in \mathcal{P} (\ensuremath{\mathbf{X}})$.
Note that~\eqref{eq:integrable} yields $g \mu \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$.
We fix $g$ until the end of the proof of Proposition~\ref{prop:preCD} below.
We can define the $\mathbb{L}^2$-Cheeger energy functional $\mathcal{E}_g/2$
associated with $d$ and the probability measure $g \mu$. Let $\mathcal{D} ( \mathcal{E}_g )$ be
the set of $f \in \mathbb{L}^2 (g \mu)$ with $\mathcal{E}_g (f) < \infty$.
Recall that $\mathcal{D} ( \mathcal{E}_g )$ is complete with respect
to $\| \cdot \|_{\mathcal{E}_g}$.
To define the path $(\tilde{g}_s)_{s\geq 0}$ we need the corresponding generator $L^g$, and for this
we show the following auxiliary lemma.
\begin{elem} \label{lem:g-bilinear}
In the above notation,
$\mathcal{D} ( \mathcal{E} ) \subset \mathcal{D} ( \mathcal{E}_g )$
and
\begin{equation} \label{eq:E_g}
\mathcal{E}_g ( f ) = \int \Gamma (f) g \, d \mu
\end{equation}
for $f \in \mathcal{D} (\mathcal{E})$.
In addition, $( \mathcal{E}_g , \mathcal{D} ( \mathcal{E}_g ) )$ is bilinear.
\end{elem}
We do not know whether~\eqref{eq:E_g} is valid
for any $f \in \mathcal{D} ( \mathcal{E}_g )$.
Thus we have to be careful when we apply the integration by parts formula
\eqref{eq-ipp} for $L^g$.
\begin{Proof}
The former assertion follows from \cite[Lem.~4.11]{AGS13}.
For the latter assertion, take $f, \tilde{f} \in \mathcal{D} ( \mathcal{E}_g )$.
For each $n \in \mathbb{N}$, take also $\chi_n \in \mathrm{Lip}_b (\ensuremath{\mathbf{X}})$ with $0 \leq \chi_n \leq 1$,
$\chi_n|_{B_{n} (o)} \equiv 1$ and $\chi_n|_{B_{n+1} (o)^c} \equiv 0$.
Since, for each $n \in \mathbb{N}$, $g$ is bounded away from $0$ on $B_n (o),$
we have $f_n : = f \chi_n \in \mathcal{D} ( \mathcal{E} )$
by the locality of the Cheeger energy, see \cite[Prop.~4.8 (b)]{AGS13}
and \cite[Lem.~4.11]{AGS13}.
Moreover,
$( f_n )_{n \in \mathbb{N}}$ forms a Cauchy sequence
with respect to $\| \cdot \|_{\mathcal{E}_g}$
and hence $\| f_n - f \|_{\mathcal{E}_g} \to 0$.
By the same argument,
we have $\| \tilde{f}_n - \tilde{f} \|_{\mathcal{E}_g} \to 0$
for $\tilde{f}_n := \tilde{f} \chi_n$.
By~\eqref{eq:E_g}, and recalling that $\Gamma$ is symmetric bilinear, we have
\[
\mathcal{E}_g ( f_n + \tilde{f}_n )
+
\mathcal{E}_g ( f_n - \tilde{f}_n )
=
2 \left(
\mathcal{E}_g (f_n)
+
\mathcal{E}_g (\tilde{f}_n)
\right).
\]
Therefore the conclusion holds by letting $n \to \infty$.
\end{Proof}
By Lemma~\ref{lem:g-bilinear},
$( \mathcal{E}_g , \mathcal{D} ( \mathcal{E}_g ) )$ is a closed bilinear form
on $\mathbb{L}^2 ( g \mu )$. Hence there are an associated
$\mathbb{L}^2$-semigroup $P^g_t$ of symmetric linear contraction
and its generator $L^g$.
By \cite[Prop.~4.8 (b)]{AGS13}, $\mathcal{E}_g$ is sub-Markovian.
Thus $P^g_t$ satisfies the maximum principle, i.e. $P^g_t f \leq c$ if $f \leq c$
for $f \in \mathbb{L}^2 (g \mu)$ and $c \in \mathbb{R}$.
In addition, $\mathrm{Lip}_b (\ensuremath{\mathbf{X}}) \cap \mathcal{D} ( \mathcal{E}_g )$
is dense in $\mathcal{D} ( \mathcal{E}_g )$
with respect to $\| \cdot \|_{\mathcal{E}_g}$.
Note that we can define $P^g_t$ and $L^g$ without bilinearity of $\mathcal{E}_g$
(see \cite[Sec.~4]{AGS13} and references therein).
However, they can then be nonlinear and the integration by parts formula~\eqref{eq-ipp} may not hold.
\begin{elem} \label{lem:g_dom}
In the above notation,
\begin{enumerate}
\item
$g \in \mathcal{D} ( \mathcal{E} ) \cap \mathbb{L}^\infty ( \mu )$
and $\log g \in \mathcal{D} ( \mathcal{E}_g )$.
\item
$\mathcal{D} (L) \subset \mathcal{D} (L^g)$.
\end{enumerate}
\end{elem}
\begin{Proof}
(i) The first claim follows
from~\eqref{eq:Gamma-nabla} and~\eqref{eq:integrable}.
For the second one, note that
\[
\mathcal{E}_g ( \log g ) \leq \int | \nabla \log g |^2 g \, d \mu.
\]
It is the integrated form of~\eqref{eq:Gamma-nabla}
for $\mathcal{E}_g$ instead of $\mathcal{E}$.
Then the claim follows
from~\eqref{eq:integrable}.
(ii) Let $f\in\mathcal D(L)$ and $h \in \mathcal{D} (\mathcal{E}_g)$.
Take $h_n \in \mathrm{Lip}_b (\ensuremath{\mathbf{X}}) \cap \mathcal{D} ( \mathcal{E}_g )$
for $n \in \mathbb{N}$
such that $\| h_n - h \|_{\mathcal{E}_g} \! \to \! 0$.
By a truncation argument used in the proof of
Lemma~\ref{lem:g-bilinear}, we may assume that each $h_n$ is supported
on a bounded set, without loss of generality.
Then $h_n \in \mathcal{D} ( \mathcal{E} ) \cap \mathbb{L}^\infty (\mu)$
and hence $h_n g \in \mathcal{D} ( \mathcal{E} )$.
Thus the Leibniz rule, the assertion (i),~\eqref{eq-ipp-mms} and~\eqref{eq:Gamma-nabla} imply
\begin{align*}
\left|
\int \Gamma ( h_n , f ) g \, d \mu
\right|
& =
\left|
\int \Gamma ( h_n g , f ) \, d \mu
-
\int h_n \Gamma ( g , f ) \, d \mu
\right|
\\
& \leq
\left|
\int h_n ( L f ) g \, d \mu
\right|
+
\left|
\int h_n \Gamma ( \log g , f ) g \, d \mu
\right|
\\
& \leq
\| h_n \|_{\mathbb{L}^2 (g\mu)}
\left(
\| g \|_\infty^{1/2} \| Lf \|_{\mathbb{L}^2 (\mu)}
+
\left\| \frac{ | \nabla g |^2 }{g} \right\|_\infty^{1/2}
\mathcal{E} (f)^{1/2}
\right).
\end{align*}
The definition of $g$ yields
$\| | \nabla g |^2 / g \|_\infty \! < \! \infty$.
Thus there is $C>0$ independent of $h$ and $n$ such~that
\begin{align*}
\left|
\mathcal{E}_g ( h_n , f )
\right|
\leq
C \| h_n \|_{\mathbb{L}^2 (g\mu)}.
\end{align*}
Here we used Lemma~\ref{lem:g-bilinear}.
By letting $n \! \to \! \infty$, we can replace $h_n$ with $h$ in this inequality.
Hence $f \in \mathcal{D} ( L^g )$ since $h$ is arbitrary in $\mathcal{D} (\mathcal{E}_g)$.
\end{Proof}
\bigskip
We can now define the path $(\tilde{g}_s)_{s \geq 0}$.
Let $f \in \mathcal{D} (L) \cap \mathrm{Lip}_b (\ensuremath{\mathbf{X}})$ with $\| f \|_{\infty} \leq 1/4$.
We fix $f$ until the end of the following section, and observe that $f \in \mathbb{L}^2 ( g \mu )$.
Then we let
\begin{equation} \label{eq:gs}
\tilde{g}_s : = g ( 1 + f - P^g_s f ).
\end{equation}
By the $\mathbb{L}^{\infty}$-bound on $f$ and the maximum principle for $P^g_s$,
we have
\begin{equation} \label{eq:g-gs}
\frac12 g \leq \tilde{g}_s \leq 2 g.
\end{equation}
In what follows,
we may assume without loss of generality that $L^g f$ is not identically $0$.
Indeed, by~\eqref{eq-ipp-mms} and Lemma~\ref{lem:g_dom},
\begin{equation} \label{eq:g-log}
\int Lf \, g \, d \mu
=
- \int \Gamma ( f , g ) \, d \mu
=
- \int \Gamma ( f , \log g ) g \, d \mu
=
\int L^g f \, \log g \, g\, d \mu.
\end{equation}
Thus, if $L^g f$ is identically $0$, then $\int L f \, g \, d \mu = 0$;
hence~\eqref{eq:preCD1} below holds in this specific case
(without the next section)
since the $CD (R , \infty )$ condition holds
on our $\mathsf{RCD} (R, \infty)$ space.
\subsection{Three key estimates}
\label{sec-3-estimates}
The proof of Theorem~\ref{thm:main_mms} is based on~\eqref{eq:preCD1} in Proposition~\ref{prop:preCD} below. In turn, this bound is based on the three key estimates in Lemmas~\ref{lem:first},~\ref{lem:second} and~\ref{lem:Ent_conv1}, which in the manifold case of Section~\ref{sec-proof-riemannian} correspond to~\eqref{eq-first-estimate},~\eqref{eq-second-estimate}
and~\eqref{eq-third-estimate}. The proofs are a bit different since we use $\tilde{g}_s$ instead of $g_s$.
The Hopf-Lax semigroup $(Q_s)_{s \geq 0}$ given by~\eqref{eq:Hopf-Lax}
will again play a crucial role. Required properties for $Q_s$ in this framework are given in~\cite[Sec.~3]{AGS13} or \cite[Sec.~3]{AGS_Sob} for instance.
\medskip
We begin with the {\it first estimate}, corresponding to~\eqref{eq-first-estimate}:
\begin{elem}[First estimate] \label{lem:first}
\begin{equation*}
\liminf_{s\rightarrow 0} \frac{W_2^2(P_t\tilde{g}_s\mu,P_tg\mu)}{2s^2}
\geq
-\frac{1}{2}\int P_t( | \nabla f |^2 ) g\, d\mu + \int \Gamma(f,P_tf)g \, d\mu.
\end{equation*}
\end{elem}
\begin{Proof}
It suffices to prove a lower bound on the right-hand side of~\eqref{eq:first-estimate0}, with $\tilde{g}_s$ in place of $g_s$.
By a rearrangement,
\begin{equation}\label{eq:estimate1-1}
\int \frac{Q_s f P_t\tilde{g}_s- f P_tg}{s} d\mu
=
\int \frac{Q_s f - f }{s} P_t ( \tilde{g}_s - g ) \, d \mu
+
\int \frac{Q_s f - f }{s} P_t g \, d \mu
+
\int f \frac{P_t ( \tilde{g}_s - g )}{s} \, d \mu .
\end{equation}
Since $g \mu \in \mathcal{P} (\ensuremath{\mathbf{X}})$,
the Cauchy-Schwarz inequality yields $s^{-1} ( \tilde{g}_s - g ) \to - g \, L^g f$
in $\mathbb{L}^1 (\mu)$.
Thus the last term in~\eqref{eq:estimate1-1}
converges to $- \int f P_t ( g L^g f ) \, d \mu$.
By Lemma~\ref{lem:g-bilinear}, and as in Section~\ref{sec-proof-riemannian},
this quantity is equal to the second term on the right-hand side of the assertion.
Moreover, by the general bound~\eqref{minoqsf}, the first term on the right-hand side of~\eqref{eq:estimate1-1} goes to 0.
Finally, by~\eqref{minoqsf} and the Lebesgue dominated convergence theorem we conclude on the second term as in the Riemannian case of Section~\ref{sec-proof-riemannian}.
More precisely, we have
\begin{align*}
\liminf_{s \to 0} & \int \frac{Q_s f(x) - f (x)}{s} P_t g (x) \, \mu (dx)
\\
& \geq
- \frac{1}{2} \limsup_{s \to 0}
\int \sup_{ y \in B ( x, \sqrt{ 4s \| f \|_\infty } ) \setminus \{ x \} }
\left(
\frac{ f (y) - f (x) }{ d(x,y) }
\right)^2 P_t g (x) \mu (dx)
=
- \frac12
\int | \nabla f |^2 P_t g \, d \mu.
\end{align*}
Thus the assertion holds.
\end{Proof}
The next lemma deals with the {\it second estimate} and corresponds to~\eqref{eq-second-estimate}.
\begin{elem}[Second estimate] \label{lem:second}
$$
\limsup_{s\rightarrow 0}\frac{W_2^2(\tilde{g}_s\mu,g\mu)}{2s^2}
\leq
\frac{1}{2 ( 1 -2 \| f \|_\infty )}
\int \Gamma(f) g \, d\mu.
$$
\end{elem}
\begin{Proof}
Again, by the dual form~\eqref{eq-23}, we need to bound $\int Q_s \psi \, \tilde{g}_sd\mu-\int \psi gd\mu$ from above, uniformly over the bounded Lipschitz functions $\psi$. We can moreover assume that $\psi$ is supported on a bounded set.
Then the function
$(s_1 , s_2 ) \mapsto \int Q_{s_1}( \psi ) \tilde{g}_{s_2} \, d \mu$
satisfies the assumption of~\cite[Lem.~4.3.4]{ambrosio-gigli-savare}
since we have~\eqref{eq:g-gs} and $\| Q_{s_1} \psi \|_\infty \leq \| \psi \|_\infty$. Thus, instead of~\eqref{eq-hl}, we obtain
\begin{align*}
\frac{d}{ds} \int Q_s(\psi) \tilde{g}_s \, d \mu
& \leq
\frac{d}{ds} \left. \int Q_s(\psi) \tilde{g}_{s_0} \, d \mu \right|_{s_0 = s}
+
\frac{d}{ds} \left. \int Q_{s_0} (\psi) \tilde{g}_{s} \, d \mu \right|_{s_0 = s}
\\
& =
\int \SBRA{
-\frac{1}{2}| \nabla Q_s\psi |^2 ( 1 + f - P^g_s f )
- Q_s\psi \; L^g P_s^g f
} g \, d\mu
\end{align*}
for a.e.\ $s>0$.
Here the equality follows from \cite[Thm.~3.6]{AGS_Sob}, the properties
$\| Q_s \psi \|_{Lip} < \infty$,
$\| Q_s \psi \|_{\infty} < \infty$
and the Lebesgue dominated convergence theorem.
Note that $Q_s \psi \in \mathcal{D} ( \mathcal{E}_g )$
since $Q_s \psi$ is Lipschitz with a bounded support.
Thus, by virtue of Lemma~\ref{lem:g-bilinear} and~\eqref{eq:Gamma-nabla},
\begin{equation*}
- \int Q_s\psi \; ( L^g P_s^g f ) g \, d\mu
=
\mathcal{E}_g ( Q_s\psi , P_s^g f )
\leq
\sqrt{ \mathcal{E}_g ( Q_s\psi ) \mathcal{E}_g ( P_s^g f ) }
\leq
\sqrt{
\int | \nabla Q_s \psi |^2 g \, d\mu
\;
\mathcal{E}_g ( P_s^g f )}.
\end{equation*}
By combining this estimate with the previous one, using $1 + f - P^g_s f \geq 1 - 2 \| f \|_\infty$, and maximizing over $X := \big( \int | \nabla Q_s \psi |^2 g \, d\mu \big)^{1/2}$ the resulting bound $- \frac{1 - 2 \| f \|_\infty}{2} X^2 + \mathcal{E}_g ( P_s^g f )^{1/2} X$, we obtain
\[
\frac{d}{ds} \int Q_s(\psi) \tilde{g}_s \, d \mu
\leq
\frac{1}{ 2 ( 1 - 2 \|f\|_\infty ) }
\mathcal{E}_g ( P_s^g f )
\leq
\frac{1}{ 2 ( 1 - 2 \|f\|_\infty ) }
\mathcal{E}_g ( f )
=
\frac{1}{ 2 ( 1 - 2 \|f\|_\infty ) }
\int \Gamma( f ) g \, d \mu.
\]
Here the second inequality follows from the spectral decomposition for quadratic forms
and the equality follows from Lemma~\ref{lem:g-bilinear} again
since $f \in \mathcal{D} (L) \subset \mathcal{D} ( \mathcal{E} )$.
Thus the conclusion follows by integrating this estimate, as in the proof of~\eqref{eq-second-estimate}.
\end{Proof}
For the {\it third estimate}, we still require some preparation.
We call $\mathcal{C}_2 (\ensuremath{\mathbf{X}})$ the set of continuous functions $\psi$ on $\ensuremath{\mathbf{X}}$
for which there exists $C > 0$ such that $| \psi (x) | \leq C ( 1 + d ( o , x )^2 )$.
For $\psi \in \mathcal{C}_2 (\ensuremath{\mathbf{X}})$ and $\nu \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$,
we have $\psi \in \mathbb{L}^1 (\nu)$.
By assumption on $g$, $\psi \in \mathbb{L}^p (g \mu )$
for any $\psi \in \mathcal{C}_2 (\ensuremath{\mathbf{X}})$ and $p \in [ 1, \infty )$.
The following lemma ensures integrability properties
required in the proof of Lemma~\ref{lem:Ent_conv1} below.
\begin{elem} \label{lem:integrable}
~
\begin{enumerate}
\item
Let $J : = \{ \psi \in \mathbb{L}^2 (g \mu) \; | \; \psi g \mu \in \mathcal{P} (\ensuremath{\mathbf{X}}) \}$.
Then $\psi g \mu \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$ for any $\psi \in J$.
Moreover, for $J_0 \subset J$
with $\sup_{\psi \in J_0} \| \psi \|_{\mathbb{L}^2 (g\mu)} < \infty$,
we have
\[
\sup_{\psi \in J_0} \int d (o,x)^2 \psi g \, d \mu < \infty.
\]
\item
$\log P_u g \in \mathcal{C}_2 (\ensuremath{\mathbf{X}})$ for $u \geq 0$.
\end{enumerate}
\end{elem}
\begin{Proof}
(i) Using Assumption (Reg1) and~\eqref{eq:g}, this follows from
\begin{align*}
\int d ( o , x )^2 \psi (x) g (x) \, \mu (dx)
\leq
\left( \int d ( o , x )^4 g(x) \, \mu (dx) \right)^{1/2}
\left( \int \psi^2 g \, d \mu \right)^{1/2}
< \infty.
\end{align*}
(ii) By~\eqref{eq:g} this is obvious for $u = 0$ and hence we consider the case $u > 0$.
First of all, $\log P_u g$ is continuous on $\ensuremath{\mathbf{X}}$ since $P_u g > 0$.
Moreover, since $(\ensuremath{\mathbf{X}}, d, \mu )$ is an $\mathsf{RCD} (R,\infty)$ space,
we have the log-Harnack inequality
\begin{equation*}
P_u ( \log g ) (o) - \frac{R d (x,o)^2}{2( e^{2Ru} - 1 )}
\leq \log P_u g (x) \leq \log \| g \|_{\infty}
\end{equation*}
(see \cite[Lem.~4.6]{AGS_BE} or \cite[Prop.~4.1]{LiHQ15}).
Moreover $\log g \in \mathcal{C}_2 (\ensuremath{\mathbf{X}})$ and $P_u \delta_o \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$ by the properties after~\eqref{eq:Lip-infinity}, so
we have $\int \log g \, d P_u \delta_o = P_u ( \log g ) (o) \in \dR$.
Thus $\log P_u g \in \mathcal{C}_2 (\ensuremath{\mathbf{X}})$.
\end{Proof}
We recall characterizations of convergence in $W_2$ for later use.
Let $\nu_n \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$, $n \in \mathbb{N}$ and $\nu \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$.
Then $W_2 ( \nu_n , \nu ) \to 0$ is equivalent to either of the following
(see e.g. \cite[Thm.~6.9]{villani-book2}):
\begin{itemize}
\item
$\nu_n \to \nu$ weakly and
$\displaystyle
\sup_{n \in \ensuremath{\mathbb{N}}} \int d ( o, x )^2 \nu_n (dx) < \infty
$,
\item
$\displaystyle \lim_{n \to \infty} \int \psi \, d \nu_n = \int \psi \, d \nu$
for any $\psi \in \mathcal{C}_2 (\ensuremath{\mathbf{X}})$.
\end{itemize}
We now turn to the {\it third estimate}.
\begin{elem}[Third estimate] \label{lem:Ent_conv1}
$$
\liminf_{s \to 0}
\frac{1}{s^2}
\int_0^t \! e^{-2R(t-u)}\SBRA{\ent{\mu}{P_u \tilde{g}_s}-\ent{\mu}{P_u g}}^2du
\geq
\int_0^t \! e^{-2R(t-u)} \SBRA{ \int P_u \big( g L^g f \big) \log P_u g \, d \mu }^2 du .
$$
\end{elem}
\begin{Proof}
By the Fatou lemma, it suffices to show
\[
\liminf_{s \to 0} \SBRA{ \frac{ \ent{\mu}{P_u \tilde{g}_s} - \ent{\mu}{P_u g} }{s} }^2
\geq
\SBRA{ \int P_u \big( g L^g f \big) \log P_u g \, d \mu }^2
\]
for each $u > 0$.
By~\eqref{eq:g-gs}
and since
$\ent{\mu}{P_u g} \in \dR$,
we have $P_u \tilde{g}_s \log P_u g, P_u g \log P_u g \in \mathbb{L}^1 (\mu)$. Moreover
$a^2 \geq (a+b)^2/(1+\delta)-b^2 / \delta$ for $\delta > 0$ and
$$
0\leq x\log x-x+1\leq (x-1)^2
$$
for $x\geq 0$,
so
$$
\left( \ent{\mu}{P_u \tilde{g}_s} - \ent{\mu}{P_u g} \right)^2
\geq
\frac{1}{1 + \delta }
\left(
\int (P_u \tilde{g}_s - P_u g ) \log P_u g \, d \mu
\right)^2
- \frac{1}{\delta}
\left( \int \frac{( P_u \tilde{g}_s - P_u g )^2}{P_u g} d\mu
\right)^2.
$$
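Here the elementary inequalities are applied as follows. By conservativity, $\int P_u\tilde{g}_s\,d\mu=\int P_ug\,d\mu=1$, so that
$$
\ent{\mu}{P_u \tilde{g}_s} - \ent{\mu}{P_u g}
=
\int ( P_u \tilde{g}_s - P_u g ) \log P_u g \, d \mu
+
\int P_u g \;\eta\Big(\frac{P_u \tilde{g}_s}{P_u g}\Big)\,d\mu
$$
with $\eta(x)=x\log x-x+1$. By the second bound above, the last integral lies in $\big[0,\int ( P_u \tilde{g}_s - P_u g )^2/P_u g \, d\mu\big]$, so the displayed inequality follows by taking $a$ equal to the difference of entropies and $-b$ equal to this last integral.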
By the Cauchy-Schwarz inequality for $P_u$,
$$
\limsup_{s \to 0} \frac{1}{s} \!
\int \! \frac{ ( P_u \tilde{g}_s - P_u g )^2}{P_u g} d\mu
\leq
\limsup_{s \to 0}
\frac{1}{s} \!
\int \! P_u \! \left(
\frac{( \tilde{g}_s - g )^2}{g}
\right)
d \mu
=
\limsup_{s \to 0}\,\, s\!\!
\int \! \left| \frac{ P_s^g f - f }{s} \right|^2 \! g \, d \mu
= 0.
$$
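Here the Cauchy-Schwarz inequality for the Markov operator $P_u$ is used pointwise in the form
\[
\left( P_u \tilde{g}_s - P_u g \right)^2
=
\left( P_u \left( \frac{ \tilde{g}_s - g }{ \sqrt{g} } \, \sqrt{g} \right) \right)^2
\leq
P_u \left( \frac{ ( \tilde{g}_s - g )^2 }{ g } \right) P_u g ,
\]
after which one divides by $P_u g$, integrates, and uses the invariance
$\int P_u \varphi \, d \mu = \int \varphi \, d \mu$ in the subsequent equality.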
Since $\delta > 0$ is arbitrary, it suffices to show
\begin{equation} \label{eq:Ent_conv1}
\lim_{s \to 0} \frac{1}{s} \int P_u \big( g( P^g_s f - f ) \big) \log P_u g \, d \mu
=
\int P_u \big( g L^g f \big) \log P_u g \, d \mu
\end{equation}
in order to complete the proof.
Here the well-definedness of the right-hand side
is included in the assertion.
Since $r \mapsto r_+$ is 1-Lipschitz,
$s^{-1} ( P^g_s f - f )_{+} = ( s^{-1} ( P^g_s f - f ) )_{+}$
converges to $(L^g f )_{+}$
in $\mathbb{L}^2 (g \mu)$ and hence in $\mathbb{L}^1 ( g \mu )$.
By~\cite[Thm.~4.16 (d)]{AGS13}, $\int L^g f \, g \, d \mu = 0$.
Hence $\|( L^g f )_{+} \|_{\mathbb{L}^1 (g\mu)} >0$ since $L^g f$ is not identically $0$
(as assumed at the end of Section~\ref{subsec-path}).
Thus $\| (P^g_s f - f)_{+} \|_{\mathbb{L}^1 ( g \mu )} > 0$
for sufficiently small $s > 0$.
Let us now define $\nu^f_s , \nu^f_0 \in \mathcal{P} (\ensuremath{\mathbf{X}})$ as follows:
\begin{align*}
\nu^f_s : =
\frac{ ( P^g _s f - f )_{+} }{ \| ( P^g_s f - f )_{+} \|_{\mathbb{L}^1 (g \mu)} } g \mu,
\qquad
\nu^f_0 : =
\frac{( L^g f )_+}{\| ( L^g f )_+ \|_{\mathbb{L}^1 (g \mu)} } g \mu .
\end{align*}
Then $\nu^f_s \to \nu_0^f$ weakly in $\mathcal{P} (\ensuremath{\mathbf{X}})$ as $s \to 0$.
Moreover, $\nu^f_s \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$ for $s \geq 0$ by (i) in Lemma~\ref{lem:integrable}
since $f, P_s^g f, L^g f \in \mathbb{L}^2 (g \mu)$, and
$W_2 ( \nu^f_s , \nu^f_0 ) \to 0$ as $s \to 0$,
again by (i) in Lemma~\ref{lem:integrable} applied with
$J_0 =\{\| ( P^g_s f - f )_+ \|_{\mathbb{L}^1 (g \mu )}^{-1}(P_s^g f - f)_+, s>0\}$,
and the remark after it.
Then, likewise, $P_u \nu^f_s \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$ for $u,s \geq 0$ and
\begin{equation} \label{eq:W2conv}
\lim_{s \to 0} W_2 ( P_u \nu^f_s , P_u \nu^f_0 ) = 0
\end{equation}
by~\eqref{eq-contraction-easy2}.
By Lemma~\ref{lem:integrable} again, $\log P_u g \in \mathcal{C}_2 (\ensuremath{\mathbf{X}})$
and in particular $\log P_u g \in \mathbb{L}^1 ( P_u \nu^f_0 )$.
Hence, by~\eqref{eq:W2conv} and the remark after Lemma~\ref{lem:integrable},
we obtain
\begin{multline*}
\lim_{s \to 0} \frac{1}{s} \int P_u ( g ( P_s^g f - f )_+ ) \log P_u g \, d \mu
=
\lim_{s\to 0} \frac{ \| ( P^g_s f - f)_+ \|_{\mathbb{L}^1 (g\mu)} }{ s }
\int \log P_u g \, d P_u \nu^f_s
\\
=
\| ( L^g f )_+ \|_{\mathbb{L}^1 (g\mu)}
\int \log P_u g \, d P_u \nu^f_0
=
\int P_u ( g ( L^g f )_+ ) \log P_u g \, d \mu \, \in \dR.
\end{multline*}
We can apply the same argument to $( P^g_s f - f )_-$ instead of
$( P^g_s f - f )_+$ to show the corresponding assertion.
In particular, the integral in the right-hand side of~\eqref{eq:Ent_conv1}
is well-defined and these two claims yield~\eqref{eq:Ent_conv1}.
\end{Proof}
\subsection{Conclusion of the proof of Theorem~\ref{thm:main_mms}}
\label{subsec:pf_mms}
Let $g$ be as in the last section, that is, given by~\eqref{eq:g}. To proceed, we recall the notion of semigroup mollification
introduced in \cite[Sec.~2.1]{AGS_BE}.
Let $\kappa \in \mathcal{C}^\infty_c (( 0, \infty ))$ with $\kappa \geq 0$
and $\int_0^\infty \kappa (r) \, d r = 1$.
For $\ep > 0$ and $f \in \mathbb{L}^p (\mu)$ with $p \in [ 1 , \infty ]$,
we define $\mathfrak{h}_\ep f$ by
\[
\mathfrak{h}_{\ep} f
: =
\frac{1}{\ep} \int_0^\infty
P_r f \; \kappa \left( \frac{r}{\ep} \right)
\, d r.
\]
It is immediate that
$\| \mathfrak{h}_\ep f - f \|_{\mathcal{E}} \to 0$
as $\ep \to 0$ for
$f \in \mathcal{D} ( \mathcal{E} )$.
Moreover, for $f \in \mathbb{L}^2 ( \mu ) \cap \mathbb{L}^\infty ( \mu )$,
$\mathfrak{h}_\ep f , L (\mathfrak{h}_\ep f) \in \mathcal{D} (L) \cap \mathrm{Lip}_b (\ensuremath{\mathbf{X}})$.
Here the latter one comes from the following representation:
\[
L \mathfrak{h}_\ep f
=
- \frac{1}{\ep^2} \int_0^\infty
P_r f \; \kappa' \left( \frac{r}{\ep} \right)
\, d r .
\]
\begin{eprop} \label{prop:preCD} Under the same assumptions as in Theorem~\ref{thm:main_mms}, let $f = \mathfrak{h}_\varepsilon f_0$ for some $\varepsilon > 0$
and $f_0 \in \mathbb{L}^2 (\mu) \cap \mathbb{L}^\infty (\mu)$.
Then $\Gamma (f) \in \mathcal{D} ( \mathcal{E} )$, and for $g$ as above
\begin{equation} \label{eq:preCD1}
\frac12 \int \Gamma ( \Gamma (f) , g ) \, d\mu
+
\int \Gamma ( f , L f ) g \, d \mu
\leq
- R \int \Gamma(f) g \, d\mu
-
\frac{1}{m} \PAR{ \int Lf \, g \, d\mu }^2 .
\end{equation}
\end{eprop}
\begin{Proof}
By assumption, $f \in \mathcal{D} (L) \cap \mathrm{Lip}_b (\ensuremath{\mathbf{X}})$.
Moreover, $\Gamma (f) = | \nabla f |^2$
$\mu$-a.e.\ by \cite[Thm.~3.17]{AGS_BE}.
Let $\eta > 0$ be so small that $\eta \| f \|_\infty \leq 1/4$.
By applying Lemma~\ref{lem:first}, Lemma~\ref{lem:second}
and Lemma~\ref{lem:Ent_conv1} to $\eta f$ instead of $f$
in~\eqref{eq-contraction-square-general-2},
\begin{multline*}
-\frac{\eta^2}{2}
\int P_t \Gamma (f) \; g \, d\mu
+
\eta^2 \int \Gamma ( f ,P_t f ) g \, d\mu
\\
\leq
\frac{e^{-2Rt} \eta^2 }{2 ( 1 - 2 \eta \| f \|_\infty ) }
\int \Gamma ( f ) g \, d\mu
-
\frac{\eta^2}{m} \int_0^t e^{-2R(t-u)}
\PAR{
\int
P_u ( ( L^g f ) g ) \log P_u g
\, d\mu
}^2
du.
\end{multline*}
By dividing this inequality by $\eta^2$ and letting $\eta \to 0$,
\begin{multline} \label{eq:preCD1-1}
-\frac{1}{2}
\int P_t \Gamma (f) \; g \, d\mu
+
\int \Gamma ( f ,P_t f ) g \, d\mu
\\
\leq
\frac{e^{-2Rt} }{2}
\int \Gamma ( f ) g \, d\mu
-
\frac{1}{m} \int_0^t e^{-2R(t-u)}
\PAR{
\int
P_u ( ( L^g f ) g ) \log P_u g
\, d\mu
}^2
du.
\end{multline}
By virtue of mollification by $\mathfrak{h}_\varepsilon$,
we have $L f \in \mathcal{D} (\mathcal{E})$ and
\begin{align*}
\left. \frac{d}{dt} \right|_{t=0}
\int \Gamma ( f ,P_t f ) g \, d\mu
& =
- \frac{1}{\varepsilon^2} \int_0^\infty \kappa' \PAR{ \frac{r}{\varepsilon} }
\int \Gamma ( f , P_r f_0 ) g \, d\mu
d r
=
\int \Gamma ( f , L f ) g \, d\mu .
\end{align*}
Note that $\Gamma (f) \in \mathcal{D} ( \mathcal{E} )$
(hence the left-hand side of~\eqref{eq:preCD1} is well-defined).
This fact follows from \cite[Lem.~3.2]{Savare:2014jm}
with the aid of mollification by $\mathfrak{h}_\ep$.
Then, by Lemma~\ref{lem:Ent_conv2} below,
we can differentiate~\eqref{eq:preCD1-1} at $t = 0$ to obtain
\begin{align*}
\frac{1}{2}
\int \Gamma ( \Gamma (f), g ) \, d\mu
+
\int \Gamma ( f , L f ) g \, d\mu
& \leq
- R \int \Gamma ( f ) g \, d\mu
- \frac{1}{m}
\PAR{
\int
( L^g f ) g \log g
\, d\mu
}^2
\\
& =
- R \int \Gamma ( f ) g \, d\mu
- \frac{1}{m}
\PAR{
\int
( L f ) g
\, d\mu
}^2 .
\end{align*}
Here we have used~\eqref{eq:g-log} also in the last equality.
This is nothing but the desired inequality.
\end{Proof}
\begin{elem} \label{lem:Ent_conv2}
For $\psi \in \mathbb{L}^2 ( g \mu )$,
\[
\lim_{u \to 0} \int P_u ( \psi g ) \log P_u g \, d \mu
=
\int \psi g \log g \,d \mu .
\]
\end{elem}
\begin{Proof}
We may assume $\psi \ge 0$ and $\psi g \mu \in \mathcal{P} (\ensuremath{\mathbf{X}})$
without loss of generality.
Then in particular $\psi g \mu \in \mathcal{P}_2 (\ensuremath{\mathbf{X}})$ by Lemma~\ref{lem:integrable}.
First of all,
\[
\int P_u ( \psi g ) | \log P_u g | \, d \mu < \infty
\]
by a similar argument as in Lemma~\ref{lem:integrable}. Thus
\[
\int P_{u} ( \psi g ) \log P_{u} g \, d \mu
= \int \psi g P_{u} ( \log P_{u} g ) \, d \mu
\leq
\int \psi g \log P_{2u} g \, d \mu
\]
by the Fubini theorem and the Jensen inequality for $P_u$
as integral operator.
Now, for each $x$, $\lim_{u \to 0} W_2 ( P_u \delta_x , \delta_x ) = 0$ by the remark after Theorem~\ref{thm:main_mms}, and $g$ is bounded and continuous, so $P_u g(x) = \int g \, d P_u \delta_x \to g(x)$. Moreover $\log P_{2u} g \leq \log \| g \|_\infty$ and $\psi g \mu$ is a probability measure, so
by the Fatou lemma
\begin{equation} \label{eq:Ent_conv2up}
\limsup_{u \to 0}
\int P_{u} ( \psi g ) \log P_{u} g \, d \mu
\leq
\int \psi g \log g \, d \mu .
\end{equation}
For the opposite bound, again by the Jensen inequality for $P_u$,
\[
\int P_{u} ( \psi g ) \log P_{u} g \, d \mu
\geq
\int P_u ( \psi g ) P_u ( \log g ) \, d \mu
=
\int \log g P_{2u} ( \psi g ) \, d \mu .
\]
Moreover $\log g$ is in $\mathcal{C}_2 (\ensuremath{\mathbf{X}})$ and $W_2 ( P_{2 u} ( \psi g ) \mu , \psi g \mu ) \to 0$ as $u \to 0$, again by the remark after Theorem~\ref{thm:main_mms}. Hence, by the remark after Lemma~\ref{lem:integrable},
we obtain
\begin{equation} \label{eq:Ent_conv2down}
\liminf_{u \to 0} \int P_{u} ( \psi g ) \log P_{u} g \, d \mu
\geq
\lim_{u \to 0} \int P_{2u} ( \psi g ) \log g \, d \mu
=
\int \psi g \log g \, d \mu .
\end{equation}
Hence the conclusion follows from the combination of
\eqref{eq:Ent_conv2up} and~\eqref{eq:Ent_conv2down}.
\end{Proof}
We are now in a position to complete the proof of Theorem~\ref{thm:main_mms}.
\begin{tProof}{Theorem~\ref{thm:main_mms}} The last crucial step consists in transforming
$(
\int
( L f ) g
\, d\mu
)^2$
into
$
\int
( L f )^2 g
\, d\mu
$
which will be done by a localization procedure. Let $f$ be as in Proposition~\ref{prop:preCD}.
Remark first that, by letting $\lambda \to 0$ in the definition~\eqref{eq:g}, we obtain
\eqref{eq:preCD1} for $g_0$ instead of the function $g$ of~\eqref{eq:g}.
To put the square inside the integral in~\eqref{eq:preCD1}, we need to \emph{localize} this inequality, and thus we employ a partition of unity.
Let $\eta > 0$. Since $L f \in \mathrm{Lip}_b (\ensuremath{\mathbf{X}})$,
we can take $\delta > 0$ sufficiently small
so that $| L f (x) - L f (y) | < \eta$
for any $x,y \in \ensuremath{\mathbf{X}}$ with $d(x,y) < 4 \delta$.
Since $\mathrm{supp}\, g_0$ is compact,
there is $\{ x_i \}_{i=1}^n \subset \ensuremath{\mathbf{X}}$ such that
$\mathrm{supp}\, g_0 \subset \bigcup_{i=1}^n B_{\delta} (x_i)$
(note that we require the regularity assumption~(Reg2)
only at this point).
Let us define $\tilde{\psi}_i$
($i=1, \ldots , n$)
by $\tilde{\psi}_i (x) : = 0 \vee ( 2 \delta - d (x_i , x ) )$
and
\[
\psi_i (x) :=
\begin{cases}
\displaystyle
\frac{ \tilde{\psi}_i (x) }{ \sum_{j=1}^n \tilde{\psi}_j (x) }
&
\mathrm{if} \quad \tilde{\psi}_i (x) \neq 0 ,
\\
0 & \mathrm{if} \quad \tilde{\psi}_i (x) = 0.
\end{cases}
\]
Then $\psi_i \in \mathrm{Lip} (\mathrm{supp}\ g_0)$, $0 \leq \psi_i \leq 1$,
$\mathrm{supp}\, \psi_i \subset B_{2 \delta} (x_i)$
and $\sum_{i=1}^n \psi_i (x) = 1$ for $x \in \mathrm{supp}\, g_0$.
By applying~\eqref{eq:preCD1} for $\psi_i g_0 / \| \psi_i g_0 \|_{\mathbb{L}^1 (\mu)}$
instead of $g_0$, we have
\begin{align*}
\frac12 & \int \Gamma ( \Gamma ( f ) , g_0 ) \, d \mu
+
\int \Gamma (f , L f ) g_0 \, d \mu
=
\sum_{i =1}^n
\left(
\frac12 \int \Gamma ( \Gamma ( f ) , \psi_i g_0 ) \, d \mu
+
\int \Gamma (f , L f ) \psi_i g_0 \, d \mu
\right)
\\
& \leq
- R \int \Gamma (f) g_0 \, d \mu
- \frac{1}{m}
\sum_{i=1}^n \frac{1}{ \| \psi_i g_0 \|_{\mathbb{L}^1 (\mu)}}
\left(
\int ( Lf ) \psi_i g_0 \, d \mu
\right)^2 .
\end{align*}
By the choice of $\delta$ and $\{ \psi_i \}_{i=1}^n$, with $\eta<1$,
\begin{align*}
\sum_{i=1}^n \frac{1}{ \| \psi_i g_0 \|_{\mathbb{L}^1 (\mu)}}
\left(
\int ( Lf ) \psi_i g_0 \, d \mu
\right)^2
& \geq
(1-\eta) \sum_{i=1}^n \| \psi_i g_0 \|_{\mathbb{L}^1 (\mu)} Lf (x_i)^2
- \eta
\\
& \geq
(1-\eta)\int ( Lf )^2 g_0 \, d\mu-\eta-2\eta(1-\eta)\|Lf\|_{\infty}
\end{align*}
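Here, in more detail: set $a_i := Lf (x_i) \, \| \psi_i g_0 \|_{\mathbb{L}^1 (\mu)}$ and
$b_i := \int ( Lf - Lf (x_i) ) \psi_i g_0 \, d \mu$, so that
$| b_i | \leq \eta \, \| \psi_i g_0 \|_{\mathbb{L}^1 (\mu)}$
because $\mathrm{supp}\, \psi_i \subset B_{2 \delta} (x_i)$.
Then
\[
\frac{ ( a_i + b_i )^2 }{ \| \psi_i g_0 \|_{\mathbb{L}^1 (\mu)} }
\geq
( 1 - \eta ) \, \| \psi_i g_0 \|_{\mathbb{L}^1 (\mu)} \, Lf (x_i)^2
- \eta ( 1 - \eta ) \, \| \psi_i g_0 \|_{\mathbb{L}^1 (\mu)} ,
\]
and summing over $i$, with $\sum_{i=1}^n \| \psi_i g_0 \|_{\mathbb{L}^1 (\mu)} = \| g_0 \|_{\mathbb{L}^1 (\mu)} = 1$,
gives the first inequality since $\eta ( 1 - \eta ) \leq \eta$.
The second inequality follows from
$Lf (x_i)^2 \geq Lf (x)^2 - 2 \eta \| Lf \|_\infty$ for $x \in \mathrm{supp}\, \psi_i$.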
By letting $\eta \to 0$,
\[
- \frac12 \int \Gamma ( \Gamma ( f ) , g_0 ) \, d \mu
-
\int \Gamma (f , L f ) g_0 \, d \mu
\geq
R \int \Gamma (f) g_0 \, d \mu
+
\frac{1}{m} \int ( Lf )^2 g_0 \, d \mu .
\]
Let now $g \in \mathcal{D} (L) \cap \mathbb{L}^\infty (\mu)$
with $g \geq 0$ and $L g \in \mathbb{L}^\infty (\mu)$,
as in Theorem~\ref{thm:main_mms}.
By virtue of mollification by $\mathfrak{h}_\ep$,
\eqref{eq:Gamma-nabla} and~\eqref{eq:Lip-infinity},
we have
$
\Gamma (f) , \Gamma ( f , L f ), ( Lf )^2
\in
\mathbb{L}^1 (\mu) \cap \mathbb{L}^\infty (\mu)
$.
Thus we can replace $g_0$ in the last inequality
with $g_1 \in \mathrm{Lip}_b (\ensuremath{\mathbf{X}}) \cap \mathcal{D} ( \mathcal{E} )$,
by a standard truncation argument.
Then we can replace $g_1$ with $g$
since $\mathcal{D} (\mathcal{E}) \cap \mathrm{Lip}_b (\ensuremath{\mathbf{X}})$
is dense in $\mathcal{D} (\mathcal{E})$
with respect to $\| \cdot \|_{\mathcal{E}}$.
Finally, we remove the mollification $\mathfrak{h}_\ep$.
Let $f \in \mathcal{D} ( L )$ with $L f \in \mathcal{D} (\mathcal{E})$
and $f_n : = (-n) \vee f \wedge n$.
Then we have, from the integration by parts formula~\eqref{eq-ipp-mms},
\[
\frac12 \int \Gamma ( \mathfrak{h}_\ep f_n ) L g \, d \mu
-
\int \Gamma ( \mathfrak{h}_\ep f_n , L \mathfrak{h}_\ep f_n ) g \, d \mu
\geq
R \int \Gamma ( \mathfrak{h}_\ep f_n ) g \, d \mu
+
\frac{1}{m} \int ( L \mathfrak{h}_\ep f_n )^2 g \, d \mu .
\]
By virtue of mollification by $\mathfrak{h}_\ep$,
$\| \mathfrak{h}_\ep f_n - \mathfrak{h}_\ep f \|_{\mathcal{E}} \to 0$
and $\| L \mathfrak{h}_\ep f_n - L \mathfrak{h}_\ep f \|_{\mathcal{E}} \to 0$
as $n \to \infty$.
Thus we obtain~\eqref{eq:weak-BE}
by letting $n \to \infty$ and then $\ep \to 0$,
taking $L \mathfrak{h}_\ep f = \mathfrak{h}_\ep L f$ into account.
\end{tProof}
\section{Links with functional inequalities}
{\bf{A new proof of the entropy-energy inequality}}
\medskip
We now
consider the case where $R>0$ and $\mu$ is a probability measure. It is classical that the $CD(R,m)$ condition implies the entropy-energy inequality
\begin{equation} \label{eq:En-En}
\ent{\mu}{f}\leq \frac{m}{2}\log\left(1+\frac{1}{mR}I(f)\right)
\end{equation}
for any function $f$ such that $\int fd\mu=1$. Here $I(f)=\int\Gamma(f)/fd\mu$ is the Fisher information of~$f$. This inequality is given in~\cite[Thm.~6.8.1]{bgl-book} for instance, and also in \cite[Cor. 3.28]{EKS13} via the $(R,m)$-convexity of $\entf{\mu}$.
Inequality~\eqref{eq:En-En} improves upon the standard dimension-free logarithmic Sobolev inequality $\ent{\mu}{f}\leq I(f)/ 2R,$ a consequence of the $CD(R, \infty)$ condition. It leads
for example to a sharp bound on the instantaneous creation of the entropy of the heat semigroup in $\mathcal P_2(\ensuremath{\mathbf{X}})$, namely
$$
\ent{\mu}{P_tf}\leq \frac m2 \log\frac{1}{1-e^{-2Rt}}
$$
for all $f$ and $t>0$. For similar bounds, see also \cite[Prop. 2.17]{EKS13} for a gradient flow argument starting from the $(R,m)$-convexity of $\entf{\mu}$, and~\cite[Prop. 3.1]{BGG15} for Fokker-Planck equations on $\mathbb R^m$ with $R$-convex potentials.
\smallskip
The two approaches of \cite{bgl-book} and \cite{EKS13} are rather involved, and we now give a direct way of recovering~\eqref{eq:En-En}, first formally and then rigorously below, from the contraction inequality~\eqref{eq-contraction-square-general-2} in Theorem~\ref{thm-legros} (which is equivalent to the $CD(R,m)$ condition). The key point is the (formal) identity
\begin{equation} \label{eq:speed}
\limsup_{\delta \downarrow 0}\frac{W_2^2(P_{\delta + t } f\mu, P_t f\mu)}{\delta^2}
= I( P_t f)
\end{equation}
(see e.g.~\cite[Equation~(26)]{ov00}) and the classical identity $\frac{d}{du}\ent{\mu}{P_uf}=-I(P_uf)$.
Indeed, from inequality~\eqref{eq-contraction-square-general-2} and the Fatou lemma, for any $0\leq s<t$,
\begin{align*}
I ( P_t f )
=
\limsup_{\delta \downarrow 0} \frac{ W_2^2(P_{t+\delta} f\mu,P_{t} f \mu) }{\delta^2}
& \leq
e^{-2R(t-s)}
\limsup_{\delta \downarrow 0} \frac{ W_2^2(P_{s+ \delta} f \mu, P_s f \mu) }{\delta^2}
\\ &\quad - \! \frac{2}{m} \! \int_s^t \! e^{-2R(t-u)}
\liminf_{\delta \downarrow 0}
\left(
\frac{ \ent{\mu}{P_{u+\delta} f} - \ent{\mu}{P_u f} \!}{\delta}
\right)^2 \! du
\\
& =
e^{-2R(t-s)} I ( P_s f )
- \frac{2}{m}\int_s^{t}e^{-2R(t-u)} I ( P_u f )^2 du.
\end{align*}
This yields the differential inequality
$$
\frac{d}{dt}I(P_tf)\leq - 2R \, I(P_tf)-\frac2mI(P_tf)^2
$$
and then
\begin{equation}\label{eq:I2}
I ( P_t f ) \le \frac{ m R I (f) }{ e^{2Rt} ( I(f) + m R ) - I (f) }
\end{equation}
by integration on $[0,t]$. The entropy-energy inequality~\eqref{eq:En-En} follows by further integrating~\eqref{eq:I2} on $[0, + \infty)$ and using
$\displaystyle \lim_{t \to \infty} \ent{\mu}{P_t f} = 0$.
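Both integrations are elementary; we record them for completeness. Setting $u (t) := 1 / I ( P_t f )$, the differential inequality gives $u' \geq 2 R u + 2/m$, hence $( e^{-2Rt} u )' \geq \frac{2}{m} e^{-2Rt}$ and
\[
u (t) \geq e^{2Rt} u (0) + \frac{ e^{2Rt} - 1 }{ m R } ,
\]
which is~\eqref{eq:I2}. Moreover
$\frac{m}{2} \log \left( ( I(f) + mR ) e^{2Rt} - I(f) \right) - mRt$
is an antiderivative of the right-hand side of~\eqref{eq:I2}, whence
\[
\int_0^\infty \frac{ m R \, I (f) }{ e^{2Rt} ( I(f) + m R ) - I (f) } \, dt
=
\frac{m}{2} \log \left( 1 + \frac{ I (f) }{ m R } \right) ,
\]
which combined with $\ent{\mu}{f} = \int_0^\infty I ( P_t f ) \, dt$ yields~\eqref{eq:En-En}.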
Before making this argument rigorous we give a formal argument for~\eqref{eq:speed} at $t=0$, alternative to the one in~\cite{ov00}. For simplicity, assume that $\mu=dx$ is
the Riemannian measure and $(P_t)_{t \geq 0}$ is the heat semigroup associated with the Laplace-Beltrami operator $L = \Delta$. Let $f$ be a probability density with respect to $dx$. First
$$
\partial_s P_{s\delta} f + \nabla\cdot(w_sP_{s\delta}f) =0,
$$
where $w_s=-\delta\nabla\log P_{s\delta}f$. Then one can check that, to first order in $\delta$, the pair $(P_{s\delta}f,w_s)_{s\in[0,1]}$ is optimal between $P_\delta f\mu$ and $f\mu$ in the Benamou-Brenier formulation (see~\cite[Chap. 7]{villani-book2}). Hence
$$
\frac{W_2^2(P_\delta f\mu,f\mu)}{\delta^2}=\int_0^1\int |\nabla\log P_{s\delta}f|^2P_{s\delta}fd\mu ds + o(1) \rightarrow I(f), \quad \delta \to 0.
$$
\medskip
\begin{ethm}
In a REM space as in Section~\ref{sec-mms}, the contraction inequality~\eqref{eq-contraction-square-general-2} implies the entropy-energy inequality~\eqref{eq:En-En}.
\end{ethm}
\begin{Proof}
Let $f$ be a probability density with $f\mu \in \mathcal P_2(\ensuremath{\mathbf{X}})$ and $I (f) < \infty,$ as we can assume.
Recall that $( \ensuremath{\mathbf{X}} , d, \mu )$ is a $\mathsf{RCD} (R, \infty)$ space
under our assumption~\eqref{eq-contraction-square-general-2}.
Thus, by \cite[Thm.~9.3 (i) and Thm.~8.5 (i)]{AGS13},
\begin{equation} \label{eq:speed2}
- \frac{d}{du}\ent{\mu}{P_uf}
=
I(P_uf)
=
\limsup_{\delta \downarrow 0} \frac{ W_2^2 ( P_{u+\delta} f \mu , P_u f \mu ) }{\delta^2}
\end{equation}
for a.e.~$u \in (0, + \infty)$. In particular,~\eqref{eq:speed} holds almost everywhere and, proceeding as above,
\begin{equation}\label{eq:I}
I(P_tf)\leq e^{-2R(t-s)}I(P_sf)-\frac{2}{m}\int_s^t e^{-2R(t-u)}I(P_uf)^2du
\end{equation}
for any $t > s>0$ where~\eqref{eq:speed2} is valid.
\smallskip
We now prove that~\eqref{eq:I} holds for all $t>s \geq0$.
For this, set $\psi (t) := e^{2Rt} I ( P_t f )$.
Then $\psi$ is non-increasing on $[ 0, \infty )$ by a standard argument:
Indeed, by $CD ( R, \infty )$ with the self-improvement argument in \cite{Savare:2014jm}, we have
$\sqrt{ \Gamma ( P_t f )} \le e^{-Rt} P_t ( \sqrt{ \Gamma (f) } )$ for all $t \geq 0$. It yields
\[
\frac{ \Gamma ( P_t f ) }{ P_t f }
\le e^{-2R(t-s)} \frac{ \left( P_{t-s} ( \sqrt{\Gamma (P_s f)} ) \right)^2 }{ P_{t-s} ( P_s f ) }
\le
e^{-2R(t-s)}
P_{t-s} \left(
\frac{ \Gamma ( P_s f ) }{ P_s f }
\right).
\]
Thus the claim follows by integrating this inequality with respect to $\mu$.
Moreover $t \mapsto I ( P_t f )$ is lower semi-continuous (see e.g.~\cite[Lem.~4.10]{AGS13}).
Thus $\psi$ is lower semi-continuous and non-increasing on $[0,\infty)$, hence also right-continuous.
This implies that~\eqref{eq:I} holds for $t >s \geq 0$.
\smallskip
Let now $\delta > 0$. By dividing~\eqref{eq:I} by
$e^{-2Rt}( \psi(t) + \delta ) ( \psi(s) + \delta )$, for $t > s > 0$,
\begin{equation} \label{eq:pre_En-En1}
\frac{2}{m ( \psi (s) + \delta ) ( \psi (t) + \delta ) }
\int_s^t e^{-2Ru} \psi (u)^2 \, du
\leq
\frac{1}{ \psi (t) + \delta }
-
\frac{1}{ \psi (s) + \delta } \, .
\end{equation}
We claim
\begin{equation} \label{eq:pre_En-En2}
\frac{ 2 ( 1 - \delta ) }{m}
\int_{0}^t e^{-2Ru}
\left(
\frac{ \psi (u) }{ \psi (u) + \delta }
\right)^2
\, du
\le
\frac{1}{ \psi (t) + \delta }
-
\frac{1}{ \psi (0) + \delta }
\end{equation}
for any $t \in [0, \infty)$.
For the proof of the claim, we let $J$ be the subset of $t \in [ 0 , \infty )$
satisfying~\eqref{eq:pre_En-En2} and prove $J = [ 0 , \infty )$.
First, $0 \in J$ obviously holds and hence $J \neq \emptyset$.
Second, if $t \in J$ and $t' \in ( t , \infty )$ with $t'- t$ sufficiently small,
then $t' \in J$. Indeed, by the right continuity of $\psi$,
we have
$\psi (u) + \delta \geq ( 1 - \delta ) ( \psi (t) + \delta )$
for any $u > t$ sufficiently close to $t$.
We take $t' > t$ so that
this holds for all $u \in ( t , t' )$.
Thus~\eqref{eq:pre_En-En2} for this $t$,~\eqref{eq:pre_En-En1} and $\psi$ being non-increasing
yield
\begin{align*}
\frac{ 2 ( 1 - \delta ) }{m}
\int_{0}^{t'} e^{-2Ru}
&\left(
\frac{ \psi (u) }{ \psi (u) + \delta }
\right)^2
\, du
\\
& \leq
\frac{1}{ \psi (t) + \delta }
-
\frac{1}{ \psi (0) + \delta }
+
\frac{2}{m ( \psi (t) + \delta ) ( \psi (t') + \delta )}
\int_{t}^{t'} e^{-2Ru} \psi (u)^2 \, du
\\
& \le
\frac{1}{ \psi (t') + \delta }
-
\frac{1}{ \psi (0) + \delta }
\end{align*}
and hence $t' \in J$.
Third, $J$ is closed under increasing sequences.
That is, for any bounded increasing sequence $( t_n )_{n \in \ensuremath{\mathbb{N}}}$ in $J$, we have
$\displaystyle \lim_{n \to \infty} t_n \in J$.
This property follows from the fact that $\psi$ is lower semi-continuous.
Now these three properties imply $J = [ 0 , \infty )$: if the supremum $T$ of those $t \geq 0$ with $[ 0 , t ] \subset J$ were finite, then $T \in J$ by the third property, and the proof of the second one gives $( T , t' ] \subset J$ for some $t' > T$, a contradiction. Hence the claim holds.
\smallskip
Finally we obtain~\eqref{eq:I2} for all $t \geq 0$
by taking $\delta \downarrow 0$ and rearranging terms
in~\eqref{eq:pre_En-En2}.
But
\begin{equation}\label{HI}
\ent{\mu}{f} - \ent{\mu}{P_t f} = \int_0^t I ( P_s f ) ds
\end{equation}
for all $t$ by \cite[Thms.~9.3 (i) and~8.5 (i)]{AGS13} again.
Hence integrating~\eqref{eq:I2} in $t$ concludes the proof.~\end{Proof}
\bigskip
{\bf{A dimensional HWI type inequality}}
\medskip
For $R$ zero or negative, no logarithmic Sobolev inequality for $\mu$ holds in general; following~\cite{ov00} it can be replaced by an HWI interpolation inequality with an additional $W_2$ term: an inequality giving an upper bound on the entropy $H$ in terms of the distance $W_2$ and the Fisher information $I$. As above, let us see how to derive a dimensional form of this inequality from the contraction property~\eqref{eq-contraction-sh} in Theorem~\ref{thm-legros}.
In a REM space as in Section~\ref{sec-mms}, with a reference measure $\mu$ in $\mathcal P_2(\ensuremath{\mathbf{X}})$, assume the contraction property~\eqref{eq-contraction-sh} with $R=0$. Let $f, g$ be such that $f \mu, g \mu \in \mathcal P_2(\ensuremath{\mathbf{X}})$, $I(f) < \infty$ and $g \mu$ has bounded support. Recall first that $( \ensuremath{\mathbf{X}} , d, \mu )$ is an $\mathsf{RCD} (0, \infty)$ space
under our assumption~\eqref{eq-contraction-sh}. In particular $I(P_t f) \leq I(f)$ for all $t \geq 0.$ Then \cite[Thm.~6.3]{AGMR} and the Cauchy-Schwarz inequality yield
$$
\frac{1}{2} \frac{d}{dt}
W_2^2(P_t f \mu, g \mu )
\geq
- W_2 ( P_t f \mu, g \mu ) \sqrt{I( P_tf)}
$$
for almost every $t>0.$ In particular
$$
\frac{1}{2}
W_2^2(P_t f \mu, g \mu )
-
\frac{1}{2}
W_2^2(f \mu, g \mu )
\geq - \int_0^t W_2 ( P_s f \mu, g \mu ) \sqrt{I( P_s f )} \, ds
\geq - \int_0^t W_2 ( P_s f \mu, g \mu ) \sqrt{I( f )} \, ds
$$
for all $t \geq 0$.
If now $g$ converges to $1$ in such a way that $g \mu $ converges to $\mu$ in the $W_2$ distance, then using the triangle inequality
$$
\big\vert W_2(P_s f \mu, g \mu) - W_2(P_s f \mu, \mu) \big\vert \leq W_2(g \mu, \mu)
$$
for any $0 \leq s \leq t$ one can pass to the limit above, leading to
$$
\frac{1}{2}
W_2^2(P_t f \mu, \mu )
-
\frac{1}{2}
W_2^2(f \mu, \mu )
\geq - \int_0^t W_2 ( P_s f \mu, \mu ) \sqrt{I( f )} \, ds.
$$
Now by~\eqref{eq-contraction-sh} the left-hand side is bounded from above by
$$
-4m \int_0^t \sinh^2 \left(\frac{ \ent{\mu}{P_s f} }{2m} \right) ds.
$$
Finally $s \mapsto W_2(P_sf \mu, \mu)$ and $s \mapsto \ent{\mu}{P_s f}$ are continuous on $[0,t]$, so one can let $t$ go to $0$ and obtain
\begin{equation} \label{eq-HWI1}
\sinh^2\left(\frac{ \ent{\mu}{f}}{2m} \right)\leq \frac{1}{4m}W_2(f\mu,\mu)\sqrt{I(f)}.
\end{equation}
A corresponding bound can also be derived for any $R$, in which the $s_{R/m}$ function appears.
\medskip
Here is a possible application of~\eqref{eq-HWI1}: in the above notation and assumptions (with $R=0$), there exists a positive numerical constant $C$ such that
$$
\ent{\mu}{P_t f} \leq \frac{m}{2} \max \Big\{ C , \log \frac{W_2^2(f \mu, \mu)}{mt} \Big\}, \quad t>0
$$
for all $f$ with $f \mu \in \mathcal P_2$.
This bound is a consequence of~\eqref{HI}, \eqref{eq-HWI1} with $P_t f$ instead of $f$, the bounds $W_2(P_t f \mu, \mu) \leq W_2(f\mu, \mu)$ and $\sinh^4 (x) \geq e^{4x}/32$ for $x$ large enough.
For short time, this gives a regularization bound of the entropy as $m/2 \log (1/t)$, which is exactly the behaviour observed above for $R>0$, and also for the heat kernel on $\mathbb R^m$; it also improves on the corresponding bound $m \log (1/t)$ in~\cite[Prop. 2.17, (ii)]{EKS13}.
\medskip
\noindent
{\bf Acknowledgments.} This research was supported by the French ANR-12-BS01-0019 STAB project
and JSPS Grant-in-Aid for Young Scientist (A) 26707004.
|
1,116,691,500,940 | arxiv | \section{Introduction}
Laser cooling and manipulation of neutral atoms is one of the priority fields of atom
optics. Recently, major developments and successes have been achieved in atom
lithography and in the direct deposition of atoms utilizing light fields as an immaterial
optical mask for an atomic beam \cite{Meschede03,Oberthaler03}. In most nanofabrication
experiments, atomic structures are realized by a far off-detuned periodic conservative
potential created by intense laser fields acting as an array of immaterial light lenses
for the atoms. The influence of spontaneous emission on the focusing is considered
to be negligible because of the large light detuning and short interaction times. In
essence, the atom trajectory is affected by the conservative dipole force without any loss (or
dissipation) of energy in the atomic beam. In this case the atomic beam focusing has a
classical analogy and can be described with methods developed for particle optics
\cite{McClelland91}. As in conventional optics, the feature size is limited by chromatic
aberration caused by the broad longitudinal and transverse velocity
distributions of an atomic beam. Therefore an additional laser cooling field is required
to prepare a well-collimated and transversely cooled atomic beam in order to minimize these
deleterious effects. Additionally, because of spherical aberration some atoms do not focus well and
contribute to a pedestal background. These factors are dominant and do not allow one to
reach the theoretically predicted diffraction limit of atom optics, determined by the de
Broglie wavelength of the atoms, of only a few picometres. Therefore new alternative
methods for atom lithography are being intensively investigated.
Recently, the idea of combining the traditional focusing method with the well-known
concept of laser cooling was suggested for a blue-detuned intense light field
\cite{Stutzle03,Pru04}, and was mentioned earlier in \cite{kaz}. Here the intense light
field, commonly used for focusing, creates a deep optical potential, and an
additional dissipative light force cools the atoms towards the
minima of the optical potential at blue detuning. The characteristic time over which the
dissipation processes take effect is a few inverse recoil frequencies $\omega_{R}^{-1}$
(where $\hbar \, \omega_{R}=\hbar^2 k^2/2M$ is the recoil energy gained by an atom of
mass $M$, initially at rest, after emission of a photon with momentum $\hbar k$). This time is
several tens of microseconds for the elements with closed dipole optical
transitions suitable for laser cooling. Thus, for commonly used atomic beams with thermal
longitudinal velocities, it might be difficult to realize this type of dissipative optical
mask experimentally due to the power limitations of the laser systems used.
In the following we consider an alternative regime of a dissipative optical mask, created
by a red-detuned low-intensity light field with nonuniform polarization. It is well known
that a low-intensity light field with polarization gradients can be used for sub-Doppler
laser cooling of neutral atoms. This mechanism of laser cooling is well understood and
has been thoroughly studied by a number of authors
\cite{Wineland79,Dal89,Juha92,Berman93, Castin94}, especially with respect to the
temperature of laser cooling, the atomic momentum distribution, and the localization
\cite{Gatzke97,Marksteiner95}, which can be measured by spectroscopic methods
\cite{Jessen92,Raithel97}. Due to the extremely complex master equation for the quantum
description of atomic motion in light fields, the secular approximation
\cite{Dal91,Dal93,Berman93,Castin94,Deutsch97} was initially suggested; it is valid in the
limit \cite{Dal91}
\begin{equation}\label{sec}
\sqrt{U_0/\hbar \omega_R} \ll |\delta|/\gamma \, .
\end{equation}
This limit assumes that the energy separation between different energy bands of atoms in
the optical potential is much greater than their width due to optical pumping and tunneling
effects. Here the light-shift well depth $U_0$ defines the optical potential depth,
$\delta =\omega-\omega_0$ is the detuning between the laser frequency $\omega$ and the atomic
transition frequency $\omega_0$, and $\gamma$ is the radiative decay rate. This
approximation is valid, for a given potential depth, in the limit of large detuning.
Conversely, it may fail in a deep potential at a given detuning. Moreover, even if
the secular approximation is well fulfilled for the lowest vibrational levels, it may
break down for the upper ones, where the separation between energy bands becomes smaller due
to anharmonicity of the potential, and especially for atoms in above-barrier motion.
In the present work we investigate the applicability of laser cooling in a deep optical
potential created by a light field with nonuniform polarization for generating spatially
localized atomic structures with high contrast for atom lithography. We consider
conditions far from the regime of extremely low sub-Doppler cooling temperatures. Thus, to
describe the localization of atoms more accurately, we do not restrict our
consideration to the secular approximation. Rather, we perform a full quantum numerical
analysis of the generalized optical Bloch equations for the atomic density matrix elements.
In particular, we consider light field parameters beyond the secular approximation
limit. Finally, we analyze the width and contrast of the localized atomic structures,
parameters important for technological applications.
\section{Master equations}
Let us consider one-dimensional (along the z axis) motion of atoms
with total angular momenta $j_g$ in the ground state and $j_e$ in
the excited state in a field of two oppositely propagating waves
with the same frequency and intensity
\begin{eqnarray}\label{field}
{\bf E}(z,t) &=& E_{0}\left({\bf e}_1 \,e^{ikz} +
{\bf e}_2\, e^{-ikz}\right)e^{-i\omega t} + c.c. \nonumber \\
&& {\bf e}_{n} = \sum_{q =0,\pm 1} e^{q}_{n} {\bf e}_q \,, \,\,\,
n=1,2
\end{eqnarray}
Here $E_0$ is the amplitude of each of the oppositely propagating
waves. The unit vectors ${\bf e}_1$ and ${\bf e}_2$ determine
their polarizations, with components $e^{q}_{n}$ in the cyclic basis
$\{ {\bf e}_0 = {\bf e}_z, {\bf e}_{\pm 1} = \mp ({\bf e}_x \pm
i\, \bf{e}_y)/\sqrt{2} \}$.
In this work we restrict our consideration to the weak-field
limit, i.e. to a small saturation parameter
\begin{equation}\label{sat}
S = \frac{\Omega^2}{\delta^2+\gamma^2/4} \,.
\end{equation}
Here $\Omega = -E_0d/\hbar$ is the single-beam Rabi frequency,
which characterizes the coupling between the atomic dipole moment $d$
and the light field.
In the weak-field limit, the atomic excited state can be
adiabatically eliminated, and the atomic motion is described by a
reduced equation for the ground-state density matrix elements
\cite{Berman93,Castin94}:
\begin{equation}\label{equation}
\frac{d}{d t} {\hat \rho} = -\frac{i}{\hbar}\left[{\hat H}, {\hat \rho}
\right]+{\hat \Gamma}\{{\hat \rho}\}
\end{equation}
where the Hamiltonian $\hat{H}$ is given by
\begin{equation}\label{ham}
{\hat H} = \frac{{\hat p}^2}{2M} + \hbar\,\delta S \;{\hat V}^{{\dagger}}{\hat
V}\,.
\end{equation}
The last term in (\ref{ham}) describes the interaction of the atoms with the
light field in the resonance approximation, where
\begin{eqnarray}\label{Vmat}
\hat{V} &=& \hat{V}_1\, e^{ikz}+\hat{V}_2\, e^{-ikz} \nonumber \\
&=& \sum_{q} \hat{T}_q \,e^{q}_1 \;e^{ikz} +
\sum_{q} \hat{T}_q\, e^{q}_2\; e^{-ikz}
\, ,
\end{eqnarray}
and the operator $\hat{T}_{q}$ is expressed through the Clebsch-Gordan
coefficients:
\begin{equation}\label{T}
\hat{T}_{q} = \sum_{\mu_e, \mu_g} C^{j_e, \mu_e}_{1,q;\, j_g,
\mu_g} |j_e, \mu_e \rangle \langle j_g, \mu_g|
\end{equation}
in the basis of sublevel wave functions for the excited $|j_e, \mu_e \rangle$ and
ground $|j_g, \mu_g \rangle$ atomic states.
In addition, the relaxation part of the kinetic equation for the atomic
density matrix (\ref{equation}) has the following form
\begin{eqnarray}\label{relaks}
&&\hat{\Gamma}\{\hat{\rho}\} = -\frac{\gamma S}{2}
\left\{ \hat{V}^{\dagger} \hat{V}, \hat{\rho} \right\} \\
&+&\gamma S \sum_{q=0,\pm 1 }\int_{-1}^{1} {\hat T}_q^{\dagger}
e^{-i k s \hat{z}} \, \hat{V} \hat{\rho}\,\hat{V}^{\dagger} e^{-i
k s \hat{z}}{\hat T}_q K_q(s)\, ds \nonumber
\end{eqnarray}
where $\{\hat{a},\hat{c}\} = \hat{a}\hat{c}+\hat{c}\hat{a}$ is the
standard anticommutator and ${\hat z}$ is the position
operator. This term describes the redistribution of the atom over the
ground-state energy sublevels, taking into account the recoil
effects in spontaneous photon emission. The functions $K_{\pm
1}(s) = 3(1+s^2)/8$ and $K_{0}(s) = 3(1-s^2)/4$ are determined by
the probability of emission of a photon with polarization $q=\pm 1
,0$ into the direction $s = \cos(\theta)$ (relative to the $z$ axis).
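Note that each emission pattern is normalized, $\int_{-1}^{1} K_q (s) \, ds = 1$, as is
readily checked:
\[
\int_{-1}^{1} \frac{3}{8} ( 1 + s^2 ) \, ds = \frac{3}{8} \left( 2 + \frac{2}{3} \right) = 1 ,
\qquad
\int_{-1}^{1} \frac{3}{4} ( 1 - s^2 ) \, ds = \frac{3}{4} \left( 2 - \frac{2}{3} \right) = 1 .
\]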
\section{Equilibrium atomic density matrix}
A number of approaches have been developed for calculating the evolution of the atomic
density matrix. The quantum problem is difficult because it incorporates the evolution
of a large number of internal and external components of the density matrix. The majority of
works are based on the secular approximation for the density matrix elements \cite{Berman93,
Castin94,Deutsch97}. It consists in the following: first, the eigenstates and
eigenenergies of the Hamiltonian $\hat{H}$ are found. One then considers only the
evolution of the diagonal elements of the atomic density matrix in this eigenstate basis. It
can be written in the form of a balance equation with rates characterizing the relaxation
part of the master equation. The secular approximation for the general master equation
(\ref{equation}) is valid for the lowest vibrational energy levels, when the energy
separation between different bands is much greater than their effective width due to
optical pumping and tunneling effects. This condition implies very large detuning.
However, the density of energy states increases for the upper energy levels, and the
energy difference between adjacent states can become very small \cite{Castin94}. These
circumstances make it very hard to use the secular approximation for describing
the hot and nonlocalized atomic fraction.
In order to account more accurately for the effects of localization in the optical potential
and for the modulation depth of the spatial distribution of atoms, we utilize a different
approach to the quantum solution of the master equation (\ref{equation}).
In the Wigner representation for the atomic density matrix
$\hat{\rho}(z,p)$, the general master equation (\ref{equation})
takes the following form:
\begin{equation}\label{wigner}
\frac{d}{dt} \hat{\rho} =
-i \delta \, S \left[\hat{V}^{\dagger} \hat{V}, \hat{\rho}\right]
- \frac{\gamma \,S}{2} \left\{\hat{V}^{\dagger} \hat{V}, \hat{\rho}\right\} +\hat{\gamma}\{\hat{\rho}\}
\end{equation}
where the commutator of the density matrix with the kinetic part of the Hamiltonian
(\ref{ham}) reduces to a partial derivative with respect to $z$:
\begin{equation}\label{dt}
\frac{d}{dt} \hat{\rho}(z,p) \equiv \left( \frac{\partial}{\partial t}
+ \frac{p}{M} \frac{\partial}{\partial z} \right) \hat{\rho}(z,p)
\end{equation}
The field part of the Hamiltonian (\ref{ham}) contains only the zeroth and
the second spatial harmonics:
\begin{equation}\label{Wharm}
\hat{V}^{\dagger} \hat{V} = \hat{W}_0 + \hat{W}_{+} e^{i2kz} +
\hat{W}_{-} e^{-i2kz}
\end{equation}
thus the commutator and anticommutator on the right-hand side of equation (\ref{wigner})
can be written as:
\begin{eqnarray}\label{Wpart}
\hat{V}^{\dagger} \hat{V} \, \hat{\rho} &\mp& \hat{\rho}\,\,
\hat{V}^{\dagger}
\hat{V}= \hat{W}_0 \,\hat{\rho}(z,p) \mp \hat{\rho}(z,p)
\hat{W}_0 \\
&+& \left(\hat{W}_{-} \, \hat{\rho}(z,p+\hbar k) \mp
\hat{\rho}(z,p-\hbar k)\hat{W}_{-}
\right)e^{-i2kz} \nonumber \\
&+& \left(\hat{W}_{+} \, \hat{\rho}(z,p-\hbar k) \mp
\hat{\rho}(z,p+\hbar k)\hat{W}_{+}
\right)e^{i2kz} \nonumber \, .
\end{eqnarray}
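The momentum arguments shifted by $\pm \hbar k$ here follow from the standard rule for the
Wigner transform of operator products with $e^{\pm i 2 k \hat{z}}$, in the convention used
in this paper:
\[
e^{\pm i 2 k \hat{z}} \, \hat{\rho} \;\longrightarrow\; e^{\pm i 2 k z} \, \hat{\rho} ( z , p \mp \hbar k ) ,
\qquad
\hat{\rho} \, e^{\pm i 2 k \hat{z}} \;\longrightarrow\; e^{\pm i 2 k z} \, \hat{\rho} ( z , p \pm \hbar k ) .
\]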
The last term, describing relaxation due to spontaneous emission of photons, has the
well-known form in the Wigner representation:
\begin{eqnarray}\label{Wrelax}
\hat{\gamma}\{\hat{\rho}(z,p)\} = \gamma S \sum_{q=0,\pm 1 }\int_{-\hbar k}^{\hbar k} dp'/\hbar k\; \;
K_q(p'/\hbar k) \;\; {\hat T}_q^{\dagger}
\nonumber \\
\times \left [
\hat{V}_1\, \hat{\rho}(z,p+p') \; \hat{V}_2^{\dagger}e^{i2kz} +
\hat{V}_2\, \hat{\rho}(z,p+p') \; \hat{V}_1^{\dagger} e^{-i2kz}
\right. \nonumber \\
+ \left.
\hat{V}_1\, \hat{\rho}(z,p+p'-\hbar k) \;
\hat{V}_1^{\dagger} +
\hat{V}_2\, \hat{\rho}(z,p+p'+\hbar k) \; \hat{V}_2^{\dagger}
\right]
{\hat T}_q \, . \nonumber
\end{eqnarray}
Equation (\ref{wigner}) admits a solution that is periodic in the
position variable. We therefore introduce a Fourier-series
expansion of the atomic density matrix in the spatial coordinate
\begin{equation}\label{fourier}
\hat{\rho}(z,p) = \sum_n \hat{\rho}^{(n)}(p) \; e^{i\,2n\,kz}
\end{equation}
and rewrite the master equation for the discrete Fourier components
of the density matrix $\hat{\rho}^{(n)}$:
\begin{eqnarray}\label{Lharm}
\left(\frac{\partial}{\partial t} \right.&+& \left.
2ni\,\frac{p}{M}\right)\hat{\rho}^{(n)} = \\
&& {\cal L}_{0} \left\{ \hat{\rho}^{(n)} \right\}
+{\cal L}_{+} \left\{ \hat{\rho}^{(n-1)} \right\}
+{\cal L}_{-} \left\{ \hat{\rho}^{(n+1)} \right\} \, . \nonumber
\end{eqnarray}
\begin{figure}[pt]
\begin{center}
\includegraphics[width= 3.0 in]{fig1n.eps}
\end{center}
\caption{\em Steady-state spatial (a) and momentum (b) distributions, and the total population
of spatial harmonics of the atomic ground-state density matrix (c) for atoms with the $j_g =
1 \to j_e = 2$ optical transition and the mass of $Cr$ atoms.
The light field detuning is $\delta = -40 \gamma$ and the saturation parameter is $S = 0.5$.} \label{fig1}
\end{figure}
For the steady-state problem ($\partial/\partial t \; \hat{\rho} = 0$) such
a recursion may often be solved by the method of continued fractions. This approach is
used for the solution of the optical Bloch equations in various spectroscopy problems, as well
as for the calculation of the force on an atom in a light field (see for example
\cite{Risk,Tan}). The major distinction here is that these equations for the density matrix
contain the recoil effects, which makes them more complicated to solve.
Additionally, we note that a similar approach was described in \cite{Juha91}, where the
authors analyzed the laser cooling (velocity distribution) of a two-level atom in the
recoil limit and thus restricted their consideration to the zeroth spatial harmonic
of the ground-state density matrix. In our case the number of spatial harmonics
required depends on the light field and atomic parameters. Typically we use fewer than $30$
harmonics, which is enough to obtain the spatial solution for the equilibrium atomic density
matrix in the considered range of light field parameters.
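As an illustration of one practical route (a minimal numerical sketch, not the code used
for the results reported here), the steady-state recursion can also be assembled as a single
block-tridiagonal linear system once the superoperators and the drift term are represented
as matrices over a truncated momentum grid; all variable names below are illustrative.
\begin{verbatim}
import numpy as np

def steady_state_harmonics(L0, Lp, Lm, drift, n_max, trace_vec):
    # Solve (2i*n*drift - L0) rho_n - Lp rho_{n-1} - Lm rho_{n+1} = 0
    # for |n| <= n_max, replacing one redundant row by the
    # normalization Tr rho_0 (integrated over momentum) = 1.
    # Each rho_n is a flattened vector over internal sublevels and
    # momentum grid points; drift is the diagonal matrix representing
    # k p / M (k = 1 in the reduced units).
    d = L0.shape[0]
    nblk = 2 * n_max + 1
    A = np.zeros((nblk * d, nblk * d), dtype=complex)
    for idx, n in enumerate(range(-n_max, n_max + 1)):
        rows = slice(idx * d, (idx + 1) * d)
        A[rows, rows] = 2.0j * n * drift - L0
        if idx > 0:
            A[rows, (idx - 1) * d : idx * d] = -Lp
        if idx < nblk - 1:
            A[rows, (idx + 1) * d : (idx + 2) * d] = -Lm
    b = np.zeros(nblk * d, dtype=complex)
    row = n_max * d              # first row of the n = 0 block
    A[row, :] = 0.0
    A[row, n_max * d : (n_max + 1) * d] = trace_vec
    b[row] = 1.0
    return np.linalg.solve(A, b).reshape(nblk, d)
\end{verbatim}
The Hermiticity relation $\hat{\rho}^{(-n)}(p) = \hat{\rho}^{(n)}(p)^{\dagger}$ can be used
afterwards as a consistency check of the truncation at $n_{\mathrm{max}}$.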
The spatial and momentum steady-state distributions of atoms with the $j_g =
1 \to j_e = 2$ optical transition for $\delta = -40 \gamma$, $S = 0.5$ and the chromium mass
are shown in Fig.\ref{fig1}(a) and (b). Fig.\ref{fig1}(c) represents
the total population of the spatial harmonics of the atomic ground-state
density matrix integrated over momentum space, $R^{(n)} = \int Tr\{\hat{\rho}^{(n)}(p)\}\, dp$.
The zeroth harmonic is equal to $1$, which is the normalization condition. As
seen here, the populations of the higher harmonics decrease rapidly with the number $n$.
\section{Results}
In this section we turn our attention to the steady-state spatial distribution of the
atoms in the optical potential created by the light field with the $lin \perp lin$
configuration. We choose this configuration as the clearest example of a light field
with nonuniform polarization: only the light field ellipticity varies in position space,
while the other parameters (intensity, phase, orientation of the polarization vector) stay
unchanged. Moreover, the optical potential created by this field configuration has a
period of $\lambda/4$, which makes it very attractive for the deposition of atomic structures
with high spatial periodicity.
\begin{table}[tbp]
\begin{center}
\begin{tabular}{ccccccccc}
\hline \hline \\
Element && cooling &&$\tilde{M}$ && $\lambda$ && $I_S$\\
&&transition && && (nm) && $(mW/cm^2)$\\ \\
\hline \\
$^{7}$Li && $2^2S_{1/2} \to 2^2P_{3/2}$ && 46 && 671 && 2.56 \\
$^{23}$Na && $3^2S_{1/2} \to 3^2P_{3/2}$ && 198 && 589 && 6.34 \\
$^{39}$K && $4^2S_{1/2} \to 4^2P_{3/2}$ && 358 && 766 && 1.81 \\
$^{85}$Rb && $5^2S_{1/2} \to 5^2P_{3/2}$ && 770 && 780 && 1.63 \\
$^{133}$Cs&& $6^2S_{1/2} \to 6^2P_{3/2}$ && 1270 && 852.3 && 1.06 \\
$^{52}$Cr && $4^7S_{3} \to 4^7P_{4}$ && 115 && 425.6 && 8.49 \\
$^{27}$Al && $3p^2P_{3/2} \to 3d^2D_{5/2}$ && 85 && 309.4 && 57 \\
$^{69}$Ga && $4p^2P_{3/2} \to 4d^2D_{5/2}$ && 382 && 294.4 && 127 \\
$^{115}$In && $5p^2P_{3/2} \to 5d^2D_{5/2}$ && 634 && 325.7 && 78 \\
$^{107}$Ag && $5^2S_{1/2} \to 5^2P_{3/2}$ && 601 && 328 && 76.8 \\
\hline\hline
\end{tabular}
\end{center} \caption{Dimensionless atomic mass parameter corresponding
to the laser cooling transition for different elements.}
\end{table}
\begin{figure}[h]
\begin{center}
\includegraphics[width= 3.0 in]{fig2n.eps}
\end{center}
\caption{\em Spatial FWHM width of localized atomic structures (a), and contrast (b), as functions
of the parameter $\delta \, \mu$ for different field detunings ($\delta/\gamma = -5, -10, -20, -40, -80, -160$ from top
to bottom in (a) and from bottom to top in (b))
in a model atom with the $j_g=1/2 \to j_e=3/2$ optical transition.} \label{fig2}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width= 3.0 in]{fig3n.eps}
\end{center}
\caption{\em Asymptotic FWHM width of localized atomic structures as a function
of the light field detuning in a model atom with the $j_g=1/2 \to j_e=3/2$ optical transition.} \label{fig3}
\end{figure}
\begin{figure}[b]
\begin{center}
\includegraphics[width= 3.0 in]{fig4n.eps}
\end{center}
\caption{\em Spatial FWHM width of localized atomic structures (a), and contrast (b), as functions
of the parameter $\mu$ for different field detunings in a model atom with the $j_g=1 \to j_e=2$ optical transition.} \label{fig4}
\end{figure}
There are several physical parameters required to characterize a given laser
cooling situation. These are the atomic mass $M$, the wavelength $\lambda$, and the
natural linewidth $\gamma$. In addition, there are two light-field parameters in the
low-field limit: the detuning $\delta$ and the saturation parameter $S$. It is possible to
choose reduced dimensionless units ($\hbar =1$, $k = 1$, $\gamma = 1$), so that the
dimensionless atomic mass $\tilde{M}$ can be defined from the relation $\gamma/\omega_{R} =
2\tilde{M}$ \cite{Pru04}. This is the so-called quasiclassical parameter, which characterizes
the kinetics of an atom in a light field. In particular, it describes the rate of kinetic
processes and the evolution of the atomic distribution function in momentum space; the
typical cooling time is of the order of $\tau = \omega_{R}/(\gamma S)$.
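As an illustrative numerical check (a sketch only; the linewidth value below is an assumed
literature figure, not taken from this paper), $\tilde{M}$ can be evaluated directly from
$\gamma/\omega_{R} = 2\tilde{M}$:
\begin{verbatim}
import numpy as np

hbar = 1.054571e-34   # J s
amu = 1.66054e-27     # kg

def m_tilde(lambda_m, gamma, mass_kg):
    # dimensionless mass from gamma / omega_R = 2 M~,
    # with omega_R = hbar k^2 / (2 M)
    k = 2.0 * np.pi / lambda_m
    omega_R = hbar * k**2 / (2.0 * mass_kg)
    return gamma / (2.0 * omega_R)

# 52Cr on the 425.6 nm cooling transition; gamma/2pi ~ 5 MHz is an
# assumed value; it gives M~ of about 1.2e2, close to the value 115
# quoted in Table I.
print(m_tilde(425.6e-9, 2.0 * np.pi * 5.0e6, 52.0 * amu))
\end{verbatim}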
We now analyze the effects of atom localization in the optical potential.
The contrast can be defined as the ratio of the modulation depth of the spatial
distribution to its amplitude, $C= h/H$ (Fig.\ref{fig1}(a)), as in \cite{Meschede03}. As can be seen
directly from (\ref{wigner}), only two parameters characterize the
stationary solution for the atomic density matrix. The first is the detuning, which can be
measured in units of the natural linewidth, $\tilde{\delta} = \delta/\gamma$, and the
second is the dimensionless parameter $\mu$:
\begin{equation}\label{mu}
\mu = S \tilde{M} \, .
\end{equation}
This scaling parameter makes it possible to apply the results of the calculation to
elements with an allowed closed dipole optical transition that is degenerate over the
angular momentum sublevels (Table I).
Note that in the secular approximation \cite{Dal91} the stationary solution is
characterized only by the ratio of the optical potential depth to the recoil energy,
$U_0/\omega_R$, which is proportional to $\tilde{\delta} \mu$ in our notation. Thus we
first express the results in terms of the parameters $\tilde{\delta}$ and $\tilde{\delta} \mu$. As
seen in Fig.~\ref{fig2}, the differences between the curves become more significant
as the optical potential depth $U_0$ increases, corresponding to a departure from the
framework of the secular approach. The width of the localized structures and the contrast tend to
asymptotic constant values with increasing light field intensity; these values, however,
depend on the light field detuning.
In order to find the asymptotic values of the width we fit the curves with the
empirical law $w(x) = a + b/\sqrt{1 + c\, x}$. The results for the asymptotic value $a$ are
shown in Fig.~\ref{fig3}.
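For reference, such a fit can be performed with a standard least-squares routine; the data
below are a synthetic stand-in for one of the width curves of Fig.~\ref{fig2}(a), for
illustration only:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def w_model(x, a, b, c):
    # empirical law w(x) = a + b / sqrt(1 + c x); a is the asymptote
    return a + b / np.sqrt(1.0 + c * x)

rng = np.random.default_rng(0)
x = np.linspace(50.0, 4000.0, 40)      # values of delta * mu
w = w_model(x, 0.05, 0.20, 0.01) + 0.002 * rng.standard_normal(x.size)

popt, pcov = curve_fit(w_model, x, w, p0=(0.05, 0.2, 0.01))
print("asymptotic width a =", popt[0], "+/-", np.sqrt(pcov[0, 0]))
\end{verbatim}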
Fig.~\ref{fig4} represents the FWHM width and the contrast of the spatial structures as
functions of the parameter $\mu$ at different detunings for the atomic model with the $j_g=1 \to
j_e=2$ optical transition. The FWHM width of the spatial structures decreases monotonically
for large detuning and parameter $\mu$. In spite of the fact that the equilibrium
temperature grows with the depth of the optical potential, the localization of the atoms
becomes stronger with the growth of these parameters. Additionally, the contrast tends
towards its maximum value, Fig.~\ref{fig4}(b). The dashed vertical lines in
Fig.\ref{fig4} indicate the limits of validity of the weak-field theory for the different elements
of Table I under the assumption $S < 0.5$. Note that this is a qualitative criterion: for a
thorough analysis of the limits of the weak-field theory, the solution of the quantum equation
for the total atomic density matrix is required, taking into account the excited-state
population. However, the width and contrast curves of Fig.~\ref{fig4} depend very strongly
on $\mu$; thus the localization effects remain rather significant even for small
saturation parameters, especially for the ``heavy'' atoms of Table I.
\section{Conclusion}
We have performed a fully quantum analysis of the atomic localization obtained by laser cooling
in a nonuniformly polarized low-intensity light field. Generally, the conditions for deep
laser cooling conflict with the conditions for strong atomic localization, which require a deeper
optical potential and consequently lead to higher laser cooling temperatures.
Additionally, in a deep optical potential the secular approximation (\ref{sec})
is restricted by a relation between the light field detuning and the depth of the optical potential. In
our treatment we had no such limitation, allowing a more accurate description of the spatial
solution for the atomic distribution function, including localized atoms and atoms in
above-barrier motion. The stationary solution is a function of the light field detuning
$\delta$ and the dimensionless parameter $\mu$ (\ref{mu}). We analyzed the width and the
contrast of the localized atomic structures as functions of these parameters. We showed
that the atomic structure width and contrast depend strongly on $\mu$ and tend
to constant values with increasing optical potential depth, values that depend on the light
field detuning.
We have demonstrated the applicability of laser cooling in a far-off-detuned deep optical
potential, created by a light field with a polarization gradient, as a dissipative optical
mask for the purposes of atom lithography and nanofabrication, i.e., for the generation of
spatially localized atomic features with high contrast. This type of light mask can be an
alternative method for the creation of spatially localized atomic structures. The remarkable
distinction of this method from the previous non-dissipative light masks is that the
suggested one is not sensitive to aberration effects. Moreover, this type of
optical mask has no classical analog and cannot be described by the methods of classical
optics. The width and the contrast of the localized atomic structures are determined here by
the mechanisms of atomic energy dissipation in the light field. Finally, we analyzed
the possible limits for the width and the contrast of localized atomic structures that
can in principle be reached by this type of light mask.
\section{Acknowledgments}
This work was partially supported by RFBR (grants 05-02-17086, 04-02-16488, 05-08-01389)
and O.N.P. was supported by the grant MK-1438.2005.2.
|
1,116,691,500,941 | arxiv | \section*{Methods}
The sample, grown by chemical beam epitaxy \cite{Poole01},
consists of the following layers:
460~nm undoped InP grown on top of an insulating InP substrate,
followed by a 10~nm Si-doped InP layer, 10~nm undoped layer,
10~nm In$_{0.53}$Ga$_{0.47}$As layer, 20~nm undoped layer
and a 1.82~ML InAs layer that results in the
formation of InAs QDs by Stranski-Krastanow growth.
The QDs cover the surface at a density of $\sim$ 2.5 QDs per $\mu \mathrm{m}^2$,
having diameters in the range of 30-95~nm and heights of 0.5-6~nm.
The 2DEG layer formed in the InGaAs well serves as a back electrode and
an Ohmic contact to the 2DEG is made by indium diffusion.
Our home-built cryogenic AFM \cite{Roseman00} includes an
RF- modulated fiber optic interferometer
\cite{Rugar89} with 1550~nm wavelength for cantilever position detection.
We coat Si AFM cantilevers (Nanosensors PPP-NCLR) with 10~nm titanium (adhesion layer) and
20~nm platinum.
The cantilevers typically have a 160~kHz resonance frequency and a
quality factor between 100,000 and 200,000 at 4.5~K.
All of the images were taken in frequency modulation mode \cite{Albrecht91}.
In this mode, the cantilever is self-oscillated
at its resonance frequency
with a constant amplitude.
The frequency shift and dissipation were measured with a commercially available
phase-locked loop frequency detector (Nanosurf, easyPLL plus).
The topography images were taken
in constant frequency shift mode
where a constant frequency shift
is maintained by regulating the cantilever tip-sample distance
using a feedback controller.
The frequency shift and dissipation images were taken
in constant-height mode with a typical tip height of 20~nm.
Dissipation images are shown in Fig.~S1 as a
function of $V_\mathrm{B}$.
More negative $V_\mathrm{B}$ results in adding more
electrons to the quantum dot.
Areas of increased dissipation mark 2DEG-QD tunneling events.
Each ring crossed when traveling towards the quantum
dot center marks the addition of an electron to the dot.
More details of the AFM images are listed in Supplementary Table 1.
The amplitude of the cantilever excitation signal, $A_\mathrm{exc}$,
is provided
as the dissipation signal from the Nanosurf oscillator controller.
It is converted to units of 1/s via:
$\frac{\omega_0}{Q} \frac{A_\mathrm{exc}-A_\mathrm{exc0}}{A_\mathrm{exc0}}$.
$A_\mathrm{exc0}$ is the excitation amplitude
independent of the tunneling process, in other words the background
dissipation.
This conversion is independent of cantilever oscillation amplitude.
Similarly, the signal is converted to units of eV/cycle
by multiplying $\frac{A_\mathrm{exc}-A_\mathrm{exc0}}{A_\mathrm{exc0}}$
by the factor $E_0 = \frac{\pi k_0 a^2}{eQ}$
where $a$ is the cantilever oscillation amplitude \cite{Morita}.
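A minimal sketch of these two conversions is given below; the stiffness $k_0$ and the
example readings are assumed placeholder values, while the resonance frequency and $Q$
are in the ranges quoted above:
\begin{verbatim}
import numpy as np

e_charge = 1.602177e-19   # C

def dissipation_rate(A_exc, A_exc0, omega0, Q):
    # (omega0 / Q) * (A_exc - A_exc0) / A_exc0, in units of 1/s
    return (omega0 / Q) * (A_exc - A_exc0) / A_exc0

def dissipation_eV_per_cycle(A_exc, A_exc0, k0, a, Q):
    # E0 * (A_exc - A_exc0) / A_exc0 with E0 = pi k0 a^2 / (e Q)
    E0 = np.pi * k0 * a**2 / (e_charge * Q)
    return E0 * (A_exc - A_exc0) / A_exc0

omega0 = 2.0 * np.pi * 160e3   # rad/s, typical resonance quoted above
Q = 150000
k0 = 30.0                      # N/m, assumed cantilever stiffness
a = 0.4e-9                     # m, amplitude used above 22 K
print(dissipation_rate(1.05, 1.00, omega0, Q))          # ~0.34 1/s
print(dissipation_eV_per_cycle(1.05, 1.00, k0, a, Q))   # eV per cycle
\end{verbatim}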
The $\Delta \omega-V_\mathrm{B}$ spectra shown in Fig.~\ref{fig:spectra}a
were originally superposed on a large parabolic background
arising from the capacitive force between the 2DEG and the cantilever tip.
Over several volts, at typical cantilever-sample gaps of 20~nm,
the curve can be fit with a single parabola.
In Fig.~\ref{fig:spectra}a this parabola was subtracted
from the frequency shift data.
The exact positions of the peaks (dips) in the dissipation (frequency shift)
are sensitive to the distance between cantilever tip and quantum dot.
In particular, slight changes in cantilever tip lateral position
with respect to the quantum dot center
can lead to slight shifts in the peaks
as can be deduced from the images
where the rings can have different spacing depending on location.
The shift of the peaks as a function of height, however, is linear
over the distances used in this experiment (12-22~nm).
We took the differences in peak positions in the data displayed
in Fig.~\ref{fig:spectra}b
to be caused by small height differences (sub-1~nm),
and so the voltage axis was rescaled to align the peaks with the data,
but the peak heights were not rescaled.
The mean factor involved in the voltage rescaling is 1.011
with the most extreme factor being 1.088.
The temperature data above 22~K had
thermally limited peaks
for a cantilever oscillation amplitude of 0.4~nm,
but the amplitude needed to be reduced to 0.2~nm at 4.5~K.
The errorbars in Fig.~\ref{fig:spectra}c and d
represent how well the measurement over a single location was reproduced.
\section*{References}
\input{For_arXiv_2.bbl}
\section*{Acknowledgement}
Funding for this research was provided by
the Natural Sciences and Engineering Research Council of Canada,
le Fonds Qu\'{e}b\'{e}cois de le Recherche sur la Nature et les Technologies,
the Carl Reinhardt Fellowship,
and the Canadian Institute for Advanced Research.
\clearpage
\input{supp_info2}
\end{document}
\section{Figure Details}
The tip height for the voltage spectra and
all of the constant height images was 19~nm $\pm$ 1~nm,
with the exception of Fig.~4d-g and Fig.~S1
where the height was $\sim 23$~nm.
Additional image details are listed in Supplementary Table 1.
The acquisition time of the majority of spectra was 15~seconds.
\begin{figure*}[h!]
\includegraphics [width = 120mm] {figure5_lowRes}
\caption{\label{fig:volts}
A series of constant height dissipation images, for the
QD shown in Fig.~4d-g, for increasingly negative $V_\mathrm{B}$.
The base of the InAs structure is outlined with rectangular dashes and the highest area
is outlined with rounded, more closely spaced, dashes.
This QD is localized near a high point in the structure, which is often observed.
For increasingly negative $V_\mathrm{B}$ more rings emerge as the QD is populated with electrons.
In these images the ring lineshape is broadened by the large cantilever
oscillation amplitude of 0.33~nm at 4.5K with a tip-sample gap of roughly 23~nm.
Note that the lateral position of the final image is slightly offset from the others as
this image was taken at a later time in the experiment.
Notice that streaks appear at the same ring
location, indicating some nearby electrostatic influence.
The scalebar is 20~nm.
The same colorbar was used for each image, with all images but the last having
a range of 0--0.85~Hz and the last 0--2~Hz.
}
\end{figure*}
\begin{table*}[h!]
\caption{Experimental details of AFM images}
\begin{tabular}{l c c c c c} \hline
Fig. & T (K) & $\Delta \omega/2\pi$ (Hz) & Oscillation Amplitude (nm) & $V_\mathrm{B}$ (V) & Acquisition Time (min.)\\ \hline
1c & 78 & -9.4 & 1.6 & -0.35 & 6\\
1d & 4.5 & -- & 0.4 & -8.0 & 119\\
1e & 4.5 & -- & 0.4 & -8.0 & 119\\
3a & 4.5 & -- & 0.4 & -9.0 & 51\\
3b & 4.5 & -- & 0.4 & -7.6 & 17\\
3c & 78 & -9.4 & 1.6 & -0.35 & 14\\
3d-e & 4.5 & -- & 0.4 & -8.0 & 51\\
4a & 78 & -9.4 & 1.6 & -0.35 & 9\\
4b & 78 & -- & 1.6 & -8.0 & 68\\
4c & 4.5 & -- & 0.4 & -8.0 & 51\\
4d & 4.5 & -- & 0.8 & -8.0 & 9\\
4e & 4.5 & -- & 0.8 & -9.0 & 9\\
4f & 4.5 & -- & 0.8 & +6.8 & 9\\
4g & 4.5 & -- & 0.8 & -8.0 & 9\\
5 & 4.5 & -- & 0.8 & -- & 17 (last image 9)\\ \hline
\end{tabular}
\end{table*}
\section{Details of dissipation with degenerate shells}
Here we outline the approach used to derive
the general expression for the dissipation in
Eq.~(2).
The charging Hamiltonian for small cantilever tip motion
may be written
\begin{align}
\label{eq:Hc}
H_{\mathrm{C}} &=
\sum_N E_{\text{C}_N}
\left[ \left( N - {\cal N}_{V} \right)^2
+ \left( 1 + \frac{C_\text{2DEG}}{C_\text{tip}} \right) {\cal N}_{V}^2
\right]
\ket{N} \bra{N}
\\
&= H_{\mathrm{C},0} + \Delta H_\text{osc} - \sum_N
A_N N z \ket{N} \bra{N}, \nonumber
\end{align}
where $\ket{N}$ is a state with $N$ electrons on the QD,
${\cal N}_{V} = - \frac{C_\text{tip} V_\mathrm{B}}{e}$ is the dimensionless gate voltage,
$C_\text{tip}$ is the QD-tip capacitance and
$C_\text{2DEG}$ is the QD-2DEG capacitance.
In the second line,
$H_{\mathrm{C},0}$ is the cantilever-independent part of the
charging Hamiltonian,
$\Delta H_\text{osc}$ describes an electrostatic modification
of the cantilever potential,
and the QD-cantilever coupling strength for given $N$ is
$A_N = -2E_{\text{C}_N} \frac{V_\mathrm{B}}{e} \left( 1-\alpha \right)
\frac{\partial C_\mathrm{tip}}{\partial z}$.
We emphasize that
$E_\mathrm{C}$ and $A$ may be different for each
electron added, as indicated by the index $N$.
From the second equality in equation (\ref{eq:Hc}),
$N$ plays the role of a force on the cantilever.
As a result, the dissipation and frequency shift
may be found from the linear response
coefficient $\lambda_N(\omega)$
describing the response of $N$ to changes in $z$ \cite{Clerk05}.
Consider the
charge degeneracy point between $N$ and $N+1$
electrons on the QD.
This may always be viewed as
${n_\text{shell}}$ or ${n_\text{shell}}+1$ electrons occupying a shell
of degeneracy $\nu$
(even for a nondegenerate single level, for which
${n_\text{shell}} = 0$ and $\nu = 1$).
Neglecting interactions,
the charge state with ${n_\text{shell}}$ (${n_\text{shell}}+1$)
electrons
in the shell is $D_n$-fold ($D_{n+1}$-fold) degenerate,
with
\begin{equation}
D_n = \binom{\nu}{{n_\text{shell}}}
\quad,\quad
D_{n+1} = \binom{\nu}{{n_\text{shell}} + 1},
\end{equation}
where $\binom{\cdot}{\cdot}$ denotes a binomial coefficient.
These arise simply from the different ways
to put ${n_\text{shell}}$
or ${n_\text{shell}} + 1$ electrons into
$\nu$ single particle states.
Let $P_{{n_\text{shell}},i}$ be the probability to find ${n_\text{shell}}$ electrons
occupying the shell in configuration $i$,
and $P_{{n_\text{shell}}+1,j}$ be the probability to
find ${n_\text{shell}}+1$ electrons occupying the shell in
configuration $j$.
In general, these probabilities will satisfy
the master equations
\cite{Beenakker91}
\begin{align}
\partial_t P_{{n_\text{shell}},i} &= \sum_j \left\{
\Gamma_{j \rightarrow i} P_{{n_\text{shell}}+1,j}
-\Gamma_{i \rightarrow j} P_{{n_\text{shell}},i}
\right\}
\label{eq:ME1} \\
\partial_t P_{{n_\text{shell}}+1,j} &=
\sum_i \left\{
\Gamma_{i \rightarrow j} P_{{n_\text{shell}},i}
- \Gamma_{j \rightarrow i} P_{{n_\text{shell}}+1,j} \right\},
\label{eq:ME2}
\end{align}
where $\Gamma_{i \rightarrow j}$ is the rate to
add an electron to configuration $i$ producing configuration
$j$, and vice versa for
$\Gamma_{j \rightarrow i}$
(note that these rates are nonzero only for configurations
$i$ and $j$ that differ by the addition or removal
of one electron).
We calculate the rates using Fermi's golden rule.
The master equations (\ref{eq:ME1}) and (\ref{eq:ME2}) may be solved
for given values of $\nu$ and ${n_\text{shell}}$, but
the solutions are cumbersome for highly degenerate shells.
To simplify the equations we assume that
for a given charge degeneracy point (i.e. a single dissipation peak),
the tunneling
matrix elements from Fermi's golden rule are
equal for all single particle states
within the relevant shell.
This is an approximation, since
degenerate states may indeed have different
wavefunctions leading to
different tunneling rates.
However, we expect the rates to be similar since
the tunnel barrier between the QD
and the 2DEG extends over the entire QD area, minimizing
the effects of the spatial variations of
different wavefunctions.
Moreover, we checked that
significantly unequal rates lead only to very
small corrections in the peak shifts.
For example, taking distinct
rates for the two degenerate
orbital states in the $p$ shell,
we find that rates differing by a factor
of 2 lead to a correction
of 1.5\% for the shift
of the 3rd dissipation peak (i.e.~the 1st peak in the $p$ shell).
We thus neglect these possible differences here.
Taking the rates to be equal we arrive at the simplified
master equation for the total probability to find
${n_\text{shell}}$ electrons in the shell,
\begin{align}
\label{eq:ME}
\partial_t P_{n_\text{shell}} &= (\nu-{n_\text{shell}})
\left[
\frac{D_n}{D_{n+1}} \Gamma_- (1- P_{n_\text{shell}}) - \Gamma_+ P_{n_\text{shell}}
\right]
\end{align}
where
\begin{align}
\Gamma_+ = \Gamma f(E)
\quad,\quad
\Gamma_- = \Gamma \left[1- f(E)\right]
\end{align}
are the rates to add (+) or remove
($-$)
an electron to or from a single particle state,
and $f$ is the Fermi function.
Note that the master equation for $P_{{n_\text{shell}}+1}$
is not independent in our approximation of equal rates;
this is a result of
$P_{n_\text{shell}} + P_{{n_\text{shell}}+1} = 1$.
The stationary solution of equation (\ref{eq:ME}) is
\begin{align}
\label{eq:pStat}
P_{n_\text{shell}} &= \frac{({n_\text{shell}}+1)}{\phi}(1-f) \\
P_{{n_\text{shell}}+1} &= \frac{(\nu-{n_\text{shell}})}{\phi} f,
\end{align}
where $\phi$ is defined in equation~(3).
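As a sanity check, the following minimal \textsc{Python} sketch verifies numerically that this stationary solution is a fixed point of equation~(\ref{eq:ME}); it assumes $\phi = ({n_\text{shell}}+1)(1-f) + (\nu-{n_\text{shell}})f$, which follows from the normalization $P_{n_\text{shell}} + P_{{n_\text{shell}}+1} = 1$ of the expressions above, and the shell parameters used are illustrative.
\begin{verbatim}
# Numerical check (Python >= 3.8) that the stationary
# solution is a fixed point of the simplified master
# equation; phi = (n+1)(1-f) + (nu-n)f is assumed from
# normalization, shell parameters are illustrative.
from math import comb, exp

def fermi(E, kT=1.0):
    return 1.0 / (1.0 + exp(E / kT))

def dP_dt(P, E, nu, n, Gamma=1.0):
    f = fermi(E)
    Gp, Gm = Gamma * f, Gamma * (1.0 - f)
    D_ratio = comb(nu, n) / comb(nu, n + 1)
    return (nu - n) * (D_ratio * Gm * (1.0 - P)
                       - Gp * P)

for nu, n in [(1, 0), (2, 0), (4, 1)]:
    for E in (-2.0, 0.0, 1.5):
        f = fermi(E)
        phi = (n + 1) * (1 - f) + (nu - n) * f
        P_stat = (n + 1) * (1 - f) / phi
        assert abs(dP_dt(P_stat, E, nu, n)) < 1e-12
\end{verbatim}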
The quantity we need is
the linear response coefficient
$\lambda_N(\omega)$.
To find this, we assume that the cantilever is
oscillating at frequency $\omega$.
This causes the
chemical potential difference between
the QD and the 2DEG
to oscillate,
\begin{equation}
\label{eq:Eosc}
E \rightarrow E + \delta e^{-i\omega t},
\end{equation}
and this leads to a change in the probabilities,
\begin{align}
P_{{n_\text{shell}}+1} &\rightarrow P_{{n_\text{shell}}+1} + \lambda_N(\omega) \delta e^{-i\omega t}, \\
P_{{n_\text{shell}}} &\rightarrow P_{n_\text{shell}} - \lambda_N(\omega) \delta e^{-i\omega t}.
\label{eq:pChange}
\end{align}
Inserting equations (\ref{eq:pStat})--(\ref{eq:pChange})
into equation (\ref{eq:ME}) and linearizing in $\delta$, we solve
for $\lambda_N(\omega)$.
Its real and imaginary part yield
the dissipative and
conservative parts of the electrostatic force from
$(k_0/\omega_0^2) \gamma = - A^2 \Im{\left\{ \lambda_N(\omega) \right\}} / \omega $
and $(2k_0/\omega_0) \Delta \omega = A^2 \Re{\left\{ \lambda_N(\omega)\right\}}$.
The dissipation for arbitrary degeneracy is given in equation~(2),
and for the frequency shift we obtain
\begin{equation}
\label{eq:freqShiftGen}
\Delta \omega = -\frac{\omega_0}{2k_0} \frac{A^2 \Gamma^2}{k_\mathrm{B} T}
\left[
\frac{({n_\text{shell}}+1)(\nu-{n_\text{shell}})}{\omega^2 + (\phi \Gamma)^2}
\right]
f(1-f) .
\end{equation}
Note that we recover the single level result
[i.e. equation~(1) for the dissipation]
by taking $\nu = 1$ and ${n_\text{shell}} = 0$ as expected.
Finally, we point out that the temperature-dependent
level repulsion discussed in the paper is contained in a symmetry
of equations~(2) and (\ref{eq:freqShiftGen}), from which
we find that taking ${n_\text{shell}} \rightarrow \nu-{n_\text{shell}} -1$
is equivalent to $E \rightarrow -E$.
The peak shifts of $\gamma$ and $\Delta \omega$ are
proportional to temperature and we can solve for the coefficients
analytically.
However, in general the coefficients are complicated and
unenlightening.
To show how the peak shifts depend on degeneracy,
we provide the coefficients in the low and high frequency limits
where they are greatly simplified. Note that our experiment is in the intermediate
regime $\omega \sim \Gamma$, so the
peak shifts measured and calculated
in the main text lie between these two limits.
For $\gamma$, the peak shifts in the low and high frequency limits are
\begin{equation}
\frac{\Delta E_{\gamma,\text{peak}}}{k_\mathrm{B} T} \rightarrow
\begin{cases}
\ln{\left( d + \sqrt{d(d+1) +1} \right)} &\text{as } (\omega\rightarrow 0), \\
\ln{ \sqrt{d+1} } &\text{as } (\omega\rightarrow \infty),
\end{cases}
\end{equation}
where
\begin{equation}
d = \frac{\nu-{n_\text{shell}}}{{n_\text{shell}}+1} - 1.
\end{equation}
For a nondegenerate level, $d=0$ and there is no peak shift
at any frequency.
For $\Delta \omega$ the peak shifts in the same two limits are
\begin{equation}
\frac{\Delta E_{\Delta\omega,\text{peak}}}{k_\mathrm{B} T} \rightarrow
\begin{cases}
\ln{\left( d + 1\right)} &\text{as } (\omega\rightarrow 0), \\
0 &\text{as } (\omega\rightarrow \infty).
\end{cases}
\end{equation}
Comparing these limits, we see that
the shell degeneracy results in a greater
peak shift in
$\gamma$ than in $\Delta \omega$.
This is a direct consequence of equation~(4),
from which we see that, aside from an energy-independent
prefactor,
$\Delta\omega$ differs
from $\gamma$ by a factor of $\phi$.
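These limiting expressions can be checked numerically. The following \textsc{Python} sketch locates the peak positions and reproduces the four limits above; it assumes only the energy-dependent shapes $\Delta\omega \propto f(1-f)/[\omega^2+(\phi\Gamma)^2]$ from equation~(\ref{eq:freqShiftGen}) and, per the factor-of-$\phi$ relation just stated, $\gamma \propto$ the same shape divided by $\phi$, with $\phi = ({n_\text{shell}}+1)(1-f)+(\nu-{n_\text{shell}})f$; overall prefactors drop out of the peak positions.
\begin{verbatim}
# Locate the gamma and Delta-omega peak positions
# numerically (Python/NumPy) and compare them with
# the analytic low/high frequency limits above.
import numpy as np

def peak(nu, n, w, Gamma=1.0, quantity="gamma"):
    x = np.linspace(-10, 10, 20001)  # x = E / kT
    f = 1.0 / (1.0 + np.exp(x))
    phi = (n + 1) * (1 - f) + (nu - n) * f
    shape = f * (1 - f) / (w**2 + (phi * Gamma)**2)
    if quantity == "gamma":
        shape = shape / phi  # factor-of-phi relation
    return x[np.argmax(shape)]

nu, n = 4, 1                 # illustrative shell
d = (nu - n) / (n + 1) - 1
print(peak(nu, n, 1e-4),
      np.log(d + np.sqrt(d * (d + 1) + 1)))  # agree
print(peak(nu, n, 1e4), np.log(np.sqrt(d + 1)))
print(peak(nu, n, 1e-4, "dw"), np.log(d + 1))
print(peak(nu, n, 1e4, "dw"))  # -> 0
\end{verbatim}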
We measured the separation between the peak in $\gamma$
and the peak in $\Delta \omega$ for each charge degeneracy point.
This is shown in Fig.~\ref{fig:peak3} for the third peak in
Fig.~2a as a function of temperature and compared to theory
with no fit parameters.
As argued in the main text, this provides strong evidence
that the observed peak shifts are indeed a
result of shell degeneracy.
\begin{figure*}
\includegraphics [width = 100mm] {figure6_lowRes}
\caption{\label{fig:peak3}
Difference between dissipation and frequency shift peak
positions as a function of temperature for peak 3, compared to
theoretical prediction with no fit parameters.
}
\end{figure*}
\end{widetext}
|
1,116,691,500,942 | arxiv | \section{Introduction}
Software engineering is a complex social activity that incorporates both communication and collaboration of developers and other stakeholders\cite{PalombaCSDetect}. The social and socio-technical features of development communities, along with the purely technical ones\cite{PalombaCSDetect}, have already been applied to predict and analyze software reliability\cite{defect_prediction} and maintainability\cite{PalombaCSImpact}. Inspired by the concept of code smell\cite{fowler1999refactoring}, Palomba \textit{et al.} \cite{PalombaCSDetect} coined the term “community smell” to describe unhealthy organizational structures of software development communities causing social debt\cite{Social_Debt,SocialDebt2}. Social debt, much like technical debt\cite{TechdEBT} in terms of its impact on software maintenance, refers to unforeseen project costs connected to the presence of non-cohesive development communities having communication or collaboration issues. Such smells may not be the direct cause of software defects; however, they have been proven to be refactoring preventers\cite{PalombaCSImpact}.
Community smells were evaluated and predicted at the granularity of community sub-groups \cite{PalombaCSDetect, PalombaCSImpact, PalombaCSPredict, PalombaMasters, KBS, ICGSE}. In contrast, restructuring smelly communities relies on the social efforts of every community member\cite{PalombaCSRefactoring}. Empirical guidelines for refactoring community smells, including monitoring, mentoring, planning, and exercising, were proposed to shepherd the community \cite{PalombaCSRefactoring,PalombaCSShepherding,PalombaCSStats}.
However, open source software development is known for its distributed and egalitarian nature\cite{oss}. Meanwhile, developers focus primarily on their working state and current tasks rather than other issues\cite{CONTEXT_BASED_REFACTORING_ICPC_2017, PalombaMSR, CONTEXT_DEV1,CONTEXT_DEV_2}. Different from the situation in hierarchical and centralized organizations, the orders and arrangements of community shepherds (\textit{e.g.}, core members\cite{linus}, architects\cite{PalombaCSShepherding}) may not be strictly executed and followed\cite{oss}. Moreover, shepherds may be leaving\cite{linus}, or even missing\cite{unmaintained}, from communities. Consequently, general guidelines for software quality that lack developer-oriented adaptive approaches may be ineffective in practice\cite{PalombaMSR}.
Since community smells root in developers' motives \cite{PalombaCSImpact}, research showed that personalities can also influence community smells\cite{PalombaCSStats}. Sentiment is an expression of personality\cite{personality} that can significantly affect the quality of work\cite{Ortu14MSREmotions,Ortu15MSRBullies,Ortu15Politeness,Ortu16MSRVAD,eeg}, and sentiments have been proven to have impacts on a variety of tasks including issue reopening\cite{Cheruvelil19}, commit changes\cite{sanerbug,Huq}, code refactoring\cite{Singh_APSEC17_TOOL_NEGATIVE_SENTIMENT}, and security\cite{security}. Previous research suggested that sentiment-aware classifiers should be built to support developers' work\cite{eeg}.
To the best of our knowledge, there is no community smell prediction model designed for individual developers. Moreover, existing quantitative research focused on analyzing the impact of community smells on code quality rather than on the social representations of developers. Thus, such models cannot support developers in preventing smells from occurring. As a result, we intend to improve prediction approaches and refactoring guidelines of community smells at the granularity of individual developers by involving their sentiments.
This paper builds a developer-oriented and sentiment-aware community smell prediction model. We develop machine learners upon a developer sentiment dataset\cite{Ortu16MSRDataset} to predict if a developer is affected by 3 common community smells, namely \textit{Lone Wolf}, \textit{Organizational Silo} and \textit{Bottleneck}\cite{PalombaCSDetect}. Furthermore, the model also predicts if a developer is a \textit{Smelly Quitter}\cite{defect_prediction}, \textit{i.e.}, quit the community after being affected by any of the community smells concerned. Finally, we discuss the difference between smelly and non-smelly developers, analyzing the significance of the difference in the distributions of sentimental features as well as its effect size.
The main contributions of this paper are listed as follows:
(1) To our knowledge, this is the first machine-learning based approach to integrate developer sentiments into community smell prediction, as well as the first work to predict community smells' occurrence on individual developers. The proposed model achieves a cross-project prediction performance with F-Measures ranging from 84\% to 93\%, and a within-project performance with F-Measures ranging from 76\% to 91\%;
(2) We investigate the features having the most predictive power. We discover that 6 sentimental features, \textit{i.e.}, imperative and indicative expressions, politeness, and several emotions, are stronger predictors than the activeness metrics;
(3) We statistically evaluate the relationship between community smells and sentiments, which leads us to the conclusion that developers should communicate in a straightforward and polite way to ensure community healthiness;
(4) We provide an online appendix \cite{replication} with the generated dataset to replicate and extend our work.
The rest of this paper is organized as follows: In Section II we summarize related
literature. Section III presents how we construct our dataset, while Section IV outlines the settings and research questions, as well as the concerned evaluation metrics. In Section V we discuss the results of
this study, while Section VI overviews the threats to the validity of
the study and our effort to cope with them. Finally, Section VII concludes
the paper and describes future research.
\section{Related Work}
This section describes research related to the two aspects of this paper, \textit{i.e.}, community smells and developer sentiments.
\subsection{Community Smell}
Tamburri, Palomba \textit{et al.} contributed a series of studies \cite{PalombaCSDetect, PalombaCSImpact, PalombaCSPredict, PalombaMasters, PalombaCSDiversity, PalombaCommunityPattern, PalombaCSRefactoring, PalombaCSStats} concerning the definition\cite{PalombaCSDetect, PalombaMasters}, detection\cite{PalombaCSDetect}, diffuseness\cite{PalombaCSDetect, PalombaCSDiversity}, and variability\cite{PalombaCSStats} of community smells, as well as their impact on software maintainability\cite{PalombaCSImpact}. The authors treat community smells as patterns of motifs over collaboration and communication graphs, and they implemented a detection tool called \textsc{Codeface4Smells} by extending a socio-technical analysis tool called \textsc{Codeface}. The tool accepts development mailing lists and software repository history information as input to detect 5 community smells, namely \textit{Organisational Silo}, \textit{Black Cloud}, \textit{Lone Wolf}, \textit{Bottleneck} and \textit{Prima Donnas}. The authors also qualitatively validated\cite{PalombaCSDetect} the acceptance of the detection results, and they discovered the results were all true positives\cite{PalombaCSPredict}. Furthermore, the authors observed that community smells can be preventers of refactoring, and that they also continuously intensify code smells\cite{PalombaCSImpact}. Alternatively, the authors also discovered associations between community patterns\cite{PalombaCommunityPattern}, \textit{e.g.}, \textit{Formal Group} and \textit{Informal Network}, and specific community smells.
In terms of analyzing community smells at the granularity of individual developers, Palomba \textit{et al.} also provided practitioners with refactoring suggestions and frameworks\cite{PalombaCSRefactoring}. In a recent study\cite{PalombaCSStats}, the authors pointed out that communicability is important for preventing community smells, and that developers' personalities play an important role in producing smells. Alternatively, Ahammed \textit{et al.} \cite{ahmed} conducted an exploratory study on 7 Apache projects and discovered that the commit activities of developers correlate with their involvement in the \textit{Missing Links} (\textit{Lone Wolf}) community smell.
The prediction of community smells has also been actively studied by the research community. Palomba and Tamburri \cite{PalombaCSPredict} built a state-of-the-art model to predict community smells' emergence in within- and cross-project scenarios. The research also pointed out that socio-technical congruence, communicability and turnover-related metrics are the most powerful predictors. Almarimi \textit{et al.}\cite{KBS,ICGSE} combined Ensemble Classifier Chain (ECC) and Genetic Programming (GP) techniques to predict the emergence of community smells, and they also introduced 4 novel community smells, \textit{i.e.}, \textit{Solution Defiance}, \textit{Sharing Villainy}, \textit{Organizational Skirmish}, and \textit{Truck Factor Smell}.
The major differences between this work and the above-mentioned smell prediction papers are: (1) we predict the occurrence of smells at the granularity of individual developers rather than sub-groups or communities; (2) we involve developers' sentiment features to build predictors, and we assess their predictive power and statistical characteristics in software projects, which were not covered in previous research.
\subsection{Developer Sentiment}
Ortu \textit{et al.} \cite{Ortu14MSREmotions,Ortu15MSRBullies,Ortu15Politeness,Ortu16MSRVAD,Ortu16MSRDataset} constructed a multi-aspect developer sentiment dataset based on \textsc{JIRA} comments and sentences, which is regarded as one of the golden datasets\cite{gold} for sentiment analysis in the software engineering domain. The authors also analyzed the impact of sentiments. For example, they found that impolite comments\cite{Ortu15Politeness,politepeerj} and bullies\cite{Ortu15MSRBullies} are related to longer issue fixing times. Meanwhile, certain combinations of VAD\cite{Ortu16MSRVAD} (Valence-Arousal-Dominance) scores may indicate longer issue resolution times, as well as productivity problems such as burnout. Such results are in line with Khan \textit{et al.}'s research \cite{KhanMood} reporting that developers perform better with a higher degree of arousal and valence after doing physical exercises. In further research on \textsc{GitHub} comments\cite{Ortu18UsersCommenters}, the authors also managed to differentiate comments made by users and developers using their emotions, sentiments and degree of politeness.
\begin{figure*}[htbp]
\centerline{\includegraphics[height=6cm ,width=14cm,angle=0]{fig1.pdf}}
\caption{Dataset construction and its usage.}
\label{fig}
\end{figure*}
Happiness is, in most cases, related to positive outcomes of software engineering\cite{graziotin2014happy,Graziotin_17_SEMOTION}. However, too many positive comments may also lead to buggy code\cite{Huq}. Graziotin \textit{et al.}\cite{Graziotin_17_SEMOTION, GraziotinJSS18} identified 49 consequences of unhappiness in their qualitative research. Internal consequences include low motivation, low cognitive performance, and work withdrawal. External ones include low code quality and code discharging, \textit{i.e.}, destroying the codebase. Studies also showed that negative emotions are signals of the appearance of bug fix-inducing changes, but they may result in longer issue-fixing times\cite{Huq} and issue reopening\cite{Cheruvelil19}. As for negative sentimental expressions in code refactoring, Singh \textit{et al.}\cite{Singh_APSEC17_TOOL_NEGATIVE_SENTIMENT} concluded that the poor tooling, error-proneness and high complexity of such activities can cause negative sentiments. In contrast, Islam \textit{et al.} \cite{SERA_16} reported opposite observations. Additionally, they discovered that the sentiment of code commits varies with the commit type, \textit{e.g.}, negative emotions are higher for new feature implementation.
To sum up, it is possible to detect developers' sentiments, and sentiments do affect the software development process. However, a generally accepted theory explaining the pattern of such effects has not yet been proposed.
\section{Dataset Construction}
This section describes how we export and process the developers' sentiment features and community smells' occurrences to define and train our machine learner in the next section. Fig. 1 depicts the origins and construction methods of our dataset, as well as its further usage in this paper.
\subsection{Extracting Developer Sentiment Features}\label{AA}
First, we recover and adjust the raw data, \textit{i.e.}, the sentiments of every developer of specific projects in the \textsc{JIRA} ITS, located in 2 tables called \texttt{jira\_issue\_comment} and \texttt{jira\_user}. Afterwards, we group the features by developers and projects to prepare for cross- and within-project prediction. Details of the features will be described in Section IV.
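For clarity, the following sketch (in \textsc{Python}, using \textsc{pandas}) illustrates this aggregation step; the storage format and column names are illustrative assumptions, and the actual schema should be checked against the dataset documentation\cite{Ortu16MSRDataset}.
\begin{verbatim}
# Hedged sketch of per-developer aggregation; the
# column names (author_id, valence, ...) and SQLite
# export are assumptions, not the documented schema.
import sqlite3
import pandas as pd

conn = sqlite3.connect("jira_sentiment.db")
comments = pd.read_sql(
    "SELECT * FROM jira_issue_comment", conn)

features = (comments
    .groupby(["project", "author_id"])
    .agg(VAL=("valence", "mean"),
         ARO=("arousal", "mean"),
         POL=("politeness", "mean"),
         SEN=("sentence_id", "count"))
    .reset_index())
\end{verbatim}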
\subsection{Selecting Projects and Fetching Mailing Lists}
\begin{table*}[htbp]
\centering
\caption{The Projects Analyzed in this Paper}
\begin{tabular}{c|c|c|c|c|c}
\hline
\multicolumn{1}{l|}{\textbf{Repository}} & \textbf{Project Name} & \multicolumn{1}{l|}{\textbf{Developers}} & \textbf{Date Range} & \multicolumn{1}{l|}{\textbf{Issue Reports}} &
\multicolumn{1}{c}{\textbf{ Mailing List Name}} \\
\hline
\textsc{ASF} & HBase & 919 & 2007-04 - 2014-04 & 68806 & hbase-dev\footnotemark[1]\\
\textsc{ASF} & Hadoop Common & 1221 & 2009-05 - 2014-02 & 61905 & commons-dev\footnotemark[1]\\
\textsc{ASF} & Hadoop HDFS & 745 & 2009-05 - 2011-04 & 42188 & hadoop-hdfs-dev\footnotemark[1]\\
\textsc{ASF} & Cassandra & 1161 & 2009-03 - 2014-03 & 41937 & cassandra-dev\footnotemark[1]\\
\textsc{ASF} & Hadoop Map/Reduce & 857 & 2009-05 - 2011-03 & 34747 & hadoop-mapreduce-dev\footnotemark[1]\\
\textsc{ASF} & Hive & 839 & 2008-09 - 2014-03 & 34449 & hive-dev\footnotemark[1]\\
\textsc{ASF} & Harmony & 306 & 2005-09 - 2011-07 & 28325 & harmony-dev\footnotemark[1]\\
\textsc{ASF} & OFBiz & 538 & 2006-07 - 2014-02 & 25667 & ofbiz-dev\footnotemark[1]\\
\textsc{JBoss} & Hibernate ORM & 3958 & 2007-06 - 2014-01 & 23549 & hibernate-dev\footnotemark[2]\\
\textsc{ASF} & Camel & 882 & 2007-04 - 2014-01 & 21758 & camel-dev\footnotemark[1]\\
\textsc{ASF} & Wicket & 1210 & 2006-10 - 2014-01 & 17030 & wicket-dev\footnotemark[1]\\
\textsc{ASF} & Zookeeper & 484 & 2005-09 - 2011-07 & 13634 & zookeeper-dev\footnotemark[1]\\
\hline
\end{tabular}%
\label{tab:addlabel}%
\end{table*}%
The sentiment dataset consists of project comments in the ITS of 4 major open source foundations, namely \textsc{ASF}, \textsc{Codehaus}, \textsc{JBoss}, and \textsc{Spring}. However, only 23 projects contain sentiment data. Among the 23 projects, we first exclude 3 projects whose mailing list service providers do not support archive extraction. Next, we exclude 4 more projects, as their mailing lists do not cover the time range of their sentiment data. Afterwards, we remove 3 software projects, as they share the same instances of the \textsc{JIRA} ITS or Version Control System (VCS) repository with other projects, and such projects are incompatible with \textsc{Codeface4Smells}. Finally, we also filter out 1 project whose VAD data is missing in the dataset.
To sum up, the actual 12 projects we use to perform the analysis are listed in Table I. We fetch their mailing lists from the archives provided by their open source foundations.
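As an illustration, the monthly archives can be fetched with a short \textsc{Python} script; the sketch below assumes the usual mod\_mbox layout of one .mbox file per month, and the exact paths should be verified for each list.
\begin{verbatim}
# Sketch: fetch monthly mbox archives (layout assumed).
import requests

BASE = "http://mail-archives.apache.org/mod_mbox"
for month in ("201401", "201402"):
    url = f"{BASE}/hbase-dev/{month}.mbox"
    r = requests.get(url, timeout=60)
    if r.ok:
        with open(f"hbase-dev-{month}.mbox", "wb") as f:
            f.write(r.content)
\end{verbatim}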
\subsection{Detecting community smells}
Although we are aware of the existence of other community smell detection tools or models \cite{ICGSE,KBS}, they are not publicly available. Thus, we apply the state-of-the-art tool \textsc{Codeface4Smells}\cite{PalombaCSDetect,PalombaMasters} to detect community smells.
We strictly follow the instructions of the \textsc{Codeface4Smells} repository, \textit{e.g.}, we execute the application in the suggested \textsc{Vagrant} instance, and we fix the broken dependencies in order to avoid platform-specific problems. The only modification is adding an export feature to the socio-technical analysis script to derive the names and e-mails of the developers affected by each community smell.
As for configurations, community smell analysis must be performed in a given window. In our case, the window is 3 months, as previous works suggested\cite{PalombaCSDetect,PalombaMasters,PalombaCSPredict}. As the tool's replication package did \cite{PalombaMasters}, we also specify every commit to analyze in configuration files; the commits are exported from the \textsc{Git} repositories of the projects using \textsc{PyDriller} \cite{pydriller}, which is also a commonly applied tool in software repository mining tasks\cite{PalombaMSR}. The configuration files are also provided in our replication package\cite{replication}.
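For instance, the commit lists can be exported with a few lines of \textsc{Python}; the sketch below uses the \textsc{PyDriller} 2.x API, and the output format shown is illustrative.
\begin{verbatim}
# Sketch: export commit hashes and dates for the
# Codeface4Smells configuration (format illustrative).
from pydriller import Repository

with open("commits.txt", "w") as out:
    repo = Repository("path/to/hbase")
    for c in repo.traverse_commits():
        out.write(c.hash + "," +
                  c.committer_date.isoformat() + "\n")
\end{verbatim}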
\textsc{Codeface4Smells} is able to detect 5 community smells, including \textit{Organization Silo}, \textit{Lone Wolf}, \textit{Bottleneck}, \textit{Black Cloud}, and \textit{Prima Donnas}. However, \textit{Prima Donnas} detection has not been empirically\cite{PalombaCSDetect} proven effective since its first appearance\cite{PalombaMasters}, so we do not take this community smell into consideration. \textit{Black Cloud} is sparsely distributed in software systems\cite{PalombaCSDetect}. In our dataset, there are several \textit{Black Cloud} appearances; however, the affected developers are not captured in the sentiment dataset, thus we cannot perform \textit{Black Cloud} prediction. Consequently, our research scope includes 3 community smells, namely \textit{Organizational Silo}, \textit{Lone Wolf}, and \textit{Bottleneck}. We also involve \textit{Smelly Developers} and \textit{Smelly Quitters} as additional prediction classes; the details will be described in Section IV.
\section{Design and Evaluation of a Developer-Oriented and Sentiment-aware Community Smell Prediction Model}
The goal of our study is to evaluate to what extent the occurrence of community smells on developers can be predicted by their sentiments, with the purpose of understanding the impact of sentiments on community smells at the granularity and from the perspective of developers themselves and, in the meantime, providing researchers and practitioners with novel aspects and suggestions on community healthiness. This section proposes our research questions and our methodology.
\footnotetext[1]{ http://mail-archives.apache.org/mod\_mbox/\{Mailing List Name\}}
\footnotetext[2]{ https://lists.jboss.org/archives/list/[email protected]/}
\subsection{Research Questions and Methodologies}
\begin{framed}
RQ1: To what extent can we predict the occurrence of community smells on developers using their sentiments?
\end{framed}
Starting from the dataset we built in Section III, we define dependent and independent variables, and we build prediction models using machine learning classifiers. To pick the most appropriate machine learner, and to figure out why our model works or fails, their performance must be assessed. Hence, we propose RQ2.
\begin{framed}
RQ2: What is the predictive power of sentiments to the occurrence of community smells on developers?
\end{framed}
We investigate further the predictive power of each feature to reveal the contribution of every sentimental feature to our prediction model. The conclusion may provide us with useful information to explain our model and to look deeply into the distribution of sentimental features, which leads us to RQ3.
\begin{framed}
RQ3: Are smelly and non-smelly developers different in terms of their sentiments?
\end{framed}
Apart from model performance metrics, we intend to explain how sentiments impact the healthiness of the development community according to their statistical characteristics, including data distributions and mean values, and to differentiate smelly and non-smelly developers. Furthermore, we attempt to make suggestions to practitioners based on our findings.
\begin{table*}[htbp]
\centering
\caption{Features Extracted from the Developer Sentiment dataset and \textsc{\textsc{Codeface4Smells}}-defined Metrics}
\begin{tabular}{|l|c|l|}
\hline
\multicolumn{3}{|l|}{\textbf{VAD.} Automatically detected as the summation of the mean values of sentences in comments, by extending lexicons\cite{Ortu16MSRVAD} in \textsc{SentiStrength} \cite{SentiStrengthOrig}.} \\
\hline
Mean Valence & VAL & The mean intensity of valence, \textit{i.e.}, how developers enjoy a situation. \\
Mean Arousal & ARO & The mean intensity of arousal, \textit{i.e.}, increased alertness. \\
Mean Dominance & DOM & The mean intensity of dominance, the extent that developers were feeling in control. \\
\hline
\multicolumn{3}{|l|}{\textbf{Emotions.} Binary value labeled manually in the dataset.} \\
\hline
Mean Sadness & SAD & The mean intensity of all sadness expressions. \\
Mean Anger & ANG & The mean intensity of all angry expressions. \\
Mean Love & LOV & The mean intensity of all love expressions. \\
Mean Joy & JOY & The mean intensity of all joy expressions. \\
\hline
\multicolumn{3}{|l|}{\textbf{Sentiment Strength.} Measured using \textsc{SentiStrength}\cite{SentiStrengthOrig}, ranges between [-1;1].} \\
\hline
Mean Positive Sentiment & POS & The mean intensity of all sentiments greater than 0. \\
Mean Negative Sentiment & NEG & The mean intensity of all sentiments smaller than 0. \\
\hline
\multicolumn{3}{|l|}{\textbf{Politeness.} Binary value measured using Danescu \textit{et al.}’s tool \cite{politeness}.} \\
\hline
Politeness Proportion & POL & The proportion of polite expressions in all commentary sentences of a developer. \\
\hline
\multicolumn{3}{|l|}{\textbf{Uncertainty.} 4 categorical Grammatical Moods (GM) and modality in [-1;1] measured using \textsc{Pattern} \cite{mood} by detecting auxiliary verbs and adverbs.} \\
\hline
Indicative GM Proportion & IND & The proportion of sentences that express fact or belief, \textit{e.g.}, It rains. \\
Imperative GM Proportion & IMP & The proportion of sentences that express command, warning, \textit{e.g.}, Do not rain! \\
Conditional GM Proportion & CON & The proportion of sentences in the form like would, may, or will, \textit{e.g.}, It might rain. \\
Subjunctive GM Proportion & SUB & The proportion of sentences in the form like wish, were, \textit{e.g.}, I hope it rains. \\
Mean Degree of Modality & MOD & The degree of uncertainty of a sentence, ranges between [-1;1]. \\
\hline
\multicolumn{3}{|l|}{\textbf{Activeness.} Control variables reflecting developers' working state other than sentiments.} \\
\hline
Total Sentences & SEN & Number of total sentences developers commented in the \textsc{JIRA} ITS, extracted from \cite{Ortu16MSRDataset}. \\
Core Ranges Count & COR & Number of total ranges developer acted as a core developer, measured by \textsc{Codeface4Smells}\cite{PalombaCSDetect, PalombaMasters}. \\
Sponsored Ranges Count & SPO & Number of total ranges developer committed only in working hours\cite{PalombaCSPredict}, measured by \textsc{Codeface4Smells}. \\
\hline
\end{tabular}%
\label{tab:addlabel}%
\end{table*}%
\subsection{RQ1: Defining and Evaluating the Proposed Model in Cross- and Within-project Scenarios}
\subsubsection{Dependent Variables} In this paragraph, we list the definitions of the 5 smell-related prediction classes of our model. The 3 concerned community smells are defined as follows:
\begin{itemize}
\item \textbf{Organizational Silo}: refers to the
presence of siloed areas of the developer community
that do not communicate, except through one or two
of their respective members\cite{PalombaCSDetect}, \textit{i.e.}, co-committing developers do not directly communicate at all\cite{PalombaMasters}.
\item \textbf{Lone Wolf}: reflects co-committing software developers
who exhibit uncooperative behaviour and mistrust by not
appropriately communicating\cite{PalombaCSDetect}, \textit{i.e.}, the collaboration edges that do not have a communication counterpart\cite{PalombaMasters}.
\item \textbf{Bottleneck}: unique boundary spanners interpose
themselves into every interaction across sub-communities\cite{PalombaCSDetect}.
\end{itemize}
Despite the 3 smells, we also export 2 related classes:
\begin{itemize}
\item \textbf{Smelly Developer}: developers affected by any of the 3 above-mentioned community smells.
\item \textbf{Smelly Quitter}: \textit{Smelly Developers} from the last analysis window who quit the community in the current window\cite{PalombaCSPredict}.
\end{itemize}
\begin{figure}[t]
\centerline{\includegraphics[height=3.5cm ,width=6.02cm,angle=0]{fig2.pdf}}
\caption{Lone Wolf and Organizational Silo identification pattern.}
\label{fig}
\end{figure}
Fig. 2 illustrates an example of community smell detection, in which smelly developers are marked in red. The left part of Fig. 2 shows a siloed area of developers, while the right part illustrates 2 lone wolves collaborating on shared code with indirect communication. To clarify, \textit{Organizational Silo} is a subset of \textit{Lone Wolf}\cite{PalombaMasters}. Technically, the major difference between the two smells lies in the definition of lacking connectivity in the communication graph. \textit{Organizational Silo} is detected if collaborating developers are disconnected in the communication graph. \textit{Lone Wolf} is detected if collaborating developers are not neighbours in the communication graph, \textit{i.e.}, they lack a 1-degree connection. Differently, \textit{Bottleneck} is detected based entirely on the communication graph, and its definition is remarkably similar to the unique
boundary spanner in social-network analysis \cite{PalombaCSDetect}.
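For illustration, the two collaboration-based motifs can be sketched in a few lines of \textsc{Python} using \textsc{networkx}; this is a simplified reading of the definitions, while \textsc{Codeface4Smells} operates on the full socio-technical graphs it mines.
\begin{verbatim}
# Simplified motif sketch over collaboration (collab)
# and communication (comm) graphs.
import networkx as nx

def lone_wolf(collab, comm):
    # collaborating pairs that are not neighbours
    # in the communication graph
    return [(u, v) for u, v in collab.edges()
            if not comm.has_edge(u, v)]

def org_silo(collab, comm):
    # collaborating pairs that are disconnected
    # in the communication graph (subset of above)
    return [(u, v) for u, v in collab.edges()
            if u not in comm or v not in comm
            or not nx.has_path(comm, u, v)]
\end{verbatim}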
Our motivation to involve \textit{Smelly Quitter} prediction is that the literature reported that negative sentiments may cause developers to ``destroy codebase'' \cite{Graziotin_17_SEMOTION, GraziotinJSS18}, and to ``quit over mistreatment'' \cite{Graziotin_14_QUIT}. Similarly, we assume they may also lead to the departure of developers from the community.
\subsubsection{Independent Variables} We integrate sentimental and activeness features in Table II as independent variables.
Similar to \cite{SERA_16}, we split positive and negative sentiment into 2 features, POS and NEG.
We do not include an impoliteness proportion feature because the politeness labels available in the dataset are either polite or impolite, which would result in high correlation if we involved both. We ignore the confidence of the politeness score since Ortu \textit{et al.} \cite{Ortu16MSRDataset} have already dropped the results with confidence less than a conventional threshold (0.5)\cite{Ortu15Politeness}.
The dataset also contains a mood column whose proper meaning was not clearly described in the original paper\cite{Ortu15MSRBullies}. We look into the documentation of the detection tool\cite{mood} and confirm that this feature measures the GM of a sentence, and that the result is mapped into 4 classes, \textit{i.e.}, \{indicative, imperative, conditional, subjunctive\}. In the dataset the 4 classes are represented by 4 values, namely \{0,1,2,3\}. We map the mood attribute into 4 proportional features to measure the developers' characteristics of expression.
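A minimal sketch of this mapping (in \textsc{Python}/\textsc{pandas}, with illustrative frame and column names) is the following:
\begin{verbatim}
# Map mood codes {0,1,2,3} to proportional features;
# the `sentences` frame and its columns are assumed.
import pandas as pd

MOODS = {0: "IND", 1: "IMP", 2: "CON", 3: "SUB"}
sentences["gm"] = sentences["mood"].map(MOODS)
gm_props = (sentences.groupby("author_id")["gm"]
            .value_counts(normalize=True)
            .unstack(fill_value=0.0)
            .reindex(columns=list(MOODS.values()),
                     fill_value=0.0))
\end{verbatim}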
Finally, we also introduce 3 activeness features including total sentences of developers commented on ITS (SEN) \cite{Ortu16MSRDataset}, the count of ranges that developers acted as core (COR) and sponsored (SPO) developers\cite{PalombaCSDetect,PalombaCSPredict}. They are used as control variables to assess the actual impact of sentimental features in comparison with activeness.
\subsubsection{Data Balancing and Feature Selection} Our dataset is highly imbalanced, which is in line with the observations in related research \cite{KBS,PalombaCSPredict}. Smelly developers account for 4.80\% of the overall developer population, which may hinder model performance. Therefore, we preprocess our data with a Random Undersampling strategy, which has been proven beneficial for imbalanced data in software engineering\cite{imbalanced}. We also address the potential multicollinearity problem by removing correlated features, as previous research proposed\cite{PalombaCSPredict}.
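The balancing step can be sketched with \textsc{imbalanced-learn} as follows (the feature matrix X and smell label y are assumed):
\begin{verbatim}
# Random undersampling of the majority (non-smelly)
# class; commonly applied to the training data only.
from imblearn.under_sampling import RandomUnderSampler

X_bal, y_bal = RandomUnderSampler(
    random_state=42).fit_resample(X, y)
\end{verbatim}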
\subsubsection{Scenarios} We build models separately in both cross-project and within-project scenarios. For cross-project tasks, we merge the developers' features regardless of the projects they are working on. Notably, the same developer appearing in multiple projects is treated as a separate instance for each project.
\subsubsection{Training Machine Learners}
Previous research in community smell and code smell detection showed that Random Forest was the best-performing classifier\cite{PalombaMSR,PalombaCSPredict}. The reason we evaluate the different classifiers again is that our data derive from individual developers, which is different from previous scenarios. Moreover, predictors have different characteristics in terms of design and effectiveness, \textit{i.e.}, the risk of overfitting and different execution speeds\cite{PalombaCSPredict}. We intend to discover the best classifier for our scenario.
We apply the \textsc{scikit-learn} package\cite{scikit} from \textsc{Python} to train machine learners using multiple classifiers that have been used in previous works\cite{PalombaMSR,PalombaCSPredict,smellbugprediction}, including Random Forest, Decision Tree, Support Vector Machine, Multilayer Perceptron, Adaboost, Naive-Bayes, and Logistic Regression.
As related work suggested\cite{PalombaMSR}, instead of using default settings, we configure the hyper-parameters of the classifiers by exploiting Exhaustive Grid Search with a 10-Fold Cross-Validation strategy to calculate the performance of every combination of parameters multiple times.
\subsubsection{Validation and Assessment} To validate the performance of each classifier, we employ a 10 $\times$ 10-Fold Cross-Validation strategy to make sure that we have unbiased and stable \cite{PalombaCSPredict,esd1} performance data in the cross-project scenario. For the within-project scenario, we apply Leave-One-Out Cross-Validation (LOOCV) according to \cite{PalombaCSPredict,esd1}, which is proven to be stable and the least biased in this scenario. Finally, we compute classical performance metrics including precision, recall, F-Measure, and AUC-ROC\cite{aucroc} to pick the best-ranked classifier in both scenarios. Thus, we can answer RQ1 quantitatively.
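The tuning and cross-project validation pipeline can be sketched with \textsc{scikit-learn} as follows; the parameter grid shown is illustrative, not the grid actually searched.
\begin{verbatim}
# Exhaustive grid search (10-fold CV) followed by a
# 10 x 10-fold validation; grid values illustrative.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (GridSearchCV,
    RepeatedStratifiedKFold, cross_val_score)

tuner = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300, 500],
                "max_depth": [None, 5, 10]},
    scoring="f1", cv=10)
tuner.fit(X_bal, y_bal)

rcv = RepeatedStratifiedKFold(n_splits=10,
                              n_repeats=10,
                              random_state=42)
f1 = cross_val_score(tuner.best_estimator_,
                     X_bal, y_bal,
                     scoring="f1", cv=rcv)
print(f1.mean(), f1.std())
\end{verbatim}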
\subsection{RQ2: Explaining the Predictive Power of Features}
To answer this RQ, we need to explain the extent of predictive power that each independent variable contributes to the best-performing classifier in RQ1.
First, we apply an information gain algorithm, which has already been used in previous research on community smells\cite{PalombaCSPredict} and code smells\cite{PalombaMSR}, to compute the gain provided by each feature within the model to the correct prediction of a dependent variable. The approach measures the amount of reduction in uncertainty, \textit{i.e.}, Shannon's Entropy, of the concerned model after involving a new feature. This paper applies the mutual information classification \cite{MI} implementation of the \textsc{scikit-learn} package, which is originally designed for feature selection and identical in definition to the Gain Ratio Feature Evaluation used by previous research\cite{PalombaCSPredict,PalombaMSR}. The algorithm ranks the features in descending order according to each one's contribution to the entropy reduction of the concerned model. Thus, the features contributing the most predictive power can be identified.
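This ranking step can be sketched as follows (X\_bal is assumed to be a \textsc{pandas} DataFrame carrying feature names):
\begin{verbatim}
# Rank features by mutual information (information
# gain) with respect to a smell-related label.
import pandas as pd
from sklearn.feature_selection import (
    mutual_info_classif)

gains = mutual_info_classif(X_bal, y_bal,
                            random_state=42)
ranking = pd.Series(gains, index=X_bal.columns)
print(ranking.sort_values(ascending=False))
\end{verbatim}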
Then, in order to assess the significance of the ranking of information gain, we also involve the Scott-Knott Effect Size Difference (SK-ESD) test. The Scott-Knott test \cite{SK} is a statistical measure to compare and differentiate model performance using a hierarchical clustering approach to group the means of assessment metrics, \textit{e.g.}, the F-Measures of multiple models. The Scott-Knott test assumes the input data distribution to be normal, and thus may be ineffective for non-normally distributed data. The ESD test is an enhanced version of the original Scott-Knott test that corrects the non-normal distribution of the input to make it comply with the requirements of the Scott-Knott test. Meanwhile, it uses Cliff's Delta\cite{cliff} as an effect size measure to merge groups having negligible effect sizes. The Scott-Knott and SK-ESD tests have already been applied in various studies on code smells \cite{PalombaMSR}, community smells \cite{PalombaCSPredict}, and software defects \cite{YangGe}. We use the original implementation of Tantithamthavorn \textit{et al.}\cite{esd1,esd2} available on \textsc{GitHub}.
Both information gain and SK-ESD algorithms are executed on each prediction class independently. We report the results from both cross- and within-project scenarios.
\subsection{RQ3: Investigating the Difference between Smelly and Non-Smelly Developers in Terms of Sentiments}
\begin{figure}[t]
\centerline{\includegraphics[height=7.53cm ,width=9cm,angle=0]{plot6.pdf}}
\caption{Correlation heat-map of features.}
\label{fig}
\end{figure}
\begin{figure*}[htbp]
\centerline{\includegraphics[height=6.5cm ,width=13cm,angle=0]{plot1.pdf}}
\caption{Cross-project prediction performance.}
\label{fig}
\end{figure*}
\begin{figure*}[htbp]
\centerline{\includegraphics[height=6.5cm ,width=13cm,angle=0]{plot2.pdf}}
\caption{Within-project prediction performance.}
\label{fig}
\end{figure*}
We take a closer look at the sentimental features' difference in distribution and intensity for smelly and non-smelly developers. We apply statistical measures, \textit{i.e.}, the Wilcoxon Ranksum Test ($\alpha$ = 0.05, p $\textless$ $\alpha$ for statistically significant \cite{scent}), to analyze the significance of the difference in distribution. Meanwhile, we calculate Cliff's Delta ($\delta$) for each test to measure the effect size, \textit{i.e.}, the extent of the difference, which is negligible when $\mid$$\delta$$\mid$ $\textless$ 0.147, small when 0.147 $\leq$ $\mid$$\delta$$\mid$ $\textless$ 0.33, medium when 0.33 $\leq$ $\mid$$\delta$$\mid$ $\textless$ 0.474, and large otherwise. Such measures have been applied in previous works on developer sentiment \cite{Ortu18UsersCommenters,Ortu16MSRVAD} and code smell\cite{scent}. We also report the mean and variance values of the features.
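These measures can be computed as in the following sketch, using \textsc{SciPy} for the rank-sum test and a direct implementation of Cliff's Delta; the smelly and non-smelly data frames are assumed.
\begin{verbatim}
# Wilcoxon rank-sum plus Cliff's delta for a feature.
import numpy as np
from scipy.stats import ranksums

def cliffs_delta(a, b):
    # delta = P(a > b) - P(a < b) over all pairs
    a, b = np.asarray(a), np.asarray(b)
    gt = (a[:, None] > b[None, :]).sum()
    lt = (a[:, None] < b[None, :]).sum()
    return (gt - lt) / (len(a) * len(b))

stat, p = ranksums(smelly["IMP"], nonsmelly["IMP"])
print(p < 0.05,
      cliffs_delta(smelly["IMP"], nonsmelly["IMP"]))
\end{verbatim}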
We expect to find statistical significance from our dataset to support our findings in RQ2, and to provide practitioners with advice to prevent community smells from occurring at an early stage.
\section{Result and Discussion}
In this section, we demonstrate and discuss the results of our experiment, answer the proposed research questions, and describe our findings.
\subsection{RQ1: Model Performance}
In Fig. 3, we assess the correlations of our features. Results show that the VAD features, \textit{i.e.}, VAL, ARO, DOM, correlate with each other ($\rho$\textgreater0.9). Although we find that removing any 2 of them does not cause a significant change in the classifier's performance (\textless 2\%), we still exclude VAL and DOM to avoid potential multicollinearity problems. We also exclude CON for the same reason. Consequently, we remove 3 features out of the 18 original ones.
We train multiple classifiers on the dataset and report the results of the best-performing one, \textit{i.e.}, Random Forest, in Fig. 4 and Fig. 5 for both cross- and within-project prediction; we include results for the 2 scenarios in the sets of box-plots. We also present the median of the weighted average performance of our models in Table III.
Due to inconsistent community smell appearances in different projects, the performance of within-project prediction is less stable than that of cross-project prediction, revealing potential drawbacks of our approach, such as a limited capability to deal with the cold-start problem, \textit{i.e.}, not having sufficient useful data to start with, which should be addressed in future work.
However, the median values of the model's performances show that the model still works in most cases.
\begin{framed}
Finding 1. Our prediction model could predict the occurrence of community smells on developers in most cases. In terms of cross-project prediction, our model reaches mean F-Measures ranging from 84\% to 93\%. Meanwhile, the model achieves F-Measures from 76\% to 91\% for within-project prediction.
\end{framed}
\subsection{RQ2: Features' Predictive Power}
\begin{table}[!b]
\centering
\caption{Weighted Average of Model Performance}
\begin{tabular}{c|c|c|c|c|c}
\hline
\textbf{Class} & \textbf{Scenario} & \textbf{Prec.} & \textbf{Rec.} & \textbf{F-Meas.} & \textbf{AUC-ROC} \\
\hline
\multirow{2}{*}{Bottleneck} & Within & .96 & .76 & .84 & .71 \\
& Cross & .82 & .73 & .76 & .68 \\
\hline
\multirow{2}{*}{Lone Wolf} & Within & .97 & .88 & .92 & .84 \\
& Cross & .98 & .85 & .89 & .80 \\
\hline
\multirow{2}{*}{Org. Silo} & Within & .98 & .90 & .93 & .86 \\
& Cross & .98 & .86 & .91 & .85 \\
\hline
\multirow{2}{*}{Smelly Dev.} & Within & .94 & .81 & .86 & .76 \\
& Cross & .96 & .78 & .86 & .77 \\
\hline
\multirow{2}{*}{Smelly Quitter} & Within & .98 & .87 & .92 & .82 \\
& Cross & .99 & .86 & .90 & .84 \\
\hline
\end{tabular}%
\label{tab:addlabel}%
\end{table}%
\begin{table}[!b]
\centering
\caption{Ranking and Mean Gain Ratio of Each Feature}
\begin{tabular}{c|c|c|c||cccc}
\hline
\textbf{Metric} & \textbf{Mean} & \textbf{W.-P.} & \textbf{C.-P.} & \multicolumn{1}{c|}{\textbf{Metric}} & \multicolumn{1}{c|}{\textbf{Mean}} & \multicolumn{1}{c|}{\textbf{W.-P.}} & \textbf{C.-P.} \\
\hline
\textbf{\textit{IMP}} & .18 & 1 & 1 & \multicolumn{1}{c|}{\textit{\textbf{ANG}}} & \multicolumn{1}{c|}{.12} & \multicolumn{1}{c|}{4} & 4 \\
\textbf{\textit{SAD}} & .18 & 1 & 1 & \multicolumn{1}{c|}{\textit{\textbf{POS}}} & \multicolumn{1}{c|}{.11} & \multicolumn{1}{c|}{5} & 5 \\
\textbf{\textit{POL}} & .18 & 1 & 1 & \multicolumn{1}{c|}{\textit{\textbf{MOD}}} & \multicolumn{1}{c|}{.08} & \multicolumn{1}{c|}{6} & 6 \\
\textbf{\textit{IND}} & .18 & 1 & 1 & \multicolumn{1}{c|}{\textit{\textbf{SUB}}} & \multicolumn{1}{c|}{.07} & \multicolumn{1}{c|}{7} & 7 \\
\textbf{\textit{LOV}} & .18 & 1 & 2 & \multicolumn{1}{c|}{\textit{\textbf{ARO}}} & \multicolumn{1}{c|}{.03} & \multicolumn{1}{c|}{8} & 8 \\
\textbf{\textit{JOY}} & .17 & 2 & 1 & \multicolumn{1}{c|}{COR} & \multicolumn{1}{c|}{.03} & \multicolumn{1}{c|}{9} & 9 \\
\textbf{\textit{NEG}} & .13 & 3 & 2 & \multicolumn{1}{c|}{SPO} & \multicolumn{1}{c|}{.00} & \multicolumn{1}{c|}{10} & 10 \\
SEN & .13 & 3 & 3 & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \\
\hline
\end{tabular}%
\label{tab:addlabel}%
\end{table}%
Table IV lists the information gains that the features contribute to our prediction model in descending order. Metrics in bold and italic are \textbf{\textit{Sentimental Features}}. Columns W.-P. and C.-P. list the SK-ESD rank of each feature's gain for the within- and cross-project models. Fig. 6 and Fig. 7 depict the mean (in solid lines) and median (in dashed lines), as well as the rank of each feature's gain. Metrics given the same rank are marked in the same color. The ranks of information gains in different classes are similar, so we do not report them due to space limitations. Therefore, the gains presented in this section describe the contribution of the metrics to the overall performance of our model.
Every sentimental feature has some contribution to the model (mean Gain Ratio $\geq$ 0.033), and their contributions are evenly distributed and moderate. Meanwhile, the rankings are slightly different in the 2 scenarios. However, 6 metrics, \textit{i.e.}, IMP, SAD, POL, IND, LOV, JOY, are always stronger predictors than any of the activeness features. Meanwhile, they are also among the highest-ranked ones.
In particular, imperative GM is the strongest factor among all the features in terms of predictive power, while indicative GM is in the same rank, showing that certainty is a determining aspect of community healthiness.
Both positive and negative emotions, including sadness, love, and joy, are related to community smell issues.
POL comes after IMP and SAD, which is in line with research \cite{politeness,politepeerj,Ortu15Politeness,Ortu15MSRBullies} suggesting that politeness may have an impact on developers' productivity.
The above-mentioned sentimental features contribute more than the activeness ones, \textit{i.e.}, SEN, COR and SPO, indicating that the centrality and discussion activeness of developers are weaker predictors than several sentiments. Recent work \cite{ahmed} also reported that the number of commits is related to community smells' occurrence. In response, we plan to investigate the impact of traditional process metrics on developers' working state in the software community. Since COR measures the centrality of developers in the communication and collaboration graphs, we assume it is likely to be correlated with commit activeness over software systems.
\begin{figure}[!b]
\centerline{\includegraphics[height=3.3cm ,width=10cm,angle=0]{plot4.pdf}}
\caption{Within-project Information gain classified by SK-ESD.}
\label{fig}
\end{figure}
\begin{figure}[!b]
\centerline{\includegraphics[height=3.3cm ,width=10cm,angle=0]{plot5.pdf}}
\caption{Cross-project Information gain classified by SK-ESD.}
\label{fig}
\end{figure}
\begin{framed}
Finding 2. In terms of community smells' occurrence on developers, 6 sentimental features are stronger predictors than activeness ones. Imperative and indicative GM are among the strongest predictors, indicating that certainty is a key factor in community healthiness. Politeness and both positive and negative emotions, including sadness, love, and joy, are also powerful predictors.
\end{framed}
\begin{table*}[htbp]
\centering
\caption{Result of Statistical Analysis}
\begin{tabular}{l|r|r|r|r|r|r|r|r|r|r|r|r}
\hline
\textbf{Features} & \multicolumn{2}{c|}{\textbf{IMP}} & \multicolumn{2}{c|}{\textbf{POL}} & \multicolumn{2}{c|}{\textbf{LOV}} & \multicolumn{2}{c|}{\textbf{JOY}} & \multicolumn{2}{c|}{\textbf{NEG}} & \multicolumn{2}{c}{\textbf{IND}} \\
\hline
Wilcoxon R. & \multicolumn{2}{c|}{p\textless0.001} & \multicolumn{2}{c|}{p\textless0.001} & \multicolumn{2}{c|}{p\textless0.001} & \multicolumn{2}{c|}{p\textless0.001} & \multicolumn{2}{c|}{p\textless0.001} & \multicolumn{2}{c}{p\textless0.001} \\
\hline
Cliff's Delta & -0.48 & \multicolumn{1}{c|}{L} & -0.25 & \multicolumn{1}{c|}{S} & 0.34 & \multicolumn{1}{c|}{M} & 0.38 & \multicolumn{1}{c|}{M} & -0.09 & \multicolumn{1}{c|}{-} & -0.27 & \multicolumn{1}{c}{S} \\
\hline
\textbf{Smelly Dev.?} & \multicolumn{1}{c|}{\textbf{Y}} & \multicolumn{1}{c|}{\textbf{N}} &
\multicolumn{1}{c|}{\textbf{Y}} & \multicolumn{1}{c|}{\textbf{N}} &\multicolumn{1}{c|}{\textbf{Y}} & \multicolumn{1}{c|}{\textbf{N}} &\multicolumn{1}{c|}{\textbf{Y}} & \multicolumn{1}{c|}{\textbf{N}} &\multicolumn{1}{c|}{\textbf{Y}} & \multicolumn{1}{c|}{\textbf{N}} &\multicolumn{1}{c|}{\textbf{Y}} & \multicolumn{1}{c}{\textbf{N}} \\
\hline
Mean & 0.09 & 0.26 & 0.54 & 0.67 & 0.14 & 0.13 & 0.08 & 0.02 & -0.16 & -0.17 & 0.65 & 0.77 \\
\hline
Variance & 0.01 & 0.08 & 0.04 & 0.08 & 0.04 & 0.09 & 0.06 & 0.03 & 0.01 & 0.02 & 0.33 & 0.06 \\
\hline \hline
\textbf{Features} & \multicolumn{2}{c|}{\textbf{SAD}} & \multicolumn{2}{c|}{\textbf{ANG}} & \multicolumn{2}{c|}{\textbf{POS}} & \multicolumn{2}{c|}{\textbf{MOD}} & \multicolumn{2}{c|}{\textbf{SUB}} & \multicolumn{2}{c}{\textbf{ARO}} \\
\hline
Wilcoxon R. & \multicolumn{2}{c|}{p\textless0.001} & \multicolumn{2}{c|}{p\textless0.001} & \multicolumn{2}{c|}{p\textless0.001} & \multicolumn{2}{c|}{p\textgreater0.05} & \multicolumn{2}{c|}{p\textless0.001} & \multicolumn{2}{c}{p\textless0.001} \\
\hline
Cliff's Delta & 0.13 & \multicolumn{1}{c|}{-} & -0.50 & \multicolumn{1}{c|}{L} & 0.15 & \multicolumn{1}{c|}{S} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & -0.53 & \multicolumn{1}{c|}{L} & -0.08 & \multicolumn{1}{c}{S} \\
\hline
\textbf{Smelly Dev.?} & \multicolumn{1}{c|}{\textbf{Y}} & \multicolumn{1}{c|}{\textbf{N}} &
\multicolumn{1}{c|}{\textbf{Y}} & \multicolumn{1}{c|}{\textbf{N}} &\multicolumn{1}{c|}{\textbf{Y}} & \multicolumn{1}{c|}{\textbf{N}} &\multicolumn{1}{c|}{\textbf{Y}} & \multicolumn{1}{c|}{\textbf{N}} &\multicolumn{1}{c|}{\textbf{Y}} & \multicolumn{1}{c|}{\textbf{N}} &\multicolumn{1}{c|}{\textbf{Y}} & \multicolumn{1}{c}{\textbf{N}} \\
\hline
Mean & 0.27 & 0.33 & 0.03 & 0.19 & 0.21 & 0.20 & 0.58 & 0.57 & 0.03 & 0.13 & 1.00 & 1.03 \\
\hline
Variance & 0.08 & 0.24 & 0.00 & 0.13 & 0.01 & 0.02 & 0.02 & 0.06 & 0.01 & 0.06 & 0.03 & 0.10 \\
\hline
\end{tabular}%
\label{tab:addlabel}%
\end{table*}%
\begin{figure*}[!htbp]
\centerline{\includegraphics[height=4.5cm ,width=14.5cm,angle=0]{plot3.pdf}}
\caption{Distribution of the features.}
\label{fig}
\end{figure*}
\subsection{RQ3: Smelly vs. Non-Smelly Developers}
Fig. 8 depicts the distribution of sentimental features using an enhanced version of box-plot called boxen-plot \cite{boxenplot}, which is more capable of displaying tails of large-sampled data, as it cuts data into more quantiles.
Table V lists the results of the statistical analysis conducted to differentiate the sentimental characteristics derived from our dataset, in which \{Y, N\} represent smelly and non-smelly developers. Effect sizes in \{Large, Medium, Small, Negligible\} are mapped to \{L, M, S, -\}. Since we analyze the distribution of each metric's data separately, zeros are removed for each group of metrics in this section to ensure the validity of the data.
Since predictive power can speak for the actual impact of features on the prediction outcome \cite{PalombaCSPredict, PalombaMSR}, we reveal that IMP, IND, POL, SAD, LOV, and JOY may have impacts on community smells, which we analyze and discuss in the next paragraphs.
The IMP feature is the top-ranked feature in terms of predictive power. From experience, we assumed instructive expressions to be more likely to be serious and impolite, which would become an obstacle to cooperation. However, the results show that the distribution of the proportion of imperative GM is significantly different for smelly and non-smelly developers, with a large effect size. The results also reveal that non-smelly developers use imperative expressions, \textit{i.e.}, commands and warnings, 3 times more frequently. Moreover, indicative GM is also among the top contributing features, and smelly developers use fewer indicative expressions than non-smelly ones. Hence, we conclude that ensuring certainty is vital to developers' communication and collaboration quality. This conclusion is in line with observations made from purely technical aspects\cite{uncertainty}. Interestingly, non-smelly developers also use more subjunctive expressions to express wishes and opinions, indicating that the importance of certainty may differ across scenarios.
The difference in politeness, however, does not have as great an effect size as we expected, \textit{i.e.}, in our dataset, smelly developers communicate in a similar structure to non-smelly developers in terms of politeness. Nevertheless, non-smelly developers generally use polite expressions 24\% more frequently than smelly developers.
Previous research \cite{HAPPY_PULL_REQUEST_MERGED,graziotin2014happy} on developers' sentiments mainly regards happiness as a positive factor in development tasks. Surprisingly, expressing more joyful emotion may also indicate community smells' occurrence. Meanwhile, we also notice literature reporting that too many positive comments \cite{Huq} indicate potential bugs. In contrast, negative emotions have always been an indicator of potential problems in software engineering\cite{eeg}. Unexpectedly, except for anger, no notable differences are found between smelly and non-smelly developers in terms of negative sentiments. According to their mean values, non-smelly developers are even more bad-tempered and depressed than the smelly ones. Due to the multi-faceted and complex nature of sentiments and development activity, such difficulties in interpretation occur frequently in developer sentiment analysis \cite{Ortu16MSRVAD,hawaii,linus,Cheruvelil19,Ortu15Politeness}.
Practically, we suggest that developers communicate in a straightforward and polite way. As for emotions, we cannot provide suggestions until further empirical investigations and case studies are conducted to figure out the actual impact of positive and negative emotions. Indeed, the unforeseen part of the results sheds light on the necessity of a deeper understanding of developers' sentiments and their impact on the software community through qualitative and quantitative studies. Beyond technical aspects, the research community needs to improve and reshape the framework for comprehending developers' perception\cite{PalombaMSR, CONTEXT_DEV_2,CONTEXT_DEV1,sensitive} as well as their task context\cite{psy, CONTEXT_BASED_REFACTORING_ICPC_2017,CONTEXT_2,CONTEXT_3}.
\begin{framed}
Finding 3. Smelly developers differ from non-smelly ones in terms of sentiments. They are less polite, and they use less imperative and indicative GM, \textit{i.e.}, fewer certain statements, instructions, and warnings. Unexpectedly, they express more positive and less negative emotions. To ensure community healthiness, we suggest that developers communicate in a straightforward and polite way.
\end{framed}
\section{Threats to Validity}
This section clarifies the way we address threats to validity.
\subsection{Construct Validity}
The major threat to construct validity is the reliability of our datasets. We combine two sources of information, \textit{i.e.}, community smell detection results and a developer sentiment dataset.
In terms of community smell detection, we employ an open-source tool called \textsc{Codeface4Smells} \cite{PalombaCSDetect}. The authors provided detailed replication data to prove the dependability of the tool, \textit{i.e.}, its outputs were all true positives. Hence, we believe the tool is reliable. In addition, we strictly follow the installation, configuration, and execution guides \cite{PalombaMasters} of the detection tool. As for software repositories and mailing lists, we fetch them from the original sources of \textsc{ASF} and \textsc{JBoss}. To a great extent, we can confirm the reliability of these sources.
The developer sentiment dataset was proposed and improved progressively by Ortu \textit{et al.} \cite{Ortu16MSRDataset,Ortu14MSREmotions,Ortu15MSRBullies,Ortu15Politeness,Ortu16MSRVAD} through years of validated work. Except for the emotions (joy, love, anger, sadness), all data are automatically detected by lexicon-based tools, whose performance may be a threat to validity. Indeed, no tool is ready for detecting sentiments in all kinds of discussions in software engineering \cite{cross-platform}, \textit{e.g.}, app reviews and Q\&As. However, the tools employed by the dataset are all state-of-the-art; \textit{e.g.}, \textsc{SentiStrength} \cite{SentiStrengthOrig} achieved performance ranging from 0.70 to 0.99 in positive and negative sentiment detection in the \textsc{JIRA} ITS \cite{CHALLENGE_BUT_SENTISTRENGTH_HAS_RELIABLE_OUTPUT_ON_JIRA_ICSE18}, and its reliability has also been proven in other research \cite{gold,mlSentiStrength}. Thus, we conclude that the reliability of the sentiment dataset in the context of our research is acceptable.
The process of combining the two datasets, however, may cause some loss of information. To match the developers in both datasets, we make our best effort to link developers from both sides by their e-mails and names. However, 8.8\% of the smelly developers are not found in the sentiment dataset, so their data are dropped. Nevertheless, we still manage to preserve most of the data.
The coherence of mailing lists and developer comments in \textsc{JIRA} may be a threat as well. Therefore, we investigate the contents of the mailing lists. In most cases (8 out of 12), they are automatically generated \textsc{JIRA} discussions; otherwise, they are \textsc{JIRA}-centered discussions, \textit{i.e.}, comments attaching hyperlinks to \textsc{JIRA} tickets. The involvement of ITS contents in mailing lists was also observed in other research \cite{mlcontent}. In conclusion, we are measuring discussions in the same context presented in different layouts.
\subsection{Conclusion Validity}
The mix of manually labeled emotions and automatically detected sentiments in the dataset may hinder the practical value of our model, as the evaluation of sentiments is not fully automatic. However, such labels could be replaced by the outputs of reliable sentiment evaluation tools\cite{EmoTxt,emoji}.
In terms of the reliability of the model settings, we configure the hyper-parameters using Grid Search, and we employ LOOCV as well as 10 $\times$ 10-fold cross-validation, which previous research has shown to be stable \cite{esd1}. We also report results for classical evaluation metrics, \textit{i.e.}, precision, recall, F-Measure, and AUC-ROC. Furthermore, we apply statistical tests, \textit{e.g.}, SK-ESD and effect sizes, to validate the significance of our conclusions.
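To make the setup concrete, the following scikit-learn sketch mirrors the shape of this validation pipeline; the classifier, hyper-parameter grid, and data are placeholders, not the exact configuration used in this study.
\begin{verbatim}
# Schematic of the validation setup: Grid Search for hyper-parameters,
# 10 x 10-fold cross-validation, and the four reported metrics.
# Classifier, grid, and data are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (GridSearchCV,
                                     RepeatedStratifiedKFold,
                                     cross_validate)

X, y = make_classification(n_samples=400, weights=[0.8],
                           random_state=0)          # stand-in data
model = GridSearchCV(RandomForestClassifier(random_state=0),
                     param_grid={'n_estimators': [50, 100],
                                 'max_depth': [None, 10]},
                     scoring='f1', cv=5)
outer_cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10,
                                   random_state=0)
scores = cross_validate(model, X, y, cv=outer_cv,
                        scoring=['precision', 'recall',
                                 'f1', 'roc_auc'])
print({m: scores['test_' + m].mean()
       for m in ['precision', 'recall', 'f1', 'roc_auc']})
\end{verbatim}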
\subsection{External Validity}
Since different projects may involve developers with various backgrounds \cite{PalombaCSDiversity,novice}, the multi-faceted nature of software development and developer sentiment is an unavoidable threat to external validity. To address this issue, we perform our study on 12 active open-source projects from 2 major open-source ecosystems to maximize the generality of our conclusions. Such systems have been widely studied in previous works of software engineering \cite{smellbugprediction,PalombaMSR,apachedebt}.
\section{Conclusion}
This paper investigates whether, and to what extent, the occurrence of community smells on developers can be predicted from their sentiments. Furthermore, it analyzes the differences between smelly and non-smelly developers in terms of the distribution and intensities of their sentiments.
We construct sentimental features from a developer sentiment dataset\cite{Ortu16MSRDataset} as independent variables. Meanwhile, we exploit \textsc{Codeface4smells}\cite{PalombaMasters,PalombaCSDetect} to generate dependent variables concerning whether a developer is affected by 3 community smells, \textit{i.e.}, \textit{Lone Wolf}, \textit{Organization Silo}, and \textit{Bottleneck}, and if he/she is a \textit{Smelly Quitter}. Afterwards, we also assess the predictive power of all the features. Finally, we evaluate the significance and effect sizes of the distributions of sentiments on smelly and non-smelly developers.
The results show that our model achieves mean F-Measures ranging from 76\% to 93\% in within- and cross-project prediction. Additionally, sentimental features including imperative and indicative GM, politeness, sadness, love, and joy are stronger predictors than the activeness metrics. We reveal that smelly developers are less polite and use fewer certain statements, instructions, and warnings. Unexpectedly, we also discover that smelly developers express more positive and less negative emotions, which needs to be interpreted in further research. To conclude, we suggest that developers communicate in a straightforward and polite way.
Future work includes: (1) integrating more effort- and process-aware metrics, (2) involving other kinds of discussions such as chat messages \cite{chat}, and (3) interpreting the pattern of positive and negative emotions' interaction with social debt.
\section*{Acknowledgment}
This work is partially supported by the NSF of China under grants No. 61772200, the Shanghai Natural Science Foundation No. 17ZR1406900, 17ZR1429700, and the Software and Integrated Circuit Industry Development Special Funds of Shanghai Economic and Information Commission under Grant No. XX-XXFZ-02-20-2463.
\newpage
\bibliographystyle{ieeetr}
\section{Introduction}
Protoplanetary disks around young stars have shown various structures in their thermal dust continuum emission. Observing these structures and understanding their origin is necessary to understand the chemical, physical, and dynamical processes that are ongoing in a protoplanetary disk. Of the structures present in dust emission, rings and gaps are the most common \citep[e.g., ][]{2015ApJ...808L...3A, DSHARP_Andrews, DSHARP_Huang_Annular, 2018ApJ...869...17L, 2018A&A...610A..24F}, although arcs and spirals have also been observed in some systems \citep[e.g., ][]{2018ApJ...860..124D,DSHARP_Huang_Spirals}.
Observing with instruments such as the Atacama Large Millimeter/submillimeter Array (ALMA) is crucial to our understanding of planet-formation mechanisms, as we can observe at wavelengths that trace continuum emission from the cold midplane \citep[e.g.,][]{2014prpl.conf..339T}, where we expect planets to be forming or have already formed. In the case of midplane spiral structures, their origin may be linked to the presence of a companion: stellar, fly-by, or planetary \citep[]{2015MNRAS.453.1768P,2018ApJ...860L...5F, 2019MNRAS.483.4114C, 2018ApJ...859..118B, 2018ApJ...860..124D, 2020A&A...639A..62K}. Spirals may also be excited if the system is gravitationally unstable. Gravitational instability is expected in cool and massive disks, where the disk-to-star mass ratio is larger than 0.1 \citep[][]{1997ApJ...486..372B, 2001ApJ...553..174G, 2004MNRAS.351..630L, 2016ARA&A..54..271K, 2016MNRAS.458..306H, 2016PASA...33...12R, 2020MNRAS.493.2287Z, 2019ApJ...871..228H, 2009MNRAS.393.1157C}. To date, not many spirals in dust continuum emission have a clear origin, except for those in multiple systems where the presence of spirals has been linked to stellar interactions \citep[]{DSHARP_Nico, 2020MNRAS.491.1335R}. On the other hand, there are disks where spirals have been reported at millimeter wavelengths and where no companion to which the spiral origin may be linked has been detected yet \citep[to date these are Elias 27, IM Lup, WaOph 6, and MWC 758, ][]{laura_elias,DSHARP_Huang_Spirals, 2018ApJ...860..124D}. If no companion is detected and the disk is massive compared to the host star mass, the gravitational instability (GI) scenario arises as a possible explanation for the origin of the observed spirals. Studying disks undergoing GI is important, as population synthesis models show that GI primarily ends up forming brown dwarf mass objects \citep[]{2018MNRAS.474.5036F, 2017MNRAS.470.2517H}. It seems that giant planet formation through GI is rare \citep{2015MNRAS.454.1940R}, but it may still be the formation mechanism for important systems like HR 8799 \citep{2017A&A...603A...3V}.
Elias 2-27 is a young (0.8 Myr) M0 star \citep{2009ApJ...700.1502A} located at a distance of 116$^{+19}_{-10}$ pc \citep{2018A&A...616A...1G} in the $\rho$ Oph star-forming region \citep{1999ApJ...525..440L}. It harbors an unusually massive protoplanetary disk: the disk-to-star mass ratio of Elias 2-27 is reported to be $\sim$ 0.3 \citep[]{2009ApJ...700.1502A, laura_elias}. The initial detection of two large-scale spiral arms was obtained with medium-resolution ALMA observations by \cite{laura_elias}. Due to the brightness and accessibility of the source, it became one of the Disk Substructures at High Angular Resolution Project \citep[DSHARP,][]{DSHARP_Andrews} targets, allowing further analysis of the dust emission at high resolution. Its distinctive morphology consists of two extended quasi-symmetric spiral arms and a gap, 14\,au wide, located at 69\,au from the star \citep{DSHARP_Huang_Annular,DSHARP_Huang_Spirals}. Due to its characteristic structure, the system has been the subject of several theoretical studies, which conclude that GI is a possible origin of the spiral arms \citep[]{2018MNRAS.477.1004H,2018ApJ...860L...5F, 2017ApJ...839L..24M, 2018ApJ...859..119B}. Though GI seems to explain the spiral morphology, it does not explain the dust gap, which could be carved by a companion of $\sim$0.1 M$_J$, as constrained in hydrodynamical simulations by \cite{DSHARP_Zhang}. Localized deviations from Keplerian motion at the location of this dust gap have recently been found, strengthening the hypothesis of a planetary-mass companion in the gap \citep{2020ApJ...890L...9P}. However, a lower mass inner companion, such as the one proposed to open the gap, would not be able to excite the observed spiral arms \citep{2017ApJ...839L..24M}.
Overall, Elias 2-27 appears to be a strong candidate to be a gravitationally unstable protoplanetary disk, but there are many tests to be done in order to determine if this is in fact the origin of the observed spirals. GI spirals will create pressure enhancements where we expect solids to be trapped and grain-growth favored \citep{2004MNRAS.355..543R, 2005prpl.conf.8560R, 2015MNRAS.451..974D}. This will not occur in the case of a companion, as companion-induced spirals will co-rotate with the planet at its Keplerian speed, faster than the background gas flow at their location, prohibiting dust growth and accumulation \citep{2015MNRAS.451.1147J}. Another morphological signature is the expected symmetry of the spirals produced by GI, which should have a constant pitch angle in a logarithmic spiral model \citep{2018ApJ...860L...5F}. Thus, measurements of dust growth signatures together with symmetric, constant pitch angle, logarithmic spirals, in a protoplanetary disk point towards a GI scenario.
Additionally, valuable dynamical information may be obtained from gas observations. The presence of planets or companions leaves distinct footprints in the kinematics, and these perturbations may be constrained by the amplitude of the gas deviations from the expected Keplerian motion of an unperturbed disk. The current state-of-the-art methods range from tracing pressure gradients \citep{2018ApJ...860L..12T} and observing deviations from expected isovelocity curves in the channel maps ('kinks') \citep[]{2019NatAs...3.1109P, 2018ApJ...860L..13P, 2020ApJ...890L...9P, 2015ApJ...811L...5P} to using the mean velocity maps to model the velocity structure of the disk and detect Doppler flips in the residuals \citep[]{2020ApJ...889L..24P, 2018MNRAS.480L..12P}. For GI spirals, \cite{2020arXiv200715686H} characterizes the presence of a ``GI-wiggle'' that, contrary to companion-disk interactions, will not be spatially localized but will rather be a large-scale perturbation, present throughout a wider velocity range and co-located with the spirals. Analyzing the disk kinematics complements the analysis of the observed dust structures and allows us to understand and connect the various ongoing processes.
Previously published gas observations of Elias 2-27 in the $^{12}$CO and $^{13}$CO $J=2-1$ transitions show heavy absorption, as the star is quite embedded in its cloud \citep[]{2009ApJ...700.1502A, laura_elias, DSHARP_Andrews,2020ApJ...890L...9P}. In this study we present $^{13}$CO and C$^{18}$O observations in the $J=3-2$ transition. The higher energy transition and lower abundance of the isotopologues allow us to avoid some of the cloud contamination, while also probing closer to the midplane than previous works.
The present paper offers new observational constraints on Elias 2-27 and is organized as follows. Section 2 provides an overview of the calibration and imaging of the observations. Section 3 analyzes the spirals in the multi-wavelength continuum data. Section 4 studies the $^{13}$CO and C$^{18}$O emission through a geometrical analysis of the moment maps and the localization of perturbations in channel maps. Section 5 presents the analysis of hydrodynamical simulations computed for a GI disk using the derived observational parameters of Elias 2-27. Section 6 discusses the results and determines the possible origin of the spirals. Finally, Section 7 summarizes the main findings of this work.
\section{Observations}
We present multi-wavelength (Band 3, 6 and 7) dust continuum and spectral line ($^{13}$CO $J = 3-2$ and C$^{18}$O $J = 3-2$) ALMA data of Elias 2-27. In the case of the Band 6 (1.3\,mm) observations, the imaged data correspond to those presented in \cite{laura_elias}; detailed information regarding the calibration of this dataset may be found in the original publication. For the Band 7 (0.89\,mm) and Band 3 (3.3\,mm) observations, the dust continuum data was first calibrated through the ALMA pipeline, and afterwards phase and amplitude self-calibration was applied. Additionally, following the calibration procedure described in \cite{DSHARP_Andrews}, we applied astrometric and flux scale alignment to correct for spatial offsets between the centers of emission of different observations and for relative flux scale differences. In both bands we have short (15\,m-313.7\,m in Band 7, 15.1\,m-2.5\,km in Band 3) and long (15.1\,m-1.4\,km in Band 7, 21\,m-3.6\,km in Band 3) baseline observations, in order to account for all of the emission from different spatial scales of the disk. Spatial offsets are corrected by locating the center of emission through a Gaussian fit in the image plane of each observation and adjusting phase and pointing centers with the Common Astronomy Software Applications \citep[CASA,][]{2007ASPC..376..127M} tasks \textsc{fixvis} and \textsc{fixplanets}, respectively. The absolute flux calibration uncertainty of ALMA data is expected to be $\sim$10$\%$, and we note that after self-calibration the relative flux scales are consistent within $\sim$5$\%$ for different observations in the same band. To adjust these amplitude scale differences we compare the deprojected, azimuthally averaged visibility profiles and scale them using the short-baseline data as reference for the \textsc{gaincal} task, following the methodology in \citet{DSHARP_Andrews}. For the Band 7 data we apply the spatial offset corrections and amplitude scaling before any self-calibration, as relative flux scales vary $5-10\%$ between observation sets. In the Band 3 data set we have a higher flux scale difference between the initial datasets ($\sim$20$\%$), probably due to atmospheric conditions, as the image quality is visibly degraded. Therefore, in Band 3 we first self-calibrate the short-baseline observation and afterwards, once the flux scale difference is reduced to $\sim$3$\%$, we perform the spatial and amplitude scale corrections.
Self-calibration was first conducted on the short-baseline dust continuum data, and the result was then combined with the long-baseline data to be self-calibrated jointly. The selected time intervals for both phase and amplitude self-calibration start with the longest available total integration period for the initial round; afterwards, each following time interval was half of the previous one. The longest and shortest intervals used were 1500s and 46s for the phase calibration of the joint 0.89\,mm data, and 2860s and 178s for the phase calibration of the joint 3.3\,mm data. For the calibration of the short-baseline data, the time intervals of phase calibration were between 1080s and 270s for the 0.89\,mm data, and 304s and 9.5s for the 3.3\,mm data. In the case of the amplitude self-calibration time intervals, in Band 7 only one round was required for both the short-baseline and joint data, so the solutions were obtained from the longest time interval. In Band 3, the short-baseline data had one round of amplitude calibration and the joint data set had two rounds. Self-calibration rounds were applied until the signal-to-noise ratio (SNR) improvement was less than 5$\%$ in the case of the Band 7 data and until there was no SNR improvement in the case of Band 3. For the short-baseline data at 0.89\,mm, we conducted 3 rounds of phase calibration and 1 round of amplitude calibration. The combined dataset at 0.89\,mm had 7 rounds of phase calibration and 1 round of amplitude calibration; overall, the peak SNR in the joint dataset improved by 300$\%$. The short-baseline data at 3.3\,mm had 6 rounds of phase calibration and 1 round of amplitude calibration; the joint data set had 5 rounds of phase calibration and 2 rounds of amplitude calibration. In the joint dataset, the SNR improvement was 6$\%$.
For the final imaging we used multi-scale \textsc{tclean} for all the images, using a 1$\sigma$ stopping threshold (where $\sigma$ is the image RMS) for the Bands 3 and 6 data, and a 2$\sigma$ stopping threshold in Band 7. Robust weighting values were 1.0 in the case of Band 6, and 0.5 for Bands 3 and 7. We also set the \textsc{gain} value to 0.05 and the \textsc{cyclefactor} parameter in \textsc{tclean} to 2.0, to perform a more detailed cleaning by subtracting a smaller fraction of the source flux from the residual image and triggering major cycles sooner. All images have comparable beam sizes: 0.22$\arcsec \times$0.17$\arcsec$ beam (26\,au$\times$20\,au) at 0.89\,mm, 0.26$\arcsec \times$0.22$\arcsec$ beam (30\,au$\times$26\,au) at 1.3\,mm, and 0.26$\arcsec \times$0.20$\arcsec$ (30\,au$\times$23\,au) beam at 3.3\,mm. The image RMS is approximately 93\,$\mu$Jy/beam, 86\,$\mu$Jy/beam and 10\,$\mu$Jy/beam at 0.89\,mm, 1.3\,mm and 3.3\,mm, respectively.
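For reference, an illustrative \textsc{tclean} call (to be run within CASA) reflecting the imaging choices above is shown below; the file names, image size, cell size, and multi-scale values are placeholders, and the threshold corresponds to $\sim$2$\sigma$ of the 0.89\,mm image.
\begin{verbatim}
# Illustrative CASA tclean call for the 0.89 mm continuum imaging;
# file names, imsize/cell, and scales are placeholders.
tclean(vis='elias227_band7_selfcal.ms',
       imagename='elias227_band7_cont',
       specmode='mfs',
       deconvolver='multiscale',
       scales=[0, 5, 15, 30],       # pixels; placeholder values
       weighting='briggs', robust=0.5,
       gain=0.05,                   # subtract less flux per minor cycle
       cyclefactor=2.0,             # trigger major cycles sooner
       threshold='0.19mJy',         # ~2 sigma (sigma = 93 uJy/beam)
       imsize=2048, cell='0.02arcsec',
       niter=100000, interactive=False)
\end{verbatim}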
The observations of C$^{18}$O and $^{13}$CO were obtained simultaneously with the dust continuum emission in Band 7 (0.89\,mm), in the $J = 3 - 2$ transition. C$^{18}$O is observed at 329.330 GHz, with a spectral resolution of 0.035 MHz, and $^{13}$CO at 330.588 GHz, with a spectral resolution of 0.121 MHz. After applying the same self-calibration solutions as for the dust continuum of Band 7, the emission was first imaged using natural weighting (robust parameter of 2.0), without applying any uv-tapering or uv-range filtering, in order to be sensitive to large-scale emission, with a beamsize of 0.29$\arcsec \times$0.22$\arcsec$ ($\sim$ 34$\times$25\,au) for $^{13}$CO and 0.30$\arcsec \times$0.23$\arcsec$ ($\sim$ 35$\times$27\,au) for C$^{18}$O (see figures in Appendix A). In order to avoid the cloud contamination present in the disk \citep[]{laura_elias, DSHARP_Huang_Spirals}, we exclude large-scale emission by considering only baselines longer than 36\,m for $^{13}$CO (scales shorter than 6.4$\arcsec$, $\sim$740\,au) and 45\,m for C$^{18}$O (scales shorter than 5.11$\arcsec$, $\sim$590\,au). We also applied uv-tapering of 0.2$\arcsec \times$0.115$\arcsec$, PA= 0$^{\circ}$ for $^{13}$CO and 0.2$\arcsec \times$0.0$\arcsec$, PA= 0$^{\circ}$ for C$^{18}$O, in order to obtain a roughly round beam. The final images that trace the disk emission were obtained with a robust parameter of 0.5, resulting in a beam size of 0.26$\arcsec \times$0.25$\arcsec$ ($\sim$ 29\,au) for $^{13}$CO and 0.31$\arcsec \times$0.29$\arcsec$ ($\sim$ 35\,au) for C$^{18}$O. We imaged all channel maps in both $^{13}$CO and C$^{18}$O using a 0.111\,km s$^{-1}$ spectral resolution; although the C$^{18}$O data were observed with a finer spectral resolution (0.032\,km s$^{-1}$), we found the best compromise in SNR by using the broader spectral resolution.
We also recover gas emission from CN v$ = 0$, N $= 3 - 2$ in $J = 7/2 - 5/2$ and $J = 5/2 - 3/2$; these additional molecules will be studied separately in future work. We do not recover any emission from the requested observations of CN v$ = 0$, N $= 3 - 2$ in $J = 5/2 - 5/2$ or SO 3$\Sigma$ v$=0$ $J = 3 - 2$ at the achieved sensitivity level of 3\,mJy/beam.
\begin{figure*}
\centering
\includegraphics[width=\hsize]{dust_data_plot.pdf}
\caption{Dust continuum observations of Elias 2-27 at 0.89\,mm, 1.3\,mm and 3.3\,mm. For each panel: the intensity colorscale is shown on the right, the scalebar in lower right corner corresponds to 30\,au at the distance of the star, and the ellipse in the bottom left corner indicates the spatial resolution.
}
\label{dust_data}
\end{figure*}
\section{Dust Spiral Structure}
We recover the spiral structure at all wavelengths, as shown in Figure \ref{dust_data}. Given the $\sim$25\,au beam size, we are not able to fully resolve the 69\,au gap but can distinguish it as a small decrease in brightness temperature at all wavelengths in Figure \ref{dust_data}.
For all calculations throughout this study, we will assume the gap location, disk inclination, and disk position angle as derived in \cite{DSHARP_Huang_Annular}, which are 69.1 $\pm 0.4$\,au, 56.2$^\circ \pm 0.8 ^\circ$, and 118.8$^\circ \pm 0.7 ^\circ$, respectively.
\begin{figure*}
\centering
\includegraphics[width=\hsize]{plot_mcmc_spiral.pdf}
\caption{
The spiral morphology of Elias 2-27 at multiple wavelengths. Panels from left to right correspond to data from the 0.89\,mm, 1.3\,mm and 3.3\,mm observations. North-West spiral is traced in blue, South-East spiral is traced in red. Top panels: Dust continuum maps from which the azimuthally averaged radial profile has been subtracted to highlight spiral location, red and blue points trace the maxima of emission along the arms. Middle panels: Deprojected radial location of the emission maxima, as a function of the angle measured from North to the East (left). Error bars correspond to the astrometric error of each data point and grey lines show the posterior distribution of a logarithmic spiral fit with constant pitch angle. Bottom panels: Deprojection of the subtracted dust continuum observations from the top panels. The vertical line marks the dust gap location from \citet{DSHARP_Huang_Spirals}, and colored lines show the best-fit logarithmic spiral model.
}
\label{spiral_trace}
\end{figure*}
\begin{table*}
\def1.5{1.5}
\caption{Best-fit Parameters of the Logarithmic Spiral Model}
\label{table_dust_pitch}
\centering
\begin{tabular}{c c c c c c}
\hline\hline
Wavelength & Spiral Arm & Angle Range & $R_0$ [au] & $b$ & Pitch Angle\\
\hline
0.89\,mm& NW & -251$^{\circ}$ to -5$^{\circ}$ & 249.8$^{+1.2}_{-1.1}$ & $0.230\pm0.002$ & 12.9$^{\circ}$ $\pm 0.1^{\circ}$\\
& SE & -70$^{\circ}$ to 150$^{\circ}$ & 111.5$\pm0.4$ & $0.247\pm0.003$ & 13.9$^{\circ} \pm 0.2^{\circ}$\\
1.3\,mm& NW & -250$^{\circ}$ to -35$^{\circ}$ & 250.2$^{+2.3}_{-2.5}$ & $0.229^{+0.003}_{-0.004}$ & 12.9$^{\circ} \pm 0.2^{\circ}$\\
& SE & -61$^{\circ}$ to 135$^{\circ}$ & 115.5$^{+0.8}_{-0.9}$ & 0.234$\pm0.007$ & 13.2$^{\circ} \pm 0.4^{\circ}$\\
3.3\,mm& NW & -260$^{\circ}$ to -23$^{\circ}$ & 249.9$^{+3.9}_{-3.8}$ & $0.229\pm0.005$ & 12.9$^{\circ} \pm 0.3^{\circ}$\\
& SE & -50$^{\circ}$ to 140$^{\circ}$ & 113.6$^{+1.0}_{-1.1}$ & $0.231\pm 0.010$ & 13.0$^{\circ} \pm 0.6^{\circ}$\\
\hline
\end{tabular}
\end{table*}
\subsection{Tracing the Spiral Morphology}
To trace the spiral structure from our dust continuum images, we radially subtract an azimuthally-averaged radial profile of the emission and trace the spiral features from these ``subtracted images'' \citep[as done in ][]{DSHARP_Huang_Spirals}. From the subtracted images, shown in the top panels of Figure \ref{spiral_trace}, we find the radial location of maximum emission along each spiral, at azimuthal steps sampled every 9$^{\circ}$, within a given azimuthal angle range (values in Table \ref{table_dust_pitch}). The radial extent over which we trace the maxima of emission is determined visually. To guide this visual selection, we consider radial locations no further than where the signal exceeds 5 times the RMS in the non-subtracted image.
Previous analyses of Elias 2-27 \citep[][]{laura_elias, DSHARP_Huang_Spirals} have shown that a logarithmic spiral model, with a constant pitch angle, can adequately trace the spiral morphology. Therefore, we use MCMC modelling to find the best-fit parameters for a logarithmic spiral model, considering the location data of each spiral and each observed wavelength. The spiral form is given by:
\begin{equation}
r(\theta) = R_0 e^{b\theta}
\end{equation}
Here $\theta$ is the polar angle in radians, measured from North towards the East (left); for reference, see the coordinates in Figure \ref{dust_data}. $R_0$ and $b$ are free parameters, with $R_0$ the radius at $\theta = 0$, measured in au, and $b$ related to the pitch angle of the spiral arms ($\phi$) through $\phi = \arctan(b)$. The uncertainty on the location of each measured maximum is assumed to be the astrometric error\footnote{see Sect. 10.6.6 in \url{https://almascience.nrao.edu/documents-and-tools/cycle5/alma-technical-handbook/view} for further details.}: $\Delta p = 60000 \cdot (\nu \cdot B \cdot SNR)^{-1}$, where $\Delta p$ is the approximate position uncertainty of a feature in milliarcseconds, $SNR$ is the peak/RMS intensity ratio of the data point in the image, $\nu$ is the observing frequency in GHz, and $B$ is the maximum baseline length in kilometers.
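A minimal sketch of this fit is given below; it assumes Gaussian astrometric uncertainties and uses synthetic maxima in place of the measured ones, with \textsc{emcee} for the sampling.
\begin{verbatim}
# Minimal sketch of the logarithmic-spiral fit r(theta)=R0*exp(b*theta)
# with emcee; synthetic maxima stand in for the measured ones.
import numpy as np
import emcee

theta = np.radians(np.arange(-251, -5, 9.0))         # NW-arm angle range
r_obs = 250.0 * np.exp(0.23 * theta)                 # au, synthetic maxima
dp_mas = 60000.0 / (337.0 * 1.4 * 20.0)              # nu[GHz]*B[km]*SNR
r_err = np.full_like(theta, dp_mas / 1000.0 * 116.0) # mas -> au at 116 pc

def log_prob(p):
    R0, b = p
    if not (10.0 < R0 < 500.0 and 0.0 < b < 1.0):    # flat priors
        return -np.inf
    resid = (r_obs - R0 * np.exp(b * theta)) / r_err
    return -0.5 * np.sum(resid ** 2)

ndim, nwalkers = 2, 32
p0 = np.array([250.0, 0.23]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
chain = sampler.get_chain(discard=500, flat=True)
R0, b = np.median(chain, axis=0)
print(R0, np.degrees(np.arctan(b)))                  # R0 [au], pitch angle
\end{verbatim}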
The top panel of Figure \ref{spiral_trace} shows the maxima along each spiral from the subtracted images. In the middle panels are the deprojected radial locations of the maxima, measured from the center, as a function of azimuthal angle. Grey lines show logarithmic spiral models, derived from 300 draws of the posterior values after convergence of the MCMC simulations for the best-fit parameters of each spiral arm. The horizontal line at 148\,au marks the location where we observe a break in the spiral arms at all wavelengths, clearest in the South-East spiral and also subtly present in the North-West spiral. \cite{DSHARP_Huang_Spirals} had previously noted a possible decrease in the pitch angle value outside R $\sim$150\,au. The bottom panel (Figure \ref{spiral_trace}) shows the polar deprojection of the subtracted data, with a vertical line marking the dust gap location. The red/blue lines show the best-fit models for the logarithmic spirals. The median values of the parameters of the logarithmic spiral model, for each spiral and wavelength, are shown in Table \ref{table_dust_pitch}, along with the 16th and 84th percentile uncertainties derived from the posteriors.
We note that the pitch angle values retrieved here, $\sim$ 12.9$^{\circ}$ and $\sim$ 13.3$^{\circ}$, for NW and SE spiral respectively (similar between wavelengths, see Table \ref{table_dust_pitch}), are different than the recovered pitch angles from \cite{DSHARP_Huang_Spirals} (15.7$^{\circ}$ and 16.4$^{\circ}$, for NW and SE spiral). When applying our method to the high-resolution dataset presented in \cite{DSHARP_Huang_Spirals}, we retrieve the same results as in their work. The pitch angle difference between this work and theirs is probably due to beam smearing effects, combined with the challenge of image subtraction in lower-resolution data \citep[as discussed in][]{DSHARP_Huang_Spirals}, as the angular resolution difference between datasets is a factor of 4-5.
\subsection{Contrast variations along the spirals}
\begin{figure*}[t]
\centering
\includegraphics[width=\hsize]{contrast_inter_arm_binned.pdf}
\caption{Contrast of the spiral arms in Elias 2-27; panels from left to right correspond to data from the 0.89\,mm, 1.3\,mm and 3.3\,mm observations. Top panels: Polar deprojection of the dust continuum emission maps, where the azimuthal angle is measured from North and to the East. Blue and red colors correspond to the North-West and South-East spirals, respectively. Continuous lines trace the spirals following the best-fit parameters found in section 3.1; dashed lines trace the inter-arm region following the same best-fit parametric model, but shifted by $-90^{\circ}$ from the spiral. Bottom panels: Calculated contrast values along each spiral arm; blue and red points correspond to the North-West and South-East spirals, respectively.
}
\label{spiral_contrast}
\end{figure*}
We test the contrast at comparable locations across different wavelengths, as this may provide evidence of dust trapping. If there is indeed dust growth within the spiral arm, or if large particles are trapped at this location, then we would expect to observe a higher contrast at longer wavelengths. The latter effect is due to larger particles being more densely packed in the spiral arm \citep{2004MNRAS.355..543R}, as has been shown to occur in studies of dust-trapping vortices \citep{2018A&A...619A.161C}. We compute how the contrast varies radially along each spiral arm with respect to a fixed ``inter-arm'' region, a location where we can be sure there is no emission from the spiral arms. Considering the symmetry between the spirals, the inter-arm region of each spiral is assumed to follow the same shape as the corresponding spiral arm, but rotated by 90 degrees clockwise from the spiral location.
The top panels of Figure \ref{spiral_contrast} show a polar projection of the dust continuum; overlaid are the best-fit model of the spirals for each wavelength (solid lines) and this same model shifted 90 degrees clockwise, tracing the inter-arm region (dashed lines). From this figure it is clear that the inter-arm region effectively traces zones of lower emission compared to the spiral arm location.
The contrast is calculated, from the original images, as the ratio between the emission at the spiral and the inter-arm regions, at each radius and for both spiral arms. We use the previously derived parameters and calculate the contrast along the best-fit spiral model for each wavelength, binning the data azimuthally every 15$^{\circ}$. The standard deviation within each bin is used as the error of each averaged flux measurement, both for the inter-arm and spiral regions. The contrast curves for each wavelength are shown in the bottom panels of Figure \ref{spiral_contrast}. The NW spiral remains the strongest spiral at all wavelengths, with an average contrast value 25-33$\%$ higher than the SE spiral. Both spirals maintain a similar contrast throughout their radial extent: in the 0.89\,mm and 1.3\,mm observations the contrast deviates from its average value by only $\sim$11$\%$ in the case of the SE spiral and $\sim$17$\%$ in the case of the NW spiral. In the 3.3\,mm data the contrast variation increases, and deviations from the average value reach $\sim$16$\%$ in the SE spiral and $\sim$24$\%$ in the NW. For the 0.89\,mm, 1.3\,mm and 3.3\,mm emission, the radial distance of the maximum contrast value is, respectively for the (NW, SE) spirals, ($\sim$125\,au, $\sim$117\,au), ($\sim$129\,au, $\sim$127\,au) and ($\sim$129\,au, $\sim$126\,au). For the minimum contrast location, in the same order, the radial distances are ($\sim$155\,au, $\sim$149\,au), ($\sim$161\,au, $\sim$147\,au) and ($\sim$150\,au, $\sim$142\,au). We observe in our contrast curves that the radial location of the maximum contrast slightly shifts outwards with wavelength; no global shifts are detected in the minimum contrast distances, although the minimum contrast of the SE spiral appears to shift inwards at longer wavelengths. Overall our values are in agreement with previous measurements of \cite{DSHARP_Huang_Spirals}, where the peak contrast is located at $\sim$123\,au and low contrast at $\sim$147\,au (measured from higher angular resolution 1.3\,mm data). While the form of the contrast curve in the 1.3\,mm emission also agrees with the previous studies \citep[]{laura_elias, DSHARP_Huang_Spirals}, at larger radial distances we observe a decrease in the contrast of both spirals in the 0.89\,mm emission, and also of the SE spiral at 3.3\,mm, contrary to the apparently growing contrast at larger radial distance observed in the 1.3\,mm emission.
If we follow the peak contrast of both spirals, located at $\sim$125\,au, we see a slight variation in the contrast value, which grows larger with longer wavelength. The peak contrast value of the NW spiral increases by 11$\%$ from the 0.89\,mm to the 1.3\,mm emission and by 8$\%$ between the 1.3\,mm and 3.3\,mm emission. The SE spiral increases its peak contrast value 5$\%$ initially and in the 3.3\,mm data decreases the peak contrast by 2$\%$ with respect to the 1.3\,mm emission. Minimum contrast values also shift, but do not show any correlation with varying wavelength. We note that previous studies on this source measured the contrast differently, using the ratio between the spiral flux and the minimum flux at the same radial location \citep[]{DSHARP_Huang_Spirals, laura_elias}. For completeness, we test our results using the minimum flux method used in the previous studies, and obtain similar contrast curves in which we also measure increasing peak contrast values towards longer wavelengths. However, we do not consider these results, for the minimum flux at a determined radial distance is found at different azimuthal angles between different wavelengths. Therefore, the contrast calculated using the minimum radial flux does not sample comparable locations between different wavelengths, something necessary in order to detect possible signatures of dust growth at a given location, which we achieve by using the interarm region.
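A schematic of the contrast measurement is sketched below; the intensity function is a stand-in for sampling the deprojected continuum map, and the sign convention of the 90$^{\circ}$ clockwise shift is illustrative.
\begin{verbatim}
# Sketch of the contrast measurement: binned intensity on the spiral
# divided by intensity at the inter-arm region (same radius, 90 deg
# clockwise). intensity() is a stand-in for the deprojected map.
import numpy as np

def intensity(phi, r):
    # Toy disk: exponential profile plus a logarithmic spiral arm.
    arm = np.exp(-0.5 * ((r - 250.0 * np.exp(0.23 * phi)) / 10.0) ** 2)
    return np.exp(-r / 120.0) * (1.0 + 0.5 * arm)

R0, b = 250.0, 0.23                               # NW-arm best fit
for lo in np.arange(-250.0, -10.0, 15.0):         # 15-deg azimuthal bins
    phi = np.radians(np.linspace(lo, lo + 15.0, 10))
    r_arm = R0 * np.exp(b * phi)                  # au along the spiral
    c = intensity(phi, r_arm) / intensity(phi - np.pi / 2.0, r_arm)
    print(f"bin {lo:6.0f} deg: contrast = {c.mean():.2f}"
          f" +- {c.std():.2f}")
\end{verbatim}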
\subsection{Spectral Index Analysis}
\begin{figure}[h!]
\centering
\includegraphics[width=\hsize]{radial_profiles_tau_alpha_correct.pdf}
\caption{Radial profiles of brightness temperature, spectral index, and optical depth for Elias 2-27 using multiple wavelengths. Top panel: Azimuthally averaged brightness temperature profiles for each wavelength; the shaded area shows the 1$\sigma$ scatter at each radial bin divided by the number of beams spanning the angles over which the intensities are measured. Middle panel: Spectral index radial profile considering all three wavelengths; the darker shaded area shows 1$\sigma$ errors from assuming the intensity error of the azimuthally averaged intensity profile, and the lightly shaded area shows the total error including a 10\% flux uncertainty on the intensity measurement. Lower panel: Optical depth radial profile for each wavelength; the shaded area shows the 1$\sigma$ uncertainty considering the 1$\sigma$ errors on the stellar luminosity and on the azimuthally averaged intensity profiles. The vertical dashed line and shaded region mark the dust gap location (69\,au) and width (14.3\,au) as determined in \cite{DSHARP_Huang_Annular}.
}
\label{radial_prof_tau_alpha}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\hsize]{spectral_index_map_3wl_correct.pdf}
\caption{Left: Spectral index map calculated from the emission of all available wavelengths. Only emission over 5$\sigma$ in the image plane is considered. Blue lines show the location of the dust gap at 69\,au and the derived best-fit parametric model for the spiral arms of 1.3\,mm emission. White contours correspond to $\alpha$=2.0 and 2.25, black contours indicate $\alpha$=2.4. Right: Spectral index error map, obtained from the image rms and intensity value of each pixel. Blue lines trace the dust features and black contours correspond to $\alpha$ uncertainties at 0.01, 0.025, 0.05 and 0.1 levels.
}
\label{spectral_index_map}
\end{figure*}
The presence of dust growth throughout the disk can be inferred using the multi-wavelength continuum observations to compute the disk spectral index. The spectral index ($\alpha$) of the spectral energy distribution follows $I_{\nu} \propto \nu^{\alpha}$, where $I_{\nu}$ corresponds to the measured intensity, from the image plane, at frequency $\nu$. In the optically thin regime, $\alpha$ values between 2 and 2.5 can be an indicator of the presence of large, up to cm-sized, grains and hence of dust growth at the location of such values \citep[e.g., ][]{2009ApJ...696..841K, 2014prpl.conf..339T, 2017ApJ...839...99P, 2018A&A...619A.161C, 2019ApJ...881..159M}. To adequately compare the emission from all three wavelengths, the calculations in this section are done on dust images at equal resolution, centered on the peak brightness. We use the task imsmooth in CASA to specify a round, 0.26$\arcsec \times$0.26$\arcsec$ beam ($\sim$31\,au). Figure \ref{radial_prof_tau_alpha} shows in the top panel the azimuthally averaged brightness temperature of the smoothed, equal-resolution images.
We calculate $\alpha$ as the best-fit slope of a linear model applied to the logarithmic space ($\alpha \propto d$\,ln$I_{\nu}/d$\,ln$\nu$), using all wavelengths available in this study. From an MCMC fit we obtain a posterior distribution of $\alpha$, with best-fit and uncertainties computed from the 50th, 16th, and 84th percentile values.
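As a fast stand-in for the per-pixel MCMC fit, a weighted linear fit in log--log space recovers $\alpha$ and its formal uncertainty; the band-center frequencies below are approximate and the intensities are demo values.
\begin{verbatim}
# Sketch of the spectral-index fit alpha = dlnI/dlnnu at one pixel,
# via a weighted linear fit in log-log space (stand-in for the MCMC).
import numpy as np

nu = np.array([337.0, 230.0, 91.0])    # GHz, approximate band centers
I = np.array([12.0, 4.5, 0.35])        # mJy/beam, demo intensities
rms = np.array([0.093, 0.086, 0.010])  # mJy/beam, image rms per band

w = I / rms                            # 1/sigma_lnI, sigma_lnI = rms/I
coef, cov = np.polyfit(np.log(nu), np.log(I), 1, w=w, cov='unscaled')
print(f"alpha = {coef[0]:.2f} +- {np.sqrt(cov[0, 0]):.2f}")
\end{verbatim}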
From the azimuthally averaged intensity profiles, we compute an $\alpha$ radial profile shown in the middle panel of Figure \ref{radial_prof_tau_alpha}. We retrieve a disk average $\alpha$ value of 2.6$\pm$0.06, with $\alpha<2.0$ in the inner $\sim$50\,au. These low values in the inner disk will be discussed in section 5.1 and are probably related to optically thick emission where self-scattering at long wavelengths is a relevant process.
We apply the same techniques described in \cite{DSHARP_Huang_Annular} to calculate the optical depth at our various wavelengths. The main assumption relies on approximating the midplane temperature profile ($T_{mid}$), assuming a passively heated, flared disk in radiative equilibrium \citep[e.g.,][]{1997ApJ...490..368C} and considering that the millimeter emission follows this temperature profile, if it traces the midplane. From the temperature profile, we compute the expected blackbody emission ($B_\nu$), using Planck's law, and relate it to the measured intensity ($I_\nu$) through $\tau_\nu$, following $I_\nu (r) = B_\nu(T_{mid}(r))(1 - \exp(-\tau_\nu (r)))$. $T_{mid}$ will depend on the assumed flaring of the disk and the stellar luminosity \citep[we use the DSHARP values for flaring and stellar luminosity of $\varphi$ = 0.02 and $log L_*/L_\odot$ = -0.04 $\pm$ 0.23,][]{DSHARP_Andrews}.
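The inversion for $\tau_\nu$ can be sketched as follows, using the passively heated midplane temperature $T_{mid}=(\varphi L_*/(8\pi r^2\sigma_{SB}))^{1/4}$ with the DSHARP flaring and luminosity quoted above; the input intensities are demo values.
\begin{verbatim}
# Sketch of the optical-depth estimate: invert
# I_nu = B_nu(T_mid) * (1 - exp(-tau)) for tau.
import numpy as np

h, c, k_B, sigma_SB = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8  # SI
L_sun, au = 3.828e26, 1.496e11

def B_nu(T, nu):                   # Planck law [W m^-2 Hz^-1 sr^-1]
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def T_mid(r_au, phi_fl=0.02, L_star=10**(-0.04) * L_sun):
    r = r_au * au                  # DSHARP flaring and luminosity
    return (phi_fl * L_star / (8 * np.pi * r**2 * sigma_SB)) ** 0.25

nu = 337e9                                     # 0.89 mm
r = np.array([50.0, 100.0, 200.0])             # au
I_obs = np.array([0.8, 0.4, 0.1]) * B_nu(T_mid(r), nu)  # demo values
tau = -np.log(1.0 - I_obs / B_nu(T_mid(r), nu))
print(T_mid(r), tau)
\end{verbatim}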
\begin{figure*}
\centering
\includegraphics[width=\hsize]{spectral_index_line_angle_164.0_3wl_temp_correct.pdf}
\caption{Top: Brightness temperature profile for the 0.89\,mm, 1.3\,mm and 3.3\,mm emission along the azimuthal angle of maximum spiral contrast (224$^\circ$), measured radially. Positive radial values are measured from the center of the disk and to the West, negative values indicate distance to the East. Vertical coloured dashed lines show the location of the spiral arm according to the best-fit parametric model at each wavelength; the black vertical line marks the location of the dust gap. Bottom: Spectral index along the azimuthal cut, calculated from the emission at the three wavelengths.
}
\label{panel_spiral_width}
\end{figure*}
The computed optical depth profiles are shown in the bottom panel of Figure \ref{radial_prof_tau_alpha}. We note that at 0.89\,mm the modeled $T_{mid}$ is $\sim$2 times larger than the measured $T_b$. In the Rayleigh-Jeans regime, as expected for mm emission, $T_b = T_{mid}(1 - $exp$(-\tau))$. Then, if $T_{mid}$ is overestimated, it will produce lower optical depth values and the emission will appear more optically thin than it really is. The disk could be colder, or have a lower flaring value, in which case the emission at all wavelengths would be more optically thick. These issues will be discussed in section 5.1.
The spectral index map for Elias 2-27 and the uncertainty on the spectral index are presented in Figure \ref{spectral_index_map}. We only consider emission above 5$\sigma$ at all wavelengths for this calculation, and we adopt the rms of each image as the uncertainty on the intensity of each pixel. The best-fit spiral model from the 0.89\,mm dust emission is overlaid for reference, along with the dust gap location (69\,au). Contour lines for $\alpha$ values of [2.0, 2.25, 2.4] and for $\alpha$ uncertainties at the level of [0.01, 0.025, 0.05, 0.1] are presented. We observe that the contours tracing $\alpha=$ 2.4 appear coherent with the spiral morphology as traced by the best-fit spiral. This is especially apparent at the NW spiral location.
The variations between the spiral and inter-arm region at the NW spiral location are $\sim0.1$, with uncertainties of $\sim0.04$. Overall, we observe lower $\alpha$ co-located with the spiral features. At the gap location no particular behaviour is observed, except for a decrease in $\alpha$ values inwards from the gap location. As noticed in the radial profile, in the inner disk ($<$50\,au) $\alpha$ reaches values below 2.0.
\begin{figure*}
\centering
\includegraphics[scale=0.8]{moment_maps_largescale_mask_all.pdf}
\caption{Left column: $^{13}$CO emission maps. Right column: C$^{18}$O emission maps. Top row: Integrated emission maps considering all emission over 3$\sigma$. Bottom row: Mean velocity maps.
}
\label{mom_largescale}
\end{figure*}
Another indicator of dust growth is that smaller grains, traced by shorter wavelengths, will be less concentrated than larger grains observed at longer wavelengths. This translates into a width difference along the dust-trapping structure, such that smaller grains will be more widely spread than larger grains within a dust trap. Such behaviour has been constrained in different sources with vortex-like structures, likely tracing dust traps \citep[]{2015ApJ...812..126C, 2015ApJ...810L...7V, 2018A&A...619A.161C, 2019MNRAS.483.3278C}. The measurement of the width variation cannot be done using a subtracted image or an azimuthally averaged profile, as both options would introduce artifacts or remove relevant emission; therefore, we must use the original image. We decide to trace variations along a single azimuthal cut, choosing the angle where the highest spiral contrast is observed, which occurs at $\sim$125\,au and corresponds to $\sim$224$^\circ$.
Figure \ref{panel_spiral_width} shows the brightness temperature profiles along the azimuthal angle of maximum spiral contrast (224$^\circ$) for all three wavelengths, the spirals can be localized as a bump in the intensity curve between 100\,au-150\,au in the top panel. From this it can be seen that the shape of the intensity curve is similar at all three wavelengths, with no noticeable width differences. The bottom panel shows the spectral index distribution along the azimuthal cut, calculated directly from the top panel intensity profile, as previously done for the spectral index map. In this $\alpha$ profile we see that the East side (negative radial distance) shows a small decrease in the spectral index value at the spiral location, while the West spiral does not show significant $\alpha$ variations. The spatial extent of the variation in the East is resolved by our beam size (0.26$\arcsec$, $\sim$31\,au).
\section{$^{13}$CO and C$^{18}$O $J=3-2$ emission analysis}
\begin{figure*}
\centering
\includegraphics[width=\hsize]{CO_mom0_size.pdf}
\caption{ Integrated emission (moment 0) maps for $^{13}$CO (left) and C$^{18}$O (right) gas emission. Contours of the 0.89\,mm continuum emission are overlaid on top. The white grid marks the minor and major axes of the disk, as determined from the continuum emission position angle, with ticks on these axes indicating 0.5$\arcsec$ ($\sim$58\,au) intervals.
}
\label{mom0_size}
\end{figure*}
We analyze the emission from the two observed CO isotopologues in three different ways, by: 1) studying the presence of structures in the integrated emission and in channel maps, including large-scale emission in the entire field-of-view (FOV), 2) searching for velocity perturbations in channel and velocity maps, 3) constraining the vertical height of the $^{13}$CO and C$^{18}$O emitting layer, and analyzing the kinematics of these isotopologues.
\subsection{Channel and moment maps}
\begin{figure}
\centering
\includegraphics[width=\hsize]{size_98_gap.pdf}
\caption{Top Panel: Integrated emission map of the C$^{18}$O gas emission. Yellow dots trace local minimum of emission, green circles trace the disk radial extent as the location at which 98$\%$ of the total azimuthal emission is included. Middle Panel: Radial distance from the center of the points tracing the border, azimuthal angle is measured from the North, to the East. Bottom Panel: Azimuthally averaged intensity profile of the C$^{18}$O emission, shaded area shows the 1$\sigma$ scatter at each radial bin divided by the number of beams spanning the angles over which the intensities are measured. Vertical yellow line marks the average radial location of the minimum of emission and the deviation of the data is indicated by the vertical grey region.
}
\label{C18O_gasgap}
\end{figure}
The channel maps for both $^{13}$CO and C$^{18}$O are shown in Appendix A. The known cloud absorption \citep[]{laura_elias, DSHARP_Huang_Spirals} affects the East side of the disk and blocks all $^{13}$CO emission from $v \ge 2.88$\,km s$^{-1}$, while for C$^{18}$O some cloud absorption is present near $v = 2.55-2.66$\,km s$^{-1}$. We note that the South part of the disk is the brightest. The systemic velocity is determined to be 1.95\,km s$^{-1}$ based on the C$^{18}$O channel map analysis using a spectral resolution of 0.05\,km s$^{-1}$ and kinematic modelling done with the \textsc{eddy} package \citep{2019JOSS....4.1220T}.
Considering all spatial scales and imaging the whole FOV, extended, large-scale emission appears in both isotopologues around Elias 2-27 (see Figures in Appendix A). As $^{13}$CO is the most abundant isotopologue, the large-scale emission appears more strongly and along a wider range of velocities than in C$^{18}$O. The extended emission is clearly identified between $v = 2.55-4.88$\,km s$^{-1}$ for $^{13}$CO, and between $v = 2.66-4.21$\,km s$^{-1}$ for C$^{18}$O. This large-scale emission appears to have a striping pattern, probably due to the lack of compact baselines in the observations. The shortest baseline is 15\,m, which projected on the sky for the observed frequencies recovers emission from $\lesssim$15$\arcsec$ scales; however, the detected emission may extend beyond our $\sim$20$\arcsec$ FOV. We compute integrated emission and mean velocity maps, shown in Figure \ref{mom_largescale}, from the channel maps that consider the whole FOV, including emission above 3$\sigma$. While in $^{13}$CO the large-scale emission is seen throughout the entire FOV, the C$^{18}$O large-scale emission is more constrained, crossing only the East side of the disk. There is no clear velocity gradient between the large-scale emission and the disk in either isotopologue.
From here on, we focus on the channel maps that trace the material closer to the disk rather than the large-scale emission. These channel maps were obtained using uv-tapering and filtering of the short baselines, as described in section 2. Figure \ref{mom0_size} shows the integrated emission (moment 0) of both CO molecules. Foreground absorption in $^{13}$CO is clear on the East side of the disk. Overlaying the continuum contours for the 0.89\,mm emission and measuring along the major axis, the East side of the disk appears to have a larger extent than the West side in the gas. Figure \ref{mom0_size} shows that the size difference is roughly 0.5$\arcsec$ between the East and West sides of the disk, in both CO tracers. At the source distance, this size difference corresponds to 58\,au between the projected emission extents of each side. To measure the East/West size variations, we trace the edge of the emission in the C$^{18}$O moment 0 map. The border of the disk is considered as the radius that encompasses 98$\%$ of the integrated emission at each sampled azimuthal angle (green points in the top panel of Figure \ref{C18O_gasgap}). We define the center based on the emission peak from the Band 7 continuum data and deproject accordingly, assuming the inclination and position angle of the dust continuum. The deprojected radial distance as a function of azimuthal angle is shown in the middle panel of Figure \ref{C18O_gasgap}. Errors correspond to the astrometric error, calculated as described in section 3.1. The edge of the disk is not well described by a circle or an ellipse; it shows two local maxima and two local minima. The global minimum distance is located along the major axis on the West, but the global maximum distance is shifted with respect to the East major axis. The locations of maximum and minimum border extent are not co-located with the continuum spiral features, or their extension.
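The border criterion can be sketched as follows; the polar moment-0 map below is a toy stand-in for the deprojected C$^{18}$O data.
\begin{verbatim}
# Sketch of the disk-edge criterion: per azimuthal wedge, the radius
# enclosing 98% of the cumulative moment-0 emission along that
# direction. mom0_polar stands in for the deprojected moment-0 map.
import numpy as np

r = np.linspace(0.0, 400.0, 200)                  # au
ang = np.arange(0.0, 360.0, 9.0)                  # deg, every 9 deg
scale = 100.0 + 20.0 * np.cos(np.radians(ang))    # toy E/W asymmetry
mom0_polar = np.exp(-r[None, :] / scale[:, None]) # shape (n_ang, n_r)

frac = np.cumsum(mom0_polar, axis=1)
frac /= frac[:, -1:]                              # cumulative fraction
edge = r[np.argmax(frac >= 0.98, axis=1)]         # 98% radius per angle
print(edge.min(), edge.max())                     # extent range [au]
\end{verbatim}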
Another feature in the moment 0 map is the presence of a ``gap'' of emission at large disk radii in C$^{18}$O. To estimate its location, we trace the radial positions of emission minima sampling every 9$^{\circ}$, between 185-300\,au (this range is determined by analysis of the intensity profile). Using the mean value and standard deviation of the minima radial locations (yellow points in the top panel of Figure \ref{C18O_gasgap}), we estimate the gap position at 241 $\pm$ 24\,au. In the $^{13}$CO integrated intensity map we do not observe a gap and cannot infer one from the intensity profile, even when excluding the azimuthal angles $\sim$65$^\circ$-155$^\circ$, where foreground absorption is strongest. If we trace the emission border of $^{13}$CO following the 98$\%$ integrated emission criterion, we obtain an emission border curve similar to that of C$^{18}$O, shown in Appendix A.
\begin{figure*}[t]
\centering
\includegraphics[width=\hsize]{models_residuals_flare.pdf}
\caption{Top panels show the emission layer height as a function of radial distance to the star constrained from the $^{13}$CO (left) and C$^{18}$O (right) data. Blue points correspond to measurements coming from the West side of the disk, red points come from the East side, the colored line corresponds to the best-fit double power-law height profile for the data, and the grey area shows the uncertainty on the emitting layer as derived from the posteriors. The vertical dashed line indicates the location of the gap reported in the continuum, and the dot-dashed line corresponds to the gap location in the C$^{18}$O integrated intensity map. The grey area indicates the width of the dust gap \citep[obtained from][]{DSHARP_Huang_Annular}, the orange area indicates the gas gap's location uncertainty. Bottom panels show the residuals of each isotopologue after subtracting the best-fit model from the data.
}
\label{height_both}
\end{figure*}
\subsection{Tracing the emitting layer in $^{13}$CO and C$^{18}$O}
Using the method detailed in \cite{2018A&A...609A..47P} we recover the emission surface of each molecule. This is done by tracing the maxima from the upper layer of emission in the channel maps and applying geometrical relationships. The emitting layer is assumed to have a cone-like structure; this means that the height of the emitting layer should be symmetric with respect to the disk's major axis.
If the variations in the projected radial extent of the disk found in Section 4.1 are related to variations in the emitting surface height, then we would only expect symmetry along the disk's semi-major axis on the West side of the disk. The latter can be determined from Figure \ref{C18O_gasgap}, where we see that at the West major axis location ($\sim -60^{\circ}$) the radial distance of the emission border grows similarly when we move towards the North (positive angles) or the South (negative angles). This similarity in the radial distance increment is maintained over $\sim 90^{\circ}$ in each direction (North and South), which corresponds to the complete West side of the disk. On the other hand, at the location of the East major axis, we do not see this symmetry in the growth or decline of the radial distances towards the North or South. The possible lack of symmetry is an important caveat, and we expect it to mostly affect the constraints obtained for the East side of the disk (where the variations in disk extension are larger). The emission surface we measure should therefore be taken as a rough estimate. Details on the geometric relations can be found in the original publication \citep{2018A&A...609A..47P}.
From all available channels, we visually select those in which the top layer of gas emission can be clearly identified. For C$^{18}$O we select channels at velocities $+$0.77 to $+$1.55\,km s$^{-1}$ and $+$2.44 to $+$3.21\,km s$^{-1}$. For $^{13}$CO we select channels $+$0.66 to $+$1.66\,km s$^{-1}$ and $+$2.33 to $+$2.77\,km s$^{-1}$.
The recovered height profile of the emission layer for both gas isotopologues is shown in Figure \ref{height_both}. Measurements obtained from channels that trace the East sides of the disk (with respect to the semi-minor axis) are colored red, while those from the West are blue. The $^{13}$CO and C$^{18}$O gas emission layer we constrain follows the expected distribution for these isotopologues in a disk: $^{13}$CO traces a higher layer from the mid-plane than C$^{18}$O in both East and West sides at all radii. The continuum and C$^{18}$O gas gap location are highlighted in Figure \ref{height_both}, together with the width for the dust gap \citep[as reported in ][]{DSHARP_Huang_Annular} and the uncertainty regions of each gap. No clear feature is recovered in the height profiles at the gap locations.
The height distribution is modelled using the equations for a complex flared surface presented in the \textsc{eddy} package \citep{2019JOSS....4.1220T}, such that the altitude of the emitting layer follows a double power-law:
\begin{equation}
z(r) = z_0 \left(\frac{r}{r_0}\right) ^{\psi} + z_1 \left(\frac{r}{r_0}\right) ^{\varphi},
\end{equation}
where $r$ is the radial distance from the star, as measured in the azimuth plane, and the characteristic radius $r_0$ is fixed at 1$\arcsec$, corresponding to 116\,au for our system. The second power law is intended as a correction to the first term; therefore, we first find the best set of parameters for the emission layer characterized by a single power law and then optimize close to those parameters to include the second power law. The best-fit parameters of the model are taken to be the median values from the posteriors of the MCMC simulations and are shown with their uncertainties (16th and 84th percentile uncertainties derived from the posteriors) in Table \ref{table_param_height}.
\begin{table}[h]
\def1.5{1.5}
\setlength{\tabcolsep}{5pt}
\caption{Height model parameters from Channel analysis}
\label{table_param_height}
\centering
\begin{tabular}{c c c c c}
\hline\hline
Param. & $^{13}$CO $-$ W &$^{13}$CO $-$ E & C$^{18}$O $-$ W & C$^{18}$O $-$ E \\
\hline
$z_0$ [au]& $77.6^{+8.2}_{-8.9}$ & $30.0 \pm 0.2$ & $44.0^{+7.3}_{-6.2}$ & $32.3^{+8.9}_{-8.5}$ \\
$\psi$ & $1.76^{+0.03}_{-0.04}$ & $1.01 \pm 0.01$ & $2.19^{+0.12}_{-0.16}$ & $1.14^{+0.11}_{-0.09}$ \\
$z_1$ [au]& $-38.1^{+8.9}_{-8.1}$ & $-0.3^{+0.1}_{-0.2} $ & $-11.8^{+6.1}_{-7.4}$ & $-15.9^{+8.6}_{-8.9}$ \\
$\varphi$ & $2.29^{+0.08}_{-0.04}$ & $-3.36^{+0.54}_{-0.52}$ & $3.58^{+0.46}_{-0.24}$ &$0.46^{+0.15}_{-0.33}$ \\
\hline
\end{tabular}
\end{table}
Our modelled emission surface presents consistent differences in elevation and morphology between the East and West sides of the disk. The height profiles have a quasi-linear form in the East channels, mostly tracing lower heights than in the West. At larger radii ($>$200\,au) the West channels show a decrease in the emission surface height. Residuals obtained from subtracting the model from the observations are shown in the bottom panels of Figure \ref{height_both}. We see that the largest residual scatter is found between the dust and gas gap locations, which roughly coincides with the radial extent of the dust spiral arms (80-250\,au). The residuals from the $^{13}$CO emission show a ``curved'' pattern, indicating a more complex emitting surface. We note that in both isotopologues the residuals at the dust gap location are mostly negative, indicating a possible decrease in the emitting surface at these radii; however, this is unresolved at our spatial resolution.
\subsection{Tracing the kinematics in $^{13}$CO and C$^{18}$O}
\begin{figure*}[t]
\centering
\includegraphics[width=\hsize]{velocity_residuals_flare.pdf}
\caption{Top panels show the data tracing the velocity of the gas emission, as a function of radial distance to the star, from the C$^{18}$O (left) and $^{13}$CO (right) isotopologues. Blue points correspond to measurements coming from the West side of the disk, red points come from the East side; plotted curves correspond to the best-fit Keplerian rotation profile, and the shaded area corresponds to the stellar mass uncertainty as indicated by the 16th and 84th percentiles of the posteriors. The vertical dashed line indicates the location of the gap reported in the continuum; the dot-dashed line corresponds to the gap location in the C$^{18}$O integrated intensity map. The grey area indicates the width of the dust gap \citep[obtained from][]{DSHARP_Huang_Annular}; the orange area indicates the gas gap's location uncertainty. Bottom panels show the residuals of each isotopologue after subtracting the best-fit model from the data.
}
\label{vel_both}
\end{figure*}
Besides tracing the height profile of the emitting layer, \citeauthor{2018A&A...609A..47P}'s method allows us to determine the velocity profile of the traced emission layer. In a given velocity channel, we know the projected radial velocity, $v_{obs}$, together with the systemic velocity of the source, $v_{syst}$. We are interested in determining the azimuthal velocity $v$ of a parcel of gas at an azimuthal radial distance $r$ and height $h$ from the star. Using the inclination angle $i$ and the polar azimuthal angle, $\theta$, we can relate the known velocities to $v$ through $v_{obs} = v_{syst} + v\cos(\theta)\sin(i)$. To obtain $\cos(\theta)$ we apply the geometrical relationships from the measurements in the channel maps as defined in \cite{2018A&A...609A..47P}. The velocity profile will allow us to obtain a mass estimate for the central star and also to test for super-Keplerian velocities at large radial distances, a characteristic expected in disks undergoing GI \citep{1999A&A...350..694B, 2007NCimR..30..293L}.
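A minimal sketch of these geometric relations (following \cite{2018A&A...609A..47P}; the coordinate conventions and function names are ours, and we assume sky offsets already rotated so the major axis lies along $x$):
\begin{verbatim}
import numpy as np

def surface_point(x, y_near, y_far, v_chan, v_syst, incl):
    """Radius, height and azimuthal velocity of a parcel of gas on the
    upper emission surface, from a pair of points (x, y_near), (x, y_far)
    traced on an isovelocity curve (after Pinte et al. 2018). Offsets are
    measured from the star, major axis along x; incl in radians."""
    y_c = 0.5 * (y_far + y_near)                    # centre of the chord
    h = y_c / np.sin(incl)                          # height above midplane
    r = np.hypot(x, (y_far - y_c) / np.cos(incl))   # cylindrical radius
    v = (v_chan - v_syst) * r / (x * np.sin(incl))  # azimuthal velocity
    return r, h, v
\end{verbatim}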
\begin{figure*}[t]
\centering
\includegraphics[width=\hsize]{kep_model_eddy_flare_chan_fit.pdf}
\caption{Velocity maps, model and residuals for $^{13}$CO (top row) and C$^{18}$O (bottom row). In each row, the first column shows the integrated emission velocity map (moment 1). The second column shows the velocity map model, computed using the constraints found for the emission surface and stellar mass. The third column shows the residuals calculated by subtracting the model map from the observations. The spiral arms shown correspond to the best-fit parametric model from the 0.89\,mm dust emission; inner and outer ellipses indicate the radial limits of the data used to derive the emission surface geometry and stellar mass values.
}
\label{kep_model_vel}
\end{figure*}
The velocity profiles of the emitting gas for both $^{13}$CO and C$^{18}$O are shown in Figure \ref{vel_both}. An overall velocity difference is observed between both sides of the disk, with measurements from the East side having a higher velocity. From our previous results (Figure \ref{height_both}) we know that the East side corresponds to the side apparently closest to the midplane. The difference in the velocity profile is in agreement with the height profile variations between sides, as being closer to the midplane results in larger velocities. We do not observe any noticeable behavior of the velocity profile at the location of the dust and gas gaps.
We fit the velocity profiles with a Keplerian model to constrain the mass of the central star. Based on comparisons to stellar evolution models in the H-R diagram, the mass of Elias 2-27 has been reported to be $\sim$0.49M$_\odot$ \citep[]{2009ApJ...700.1502A, DSHARP_Andrews, laura_elias}. For our modelling, we incorporate the height distribution of each molecule, using the best fit double power-law model found previously (see Table \ref{table_param_height}). The modelled velocity at radial distance $r$ from the star will follow equation \ref{vel_prof}, where $G$ is the gravitational constant, $M_*$ is the central star mass, and $h$ is the height of the gas at azimuthal radius $r$:
\begin{equation}
\frac{v^2}{r} = \frac{GM_* r}{(r^2 + h^2)^{\frac{3}{2}}}
\label{vel_prof}
\end{equation}
We note that equation \ref{vel_prof} does not include the effects of the radial pressure gradient and the disk's self-gravity \citep{2013ApJ...774...16R}.
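In code form, equation \ref{vel_prof} corresponds to the following sketch (SI constants rounded; an illustration, not our fitting code):
\begin{verbatim}
import numpy as np

G, M_sun, au = 6.674e-11, 1.989e30, 1.496e11  # SI units

def v_kep(r_au, h_au, mstar_msun):
    """Azimuthal velocity (km/s) at cylindrical radius r and height h,
    from v^2 / r = G M_* r / (r^2 + h^2)^(3/2)."""
    r, h = r_au * au, h_au * au
    v2 = G * mstar_msun * M_sun * r**2 / (r**2 + h**2)**1.5
    return np.sqrt(v2) / 1e3
\end{verbatim}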
We simultaneously fit the model to the data points from the East and West sides of the disk, using MCMC simulations and taking into account the different height profiles (Figure \ref{height_both}). The curves for the expected Keplerian motion, considering the best-fit stellar mass and its 1$\sigma$ uncertainty range, are shown over the data in Figure \ref{vel_both}. The final masses and errors are computed from the median values and the 16th and 84th percentile uncertainties derived from the posteriors. From the $^{13}$CO measurements we constrain a stellar mass of $M_*=0.5 \pm 0.01$ M$_\odot$, while from the C$^{18}$O measurements we constrain $M_*=0.46^{+0.02}_{-0.03}$ M$_\odot$. The two values are similar to each other and consistent with previous estimates \citep[$M_* \sim 0.49M_\odot$,][]{2009ApJ...700.1502A, DSHARP_Andrews, laura_elias}.\\
Compared to the expected Keplerian velocity profile from the fits of Figure \ref{vel_both}, we see residuals throughout the whole radial extent. Such simultaneous sub- and super-Keplerian velocities would be expected if the emission layer height difference between the East and West sides were larger than what was constrained from the analysis of Figure \ref{height_both}. For now we only attempt to fit a purely Keplerian rotation profile, but given the large disk mass of Elias 2-27, fitting a self-gravitating rotation curve is warranted. This possibility will be further explored in a separate paper (Veronesi et al., submitted).
Finally, we use the constrained emission surface of each side of the disk, and a stellar mass value of 0.49M$_\odot$, to build a model of the expected mean velocity maps of each isotopologue. The models are compared to the observations through residual analysis. Our model velocity maps consider only the Keplerian motion of the upper layer of the emission surface. In the case of Elias 2-27, the disk is inclined such that emission from the lower layer appears in the southern part of the disk \citep{DSHARP_Huang_Spirals}, which may cause larger residuals along the southern border. Additionally, our constraints on the shape of the emission surface do not cover the whole radial extent of the emission, being extracted from data at radial distances between $\sim$40-300\,au ($\sim$0.35$\arcsec$ - 3.59$\arcsec$) from the central star (see Figure \ref{height_both}). Therefore, we should also expect larger residuals in the inner and outer regions, where the emitting surface is not directly constrained by the method described above.
Figure \ref{kep_model_vel} shows the $^{13}$CO and C$^{18}$O velocity maps in the top and bottom rows, with observations, model, and residual velocity maps in the left, middle, and right columns. The integrated velocity maps are computed using the \textsc{bettermoments} package \citep{2019ascl.soft01009T}, to accurately constrain the line-of-sight velocity from the channel maps with 0.111\,km s$^{-1}$ spectral resolution. Initial analysis of the observations allows us to identify marked perturbations throughout the disk, especially along the South and the West, where a distinct ``distorted'' pattern is observed in the outer disk. Along the major axis, the C$^{18}$O data also displays ``distorted'' perturbations in a direction seemingly perpendicular to the azimuthal Southern ``distorted'' pattern. In the model maps we see that the West side of the disk reproduces to some extent the ``distorted'' pattern along the South of the disk in both isotopologues, given the decrease in height in the outer disk. We do not see this pattern on the East side of the model map, as the retrieved emitting surface does not present large deviations from a linear, cone-like model.
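The quadratic method behind \textsc{bettermoments} can be summarized with the following simplified numpy re-implementation (illustrative only, not the package's API; the actual package additionally masks noise and propagates uncertainties):
\begin{verbatim}
import numpy as np

def quadratic_v0(cube, velax):
    """Line-of-sight velocity per pixel from a parabola through the peak
    channel and its two neighbours. cube has shape (nchan, ny, nx) and
    velax holds the (uniform) channel velocities."""
    i = np.clip(np.argmax(cube, axis=0), 1, cube.shape[0] - 2)
    iy, ix = np.indices(i.shape)
    Im, I0, Ip = cube[i - 1, iy, ix], cube[i, iy, ix], cube[i + 1, iy, ix]
    dv = velax[1] - velax[0]
    offset = 0.5 * (Im - Ip) / (Im - 2.0 * I0 + Ip)  # parabola vertex
    return velax[i] + offset * dv
\end{verbatim}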
In the residuals, the radial extent that was used for determining the emission layer height profile is marked by two ellipses defining the inner and outer radial bounds (40-275\,au). At radial distances larger than those sampled, the residuals on the West side of the disk are much lower than on the East side, for both isotopologues. This means that the modelled West-side emission layer, with a ``dip'' in the emitting surface height at larger distances from the star, is necessary when extending the model to larger radii. It also indicates that the East side of the disk likely has a decrease in the emitting surface height at larger radial distances as well; given our limited range of sampled radial distances, we may not be sensitive to this ``turning point'' in the East. Within our sampled radii, marked by the ellipses, we observe negative residuals in the North-East quadrant of the $^{13}$CO emission. This roughly coincides with the location of the most prominent cloud absorption (see Figure \ref{mom0_size}), so we associate these residuals with the absorption. The C$^{18}$O residuals within the ellipses are much stronger and display a rough ``X'' shape across the center, with marked positive residuals close to the minor and major axes in the North-West and South-East quadrants, respectively. As noted above, the radial distances within the ellipses coincide with the radial extension of the dust spiral arms. While the ``X'' shape is not clearly co-located with the spiral structure, the largest positive residuals coincide with the locations where one spiral starts and the other ends. This ``X''-shaped residual probably traces perturbations arising closer to the mid-plane, as it is not observed in $^{13}$CO and the C$^{18}$O emission traces a lower height layer.
\begin{figure*}[t]
\centering
\includegraphics[width=\hsize]{spirals_gap_kinks.pdf}
\caption{Selected central channels of $^{13}$CO (top) and C$^{18}$O (bottom) emission. The white continuous line shows the dust features: the inner gap at 69\,au and the spirals as traced from the 0.89\,mm continuum emission. The dotted white line traces the C$^{18}$O gas gap location at 241\,au. Dashed lines show how the spirals traced in the dust would extend further outside the continuum emission. The blue curve traces the expected isovelocity curve of each channel, following the constrained emission layer geometry of the top layer. The velocity of each channel map is indicated in the top-right corner of each panel; the beam size is in the bottom-left corner. Green arrows mark the outer perturbation, yellow arrows mark the inner perturbation.
}
\label{kinks_central}
\end{figure*}
\subsection{Features in the channel maps of $^{13}$CO and C$^{18}$O}
\begin{figure*}[t]
\centering
\includegraphics[width=\hsize]{isovel_smallC18O.pdf}
\caption{Selected high-velocity channels of C$^{18}$O emission. White lines trace the spirals detected in the 0.89\,mm continuum emission; blue lines indicate the isovelocity curves expected at each channel velocity (indicated in the top right corner of each panel), following the constrained emission layer geometry of the top layer. Arrows indicate where deviations from the expected isovelocity curves (``kinks'') are observed.
}
\label{kinks_small}
\end{figure*}
In the CO channel maps (see Figures in Appendix A) we observe several perturbations, which do not follow the expected Keplerian velocity field; we refer to them as ``kinks''. In the following figures, we overlay several previously characterized features of Elias 2-27 for reference: the continuum spirals and their extension, the location of the dust continuum and C$^{18}$O gaps, together with the expected isovelocity curves for each channel. We note that the isovelocity curves themselves appear perturbed, as they are obtained from the model velocity maps shown in Figure \ref{kep_model_vel}, considering the constrained emission layer of each isotopologue and disk side. We only show the isovelocity curves of the top layer of emission, because the geometrical constraints have been derived for this layer and do not adequately trace the bottom layer, which we can identify visually. In the bottom layer, the emission seems to arise from a layer farther from the midplane than what we trace with the geometry of the top layer. This difference should be studied in future work and could be caused by the layers tracing different sectors of the disk due to temperature effects \citep[][]{2018A&A...609A..47P}, optical depth, or some other asymmetry in the vertical disk structure.
We observe two types of perturbations: inner kinks at roughly the location of the spiral arms (but outside the dust gap at $\sim70$\,au), and outer kinks, beyond the extent of the continuum emission at $\sim 250$\,au. Perturbations are strongly present in the central velocity channels of both CO tracers, shown in Figure \ref{kinks_central}. We observe the inner kink (marked with a yellow arrow in Figure \ref{kinks_central}) close to the spiral on the south side of the disk and along several channels; it appears strongest at channels $+$1.77 to $+$1.55\,km s$^{-1}$. This feature is co-located with the NW spiral. A large outer ``C'' shape (marked with a green arrow in Figure \ref{kinks_central}) can be seen beyond the gas gap in the south, where the emission is brightest. This ``C''-shaped feature is strongest in $^{13}$CO, along channels 1.55-2.33\,km s$^{-1}$, but in channels 1.66-1.88\,km s$^{-1}$ of C$^{18}$O we can also recognize it in the southern part of the disk. We suggest that the ``C'' is not a deviation from Keplerian motion, but rather the projected emission from the upper and lower sides of the disk, connected by material bridging the center between both sides.
Besides these features located near the disk systemic velocity, we see more subtle deviations in high-velocity channels, on both the East and West sides of the disk in the C$^{18}$O channel maps. To highlight these perturbations, we show the expected isovelocity curves for these velocity channels, along with the dust spirals, in Figure \ref{kinks_small}. The top and bottom panels of Figure \ref{kinks_small} show the West and East C$^{18}$O emission, imaged with the finer spectral resolution that is available only for the C$^{18}$O data. The deviations are most visible at 0.95-1.0\,km s$^{-1}$ in the West, where the top layer of the disk emission does not precisely follow the isovelocity curve (blue line) and appears perturbed at the spiral arm location. On the East side, at 2.95-2.90\,km s$^{-1}$, similar deviations are apparent in the top emission layer of the disk, roughly co-located with the SE spiral. These kinks are not clearly observed in the $^{13}$CO maps; however, perturbations due to the spirals are expected to be more apparent in C$^{18}$O than in $^{13}$CO, as C$^{18}$O traces a layer closer to the midplane, where the spirals reside. The deviations are better discerned in the West channels, possibly due to the lack of cloud absorption at these velocities, but also because, if the kinks are caused by the spiral arms, the highest-contrast spiral is on the West side of the disk.
Recently, \cite{2020ApJ...890L...9P} reported the presence of a kink on the Northern side of Elias 2-27, at the location of the dust gap, which was signaled as a possible indicator of a planetary companion. We do not recover this feature, possibly owing to our lower spatial resolution: the DSHARP data has 4-5 times better angular resolution than this work, and is sensitive to spatial scales down to $\sim$6\,au in this system. This work analyses data with higher spectral resolution (3-6 times better) that is less affected by cloud absorption than previously published studies. The latter makes us sensitive to the perturbations reported in this work; however, we are not able to detect perturbations on small spatial scales due to our angular resolution.
\section{Hydrodynamic Simulations of a Disk undergoing GI}
Elias 2-27 has been subject to different modelling approaches aimed at explaining the origin of the observed spiral substructure \citep[]{2017ApJ...835L..11T, 2017ApJ...839L..24M, 2018MNRAS.477.1004H, 2018ApJ...860L...5F, 2018ApJ...859..119B, 2020MNRAS.498.4256C}, with most of the modelling efforts oriented towards a gravitationally unstable disk, as the disk-to-star mass ratio ($q$) has been estimated to have values around 0.2-0.3 \citep[]{2009ApJ...700.1502A, 2009ApJ...701..260I, 2010A&A...521A..66R, laura_elias} and gravitational instabilities are expected when disk-to-star mass ratios are $>0.1$ \citep{2016ARA&A..54..271K}. The number of spiral arms ($m$) excited in a GI scenario depends inversely on the disk-to-star mass ratio ($m \sim M_*/M_d$). Given the $m=2$ spiral mode observed in Elias 2-27, previous simulations \citep{2017ApJ...835L..11T, 2017ApJ...839L..24M, 2018MNRAS.477.1004H, 2018ApJ...860L...5F} have aimed at reproducing the system using higher disk mass estimates ($M_d/M_* \sim0.5$) than those derived from the observations \citep[$M_d/M_* \sim0.2-0.3$, ][]{laura_elias}. Recent work by \citeauthor{2020MNRAS.498.4256C}, however, has shown that a disk-to-star mass ratio of 0.27 may also reproduce the observations. While such a low disk-to-star mass ratio predicts a larger number of spiral arms, it has been shown that ALMA sampling can make a disk with $q=0.25$ appear as an $m=2$ system \citep{2014MNRAS.444.1919D}.
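As a rough orientation, the $m \sim M_*/M_d$ scaling gives the following expected arm numbers for the mass ratios discussed here (order-of-magnitude only):
\begin{verbatim}
# m ~ M_*/M_d = 1/q; order-of-magnitude expectation only
for q in (0.2, 0.3, 0.5):
    print(f"q = {q}: expected m ~ {1.0 / q:.0f}")
# q = 0.2: expected m ~ 5
# q = 0.3: expected m ~ 3
# q = 0.5: expected m ~ 2
\end{verbatim}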
We performed a total of 10 three-dimensional, dusty, gaseous hydrodynamical simulations using the SPH code \texttt{PHANTOM} \citep{2018PASA...35...31P}. To accurately compare the simulations to our multiwavelength observations, we use the multigrain setup considering 5 different grain sizes, ranging from 1\,$\mu$m to 1\,cm in 5 logarithmically-spaced size bins, assuming a size distribution $dn/da\propto a^{-3.5}$. Multiple grain sizes are necessary because, while the most efficient emission at wavelength $\lambda$ comes from dust grains of size $a\sim\lambda/2\pi$ \citep[e.g.][]{2006ApJ...636.1114D}, there is an overall contribution from all grains. The dust is modelled self-consistently with the gas, using the multigrain ``one-fluid'' approach, where we limit the dust flux using the Ballabio switch \citep{2018MNRAS.477.2766B}. Since the disk is massive and self-gravitating, the dust remains in the strongly-coupled regime ($St\lesssim 1$) out to grain sizes of several cm, so we do not need to use the two-fluid approach. In this regime, the dust exerts a significant force back on the gas (back-reaction) \citep{2018MNRAS.479.4187D}, so we include this effect on the gas.
In all 10 simulations we used 1 million SPH particles and assumed a central stellar mass of 0.5M$_{\odot}$, represented by a sink particle \citep{1995MNRAS.277..362B} with accretion radius set to 1\,au. We set the initial inner and outer disk radii to 5\,au and 300\,au, respectively. The simulations differ in disk-to-star mass ratio and density profile. The total dust mass in the system is kept constant at $0.001$ M$_{\odot}$, since this is observationally constrained \citep{laura_elias}. As the total dust mass is fixed in our simulations, to obtain different disk-to-star mass ratios we vary the gas-to-dust ratio ($\epsilon$), with $\epsilon=100, 151$ and 252 for $q=0.2, 0.3$ and 0.5, respectively. We do not sample lower gas-to-dust ratios because we do not expect to recover an $m=2$ spiral arm morphology for $q<0.2$. While sampling the emission with ALMA can make disks with high $m$ values appear as $m=2$ systems, this effect does not hold at very low $q$ \citep{2014MNRAS.444.1919D}. The sound speed profile was set as $c_\mathrm{s}\propto R^{-0.25}$, and we used two surface density profiles: either a simple power-law,
\begin{figure*}[t]
\centering
\includegraphics[width=\hsize]{comp_q03_exp10_00064.pdf}
\caption{Panels from left to right correspond to the 0.89\,mm, 1.3\,mm and 3.3\,mm simulated observations for an exponentially tapered dust density profile with index 1.0 and disk-to-star mass ratio $q=0.3$. Top: images of simulated emission after subtraction of the azimuthally averaged intensity profile; blue and red dots trace the maxima locations along the spirals. Bottom: blue and red dots correspond to the deprojected radial locations of the traced spirals in the simulated emission. Colored solid lines show the constant pitch angle logarithmic spiral fit; dashed colored lines extend the fit to smaller radii. Black points are the deprojected radial locations of the spirals from the observations (Section 3) with their astrometric error. The pitch angle likelihood parameter is indicated in the bottom right panel.
}
\label{model_q03_98}
\end{figure*}
\begin{equation}
\label{eq_pl}
\Sigma(R) = \Sigma_{0}\left(\frac{R}{R_{0}}\right)^{-p}
\end{equation}
\noindent
where $\Sigma_0$ is the surface density at the inner edge of the disk, and $p$ either 1.3 or 1.5, or an exponentially-tapered power-law,
\begin{equation}
\label{eq_pl_exp}
\Sigma(R)=\Sigma_{c}\left(\frac{R}{R_{0}}\right)^{-p} \exp \left[-\left(\frac{R}{R_{c}}\right)^{2-p}\right]
\end{equation}
\noindent
where $R_c$ is the characteristic radius of the profile, which we set to $R_c=200$\,au, and $p$ is either 0.7 or 1.0. In both surface density profiles, $R_0$ is the reference radius and is set to $R_0=10$\,au.
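Both density profiles are straightforward to evaluate; a minimal sketch using the reference radii above (the normalizations $\Sigma_0$ and $\Sigma_c$ are placeholders, fixed in practice by the total dust mass):
\begin{verbatim}
import numpy as np

R0, Rc = 10.0, 200.0  # reference and characteristic radii in au

def sigma_powerlaw(R, sigma0, p):
    # Simple power-law surface density
    return sigma0 * (R / R0)**(-p)

def sigma_tapered(R, sigmac, p):
    # Exponentially tapered power-law surface density
    return sigmac * (R / R0)**(-p) * np.exp(-(R / Rc)**(2.0 - p))
\end{verbatim}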
We used a polytropic equation of state and assumed that the disk cools through the $\beta$ cooling prescription \citep{2001ApJ...553..174G}, where the cooling timescale, $t_\mathrm{c}$, is related to the dynamical timescale such that $t_\mathrm{c} = \beta t_\mathrm{dyn}$. The dynamical timescale is the rotation period, $2\pi/\Omega$, and we set $\beta = 15$. Finally, each simulation is evolved for 10 orbital periods at the outer radius (300\,au), with outputs saved every 0.1 of an orbital period. The details of the model parameters are shown in Table \ref{table_param_model}.
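For reference, the cooling time implied by this prescription at a given radius can be sketched as follows (SI constants rounded; a simple illustration, not part of the \texttt{PHANTOM} setup):
\begin{verbatim}
import numpy as np

G, M_sun, au, yr = 6.674e-11, 1.989e30, 1.496e11, 3.156e7

def t_cool_yr(r_au, beta=15.0, mstar_msun=0.5):
    """Beta-cooling timescale t_c = beta * (2 pi / Omega), in years,
    for Keplerian Omega around the central star."""
    omega = np.sqrt(G * mstar_msun * M_sun / (r_au * au)**3)
    return beta * 2.0 * np.pi / omega / yr
\end{verbatim}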
\begin{table}[h]
\def1.5{1.5}
\caption{SPH Model Parameters}
\label{table_param_model}
\centering
\begin{tabular}{c c}
\hline\hline
Param. & Value\\
\hline
$M_*$ [$M_\odot$] & 0.5\\
$R_{in}$ [au] & 5\\
$R_{out}$ [au] & 300\\
$M_{dust}$ [$M_\odot$] & 0.001\\
gas-to-dust mass ratio & 100, 151, 252\\
Min. Grain size [cm]& 10$^{-4}$ \\
Max. Grain size [cm]& 1\\
\hline
\end{tabular}
\end{table}
While SPH simulations portray the overall dynamical behaviour of a system, in order to accurately compare a model to an observation it is necessary to perform radiative transfer calculations on the SPH outputs and then simulate mock observations using the same observing conditions (uv-coverage) as the actual observations. Radiative transfer is necessary because it accounts for the multiwavelength emission the grains will have, given the stellar characteristics and grain distribution. Sampling the radiative transfer output with the same observing configuration is crucial, because the antenna distribution and observation time set the angular resolution and determine the image formed from the sampled uv-coverage.
\subsection{Dust Simulations}
\begin{figure*}
\centering
\includegraphics[width=\hsize]{DSHARP_comp_model.pdf}
\caption{Residual emission after subtracting the azimuthally averaged intensity profile. Left: DSHARP \citep{DSHARP_Andrews} high-angular resolution emission at 1.3\,mm. Right: Simulated emission of a GI disk with an exponentially tapered dust density profile of index 1.0 and disk-to-star mass ratio $q=$0.3. Both data sets have the same angular resolution, indicated by the beam in the bottom left of each panel.
}
\label{DSHARP_comp}
\end{figure*}
The Monte Carlo radiative transfer \textsc{mcfost} code \citep{mcfost1,mcfost2} was used to compute the disk thermal structure and synthetic continuum emission maps at each observed wavelength (0.89\,mm, 1.3\,mm and 3.3\,mm). We assumed $T_\mathrm{gas} = T_\mathrm{dust}$, and used $10^7$ photon packets to calculate $T_\mathrm{dust}$. We set the parameters for the central star to match those of the Elias 2-27 system \citep{2009ApJ...700.1502A, laura_elias}, with temperature $T=3850$ K, $M = 0.5$M$_{\odot}$ and $R_* = 2.3$ R$_\odot$.
To create the density structure used as input for the \textsc{mcfost} calculation, each SPH simulation underwent Voronoi tessellation such that each SPH particle corresponds to one \textsc{mcfost} cell. We assumed the dust is a mixture of silicate and amorphous carbon \citep{1984ApJ...285...89D}, and the optical properties were calculated using Mie theory. The grain population consists of 100 logarithmic bins ranging in size from 0.03 $\mu$m to 1\,mm. The dust density for a grain size $a_i$ was obtained by interpolating from the SPH dust sizes in each cell of the model. We assume that grains smaller than half the smallest SPH grain size (0.5 $\mu$m) are perfectly coupled to the gas distribution. We normalised the dust size distribution by integrating over all grain sizes, assuming a power-law relation between grain size $a$ and number density of dust grains $n(a)$ such that d$n(a)\propto a^{-3.5}$\,d$a$.
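The size binning and normalization can be sketched as follows (an illustrative numpy version of the weighting described above, not the \textsc{mcfost} internals):
\begin{verbatim}
import numpy as np

# 100 logarithmic bins from 0.03 micron to 1 mm (sizes in cm)
a = np.logspace(np.log10(3.0e-6), np.log10(0.1), 100)
weights = a**(-3.5) * np.gradient(a)   # dn(a) proportional to a^-3.5 da
weights /= weights.sum()               # normalize over all grain sizes
\end{verbatim}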
The radiative transfer emission map of each wavelength was sampled with the same uv-coverage as the observations at each corresponding wavelength, using \textsc{galario} \citep{2018MNRAS.476.4527T} to create mock ALMA visibilities that were afterwards processed using the same deconvolution procedures as with the observations (described in Section 2) to obtain the final mock ALMA images.
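Schematically, the visibility sampling step looks as follows (a minimal \textsc{galario} sketch; the image, pixel size, and baselines are placeholder values, and the real pipeline also applies phase-centre offsets, position angle, and per-channel weights):
\begin{verbatim}
import numpy as np
from galario.double import sampleImage

# Placeholder model image (Jy/pixel), pixel size and observed baselines
image = np.zeros((1024, 1024), dtype=np.float64)
dxy = np.deg2rad(0.005 / 3600.0)            # 5 mas pixels (assumption)
u = np.random.uniform(-1e6, 1e6, 10000)     # baselines in wavelengths
v = np.random.uniform(-1e6, 1e6, 10000)

vis = sampleImage(image, dxy, u, v)         # complex model visibilities
\end{verbatim}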
For each simulation image we subtract its azimuthally-averaged radial profile of emission, following the same procedure used to trace the spiral morphology in the multi-wavelength observations (Section 3.1). We find that most of our models are able to accurately reproduce the $m=2$ large-scale spiral morphology. This is expected in the ALMA images, even at lower disk-to-star mass ratios (where we expect a larger number of spiral arms), as was shown by \cite{2014MNRAS.444.1919D}. To select the simulation that best resembles our observations, we measure the spirals' pitch angles and compare them to the observational values. The pitch angles measured for each spiral and model setup are shown in Table \ref{table_pitch_model}. The model pitch angle ($\phi_{model}$) and the observational pitch angle ($\phi_{obs}$) are compared by taking the difference between the values of each spiral and wavelength and weighting by the error of the observational pitch angle ($\sigma_{obs}$), following a likelihood parameter determined by:
\begin{equation}
\sum_{\lambda} \sqrt{\left[\left(\frac{\phi_{model}^{NW} - \phi_{obs}^{NW}}{\sigma_{obsNW}} \right)^2 + \left(\frac{\phi_{model}^{SE} - \phi_{obs}^{SE}}{\sigma_{obsSE}}\right)^2 \right]_{\lambda}}
\end{equation}
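A minimal implementation of this metric (the dictionary-based bookkeeping is an illustrative choice, not our actual code):
\begin{verbatim}
import numpy as np

def pitch_metric(model, obs, sigma):
    """Sum over wavelengths of the quadrature, sigma-weighted NW/SE
    pitch-angle differences; model/obs/sigma map wavelength -> (NW, SE)
    pitch angles in degrees."""
    total = 0.0
    for lam in obs:
        d_nw = (model[lam][0] - obs[lam][0]) / sigma[lam][0]
        d_se = (model[lam][1] - obs[lam][1]) / sigma[lam][1]
        total += np.sqrt(d_nw**2 + d_se**2)
    return total
\end{verbatim}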
The simulation that best reproduces the observations follows a dust density profile of an exponentially tapered power-law with index 1.0 and a disk-to-star mass ratio of 0.3. These parameters are close to the previously published observational constraints for this disk \citep{laura_elias} and similar to the result derived by \citeauthor{2020MNRAS.498.4256C}. The simulation is shown in Figure \ref{model_q03_98}. Besides reproducing the pitch angle values, the radial extension of the spirals and their overall morphology in the simulation are similar to the observations. We note that the comparison to each simulation set was made for a specific timestep among all the simulation outputs, and the selection was based on the output that showed a clear two-armed spiral feature after at least 6 outer orbits. For the best-fit simulation, the selected timestep was after 6.4 outer orbits (at 300\,au from the star). It is important to state that, as expected, the spirals arising from GI are constantly excited and de-excited throughout the time span of our simulation. This means that for the same set of parameters, there may be several timesteps that accurately reproduce the morphology and others where no spirals are seen.
Even though one simulation reproduces the observations better than the rest, the pitch angles of the different SPH simulations are in most cases comparable to those of the observations ($\sim$12.9$^{\circ}$ and $\sim$13.2$^{\circ}$ for the NW and SE spirals, respectively, with small variations between wavelengths). This shows that it is possible to reproduce the grand-design spirals even at lower disk-to-star mass ratios than previously tested for this system \citep[]{2018MNRAS.477.1004H, 2017ApJ...839L..24M}, with stellar mass, disk dust mass and density profile values comparable to the observational constraints. The likelihood values of the rejected models are shown in Table \ref{table_pitch_model}. Additionally, we see that not all GI spirals recovered from the models and measured with our method (Section 3.1) are perfectly symmetric (i.e., have the same pitch angle), even though symmetry is a property that has been predicted for GI-excited spiral arms in other works \citep{2018ApJ...860L...5F}. This is specifically observed in the models with $q=0.2$.
We compare our best-fit model with the high-angular resolution DSHARP \citep{DSHARP_Andrews} data at 1.3\,mm. The comparison of the subtracted, residual images is shown in Figure \ref{DSHARP_comp}. From the visual comparison we clearly see that the internal structure of the spirals is different. Even at high angular resolution, the observations show thicker, wider, continuous spirals. On the other hand, the simulation shows thinner and discontinuous spirals, made from a superposition of filaments. At larger radii we note that the spirals in the observation get wider, while in the simulation they remain thin. The causes of these differences are discussed in Section 6.3.
We measure the pitch angle values, sampling the same angular extent as in the work by \cite{DSHARP_Huang_Spirals}. We retrieve a value of 15.56$^{\circ} \pm 0.06^{\circ}$ for the NW spiral and 12.76$^{\circ} \pm 0.06^{\circ}$ for the SE spiral. These values differ from the constraints from the lower angular resolution simulations, showing that beam smearing does impact the pitch angle measurements, as proposed in Section 3.1. The pitch angle values for the DSHARP observations are 15.7$^{\circ} \pm 0.2^{\circ}$ for the NW spiral and 16.4$^{\circ} \pm 0.2^{\circ}$ for the SE spiral. The main difference is in the SE spiral, possibly related to the morphology differences and the thickness of the spiral in the observation.
\subsection{Gas Simulations}
\begin{figure}
\centering
\includegraphics[width=\hsize]{model_gas_together.pdf}
\caption{Individual emission of channels at velocities $+$1.73\,km/s (left) and $+$1.62\,km/s (right) for simulated $^{13}$CO (top two rows) and C$^{18}$O (bottom two rows) emission. From top to bottom, the first and third rows correspond to the \textsc{mcfost} output emission. The second and fourth rows show the emission after applying the uv-coverage of the observations and processing with CASA. White lines trace the spirals from the best simulation ($q=0.3$, exponentially tapered dust density profile of index 1.0). The beam for the simulated ALMA images is shown in the bottom left of each corresponding panel.
}
\label{model_gas_all}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\hsize]{model_gas_together_18small.pdf}
\caption{Individual emission of selected channels (corresponding velocities marked in top row) for C$^{18}$O simulated emission. Top row corresponds to \textsc{mcfost} output emission. Bottom row shows the emission after applying uv-coverage as in observations and processing with CASA. White lines trace the spirals from the best simulation (q=0.3, exponentially tapered dust density profile index 1.0). The beam for the simulated ALMA images is shown in the bottom left of each corresponding panel.
}
\label{model_gas_small18}
\end{figure*}
From the simulation that best reproduces the pitch angle of the spiral arms ($q = 0.3$, density profile with a tapered power-law of index 1.0), we compute the simulated channel maps. As with the dust simulations, we use \textsc{mcfost} \citep{mcfost1,mcfost2}, with the same parameters as before, to compute the disk thermal structure and synthetic $^{13}$CO $J = 3 - 2$ and C$^{18}$O $J = 3 - 2$ line maps. The molecule abundances, relative to local H$_2$, are set to 7 $\times$ 10$^{-7}$ for $^{13}$CO \citep[as done in ][]{2020arXiv200715686H} and 2 $\times$ 10$^{-7}$ for C$^{18}$O \citep[following the estimate from][]{1982ApJ...262..590F}. The spectral resolution of the kinematic simulations is set to 0.111\,km/s to match the observations, and we additionally compute C$^{18}$O channel maps at 0.05\,km/s resolution, to compare with the finer spectral resolution data.
The results for two representative channel maps are shown in Figure \ref{model_gas_all}. While in the radiative transfer output we observe the characteristic ``GI-wiggle'' shown by \cite{2020arXiv200715686H} at the spiral arm's location, when sampling the data with the observation's uv-coverage the ``GI-wiggle'' is not visible in either isotopologue. In the work by \cite{2020arXiv200715686H} the ``GI-wiggle'' remains visible even after convolving with a Gaussian beam. Compared to \cite{2020arXiv200715686H}, our simulated ALMA images have a $\sim$4 times lower spectral resolution (0.111\,km/s compared with 0.03\,km/s) and $\sim$3 times lower angular resolution (0.3$\arcsec$ compared to 0.1$\arcsec$). While the higher spectral resolution C$^{18}$O observations have a channel width (0.05\,km/s) comparable to the analysis of \cite{2020arXiv200715686H}, the angular resolution smears the ``GI-wiggle'' features (see Figure \ref{model_gas_small18}). We note that no North/South brightness asymmetry is present in the simulated channel maps. Additional channel maps computed with a 90$^\circ$ inclination do not show any significant vertical difference between the East/West sides.
\section{Discussion}
\subsection{Spiral structure and multi-wavelength dust continuum emission}
The morphology of the dust spiral structure can be a key indicator of the origin of the spirals. We measure symmetric spirals, present at all three wavelengths, with similar radial extensions and pitch angles. Symmetric spirals with constant pitch angles are predicted in the case of GI \citep{2018ApJ...860L...5F}, rather than for companion perturbations, where asymmetric, variable-pitch-angle spirals are expected \citep{2018ApJ...859..119B}. Additionally, we measure similar contrasts for both spirals ($\sim30\%$ difference in contrast between the NW and SE spirals), which is also consistent with GI predictions, as companion-induced spirals are expected to show a clear primary spiral \citep{2018ApJ...859..119B}. The latter was already shown for the emission at 1.3\,mm \citep{laura_elias, DSHARP_Huang_Spirals}. In this study we extend the finding to 0.89\,mm and 3.3\,mm. Additionally, the spectral index map shows a spiral morphology with slightly lower $\alpha$ values along the spirals. This coincides with the prediction for dust trapping expected for GI \citep[][]{2015MNRAS.451..974D}.
We measure the optical depth profiles, which show similar values at all three wavelengths. However, the temperature profile used for deriving the optical depth \citep[computed using the flaring and stellar luminosity values from][]{DSHARP_Andrews} results in a midplane temperature $\sim$2 times higher than the 0.89\,mm brightness temperature at all radii. Most likely, the disk is much colder and more optically thick than what we derive, and our optical depths must then be taken as lower limits.
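Under the standard absorption-only assumption, the optical depth follows from inverting $I_\nu = B_\nu(T)\,(1 - e^{-\tau})$; a short sketch (SI constants rounded; an illustration of the argument, not our pipeline) makes explicit why an overestimated midplane temperature yields a lower limit on $\tau$:
\begin{verbatim}
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8  # SI units

def planck(nu, T):
    # Planck function B_nu(T) in SI units
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def tau_absorption_only(I_nu, nu, T_mid):
    """tau = -ln(1 - I_nu / B_nu(T_mid)); a hotter assumed T_mid gives
    a larger B_nu and hence a smaller (lower-limit) tau."""
    return -np.log(1.0 - I_nu / planck(nu, T_mid))
\end{verbatim}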
Several works have shown that when scattering from dust grains is considered, optically thick disks can display lower intensities and be categorized as more optically thin disks \citep{2019ApJ...877L..18Z, 2019ApJ...877L..22L, 2020ApJ...892..136S}. When scattering is not present and the emission is optically thin, $\alpha$ cannot reach values below 2.0. When dust scattering is included, a region of high optical depth can have $\alpha<2.5$ (and even attain a spectral index below 2.0) if the albedo decreases with wavelength \citep{2019ApJ...877L..18Z, 2020ApJ...892..136S}, something observed in the innermost regions of TW Hya \citep{2016ApJ...829L..35T, 2018ApJ...852..122H}. Furthermore, while $\alpha$ also depends on other dust properties, such as the grain size distribution \citep[e.g.,][]{2014prpl.conf..339T}, values of $\alpha< 3$ are not expected when the optical depth is low, and even in the presence of cm-sized grains $\alpha$ should not attain values below $\sim2.5$ \citep{2019ApJ...877L..18Z}.
From our spectral index profiles, $\alpha<2.0$ in the inner $\sim$40\,au and $\alpha<2.5$ inside $\sim$200\,au; outside this region $\alpha$ grows, reaching a maximum value of 3.0 in the outer disk. This indicates that the outer disk is probably optically thin with grains of 0.1-10\,cm, favouring a dust distribution $n(s) \propto s^{-3.5}$ \citep[see Figure 9 in][]{2019ApJ...877L..18Z}. However, in the region between $\sim$70-200\,au the spectral index increases slowly between $\sim$2.2-2.5. This scenario favours a dust distribution $n(s) \propto s^{-2.5}$ and can be explained either by optically thin emission with 0.1-10\,cm grains or by optically thick emission with maximum grain sizes of $\sim$0.1\,cm. In the inner disk ($\lesssim70$\,au), the spectral index reaches values below 2.0. Most likely, the emission there is optically thick and dust scattering is at work, even though the maximum optical depth we constrain (under standard assumptions) is $\tau \sim 0.5$ at all wavelengths.
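For reference, the spectral index between two bands is simply (a sketch; the quoted frequencies are approximate band centres, for illustration):
\begin{verbatim}
import numpy as np

def spectral_index(I1, I2, nu1, nu2):
    # alpha such that I_nu scales as nu^alpha
    return np.log(I1 / I2) / np.log(nu1 / nu2)

# e.g. ~337, 231 and 91 GHz roughly correspond to 0.89, 1.3 and 3.3 mm
\end{verbatim}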
For the DSHARP sample, it has been shown that the optical depths of $\tau \sim 0.5$ at 1.3\,mm in bright rings, can be obtained from optically thick regions with a scattering albedo of $\omega_{\nu} \sim$0.89 \citep{2019ApJ...877L..18Z}.
For Elias 2-27, we measure an average $\tau \sim 0.45$ at 1.3\,mm inside 70\,au, which could be obtained with a scattering albedo of $\omega_{\nu} \sim$ 0.93 \citep[see equations 14 and 15 from ][]{2019ApJ...877L..18Z}. This albedo value is sufficient to mask optically thick 1.3\,mm emission ($\tau_{real} \sim$1-5) in the inner disk, which we would infer as optically thin 1.3\,mm emission ($\tau_{obs} \sim 0.45$) under the standard assumption of pure absorption opacity. As previously discussed, our optical depth values could be underestimated due to the difference between the measured brightness temperatures and the model midplane temperature. If the midplane temperature were a factor of $\sim$1.5 lower, the optical depth in the inner 70\,au of the disk at 1.3\,mm would be $\tau \sim 0.99$, which could be obtained with a scattering albedo $\omega_{\nu} \sim$ 0.74. A similar analysis can be done for the other wavelengths from the measured $\tau$ values of the inner disk. This results in scattering albedos of 0.92 and 0.95 for 0.89\,mm and 3.3\,mm, respectively.
Together with the effect of scattering at long wavelengths, the observed low spectral index values can also occur due to the disk's temperature. For cold ($<$30K) systems, such as Elias 2-27, spectral indices below 2 are expected due to the displacement of the peak blackbody radiation to the sub-mm range \citep{2020ApJ...892..136S}. Additionally, if the emission is optically thick and there are fluctuations in the vertical temperature structure, this could also result in $\alpha<2.0$ \citep[see Figure 5 in][]{2020ApJ...892..136S}.
The underestimation of the optical depth in Elias 2-27 when not including the effect of scattering will impact its solid mass estimate. This effect is larger for inclined disks and when the emission area is compact \citep{2019ApJ...877L..18Z}, which is the case for our source in the inner regions. Considering the large disk extent (up to $\sim250$\,au), most of the dust mass resides in the optically thin outer disk; thus, the disk mass could be underestimated by up to a factor of $\sim$2 \citep{2019ApJ...877L..18Z}. The latter implies that the previously constrained disk-to-star mass ratio of 0.1-0.3 \citep[][using standard assumptions such as a gas-to-dust ratio of 100]{2009ApJ...700.1502A, 2010A&A...521A..66R} is a lower bound. If the disk mass were higher by up to a factor of 2, the resulting disk-to-star mass ratio would make gravitational instabilities a likely cause of the spiral structure.
\subsection{Asymmetries and Perturbations in the Gas}
The highly perturbed morphology constrained for the emitting gas layer in Elias 2-27 is a new characteristic of this system and offers new insight into the ongoing dynamical processes. The asymmetric structure of the $^{13}$CO and C$^{18}$O emitting layer, as well as the dust spiral arms present in Elias 2-27, could in principle be caused by a fly-by interaction \citep[e.g., ][]{2019MNRAS.483.4114C} or an external companion. But if this were the case, we would also expect a strong kinematical perturbation in the integrated emission and velocity maps of Elias 2-27, such as those reported by \cite{DSHARP_Nico} for disks with known external companions. Furthermore, near-infrared observations of Elias 2-27 have not found any companion \citep{2009ApJ...696L..84C, 2020MNRAS.tmp.2019Z}.
With no outer perturbation, the emission layer height of the gas should follow hydrostatic equilibrium, and its structure depends on the gas temperature \citep{2015arXiv150906382A}. In this case we expect a cone-like or flared emission layer, with height increasing at larger radial distance \citep{2013ApJ...774...16R}. We will discuss two possible origins for the observed asymmetries in the gas: ongoing infall of material from the surrounding cloud/envelope, or a warped inner disk causing azimuthal temperature variations.
Three-dimensional simulations of circumstellar disks with ongoing accretion show that the vertical structure of the disk will become asymmetric, as the accreting gas shocks the disk from above or below along the $z$ axis, in cylindrical coordinates \citep[see Figure 8 in][]{2017A&A...599A..86H}. Furthermore, simulations of ongoing infall predict the appearance of spiral structures in the surface, generated by the infall process and shocks, both in 3D \citep{2017A&A...599A..86H, 2011MNRAS.413..423H} and 2D simulations \citep{2015A&A...582L...9L}. Infall-triggered spirals may have been observed in the 343\,GHz ALMA emission of VLA 1, a Class I source with active envelope infall \citep[][]{2020NatAs...4..142L}. From our observations, we have shown the presence of large-scale emission surrounding the disk (see Figures \ref{mom_largescale}, \ref{panel_largescale_13CO} and \ref{panel_largescale_C18O}). Our data lacks the appropriate uv-coverage to accurately sample the whole FOV, which extends for 20$\arcsec$; this is possibly the cause of the striped pattern seen in the channel maps (Figures \ref{panel_largescale_13CO} and \ref{panel_largescale_C18O}). We do not recover velocity gradients connecting the large-scale emission to the disk emission, which could be due to a lack of sensitivity or angular resolution; nevertheless, this leaves open the possibility of ongoing infall. While infall is not expected for a Class II source, it has been proposed that Elias 2-27 could be a very young Class II disk \citep{2017ApJ...835L..11T}.
An azimuthal variation of the temperature in the disk could also explain an azimuthally varying emission layer height, which can be expected when a disk is warped. A warp will affect the disk illumination, depending on the position and characteristic angles of the warp itself \citep{2010MNRAS.403.1887N}. Warps are generally detected through their shadowing effects in scattered light \citep[]{2015ApJ...798L..44M, 2018A&A...619A.171B} and their distinct kinematical signatures \citep[]{2017MNRAS.466.4053J, 2020ApJ...889L..24P, 2017A&A...607A.114W}. Given its high extinction, no scattered light observations are available for Elias 2-27, so we can only compare our observations with the kinematic predictions for a warp.
In ALMA observations, a misaligned disk can be inferred through the kinematical signature in the velocity map: if a non-misaligned disk is modelled, then positive and negative residuals appear, opposite with respect to a ``symmetry'' axis \citep[e.g., as seen in the velocity field residuals for HD100546 or HD143006, ][]{2017A&A...607A.114W,DSHARP_Laura_10}. We do not observe such residuals in our data (see Figure \ref{kep_model_vel}); rather, we see an ``X''-shaped residual in the C$^{18}$O emission. Other signatures related to warps are asymmetric illumination, deviations in line profiles, and twisted features in the channel and integrated emission maps \citep{2017MNRAS.466.4053J, 2018MNRAS.473.4459F}. The gas emission of Elias 2-27 is characterized by a strong illumination asymmetry between the North and South sides of the disk. Additionally, we have shown the presence of ``curved'' and ``wavy'' features and deviations in both the integrated velocity maps and the velocity cube (see Figures \ref{kep_model_vel} and \ref{kinks_central}, respectively). These features may relate to an inclined disk, warped such that the disk bends perpendicular to the line of sight \citep[see Case C in ][]{2017MNRAS.466.4053J}. The latter configuration produces channel maps with curved structures and asymmetric illumination, as observed in our data, while it also shows further structure in the integrated intensity map and asymmetric line profiles \citep{2017MNRAS.466.4053J}. We may not be sensitive to all these features given our moderate spatial resolution (the simulations are done at a resolution of $\sim$0.1$\arcsec$; we have $\sim$0.3$\arcsec$) and the effects of cloud contamination in the system.
If indeed there is a warp, we should be able to roughly trace its location through the temperature variations it will cause. We expect temperature variations to affect the emitting layer's height, and we should therefore be able to trace these variations in the projected emission. While we do observe an elevation difference between the East and West sides of the disk, apparently separated by the semi-minor axis, we have discussed that this is probably by chance and that we cannot determine an exact symmetry angle. The method used to derive the emitting surface assumes symmetry with respect to the semi-major axis, and this biases our results. We also show that, when tracing the deprojected border of emission (see Figure \ref{C18O_gasgap}), there are two local maxima and minima in radial extension, suggesting that there is in fact no single symmetry axis \citep[we would have expected only one minimum and maximum extent, roughly symmetrically opposed, in the case of a warp,][]{2017MNRAS.466.4053J}. More complex processes, or a combination of effects, must be occurring.
Finally, if a warp is responsible for the kinematic effects, there is the question of its origin. On one hand, warps are thought to arise from close-in binary interactions or inclined planetary orbits \citep[]{2018MNRAS.481...20N, 2020MNRAS.492.3306A}. Warps in very young systems, possibly produced by the infall of material, have been predicted \citep{2010MNRAS.401.1505B} and also reported \citep{2019Natur.565..206S}. In the case of Elias 2-27 we have discussed that infall may be occurring, given the detection of large-scale emission at velocities close to the disk velocities. If the disk is indeed warped, this effect could be caused either by a planet on an inclined orbit or by infall; both options require further investigation.
Aided by the isovelocity curves computed from the constrained emission surface of the disk, we detect multiple deviations from Keplerian motion in the velocity cubes of $^{13}$CO and C$^{18}$O, co-located with the spiral structure. The isovelocity curves themselves show a perturbed nature, given the complex emission surface. The co-location of the perturbations with the spiral structures and the strong deviations could indicate a connection between the spirals and the emitting surface morphology. Previously reported deviations attributed to planetary companions have been around 15$\%$ \citep{2018ApJ...860L..13P} with respect to the channel velocity, while the perturbations we observe are much larger, reaching up to 80$\%$ of the channel velocity for some of the features in the central channel maps. Such large perturbations increase the likelihood that they relate to the spirals, rather than to a companion. Large planetary deviations require a high perturber mass, and in that case we would expect to see its effect in the dust emission. Furthermore, we expect deviations caused by a planet to be spatially localized \citep[]{2018ApJ...860L..13P, 2020ApJ...890L...9P, 2019NatAs...3.1109P}, whereas our observations show deviations on both the East and West sides of the disk, present along several channels. The detected perturbations agree with the predictions of \cite{2020arXiv200715686H}: they are co-located with the spirals, and the morphology of the kink is similar to what was predicted in their work. If the disk is indeed warped or undergoing considerable infall of material, as previously discussed, the observed ``kinks'' could be the combination of kinematical deviations induced by a warp and perturbations due to the spirals.
In this study we attempted a purely Keplerian fit, even though a super-Keplerian velocity profile is expected when GI is the governing process \citep{1999A&A...350..694B, 2007NCimR..30..293L}. The Keplerian rotation curve is able to fit the observed velocity profile, but we note that the East channels, especially in the C$^{18}$O emission, show super-Keplerian velocities. Further analysis is needed to check whether a self-gravitating rotation curve may be a better description of these data. This is currently being studied and will be presented in a future publication (Veronesi et al., submitted).
Finally, regarding the gap in the C$^{18}$O gas emission, we do not see evidence of it being produced by a physical perturber. The gap does not appear to be co-located with any perturbation in the channel maps (see Figure \ref{kinks_central}), which we would expect if the gap origin were planetary \citep[e.g.][]{2018ApJ...860L..13P, 2018ApJ...860L..12T}. It may instead arise from chemical processes in the gas or from optical depth effects \citep[see discussion in ][]{2018ApJ...869L..48G}.
\subsection{Comparison with SPH simulations.}
We find that gravitational instabilities can accurately reproduce the spiral morphology at multiple wavelengths, with parameters close to the observational constraints, as shown in Figure \ref{model_q03_98}. However, we see considerable morphological differences in the comparison to high-angular resolution data. The width and morphology of spiral arms in GI environments can be regulated by varying the cooling parameters \citep[see figures from ][]{2003MNRAS.339.1025R, 2017ApJ...836...53B}. Future SPH simulations should attempt to sample this parameter space to further understand the cooling prescription of the disk. We note that \citeauthor{2020MNRAS.498.4256C} also analyse the spiral structure of Elias 2-27 at DSHARP angular resolution and obtain thicker spirals in the subtracted image. Their spirals are constructed with a semi-analytical model that considers grain growth; while they are thicker than what we recover, they are still not able to reproduce the precise morphology of the observations, as they are too smooth \citep[see subtracted images in Figure 16 from ][]{2020MNRAS.498.4256C}.
GI does not explain the dust gap. If the dust gap was carved by a planet, estimates indicate a mass of at most 0.1M$_J$ \citep{DSHARP_Zhang}, which is much smaller than the planets expected to form in a GI environment. Though not shown in this work, we produced additional SPH simulations of an unstable disk with different planetary-mass companions. In all cases, after a few orbits, the planet migrated onto the star. This is expected, according to the predictions of Type I migration for planets orbiting at several au from the star \citep{2014prpl.conf..667B}. Not allowing migration could allow a gap to form \citep[see ][]{2017ApJ...839L..24M}; however, it would not be a realistic scenario.
In our spectral line observations we recover perturbations that are co-located with the spirals and span several channels, sharing some similarities with those predicted for GI disks by \citet{2020arXiv200715686H}. Our gas simulations with the best-fit GI parameters do not show the perturbation features of \citet{2020arXiv200715686H} when sampled and imaged with the uv-coverage of the observations. This implies that the inner perturbation observed in the $^{13}$CO and C$^{18}$O channel maps is indeed very large, as it appears even at our lower angular resolution. The perturbation is seen across several channels, which is in agreement with the predicted perturbations of a GI disk \citep{2020arXiv200715686H}. The fact that we do not see the perturbation in the channel maps from the simulated emission could be due to the choices of timestep and position angle for the simulation. As we select a specific time frame, it is possible that at other evolutionary stages the strength of the gas perturbations would have been larger. Also, it has been shown by \cite{2020arXiv200715686H} that the strength of the ``GI-wiggle'' will vary depending on the position angle of the emission. While the inclination of the simulation matches the value of the observations (56.2$^\circ$), the position angle is set before inclining the disk. When producing the ALMA mock image, the position angle of the emission is determined by the value that allows a good visual comparison of the final, inclined, simulated emission image to the observations. Due to the computational expense of sampling various position angles, we adopted the observational value (118.8$^\circ$) for the position angle of the simulations, varying it only by 90$^\circ$ or 180$^\circ$ in some cases. These shifts were decided using a visual criterion. We therefore note that there could be a range of position angle values that allow a good comparison with the observations while also producing stronger kinematic perturbations; this is not tested in this work.
\subsection{Spiral Structure Origin}
The hypotheses for the origin of the spiral arms observed in the dust emission of Elias 2-27 are either gravitational instabilities or the perturbation by a companion. From the observational constraints presented in this work, there are some key features that may help us define which scenario best fits the Elias 2-27 disk. To begin, however, we must state that neither option can accurately predict, on its own, both the spirals and the dust gap. In the case of a disk undergoing GI, gaps are not expected features even if a planet were formed, given the fast migration that even massive planets will undergo under these conditions \citep{2011MNRAS.416.1971B}. In order to reproduce the spiral arms, a companion would have to be located beyond the spiral extent \citep{2017ApJ...839L..24M}. The possibility of the spirals being formed by a companion seems unlikely, as has been discussed in previous studies, given the contrast, symmetry, and extent of the spirals \citep{2018ApJ...860L...5F, 2018ApJ...859..119B}. Furthermore, the possibility of an external perturber, such as a stellar companion or a fly-by, causing the spiral structure is also unlikely, due to the lack of a clear kinematical signature in the data and the non-detection of any nearby object \citep{2005A&A...437..611R, 2009ApJ...696L..84C, 2020A&A...635A.162L}. \cite{2020A&A...635A.162L} conducted a NACO/VLT survey searching for planetary companions around 200 stars, including Elias 2-27. For this system they reach a 50$\%$ probability of detecting a 2$M_J$ companion outside 100\,au or a $\sim$10$M_J$ companion at 40-50\,au \citep[][R. Launhardt, priv. comm.]{2020A&A...635A.162L}. We note that the detection of an external perturber is made more challenging by the extinction that affects this region.
Elias 2-27 has been shown to have a large disk-to-star mass ratio \citep{2009ApJ...700.1502A, 2010A&A...521A..66R}; in this work we additionally discuss the possibility of the disk mass being up to a factor of $\sim$2 higher, if scattering is a relevant process \citep{2019ApJ...877L..18Z}. Even if scattering were not relevant, we show that the disk is probably more optically thick than the reported values, which also points towards a larger disk mass. The value of the disk-to-star mass ratio is sufficient to excite gravitational instabilities, and we can accurately reproduce the spiral morphology using SPH models of a self-gravitating disk, at least at medium resolution. Additionally, we detect dust-trapping signatures in the continuum observations, in the contrast variations with increasing wavelength and the lower spectral index values along the spirals. We also measure strong kinematic perturbations co-located with the spirals over multiple channels. The high disk mass, together with the strong deviations from Keplerian motion, consistent with the kinematical prediction for a GI disk \citep{2020arXiv200715686H}, leads us to attribute the origin of the spirals to gravitational instabilities, rather than a companion.
Infall of material would explain the high disk-to-star mass ratio of the system and the excitation of spiral structures due to GI \citep{2017ApJ...838..151T, 2017A&A...599A..86H}. While infall mechanisms are expected to be present in Class 0/I systems, which are younger than Elias 2-27 (classified as a Class II disk, \citealt{2009ApJ...700.1502A}), it has been proposed that Elias 2-27 could be an extremely young Class II object to explain the spiral structure \citep{2017ApJ...835L..11T}. Though GI can explain the spiral structure traced in the dust, there is also a clear dust gap \citep{DSHARP_Huang_Annular}, which, we emphasize, is not a feature predicted or explained by GI. On the other hand, infall may explain on its own the perturbed morphology of the gas emission layer, but there is also a marked brightness asymmetry, which could be related to the presence of a warp in the disk \citep{2010MNRAS.403.1887N}. If the disk is warped, due to infalling material breaking the disk, this could possibly explain the dust gap observed at high angular resolution by \cite{DSHARP_Huang_Annular}, if the separation between inner and outer misaligned disks were located at $\sim$70\,au from the central star. Certainly, further observations of the source are required: observations sampling shorter baselines, to adequately study the dynamics of the large-scale emission and search for infall signatures, and observations at higher spatial resolution, to analyze the origin of the dust gap and better constrain the kinematic perturbations in the system.
\newpage
\section{Summary}
We have presented and analyzed multi-wavelength dust continuum observations of the protoplanetary disk around Elias 2-27. We also studied the gas emission for $^{13}$CO and C$^{18}$O $J=3-2$. This provides new observational constraints on this source, which allowed us to study the origin of the prominent spiral structure. Our findings are as follows:
\begin{itemize}
\item The spiral substructure is present in dust observations at multiple wavelengths, from 0.89 to 3.3\,mm, and shows a higher contrast at longer wavelengths. These signs are possible indicators of grain growth and dust trapping at the spiral arm location.
\item From the spectral index analysis we trace a spiral morphology with lower spectral index values along the spiral location. This is expected for dust trapping, which is a key signature of gravitational instabilities not observed before in other systems with spiral morphology. The spectral index values also indicate the presence of large grains in the outer disk. Inwards of $\sim 70$au the spectral index drops to values even lower than 2. This can be explained if this region has high optical depth and high albedo in the presence of dust scattering. We discuss that our optical depth estimates are lower limits, given the low brightness temperature measured in the observations. The presence of scattering and higher optical depths implies that the solids mass estimated under standard assumptions is likely a lower limit. The mass of the disk in Elias 2-27 is presumably higher than previously estimated.
\item We compute SPH simulations of a gravitationally unstable disk with parameters matching those of Elias 2-27, and we are able to replicate the spiral morphology at the three different wavelengths we study at $\sim$0.2$\arcsec$ resolution. Discrepancies at high angular resolution could be due to the cooling prescription used.
\item Observations show that the gas emission is not azimuthally symmetric in the vertical direction, i.e. the disk has a larger emission layer height in the West than in the East at most radial distances. Additionally, at larger radial distances, the kinematic data indicate that the emission layer height decreases. This is the first time we observe an azimuthal emission layer height difference in a protoplanetary disk, and it does not appear in predictions for either a GI disk or the presence of a planetary companion.
\item Tracing the different heights of the $^{13}$CO and C$^{18}$O emission layers, we show that $^{13}$CO comes from a higher layer than C$^{18}$O, with velocities consistent with Keplerian rotation. The stellar mass we constrain ($\sim$0.46-0.5\,M$_\odot$) is in agreement with the literature value (0.49\,M$_\odot$). Gas emission depletion (a gap) is observed in the distribution of C$^{18}$O at a radius of $\sim$240\,au. This gap does not appear to be co-located with the main perturbations we recognize in the channel maps.
\item We see ``kinks'' or perturbations in the channel maps of both CO tracers that appear co-located with the spiral features. These kinks are strong and present across a wide velocity range, making it unlikely that they have a planetary origin. The characteristics of these perturbations are similar to what has been predicted for a GI disk by \cite{2020arXiv200715686H}.
\item Based on observations that show large-scale emission surrounding and connecting to the disk, we propose the infall of material from the surrounding cloud is responsible for exciting GI in the disk, which in turn causes the dust spiral arm features. Infall of material can also explain the perturbed emission layer constrained from the gas tracers. Additionally, if infall warped the disk, this could explain the brightness asymmetry in the channel and integrated emission maps. Depending on the warp location it could also explain the dust gap observed at higher angular resolution. Further observations are necessary to effectively detect the presence of a warp and confirm ongoing infall. \\
\end{itemize}
\software{CASA \citep{2007ASPC..376..127M}, eddy \citep{2019JOSS....4.1220T}, bettermoments \citep{2019ascl.soft01009T}, PHANTOM \citep{2018PASA...35...31P}, mcfost \citep{mcfost1, mcfost2}, galario \citep{2018MNRAS.476.4527T}, frankenstein \citep{2020MNRAS.495.3209J}, Astropy \citep{astropy:2013, astropy:2018},
Matplotlib \citep{Hunter:2007}, emcee \citep{2013PASP..125..306F}}
\begin{acknowledgements}
This paper makes use of the following ALMA data: \#2013.1.00498.S, \#2016.1.00606.S and \#2017.1.00069.S.
ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile.
The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ.
L.M.P.\ acknowledges support from ANID project Basal AFB-170002 and from ANID FONDECYT Iniciaci\'on project \#11181068.
M.B.\ acknowledges funding from ANR of France under contract number ANR-16-CE31-0013 (Planet Forming Disks).
C.H.\ was a Winton Fellow and this research was supported by Winton Philanthropies / The David and Claudia Harding Foundation. A.S.\ acknowledges support from ANID/CONICYT Programa de Astronom\'ia Fondo ALMA-CONICYT2018 31180052.
J.M.C.\ acknowledges support from the National Aeronautics and Space Administration under grant No. 15XRP15\_20140 issued through the Exoplanets Research Program.
S.M.A.\ acknowledges funding support from the National Aeronautics and Space Administration under Grant No. 17-XRP17 2-0012 issued through the Exoplanets Research Program.
J.B.\ acknowledges support by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51427.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
Th.H.\ acknowledges support from the European Research Council under the
Horizon 2020 Framework Program via the ERC Advanced Grant Origins 83 24 28.
L.L.\ acknowledges the financial support of DGAPA, UNAM (project IN112820), and CONACyT, México.
M.T.\ has been supported by the UK Science and Technology research Council (STFC) via the consolidated grant ST/S000623/1.
L.T.\ acknowledges support from the Italian Ministero dell'Istruzione, Universit\`a e Ricerca through the grant Progetti Premiali 2012 -- iALMA (CUP C$52$I$13000140001$), by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Ref no. FOR $2634$/$1$ TE $1024$/$1$-$1$, and the DFG cluster of excellence Origins (www.origins-cluster.de).
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 823823 (DUSTBUSTERS) and from the European Research Council (ERC) via the ERC Synergy Grant {\em ECOGAL} (grant 855130).
Powered@NLHPC: This research was partially supported by the supercomputing infrastructure of the NLHPC (ECM-02).
This research used the ALICE2 High Performance Computing Facility at the University of Leicester. This research also used the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (\url{www.dirac.ac.uk}). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. This work was partially supported by the University of Georgia Office of Research and the Department of Physics and Astronomy.
\end{acknowledgements}
\section{Introduction}\label{intro}
\begin{quote}
\emph{
These days, there's not much you can understand about what is
going on around you if you do not understand the uncertainty
attached to pretty much every phenomenon.}
\hspace*\fill{\small--- J. N. Tsitsiklis}
\end{quote}
There is continuous progress in understanding the fire phenomenon,
its impact on the structure, and the reaction of humans and safety
systems. There are scientific methods and models of the fire and the
emergency scene, and there are computer implementations, but the complexity
of the domain impedes the more widespread use of these tools.
Currently, the most typical approach for assessing the safety of a
building is to precisely choose the input parameters for a small
number of lengthy, detailed simulations. This procedure is managed by a practitioner,
based on his experience. However, the literature on \emph{heuristics and
biases}~\cite{tversky1974judgment,kahneman2009conditions} gives us
concerns about whether human judgement surpasses statistical calculations. The
alternative is to let the computer randomly choose the parameters and run
thousands of simulations. The resulting collection allows us, after
further processing, to judge the safety of the building.
\section{Aamks, the multisimulations platform}\label{aamks}
We created Aamks -- a platform for running simulations of fires and
then running evacuation simulations, but thousands of them for a
single project. This is the Monte-Carlo approach. We use CFAST, which is
a rough but fast fire simulator. This allows us to explore the space
of possible scenarios and assess their probability. The second
component of risk -- consequences -- is taken from an evacuation
simulator capable of modeling evacuation in the fire environment. We use
a-evac as the evacuation simulator, which we have built from scratch. The
\emph{multisimulation} is a handy name for what we are doing. Aamks
tries to assess the risk of failure of human evacuation from a building
under fire. We applied the methodology proposed
in~\cite{hostikka2002probabilistic,frame,PRA} -- stochastic simulations
based on the Simple Monte-Carlo approach~\cite{christian2007monte}. Our
primary goal was to develop an easy-to-use engineering tool rather than
a scientific tool, which resulted in: an AutoCAD plugin for creating
geometries, a web based interface, predefined setups of materials and
distributions of various aspects of building features, etc. The
workflow is as follows: The user draws a building layout or imports an
existing one. Next, the user defines a few parameters including the type
of the building, the safety systems in the building, etc. Finally, they
launch a defined number of stochastic simulations. As a result they
obtain the distributions of the safety parameters, namely: available
safe egress time (ASET), required safe egress time (RSET), fractional
effective dose (FED), hot layer height and temperature, and F-N curves, as
well as the event tree and risk matrix.
Fortran is a popular language for coding simulations of physical
systems; CFAST and FDS+Evac are coded in Fortran. Since we do not create
a fire simulator, we code Aamks in Python, which is more convenient
due to its extremely rich collection of libraries. We decided that
borrowing Evac from FDS and integrating it with Aamks would be harder
for us than coding our own evacuation simulator, hence a-evac was born.
There is also a higher chance of attracting new Python developers than
Fortran developers to our project.
Aamks consists of the following modules:
\begin{itemize}
\item{a-geom, geometry processing: AutoCAD plugin, importing
geometry, extracting the topology of the building, navigating in the building,
etc.}
\item{a-evac, directing evacuees across the building and altering
their states}
\item{a-fire, CFAST and FDS binaries and processing of their outputs}
\item{a-gui, web application for user input and for the results
visualisation}
\item{a-montecarlo, stochastic producer of thousands of input files for
CFAST and a-evac}
\item{a-results, post-processing the results and creating the content
for reports}
\item{a-manager, managing computations on the grid/cluster of computers}
\item{a-installer}
\end{itemize}
\section{A-evac, the evacuation simulator}
In the following subsections we describe the internals of a-evac,
sometimes with the necessary Aamks context.
\subsection{Geometry of the environment}\label{geom}
The Aamks workflow starts with a 3D geometry where fires and
evacuations will be simulated. We need to represent the building, which
contains one or more floors. Each floor can consist of compartments and
openings in them, named respectively COMPAS and VENTS in CFAST. Our
considerations are narrowed to rectangular geometries. There are two
basic ways for representing architecture geometries: a) cuboids can
define the insides of the rooms (type-a-geometry) or b) cuboids can
define the walls / obstacles (type-b-geometry). CFAST uses the
type-a-geometry. We create CFAST geometries from the input files of the
following format (there are more entities than presented here):
\begin{verbatim}
{
"FLOOR 1":
{
"ROOM": [
[ [ 3.0 , 4.8 , 0.0 ] , [ 4.8 , 6.5 , 3.0 ] ] ,
[ [ 3.0 , 6.5 , 0.0 ] , [ 6.8 , 7.4 , 3.0 ] ]
] ,
"COR": [
[ [ 6.2 , 0.2 , 0.0 ] , [ 7.6 , 4.8 , 3.0 ] ]
] ,
"D": [
[ [ 3.9 , 3.4 , 0.0 ] , [ 4.8 , 3.4 , 2.0 ] ]
] ,
"W": [
[ [ 1.2 , 3.4 , 1.0 ] , [ 2.2 , 3.4 , 2.0 ] ]
] ,
"HOLE": [
[ [ 3.0 , 6.5 , 0.0 ] , [ 4.8 , 6.5 , 3.0 ] ]
]
}
}
\end{verbatim}
ROOM and COR(RIDOR) belong to COMPAS. D(OOR), W(INDOW) and HOLE belong
to VENTS. HOLE is a result of CFAST restrictions -- it is an artificial
entity which serves to merge two compartments into a single compartment
as shown in Figure~\ref{hole}.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.5\textwidth]{hole.pdf}
\end{center}
\caption{The concept of a HOLE: a) the room in reality, b) the room
representation in CFAST: two rectangles for separated calculations, but
open to each other via HOLE.}
\label{hole}
\end{figure}
All the entities in the example belong to the same FLOOR 1. The
triplets $(x_0,y_0,z_0)$ and $(x_1,y_1,z_1)$ encode the beginning
and the end of each entity in 3D space. In practice we obtain these
input files from AutoCAD, thanks to our plugin which extracts the data from
the AutoCAD drawing. There is also an Inkscape SVG importer -- useful, but
lacking some features. Adding basic support for other graphics tools
is not much work.
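As an illustration, the following sketch (our own, not part of the
Aamks codebase) loads such a file and flattens the 3D triplets of one
floor into 2D rectangles; the key names follow the example above, while
the function name and the floor argument are hypothetical:
\begin{verbatim}
import json

def load_floor(path, floor="FLOOR 1"):
    # Flatten the 3D triplets of one floor into 2D rectangles,
    # keyed by entity type (ROOM, COR, D, W, HOLE).
    with open(path) as f:
        geom = json.load(f)[floor]
    rects = {}
    for kind, boxes in geom.items():
        rects[kind] = [(x0, y0, x1, y1)
                       for (x0, y0, _z0), (x1, y1, _z1) in boxes]
    return rects
\end{verbatim}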
In later sections we will introduce the problem of guiding evacuees
throughout the building. The corresponding modules require the type-b-geometry. We
convert from a type-a-geometry to a type-b-geometry by duplicating the
geometry, translating it, and applying some logical operations.
Figure~\ref{toObstacles} shows the idea; a sketch of one possible
implementation follows the figure.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.9\textwidth]{toObstacles.pdf}
\end{center}
\caption{The conversion from type-a-geometry to type-b-geometry}
\label{toObstacles}
\end{figure}
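One possible way to realize this conversion -- a minimal sketch assuming
axis-aligned rectangles and a fixed wall thickness, using the shapely
library rather than the actual a-geom code -- is:
\begin{verbatim}
from shapely.geometry import box
from shapely.ops import unary_union

def rooms_to_obstacles(rooms, wall=0.2):
    # rooms: list of 2D interior rectangles [(x0, y0, x1, y1), ...]
    # wall:  assumed wall thickness in meters (hypothetical default)
    interiors = unary_union([box(*r) for r in rooms])
    # Dilate the union outward and subtract the interiors; what is
    # left is the wall "shell" around and between the rooms.
    return interiors.buffer(wall, join_style=2).difference(interiors)
\end{verbatim}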
There are three aspects of movement when it comes to evacuation
modeling~\cite{cuesta2015evacuation}: (a) path-finding --
for the rough route out of the building, (b) local movement -- the evacuees'
interactions with other evacuees, with obstacles, and with the environment, and (c)
locomotion -- for the ``internal'' movement of the agent (e.g., body sway).
A-evac models only (a) and (b).
\subsection{Path-finding (roadmap)}
The simulated evacuees need to be guided out of the building.
The type-b-geometry provides the input for path-finding. Each
of the cuboids in type-b-geometry -- representing obstacles -- is
defined by the coordinates. These coordinates represent corners of the
shapes. Since we model each of the floors of a building separately, we
flatten 3D geometry into 2D and represent obstacles as rectangles.
Therefore the type-b-geometry in the path-finding module is represented as a set
of 4-tuples of coordinates $\big((x_0, y_0),(x_1, y_1),(x_2, y_2),(x_3,
y_3)\big)$.
The set of 4-tuple elements is then flattened into a set of coordinates --
a bag-of-coordinates. Since the majority of the obstacles share
coordinates, we remove duplicates from the set (for the sake of
performance). This bag-of-coordinates is then the input for
triangulation. We apply Delaunay
triangulation~\cite{delaunay1934sphere}, which represents space as a set
of triangles. Figure~\ref{tri} depicts the idea of triangulation.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=1.0\textwidth]{tri.pdf}
\end{center}
\caption{The idea of triangulation. a) original geometry, b)
bag-of-coordinates, c) triangulation.}
\label{tri}
\end{figure}
The triangles are used as navigation meshes for the agents. The
navigation meshes define which areas of an environment are traversable
by agents.
After the triangulation of the bag-of-coordinates, some of the triangles are
located inside the obstacles -- those (by definition) are not
traversable, so we remove them. What is left is traversable-friendly
space.
We then create the graph of spatial transitions for the agents, based on
the adjacency of the triangles obtained from the triangulation. A spatial
transition means that an agent can move from one triangle to another.
An agent on an edge of a triangle can always reach the other two edges.
For triangles which share an edge, this allows an agent to travel from one
triangle to another.
The pairs of all neighbouring edges are collected. We use the Python
networkx module~\cite{brandes2005network}, which creates a graph made of
the above pairs. For further processing we add the agents' positions to the
graph by pairing them with the neighbouring edges.
The graph represents all possible routes from any node to any other node
in the graph. We can query the graph for the route from the current
agent's position to the closest exit. This means that the agent will walk
through the consecutive nodes and will finally reach the exit door. We
instruct networkx that we need the shortest distances in our routes
(the default is the fewest hops on the graph) and we obtain a set of edges
the agent should traverse in order to reach the exit.
Figure~\ref{evacgraph} depicts the set of edges returned by the graph
for an example query; a minimal sketch of the roadmap construction
follows the figure.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.99\textwidth]{path.png}
\end{center}
\caption{The roadmap defined by the graph for an example query. The red line
crosses centers of edges that an agent needs to travel to reach the exit.}
\label{evacgraph}
\end{figure}
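A minimal sketch of this construction, simplified from the description
above (the \texttt{is\_traversable} helper, e.g.\ a point-in-obstacle
test, is assumed), could look as follows:
\begin{verbatim}
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

def build_roadmap(points, is_traversable):
    # points: bag-of-coordinates, array of shape (N, 2)
    tri = Delaunay(points)
    graph = nx.Graph()
    for simplex in tri.simplices:
        corners = points[simplex]
        if not is_traversable(corners.mean(axis=0)):
            continue  # triangle lies inside an obstacle
        # Midpoints of the three edges are mutually reachable;
        # adjacent triangles share an edge, hence share a node.
        mids = [tuple((corners[i] + corners[(i + 1) % 3]) / 2)
                for i in range(3)]
        for a in mids:
            for b in mids:
                if a != b:
                    d = np.hypot(a[0] - b[0], a[1] - b[1])
                    graph.add_edge(a, b, weight=d)
    return graph

# Querying the metrically shortest route between two graph nodes:
# route = nx.shortest_path(graph, start, exit_node, weight="weight")
\end{verbatim}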
The set of edges returned by the graph cannot be used directly for
path-finding. Neither the vertices of the edges nor their centers
define the optimal path that would be naturally chosen by evacuees during a real
evacuation. Therefore an extra algorithm should be used to smooth the
path. For this purpose we apply the funnel algorithm defined
in~\cite{chazelle1982theorem}. The funnel is a simple algorithm finding
straight lines along the edges.
The input for the funnel consists of a set of ordered edges (named
portals) from the agent's origin to the destination. The funnel always
consists of 3 entities: the origin (apex) and the two vectors from the apex to
vertices on the edges -- the left leg and the right leg.
The apex is first set to the origin of the agent and the legs are set to
the vertices of the first edge. We advance the left and right legs to
the consecutive edges in the set and observe the angle between the legs.
When the angle gets smaller, we accept the new vertex for the leg.
Otherwise the leg stays at the given vertex. After some iterations one of
the legs will cross the other leg, defining the new position of the
apex. The apex is moved and we restart the procedure. Figure~\ref{funnel}
depicts the idea.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.59\textwidth]{funnel_def.pdf}
\end{center}
\caption{The idea of funnel algorithm. a) starting point, b) advancing
legs.}
\label{funnel}
\end{figure}
As a result the path is smoothened and defined only by the points where
changes in the velocity vector are needed. Moreover, we use an improved version
of the funnel algorithm that allows for defining points at a
distance from the corners reflecting the size of the evacuee. This
allows for modeling impaired evacuees in wheelchairs or beds in
hospitals. Figure~\ref{funnelpath} depicts a path smoothened by the funnel
algorithm; a compact sketch of the classic funnel loop is given after
the figure.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.99\textwidth]{funnel.png}
\end{center}
\caption{The roadmap from starting point to exit smoothened by
funnel algorithm. }
\label{funnelpath}
\end{figure}
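For reference, below is a compact sketch of the classic funnel
(``string pulling'') loop. It omits the clearance margin for the
evacuee size that a-evac adds; portals are (left, right) point pairs
ordered from start to goal, with degenerate first and last portals:
\begin{verbatim}
def triarea2(a, b, c):
    # Twice the signed area of the triangle (a, b, c).
    return (c[0]-a[0])*(b[1]-a[1]) - (b[0]-a[0])*(c[1]-a[1])

def string_pull(portals):
    apex, left, right = portals[0][0], portals[0][0], portals[0][1]
    apex_i = left_i = right_i = 0
    path, i = [apex], 1
    while i < len(portals):
        pl, pr = portals[i]
        if triarea2(apex, right, pr) <= 0.0:   # tighten right leg
            if apex == right or triarea2(apex, left, pr) > 0.0:
                right, right_i = pr, i
            else:                              # legs crossed: new apex
                path.append(left)
                apex, apex_i = left, left_i
                left = right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        if triarea2(apex, left, pl) >= 0.0:    # tighten left leg
            if apex == left or triarea2(apex, right, pl) < 0.0:
                left, left_i = pl, i
            else:
                path.append(right)
                apex, apex_i = right, right_i
                left = right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        i += 1
    path.append(portals[-1][0])
    return path
\end{verbatim}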
\subsection{Local movement}\label{orca}
Local movement focuses on the interaction with (a) other agents, (b) static
obstacles (walls), and (c) environmental conditions. A-evac handles (a) and
(b) via RVO2\footnote{\url{http://gamma.cs.unc.edu/RVO2/}}, an
implementation of the Optimal Reciprocal Collision Avoidance (ORCA) algorithm
proposed in~\cite{van2008reciprocal,van2011reciprocal}. Later in this
section we describe how we pick the local targets, which is an
aspect of (b). Aspect (c) basically amounts to altering the agent's state, such as speed.
RVO2 aims at avoiding the \emph{velocity
obstacle}~\cite{fiorini1998motion}. The velocity obstacle is the set of
all velocities of an agent that will result in a collision with another
agent or an obstacle; any other velocity is collision-avoiding.
RVO2 aims at ensuring that none of the agents collides with other
agents within time $\tau$.
The overall approach is as follows: each of the agents is aware of the other
agents' parameters: their position, velocity, and radius (the agent's
observable universe). Besides, the agents have their private parameters:
a maximum speed and a preferred velocity which they can auto-adjust
provided there is no other agent or an obstacle colliding. With each loop
iteration, each agent responds to what he finds in his surroundings,
i.e., his own and the other agents' radii, positions, and velocities. The
agent updates his velocity if it lies in the velocity obstacle with another
agent. For each pair of colliding agents, the set of collision-avoiding
velocities is calculated. RVO2 finds the smallest change required to
avert the collision within time $\tau$, and that is how an agent gets his new
velocity. The agent alters up to half of his velocity while the other
colliding agent is required to take care of the other half.
Figure~\ref{local_issues}a-b depicts the idea of velocity collision
avoidance.
The algorithm remains the same for avoiding static obstacles. However,
the value of $\tau$ is smaller with respect to obstacles, as agents
should be ``braver'' about moving towards an obstacle if this is necessary
to avoid other agents.
It turned out to be problematic how to pick the local target from the
roadmap. Local targets need to be updated (usually advanced, but
not always) near the points defined by the funnel algorithm during the path-finding
phase -- the \emph{disks} in Figure~\ref{path_vs_local} -- after they become
visible to the agent. However, the disks can be crowded and agents can
be driven away from the correct courses by other agents. We carefully
inspected all possible states that agents can find themselves in. In
order to have a clearer insight and control over the agents inside the
disks, we use a Finite State
Machine\footnote{\url{https://en.wikipedia.org/wiki/Finite-state_machine}}
instead of just a plain algorithm block in our code. The state of the
agent is defined by 4 binary features: (a) is the agent inside the disk? (b)
are \emph{where the agent is walking to} and \emph{what the agent is looking at}
the same target? (c) can the agent see what he is looking at (or are there
obstacles in-between)? (d) has the agent reached the final node?
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.9\textwidth]{path_vs_local.pdf}
\end{center}
\caption{The roadmap and local movement}
\label{path_vs_local}
\end{figure}
Within each iteration of the main loop we check the states of the
agents. The states can be changed by the agents themselves -- e.g., an agent
has crossed the border of the disk -- or by our commands -- e.g., an agent is
ordered to walk to another target. Consider these circumstances: the
agent has managed to see his next target and now he walks towards this
next target -- he is in state S1. But now he loses eye contact with
this new target and finds himself in state S2. The program logic reacts
to such a state by transiting to state S3: start looking at the previous
target and walk towards this previous target. Based on what happens next,
we can order the transition to another state or just wait for the agent
to change the state himself. By careful examination of all possible
circumstances we can make sure that our states and their transitions
handle all possible scenarios. A minimal sketch of such a transition
function is given below.
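The state encoding follows the four binary features above; the action
names are ours, not the actual a-evac API:
\begin{verbatim}
def next_action(state):
    # state = (inside_disk, walk_and_look_targets_agree,
    #          target_visible, reached_final_node)
    inside_disk, same_target, visible, finished = state
    if finished:
        return "stop"                      # agent is done
    if inside_disk and visible:
        return "advance_to_next_portal"    # pick next funnel point
    if same_target and not visible:
        # lost eye contact with the new target: fall back (S2 -> S3)
        return "walk_to_previous_target"
    return "keep_walking"                  # e.g. state S1
\end{verbatim}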
In Figure~\ref{local_issues}c) we show how agents pass through a
HOLE. Due to our concept of the disks (where the search for new targets
takes place) and due to the internals of RVO2, we gain the desired effect
of agents not crossing the very center of the disk. Instead, the agents
can walk in parallel and advance to another target, which looks natural
and does not create an unnecessary queue of agents eager to cross the
very center of the HOLE.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=.7\textwidth]{local_issues.pdf}
\end{center}
\caption{ RVO2 at its work of resolving collisions: (a) agents on direct
collision courses and (b) their calculated collision-avoiding courses, (c)
three agents crossing a HOLE in parallel.}
\label{local_issues}
\end{figure}
\subsection{Evacuation under fire and smoke}
Each a-evac simulation is preceded by a simulation of the fire. We
have only tested a-evac with
CFAST~\cite{peacock2013CFAST,jones2000technical}. CFAST writes its
output to CSV files. We need to query these CFAST results quite a bit;
therefore we transform and store them in a fast in-memory
relational database\footnote{\url{https://www.sqlite.org/}}. For each
frame of time we repeatedly ask the same questions: (a) given the
agent's coordinates, which room is he in? (b) what are the current
conditions in this room?
When it comes to (b), the environmental effects on the agent can be: (b.1)
limited visibility (eyes), (b.2) poisonous gases (nose), and (b.3)
temperature in the room (body). Both (b.1) and (b.2) are read at the
default (but configurable) height of 1.8~m. There are always two zones
in CFAST, separated at a known height, so we need to read the
conditions from the correct zone, based on where our 1.8~m belongs.
The value of visibility ($OD$ -- optical density) affects the agent's speed.
We use the relation proposed in~\cite{frantzich2003utrymning}, following
FDS+Evac~\cite{korhonen2007fds}:
\begin{equation}\label{speedF}
v_n^{pref} (K_s) = \max \Big\{ v_{n, min},\, v_n^{pref} \Big(1 + \frac{\beta}{\alpha} \cdot K_s \Big) \Big\}
\end{equation}
where $K_s$ is the extinction coefficient ($[K_s] = m^{-1}$), calculated
as $OD/\log_{10}e$ according
to~\cite{jin1974visibility,jin2016visibility}, $v_{n, min}$ is the
minimum speed of the agent $A_n$ and equals
$0.1\cdot v_n^{pref}$, where $v_n^{pref}$ is the agent's preferred velocity, and $\alpha$,
$\beta$ are the coefficients defined in~\cite{frantzich2003utrymning}.
Setting a minimal value of the speed means that the agent does not stop
in thick smoke. He continues moving until the incapacitating value of the
Fractional Effective Dose (FED) is exceeded, which is fatal to the
agent. FED is calculated from the CFAST-provided amounts of the
following species in the agent's environment: carbon monoxide (CO),
hydrogen cyanide (HCN), hydrogen chloride (HCl), carbon dioxide ($CO_2$),
and oxygen ($O_2$), by the equation~\cite{purser2002toxicity,korhonen2007fds}:
\begin{equation}
FED_{total} = (FED_{CO} + FED_{HCN} + FED_{HCl}) \times HV_{CO_2} + FED_{O_2}
\end{equation}
where $HV_{CO_2}$ is the hyperventilation induced by the concentration
of $CO_2$. Following are the formulas for the terms in the above
equation; the concentrations $C$ of CO, HCN, and HCl are given in ppm,
those of $CO_2$ and $O_2$ in \%, and time $t$ is in minutes:
\begin{equation}
FED_{CO} = \int_{0}^{t} 2.764 \times 10^{-5}\bigl(C_{CO}(t)\bigr)^{1.036}\,dt
\end{equation}
\begin{equation}
FED_{HCN} = \int_{0}^{t} \Biggl(\frac{\exp \big(\frac{C_{HCN}(t)}{43}\big)}{220} - 0.0045\Biggr)\, dt
\end{equation}
The formula for $FED_{HCN}$ is based on~\cite{hull2008hydrogen}.
In contrast to the model applied in Evac, CFAST does not allow for
a proactive correction for the effect of nitrogen dioxide -- $C_{CN} = C_{HCN}
- C_{NO_2}$. Therefore this effect is not included in the calculations.
\begin{equation}
FED_{HCl} = \int_{0}^{t} \frac{C_{HCl}(t)}{1900}\, dt
\end{equation}
The formula for $FED_{HCl}$ is based on~\cite{speitel1995toxicity,hull2008hydrogen}.
\begin{equation}
FED_{O_2} = \int_{0}^{t} \frac{dt}{60 \cdot \exp\big[8.13 - 0.54(20.9 - C_{O_2}(t))\big]}
\end{equation}
\begin{equation}
HV_{CO_2} = \frac{\exp\big(0.1903 \cdot C_{CO_2} (t) + 2.0004\big)}{7.1}
\end{equation}
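The following sketch shows how these formulas translate into code; it is
a simplified illustration (not the actual a-evac implementation), and the
assumed units -- CO, HCN, and HCl in ppm, $CO_2$ and $O_2$ in \% --
should be checked against the CFAST output:
\begin{verbatim}
import numpy as np

def preferred_speed(v_pref, od, alpha=0.706, beta=-0.057):
    # Speed reduction in smoke, formula (1); od is optical density.
    ks = od / np.log10(np.e)           # extinction coefficient [1/m]
    return max(0.1 * v_pref, v_pref * (1 + beta / alpha * ks))

def fed_increment(c, dt_min):
    # One integration step of FED_total over dt_min minutes;
    # c maps species names to concentrations at the agent position.
    fed_co  = 2.764e-5 * c["CO"] ** 1.036 * dt_min
    fed_hcn = (np.exp(c["HCN"] / 43) / 220 - 0.0045) * dt_min
    fed_hcl = c["HCl"] / 1900 * dt_min
    fed_o2  = dt_min / (60 * np.exp(8.13 - 0.54 * (20.9 - c["O2"])))
    hv_co2  = np.exp(0.1903 * c["CO2"] + 2.0004) / 7.1
    return (fed_co + fed_hcn + fed_hcl) * hv_co2 + fed_o2
\end{verbatim}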
There are few quantitative data from controlled experiments concerning
the sublethal effects of smoke on people. In the
works~\cite{speitel1995toxicity,gann2008combustion,purser2002toxicity,gann2001sublethal,stec2010fire},
sublethal effects in the form of incapacitation ($IC_{50}$), escape ability
($EC_{50}$), \emph{lingering health problems}, and \emph{minor effects}
were reported. Incapacitation was inferred from lethality data to occur at
about one-third to one-half of the doses required for lethality; the mean
value of the ratios of $IC_{50}$ to $LC_{50}$ was 0.50 with a
standard deviation of 0.21. In~\cite{gann2001sublethal} a
scale for effects based on FED was introduced. Three ranges were
proposed: 1 FED indicating lethality, 0.3 FED indicating incapacitation,
and 0.01 FED indicating that no significant sublethal effects should occur.
Based on these data, we propose a scale for the sublethal effects of smoke on
evacuees, as presented in Table~\ref{fed}.
$FED_{total}$ affects the agent's movement in the smoke. For $FED_{total}>0.3$,
the smoke inhalation leads to sublethal
effects~\cite{gann2008combustion} -- the agent is not able to find
safety from the fire and just stays where he is. For $FED_{total} > 1$
we model lethal effects. We later use these effects in the final risk
assessment.
Table~\ref{fed} summarizes the FED effects on human health as incorporated
in Aamks. It is our original proposition for the evaluation of the
sublethal effects of smoke, based on the following
works:~\cite{speitel1995toxicity,gann2008combustion,purser2002toxicity,gann2001sublethal,stec2010fire}
\begin{table}[!h]\caption{FED effects on human health in Aamks.}\label{fed}
\begin{center}
\begin{tabular}{|c|c|}
\hline
\textbf{FED} & \textbf{Effect on human health} \\
\hline
$<0.01$ & Minor or negligible \\
\hline
$[0.01, 0.3)$ & Low -- short period of hospitalization \\
\hline
$[0.3, 1)$ & Heavy -- lingering health problems or permanent disability \\
\hline
$\ge 1 $ & Lethal \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Probabilistic evacuation modeling}
This section presents the internals of our probabilistic evacuation
model, which we find distinct from the available, similar software.
Table~\ref{stochastic_setup} presents the distributions of the input
parameters used in Aamks. Each of the thousands of simulations in a
single project is initialized with a random input setup drawn from
these distributions. Aamks has a library of the default parameter
values for important building categories (schools, offices, malls, etc.).
The Aamks users should find it convenient to have all the distributions
in a library, but they may choose to alter these values.
Most of the data in Table~\ref{stochastic_setup} come from the standards
and from other models, mostly FDS+Evac. Following are some comments on
Table~\ref{stochastic_setup}.
Aamks pays much attention to the pre-evacuation
time~\cite{cuesta2015evacuation}, which models how people lag before
evacuating after the alarm has sounded. Positions 7 and 8 are
separated because the behaviour of humans in the room of fire origin is
distinct. We compile two regulations, \emph{C/VM2 Verification Method:
Framework for Fire Safety Design}~\cite{nznorm2013} and \emph{British
Standard PD 7974-6:2004}~\cite{bs7974}, in order to get the most
realistic, probability-based pre-evacuation times in the room of fire origin
and in the rest of the rooms.
The horizontal/vertical speed (the unimpeded walking speed of an agent) is
based
on~\cite{hurley2015sfpe,fruin1971pedestrian,predtetschenski1978planning,helbing2002simulation}.
Speed in smoke is modeled by formula~\ref{speedF}.
\begin{table}[!h]
\setlength\extrarowheight{3pt}
\caption{Parameters of the distributions for the exemplary scenario.}
\label{stochastic_setup}
\begin{center}
\begin{tabular}{ccccc}
& Parameter & Distribution & $\mu$/min & $\sigma$/max \\
\hline
1. & Density in rooms [$m^2/humans$] & normal & 5 & 2 \\
2. & Density on corridors [$m^2/humans$] & normal & 20 & 3 \\
3. & Density in stairways [$m^2/humans$] & normal & 50 & 3 \\
4. & Human location in the compartment $x$ & uniform & 0 & room width \\
5. & Human location in the compartment $y$ & uniform & 0 & room depth \\
6. & Time of the alarm & log-normal & 0.7 & 0.2 \\
7. & Pre-evacuation time in the room of fire origin & uniform & 0 & 30 \\
8. & Pre-evacuation time in other compartments & log-normal & 3.04 & 0.142 \\
9. & Horizontal speed & normal & 1.2 & 0.2 \\
10. & Vertical speed & normal & 0.7 & 0.2 \\
11. & $\alpha$ for speed in the smoke & normal & 0.706 & 0.069 \\
12. & $\beta$ for speed in the smoke & normal & -0.057 & 0.015 \\
13. & Humans taking an alternative evacuation route & binomial & 0.03 & 0.97 \\
\end{tabular}
\end{center}
\end{table}
The randomness of the simulations comes from the random number
generator's \emph{seed}. We save the seed for each simulation so that we
can repeat the very same simulation, which is useful for debugging and
visualisation.
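As an illustration, drawing one input setup from (a subset of) the
distributions in Table~\ref{stochastic_setup} with a reproducible seed
could look as follows; the parameter names are ours, not the actual
a-montecarlo API:
\begin{verbatim}
import numpy as np

def draw_setup(seed):
    # The seed is stored with the results, so the exact
    # simulation can be replayed later.
    rng = np.random.default_rng(seed)
    return {
        "alarm_time":         rng.lognormal(mean=0.7, sigma=0.2),
        "pre_evac_fire_room": rng.uniform(0, 30),
        "pre_evac_other":     rng.lognormal(mean=3.04, sigma=0.142),
        "h_speed":            rng.normal(1.2, 0.2),
        "v_speed":            rng.normal(0.7, 0.2),
        "alt_route":          rng.random() < 0.03,
    }
\end{verbatim}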
We register all the random input setups and the corresponding results in
the database. We expect to investigate the relationships in these data
at some point with data mining or sensitivity analysis.
The final result of Aamks is the compilation of multiple simulations into
a set of distributions, i.e., F-N curves. The F-N curves are created as
in~\cite{frantzich2003utrymning}. Figure~\ref{ccdf} depicts exemplary
results.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=1\textwidth]{ccdf}
\end{center}
\caption{The results of evacuation modeling as F-N curves.}
\label{ccdf}
\end{figure}
\subsection{Visualization}
In Aamks we use a 2D visualization for spotting potential user faults in
the CAD work (e.g., rooms with no doors, Figure~\ref{2dvis}), for
the final results, and for our internal development needs. We use a web-based
technology which allows for displaying both static images and
animations of evacuees.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=.8\textwidth]{2d.png}
\end{center}
\caption{2D visualization: animation of evacuees }
\label{2dvis}
\end{figure}
We also have a web-based 3D visualization made with WebGL
(\texttt{Threejs}). This subsystem displays realistic animations of humans
during their evacuation under fire and smoke (Figure~\ref{3dvis}).
\begin{figure}[htp]
\begin{center}
\includegraphics[width=1\textwidth]{eggtjs}
\end{center}
\caption{3D visualization}
\label{3dvis}
\end{figure}
\section{Quality and the performance of a-evac}\label{qq}
Below we evaluate the quality of a-evac as described
in~\cite{cuesta2015evacuation,ronchi2013process}, as well as its
computational performance.
\subsection{Verification of a-evac}\label{vv}
Verification and validation deal with how close the results of the
simulations are to reality. We took care to be compliant with the
general development recommendations~\cite{cuesta2015evacuation} by: (1)
obeying good programming practices, (2) verifying intermediate simulation
outputs, (3) comparing simulation outputs against analytical results,
and (4) creating debugging animations.
There are three types of errors that can be generated by our software: a)
the error in the deterministic modeling of a single scenario, b) the error of the
Monte Carlo approximation, and c) the statistical error -- disturbance.
For the first type of error we applied the methods proposed
in~\cite{ronchi2013process}. The proposed tests are organized in five
core components: (1) pre-evacuation time, (2) movement and navigation,
(3) exit usage, (4) route availability, and (5) flow
conditions/constraints. For each category there are detailed tests for
the geometry, the scenario and the expected results. The results are in
table~\ref{IMO}.
\begin{table}[!h]\caption{The results of Aamks tests}\label{IMO}
\setlength\extrarowheight{2pt}
\begin{center}
\begin{tabular}{llll}
Id. & Name of the tests & Test code & Results\\
\hline
1. & Pre-evacuation time distributions & Verif.1.1 & OK \\
2. & Speed in a corridor & Verif.2.1 & OK \\
3. & Speed on stairs & Verif.2.2 & OK \footnote{The method is not straightforward}\\
4. & Movement around the corner & Verif.2.3 & OK \\
5. & Assigned occupant demographics & Verif.2.4 & OK \\
6. & Reduced visibility vs walking speed & Verif.2.5 & OK \\
7. & Occupant incapacitation & Verif.2.6 & OK \\
8. & Elevator usage & Verif.2.7 & --\\
9. & Horizontal counter-flows (rooms) & Verif.2.8 & OK \\
10. & Group behaviours & Verif.2.9 & --\\
11. & People with movement disabilities & Verif.2.10 & -- \\
12. & Exit route allocation & Verif.3.1 & OK \\
13. & Social influence & Verif.3.2 & --\\
14. & Affiliation & Verif.3.3 & --\\
15. & Dynamic availability of exits & Verif.4.1 & OK \\
16. & Congestion & Verif.5.1 & OK \\
17. & Maximum flow rates & Verif.5.2 & OK \\
\end{tabular}
\end{center}
\end{table}
RVO2, the core library of a-evac which drives the local movement, was
also evaluated in~\cite{viswanathan2014quantitative}. The conclusion
is that RVO2 is of a quality comparable with the lattice gas and
social force models. The social force model is commonly used in a number
of evacuation software packages.
The above is the evaluation of a single, deterministic simulation.
However, the final result is the compilation of a whole collection of
such single simulations -- this is how we get the big picture of the
safety of the inquired building. The picture is meant to present risk.
The probability component of risk is calculated as the share of simulations
that resulted in fatalities among the total number of simulations.
The accuracy of this evaluation depends on the method applied --
stochastic simulations. The error is inversely proportional to the square
root of the number of simulations. Namely, for the discrete Bernoulli
probability distribution used, for example, for the evaluation of the
probability of a scenario with fatalities, the error is calculated as follows:
\begin{equation}
\hat{\sigma}_n = 1.96\sqrt{\frac{\hat{p}_n(1-\hat{p}_n)}{n}};
\end{equation}
where $\hat{p}_n$ is the probability of fatalities obtained as the ratio
of the number of simulations that resulted in fatalities to the total number of
simulations, and $n$ is the number of simulations.
We are aware of the third type of error, which may be generated by the
application. The input for Aamks is a set of various probability
distributions, which may occasionally generate an unrealistic scenario, for
example, an evacuee who moves very slowly in corridors and very fast on
stairs. In most cases these errors are related to the other parts of
Aamks, i.e., probabilistic fire modeling; however, the fire environment
impacts the evacuation. This error can be evaluated by comparing the
data generated by Aamks with real statistics. So far we do not have a good
idea of how to tackle this problem efficiently. We consider evaluating this
error by launching simulations for the building stock and checking
whether we reconstruct historical data. This method is very laborious
and not justified at the moment, because our application still lacks
some models, e.g., fire service intervention -- which has a significant
impact on fire.
\subsection{Performance of the Model}
The main loop of Aamks processes all agents in each time iteration.
Table~\ref{timing} summarizes how costly the specific calculations for a
single agent within a single time iteration are. The tests were
performed on a computer with an Intel Core i5-2500K CPU at 3.30 GHz and 8 GB
of RAM.
\begin{table}[!h]\caption{The costs of a single loop iteration per agent}
\label{timing}
\setlength\extrarowheight{3pt}
\begin{center}
\begin{tabular}{llr}
Activity & Time & Total share\\
\hline
Position update & \SI{8.12e-6}{\second} & 3.97 \%\\
Velocity update & \SI{1.07e-5}{\second} & 5.26 \%\\
Speed update & \SI{8.50e-5}{\second} & 41.63 \%\\
State update & \SI{7.75e-8}{\second} & 0.03 \%\\
Goals update & \SI{9.25e-8}{\second} & 0.04 \%\\
FED update & \SI{1.01e-4}{\second} & 48.98 \%\\
Time update & \SI{1.22e-7}{\second} & 0.06 \%\\
\end{tabular}
\end{center}
\end{table}
The total time of a single step of the simulation for one agent is
\SI{2e-4}{\second}, and it grows linearly with the number of agents. The
speed and FED calculations are the most costly, because they both make
database queries against the fire conditions in the compartment. The
time step of an a-evac iteration is 0.05 s, and there is no significant
change in fire conditions within this time frame. Therefore, as a
performance optimization, we update speed and FED only every 20th step of
the simulation, as sketched below.
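Schematically (the method names are hypothetical, not the actual a-evac
API):
\begin{verbatim}
DT = 0.05                   # simulation time step [s]
for step in range(n_steps):
    for agent in agents:
        agent.update_position(DT)
        agent.update_velocity()
        if step % 20 == 0:  # once per simulated second
            agent.update_speed_from_smoke()  # DB query
            agent.update_fed(20 * DT / 60)   # DB query, dt in minutes
\end{verbatim}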
\section{Discussion}\label{discussion}
Vertical evacuation is troublesome and not implemented. There is RVO2
3D, but it is meant for aviation, where agents can pass above each other --
clearly not for our needs. Besides, we think things actually look better
in 2D. We like the idea that vertical evacuation can still be considered
2D, just rotated, and we plan to move in this direction.
A-evac does not model social or group behaviours. However, it is
difficult to evaluate how much the lack of such functionalities impacts
the resulting probability distributions.
In the workflow we run a CFAST simulation first; the a-evac simulation
then runs on top of the CFAST results. This sequential procedure has its
drawbacks, e.g., we do not know how long the CFAST simulation should last
to produce enough data for a-evac, so we run ``too much'' CFAST for
safety. Also, evacuees cannot trigger any events such as opening a
door. We are considering a closer a-evac--CFAST integration.
There seems to be lots of room for improvement in Aamks. We work with
practitioners and know the reality of fire engineering. We know the
limitations of our current implementations, and most of them can be
addressed -- there are models and approaches that we can implement, and
the major obstacle is the limit of our team's resources. Therefore we
invite everyone interested to join our project at
\url{http://github.com/aamks}.
\section{Conclusion}\label{con}
Aamks has been actively developed since 2016, and we are truly engaged in
making it better. The software, though not yet ready for end-users,
has already served as support for commercial projects, and fire
engineers and scientists regard Aamks as having potential. The
stochastics-based workflow of Aamks is not a new concept; there are
opinions in the community that this approach is how fire engineering
should be done. Since no widely-used implementation has been created so
far, this is an additional motivation that drives our project.
\begin{acknowledgements}
Aamks is hosted on GitHub at \url{http://github.com/aamks}.
This work was supported in part by ConsultRisk LTD and F\&K Consulting
Engineers LTD under research grant CR-2016 SiMo (Simulation Modules).
\end{acknowledgements}
\section{Introduction}
Gradient Boosted Decision Tree (GBDT)
is one of the most powerful
methods for solving prediction problems in both
classification and regression domains.
It is a dominant tool today
in application domains
where tabular data is abundant, for example, in e-commerce, financial, and retail industries.
GBDT has contributed to a large amount of top solutions in
benchmark competitions such as Kaggle.
This makes GBDT a fundamental component
in the modern data scientist’s toolkit.
\par
\begin{wrapfigure}{r}{0.47\textwidth}
\centering
\includegraphics[width=\linewidth]{pics/main_train_time.png}
\caption{\small Training time of XGBoost and CatBoost for different numbers of classes on a synthetic dataset for multiclass classification. The synthetic dataset contains $2000$k instances, each described by $100$ features. The maximal tree depth was limited to $6$. The experiment was conducted on GPU.
Further details are given in
\Cref{sec:synthetic_dataset}.
\label{pic:train_time_main}}
\vspace{-1em}
\end{wrapfigure}
The main focus of this paper is the scalability of GBDT to multioutput problems. Such problems include
multiclass classification (a classification task with more than two mutually exclusive classes), multilabel classification (a classification task with more than two classes
that are not mutually exclusive),
and multioutput regression (a regression task with a multivariate response variable). These problems
arise in various areas such as Finance
\citep*{oberman-waack-2016},
Multivariate Time Series Forecasting \citep*{zyz-2020},
Recommender Systems \citep*{jtl-2010}, and others.
\par
There are several extremely efficient, open-source,
and production-ready implementations of gradient
boosting such as
XGBoost \citep{xgboost-2016},
LightGBM \citep*{lightgbm-2017},
and CatBoost~\citep*{catboost-2018}.
Even for them, learning a GBDT model for moderately
large datasets can require much time.
Furthermore, this time also
grows with the output size of a model.
\Cref{pic:train_time_main} demonstrates how
rapidly the training time of XGBoost and
CatBoost grows with the output dimension.
Consequently, the number of possible applications of
GBDT in the multioutput regime
is very limited.
\par
GBDT is a boosting-based algorithm
that ensembles decision trees as \textquotedblleft base learners\textquotedblright. At each boosting step,
a newly added tree improves
the ensemble by minimizing
the error of an already built composition.
There are two possible strategies on
how to use GBDT to handle a multioutput problem.
\par
\vspace{-0.5em}
\begin{itemize}[leftmargin=*]
\item \textit{One-versus-all strategy.}
Here, at each boosting step,
a single decision tree is built for every output.
Consequently, every output is handled separately.
XGBoost and LightGBM use this strategy.
\item \textit{Single-tree strategy.}
Here, at each boosting step,
a single multivariate decision tree is built
for all outputs.
Consequently, all outputs are handled together.
CatBoost uses this strategy.
\end{itemize}
\vspace{-0.5em}
\par
The computational complexity of both strategies is proportional to the number of outputs. Specifically, the one-versus-all strategy requires fitting a separate decision tree for each single output at each boosting step. The single-tree strategy requires scanning all the output dimensions (a)~to estimate the information gain
during the search of the best tree structure
and (b)~to compute leaf values of
a decision tree with a given structure
(see details in \Cref{sec:preliminaries}).
A straightforward idea to reduce the training time of single-tree GBDT
is to exclude some of the outputs during the search of the tree structure which is the most time-consuming step of GBDT. However, this turns out to be rather challenging since it is unclear what outputs contribute the most to the information gain. In this paper, we address this problem and propose novel methods for fast scoring of multivariate decision trees which show a significant decrease in computational overhead without compromising the performance
of the final model.
\par
\paragraph{Related work.}
Many suggestions have been made to speed up the
training process of GBDT.
Some methods reduce
the number of data instances used
to train each base learner.
For example, Stochastic Gradient Boosting (SGB) \citep{friedman-2002}
chooses a random subset of data instances,
gradient-based one-side sampling (GOSS)
\citep{lightgbm-2017}
keeps the instances with large gradients
and randomly drops the instances with small gradients,
and
Minimal Variance Sampling (MVS)
\citep{mvs-2019}
randomly chooses the instances
to maximize the estimation accuracy of split scoring.
Similarly,
some methods reduce
the number of features.
For example, one can
choose a random subset of features
or use principal component analysis or projection pursuit
to exclude weak features; see
\citep{jl-1999, zhou-2012, afdp-2013}.
LightGBM~\citep{lightgbm-2017} uses exclusive feature
bundling (EFB) where
sparse features are greedily bundled together.
CatBoost~\citep{catboost-2018}
replaces categorical features with numerical ones
using a special algorithm based on target statistics.
Finally,
some methods reduce
the number of split candidates
during the split scoring.
The pre-sorted algorithm~\citep{mrj-1996}
enumerates all possible split
points on the pre-sorted feature values.
The histogram-based algorithm~\citep{ars-1998, rg-2003, lbw-2007}
buckets continuous feature values into discrete
bins and uses these bins to construct feature histograms.
\par
Regarding the multioutput regime,
the existing methods to accelerate the training process
of GBDT naturally fall into the following two categories:
problem transformation and algorithm adaptation.
Transformation methods (see, for example,
\citep{hklz2009,tl2012,kvj2012,cag2012,wtk2016})
reduce the number of targets before training a model.
They mainly differ in the choice of compression and decompression techniques
and significantly rely on the problem structure or data assumptions. These methods pay a price in terms of prediction accuracy due to the loss of information during the compression phase, and as a result, they do not consistently outperform the full baseline.
Adaptation methods directly extend some specific algorithms
to efficiently solve multioutput problems.
To the best of our knowledge, there are only
two algorithm adaptation works for GBDT.
Namely,
\citet{gbdt-sparse-2017} and \citet{gbdt-mo-2021}
consider models with sparse output and
discuss how to utilize this sparsity to enforce the
leaf values to be also sparse.
Their modifications of GBDT are called
GBDT-Sparse and GBDT-MO (sparse).
\par
We approach the problem
of fast GBDT training in the multioutput regime
from a different perspective.
Namely, instead of employing the model sparsity, we,
loosely speaking, approximate the scoring function
used to find the best tree structure using
the most essential outputs
while keeping other boosting steps without change.
The methods we suggest are
completely different from the ones mentioned above
and can be applied to models
with both dense and sparse outputs.
Moreover, our methods can be easily combined with
transformation methods (by compressing the outputs beforehand
and decompressing predictions afterward) or
the sparsity utilization as in GBDT-Sparse and GBDT-MO
(by computing the optimal
leaf values with sparsity constraint as in these algorithms).
\paragraph{Contributions.}
The contributions of this work can be summarized as follows.
\par
\vspace{-0.5em}
\begin{itemize}[leftmargin=*]
\item We propose and theoretically justify three novel methods
to speed up GBDT on multioutput tasks.
These methods are generic,
they can be used with any loss function
and do not rely on any specific data assumptions
(for example, sparsity or class hierarchy)
or the problem structure
(for example, multilabel or multiclass).
Moreover, they
do not drop down the model quality and can be easily
integrated into any GBDT realization that uses
the single-tree strategy.
\item We implemented the proposed methods
in SketchBoost. SketchBoost itself is a part of
our Python-based implementation of GBDT
called Py-Boost.
This implementation seems to be of independent interest
since it does not use low-level programming languages
and is easily customizable.
Although it is written in Python,
it is fast since it works on GPU.
\item We present an empirical study using
public datasets which demonstrates that SketchBoost
achieves comparable or even better performance
compared to the existing state-of-the-art
boosting toolkits but in remarkably less time.
\end{itemize}
\vspace{-0.5em}
\par
\paragraph{Paper Organization.}
First, we review the GBDT algorithm in \Cref{sec:preliminaries}.
Next, we propose methods leading to a noticeable
reduction in the training time of GBDT on multioutput tasks in
\Cref{sec:methods}.
We illustrate the performance of these methods on
real-world datasets in \Cref{sec:experiments}.
Proofs and experiment details are postponed
to \Cref{sec:proofs} and \Cref{sec:experiment_details}.
\section{Preliminaries}
\label{sec:preliminaries}
Let $\{(x_i,y_i)\}_{i=1}^{n}$ be a dataset with
$n$ samples,
where $x_i\in\mathbb{R}^{m}$ is an $m$ dimensional input and
$y_i\in\mathbb{R}^{d}$ is a $d$ dimensional output.
Let also $\mathcal{F}$ be a class of base learners,
that is, functions $f:\mathbb{R}^{m}\to\mathbb{R}^{d}$.
In Gradient Boosting, the idea of which goes back
to \citet{shapire-1990},
\citet{freund-1995}, \citet{freund-schapire-1997},
the model $F_T$ uses $T\in\mathbb{N}$
base learners $f\in\mathcal{F}$
and is trained in an additive and greedy manner.
Namely, at the $t$-th iteration, a newly
added base learner $f$ improves the quality of
an already built model $F_{t-1}$
by minimization of some specified loss function $l:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}$,
\[
\mathcal{L}_{t}(f) = \sum\nolimits_{i=1}^{n} l(y_i, F_{t-1}(x_i) + f(x_i)).
\]
This optimization problem is usually approached by
the Newton method using the second-order approximation
of the loss function
\begin{align}
f^*_t \in
\operatornamewithlimits{argmin}_{f\in\mathcal{F}} \Biggl\{
\sum_{i=1}^{n}
\Bigl(
g_i^{\top} f(x_i)
+ \frac12 \bigl(f(x_i)\bigr)^{\top} H_i f(x_i)
\Bigr)
+ \Omega(f)
\Biggr\},
\label{eq:objfunc}
\end{align}
where we omitted a term independent of $f$; here
$\Omega(f)$ is a regularization term,
usually added to build non-complex models, and
\begin{align}
g_i = \nabla_a l(y,a)
\Bigr|_{\substack{y=y_i\\ a=F_{t-1}(x_i)}},
\quad
H_i = \nabla^2_{aa} l(y,a) \Bigr|_{\substack{y=y_i\\ a=F_{t-1}(x_i)}}.
\label{eq:derivatives}
\end{align}
Due to the complexity of optimization over
a large set of base learners $\mathcal{F}$, the problem
\eqref{eq:objfunc} is solved typically in a greedy fashion
which leads us to an approximate minimizer $f_t$.
Finally, the model $F_t$ is updated by applying
a learning rate $\varepsilon>0$ typically treated as a hyperparameter:
$
F_{t} = F_{t-1} + \varepsilon f_t.
$
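\par
For concreteness, the whole training loop can be sketched in a few
lines of Python (an illustrative sketch of ours, not code from any
toolkit; the callables \texttt{grad\_hess} and \texttt{fit\_tree} are
hypothetical placeholders):
\begin{verbatim}
import numpy as np

def boost(X, Y, grad_hess, fit_tree, T=100, lr=0.1):
    n, d = Y.shape
    F = np.zeros((n, d))              # predictions of F_{t-1}
    trees = []
    for t in range(T):
        G, H = grad_hess(Y, F)        # per-sample g_i and diagonal H_i
        tree = fit_tree(X, G, H)      # structure search + leaf values
        F = F + lr * tree.predict(X)  # F_t = F_{t-1} + eps * f_t
        trees.append(tree)
    return trees
\end{verbatim}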
\par
GBDT uses decision trees as the base learners $\mathcal{F}$;
see the seminal paper of \citet{friedman-2001}.
A decision tree is a model built by a recursive
partition of the feature space into several
disjoint regions.
Each final leaf is assigned to a value,
which is a response of the tree in the given region.
Based on this construction mechanism,
a decision tree $f$ can be expressed as
\[
f(x) = \sum\nolimits_{j=1}^J v_j \cdot \I{x\in R_j},
\]
where $\I{\text{predicate}}$ denotes the indicator function,
$J$ is the number of leaves, $R_j$ is the
$j$-th leaf, and $v_j\in\mathbb{R}^{d}$ is the value of $j$-th leaf.
The problem of learning $f_t$ can be naturally
divided into two separate problems:
(1) finding the best tree structure
(dividing the feature space into $J$ areas $R_1,\ldots,R_J$),
and
(2) fitting a decision tree with a given structure
(computing leaf values $v_1,\ldots,v_J$).
\par
\paragraph{Finding the leaf values.}
Since decision trees take constant values at each leaf,
for a decision tree $f_t$ with leaves $R_1,\ldots,R_{J}$,
we can optimize the objective function from \eqref{eq:objfunc}
for each leaf $R_j$ separately,
\begin{align*}
v_j
&= \operatornamewithlimits{argmin}_{v\in\mathbb{R}^{d}} \Biggl\{ \sum_{x_i\in R_j} \Bigl(g_i^{\top} v
+ \frac12 v^{\top} H_i v\Bigr)
+ \frac{\lambda}{2} \|v\|^2 \Biggr\}
= - \Biggl(\sum_{x_i\in R_j} H_i + \lambda I\Biggr)^{-1}
\Biggl(\sum_{x_i\in R_j}g_i \Biggr),
\end{align*}
where we employ $l_2$ regularization on leaf
values with a parameter $\lambda>0$;
here $I$ denotes the identity matrix and
$\|\cdot\|$ denotes the Euclidean norm.
\par
It is worth mentioning that if the loss function
$l$ is separable with respect to different
outputs, all Hessians $H_1,\ldots,H_{n}$
are diagonal. If this is not the case,
it is common practice to purposely simplify them
to diagonal form in order to avoid time-consuming
matrix inversion.
It is done so in most of the single-tree GBDT algorithms
(for example, CatBoost, GBDT-Sparse, and GBDT-MO).
We will also follow this idea in our work.
For diagonal Hessians, the $j$-th coordinate of the
optimal value for a leaf $R$ can be rewritten as
\begin{align}
v_R^{j}
=
-\frac{\sum_{x_i \in R} g_{i}^{j}}{\sum_{x_i \in R} h_{i}^{j}+\lambda},
\quad
\text{where }\,
g_i =
\begin{pmatrix}
g_i^1\\
\vdots\\
g_i^{d}
\end{pmatrix}
\,\text{ and }\,
H_i =
\begin{pmatrix}
h_i^1 & \ldots & 0\\
\vdots & \ddots & \vdots \\
0 & \ldots & h_i^{d}
\end{pmatrix}.
\label{eq:weightstaylorsimp}
\end{align}
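\par
As an illustration, the optimal value of a leaf can be computed in one
line of NumPy (a sketch of ours; \texttt{idx} is assumed to be an
integer array indexing the samples $x_i$ falling in the leaf $R$):
\begin{verbatim}
import numpy as np

def leaf_value(G, H, idx, lam):
    # G, H: (n, d) arrays of gradients and Hessian diagonals.
    # Returns the vector of coordinates v_R^j, j = 1, ..., d.
    return -G[idx].sum(axis=0) / (H[idx].sum(axis=0) + lam)
\end{verbatim}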
\par
\paragraph{Finding the tree structure.}
Substituting the leaf values
from~\eqref{eq:weightstaylorsimp}
back into the objective function,
and omitting insignificant terms,
we obtain
\begin{align}
\operatorname{Loss}(f_t) = -\frac12 \sum_{j=1}^J S(R_j),
\quad\text{where}\quad
S(R)
=
\sum_{j=1}^d\frac{\bigl( \sum_{x_i \in R} g_{i}^{j} \bigr)^2}{\sum_{x_i \in R} h_{i}^{j}+\lambda}.
\label{eq:scorefunc}
\end{align}
The function $S(\cdot)$ will be referred to as the scoring
function.
To find the best tree structure,
we use a greedy algorithm that starts from a
single leaf and iteratively adds branches
to the tree. At a general step,
we want to split one of existing leaves.
To do this, we iterate through all leaves,
features, and thresholds for each feature
(they are usually determined by the histogram-based algorithm).
For all leaves $R$ and all possible
splits for $R$, say $R_{\text{left}}$ and $R_{\text{right}}$,
we compute the impurity score given by
$
S(R_{\text{left}}) + S(R_{\text{right}}).
$
The best split is the one
that achieves the largest impurity score.
This is equivalent to maximization of the information gain
which is usually defined as the difference between values of the
loss function before
and after the split, that is,
\[
\text{Gain} = -0.5\Bigl(S(R) - \bigl(S(R_{\text{left}}) + S(R_{\text{right}})\bigr) \Bigr).
\]
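\par
A minimal NumPy sketch of the score and gain computation is given
below (illustrative only; \texttt{idx} is an integer array of the
samples in $R$ and \texttt{left\_mask} a boolean array defining the
candidate split):
\begin{verbatim}
import numpy as np

def score(G, H, idx, lam):
    g = G[idx].sum(axis=0)            # per-output gradient sums
    h = H[idx].sum(axis=0)            # per-output Hessian sums
    return float(np.sum(g**2 / (h + lam)))

def gain(G, H, idx, left_mask, lam):
    left, right = idx[left_mask], idx[~left_mask]
    return -0.5 * (score(G, H, idx, lam)
                   - (score(G, H, left, lam) + score(G, H, right, lam)))
\end{verbatim}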
\par
Similar to the previous step,
some simplifications can be made to
speed up computation of the scoring
function, which is evaluated a tremendous number of times.
For instance, GBDT-Sparse
does not use the second-order information at all
(Hessians are simplified to identity matrices).
In the multioutput regime of CatBoost,
the second-order derivatives are left out during
the split search and are used only to compute leaf values.
GBDT-MO uses the second-order
derivatives in both steps
but it increases the computational complexity twice
(histograms for both gradients and Hessians
need to be accumulated).
\section{Sketched Split Scoring}
\label{sec:methods}
In this section, we propose three novel methods
to speed up the split search
for multivariate decision trees.
These methods can achieve a good balance
between reducing the computational complexity in
the output dimension and keeping the accuracy for
learned decision trees.
They are generic and can
be used together with the methods mentioned
in the Related work section that
aim at reducing the number of sample instances,
features, or split candidates.
Moreover, the proposed methods
are easy to implement
upon modern boosting frameworks
such as XGBoost, LightGBM, and CatBoost.
\par
As it was mentioned before, there are two
\textquotedblleft best practices\textquotedblright\
to speed up the training of a GBDT model
on multioutput tasks:
(a)~to totally ignore the
second-order derivatives during the split search
and (b)~to use only the main diagonal of the
second-order derivatives
to compute the leaf values.
It is done so, for example, in CatBoost,
one of the few boosting toolkits that
use the single-tree strategy and
achieve state-of-the-art results on
multioutput problems.
We will also develop our work on this basis.
\par
The proposed methods are applied at each boosting step before the search for the best tree structure and after first- and second-order derivatives
(see \eqref{eq:derivatives}) are computed.
The key idea of the proposed methods
is to reduce the number of gradient values used
in the split search
so that the scoring function $S$ from~\eqref{eq:scorefunc} or, equivalently,
the information gain will not change much.
Specifically, the scoring function
without the second-order
information can be rewritten as
\[
S_{G}(R) = \frac{\bigl\|G^{\top}v_{R}\bigr\|^2}{|R|+ \lambda},
\quad\text{where}\ \,
G =
\begin{pmatrix}
g_1^1 & g_1^2 & \ldots & g_1^{d}\\
\vdots & \vdots & \ddots & \vdots\\
g_{n}^1 & g_{n}^2 & \ldots & g_{n}^{d}\\
\end{pmatrix}
\ \,\text{and}\ \,
v_{R} =
\begin{pmatrix}
\I{x_1 \in R} \\
\vdots \\
\I{x_{n} \in R} \\
\end{pmatrix}.
\]
Here $G\in\mathbb{R}^{n \times d}$ is the gradient matrix
and $v_{R}$ is the indicator vector of the leaf $R$
(its $i$-th coordinate is equal to $1$ if $x_i\in R$ and $0$ otherwise).
Note that we added the subscript to $S$ to indicate its
dependence on the gradient matrix~$G$.
To reduce the complexity of computing $S_{G}$ in $d$,
we approximate it with $S_{G_{\rdim}}$
for some other matrix $G_{\rdim}\in\mathbb{R}^{n \times k}$
with $k \ll d$.
We will refer to $G_{\rdim}$ as the sketch matrix
and to $k$ as the reduced dimension or sketching dimension.
We emphasize that $G_{\rdim}$
is assumed to be used only in
building histograms and finding the tree structure.
After this, the optimal leaf values
of a tree
are assumed to be computed fairly using the
full gradient matrix $G$.
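\par
The point becomes transparent in code: the score depends on the
gradient matrix only through its column sums over the leaf, so a
sketch can be substituted for $G$ without changing anything else (an
illustrative sketch of ours):
\begin{verbatim}
import numpy as np

def score_G(Gmat, idx, lam):
    # Gmat: the full (n, d) gradient matrix or an (n, k) sketch of it.
    g = Gmat[idx].sum(axis=0)         # equals Gmat^T v_R for the leaf R
    return float(np.dot(g, g)) / (len(idx) + lam)
\end{verbatim}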
\par
Further we discuss three novel methods
to construct reasonably good sketches $G_{\rdim}$ ---
Top Outputs, Random Sampling, and Random Projections.
These methods are motivated by the minimization
of the approximation error given by
\begin{align*}
\operatorname{Error}(S_{G},S_{G_{\rdim}})
= \sup\nolimits_{R}\bigl|S_{G}(R) - S_{G_{\rdim}}(R)\bigr|.
\label{eq:errorfunc}
\end{align*}
Here the supremum is taken over all
possible leaves $R$.
The reason for this choice is that
we want the proposed approximation to be
universal and uniformly accurate for all
splits we will possibly iterate over.
In \Cref{sec:proofs},
we show that the proposed methods lead
to a nearly-optimal upper bounds on the proposed error.
Since the corresponding optimization problem
is an instance of Integer Programming problem,
methods leading to the optimal
upper bounds
can be obtained only by brute force,
which is not an option in our case.
For further details see \Cref{sec:proofs}.
\subsection{Top Outputs}
\label{sec:topoutputs}
The key idea of Top Outputs is rather straightforward:
to choose the columns of $G$ with the largest Euclidean norm.
Namely, by a slight abuse of notation, let us denote the columns of $G$ by $g_1,\ldots,g_{d}$.
Let also $i_1,\ldots,i_d$ be the
indexes which sort the
columns of $G$ in descending order by their norm,
that is,
$
\|g_{i_1}\|
\ge \|g_{i_2}\|
\ge \ldots
\ge \|g_{i_d}\|
$.
Now the full gradient matrix and its sketch
can be written as
\begin{align*}
G =
\begin{pmatrix}
\vert & \vert & & \vert\\
g_1 & g_2 & \ldots & g_{d}\\
\vert & \vert & & \vert\\
\end{pmatrix}
\quad\text{and}\quad
G_{\rdim} =
\begin{pmatrix}
\vert & \vert & & \vert\\
g_{i_1}& g_{i_2} & \ldots & g_{i_k}\\
\vert & \vert & & \vert\\
\end{pmatrix}.
\end{align*}
The parameter $k$ here can be chosen adaptively
to the norms of $g_1,\ldots,g_{d}$.
We have not considered
this generalization here since, in our view,
it will greatly complicate the algorithm.
Moreover, the adaptive choice of $k$
may result in large values for this parameter
and hence less gain in training time.
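\par
A minimal NumPy sketch of Top Outputs (ours, purely illustrative):
\begin{verbatim}
import numpy as np

def top_outputs(G, k):
    norms = np.linalg.norm(G, axis=0)  # column norms ||g_1||,...,||g_d||
    top = np.argsort(norms)[::-1][:k]  # indexes i_1, ..., i_k
    return G[:, top]
\end{verbatim}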
\par
It is worth pointing out that Top Outputs
is akin to the Gradient-based One-Side Sampling (GOSS),
which is successfully used in LightGBM; see \citep{lightgbm-2017}.
In GOSS, data instances with small gradients are excluded
to speed up the split search. Similarly,
Top Outputs excludes output components with small gradient values.
\par
This method has one major drawback:
it chooses the top $k$ output dimensions,
which may not vary much from step to step.
For instance, if several columns have large norms
and others have medium norms, Top Outputs
may completely ignore the latter columns during
the split search.
Below
we consider another method that deals with this
problem by introducing the randomness in the choice
of output dimensions.
\subsection{Random Sampling}
\label{sec:randsamp}
The probabilistic approach
for algebraic computations, sometimes called the
\textquotedblleft Monte-Carlo method\textquotedblright,
is ubiquitous; we refer the reader to the monographs of
\citet{robert-casella-2005}, \citet{mahoney-11}, and \citet{wodruff-2014}.
Here we consider its application
to the fast split search.
\par
The key idea of Random Sampling is to randomly sample
the columns of $G$ with probabilities proportional
to their norms.
Namely, we define the sampling probabilities by
\begin{align*}
p_i = \|g_i\|^2 \Big/\sum\nolimits_{j=1}^{d}\|g_j\|^2,
\quad
i=1,\ldots,d.
\end{align*}
These probabilities are known to be optimal for random sampling since they minimize the variance of the resulting estimate; see, for example, \citep{robert-casella-2005}.
Further, let ${i}_1,\ldots,{i}_{k}$ be
independent and identically distributed random variables
taking values $j$ with probabilities $p_j$,
$j=1,\ldots,d$. These random variables represent indexes
of the chosen columns of $G$.
Finally, we consider the following sketch
\[
G_{\rdim} =
\begin{pmatrix}
\vert & \vert & & \vert\\
\overline{g}_{i_1} & \overline{g}_{i_2} & \ldots & \overline{g}_{i_k}\\
\vert & \vert & & \vert
\end{pmatrix},
\quad\text{where}\quad
\overline{g}_{i} = \frac{1}{\sqrt{k p_i}} \, g_{i}.
\]
The additional column normalization
by $1/\sqrt{k p_i}$ is needed
for unbiasedness of the resulting estimate.
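\par
A minimal NumPy sketch of Random Sampling (ours, purely illustrative):
\begin{verbatim}
import numpy as np

def random_sampling(G, k, rng=None):
    rng = rng or np.random.default_rng()
    p = np.linalg.norm(G, axis=0) ** 2
    p = p / p.sum()                            # probabilities p_i
    idx = rng.choice(G.shape[1], size=k, p=p)  # i.i.d. column indexes
    return G[:, idx] / np.sqrt(k * p[idx])     # rescale for unbiasedness
\end{verbatim}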
\par
There is a close affinity between
our Random Sampling (an instance of importance sampling) and
Minimal Variance Sampling (MVS) of \citet{mvs-2019}.
MVS decreases the number of sample instances in the split
search by maximizing
the estimation accuracy of split scoring.
Our idea is the same, with the only difference being that
it is applied to output dimensions rather than sample instances.
\par
Random Sampling works well especially in the extreme cases
as those mentioned above.
For example, if several outputs have large
weights and others have medium weights,
Random Sampling will not ignore the latter
outputs due to randomness. Or, if the number
of outputs with large weights is larger than $k$,
Random Sampling will choose different
output dimensions at different steps.
As a result, the corresponding base learners will
also be quite different, which usually leads
to a better generalization ability of the ensemble; see \cite{breiman1996bagging}.
\subsection{Random Projections}
\label{sec:randproj}
In the previous section, the sketch $G_{\rdim}$
was constructed by sampling columns
from $G$ according to some probability distribution.
This process can be viewed as
multiplication of $G$ by a matrix $\Pi$,
$
G_{\rdim} = G\Pi,
$
where $\Pi\in\mathbb{R}^{d\times k}$
has independent columns,
and each column is all zero except for a $1$
in a random location.
In Random Projections,
we consider sampling matrices $\Pi$,
every entry of which is an independently sampled
random variable.
This results in using random
linear combinations of columns of
$G$ as columns of $G_{\rdim}$.
\par
This approach is based on the
Johnson-Lindenstrauss (JL) lemma; see
the seminal paper of \citet{johnson-lindenstrauss-1984}.
They showed that projections $\Pi$ from $d$ dimensions onto a
randomly chosen $k$-dimensional subspace do not
distort the pairwise distances too much.
\citet{indyk-motwani-1998}
proved that to obtain the same guarantee,
one can independently sample every entry of $\Pi$
using the normal distribution.
In fact, this is true for many other distributions;
see, for example, \citet{achlioptas-2003}.
Since there was no significant difference
between distributions in our numerical experiments,
we decided to focus on the normal distribution.
\par
In Random Projections, we consider the following sketch
\[
G_{\rdim} = G\Pi,
\]
where $\Pi\in\mathbb{R}^{d\times k}$ is a random matrix filled with independently sampled $\mathcal{N}(0,k^{-1})$
entries.
In \Cref{sec:proofs}, we discuss why this choice leads to a nearly-optimal solution to the problem we consider and why the property of preserving the pairwise distances matters here.
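\par
A minimal NumPy sketch of Random Projections (ours, purely
illustrative):
\begin{verbatim}
import numpy as np

def random_projection(G, k, rng=None):
    rng = rng or np.random.default_rng()
    d = G.shape[1]
    Pi = rng.normal(0.0, np.sqrt(1.0 / k), size=(d, k))  # N(0, 1/k)
    return G @ Pi
\end{verbatim}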
\par
Random Projections has the same merits as Random Sampling
since it is also a random approach.
Besides that, the sketch matrix $G_{\rdim}$ here uses
gradient information from all outputs
since each column of $G_{\rdim}$ is a linear combination
of columns of $G$.
\subsection{Complexity Analysis}
Most of the GBDT frameworks use a histogram-based algorithm to speed up split finding; see \citep*{ars-1998}, \citep*{jin-agrawal-2003}, and \citep*{lwb-2008}.
Instead of finding the split points on
all possible
feature values,
the histogram-based algorithm buckets feature values
into discrete bins and uses these bins
to construct feature histograms during training.
Let us say that the number of possible splits
per feature is limited to $h \ll n$
(usually $h\leq256$ to store the histogram bin index using a single byte).
It is shown in \citep{lightgbm-2017} that
in the case of a single output, splitting a leaf $R$
with $n_R$ samples requires $O(m n_R)$ operations
for histogram building and $O(hm)$ operations for split finding.
As a result, if the actual tree construction
is performed using a depth-first-search
algorithm,
the complexity of building a complete
tree of depth $D$ is $O(D nm +2^Dhm)$.
In the multioutput scenario,
this complexity increases by $d$ times:
splitting a leaf $R$ with $n_R$ samples costs
$O(m n_R d+hmd)$ and
depth-wise tree construction costs
$O(Dm n d+2^Dhmd)$.
The methods we propose reduce the
impact of $d$ to $k$ with $k \ll d$.
They require a preprocessing step which
can be done, depending on the method,
in $O(n d k)$ or $O(n d)$ operations.
As a result, the complexity of building
a complete tree of depth $D$
using the depth-first search
can be reduced from
$O(Dm n d +2^Dhmd)$ to $O(n d + Dm n k +2^Dhm k)$.
Taking into account that $n$, $m$, and $d$
can be extremely large,
these methods may lead to a significant improvement
in the training time.
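\par
The following back-of-the-envelope helper (ours and purely
illustrative; all constants are dropped) makes the comparison of the
two operation counts concrete:
\begin{verbatim}
def tree_cost(n, m, d, h, D, k=None):
    if k is None:                          # full multioutput search
        return D*m*n*d + 2**D * h*m*d
    return n*d + D*m*n*k + 2**D * h*m*k    # sketched search

# At a Dionis-like scale (n=416000, m=60, d=355, h=256, D=6), the ratio
# tree_cost(416000,60,355,256,6) / tree_cost(416000,60,355,256,6, k=5)
# gives a rough upper bound on the attainable split-search speed-up.
\end{verbatim}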
\section{Numerical Experiments}
\label{sec:experiments}
In this section, we numerically compare
(a) the proposed methods from \Cref{sec:methods} to speed up
GBDT in the multioutput regime
and (b)
existing state-of-the-art boosting toolkits supporting
multioutput tasks.
\par
\vspace{-0.5em}
\paragraph{Data.}
The experiments are conducted on $9$ real-world
publicly available datasets from Kaggle, OpenML, and
\href{http://mulan.sourceforge.net/datasets.html}{Mulan}\footnote{\url{http://mulan.sourceforge.net/datasets.html}}
for multiclass ($4$ datasets) and multilabel ($3$ datasets) classification and multitask regression ($2$ datasets).
The associated details are given in
\Cref{tb:datasets} in \Cref{sec:experiment_details}.
\par
\vspace{-0.5em}
\paragraph{Py-Boost.}
We implemented a simple and fast GBDT
toolkit called {Py-Boost}.
It is written in Python and hence is easily
customizable.
Py-Boost works only on GPU
and uses Python GPU libraries such as CuPy and Numba.
It follows the classic scheme described in
\citep{xgboost-2016}; further details are provided
in \Cref{sec:about_pyboost}.
Py-Boost is available on
\href{https://github.com/sb-ai-lab/Py-Boost}{GitHub}\footnote{\url{https://github.com/sb-ai-lab/Py-Boost}}.
\par
\vspace{-0.5em}
\paragraph{SketchBoost.}
SketchBoost is a part of Py-Boost library which implements the following three sketching strategies for fast split search:
\textbf{Top Outputs} (\Cref{sec:topoutputs}),
\textbf{Random Sampling} (\Cref{sec:randsamp}),
and \textbf{Random Projections} (\Cref{sec:randproj}).
For convenience, Py-Boost without any sketching strategy
is referred to as \textbf{SketchBoost Full}.
All the following experimental results
and evaluation code are also available on
\href{https://github.com/sb-ai-lab/sketchboost-paper}{GitHub}\footnote{\url{https://github.com/sb-ai-lab/SketchBoost-paper}}.
\par
\vspace{-0.5em}
\paragraph{Baselines.} Primarily we compare SketchBoost with
\textbf{XGBoost} (v1.6.0)
and
\textbf{CatBoost} (v1.0.5).
There are two reasons why we have chosen
these GBDT frameworks.
First,
they are commonly used among
practitioners and represent two different approaches
to multioutput tasks (one-vs-all and single-tree).
Second, they can be efficiently trained
on GPU,
which allows us to compare their training time
with GPU-based SketchBoost (with an exception for CatBoost which
supports
multilabel classification and multioutput regression
tasks only on CPU).
The reason why we have not considered LightGBM as a baseline is that
it uses the same multioutput strategy as XGBoost (one-vs-all) and
its latest version (v3.3.2) does not
support multilabel classification
and multioutput regression tasks
without external wrappers.
Further, we also compare
SketchBoost with
\textbf{TabNet} (v3.1.1), a popular deep learning model
for tabular data; see \citep*{arik2021tabnet}.
Our aim here is not to make an exhaustive comparison with
existing deep learning approaches (that deserves its own investigation), but to make a comparison with an
approach different in nature
that, moreover, often has satisfactory complexity on
large multioutput datasets.
\par
\vspace{-0.5em}
\paragraph{Experiment Design.}
If there is no official train/test split,
we randomly split the data into training and
test sets with ratio $80\%$-$20\%$.
Then each algorithm is trained
with 5-fold cross-validation
(the train folds are used to fit a model and
the validation fold is used for early stopping).
We evaluate all the obtained models
on the test set
and get $5$ scores for each model.
The overall performance of algorithms is
computed as an average score.
As a performance measure,
we use the cross-entropy for classification
and RMSE for regression, but,
for the sake of completeness,
we also report the accuracy score for classification and R-squared score for regression
in \Cref{sec:experiment_results}.
For XGBoost, CatBoost, and TabNet, we do
the hyperparameter optimization using the Optuna
framework \citep*{akiba2019optuna}.
For SketchBoost, we use the same
hyperparameters as for CatBoost
(to speed up the experiment; we do not expect that
hyperparameters will vary much since we
use the same single-tree approach).
The sketch size $k$ is iterated
through the grid $\{1, 2, 5, 10, 20\}$
(or through a subset of this grid with values less than the output dimension).
Further information on experiment design is given in
\Cref{sec:experiment_design,sec:experiment_hype,sec:experiment_design_tabnet}.
\par
\begin{table}[ht!]
\setlength\tabcolsep{2pt}
\setlength\extrarowheight{2pt}
\captionsetup{justification=centering}
\centering
\vspace{-1em}
\caption{\small
Test errors (cross-entropy for classification and RMSE for regression)
$\pm$ their standard deviation.
\label{tb:test_score}}
\vspace{0.2em}
\scalebox{0.63}{
\begin{tabular}{@{\extracolsep{4pt}}lccccccc@{}}
\toprule
& \multicolumn{4}{c}{\textbf{SketchBoost}} & \multicolumn{3}{c}{\textbf{Baseline}}
\vspace{0.4em}\\
\cline{2-5} \cline{6-8} \vspace{-0.2em}
\textbf{Dataset} & \textbf{Top Outputs} & \textbf{Random Sampling} & \textbf{Random Projection} & \textbf{SketchBoost Full} & \textbf{CatBoost} & \textbf{XGBoost} & \textbf{TabNet} \\
&\small{(for the best $k$)} & \small{(for the best $k$)} & \small{(for the best $k$)} & \small{(multioutput)} & \small{(multioutput)} & \small{(one-vs-all)} & \small{(multioutput)}\\
\midrule
\textbf{Multiclass classification} & & & & & \textbf{} &\textbf{} & \textbf{} \\
Otto (9 classes)& 0.4715 & 0.4636 & \textbf{0.4566} & 0.4697 & 0.4658 & 0.4599 & 0.5363\\
& \qquad\tcgr{$\pm$0.0035} & \qquad\tcgr{$\pm$0.0026} & \qquad\tcgr{$\pm$0.0023} & \qquad\tcgr{$\pm$0.0030} & \qquad\tcgr{$\pm$0.0033} & \qquad\tcgr{$\pm$0.0028} & \qquad\tcgr{$\pm$0.0063}\\
SF-Crime (39 classes) & 2.2070 & 2.2037 & 2.2038 & 2.2067 & \textbf{2.2036} & 2.2208 & 2.4819\\
& \qquad\tcgr{$\pm$0.0005} & \qquad\tcgr{$\pm$0.0004} & \qquad\tcgr{$\pm$0.0004} & \qquad\tcgr{$\pm$0.0003} & \qquad\tcgr{$\pm$0.0005} & \qquad\tcgr{$\pm$0.0008} & \qquad\tcgr{$\pm$0.0199}\\
Helena (100 classes) & 2.5923 & 2.5693 & \textbf{2.5673} & 2.5865 & 2.5698 & 2.5889\ & 2.7197\\
& \qquad\tcgr{$\pm$0.0024} & \qquad\tcgr{$\pm$0.0022} & \qquad\tcgr{$\pm$0.0026} & \qquad\tcgr{$\pm$0.0025} & \qquad\tcgr{$\pm$0.0025} & \qquad\tcgr{$\pm$0.0032} & \qquad\tcgr{$\pm$0.0235}\\
Dionis (355 classes) & 0.3146 & 0.3040 & \textbf{0.2848} & 0.3114 & 0.3085 & 0.3502 & 0.4753\\
& \qquad\tcgr{$\pm$0.0011} & \qquad\tcgr{$\pm$0.0014} & \qquad\tcgr{$\pm$0.0012} & \qquad\tcgr{$\pm$0.0009} & \qquad\tcgr{$\pm$0.0010} & \qquad\tcgr{$\pm$0.0020} & \qquad\tcgr{$\pm$0.0126}\\
\midrule
\textbf{Multilabel classification} & & & & & \textbf{} & & \\
Mediamill (101 labels) & 0.0745 & 0.0745 & \textbf{0.0743} & 0.0747 & 0.0754 & 0.0758 & 0.0859\\
& \qquad\tcgr{$\pm$1.3e-04} & \qquad\tcgr{$\pm$1.3e-04} & \qquad\tcgr{$\pm$1.1e-04} & \qquad\tcgr{$\pm$1.3e-04} & \qquad\tcgr{$\pm$1.1e-04} & \qquad\tcgr{$\pm$1.1e-04} & \qquad\tcgr{$\pm$3.3e-03}\\
MoA (206 labels) & 0.0163 & 0.0160 & \textbf{0.0160} & 0.0160 & 0.0161 & 0.0166 & 0.0193\\
& \qquad\tcgr{$\pm$2.2e-05} & \qquad\tcgr{$\pm$1.0e-05} & \qquad\tcgr{$\pm$6.0e-06} & \qquad\tcgr{$\pm$9.0e-06} & \qquad\tcgr{$\pm$2.6e-05} & \qquad\tcgr{$\pm$2.1e-05} & \qquad\tcgr{$\pm$3.0e-04} \\
Delicious (983 labels) & 0.0622 & 0.0619 & 0.0620 & 0.0619 & \textbf{0.0614} & 0.0620 & 0.0664\\
& \qquad\tcgr{$\pm$6.2e-05} & \qquad\tcgr{$\pm$5.9e-05} & \qquad\tcgr{$\pm$6.2e-05} & \qquad\tcgr{$\pm$5.5e-05} & \qquad\tcgr{$\pm$5.2e-05} & \qquad\tcgr{$\pm$3.3e-05} & \qquad\tcgr{$\pm$8.0e-04}\\
\midrule
\textbf{Multitask regression} & & & & & \textbf{} & \\
RF1 (8 tasks) & 1.1860 & 0.9944 & 0.9056 & 1.1687 & \textbf{0.8975} & 0.9250 & 3.7948\\
& \qquad\tcgr{$\pm$0.1366} & \qquad\tcgr{$\pm$0.1015} & \qquad\tcgr{$\pm$0.0582} & \qquad\tcgr{$\pm$0.0835} & \qquad\tcgr{$\pm$0.0384} & \qquad\tcgr{$\pm$0.0307} & \qquad\tcgr{$\pm$1.5935}\\
SCM20D (16 tasks) & 88.7442 & 86.2964 & \textbf{85.8061} & 91.0142 & 90.9814 & 89.1045 & 87.3655\\
& \qquad\tcgr{$\pm$0.6346} & \qquad\tcgr{$\pm$0.4398} & \qquad\tcgr{$\pm$0.5534} & \qquad\tcgr{$\pm$0.3397} & \qquad\tcgr{$\pm$0.3652} & \qquad\tcgr{$\pm$0.4950} & \qquad\tcgr{$\pm$1.3316}\\
\bottomrule
\end{tabular}}
\vspace{-0.5em}
\end{table}
\begin{table}[ht!]
\setlength\tabcolsep{3pt}
\setlength\extrarowheight{2pt}
\captionsetup{justification=centering}
\centering
\vspace{-1em}
\caption{\small
Training time per fold in seconds.
\\(CatBoost does not support multilabel classification
and multioutput regression tasks in the GPU mode.)\label{tb:train_time}}
\vspace{0.2em}
\scalebox{0.63}{
\begin{tabular}{@{\extracolsep{4pt}}lccccccc@{}}
\toprule
& \multicolumn{4}{c}{\textbf{SketchBoost (GPU)}} & \multicolumn{3}{c}{\textbf{Baseline (CPU/GPU)}}
\vspace{0.4em}\\
\cline{2-5} \cline{6-8} \vspace{-0.2em}
\textbf{Dataset} & \textbf{Top Outputs} & \textbf{Random Sampling} & \textbf{Random Projection} & \textbf{SketchBoost Full} & \textbf{CatBoost} & \textbf{XGBoost} & \textbf{TabNet} \\
&\small{(for the best $k$)} & \small{(for the best $k$)} & \small{(for the best $k$)} & \small{(multioutput)} & \small{(multioutput)} & \small{(one-vs-all)} & \small{(multioutput)}\\
\midrule
\textbf{Multiclass classification } & & & & & \textbf{GPU} &\textbf{GPU} &\textbf{GPU} \\
Otto (9 classes)& 113 & 102 & 89 & 131 & \textbf{73} & 1244 & 903 \\
SF-Crime (39 classes) & 705 & 676 & \textbf{612} & 1146 & 659 & 4016 & 2683\\
Helena (100 classes) & 154 & 180 & \textbf{113} & 355 & 436 & 1036 & 1196\\
Dionis (355 classes) & 1889 & 2038 & \textbf{419} & 23919 & 18600 & 18635 & 1853\\
\midrule
\textbf{Multilabel classification} & & & & &\textbf{CPU} & \textbf{GPU} &\textbf{GPU}\\
Mediamill (101 labels) & \textbf{251} & 263 & 294 & 1777 & 10164 & 2074 & 1231\\
MoA (206 labels) & 103 & 189 & \textbf{87} & 696 & 9398 & 376 & 672\\
Delicious (983 labels) & \textbf{575} & 664 & 1259 & 19553 & 20120 & 15795 & 2902\\
\midrule
\textbf{Multitask regression} & & & & &\textbf{CPU} & \textbf{GPU} &\textbf{GPU}\\
RF1 (8 tasks) & 369 & 396 & 340 & 413 & 804 & {315} & \textbf{207}\\
SCM20D (16 tasks) & 499 & 528 & {479} & 597 & 798 & 1432 & \textbf{296} \\
\bottomrule
\end{tabular}}
\vspace{-1em}
\end{table}
\par
\vspace{-0.5em}
\paragraph{Results.}
The final test errors are summarized in \Cref{tb:test_score}.
Experiments show that, in general, SketchBoost with a sketching strategy obtains results comparable to or even better than the competing boosting frameworks.
Promisingly, there is always a sketching strategy that outperforms SketchBoost Full.
Random Projection achieves the best scores, but Random Sampling also performs quite well. The deterministic Top Outputs strategy performs worse than the randomized strategies everywhere. In addition, it is noticeable that the one-vs-all strategy implemented in XGBoost leads to a worse generalization ability than the single-tree strategy on most datasets.
\par
The dependence of test scores on the sketch size $k$ for
four datasets is shown in \Cref{pic:test_score};
for other datasets see \Cref{pic:test_score_appendix} in \Cref{sec:experiment_details}.
It confirms the intuition that, in general, the larger the value of $k$ we take, the better the performance we obtain.
Moreover, our numerical study shows that there is a wide range of values of $k$ for which sketching strategies work well;
see the detailed results for all $k$
in \Cref{tb:detailed_test_score_bce} and \Cref{tb:detailed_test_score_acc} in \Cref{sec:experiment_details}.
For most datasets, $k\leq10$ is enough to obtain a result similar to or even better than SketchBoost Full or other baselines.
Loosely speaking, an intuitive explanation of why reducing the output dimension may increase the ensemble quality is that building a tree using all outputs often leads to bad split choices for some particular outputs. Sketching strategies use small groups of outputs, which leads to better tree structures for these outputs and a more diverse ensemble overall. In this connection, the optimal value of $k$ strongly depends on the relations between the outputs in a given dataset.
With limited resources in practice, we would recommend using a predefined value $k=5$. It is common in GBDTs: modern toolkits have more than $100$ hyperparameters, and many of them are not usually tuned (default values typically work well).
But at the same time, one can always add $k$ to the set of hyperparameters that are tuned. In our view, an additional hyperparameter will not play a significant role here, taking into account that hyperparameter optimization is usually done using random search or Bayesian optimization.
\par
Further, the learning curves for validation errors on some datasets are given in \Cref{pic:learning_curve}. In general, it shows that small values of $k$ result in a slower error decay at early iterations. But if $k$ is properly defined, the validation error of SketchBoost with a sketching strategy is comparable to the error of SketchBoost Full, and hence both algorithms need approximately the same number of steps to convergence. This means that the proposed sketching strategies do not result in more complex models and do not significantly affect the model size or inference time.
Detailed information on the number of steps to convergence for all strategies and baselines is given in \Cref{tb:best_iter} in \Cref{sec:experiment_details}.
\par
SketchBoost does a good job in reducing the training time.
In \Cref{tb:train_time} we compare training times for SketchBoost, XGBoost, CatBoost, and TabNet.
One can see that the training time
significantly increases with the dataset size and,
in particular, the output dimension.
If a dataset is small,
as, for example, RF1 ($8$~targets, $9$k~rows, $64$ features) or Otto ($9$~classes, $61$k~rows, $93$ features), our Python implementation is slightly slower than the efficient CatBoost or XGBoost GPU implementations written in low-level programming languages. But for Dionis ($355$~classes, $416$k~rows, $60$ features), our implementation together with a sketching strategy becomes $40$ times faster than XGBoost or CatBoost without sacrificing performance.
Overall, we can conclude that
the proposed sketching algorithms can
significantly speed up SketchBoost Full
and can lead to considerably faster training than other GBDT baselines.
We recall that CatBoost can be trained on GPU only for multiclass classification tasks, and hence the time comparison with other algorithms on other tasks is not fair for CatBoost.
\par
Finally,
we see that
all the GBDT implementations outperform TabNet
in terms of test score on almost all tasks;
see \Cref{tb:test_score} again. These results confirm the conclusion from the recent surveys \citep*{borisov2021deep} and \citep*{qin2021neural} that algorithms based on gradient-boosted tree ensembles still mostly outperform deep learning models on tabular supervised learning tasks.
Nevertheless, \Cref{tb:train_time} shows that TabNet converges faster than GBDTs without sketching strategies. Moreover, TabNet is even faster than SketchBoost with sketching strategies on two regression tasks. The reason for this is that a high target dimension affects the complexity of a neural network only in the last layer and, in general, has little effect on the training time. Further, it is also worth mentioning that neural networks tend to have many more hyperparameters than GBDTs and, as a result, need more time to be properly fine-tuned.
Further details on this experiment are given in \Cref{sec:experiment_design_tabnet}.
\par
\begin{figure}[t!]
\captionsetup{justification=centering}
\centering
\includegraphics[width=0.245\linewidth]{pics/test_score_mini/scm20d.png}
\includegraphics[width=0.245\linewidth]{pics/test_score_mini/sf-crime.png}
\includegraphics[width=0.245\linewidth]{pics/test_score_mini/moa.png}
\includegraphics[width=0.245\linewidth]{pics/test_score_mini/dionis.png}
\\
\includegraphics[width=0.95\linewidth]{pics/legend.png}
\caption{\small Dependence of test errors (cross-entropy for classification and RMSE for regression) \\ on sketch dimension $k$. \label{pic:test_score}}
\vspace{-1em}
\end{figure}
\par
\begin{figure}[t!]
\captionsetup{justification=centering}
\centering
\includegraphics[width=0.27\linewidth]{pics/learning_curve/otto.png}
\includegraphics[width=0.27\linewidth]{pics/learning_curve/moa.png}
\\
\caption{\small Learning curves for validation error for SketchBoost Full and SketchBoost with Random Sampling.\label{pic:learning_curve}}
\vspace{-1em}
\end{figure}
\par
\vspace{-0.5em}
\paragraph{Comparison with GBDT-MO.}
We also compare SketchBoost with GBDT-MO Full and GBDT-MO (sparse) from \cite{gbdt-mo-2021} (we want to highlight that GBDT-Sparse from \cite{gbdt-sparse-2017} does not have an open-source implementation).
As sketching strategies, we consider here only Random Sampling and Random Projection. As the baseline, we consider only CatBoost on CPU
(to make it comparable to GBDT-MO which works only on CPU). The datasets to compare and the best hyperparameters were taken from the original paper.
\par
Summary results are presented in \Cref{tb:test_score_gbdtmo} and \Cref{tb:train_time_gbdtmo}. SketchBoost with sketching strategies outperforms other algorithms on most datasets in terms of accuracy. GBDT-MO (sparse) is everywhere slower than GBDT-MO Full (because of optimization with a sparsity constraint).
Furthermore, its training time is comparable to CatBoost.
The time comparison with SketchBoost is not fair because SketchBoost runs on GPU, but, as shown, it is orders of magnitude faster. It is worth noting that SketchBoost Full is sometimes faster than SketchBoost with a sketching strategy.
The reason for this is that if the dataset is small, then each boosting iteration requires little time. Therefore, when a sketching strategy is used, the speed-up for each boosting iteration may be insignificant (especially because of ineffective utilization of the GPU). At the same time, the number of iterations needed for convergence may be greater, which may increase the overall training time. This is exactly what happened here.
Further details on this experiment are given in \Cref{sec:experiment_gbdtmo}.
\par
\begin{table}[t!]
\setlength\tabcolsep{3pt}
\setlength\extrarowheight{2pt}
\captionsetup{justification=centering}
\centering
\vspace{-1em}
\caption{\small
Test scores (accuracy for classification and RMSE for regression)
$\pm$ their standard deviation.\label{tb:test_score_gbdtmo}}
\vspace{0.2em}
\scalebox{0.63}{
\begin{tabular}{@{\extracolsep{4pt}}lcccccc@{}}
\toprule
& \multicolumn{3}{c}{\textbf{SketchBoost}} & \multicolumn{2}{c}{\textbf{GBDT-MO}}
& \textbf{Baseline}
\vspace{0.4em}\\
\cline{2-4} \cline{5-6} \cline{7-7} \vspace{-0.2em}
\textbf{Dataset} & \textbf{Random Sampling} & \textbf{Random Projection} & \textbf{SketchBoost Full} & \textbf{GBDT-MO (sparse)} & \textbf{GBDT-MO Full} & \textbf{CatBoost} \\
&\small{(for the best $k$)} & \small{(for the best $k$)} & \small{(multioutput)} & \small{(for the best $k$)} & \small{(multioutput)} & \small{(multioutput)}\\
\midrule
\textbf{Multiclass classification} & & & & & &\\
MNIST (10 classes) & 0.9755 & 0.9740 & 0.9730 & 0.9758 & \textbf{0.9760} & 0.9684\\
& \qquad\tcgr{$\pm$0.0042} & \qquad\tcgr{$\pm$0.0032} & \qquad\tcgr{$\pm$0.0028} & \qquad\tcgr{$\pm$0.0048} & \qquad\tcgr{$\pm$0.0040} & \qquad\tcgr{$\pm$0.0040}\\
Caltech (101 classes) & \textbf{0.5704} & 0.5623 & 0.5549 & 0.4796 & 0.4469 & 0.5049\\
& \qquad\tcgr{$\pm$0.0273} & \qquad\tcgr{$\pm$0.0159} & \qquad\tcgr{$\pm$0.0080} & \qquad\tcgr{$\pm$0.0375} & \qquad\tcgr{$\pm$0.0590} & \qquad\tcgr{$\pm$0.0167}\\
\midrule
\textbf{Multilabel classification} & & & & & &\\
NUS-WIDE (81 labels) & 0.9892 & \textbf{0.9897} & 0.9893& 0.9892 & 0.9891 & 0.9893\\
& \qquad\tcgr{$\pm$0.0003} & \qquad\tcgr{$\pm$0.0003} & \qquad\tcgr{$\pm$0.0002} & \qquad\tcgr{$\pm$0.0006} & \qquad\tcgr{$\pm$0.0002} & \qquad\tcgr{$\pm$0.0001}\\
\midrule
\textbf{Multitask regression} & & & & & &\\
MNIST-REG (24 tasks) & 0.2661 & \textbf{0.2654} & 0.2660 & 0.2736 & 0.2723 & 0.2708\\
& \qquad\tcgr{$\pm$0.0019} & \qquad\tcgr{$\pm$0.0012} & \qquad\tcgr{$\pm$0.0019} & \qquad\tcgr{$\pm$0.0017} & \qquad\tcgr{$\pm$0.0026} & \qquad\tcgr{$\pm$0.0023}\\
\bottomrule
\end{tabular}}
\end{table}
\begin{table}[t!]
\setlength\tabcolsep{3pt}
\setlength\extrarowheight{2pt}
\captionsetup{justification=centering}
\centering
\vspace{-1em}
\caption{\small
Training time per fold in seconds.\label{tb:train_time_gbdtmo}}
\vspace{0.2em}
\scalebox{0.63}{
\begin{tabular}{@{\extracolsep{4pt}}lcccccc@{}}
\toprule
& \multicolumn{3}{c}{\textbf{SketchBoost (GPU)}} & \multicolumn{2}{c}{\textbf{GBDT-MO (CPU)}}
& \textbf{Baseline (CPU)}
\vspace{0.4em}\\
\cline{2-4} \cline{5-6} \cline{7-7} \vspace{-0.2em}
\textbf{Dataset} & \textbf{Random Sampling} & \textbf{Random Projection} & \textbf{SketchBoost Full} & \textbf{GBDT-MO (sparse)} & \textbf{GBDT-MO Full} & \textbf{CatBoost} \\
&\small{(for the best $k$)} & \small{(for the best $k$)} & \small{(multioutput)} & \small{(for the best $k$)} & \small{(multioutput)} & \small{(multioutput)}\\
\midrule
\textbf{Multiclass classification} & & & & & &\\
MNIST (10 classes) & 102 & 66 & \textbf{46} & 399 & 362 & 156\\
Caltech (101 classes) & 15 & 16 & \textbf{13} & 1312 & 776 & 136\\
\midrule
\textbf{Multilabel classification} & & & & & &\\
NUS-WIDE (81 labels) & \textbf{36} & 72 & 87 & 3660 & 2606 & 13857\\
\midrule
\textbf{Multitask regression} & & & & & &\\
MNIST-REG (24 tasks) & 120 & \textbf{45} & 90 & 163 & 210 & 964\\
\bottomrule
\end{tabular}}
\vspace{-1em}
\end{table}
\par
\vspace{-0.5em}
\section{Conclusion}
\label{sec:conclusion}
\begin{wrapfigure}{r}{0.47\textwidth}
\vspace{-2.5em}
\centering
\includegraphics[width=\linewidth]{pics/main_train_time_SB.png}
\caption{\small Training time of XGBoost, CatBoost, and SketchBoost in the same experiment as in \Cref{pic:train_time_main}. Here SketchBoost
uses Random Projection with sketch dimension $k=5$. Further details are given in \Cref{sec:synthetic_dataset}.\label{pic:train_time_main_SB}}
\vspace{-1em}
\end{wrapfigure}
\par
In this paper, we presented effective
methods to speed up GBDT on multioutput tasks.
These methods are generic and can be easily
integrated into any single-tree GBDT implementation.
On real-world datasets, these methods achieve
results comparable to, and sometimes even better than,
the existing state-of-the-art
GBDT implementations, but in remarkably less time.
The proposed methods are implemented in SketchBoost
which itself is a part of
our Python-based implementation of GBDT
called Py-Boost.
\Cref{pic:train_time_main_SB} concludes this paper
by showing the gain in training time of SketchBoost
in the same experiment as in \Cref{pic:train_time_main}
from the Introduction.
\section*{Acknowledgements}
\label{sec:acknowledgements}
We would like to thank
Gleb Gusev and Bulat Ibragimov for
helpful discussions and feedback for an
earlier draft of this work,
Dmitry Simakov and Mikhail Kuznetsov
for the help with the TabNet experiments,
and Maxim Savchenko and all the Sber AI Lab team for
their support and active interest in this project.
We would also like to thank the anonymous reviewers for their thoughtful feedback.
\clearpage
\bibliographystyle{plainnat}
\section{Introduction}
\label{sec:intro}
Autonomous vehicles (AVs) are believed to be the foundation of the next-decade transportation system
and are expected to improve the traffic flow that is presently dominated by human vehicles (HVs).
Modeling AVs' driving behavior and quantifying the impact of different AV penetration rates on traffic are of great significance.
This paper focuses on traffic stability, one of the most fundamental traffic features. Traffic stability refers to a traffic system's asymptotic stability around uniform flows. HV traffic is observed to be an unstable system in which a small perturbation (caused by driving errors or delays) to the uniform flow grows with time and develops into traffic congestion. By removing human errors, it is expected that AVs
will help stabilize the traffic system. A field experiment \cite{stern2018dissipation} showed that one AV is able to stabilize the traffic with approximately twenty vehicles on a ring road.
AV's capability of stabilizing traffic is validated using microscopic models for mixed AV-HV traffic. In microscopic models, the traffic system is described by ordinary differential equations.
One can carry out standard linear stability analysis to characterize such a traffic system's stability, built upon
connected cruise controllers \cite{jin2018connected} or generic car-following models \cite{cui2017stabilizing,wu2018stabilizing}. \cite{jin2018connected,cui2017stabilizing} considered only one AV with multiple HVs; \cite{wu2018stabilizing} studied multiple AVs and multiple HVs but only focused on the head-to-tail stability.
However, in the general case, the mixed traffic stability analysis relies on the topology of mixed vehicles and the vehicle-to-vehicle communication network,
which suffers from scalability issues.
One alternative approach to address the scalability issues is the PDE approximation \cite{zheng2016stability}.
This approach suggests studying the stability of continuum traffic flow models, which are the limits of microscopic models. The approach is well suited for mixed traffic since one only needs to be concerned with the density distributions of different classes.
In continuum traffic flow models,
the traffic system is described by partial differential equations (PDEs) of traffic density and velocity. For single class traffic, the Lighthill-Whitham-Richards (LWR) model \cite{lighthill1955kinematic} is the most extensively used continuum model.
As a generalization, the multiclass LWR is widely used to model the interaction between two types of vehicles.
\cite{levin2016multiclass} is among a few studies that applied multi-class LWR models to AV-HV mixed traffic and proposed networked traffic controls in the presence of AVs.
Based on gas-kinetic theory, \cite{ngoduy2009continuum} proposed a multiclass macroscopic model to capture the effect of communication and information sharing on traffic flow
and analyzed the model's stability with respect to the connected vehicle's penetration rate;
\cite{porfyri2015stability,delis2016simulation,delis2018macroscopic} modeled the macroscopic traffic flow of mixed Adaptive Cruise Control (ACC) and Cooperative Adaptive Cruise Control (CACC) vehicles and analyzed how the ACC vehicle's penetration rate influences the traffic stability.
This paper models AVs using the \emph{mean field game} following the authors' work \cite{huang2019game}. In this framework, AVs are assumed to be rational, utility-optimizing agents with anticipation capabilities and play a non-cooperative game by selecting their driving speeds.
AVs' utility-optimizing and anticipation behaviors are distinctive characteristics from the aforementioned continuum models. By extending \cite{huang2019game}, this paper aims to build continuum traffic flow models for both pure AV traffic and mixed AV-HV traffic based on mean field games and analyze the models' traffic stability.
The remainder of the paper is organized as follows. Section \ref{sec:pre} provides an overview of the mean field game and the Aw-Rascle-Zhang model, used for modeling AVs and HVs, respectively. Section \ref{sec:form} formulates models for both pure AV traffic and mixed AV-HV traffic. Based on the proposed models, Section \ref{sec:av_analysis} shows the linear stability analysis for the pure AV traffic and Section \ref{sec:mixed_analysis} demonstrates the mixed traffic's stability through numerical experiments.
\section{Preliminaries}
\label{sec:pre}
\subsection{Mean Field Game}
\label{sec:pre_mfg}
Mean field game (MFG) is a game-theoretic framework to model complex multi-agent dynamic systems \cite{lasry2007mean}.
In the MFG framework, a population of $N$ rational utility-optimizing agents are modeled by a dynamic system. The agents interact with each other through their utilities. Assuming those agents optimize their utilities in a non-cooperative way, they form a \emph{differential game}.
Exact Nash equilibria to the differential game are generally hard to solve when $N$ is large. Alternatively, MFG considers the continuum problem as $N\to\infty$. By exploiting the ``smoothing" effect of a large number of interacting individuals, MFG assumes that each agent only responds to and contributes to the density distribution of the whole population. Then the equilibria are characterized by a set of two PDEs: a backward Hamilton-Jacobi-Bellman (HJB) equation describing a generic agent's optimal control provided the density distribution and a forward Fokker-Planck equation describing the population's density evolution provided individual controls.
In this paper we shall formulate AV traffic as a mean field game. AVs are modeled as rational agents with predefined driving costs.
Their density distribution is exactly the traffic density and the Fokker-Planck equation is the same as the continuity equation (CE) that is widely used in continuum traffic flow models.
\subsection{Aw-Rascle-Zhang Model}
\label{sec:pre_arz}
The following Aw-Rascle-Zhang (ARZ) model:
\begin{align}
\mbox{(CE)}\quad \rho_t+(\rho u)_x&=0,\label{eq:arz1}\\
\mbox{(ME)}\quad [u+h(\rho)]_t+u[u+h(\rho)]_x&=\frac{1}{\tau}[U(\rho)-u],\label{eq:arz2}
\end{align}
is a non-equilibrium continuum traffic flow model describing human driving behaviors \cite{aw2000resurrection,zhang2002non}, where,\\
$\rho(x,t)$, $u(x,t)$: the traffic density and speed;\\
$U(\cdot)$: the desired speed function;\\
$h(\cdot)$: the hesitation function that is an increasing function of the density;\\
$\tau$: the relaxation time quantifying how fast drivers adapt their current speeds to desired speeds.
Equation \eqref{eq:arz1} is the continuity equation describing the flow conservation and \eqref{eq:arz2} is a momentum equation (ME) prescribing human driver's dynamic behavior.
The ARZ model is able to predict important human driving features such as stop-and-go waves and traffic instability \cite{Seibold2012}. Traffic stability is defined around uniform flows. In continuum models, uniform flows are described by constant solutions $\rho(x,t)\equiv\bar{\rho}$, $u(x,t)\equiv\bar{u}$. The constant solutions of the ARZ model is given by $\bar{u}=U(\bar{\rho})$. Then the traffic stability for the ARZ model is defined as follows
\begin{defn}\label{def:stab1}
The ARZ model \eqref{eq:arz1}\eqref{eq:arz2} is stable around the uniform flow $(\bar{\rho},\bar{u})$ where $\bar{u}=U(\bar{\rho})$ if for any $\varepsilon>0$, there exists $\delta>0$ such that for any solution $\rho(x,t),u(x,t)$ to the system:
\begin{align}
\sup_{0\leq t<\infty}\left\{\norm{\rho(\cdot,t)-\bar{\rho}}+\norm{u(\cdot,t)-\bar{u}}\right\}\leq \varepsilon,
\end{align}
whenever $\norm{\rho(\cdot,0)-\bar{\rho}}+\norm{u(\cdot,0)-\bar{u}}\leq \delta$.
Here $\norm{\cdot}$ is a given norm. The system is linearly stable if its linearized system at $(\bar{\rho},\bar{u})$ is stable around the zero solution.
\end{defn}
The ARZ model has a simple linear stability criterion \cite{Seibold2012}:
\begin{thm}
The ARZ model \eqref{eq:arz1}\eqref{eq:arz2} is linearly stable around the uniform flow $(\bar{\rho},\bar{u})$ where $\bar{u}=U(\bar{\rho})$ if and only if $h'(\bar{\rho})>-U'(\bar{\rho})$.
\end{thm}
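The criterion is easy to check numerically; in the sketch below the
hesitation function $h(\rho)=\rho^2$ is an assumed example, not a
choice prescribed by the model:
\begin{verbatim}
u_max, rho_jam = 1.0, 1.0
dU = lambda r: -u_max / rho_jam   # Greenshields U'(rho)
dh = lambda r: 2.0 * r            # assumed h(rho) = rho^2

def arz_linearly_stable(rho_bar):
    return dh(rho_bar) > -dU(rho_bar)

# With these choices the criterion reads 2*rho_bar > 1: the uniform
# flow is linearly stable at rho_bar = 0.7 and unstable at 0.3.
\end{verbatim}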
Because of its capability of producing traffic instability, we shall use the ARZ model \eqref{eq:arz1}\eqref{eq:arz2} to characterize HV's driving behavior.
\section{Model Formulation}
\label{sec:form}
\subsection{Pure AV Traffic: Mean Field Game}
In this section, we will build a pure AV continuum traffic flow model based on a mean field game following \cite{huang2019game}.
Assume that a large population of homogeneous AVs is driving on a closed highway without any entrance or exit. Those AVs anticipate others' behaviors and the evolution of the traffic density $\rho(x,t)$ on a predefined time horizon $[0,T]$. AVs control their speeds and aim to minimize their driving costs on the horizon $[0,T]$.
Then the AVs' optimal cost $V(x,t)$ and optimal velocity field $u(x,t)$ can be described by a set of HJB equations \cite{huang2019game}:
\begin{align}
\mbox{(HJB)}\quad &V_t+uV_x+f(u,\rho)=0,\label{eq:hjb1}\\
&u=\text{argmin}_{\alpha} \{\alpha V_x+f(\alpha,\rho)\},\label{eq:hjb2}
\end{align}
where $f(\cdot,\cdot)$ is the \emph{cost function} \cite{huang2019game}.
When all AVs follow their optimal velocity controls, the system's density evolution is described by the continuity equation:
\begin{align}
\mbox{(CE)}\quad \rho_t+(\rho u)_x=0.\label{eq:cemfg}
\end{align}
The mean field game is described by the coupled system \eqref{eq:hjb1}\eqref{eq:hjb2}\eqref{eq:cemfg}.
\begin{itemize}
\item The initial condition for the forward continuity equation \eqref{eq:cemfg} is given by the initial density $\rho(x,0)=\rho_0(x)$.
\item The terminal condition for the backward HJB equations \eqref{eq:hjb1}\eqref{eq:hjb2} is given by the terminal cost $V(x,T)=V_T(x)$. We will always set $V_T(x)=0$ meaning that the cars have no preference on their destinations.
\item The choice of the spatial boundary condition depends on the traffic scenario. In this paper we assume that the highway is a ring road of fixed length $L$ and specify the periodic boundary condition $\rho(0,t)=\rho(L,t)$, $V(0,t)=V(L,t)$.
\end{itemize}
The cost function represents certain driving objectives. The choice of the cost function determines AV's driving behavior. In this paper we shall follow \cite{huang2019game} and take the following cost function:
\begin{align}
f(u,\rho)=\underbrace{\frac12\left(\frac{u}{u_{\text{max}}}\right)^2}_{\text{kinetic energy}}-\underbrace{\frac{u}{u_{\text{max}}}}_{\text{efficiency}}+\underbrace{\frac{u\rho}{u_{\text{max}}\rho_{\text{jam}}}}_{\text{safety}},\label{eq:costfct1}
\end{align}
where,\\
$u_\text{max}$ and $\rho_\text{jam}$ are the free flow speed and the jam density;\\
$\frac12\left(u/u_{\text{max}}\right)^2$ models the car's kinetic energy;\\
$-u/u_{\text{max}}$ models the car's efficiency; minimizing this term means that the car should drive as fast as possible;\\
$u\rho/u_{\text{max}}\rho_{\text{jam}}$ models the safety; it is a penalty term that restricts the car's speed in traffic congestion.
The MFG system corresponding to the cost function \eqref{eq:costfct1} is \cite{huang2019game}:
\begin{subequations}
\begin{numcases}{}
\rho_t+(\rho u)_x=0,\label{eq:mfg_cont}\\
V_t+uV_x+\frac12\left(\frac{u}{u_{\text{max}}}\right)^2-\frac{u}{u_{\text{max}}}+\frac{u\rho}{u_{\text{max}}\rho_{\text{jam}}}=0,\quad\quad\label{eq:mfg_hjb1}\\
u=g_{[0,u_\text{max}]}\left(u_{\text{max}}\left(1-\frac{\rho}{\rho_{\text{jam}}}-u_{\text{max}}V_x\right)\right),\label{eq:mfg_hjb2}
\end{numcases}
\end{subequations}
where $g_{[0,u_\text{max}]}(u)=\max\{\min\{u,u_\text{max}\},0\}$ is a cut-off function which ensures the cars' speeds satisfy the constraint $0\leq u\leq u_\text{max}$.
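For illustration, the pointwise optimal speed \eqref{eq:mfg_hjb2} can
be evaluated as follows (a sketch of ours):
\begin{verbatim}
import numpy as np

def optimal_speed(rho, V_x, u_max=1.0, rho_jam=1.0):
    u = u_max * (1.0 - rho / rho_jam - u_max * V_x)
    return np.clip(u, 0.0, u_max)  # the cutoff g_[0, u_max]
\end{verbatim}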
\cite{huang2019game} provides theoretical and numerical analysis on the MFG system \eqref{eq:mfg_cont}\eqref{eq:mfg_hjb1}\eqref{eq:mfg_hjb2}.
The uniform flows of the MFG system \eqref{eq:mfg_cont}\eqref{eq:mfg_hjb1}\eqref{eq:mfg_hjb2} are given by $\bar{u}=u_\text{max}\left(1-\frac{\bar{\rho}}{\rho_\text{jam}}\right)$.
Note that Definition \ref{def:stab1} does not apply to the MFG system since the system is defined and solved on a fixed time horizon $[0,T]$. In this case, we define traffic stability as follows:
\begin{defn}\label{def:stab2}
The MFG system \eqref{eq:mfg_cont}\eqref{eq:mfg_hjb1}\eqref{eq:mfg_hjb2} is stable around the uniform flow $(\bar{\rho},\bar{u})$ where $\bar{u}=u_\text{max}(1-\bar{\rho}/\rho_\text{jam})$ if for any $\varepsilon>0$, there exists $\delta>0$ such that for any $T>0$ and for any solution $\rho^{(T)}(x,t),u^{(T)}(x,t)$ to the system with $V_T(x)=0$ on the time horizon $[0,T]$:
\begin{align}
\sup_{0\leq t\leq T}\left\{\norm{\rho^{(T)}(\cdot,t)-\bar{\rho}}+\norm{u^{(T)}(\cdot,t)-\bar{u}}\right\}\leq \varepsilon,
\end{align}
whenever $\norm{\rho(\cdot,0)-\bar{\rho}}\leq \delta$.
The system is linearly stable if its linearized system at $(\bar{\rho},\bar{u})$ is stable around the zero solution.
\end{defn}
\subsection{Mixed Traffic: Coupled MFG-ARZ System}
This section aims to develop a continuum mixed AV-HV traffic flow model. We denote $\rho^{\text{AV}}(x,t)$ the AV density, $\rho^{\text{HV}}(x,t)$ the HV density and
\begin{align}
\rho^{\text{TOT}}(x,t)=\rho^{\text{AV}}(x,t)+\rho^{\text{HV}}(x,t),
\end{align}
the total density. Denote $u^{\text{AV}}(x,t)$ and $u^{\text{HV}}(x,t)$ the velocities of AVs and HVs, respectively.
We model HVs by the ARZ model and AVs by the MFG, respectively. The next step is to model the interactions between AVs and HVs. The interactions include the flow interaction and the dynamic interaction.
\emph{Flow interaction}. The flow interaction relates to how the multiclass flows are computed and assigned. We follow the framework from \cite{fan2015heterogeneous} and suppose that the multiclass flows are described by the following continuity equations for both AVs and HVs:
\begin{align}
\mbox{(CE-AV)}\quad &\rho^{\text{AV}}_t+(\rho^{\text{AV}} u^{\text{AV}})_x=0,\\
\mbox{(CE-HV)}\quad &\rho^{\text{HV}}_t+(\rho^{\text{HV}} u^{\text{HV}})_x=0.
\end{align}
\emph{Dynamic interaction}. Each of the velocities $u^{\text{AV}}$ and $u^{\text{HV}}$ should depend on both AV density $\rho^{\text{AV}}$ and HV density $\rho^{\text{HV}}$. The way of defining the velocities over multiclass densities characterizes the dynamic interaction. \cite{fan2015heterogeneous} summarized some possible formulations of the dynamic interaction.
In this paper we model an asymmetric dynamic interaction between AVs and HVs by introducing multiclass densities into the HJB equations and the momentum equation of the system. For HVs, we assume that HVs only observe the total density $\rho^\text{TOT}$ to adapt their speeds. The momentum equation \eqref{eq:arz2} in the ARZ model then becomes:
\begin{align}
\bigl[u^{\text{HV}}+h(\rho^{\text{TOT}})\bigr]_t+u^{\text{HV}}\bigl[u^{\text{HV}}+h(\rho^{\text{TOT}})\bigr]_x=\notag\\
\frac{1}{\tau}\left[U(\rho^{\text{TOT}})-u^{\text{HV}}\right].
\end{align}
We take the Greenshields desired speed function $U(\rho)=u_{\text{max}}\left(1-\rho/\rho_{\text{jam}}\right)$.
For AVs, we assume that AVs observe both AV and HV densities. We model AVs' reaction to multiclass densities by introducing an extra term into the AV's cost function. The AV's modified cost function for mixed traffic is:
\begin{align}
f(u^{\text{AV}},\rho^{\text{AV}},\rho^{\text{HV}})=&\underbrace{\frac12\left(\frac{u^{\text{AV}}}{u_{\text{max}}}\right)^2}_{\text{kinetic energy}}-\underbrace{\frac{u^{\text{AV}}}{u_{\text{max}}}}_{\text{efficiency}}\notag\\
+&\underbrace{\frac{u^{\text{AV}}\rho^{\text{TOT}}}{u_{\text{max}}\rho_{\text{jam}}}+\beta\frac{\rho^{\text{HV}}}{\rho_\text{jam}}}_{\text{safety}}\label{eq:costcc},
\end{align}
where the safety is modeled by two penalty terms: one is similar to the penalty term in \eqref{eq:costfct1} but the congestion is modeled by the total density $\rho^{\text{TOT}}$, the other quantifies HV's impact on AV's speed selection and the parameter $\beta$ represents AV's sensitivity to HV's density. From \eqref{eq:costcc} we can derive the corresponding HJB equations.
Summarizing all above, we obtain the following coupled MFG-ARZ system:
\begin{subequations}
\begin{numcases}{}
\rho^{\text{AV}}_t+(\rho^{\text{AV}} u^{\text{AV}})_x=0,\label{eq:mixed_1}\\
V_t+u^{\text{AV}}V_x+\frac12\left(\frac{u^{\text{AV}}}{u_\text{max}}\right)^2-\frac{u^{\text{AV}}}{u_\text{max}}+\frac{u^{\text{AV}}\rho^{\text{TOT}}}{u_\text{max}\rho_\text{jam}}\notag\\
+\beta\frac{\rho^{\text{HV}}}{\rho_\text{jam}}=0,\quad\quad\ \label{eq:mixed_2}\\
u^{\text{AV}}=g_{[0,u_\text{max}]}\left(u_\text{max}\left(1-\frac{\rho^{\text{TOT}}}{\rho_\text{jam}}-u_\text{max}V_x\right)\right),\quad\quad \label{eq:mixed_3}\\
\rho^{\text{HV}}_t+(\rho^{\text{HV}} u^{\text{HV}})_x=0,\label{eq:mixed_4}\\
\bigl[u^{\text{HV}}+h(\rho^{\text{TOT}})\bigr]_t+u^{\text{HV}}\bigl[u^{\text{HV}}+h(\rho^{\text{TOT}})\bigr]_x=\notag\\
\frac{1}{\tau}\left[u_\text{max}\left(1-\frac{\rho^{\text{TOT}}}{\rho_\text{jam}}\right)-u^{\text{HV}}\right],\label{eq:mixed_5}\\
\rho^{\text{TOT}}=\rho^{\text{AV}}+\rho^{\text{HV}}.\label{eq:mixed_6}
\end{numcases}
\end{subequations}
\begin{itemize}
\item The initial conditions are given by the initial densities $\rho^\text{AV}(x,0)=\rho^\text{AV}_0(x)$, $\rho^\text{HV}(x,0)=\rho^\text{HV}_0(x)$ and the initial velocity $u^\text{HV}(x,0)=u^\text{HV}_0(x)$.
\item The terminal condition is given by the terminal cost $V(x,T)=V_T(x)$. We will always set $V_T(x)=0$.
\item We specify the periodic boundary conditions for all of $\rho^{\text{AV}}$, $\rho^{\text{HV}}$, $u^{\text{HV}}$ and $V$.
\end{itemize}
The mixed traffic's uniform flows are defined as the system's constant solutions $\rho^{\text{AV}}(x,t)\equiv\bar{\rho}^{\text{AV}}$, $\rho^{\text{HV}}(x,t)\equiv\bar{\rho}^{\text{HV}}$,
\begin{align}
\rho^{\text{TOT}}(x,t)\equiv\bar{\rho}^{\text{TOT}}=\bar{\rho}^{\text{AV}}+\bar{\rho}^{\text{HV}},
\end{align}
and
\begin{align}
u^{\text{AV}}(x,t)\equiv u^{\text{HV}}(x,t)\equiv\bar{u}=u_\text{max}\left(1-\frac{\bar{\rho}^\text{TOT}}{\rho_\text{jam}}\right).\label{eq:mix_con_sol}
\end{align}
Since AVs are modeled by a mean field game, the mixed traffic system (\ref{eq:mixed_1}-\ref{eq:mixed_6}) is defined and solved on a predefined time horizon $[0,T]$. Similar to Definition \ref{def:stab2}, the mixed traffic system's stability is defined as:
\begin{defn}\label{def:stab3}
The system (\ref{eq:mixed_1}-\ref{eq:mixed_6}) is stable around the uniform flow $(\bar{\rho}^{\text{AV}},\bar{\rho}^{\text{HV}},\bar{u})$ which satisfies \eqref{eq:mix_con_sol} if for any $\varepsilon>0$, there exists $\delta>0$ such that for any $T>0$ and for any solution $\rho^{\text{AV},(T)}(x,t)$, $u^{\text{AV},(T)}(x,t)$, $\rho^{\text{HV},(T)}(x,t)$, $u^{\text{HV},(T)}(x,t)$ to the system with $V_T(x)=0$ on the time horizon $[0,T]$:
\begin{align}
\sup_{0\leq t\leq T}\sum_{i=\text{AV},\text{HV}}\norm{\rho^{i,(T)}(\cdot,t)-\bar{\rho}^i}+\norm{u^{i,(T)}(\cdot,t)-\bar{u}}\leq \varepsilon,\label{eq:stab1}
\end{align}
whenever
\begin{align}
\sum_{i=\text{AV},\text{HV}}\norm{\rho^{i,(T)}(\cdot,0)-\bar{\rho}^i}+\norm{u^{\text{HV},(T)}(\cdot,0)-\bar{u}}\leq \delta.\label{eq:stab2}
\end{align}
The system is linearly stable if its linearized system at $(\bar{\rho}^{\text{AV}},\bar{\rho}^{\text{HV}},\bar{u})$ is stable around the zero solution.
\end{defn}
\section{Pure AV Traffic: Linear Stability Analysis}
\label{sec:av_analysis}
In this section we will carry out the standard linear stability analysis for the MFG system \eqref{eq:mfg_cont}\eqref{eq:mfg_hjb1}\eqref{eq:mfg_hjb2}.
By scaling to dimensionless quantities we assume $u_{\text{max}}=1$ and $\rho_{\text{jam}}=1$. In addition we remove the speed constraint $0\leq u\leq u_\text{max}$ since the existence of the constraint does not change the system's stability when $0<\bar{u}<u_\text{max}$. Then we eliminate $V$ from the system \eqref{eq:mfg_cont}\eqref{eq:mfg_hjb1}\eqref{eq:mfg_hjb2} and obtain a simpler system of $\rho$ and $u$:
\begin{align}
\begin{cases}
\rho_t+(\rho u)_x=0,\\
u_t+uu_x-(\rho u)_x=0.
\end{cases}\label{mfg_clean}
\end{align}
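One way to see this: without the cut-off, \eqref{eq:mfg_hjb2} reads $u=1-\rho-V_x$, i.e., $V_x=1-\rho-u$ and hence $V_{xt}=-\rho_t-u_t$. Differentiating \eqref{eq:mfg_hjb1} with respect to $x$ and substituting these identities yields
\begin{align}
-\rho_t-u_t+\left[u(1-\rho-u)\right]_x+uu_x-u_x+(u\rho)_x=0,
\end{align}
which, after expanding the bracket and using $\rho_t=-(\rho u)_x$, reduces to the second equation of \eqref{mfg_clean}.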
Fix a uniform flow $(\bar{\rho},\bar{u})$ where $\bar{u}=1-\bar{\rho}$. Suppose that the system \eqref{mfg_clean} has the initial condition $\rho(x,0)=\bar{\rho}+\tilde{\rho}_0(x)$ and the terminal condition $V_T(x)=0$. Here $\tilde{\rho}_0(x)$ is any small perturbation.
Then we linearize the system \eqref{mfg_clean} near the uniform flow $(\bar{\rho},\bar{u})$. Suppose $\rho(x,t)=\bar{\rho}+\tilde{\rho}(x,t)$, $u(x,t)=\bar{u}+\tilde{u}(x,t)$. Noting that $\bar{u}=1-\bar{\rho}$, we obtain the following linearized system:
\begin{align}
\begin{cases}
\tilde{\rho}_t+(1-\bar{\rho})\tilde{\rho}_x+\bar{\rho}\tilde{u}_x=0,\\
\tilde{u}_t+(\bar{\rho}-1)\tilde{\rho}_x+(1-2\bar{\rho})\tilde{u}_x=0.
\end{cases}\label{mfg_lin}
\end{align}
System \eqref{mfg_lin} is again a forward-backward system, with the initial condition $\tilde{\rho}(x,0)=\tilde{\rho}_0(x)$ and the terminal condition $\tilde{\rho}(x,T)+\tilde{u}(x,T)=0$ (the latter follows from $V_x(x,T)=0$).
\begin{prop}\label{prop:lin}
The linearized system \eqref{mfg_lin} is stable near the zero solution for all $0<\bar{\rho}<1$.
\end{prop}
We provide a computer-assisted proof of Proposition \ref{prop:lin} in the Appendix; an analytical proof is left for future research. As a corollary of Proposition \ref{prop:lin}, we have the following result on the MFG system's stability:
\begin{cor}
The MFG system \eqref{mfg_clean} is linearly stable around the uniform flow $(\bar{\rho},\bar{u})$ where $\bar{u}=1-\bar{\rho}$ for all $0<\bar{\rho}<1$.
\end{cor}
Our analysis shows that the proposed MFG system for AVs is always stable, even though each AV only aims to optimize its own utility. We now turn our attention to the mixed traffic and study whether the presence of AVs can stabilize the unstable HV traffic.
\section{Mixed Traffic: Numerical Experiments}
\label{sec:mixed_analysis}
In this section, we will demonstrate the stability of the mixed traffic system (\ref{eq:mixed_1}-\ref{eq:mixed_6}) by numerical experiments. We will run numerical simulations in different scenarios and check the stability in those simulations automatically with a stability criterion. Then we discuss how AVs' different penetration rates and different controller designs influence the stabilizing effect.
\subsection{Experimental Settings}
We take the vehicles' free-flow speed $u_{\text{max}}=\SI{30}{m/s}$ and the jam density $\rho_{\text{jam}}=\SI{1/7.5}{\per\meter}$ (one vehicle per \SI{7.5}{m}), and choose the hesitation function $h(\rho)$ in the ARZ model to be:
\begin{align}
h(\rho)=\SI{9}{m/s}\cdot\left(\frac{\rho/\rho_\text{jam}}{1-\rho/\rho_\text{jam}}\right)^{1/2},
\end{align}
which has the same form as the one used in \cite{Seibold2012}. For all of the numerical experiments, the length of the ring road $L=\SI{1}{km}$ and the length of the time horizon $T=2L/u_{\text{max}}$.
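As a quick consistency check on these parameter choices, the ARZ model's analytical linear stability criterion $h'(\bar{\rho})>-U'(\bar{\rho})$ \citep{Seibold2012} can be evaluated numerically to locate the density range over which the pure HV traffic is unstable. A minimal sketch (illustrative only; all variable names are ours):
\begin{verbatim}
import numpy as np

u_max, rho_jam = 30.0, 1.0 / 7.5   # free-flow speed (m/s), jam density (1/m)

def h(rho):
    # hesitation function used in the experiments
    x = rho / rho_jam
    return 9.0 * np.sqrt(x / (1.0 - x))

def h_prime(rho, eps=1e-8):
    # central finite-difference approximation of h'(rho)
    return (h(rho + eps) - h(rho - eps)) / (2.0 * eps)

# Greenshields: U(rho) = u_max*(1 - rho/rho_jam), so -U'(rho) = u_max/rho_jam
rho = np.linspace(0.01, 0.99, 99) * rho_jam
unstable = h_prime(rho) <= u_max / rho_jam
print("pure HV traffic unstable for rho/rho_jam in",
      (rho[unstable].min() / rho_jam, rho[unstable].max() / rho_jam))
\end{verbatim}
With the parameters above, the pure HV flow is unstable over a band of intermediate densities, consistent with, e.g., the instability at $\bar{\rho}^{\text{TOT}}=0.4\rho_{\text{jam}}$ reported below.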
For the system (\ref{eq:mixed_1}-\ref{eq:mixed_6}) and its arbitrary uniform flow solution $(\bar{\rho}^{\text{AV}},\bar{\rho}^{\text{HV}},\bar{u})$, the initial densities are set to be:
\begin{align}
\rho^i_0(x)=\bar{\rho}^i+0.1\times\bar{\rho}^i\sin(2\pi x/L),
\end{align}
for $i=\text{AV},\text{HV}$ so that the initial perturbations on both AV and HV densities are sine waves whose magnitudes are 10\% of the respective uniform states. The HV's initial velocity is set to be:
\begin{align}
u^{\text{HV}}_0(x)\equiv\bar{u}=u_\text{max}\left(1-\frac{\bar{\rho}^\text{TOT}}{\rho_\text{jam}}\right),
\end{align}
where $\bar{\rho}^\text{TOT}=\bar{\rho}^\text{AV}+\bar{\rho}^\text{HV}$ so that there is no initial perturbation on HV's velocity.
The AV's terminal cost is always set to be $V_T(x)=0$.
It is not easy to check the conditions \eqref{eq:stab1}\eqref{eq:stab2} directly, so we use a simplified stability criterion instead. For any solution $\rho^{\text{AV},(T)}(x,t)$, $u^{\text{AV},(T)}(x,t)$, $\rho^{\text{HV},(T)}(x,t)$, $u^{\text{HV},(T)}(x,t)$ to the system, we define an \emph{error function}:
\begin{align}
E(t)=\sum_{i=\text{AV},\text{HV}}\norm{\rho^{i,(T)}(\cdot,t)-\bar{\rho}^i}+\norm{u^{i,(T)}(\cdot,t)-\bar{u}},
\end{align}
for $0\leq t\leq T$ and the system is said to be unstable if:
\begin{align}
\max_{0\leq t\leq T}E(t)\geq 2E(0),\label{eq:crit}
\end{align}
otherwise it is said to be stable. The stability criterion \eqref{eq:crit} is checked automatically in the numerical experiments. In experiments without AVs, we have validated that the criterion \eqref{eq:crit} predicts the same stability as the ARZ model's analytical stability criterion.
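For reference, the criterion \eqref{eq:crit} amounts to a few lines of code. The sketch below (with a discrete $L^1$ norm as an example and hypothetical array names) classifies one simulation run:
\begin{verbatim}
import numpy as np

def is_unstable(rho_av, u_av, rho_hv, u_hv,
                rho_av_bar, rho_hv_bar, u_bar, dx):
    # all arrays have shape (n_t, n_x); each row is one time step
    def norm(v):                      # discrete L1 norm in space
        return dx * np.sum(np.abs(v), axis=1)
    E = (norm(rho_av - rho_av_bar) + norm(u_av - u_bar)
         + norm(rho_hv - rho_hv_bar) + norm(u_hv - u_bar))
    return E.max() >= 2.0 * E[0]      # criterion: max_t E(t) >= 2 E(0)
\end{verbatim}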
\subsection{Numerical Method}
To solve the coupled MFG-ARZ system (\ref{eq:mixed_1}-\ref{eq:mixed_6}) numerically, we apply a finite difference method (FDM) on a spatio-temporal grid.
We discretize the continuity equations \eqref{eq:mixed_1}\eqref{eq:mixed_4} by the Lax-Friedrichs scheme, and the HJB equations \eqref{eq:mixed_2}\eqref{eq:mixed_3} of the MFG by an upwind scheme \cite{huang2019game}. The momentum equation \eqref{eq:mixed_5} of the ARZ model is transformed into its conservative form with a relaxation term; we then apply a hybrid scheme with an explicit Lax-Friedrichs scheme for the conservation part and an implicit Euler scheme for the relaxation part. Finally, we assemble all discretized equations into a large nonlinear system and solve it by Newton's method \cite{huang2019game}.
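For concreteness, one explicit Lax-Friedrichs update of a continuity equation $\rho_t+(\rho u)_x=0$ on the periodic grid takes the following form (a standalone sketch; in the actual solver all equations are coupled and solved simultaneously by Newton's method):
\begin{verbatim}
import numpy as np

def lax_friedrichs_step(rho, u, dt, dx):
    # one step of rho_t + (rho*u)_x = 0 with periodic boundary conditions
    q = rho * u                                        # flux
    rho_p, rho_m = np.roll(rho, -1), np.roll(rho, 1)   # periodic neighbors
    q_p, q_m = np.roll(q, -1), np.roll(q, 1)
    return 0.5 * (rho_p + rho_m) - 0.5 * (dt / dx) * (q_p - q_m)
\end{verbatim}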
\subsection{Numerical Results}
In the first group of experiments we fix $\beta=0$ and try different pairs of $\bar{\rho}^{\text{AV}}$ and $\bar{\rho}^{\text{HV}}$. We restrict the values to satisfy $\bar{\rho}^{\text{AV}}+\bar{\rho}^{\text{HV}}\leq0.75\rho_{\text{jam}}$ to keep the total density from approaching the jam density. We check the system's stability in each numerical experiment and plot the results in the phase diagram of the normalized AV density versus the normalized HV density, see Figure \ref{fig:avhv}. We observe that when the HV density is fixed, adding AVs can stabilize the traffic. We also observe that when the AV density is large enough, the mixed traffic is always stable.
In the second group of experiments we keep $\beta=0$ but vary the total density $\bar{\rho}^{\text{TOT}}$ and the AV penetration rate. We plot the results in the phase diagram of the AV penetration rate versus the normalized total density, see Figure \ref{fig:pc}. We observe that when the total density is fixed, the traffic becomes more stable with a higher portion of AVs. In addition, the minimal AV penetration rate needed to stabilize the traffic increases with the total density. We also observe that when the AV penetration rate is large enough, the mixed traffic is always stable.
Figure \ref{fig:simu} compares the total density evolution between a stable example and an unstable example. When the total density is $\bar{\rho}^{\text{TOT}}=0.4\rho_{\text{jam}}$, the pure HV traffic is unstable while 30\% AVs can stabilize the mixed traffic. In the former case, the initial perturbation on the total density grows and develops into a shock; in the latter case, the same initial perturbation decays and the total density converges to a uniform flow.
\begin{figure}[htbp]
\centering
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/av_hv.png}
\caption{First group}
\label{fig:avhv}
\end{subfigure}%
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/rate_tot.png}
\caption{Second group}
\label{fig:pc}
\end{subfigure}
\caption{Stability regions for the first and second groups of experiments}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{.2\textwidth}
\centering
\includegraphics[width=.8\textwidth]{fig/simu1.png}
\end{minipage}
\begin{minipage}[t]{.2\textwidth}
\centering
\includegraphics[width=.8\textwidth]{fig/simu2.png}
\end{minipage}
\caption{Evolution of normalized total density when $\beta=0$, $\bar{\rho}^{\text{TOT}}=0.4\rho_{\text{jam}}$, 0\% AV (left) and 30\% AVs (right)}
\label{fig:simu}
\end{figure}
In the third group of experiments we fix the total density $\bar{\rho}^{\text{TOT}}=0.5\rho_{\text{jam}}$ and vary the AV penetration rate and the parameter $\beta$. We plot the results in the phase diagram of $\beta$ versus the AV penetration rate, see Figure \ref{fig:beta}. We observe that for any fixed $\beta$, increasing the AV penetration rate makes the traffic more stable. When the AV penetration rate is fixed but higher than 20\%, increasing $\beta$ makes the traffic more stable. That is, the more sensitive AVs are to the HV density, the more stable the traffic becomes.
\begin{figure}[htbp]
\centering
\includegraphics[width=.18\textwidth]{fig/beta.png}
\caption{Stability region for the third group of experiments}
\label{fig:beta}
\end{figure}
\section{Conclusion}
This paper presents continuum traffic flow models for both pure AV traffic and mixed AV-HV traffic. The pure AV traffic is modeled by a mean field game and the linear stability analysis shows the traffic is always stable. The mixed AV-HV traffic is modeled by a coupled MFG-ARZ system. To demonstrate the mixed traffic stability analysis, three groups of numerical experiments are performed. In particular, we characterize the stability regions over AV density and HV density as well as over total density and AV's penetration rate in the mixed traffic. We also quantify the impact of the AV controller parameter on traffic stability. In future work, we plan to develop analytical stability analysis for mixed traffic and discuss the relation between more general AV controller designs and stability under different types of AV-HV interactions.
\section*{APPENDIX}
Applying Fourier analysis to \eqref{mfg_lin}, we denote by $\hat{\rho}(\xi,t)$ and $\hat{u}(\xi,t)$ the Fourier modes of $\tilde{\rho}(x,t)$ and $\tilde{u}(x,t)$ at frequency $\xi=\frac{2k\pi}{L}$ ($k\in\mathbb{Z}$). For any frequency $\xi$:
\begin{align}
\begin{cases}
\hat{\rho}_t+i\xi(1-\bar{\rho})\hat{\rho}+i\xi\bar{\rho}\hat{u}=0,\\
\hat{u}_t+i\xi(\bar{\rho}-1)\hat{\rho}+i\xi(1-2\bar{\rho})\hat{u}=0.
\end{cases}\label{eq:ode}
\end{align}
It is an ODE system with the initial condition $\hat{\rho}(\xi,0)=\hat{\rho}_0(\xi)$ where $\hat{\rho}_0(\xi)$ is the Fourier transform of $\tilde{\rho}_0(x)$ and the terminal condition $\hat{\rho}(\xi,T)+\hat{u}(\xi,T)=0$. The linear PDE system \eqref{mfg_lin} is stable in $L^2$ norm if and only if there exists a universal constant $C>0$ such that for any $T>0$ and $\xi$, the solution of the ODE system \eqref{eq:ode} on $[0,T]$ satisfies:
\begin{align}
|\hat{\rho}(\xi,t)|^2+|\hat{u}(\xi,t)|^2\leq C|\hat{\rho}_0(\xi)|^2,\ \forall t\in[0,T]\label{eq:fin}.
\end{align}
The ODE system \eqref{eq:ode} is homogeneous. We can assume without loss of generality that $\hat{\rho}_0(\xi)=1$.
To check the condition \eqref{eq:fin} we directly solve this boundary value problem of the ODE system \eqref{eq:ode}. Denote $r=\sqrt{\bar{\rho} (5 \bar{\rho} -4)}$, $\eta=\xi t$, $\lambda =\xi T$ and $S=\exp \left(-\frac{1}{2} i \eta \left(r-3 \bar{\rho} +2\right) \right)$,
the solution is:
\begin{align}
\hat{\rho}(\xi,t)&=S\frac{(r+\bar{\rho})e^{i r \eta}+(r-\bar{\rho})e^{i r \lambda}}{r+\bar{\rho}+(r-\bar{\rho})e^{i r \lambda}},\\
\hat{u}(\xi,t)&=-S\frac{(r+3\bar{\rho}-2)e^{i r \eta}+(r-3\bar{\rho}+2)e^{i r \lambda}}{r+\bar{\rho}+(r-\bar{\rho})e^{i r \lambda}},
\end{align}
when $\bar{\rho}\neq\frac45$. When $\bar{\rho}=\frac45$ (so that $r=0$), the solution degenerates to $\hat{\rho}(\xi,t)=e^{\frac15i\eta}\frac{5i-2\eta+2\lambda}{5i+2\lambda}$ and $\hat{u}(\xi,t)=-e^{\frac15i\eta}\frac{5i-\eta+\lambda}{5i+2\lambda}$.
Define:
\begin{align}
E_{\bar{\rho}}(\lambda)=\max_{0\leq\eta\leq\lambda\text{ or }\lambda\leq \eta\leq 0}\left[|\hat{\rho}(\xi,t)|^2+|\hat{u}(\xi,t)|^2\right].
\end{align}
Then to check \eqref{eq:fin} it suffices to check the boundedness of the function $E_{\bar{\rho}}(\lambda)$ for all $0<\bar{\rho}<1$. We do this by evaluating $E_{\bar{\rho}}(\lambda)$ on a discrete grid of $\bar{\rho}$ and $\lambda$ values.
The computation shows that for any $\bar{\rho}$, $E_{\bar{\rho}}(\lambda)$ remains bounded as $|\lambda|\to\infty$.
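The scan behind this claim is easy to reproduce from the closed-form solution above (a sketch of our check; the grids are illustrative, and the degenerate case $\bar{\rho}=\frac45$ is skipped since it has its own formula):
\begin{verbatim}
import numpy as np

def E_max(rho_bar, lam, n_eta=2001):
    # max of |rho_hat|^2 + |u_hat|^2 over eta between 0 and lam
    r = np.sqrt(complex(rho_bar * (5.0 * rho_bar - 4.0)))  # imaginary if rho_bar < 4/5
    eta = np.linspace(0.0, lam, n_eta)
    S = np.exp(-0.5j * eta * (r - 3.0 * rho_bar + 2.0))
    den = (r + rho_bar) + (r - rho_bar) * np.exp(1j * r * lam)
    rho_hat = S * ((r + rho_bar) * np.exp(1j * r * eta)
                   + (r - rho_bar) * np.exp(1j * r * lam)) / den
    u_hat = -S * ((r + 3.0 * rho_bar - 2.0) * np.exp(1j * r * eta)
                  + (r - 3.0 * rho_bar + 2.0) * np.exp(1j * r * lam)) / den
    return np.max(np.abs(rho_hat)**2 + np.abs(u_hat)**2)

for rho_bar in np.linspace(0.05, 0.95, 19):
    if abs(rho_bar - 0.8) < 1e-9:   # degenerate case handled separately
        continue
    print(rho_bar, max(E_max(rho_bar, lam)
                       for lam in np.linspace(-200.0, 200.0, 81)))
\end{verbatim}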
\section*{ACKNOWLEDGMENT}
The authors would like to thank the Data Science Institute at Columbia University for providing a seed grant for this research.
\bibliographystyle{IEEEtran}
\section{Abstract}\label{sec:abstract}
\noindent In a previous work \citep{luo2016sparse2d_spej}, the authors proposed an ensemble-based 4D seismic history matching (SHM) framework, which has some relatively new ingredients in terms of the choice of seismic data type, the way to handle big seismic data and the related data noise estimation, and the use of a recently developed iterative ensemble history matching algorithm.
In seismic history matching, it is customary to use inverted seismic attributes, such as acoustic impedance, as the observed data. In doing so, extra uncertainties may arise from the inversion processes. The proposed SHM framework avoids such intermediate inversion processes by adopting amplitude versus angle (AVA) data. In addition, SHM typically involves assimilating a large amount of observed seismic attributes into reservoir models. To handle the big-data problem in SHM, the proposed framework adopts the following wavelet-based sparse representation procedure: First, a discrete wavelet transform is applied to observed seismic attributes. Then, uncertainty analysis is conducted in the wavelet domain to estimate noise in the resulting wavelet coefficients, and to calculate a corresponding threshold value. Wavelet coefficients above the threshold value, called leading wavelet coefficients hereafter, are used as the data for history matching. The retained leading wavelet coefficients preserve the most salient features of the observed seismic attributes, while rendering a substantially smaller data size. Finally, an iterative ensemble smoother is adopted to update reservoir models, in such a way that the leading wavelet coefficients of simulated seismic attributes better match those of observed seismic attributes.
As a proof-of-concept study, \cite{luo2016sparse2d_spej} applied the proposed SHM framework to a 2D case study, and numerical results therein indicated that the proposed framework worked well. However, the seismic attributes used in \cite{luo2016sparse2d_spej} are 2D datasets with relatively small data sizes, in comparison to those in practice. In the current study, we extend our investigation to a 3D benchmark case, the Brugge field. The seismic attributes used are 3D datasets, with the total number of seismic data points on the order of $\mathcal{O}(10^6)$. Our study indicates that, in this 3D case study, the wavelet-based sparse representation is still highly efficient in substantially reducing the size of seismic data, while preserving the information content therein as much as possible. Meanwhile, the proposed SHM framework also achieves reasonably good history matching performance. This investigation thus serves as an important preliminary step towards applying the proposed SHM framework to real field case studies.
\section*{Introduction}
\label{sec:introduction}
\noindent
Seismic is one of the most important tools used for reservoir exploration, monitoring, characterization and management in the petroleum industry. Compared to conventional production data used in history matching, seismic data is less frequent in time, but much denser in space. Therefore, complementary to production data, seismic data provide valuable additional information for reservoir characterization.
There are different types of seismic data that one can use in history matching. Figure \ref{fig:seis_type} provides an outline of the relation of some types of seismic data to reservoir petro-physical parameters (e.g., permeability and porosity) in forward simulations. As indicated there, using petro-physical parameters as the inputs to reservoir simulators, one generates fluid saturation and pressure fields. Through a petro-elastic model (PEM), one obtains acoustic and/or shear impedance (or equivalently, compressional and/or shear velocities and formation density) based on fluid saturation and/or pressure, and porosity. Finally, amplitude versus angle (AVA) data are computed by plugging impedance (or velocities and density) into an AVA equation (e.g., the Zoeppritz equation, see, for example, \citealp{avseth2010quantitative}).
To reduce the computational cost in forward simulations, many seismic history matching (SHM) studies use inverted seismic attributes that are obtained through seismic inversions. Such inverted properties can be, for instance, acoustic impedance (see, for example, \citealp{emerick2012history,emerick2013history,fahimuddin2010ensemble,skjervheim2007incorporating}) or fluid saturation fronts (see, for example, \citealp{abadpour20134d,leeuwenburgh2014distance,trani2012seismic}). One issue in using inverted seismic attributes as the observed data is that they may not come with an uncertainty quantification of the observation errors, since inverted seismic attributes are often obtained using deterministic inversion algorithms.
\begin{figure*}[!htb]
\centering
\includegraphics[scale=0.45]{pyramid.png}
\caption{\label{fig:seis_type} Some types of seismic data and their relation to reservoir petro-physical parameters.}
\end{figure*}
Typically the volume of seismic data is huge, therefore SHM often constitutes a ``big data assimilation'' problem. For ensemble-based history matching algorithms, a big data size may lead to certain numerical problems, e.g., ensemble collapse and high costs in computing and storing Kalman gain matrices \citep{Aanonsen-ensemble-2009,emerick2012history}. In addition, many history matching algorithms are developed for under-determined inverse problems, whereas a big data size could make the inverse problem become over-determined instead. This may thus affect the performance of history matching algorithms, as demonstrated in our previous study \citep{luo2016sparse2d_spej}.
\begin{figure*}[!htb]
\centering
\includegraphics[scale=0.4]{sr.png}
\caption{\label{fig:sparse_representation} Workflow of wavelet-based sparse representation.}
\end{figure*}
In \cite{luo2016sparse2d_spej}, we proposed an ensemble-based 4D SHM framework in conjunction with a wavelet-based sparse representation procedure. We take AVA attributes as the seismic data to avoid the extra uncertainties arising from a seismic inversion process. To address of the issue of big data, we adopt a wavelet-based sparse representation procedure. Figure \ref{fig:sparse_representation} explains the idea behind wavelet-based sparse representation. Given a set of seismic data (which can be 2D or 3D), one first applies a multilevel discrete wavelet transform (DWT) to the data. DWT is adopted for the following two purposes: one is to reduce the size of seismic data by exploiting the sparse representation nature of wavelet basis functions, and the other is to exploit its capacity of noise estimation in the wavelet domain \citep{donoho1995adapting}. Based on the estimated standard deviation (STD) of the noise, one can construct the corresponding observation error covariance matrix that is needed in many (including ensemble-based) history matching algorithms.
For a chosen family of wavelet basis functions, seismic data are represented by the corresponding wavelet coefficients. When dealing with 2D data, DWT is similar to singular value decomposition (SVD) applied to a matrix. In the latter case, the matrix is represented by the corresponding singular values in the space spanned by the products of associated left and right singular vectors. Likewise, in the 2D case, one can also draw similarities between wavelet-based sparse representation and truncated singular value decomposition (TSVD). Indeed, in DWT, small wavelet coefficients are typically dominated by noise, whereas large coefficients mainly carry signal information \citep{jansen2012noise}. Therefore, as will be demonstrated later, it is possible for one to use only a small subset of leading wavelet coefficients to capture the main features of the signal, while significantly reducing the number of wavelet coefficients. We remark that TSVD-based sparse representation is not a suitable choice in the context of history matching, since the associated basis functions (i.e., products of left and right singular vectors) are data-dependent, meaning that in general it is not meaningful to compare and match the singular values of observed and simulated data.
Wavelet-based sparse representation involves suppressing noise components in the wavelet domain. To this end, we first estimate the STD of the noise in wavelet coefficients, and then compute a threshold value that depends on both the noise STD and data size. One can substantially reduce the data size by only keeping leading wavelet coefficients above the threshold, while setting those below the threshold value to zero. The leading wavelet coefficients are then taken as the (transformed) seismic data, and are history-matched using a certain algorithm. In the experiments later, we will examine the impact of the threshold value on the history matching performance.
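As an illustration, the whole procedure takes only a few lines with a wavelet toolbox. The sketch below uses the PyWavelets package; the wavelet family, the decomposition level, the median-absolute-deviation (MAD) noise estimate, and the universal threshold $\hat{\sigma}\sqrt{2\ln N_d}$ (with $N_d$ the number of coefficients) are illustrative choices in the spirit of \cite{donoho1995adapting}, not necessarily the exact settings of our implementation:
\begin{verbatim}
import numpy as np
import pywt

def leading_wavelet_coefficients(data, wavelet='db4', level=3):
    # multilevel n-D discrete wavelet transform of a seismic attribute cube
    coeffs = pywt.wavedecn(data, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # noise STD estimated from the finest-scale detail coefficients (MAD rule)
    finest = np.concatenate([d.ravel() for d in coeffs[-1].values()])
    sigma = np.median(np.abs(finest)) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(arr.size))   # universal threshold
    # hard thresholding: keep only the leading coefficients
    return pywt.threshold(arr, thr, mode='hard'), slices, sigma, thr
\end{verbatim}
In practice one would typically exclude the coarsest approximation coefficients from the thresholding, since they carry the bulk of the signal energy.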
A number of ensemble-based SHM frameworks have been proposed in the literature. For instance, \cite{abadpour20134d,fahimuddin2010ensemble,katterbauer2015history,leeuwenburgh2014distance,skjervheim2007incorporating,trani2012seismic} adopt the ensemble Kalman filter (EnKF) or a combination of the EnKF and ensemble Kalman smoother (EnKS), whereas \cite{emerick2012history,emerick2013history,luo2016sparse2d_spej} employ the ensemble smoother with multiple data assimilation (ES-MDA) and the regularized Levenberg-Marquardt (RLM) based iterative ensemble smoother (RLM-MAC, see \citealp{luo2015Iterative}), respectively. We note that the history matching algorithm itself is independent of the wavelet-based sparse representation procedure. Therefore, one may combine the sparse representation procedure with a generic history matching algorithm, whether it is ensemble-based or not.
This work is organized as follows. First, we introduce the three key components of the proposed workflow, namely: (1) forward AVA simulation; (2) a 3D DWT based sparse representation procedure; and (3) the regularized Levenberg-Marquardt based iterative ensemble smoother. Then, we apply the proposed framework to the 3D Brugge benchmark case, and investigate its performance in various situations. Finally, we draw conclusions based on the results of our investigation and discuss possible future work.
\section{The proposed framework}\label{sec:framework}
\begin{figure*}[!htb]
\centering
\includegraphics[scale=0.4]{framework.png}
\caption{\label{fig:framework} The proposed 4D seismic history matching framework.}
\end{figure*}
The proposed framework consists of three key components (see Figure \ref{fig:framework}), namely, forward AVA simulation, sparse representation (in terms of leading wavelet coefficients) of both observed and simulated AVA data, and the history matching algorithm. It is expected that the proposed framework can also be extended to other types of seismic data, and more generally, geophysical data with spatial correlations.
\subsection{Forward AVA simulation}
As can be seen in Figure \ref{fig:seis_type}, the forward AVA simulation involves several steps. First, pore pressure and fluid saturations are generated through reservoir flow simulation that takes petro-physical (e.g., permeability and porosity) and other parameters as the inputs. The generated pressure and saturation values are then used to calculate seismic parameters, such as P- and S-wave velocities and densities of reservoir and overburden formations, through a petro-elastic model (PEM). Finally, a certain AVA equation is adopted to compute the AVA attributes at different angles or offsets.
Building a proper PEM is crucial to the success of SHM, but it is a challenging task at the same time. To interpret the changes in seismic response over time, an in-depth understanding of rock and fluid properties is required \citep{jack2001coming}. In this study, since our focus is on validating the performance of the proposed framework in a 3D problem, we assume that the PEM is perfect. Here, we use a soft sand model as the PEM \citep{mavko2009rock}. The model assumes that the cement is deposited away from the grain contacts. It further considers the initial framework of the uncemented sand rock to be a dense random pack of spherical grains with a porosity (denoted by $\phi$ hereafter) of around 36\%, which is the maximum porosity value that the rock could have before suspension. For convenience of discussion later, we denote this value as the critical porosity $(\phi_c)$ \citep{nur1991wave, nur1998critical}. The dry bulk modulus $(K_{HM})$ and shear modulus $(\mu_{HM})$ at critical porosity can then be computed using the Hertz-Mindlin model \citep{Mindlin:1949} below
\begin{equation}
K_{HM}= \sqrt[n]{\frac{C_{p}^2(1- \phi_c)^2 \mu_{s}^2}{18\pi^2(1-\nu_{s})^2}P_{eff}} \, ,
\end{equation}
and
\begin{equation}
\mu_{HM}= \frac{5-4\nu_{s}}{5(2-\nu_{s})}\sqrt[n]{\frac{3C_{p}^2(1- \phi_c)^2 \mu_{s}^2}{2\pi^2(1-\nu_{s})^2}P_{eff}} \, ,
\end{equation}
where $\mu_{s}$, $\nu_{s}$ and $P_{eff}$ are the grain shear modulus, Poisson's ratio, and effective stress, respectively. The coordination number $C_{p}$ denotes the average number of contacts per sphere, and $n$ is the degree of the root. Here, $C_{p}$ and $n$ are set to 9 and 3, respectively.
To find the effective dry moduli for a porosity value less than $\phi_c$, the modified lower Hashin-Shtrikman (MLHS) bound can be used \citep{mavko2009rock}. The MLHS connects two end points in the elastic modulus-porosity plane. One end point, ($K_{HM}$, $\mu_{HM}$), corresponds to the critical porosity $\phi_c$. The other end point corresponds to zero porosity, taking the moduli of the solid phase, i.e., the quartz mineral ($K_s$, $\mu_s$). For a porosity value $\phi$ between zero and $\phi_{c}$, the lower bounds for the dry rock effective bulk modulus $K_{d}$ and shear modulus $G_{d}$ can be expressed as
\begin{equation}\label{eqn:MUHS_K}
K_{d} = \left[\frac{\frac{\phi}{\phi_{c}}}{K_{HM} + \frac{4}{3}\mu_{HM}} + \frac{1 - \frac{\phi}{\phi_{c}}}{K_{s} + \frac{4}{3}\mu_{HM}} \right]^{-1} - \frac{4}{3}\mu_{HM}
\end{equation}
and
\begin{equation}\label{eqn:MUHS_G}
G_{d} = \left[\frac{\frac{\phi}{\phi_{c}}}{\mu_{HM} + \frac{\mu_{HM}}{6} Z} + \frac{1 - \frac{\phi}{\phi_{c}}}{\mu_{s} + \frac{\mu_{HM}}{6} Z} \right]^{-1} - \frac{\mu_{HM}}{6} Z \, ,
\end{equation}
respectively, where $K_{s}$ is solid/mineral bulk modulus and $ Z= (9K_{HM} + 8\mu_{HM}) / (K_{HM} + 2\mu_{HM})$.
Further, the saturation effect is incorporated using the Gassmann model \citep{Gassmann:1951}. The saturated bulk modulus $K_{sat}$ and shear modulus $\mu_{sat}$ can be expressed as
\begin{equation}
K_{sat}= K_{d} + \frac{(1-\frac{K_{d}}{K_{s}})^2}{\frac{\phi}{K_{f}} + \frac{1 - \phi}{K_{s}} - \frac{K_{d}}{K_{s}^2} } \, ,
\end{equation}
and
\begin{equation}
\mu_{sat}= G_{d} \, ,
\end{equation}
respectively, where $K_{f}$ is the effective fluid bulk modulus, and is estimated using the Reuss average \citep{reuss1929berechnung}. For an oil-water mixture (as is the case in the Brugge field), $K_{f}$ is given by
\begin{equation}
K_{f}= (\frac{S_{w}}{K_{w}} + \frac{S_{o}}{K_{o}} )^{-1} \, ,
\end{equation}
where $K_{w}$, $K_{o}$, $S_{w}$ and $S_{o}$ are bulk modulus of water/brine, bulk modulus of oil, saturation of water/brine and saturation of oil, respectively.
Further, the saturated density \citep{mavko2009rock} can be written as (for the water-oil mixture)
\begin{equation}\label{eqn:density}
\rho_{sat} = (1-\phi)\rho_{m} + \phi S_{w}\rho_{w} + \phi S_{o}\rho_{o} \, ,
\end{equation}
where $\rho_{sat}$, $\rho_{m}$, $\rho_{w}$ and $\rho_{o}$ are saturated density of rock, mineral density, water/brine density and oil density, respectively.
Using the above equations, we can obtain P- and S-wave velocities given by \citep{mavko2009rock}
\begin{equation}
V_{P} = \sqrt{\frac{K_{sat} + \frac{4}{3}\mu_{sat}}{\rho_{sat}}} \, ,
\end{equation}
and
\begin{equation}
V_{S} = \sqrt{\frac{\mu_{sat}}{\rho_{sat}}} \, ,
\end{equation}
where $V_{P}$ and $V_{S}$ represent P- and S-wave velocities, respectively.
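For concreteness, the following minimal Python sketch chains the above relations (Hertz-Mindlin, then the MLHS bound, then Gassmann) to map porosity and water saturation to P- and S-wave velocities; it assumes SI-consistent units (moduli in Pa, densities in kg/m$^3$), and the function names are ours, for illustration only:
\begin{verbatim}
import numpy as np

def hertz_mindlin(mu_s, nu_s, p_eff, phi_c=0.36, c_p=9.0):
    # Dry moduli at the critical porosity phi_c (cube root, n = 3).
    k_hm = (c_p**2 * (1 - phi_c)**2 * mu_s**2 * p_eff
            / (18 * np.pi**2 * (1 - nu_s)**2)) ** (1.0 / 3.0)
    mu_hm = ((5 - 4 * nu_s) / (5 * (2 - nu_s))) \
        * (3 * c_p**2 * (1 - phi_c)**2 * mu_s**2 * p_eff
           / (2 * np.pi**2 * (1 - nu_s)**2)) ** (1.0 / 3.0)
    return k_hm, mu_hm

def mlhs_dry(phi, k_s, mu_s, k_hm, mu_hm, phi_c=0.36):
    # Dry moduli at porosity phi via the modified lower
    # Hashin-Shtrikman bound.
    z = (9 * k_hm + 8 * mu_hm) / (k_hm + 2 * mu_hm)
    k_d = 1.0 / ((phi / phi_c) / (k_hm + 4 * mu_hm / 3)
                 + (1 - phi / phi_c) / (k_s + 4 * mu_hm / 3)) \
        - 4 * mu_hm / 3
    g_d = 1.0 / ((phi / phi_c) / (mu_hm + mu_hm * z / 6)
                 + (1 - phi / phi_c) / (mu_s + mu_hm * z / 6)) \
        - mu_hm * z / 6
    return k_d, g_d

def velocities(phi, k_d, g_d, k_s, s_w, k_w, k_o,
               rho_m, rho_w, rho_o):
    # Gassmann saturation with a Reuss fluid mix, then Vp and Vs.
    s_o = 1.0 - s_w
    k_f = 1.0 / (s_w / k_w + s_o / k_o)        # Reuss average
    k_sat = k_d + (1 - k_d / k_s)**2 / (
        phi / k_f + (1 - phi) / k_s - k_d / k_s**2)
    rho = (1 - phi) * rho_m + phi * (s_w * rho_w + s_o * rho_o)
    v_p = np.sqrt((k_sat + 4 * g_d / 3) / rho)
    v_s = np.sqrt(g_d / rho)                   # mu_sat = G_d
    return v_p, v_s
\end{verbatim}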
After the seismic parameters are generated by plugging the reservoir parameters into the PEM, we can then simulate seismograms based on these seismic parameters. First, the Zoeppritz equation is used to calculate the reflection coefficient at an interface between two layers. For multi-layer cases, we need to calculate the reflectivity series as a function of two-way travel time (see, for example, \citealp{buland2003bayesian,mavko2009rock}). Here, travel time is computed from the P-wave velocity and vertical thickness of each grid block. We then convolve the reflectivity series with a Ricker wavelet of 45 Hz dominant frequency to obtain the desired seismic AVA data. In the experiments later, we generate AVA data at two different angles (i.e., 10$^{\circ}$ and 20$^{\circ}$). We use these AVA attributes directly, without converting them further to other attributes like intercept and gradient. In doing so, we avoid introducing extra uncertainties in the course of attribute conversion.
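A sketch of the convolutional step alone is given below; it assumes that the angle-dependent reflectivity series has already been computed (via the Zoeppritz equation) on a regular two-way-time grid, and the sampling interval is an illustrative assumption:
\begin{verbatim}
import numpy as np

def ricker(f0, dt, length=0.128):
    # Ricker wavelet with dominant frequency f0 (Hz), sampled at dt (s).
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(reflectivity, f0=45.0, dt=0.002):
    # Convolve an angle-dependent reflectivity series with the wavelet.
    return np.convolve(reflectivity, ricker(f0, dt), mode="same")
\end{verbatim}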
\subsection{Sparse representation and noise estimation in wavelet domain}
Let $\mathbf{m}^{ref} \in \mathcal{R}^{m}$ denote the reference reservoir model. In the current study, we consider 3D AVA attributes (near- and far-offset traces) in the form of $p_1 \times p_2 \times p_3$ arrays (tensors), where $p_1$, $p_2$ and $p_3$ represent the numbers of inline, cross-line and time slices in seismic surveys, respectively. Accordingly, let $\mathbf{g}: \mathcal{R}^{m} \rightarrow \mathcal{R}^{p_1 \times p_2 \times p_3}$ be the forward simulator of AVA attributes. The observed AVA attributes $\mathbf{d}^o$ are supposed to be the forward simulation $\mathbf{g}(\mathbf{m}^{ref})$ with respect to the reference model, plus certain additive observation errors $\boldsymbol{\epsilon}$, that is,
\begin{linenomath*}
\begin{equation} \label{eq:obs_system}
\mathbf{d}^o = \mathbf{g}(\mathbf{m}^{ref}) + \boldsymbol{\epsilon} \, .
\end{equation}
\end{linenomath*}
For ease of discussion below, suppose at the moment that all the tensors in Eq. (\ref{eq:obs_system}), i.e., $\mathbf{d}^o$, $\mathbf{g}(\mathbf{m}^{ref})$ and $\boldsymbol{\epsilon}$, are reshaped into vectors with $p_1 \times p_2 \times p_3$ elements. Throughout this study, we assume that, for a given AVA attribute, the elements of $\boldsymbol{\epsilon}$ are independent and identically distributed (i.i.d.) Gaussian white noise, with zero mean but unknown variance $\sigma^2$, where $\sigma$ will be estimated through wavelet multiresolution analysis below. More generally, one may also assume that $\boldsymbol{\epsilon}$ follows a Gaussian distribution with zero mean and covariance $\sigma^2 \, \mathbf{R}$, where $\mathbf{R}$ is a known covariance matrix and $\sigma^2$ is a scalar to be estimated. In this case, one can whiten the additive noise by multiplying both sides of Eq. (\ref{eq:obs_system}) by $\mathbf{R}^{-1/2}$.
As shown in Figure \ref{fig:sparse_representation}, wavelet-based sparse representation involves the following steps: (\rom{1}) Apply DWT to seismic data; (\rom{2}) Estimate noise STD of wavelet coefficients; and (\rom{3}) Compute a threshold value that depends on both noise STD and data size, and do thresholding accordingly.
\begin{figure*}[!htb]
\centering
\includegraphics[scale=0.4]{3D_DWT.png}
\caption{\label{fig:3D_DWT} A single level decomposition in 3D decimated discrete wavelet transform.}
\end{figure*}
We adopt 3D DWT to handle AVA attributes in this study. An introduction to wavelet theory is omitted here, and interested readers are referred to, for example, \cite{mallat1999wavelet}.
Figure \ref{fig:3D_DWT} illustrates a single level decomposition in 3D DWT. For convenience, we call these three dimensions of AVA attributes $x$, $y$ and $z$, respectively. The input data is $\mathbf{d}^o_{LLL_j}$, which can be either the original observation $\mathbf{d}^o$ (when $j=0$) or the partial data recovered using the wavelet coefficients of the sub-band $LLL_j$ (when $j>0$). Without loss of generality, let us assume $j=0$, such that Figure \ref{fig:3D_DWT} corresponds to the first level 3D wavelet transform. In this case, the 3D transform is achieved by sequentially applying 1D DWT along $x$, $y$ and $z$ directions. In the 1D DWT along each direction, there are both low- (L) and high-pass (H) filters, and the transform results in one ``L'' and one ``H'' sub-band of wavelet coefficients, respectively. The ``H'' sub-band corresponds to high frequency components in wavelet domain, while the ``L'' sub-band to low frequency ones. As a result, after the first level of 3D DWT, there are 8 sub-bands in the wavelet domain, which are labelled as $LLL_1$, $LLH_1$, $LHL_1$, $LHH_1$, $HLL_1$, $HLH_1$, $HHL_1$ and $HHH_1$, respectively. The sub-band $LLL_1$ ($HHH_1$) results only from low-pass (high-pass) filters, while the others from mixtures of low- and high-pass ones. One can continue the 3D DWT to the next level by applying the transform to the data $\mathbf{d}^o_{LLL_1}$ that corresponds to the sub-band $LLL_1$. This leads to a set of new sub-bands of wavelet coefficients (labelled as $LLL_2$, $LLH_2$, $LHL_2$, $LHH_2$, $HLL_2$, $HLH_2$, $HHL_2$ and $HHH_2$, respectively), and so on.
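As an aside, a single decomposition level as in Figure \ref{fig:3D_DWT} can be reproduced, for instance, with the open-source PyWavelets package, in which the low-pass (L) and high-pass (H) filters are labelled \texttt{'a'} and \texttt{'d'}, respectively; the sketch below is for illustration only, and the cube dimensions are placeholders:
\begin{verbatim}
import numpy as np
import pywt

cube = np.random.randn(139, 48, 176)   # placeholder 3D AVA cube
level1 = pywt.dwtn(cube, 'db2')        # Daubechies, 2 vanishing moments
lll_1 = level1['aaa']                  # sub-band LLL_1 (all low-pass)
hhh_1 = level1['ddd']                  # sub-band HHH_1 (all high-pass)
# The next decomposition level transforms the LLL_1 sub-band again:
level2 = pywt.dwtn(lll_1, 'db2')
\end{verbatim}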
Since $HHH_1$ corresponds to the high frequency (typically noise) components of the original data $\mathbf{d}^o$, it can be used to infer noise STD in the wavelet domain. Specifically, let $\mathcal{W}$ and $\mathcal{T}$ denote orthogonal wavelet transform and thresholding operators, respectively; $\tilde{\mathbf{d}}^o = \mathcal{W} (\mathbf{d}^o)$ stands for the whole set of wavelet coefficients corresponding to the original data $\mathbf{d}^o$, and $\tilde{\mathbf{d}}^o_{HHH_1}$ for the wavelet coefficients in the sub-band $HHH_1$. After DWT and thresholding, the effective observation system becomes
\begin{linenomath*}
\begin{equation} \label{eq:tw_obs_system}
\mathcal{T} \circ \mathcal{W} (\mathbf{d}^o) = \mathcal{T} \circ \left( \mathcal{W} \circ \mathbf{g}(\mathbf{m}^{ref}) + \mathcal{W} (\boldsymbol{\epsilon}) \right) \, .
\end{equation}
\end{linenomath*}
As will be discussed below, for leading wavelet coefficients (those above the threshold value), $\mathcal{T}$ is an identity operator such that it does not modify the values of leading wavelet coefficients. The reason for us to require orthogonal $\mathcal{W}$ is as follows: If $\mathcal{W}$ is orthogonal, then the wavelet transform preserves the energy of Gaussian white noise (e.g., the Euclidean norm of the noise term $\boldsymbol{\epsilon}$). In addition, like the power spectral distribution of white noise in frequency domain, the noise energy in the wavelet domain is uniformly distributed among all wavelet coefficients \citep{jansen2012noise}. This implies that, if one can estimate the noise STD $\sigma$ of small wavelet coefficients (e.g., those in $HHH_1$), then this estimation can also be used to infer the noise STD of leading wavelet coefficients used in history matching. Similar to our previous study \cite{luo2016sparse2d_spej}, the noise STD $\sigma$ is estimated using the median absolute deviation (MAD) estimator \citep{donoho1995adapting}:
\begin{linenomath*}
\begin{equation} \label{eq:noise_std_mad}
\sigma = \dfrac{\operatorname{median}(\operatorname{abs}(\tilde{\mathbf{d}}^o_{HHH_1}))}{0.6745} \, ,
\end{equation}
\end{linenomath*}
where $\operatorname{abs}(\bullet)$ is an element-wise operator that takes the absolute value of the input quantity.
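In code, the MAD estimator of Eq. (\ref{eq:noise_std_mad}) amounts to a one-line computation on the $HHH_1$ sub-band; a minimal Python sketch (using the PyWavelets naming above, where $HHH_1$ is the \texttt{'ddd'} entry) reads:
\begin{verbatim}
import numpy as np

def mad_noise_std(hhh_1):
    # Median absolute deviation estimator of the noise STD,
    # applied to the finest diagonal sub-band HHH_1.
    return np.median(np.abs(hhh_1)) / 0.6745
\end{verbatim}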
After estimating $\sigma$ in an $n$-level wavelet decomposition, we apply hard thresholding and select leading wavelet coefficients on an element-by-element basis, such that
\begin{linenomath*}
\begin{equation} \label{eq:hard_thresholding}
\mathcal{T}(\tilde{{d}}^o_i) =
\begin{cases}
0 & \quad \text{if } \vert \tilde{{d}}^o_i \vert < \lambda \, ,\\
\tilde{{d}}^o_i & \quad \text{otherwise} \, ,\\
\end{cases}
\end{equation}
\end{linenomath*}
where, without loss of generality, the scalar $\tilde{{d}}^o_i \in \tilde{\mathbf{d}}^o$ represents an individual wavelet coefficient, and $\lambda$ is a certain threshold value to be computed later. Eq. (\ref{eq:hard_thresholding}) means that, for leading wavelet coefficients whose magnitudes are above (or equal to) the threshold $\lambda$, their values are not changed, whereas those below $\lambda$ in magnitude are set to zero. Note that in \cite{luo2016sparse2d_spej}, hard thresholding is not applied to the coarsest sub-band (i.e., the $LL_n$/$LLL_n$ sub-band for an $n$-level 2D/3D DWT), in light of the fact that the wavelet coefficients in this sub-band correspond to low-frequency components, which are typically dominated by the signal. As a result, applying thresholding to this sub-band may lead to a certain loss of signal information in history matching. However, for an AVA attribute in this study, we have observed that its corresponding $LLL_n$ sub-band may contain a large number of wavelet coefficients (e.g., on the order of $10^4$). To have the flexibility of efficiently reducing the data size, we lift this restriction, such that thresholding can also be applied to the sub-band $LLL_n$.
In \cite{luo2016sparse2d_spej}, the threshold value $\lambda$ is computed using the universal rule \citep{donoho1994ideal}
\begin{linenomath*}
\begin{equation} \label{eq:universal_rule}
\lambda = \sqrt{2 \, \operatorname{ln}(\# \mathbf{d}^o)} \, \sigma \, ,
\end{equation}
\end{linenomath*}
with $\# \mathbf{d}^o$ being the number of elements in $\mathbf{d}^o$. In the current work, when using Eq. (\ref{eq:universal_rule}) to select the threshold value, it is found that the resulting number of leading wavelet coefficients may still be very large. As a result, in the experiments later, we select the threshold value according to
\begin{linenomath*}
\begin{equation} \label{eq:multiple_universal_rule}
\lambda = c \, \sqrt{2 \, \operatorname{ln}(\# \mathbf{d}^o)} \, \sigma \, ,
\end{equation}
\end{linenomath*}
where $c>0$ is a positive scalar, and the larger the value of $c$, the smaller the number of leading wavelet coefficients. Therefore, the scalar $c$ can be used to control the total number of leading wavelet coefficients.
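Building on the sketch of the MAD estimator above, Eqs. (\ref{eq:hard_thresholding}) and (\ref{eq:multiple_universal_rule}) can be combined as in the following Python sketch (the function and variable names are ours, for illustration only):
\begin{verbatim}
import numpy as np

def hard_threshold(coeff_arrays, sigma, n_data, c=1.0):
    # coeff_arrays: list of wavelet-coefficient arrays (all sub-bands),
    # modified in place; sigma: noise STD estimated via MAD; n_data:
    # number of elements in the original data d^o.
    lam = c * np.sqrt(2.0 * np.log(n_data)) * sigma
    n_leading = 0
    for arr in coeff_arrays:
        mask = np.abs(arr) >= lam      # leading coefficients are kept
        arr[~mask] = 0.0               # the rest are set to zero
        n_leading += int(mask.sum())
    return lam, n_leading
\end{verbatim}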
Combining Eqs. (\ref{eq:tw_obs_system}) -- (\ref{eq:multiple_universal_rule}), the effective observation system in history matching becomes
\begin{linenomath*}
\begin{equation} \label{eq:reduced_obs_system}
\tilde{\mathbf{d}}^o = \mathcal{W} \circ \mathbf{g}(\mathbf{m}^{ref}) + \mathcal{W} (\boldsymbol{\epsilon}) \, , \text{ for } \vert \tilde{{d}}^o_i \vert \geq \lambda \, ,
\end{equation}
\end{linenomath*}
where now $\tilde{\mathbf{d}}^o$ is a vector containing all selected leading wavelet coefficients, and $\mathcal{W} (\boldsymbol{\epsilon})$ the corresponding noise component in the wavelet domain, with zero mean and covariance $\mathbf{C}_{\tilde{\mathbf{d}}^o} = \sigma^2 \mathbf{I}$ (here $\mathbf{I}$ is the identity matrix with a suitable dimension).
\newcommand{0.33}{0.33}
\begin{figure*}
\centering
\subfigure[Reference AVA far-offset trace]{ \label{subfig:ref_data}
\includegraphics[scale=0.33]{test.png}
}\hfill
\subfigure[Noisy AVA far-offset trace]{ \label{subfig:noisy_data}
\includegraphics[scale=0.33]{test_noisy.png}
}\hfill
\subfigure[Reconstructed AVA far-offset trace]{ \label{subfig:denoised_data}
\includegraphics[scale=0.33]{test_rec.png}
}\hfill
\subfigure[$HHL_1$ of reference trace]{ \label{subfig:ref_HLL1}
\includegraphics[scale=0.33]{ref_HLL1.eps}
}\hfill
\subfigure[$HHL_1$ of noisy trace]{ \label{subfig:noisy_HLL1}
\includegraphics[scale=0.33]{noisy_HLL1.eps}
}\hfill
\subfigure[$HHL_1$ of reconstructed trace]{ \label{subfig:rec_HLL1}
\includegraphics[scale=0.33]{rec_HLL1.eps}
}\hfill
\subfigure[Reference noise]{ \label{subfig:ref_noise}
\includegraphics[scale=0.33]{ref_noise.png}
}\hfill
\subfigure[Estimated noise]{ \label{subfig:rec_noise}
\includegraphics[scale=0.33]{rec_noise.png}
}\hfill
\subfigure[Noise difference]{ \label{subfig:diff_noise}
\includegraphics[scale=0.33]{diff_noise.png}
}
\caption{\label{fig:illustration_data} Illustration of sparse representation of a 3D AVA far-offset trace using slices at $X=40, 80, 120$ and at $Z= 50, 100, 150, 200$, respectively. (a) Reference AVA trace; (b) Noisy AVA trace obtained by adding Gaussian white noise (noise level = 30\%) to the reference data; (c) Reconstructed AVA trace obtained by first conducting a 3D DWT on the noisy data, then applying hard thresholding (using the universal threshold value) to wavelet coefficients, and finally reconstructing the data using an inverse 3D DWT based on the modified wavelet coefficients; (d) Wavelet sub-band $HHL_1$ corresponding to the reference AVA data; (e) Wavelet sub-band $HHL_1$ corresponding to the noisy AVA data; (f) Wavelet sub-band $HHL_1$ corresponding to the reconstructed AVA data; (g) Reference noise, defined as noisy AVA data minus reference AVA data; (h) Estimated noise, defined as noisy AVA data minus reconstructed AVA data; (i) Noise difference, defined as estimated noise minus reference noise. All 3D plots are created using the package \textit{Sliceomatic} (version 1.1) from MATLAB Central (File ID: \#764).}
\end{figure*}
We use an example to illustrate the performance of sparse representation and noise estimation in 3D DWT. In this example, we first generate a reference AVA far-offset trace using the forward AVA simulator. The dimension of this trace is $139 \times 48 \times 251$, therefore the data size is $1,674,672$. Figure \ref{subfig:ref_data} plots slices of the AVA trace at $X=40$, $80$, $120$, and $Z=50$, $100$, $150$ and $200$, respectively. We then add Gaussian white noise to obtain the noisy AVA trace, with the noise level being $30\%$. Here, noise level is defined as:
\begin{linenomath*}
\begin{equation} \label{eq:def_noise_level}
\text{Noise level } = \dfrac{\text{variance of noise}}{\text{variance of pure signal}} \, .
\end{equation}
\end{linenomath*}
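As an illustration, Gaussian white noise at a prescribed noise level can be generated as in the following sketch (the random seed is arbitrary):
\begin{verbatim}
import numpy as np

def add_noise(signal, level=0.30, seed=0):
    # Eq. (def_noise_level): var(noise) = level * var(signal).
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(level * signal.var())
    return signal + rng.normal(0.0, sigma, size=signal.shape), sigma
\end{verbatim}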
Figure \ref{subfig:noisy_data} shows slices of the noisy AVA trace at the same locations as in Figure \ref{subfig:ref_data}.
We apply a three-level 3D DWT to the noisy data using Daubechies wavelets with two vanishing moments \citep{mallat1999wavelet}, and use hard thresholding combined with the universal rule (Eqs. (\ref{eq:noise_std_mad}) -- (\ref{eq:universal_rule})) to select the leading wavelet coefficients. After thresholding, the number of leading wavelet coefficients reduces to $33,123$, only $2\%$ of the original data size. On the other hand, by applying Eq. (\ref{eq:noise_std_mad}), the estimated noise STD is $0.0105$, which is very close to the true noise STD of $0.0104$. By applying an inverse 3D DWT to the leading wavelet coefficients, we obtain the reconstructed AVA trace (Figure \ref{subfig:denoised_data} plots slices of this trace at the same places as Figures \ref{subfig:ref_data} and \ref{subfig:noisy_data}). Comparing Figures \ref{subfig:ref_data} -- \ref{subfig:denoised_data}, one can see that, using leading wavelet coefficients that amount to only $2\%$ of the original data size, the slices of the reconstructed AVA trace well capture the main features in the corresponding slices of the reference AVA data.
Figures \ref{subfig:ref_HLL1} -- \ref{subfig:rec_HLL1} show wavelet coefficients in the sub-bands $HHL_1$ of the reference, noisy and reconstructed AVA traces, respectively. From these figures, one can see that, after applying thresholding to wavelet coefficients of noisy data (Figure \ref{subfig:noisy_HLL1}), the modified coefficients (Figure \ref{subfig:rec_HLL1}) preserve those with large amplitudes in the reference case (Figure \ref{subfig:ref_HLL1}). In general, the modified coefficients appear similar to those of the reference case, whereas certain small coefficients of the reference case are suppressed due to thresholding.
Finally, Figures \ref{subfig:ref_noise} -- \ref{subfig:diff_noise} depict slices of reference and estimated noise, and their difference, respectively, at the same places as in Figure \ref{subfig:ref_data}. Here, reference noise is defined as noisy AVA data (Figure \ref{subfig:noisy_data}) minus reference AVA data (Figure \ref{subfig:ref_data}), estimated noise as noisy AVA data minus reconstructed AVA data (Figure \ref{subfig:denoised_data}), and noise difference as estimated noise minus reference noise. The estimated noise appears very similar to the reference noise, although there are also certain differences according to Figure \ref{subfig:diff_noise}. This might be largely due to the fact that some small wavelet coefficients of the reference data are smeared out after thresholding, as aforementioned.
\subsection{The ensemble history matching algorithm}
We adopt the RLM-MAC algorithm \citep{luo2015Iterative} in history matching.
Without loss of generality, let $\mathbf{d}^o$ denote the $p$-dimensional observations in history matching, which stand for values in the ordinary data space (e.g., 3D AVA attributes reshaped from 3D arrays into vectors), or their sparse representations in the transform domain (e.g., leading wavelet coefficients in the wavelet domain). The observations $\mathbf{d}^o$ are contaminated by additive Gaussian noise with zero mean and covariance $\mathbf{C}_d$ (i.e., the noise follows $N(\mathbf{0},\mathbf{C}_d)$). Also denote by $\mathbf{g}$ the forward simulator that generates simulated observations $\mathbf{d} \equiv \mathbf{g}(\mathbf{m})$ given an $m$-dimensional reservoir model $\mathbf{m}$.
In the RLM-MAC algorithm, let $\mathbf{M}^i \equiv \{ \mathbf{m}_j^i \}_{j=1}^{{N_e}}$ be an ensemble of ${N_e}$ reservoir models obtained at the $i$th iteration step, based on which we can construct two square root matrices used in the RLM-MAC algorithm. One of the matrices is in the form of
\begin{linenomath*}
\begin{IEEEeqnarray}{clc} \label{eq:model_sqrt}
& \mathbf{S}_m^i = \frac{1}{\sqrt{{N_e}-1}}\left[\mathbf{m}_1^i - \bar{\mathbf{m}}^i,\dotsb, \mathbf{m}_{N_e}^i - \bar{\mathbf{m}}^i \right] \, ; & \quad \bar{\mathbf{m}}^i = \frac{1}{{N_e}} \sum_{j=1}^{{N_e}} \mathbf{m}_j^i \, ,
\end{IEEEeqnarray}
\end{linenomath*}
and is called \textit{model square root matrix}, in the sense that $\mathbf{C}_{m}^{i} \equiv \mathbf{S}_m^i \left( \mathbf{S}_m^i \right)^T$ equals the sample covariance matrix of the ensemble $\mathbf{M}^i$. The other, defined as
\begin{linenomath*}
\begin{IEEEeqnarray}{clc} \label{eq:data_sqrt}
& \mathbf{S}_d^i = \frac{1}{\sqrt{{N_e}-1}}\left[\mathbf{g}(\mathbf{m}_1^i) - \mathbf{g}(\bar{\mathbf{m}}^i),\dotsb, \mathbf{g}(\mathbf{m}_{N_e}^i) - \mathbf{g}(\bar{\mathbf{m}}^i) \right] \, ,
\end{IEEEeqnarray}
\end{linenomath*}
is called \textit{data square root matrix} for a similar reason.
The RLM-MAC algorithm updates $\mathbf{M}^i$ to a new ensemble $\mathbf{M}^{i+1} \equiv \{ \mathbf{m}_j^{i+1} \}_{j=1}^{{N_e}}$ by solving the following minimum-average-cost (MAC) problem
\begin{linenomath*}
\begin{IEEEeqnarray}{lll} \label{eq:wls_rlm_mac}
\underset{\{\mathbf{m}^{i+1}_j\}_{j=1}^{N_e}}{\operatorname{argmin}} & \dfrac{1}{N_e} \sum\limits_{j=1}^{N_e} & \, \left[ \left( \mathbf{d}^o_j - \mathbf{g} \left( \mathbf{m}^{i+1}_j \right) \right)^T \mathbf{C}_{d}^{-1} \left( \mathbf{d}^o_j - \mathbf{g} \left( \mathbf{m}^{i+1}_j \right) \right) + \gamma^{i} \left( \mathbf{m}^{i+1}_j - \mathbf{m}^{i}_j \right)^T \left( \mathbf{C}_{m}^{i} \right)^{-1} \left( \mathbf{m}^{i+1}_j - \mathbf{m}^{i}_j \right) \right] \, , \nonumber
\end{IEEEeqnarray}
\end{linenomath*}
where $\mathbf{d}^o_j$ ($j = 1,2,\dotsb, N_e$) are perturbed observations generated by drawing $N_e$ samples from the Gaussian distribution $N(\mathbf{d}^o,\mathbf{C}_{d})$, and $\gamma^i$ a positive scalar that can be used to control the step size of an iteration step, and is automatically chosen using a procedure similar to back-tracking line search \citep{luo2015Iterative}. Through linearization, the MAC problem is approximately solved through the following iteration:
\begin{linenomath*}
\begin{IEEEeqnarray}{crlc} \label{eq:rlm_mac}
& \mathbf{m}^{i+1}_j = \mathbf{m}^{i}_j + \mathbf{S}_m^i (\mathbf{S}_d^i)^T \left( \mathbf{S}_d^i (\mathbf{S}_d^i)^T + \gamma^i \, \mathbf{C}_{d} \right)^{-1} \left( \mathbf{d}^o_j - \mathbf{g} \left( \mathbf{m}^{i}_j \right) \right) \, , \text{ for } j = 1, 2, \dotsb, N_e \, .& &
\end{IEEEeqnarray}
\end{linenomath*}
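A minimal Python sketch of one update step in Eq. (\ref{eq:rlm_mac}) is given below. Two simplifications are made for illustration: the ensemble mean of the simulated data is used in place of $\mathbf{g}(\bar{\mathbf{m}}^i)$ in the data square root matrix (a common approximation that avoids one extra forward simulation), and $\mathbf{C}_d$ is assumed diagonal; the back-tracking selection of $\gamma^i$ is omitted.
\begin{verbatim}
import numpy as np

def rlm_mac_update(M, D, Dobs, cd_diag, gamma):
    # M: m x Ne ensemble matrix; D = g(M): p x Ne simulated data;
    # Dobs: p x Ne perturbed observations; cd_diag: diagonal of C_d.
    n_e = M.shape[1]
    s_m = (M - M.mean(axis=1, keepdims=True)) / np.sqrt(n_e - 1)
    s_d = (D - D.mean(axis=1, keepdims=True)) / np.sqrt(n_e - 1)
    K = s_d @ s_d.T + gamma * np.diag(cd_diag)
    return M + s_m @ (s_d.T @ np.linalg.solve(K, Dobs - D))
\end{verbatim}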
The stopping criteria have substantial impact on the performance of an iterative inversion algorithm \citep{Engl2000-regularization}. \cite{luo2015Iterative} mainly used the following two stopping conditions for the purpose of run-time control:
\begin{itemize}
\item[(C1)] RLM-MAC stops if it reaches a maximum number of iteration steps;
\item[(C2)] RLM-MAC stops if the relative change of average data mismatch over two consecutive iteration steps is less than a certain value.
\end{itemize}
For all the experiments later, we set the maximum number of iterations to $20$, and the limit of the relative change to $0.01\%$.
Let
\begin{linenomath*}
\begin{IEEEeqnarray}{lll} \label{eq:avg_data_mismatch}
\boldsymbol{\Xi}^i \equiv \dfrac{1}{N_e} \sum\limits_{j=1}^{N_e} & \, \left[ \left( \mathbf{d}^o_j - \mathbf{g} \left( \mathbf{m}^{i}_j \right) \right)^T \mathbf{C}_{d}^{-1} \left( \mathbf{d}^o_j - \mathbf{g} \left( \mathbf{m}^{i}_j \right) \right) \right] &
\end{IEEEeqnarray}
\end{linenomath*}
be the average (normalized) data mismatch with respect to the ensemble $\mathbf{M}^i$.
Following Proposition 6.3 of \cite{Engl2000-regularization}, a third stopping condition is introduced and implemented in \cite{luo2016sparse2d_spej}. Concretely, we also stop the iteration in Eq. (\ref{eq:rlm_mac}) when
\begin{linenomath*}
\begin{IEEEeqnarray}{lll} \label{eq:stopping_criterion_ndm}
\boldsymbol{\Xi}^i < 4 p
\end{IEEEeqnarray}
\end{linenomath*}
for the first time, where the factor $4$ is a critical value below which the iteration process starts to transition from convergence to divergence \citep[page 158]{Engl2000-regularization}. Numerical results in \cite{luo2016sparse2d_spej} indicate that, in certain circumstances, equipping RLM-MAC with the extra stopping condition (\ref{eq:stopping_criterion_ndm}) may substantially improve its performance in history matching. Readers are referred to \cite{luo2016sparse2d_spej} for more details. In the current study, however, the impact of the stopping criterion (\ref{eq:stopping_criterion_ndm}) is not as substantial as that in \cite{luo2016sparse2d_spej}. Nevertheless, we prefer to keep this stopping criterion as an extra safeguard procedure.
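For completeness, the three stopping conditions can be combined as in the sketch below, where \texttt{xi\_history} holds the average data mismatch $\boldsymbol{\Xi}^i$ of each iteration step and \texttt{p} is the number of observations; the default values correspond to the settings used in this study.
\begin{verbatim}
def should_stop(xi_history, i, p, max_iter=20, rel_tol=1e-4):
    if i >= max_iter:                       # condition (C1)
        return True
    if i >= 1:
        rel = abs(xi_history[i] - xi_history[i - 1]) / xi_history[i - 1]
        if rel < rel_tol:                   # condition (C2), 0.01%
            return True
    return xi_history[i] < 4 * p            # discrepancy-type criterion
\end{verbatim}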
\section{Numerical results in the Brugge benchmark case}\label{sec:results}
We demonstrate the performance of the proposed workflow through a study of the 3D Brugge benchmark case. Table \ref{tab:brugge_3d} summarizes the key information of the experimental settings. Readers are referred to \cite{peters2010results} for more information about the benchmark case study.
\begin{table*}
\centering
\caption{\label{tab:brugge_3d} Summary of experimental settings in the Brugge benchmark case study}
\begin{tabular}{||l||l||}
\hline
\multirow{2}{*}{Model dimension} & $139 \times 48 \times 9$ (60048 gridblocks), with 44550 out of 60048 being \\
& active cells \\
\hline
Parameters to estimate & PORO, PERMX, PERMY, PERMZ. Total number is $4 \times 44550 = 178200$ \\
\hline
Gridblock size & Irregular. Average $\Delta X \approx 93 m$, $\Delta Y \approx 91 m$, and average $\Delta Z \approx 5 m$ \\
\hline
Reservoir simulator & ECLIPSE 100 (control mode LRAT) \\
\hline
Number of wells & 10 injectors and 20 producers \\
\hline
Production period & 3647.5 days (with 20 report times) \\
\hline
\multirow{2}{*}{Production data} & Production wells: BHP, OPR and WCT; Injection wells: BHP. \\ & Total number: $20 \times 70 = 1400$ \\
\hline
\multirow{2}{*}{Seismic survey time} & Base: day 1; Monitor (1st): day 991; Monitor (2nd): \\
& day 2999 \\
\hline
\multirow{2}{*}{4D seismic data} & AVA data from near- and far- offsets at three survey times. \\
& Total number: $\sim$ 7.02 M \\
\hline
\multirow{2}{*}{DWT (seismic)} & Three-level decomposition using 3D Daubechies wavelets with \\
& two vanishing moments \\
\hline
Thresholding & Hard thresholding based on Eqs.(\ref{eq:hard_thresholding}) and (\ref{eq:multiple_universal_rule}) \\
\hline
History matching method & iES (RLM-MAC) with an ensemble of 103 reservoir models \\
\hline
\end{tabular}
\end{table*}
The Brugge field model has 9 layers, and each layer consists of $139 \times 48$ gridblocks. The total number of gridblocks is 60048, of which $44550$ are active. The data set of the original benchmark case study does not contain AVA attributes, therefore we generate synthetic seismic data in the following way: the benchmark case contains an initial ensemble of 104 members. We randomly pick one of them as the reference model (which turned out to be the member ``FN-SS-KP-1-92''), and use the remaining 103 members as the initial ensemble in this study. The model variables to be estimated include porosity (PORO) and permeability (PERMX, PERMY, PERMZ) at all active gridblocks. Consequently, the total number of parameters in estimation is $178200$.
There are 20 producers and 10 water injectors in the reference model, and they are controlled by liquid rate (LRAT) targets. The production period is 10 years, and in history matching we use production data at 20 report times. The production data consist of oil production rates (WOPR) and water cuts (WWCT) at the 20 producers, and bottom hole pressures (WBHP) at all 30 wells. Therefore, the total number of production data is $1400$. Gaussian white noise is added to the production data of the reference model. For WOPR and WWCT data, the noise STDs are taken as the maximum of $10\%$ of their magnitudes and $10^{-6}$ (the latter is used to prevent the numerical issue of division by zero), whereas for WBHP data, the noise STD is 1 bar.
In the experiments, there are three seismic surveys taking place on day 1 (base), day 991 (1st monitor), and day 2999 (2nd monitor), respectively. At each survey, we apply the forward AVA simulation described in the previous section to the static (porosity) and dynamic (pressure and saturation) variables of the reference model, and generate AVA attributes at two different angles: $10^\circ$ (near-offset) and $20^\circ$ (far-offset). Each AVA attribute is a 3D ($139 \times 48 \times 176$) cube, and consists of around $1.17 \times 10^6$ elements. Therefore, the total number of 4D seismic data is around $3 \times 2 \times 1.17 \times 10^6 = 7.02 \times 10^6$. For convenience of discussion later, we label the dimensions of the 3D cubes by $X$, $Y$ and $Z$, respectively, such that $X = 1, 2, \dotsb, 139$, $Y = 1, 2, \dotsb, 48$ and $Z = 1, 2, \dotsb, 176$. In history matching, we add Gaussian white noise to each reference AVA attribute, with the noise level being $30\%$. Here we do not assume the noise STD is known. Instead, we first apply a three-level 3D DWT to each AVA cube using Daubechies wavelets with two vanishing moments, and then use Eq. (\ref{eq:noise_std_mad}) to estimate the noise STD in the wavelet domain.
In what follows, we consider three history matching scenarios that involve: (S1) production data only; (S2) 4D seismic data only; and (S3) both production and 4D seismic data. Because of the huge volumes of AVA attributes, in scenarios (S2) and (S3), it is not convenient to directly use the 4D seismic data in the original data space. Therefore, to examine the impact of data size on the performance of SHM, in each scenario (S2 or S3), we consider two cases that have different numbers of leading wavelet coefficients. This is achieved by letting the scalar $c$ of Eq. (\ref{eq:multiple_universal_rule}) be $1$ and $5$, respectively.
\subsection{Results of scenario S1 (using production data only)}
\begin{figure*}
\centering
\includegraphics[scale=0.4]{Brugge_boxplot_objRealIter_S1.eps}
\caption{\label{fig:Brugge_boxplot_objRealIter_S1} Boxplots of production data mismatch as a function of iteration step (scenario S1). The horizontal dashed line indicates the threshold value ($4 \times 1400 = 5600$) for the stopping criterion (\ref{eq:stopping_criterion_ndm}). For visualization, the vertical axis is in the logarithmic scale. In each box plot, the horizontal line (in red) inside the box denotes the median; the top and bottom of the box represent the 75th and 25th percentiles, respectively; the whiskers indicate the ranges beyond which the data are considered outliers, and the whiskers positions are determined using the default setting of MATLAB$^\copyright$ R2015b, while the outliers themselves are plotted individually as plus signs (in red).}
\end{figure*}
Figure \ref{fig:Brugge_boxplot_objRealIter_S1} shows the boxplots of data mismatch as a function of iteration step. The average data mismatch of the initial ensemble (iteration 0) is around $5.65 \times 10^9$. After 20 iteration steps, the average data mismatch is reduced to $5431.97$, lower than the threshold value $4 \times 1400 = 5600$ in (\ref{eq:stopping_criterion_ndm}) for the first time. In this particular case, the stopping step selected according to the criterion (\ref{eq:stopping_criterion_ndm}) coincides with the maximum number of iteration steps. Therefore, we take the ensemble at the 20th iteration step as the final estimation.
\renewcommand{0.33}{0.4}
\begin{figure*}
\centering
\subfigure[RMSEs of log PERMX]{ \label{subfig:rmse_PERMX_boxplot_ensemble_S1}
\includegraphics[scale=0.33]{rmse_PERMX_boxplot_ensemble_S1.eps}
}%
\subfigure[RMSEs of PORO]{ \label{subfig:rmse_PORO_boxplot_ensemble_S1}
\includegraphics[scale=0.33]{rmse_PORO_boxplot_ensemble_S1.eps}
}
\caption{\label{fig:Brugge_RLM-MAC_RMSE_S1} Boxplots of RMSEs of (a) log PERMX and (b) PORO as functions of iteration step (scenario S1).}
\end{figure*}
In this synthetic study, the reference reservoir model is known. As a result, we use root mean squared error (RMSE) in the sequel to measure the $\ell_2$-distance (up to a factor) between an estimated model and the reference one. More specifically, let $\mathbf{v}^{tr}$ be the $\ell$-dimensional reference property, and $\hat{\mathbf{v}}$ an estimation, then the RMSE $e_{\mathbf{v}}$ of $\hat{\mathbf{v}}$ with respect to the reference $\mathbf{v}^{tr}$ is defined by
\begin{linenomath*}
\begin{IEEEeqnarray}{lll} \label{eq:RMSE_def}
e_{\mathbf{v}} = \dfrac{\Vert \hat{\mathbf{v}} - \mathbf{v}^{tr} \Vert_2}{\sqrt{\ell}} \, ,
\end{IEEEeqnarray}
\end{linenomath*}
where $\Vert \bullet \Vert_2$ denotes the $\ell_2$ norm.
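In code, Eq. (\ref{eq:RMSE_def}) is simply (a one-line Python sketch):
\begin{verbatim}
import numpy as np

def rmse(v_hat, v_true):
    return np.linalg.norm(v_hat - v_true) / np.sqrt(v_true.size)
\end{verbatim}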
For brevity, Figure \ref{fig:Brugge_RLM-MAC_RMSE_S1} reports the boxplots of RMSEs in estimating PERMX (in the natural log scale) and PORO at different iteration steps, whereas the results for PERMY and PERMZ are similar to those for PERMX. As can be seen in Figure \ref{fig:Brugge_RLM-MAC_RMSE_S1}, the average RMSEs of both log PERMX and PORO tend to reduce as the number of iteration steps increases.
\renewcommand{0.33}{0.33}
\begin{figure*}
\centering
\subfigure[WBHP of the initial ensemble]{ \label{subfig:WBHP_BR-P-5_initial}
\includegraphics[scale=0.33]{WBHP_BR-P-5_initial.eps}
}%
\subfigure[WOPR of the initial ensemble]{ \label{subfig:WOPR_BR-P-5__initial}
\includegraphics[scale=0.33]{WOPR_BR-P-5_initial.eps}
}%
\subfigure[WWCT of the initial ensemble]{ \label{subfig:WWCT_BR-P-5__initial}
\includegraphics[scale=0.33]{WWCT_BR-P-5_initial.eps}
}%
\subfigure[WBHP of the final ensemble]{ \label{subfig:WBHP_BR-P-5_S1}
\includegraphics[scale=0.33]{WBHP_BR-P-5_S1.eps}
}%
\subfigure[WOPR of the final ensemble]{ \label{subfig:WOPR_BR-P-5__S1}
\includegraphics[scale=0.33]{WOPR_BR-P-5_S1.eps}
}%
\subfigure[WWCT of the final ensemble]{ \label{subfig:WWCT_BR-P-5__S1}
\includegraphics[scale=0.33]{WWCT_BR-P-5_S1.eps}
}%
\caption{\label{fig:production_P5_S1} Profiles of WBHP, WOPR and WWCT of the initial (1st row) and final (2nd row) ensembles at the producer BR-P-5 (scenario S1). The production data of the reference model are plotted as orange curves, the observed production data at 20 report times as red dots, and the simulated production data of initial and final ensembles as blue curves.}
\end{figure*}
\renewcommand{0.33}{0.21}
\begin{figure*}
\centering
\subfigure[Reference log PERMX]{ \label{subfig:PERMX_L2_true}
\includegraphics[scale=0.33]{PERMX_L2_true.eps}
}%
\subfigure[Mean of initial log PERMX]{ \label{subfig:PERMX_L2_Mean_initEns}
\includegraphics[scale=0.33]{PERMX_L2_Mean_initEns.eps}
}%
\subfigure[Mean of final log PERMX]{ \label{subfig:PERMX_L2_Mean_ensemble20_S1}
\includegraphics[scale=0.33]{PERMX_L2_Mean_ensemble20_S1.eps}
}%
\subfigure[Reference PORO]{ \label{subfig:PORO_L2_true}
\includegraphics[scale=0.33]{PORO_L2_true.eps}
}%
\subfigure[Mean of initial PORO]{ \label{subfig:PORO_L2_Mean_initEns}
\includegraphics[scale=0.33]{PORO_L2_Mean_initEns.eps}
}%
\subfigure[Mean of final PORO]{ \label{subfig:PORO_L2_Mean_ensemble20_S1}
\includegraphics[scale=0.33]{PORO_L2_Mean_ensemble20_S1.eps}
}
\caption{\label{fig:estimation_S1} Log PERMX (top row) and PORO (bottom row) of the reference reservoir model (1st column) and the means of the initial (2nd column) and final (3rd column) ensembles at Layer 2 (scenario S1). The black dots in the figures represent the locations of injection and production wells (top view).}
\end{figure*}
Figure \ref{fig:production_P5_S1} shows the profiles of WBHP, WOPR and WWCT of the initial (1st row) and final (2nd row) ensemble at the producer BR-P-5. It is evident that, through history matching, the final ensemble matches the production data better than the initial one, and this is consistent with the results in Figure \ref{fig:Brugge_boxplot_objRealIter_S1}.
For illustration, Figure \ref{fig:estimation_S1} presents the reference log PERMX and PORO at layer 2 (1st column), the mean of log PERMX and PORO at layer 2 from the initial ensemble (2nd column), and the mean of log PERMX and PORO at layer 2 from the final ensemble (3rd column). A comparison between the initial and final estimates of log PERMX and PORO indicates that the final estimates appear more similar to the reference fields, consistent with the results in Figure \ref{fig:Brugge_RLM-MAC_RMSE_S1}.
\subsection{Results of scenario S2 (using seismic data only)}
\renewcommand{0.33}{0.45}
\begin{figure*}
\centering
\centering
\subfigure[Seismic data mismatch ($c=1$)]{ \label{subfig:Brugge_boxplot_objRealIter_c1_S2}
\includegraphics[scale=0.33]{Brugge_boxplot_objRealIter_c1_S2.eps}
}%
\subfigure[Seismic data mismatch ($c=5$)]{ \label{subfig:Brugge_boxplot_objRealIter_c5_S2}
\includegraphics[scale=0.33]{Brugge_boxplot_objRealIter_c5_S2.eps}
}%
\caption{\label{fig:Brugge_boxplot_objRealIter_S2} Boxplots of seismic data mismatch as functions of iteration step (scenario S2). Case (a) corresponds to the results with $c=1$, for which choice the number of leading wavelet coefficients is $178332$, roughly $2.5\%$ of the original data size; Case (b) to the results with $c=5$, for which choice the number of leading wavelet coefficients is $1665$, a more than $4000$-fold reduction in data size.}
\end{figure*}
To examine the impact of data size on the performance of SHM, we consider two cases with different threshold values chosen through Eq. (\ref{eq:multiple_universal_rule}). In the first case, we let $c=1$, such that Eq. (\ref{eq:multiple_universal_rule}) reduces to the universal rule in choosing the threshold value \citep{donoho1994ideal}. Under this choice, the number of leading wavelet coefficients is $178332$, around $2.5\%$ of the original AVA data size ($7.04 \times 10^6$). In the second case, we increase the value of $c$ to $5$, such that the number of leading wavelet coefficients further reduces to $1665$, which amounts to a more than $4000$-fold reduction in comparison to the original data size.
Figure \ref{fig:Brugge_boxplot_objRealIter_S2} shows the boxplots of seismic data mismatch as functions of iteration step. In either case, the seismic data mismatch reduces fast at the first few iteration steps, and then changes slowly afterwards. The stopping criterion (C2), monitoring the relative change of average data mismatch, becomes effective in both cases, such that the iteration stops at the $11$th step when $c=1$, and at the $19$th when $c=5$. Accordingly, the ensembles at iteration steps $11$ and $19$, respectively, are taken as the final estimates in these two cases. In addition, it appears that ensemble collapse takes place in both cases, although this phenomenon is somewhat mitigated in the case $c=5$, in comparison to the case $c=1$. The mitigation of ensemble collapse is even more evident when we further increase $c$ to $8$, and accordingly reduce the data size to $534$. By doing so, however, the history matching performance deteriorates (results not included here), largely due to the fact that such a significant reduction of the data size leads to a substantial loss of information content in the seismic data, a point to be elaborated soon.
\renewcommand{0.33}{0.45}
\begin{figure*}
\centering
\subfigure[RMSEs of log PERMX ($c=1$)]{ \label{subfig:rmse_PERMX_boxplot_ensemble_c1_S2}
\includegraphics[scale=0.33]{rmse_PERMX_boxplot_ensemble_c1_S2.eps}
}%
\subfigure[RMSEs of PORO ($c=1$)]{ \label{subfig:rmse_PORO_boxplot_ensemble_c1_S2}
\includegraphics[scale=0.33]{rmse_PORO_boxplot_ensemble_c1_S2.eps}
}%
\subfigure[RMSEs of log PERMX ($c=5$)]{ \label{subfig:rmse_PERMX_boxplot_ensemble_c5_S2}
\includegraphics[scale=0.33]{rmse_PERMX_boxplot_ensemble_c5_S2.eps}
}%
\subfigure[RMSEs of PORO ($c=5$)]{ \label{subfig:rmse_PORO_boxplot_ensemble_c5_S2}
\includegraphics[scale=0.33]{rmse_PORO_boxplot_ensemble_c5_S2.eps}
}%
\caption{\label{fig:Brugge_RLM-MAC_RMSE_S2} Boxplots of RMSEs of log PERMX (1st column) and PORO (2nd column) as functions of iteration step, with $c$ being $1$ (top) and $5$ (bottom), respectively (scenario S2).}
\end{figure*}
Figure \ref{fig:Brugge_RLM-MAC_RMSE_S2} shows boxplots of RMSEs of log PERMX (1st column) and PORO (2nd column) as functions of iteration step. It is clear that the RMSEs of the final ensembles are lower than those of the initial ones, even at $c=5$, the case in which the data size is reduced more than $4000$ times. On the other hand, when $c=1$ (top row), the RMSEs of both log PERMX and PORO in the final ensemble are lower than those at $c=5$ (bottom row). This indicates that better history matching performance is achieved at $c=1$, with more information content captured in the leading wavelet coefficients.
\renewcommand{0.33}{0.33}
\begin{figure*}
\centering
\subfigure[Observed slice (base)]{ \label{subfig:X80_tstep1_mid_trace_noisy_S2}
\includegraphics[scale=0.33]{X80_tstep1_mid_trace_noisy_S2.eps}
}%
\subfigure[Observed slice (1st monitor)]{ \label{subfig:X80_tstep2_mid_trace_noisy_S2}
\includegraphics[scale=0.33]{X80_tstep2_mid_trace_noisy_S2.eps}
}%
\subfigure[Observed slice (2nd monitor)]{ \label{subfig:X80_tstep3_mid_trace_noisy_S2}
\includegraphics[scale=0.33]{X80_tstep3_mid_trace_noisy_S2.eps}
}%
\subfigure[Reconstructed (base) with $c=1$]{ \label{subfig:X80_tstep1_mid_trace_rec_C1_S2}
\includegraphics[scale=0.33]{X80_tstep1_mid_trace_rec_C1_S2.eps}
}%
\subfigure[Reconstructed (1st monitor) with $c=1$]{ \label{subfig:X80_tstep2_mid_trace_rec_C1_S2}
\includegraphics[scale=0.33]{X80_tstep2_mid_trace_rec_C1_S2.eps}
}%
\subfigure[Reconstructed (2nd monitor) with $c=1$]{ \label{subfig:X80_tstep3_mid_trace_rec_C1_S2}
\includegraphics[scale=0.33]{X80_tstep3_mid_trace_rec_C1_S2.eps}
}%
\subfigure[Reconstructed (base) with $c=5$]{ \label{subfig:X80_tstep1_mid_trace_rec_C5_S2}
\includegraphics[scale=0.33]{X80_tstep1_mid_trace_rec_C5_S2.eps}
}%
\subfigure[Reconstructed (1st monitor) with $c=5$]{ \label{subfig:X80_tstep2_mid_trace_rec_C5_S2}
\includegraphics[scale=0.33]{X80_tstep2_mid_trace_rec_C5_S2.eps}
}%
\subfigure[Reconstructed (2nd monitor) with $c=5$]{ \label{subfig:X80_tstep3_mid_trace_rec_C5_S2}
\includegraphics[scale=0.33]{X80_tstep3_mid_trace_rec_C5_S2.eps}
}%
\caption{\label{fig:observed_and_rec_slices} Top row: slices of the observed far-offset AVA cubes at $X=80$, with respect to the base survey (1st column), the 1st monitor survey (2nd column) and the 2nd monitor survey (3rd column), respectively. Middle row: corresponding reconstructed slices at $X=80$ using the leading wavelet coefficients at $c=1$ (while all other wavelet coefficients are set to zero). Bottom row: corresponding reconstructed slices at $X=80$ using the leading wavelet coefficients at $c=5$ (while all other wavelet coefficients are set to zero). }
\end{figure*}
As aforementioned, each 3D AVA cube has the dimension $139 \times 48 \times 176$. For illustration, the top row of Figure \ref{fig:observed_and_rec_slices} shows the slices of the far-offset AVA cubes at $X=80$, with respect to the base survey (1st column), the 1st monitor survey (2nd column) and the 2nd monitor survey (3rd column), respectively, whereas the middle and bottom rows show the corresponding slices reconstructed using the leading wavelet coefficients (while setting other coefficients to zero) at $c=1$ and $c=5$, respectively. Compared to the figures in the top row, it is clear that the reconstructed ones at $c=1$ capture the main features of the observed slices, while removing the noise component. Therefore, in this case, although the universal rule (corresponding to $c=1$) still leads to a relatively large data size, it achieves a good trade-off between data size reduction and feature preservation. In contrast, at $c=5$, the seismic data size is significantly reduced. However, the reconstructed slices in the bottom row only retain a small portion of the strips in the observed slices of the top row, meaning that the data size is reduced at the cost of losing substantial information content of the seismic data. Nevertheless, even with such an information loss, using the leading wavelet coefficients at $c=5$ still leads to significantly improved model estimation in comparison to the initial ensemble, and this will become more evident when both production and seismic data are used in scenario S3.
\renewcommand{0.33}{0.33}
\begin{figure*}
\centering
\subfigure[Slice of initial difference (base)]{ \label{subfig:X80_tstep1_mid_trace_diff_initEns_S2}
\includegraphics[scale=0.33]{X80_tstep1_mid_trace_diff_initEns_S2.eps}
}%
\subfigure[Slice of initial difference (1st monitor)]{ \label{subfig:X80_tstep2_mid_trace_diff_initEns_S2}
\includegraphics[scale=0.33]{X80_tstep2_mid_trace_diff_initEns_S2.eps}
}%
\subfigure[Slice of initial difference (2nd monitor)]{ \label{subfig:X80_tstep3_mid_trace_diff_initEns_S2}
\includegraphics[scale=0.33]{X80_tstep3_mid_trace_diff_initEns_S2.eps}
}%
\subfigure[Slice of final difference (base)]{ \label{subfig:X80_tstep1_mid_trace_diff_finalEns_S2}
\includegraphics[scale=0.33]{X80_tstep1_mid_trace_diff_finalEns_S2.eps}
}%
\subfigure[Slice of final difference (1st monitor)]{ \label{subfig:X80_tstep2_mid_trace_diff_finalEns_S2}
\includegraphics[scale=0.33]{X80_tstep2_mid_trace_diff_finalEns_S2.eps}
}%
\subfigure[Slice of final difference (2nd monitor)]{ \label{subfig:X80_tstep3_mid_trace_diff_finalEns_S2}
\includegraphics[scale=0.33]{X80_tstep3_mid_trace_diff_finalEns_S2.eps}
}%
\caption{\label{fig:diff_slices} Top row: slices (at $X=80$) of the differences between the reconstructed far-offset AVA cubes using the leading wavelet coefficients ($c=1$) of the observed seismic data, and the reconstructed far-offset AVA cubes using the corresponding leading wavelet coefficients ($c=1$) of the means of the simulated seismic data of the \textbf{initial} ensemble. From left to right, the three columns correspond to the differences at the base, the 1st monitor, and the 2nd monitor surveys, respectively. Bottom row: as in the top row, except that it is for the differences between the reconstructed far-offset AVA cubes of the observed seismic data, and the reconstructed far-offset AVA cubes of the mean simulated seismic data of the \textbf{final} ensemble. }
\end{figure*}
For brevity, in what follows we only present the results with respect to the case $c=1$. In the top row of Figure \ref{fig:diff_slices}, we show the slices (at $X = 80$) of differences between two groups of reconstructed far-offset AVA cubes. One group corresponds to the reconstructed far-offset AVA cubes at three survey times, using the leading wavelet coefficients ($c = 1$) of the observed far-offset AVA cubes. The other group contains the reconstructed far-offset AVA cubes at three survey times, using the corresponding leading wavelet coefficients ($c = 1$) of the mean simulated seismic data of the initial ensemble. Therefore, the slices of differences in the top row can be considered as a reflection of the initial seismic data mismatch in Figure \ref{subfig:Brugge_boxplot_objRealIter_c1_S2}. Here, we use the slices of differences for ease of visualization, as the reconstructed slices of the observed and the mean simulated AVA cubes look very similar. Similarly, in the bottom row, we show the slices of differences between the reconstructed far-offset AVA cubes of the observed seismic data, and the reconstructed far-offset AVA cubes of the mean simulated seismic data of the final ensemble. In this case, the slices of differences can be considered as a reflection of the final seismic data mismatch in Figure \ref{subfig:Brugge_boxplot_objRealIter_c1_S2}. Comparing the top and bottom rows at a given survey time, one can observe certain distinctions, which, however, are not very significant in general. This is in line with the results in Figure \ref{subfig:Brugge_boxplot_objRealIter_c1_S2}, where the initial and final seismic data mismatch remain of the same order of magnitude, in contrast to the substantial reduction of production data mismatch in scenario S1 (Figure \ref{fig:Brugge_boxplot_objRealIter_S1}).
\renewcommand{0.33}{0.21}
\begin{figure*}
\centering
\subfigure[Reference log PERMX]{ \label{subfig:PERMX_L2_true_S2}
\includegraphics[scale=0.33]{PERMX_L2_true.eps}
}%
\subfigure[Mean of initial log PERMX]{ \label{subfig:PERMX_L2_Mean_initEns_S2}
\includegraphics[scale=0.33]{PERMX_L2_Mean_initEns.eps}
}%
\subfigure[Mean of final log PERMX]{ \label{subfig:PERMX_L2_Mean_ensemble11_S2}
\includegraphics[scale=0.33]{PERMX_L2_Mean_ensemble11_S2.eps}
}%
\subfigure[Reference PORO]{ \label{subfig:PORO_L2_true_S2}
\includegraphics[scale=0.33]{PORO_L2_true.eps}
}%
\subfigure[Mean of initial PORO]{ \label{subfig:PORO_L2_Mean_initEns_S2}
\includegraphics[scale=0.33]{PORO_L2_Mean_initEns.eps}
}%
\subfigure[Mean of final PORO]{ \label{subfig:PORO_L2_Mean_ensemble11_S2}
\includegraphics[scale=0.33]{PORO_L2_Mean_ensemble11_S2.eps}
}
\caption{\label{fig:estimation_S2} As in Figure \ref{fig:estimation_S1}, but for scenario S2 with $c=1$.}
\end{figure*}
Similar to Figure \ref{fig:estimation_S1}, Figure \ref{fig:estimation_S2} depicts the reference log PERMX and PORO at layer 2 (1st column), the mean of the initial log PERMX and PORO at layer 2 (2nd column), and the mean of the final log PERMX and PORO at layer 2 (3rd column). Compared to the initial mean estimates, the final mean log PERMX and PORO show clear improvements in terms of their similarities to the reference fields. In addition, an inspection of the 3rd columns of Figures \ref{fig:estimation_S1} and \ref{fig:estimation_S2} reveals that the final mean estimates in S2 capture the geological structures of the reference fields better, especially in areas where there is neither an injection nor a production well (well locations are represented by black dots in Figures \ref{fig:estimation_S1} and \ref{fig:estimation_S2}).
\subsection{Results of scenario S3 (using both production and seismic data)}
\renewcommand{0.33}{0.45}
\begin{figure*}
\centering
\subfigure[Production data mismatch ($c=1$)]{ \label{subfig:Brugge_boxplot_objRealIter_prod_C1_S3}
\includegraphics[scale=0.33]{Brugge_boxplot_objRealIter_prod_C1_S3.eps}
}%
\subfigure[Production data mismatch ($c=5$)]{ \label{subfig:Brugge_boxplot_objRealIter_prod_C5_S3}
\includegraphics[scale=0.33]{Brugge_boxplot_objRealIter_prod_C5_S3.eps}
}%
\subfigure[Seismic data mismatch ($c=1$)]{ \label{subfig:Brugge_boxplot_objRealIter_seis_C1_S3}
\includegraphics[scale=0.425]{Brugge_boxplot_objRealIter_seis_C1_S3.eps}
}%
\subfigure[Seismic data mismatch ($c=5$)]{ \label{subfig:Brugge_boxplot_objRealIter_seis_C5_S3}
\includegraphics[scale=0.425]{Brugge_boxplot_objRealIter_seis_C5_S3.eps}
}%
\caption{\label{fig:Brugge_boxplot_objRealIter_S3} Boxplots of production (top) and seismic (bottom) data mismatch as functions of iteration step (scenario S3).}
\end{figure*}
In scenario S3, production and seismic (in terms of leading wavelet coefficients) data are assimilated simultaneously. Figure \ref{fig:Brugge_boxplot_objRealIter_S3} reports the boxplots of production (top) and seismic (bottom) data mismatch as functions of iteration step. Because of the simultaneous assimilation of production and seismic data, the way the seismic data are used (in terms of the value of $c$ in Eq. (\ref{eq:multiple_universal_rule})) affects the history matching results. This becomes evident if one compares the first and second columns of Figure \ref{fig:Brugge_boxplot_objRealIter_S3}. Indeed, when $c=1$, because of the relatively large data size, it is clear that ensemble collapse takes place in Figures \ref{subfig:Brugge_boxplot_objRealIter_prod_C1_S3} and \ref{subfig:Brugge_boxplot_objRealIter_seis_C1_S3}. Also, the iteration stops at step 14, due to the stopping criterion (C2). By increasing $c$ to 5, the size of the seismic data is reduced from 178332 to 1665, and ensemble collapse seems mitigated to some extent, especially for production data, while the final iteration step is 19, due to the stopping criterion (C2). On the other hand, by comparing Figures \ref{fig:Brugge_boxplot_objRealIter_S1}, \ref{fig:Brugge_boxplot_objRealIter_S2} and \ref{fig:Brugge_boxplot_objRealIter_S3}, it is clear that, in S3, the presence of both production and seismic data makes the reduction of data mismatch different from the case of using either production or seismic data only. For instance, in the presence of seismic data, the production data mismatch (see Figures \ref{subfig:Brugge_boxplot_objRealIter_prod_C1_S3} and \ref{subfig:Brugge_boxplot_objRealIter_prod_C5_S3}) tends to be higher than that in Figure \ref{fig:Brugge_boxplot_objRealIter_S1}. On the other hand, with the influence of production data, the occurrence of ensemble collapse seems to be postponed in Figures \ref{subfig:Brugge_boxplot_objRealIter_seis_C1_S3} and \ref{subfig:Brugge_boxplot_objRealIter_seis_C5_S3}, in comparison to those in Figure \ref{fig:Brugge_boxplot_objRealIter_S2}.
\renewcommand{0.33}{0.45}
\begin{figure*}
\centering
\subfigure[RMSEs of log PERMX ($c=1$)]{ \label{subfig:rmse_PERMX_boxplot_ensemble_C1_S3}
\includegraphics[scale=0.33]{rmse_PERMX_boxplot_ensemble_C1_S3.eps}
}%
\subfigure[RMSEs of PORO ($c=1$)]{ \label{subfig:rmse_PORO_boxplot_ensemble_C1_S3}
\includegraphics[scale=0.33]{rmse_PORO_boxplot_ensemble_C1_S3.eps}
}%
\subfigure[RMSEs of log PERMX ($c=5$)]{ \label{subfig:rmse_PERMX_boxplot_ensemble_C5_S3}
\includegraphics[scale=0.33]{rmse_PERMX_boxplot_ensemble_C5_S3.eps}
}%
\subfigure[RMSEs of PORO ($c=5$)]{ \label{subfig:rmse_PORO_boxplot_ensemble_C5_S3}
\includegraphics[scale=0.33]{rmse_PORO_boxplot_ensemble_C5_S3.eps}
}%
\caption{\label{fig:Brugge_RLM-MAC_RMSE_S3} Boxplots of RMSEs of log PERMX (1st column) and PORO (2nd column) as functions of iteration step, with $c$ being $1$ (top) and $5$ (bottom), respectively (scenario S3).}
\end{figure*}
Figure \ref{fig:Brugge_RLM-MAC_RMSE_S3} shows boxplots of RMSEs of log PERMX (1st column) and PORO (2nd column) as functions of iteration step. Again, the RMSEs of the final ensembles are lower than those of the initial ones, for either $c=1$ or $c=5$. On the other hand, a comparison of Figures \ref{fig:Brugge_RLM-MAC_RMSE_S1}, \ref{fig:Brugge_RLM-MAC_RMSE_S2} and \ref{fig:Brugge_RLM-MAC_RMSE_S3} indicates that the RMSEs of log PERMX and PORO (and similarly, the RMSEs of log PERMY and log PERMZ) are the lowest when using both production and seismic data in history matching. Using $c=5$ in scenario S3 (Figures \ref{subfig:rmse_PERMX_boxplot_ensemble_C5_S3} and \ref{subfig:rmse_PORO_boxplot_ensemble_C5_S3}) leads to higher RMSEs than using $c=1$. Nevertheless, they are still better than the RMSEs in scenario S1, and close to (for PORO) or better than (for log PERMX) those in scenario S2 with $c=1$ (see Figures \ref{subfig:rmse_PERMX_boxplot_ensemble_c1_S2} and \ref{subfig:rmse_PORO_boxplot_ensemble_c1_S2}). This suggests that, in this particular case, reasonably good history matching performance can still be achieved, even though the data size is reduced by a factor of more than $4000$ (at $c=5$) through the wavelet-based sparse representation procedure.
\begin{figure*}
\centering
\subfigure[WBHP of the initial ensemble]{ \label{subfig:WBHP_BR-P-5_initial_S3}
\includegraphics[scale=0.33]{WBHP_BR-P-5_initial.eps}
}%
\subfigure[WOPR of the initial ensemble]{ \label{subfig:WOPR_BR-P-5__initial_S3}
\includegraphics[scale=0.33]{WOPR_BR-P-5_initial.eps}
}%
\subfigure[WWCT of the initial ensemble]{ \label{subfig:WWCT_BR-P-5__initial_S3}
\includegraphics[scale=0.33]{WWCT_BR-P-5_initial.eps}
}%
\subfigure[WBHP of the final ensemble]{ \label{subfig:WBHP_BR-P-5_S3}
\includegraphics[scale=0.33]{WBHP_BR-P-5_S3.eps}
}%
\subfigure[WOPR of the final ensemble]{ \label{subfig:WOPR_BR-P-5_S3}
\includegraphics[scale=0.33]{WOPR_BR-P-5_S3.eps}
}%
\subfigure[WWCT of the final ensemble]{ \label{subfig:WWCT_BR-P-5_S3}
\includegraphics[scale=0.33]{WWCT_BR-P-5_S3.eps}
}%
\caption{\label{fig:production_P5_S3} As in Figure \ref{fig:production_P5_S1}, but for the production data profiles in scenario S3 with $c=1$.}
\end{figure*}
Again, for brevity, in what follows we only report the results for the case $c=1$. Figure \ref{fig:production_P5_S3} shows the production data profiles at the producer BR-P-5 in scenario S3 with $c=1$. Clearly, compared to the initial ensemble, the final one matches the observed production data (red dots) better. Nevertheless, a comparison of the bottom rows of Figures \ref{fig:production_P5_S1} and \ref{fig:production_P5_S3} indicates that the ensemble spreads of the simulated production data (blue curves) tend to be underestimated, such that the reference production data (yellow curves) fall outside the profiles of the simulated production data at certain time instances.
\begin{figure*}
\centering
\subfigure[Slice of initial difference (base)]{ \label{subfig:X80_tstep1_mid_trace_diff_initEns_S3}
\includegraphics[scale=0.33]{X80_tstep1_mid_trace_diff_initEns_S2.eps}
}%
\subfigure[Slice of initial difference (1st monitor)]{ \label{subfig:X80_tstep2_mid_trace_diff_initEns_S3}
\includegraphics[scale=0.33]{X80_tstep2_mid_trace_diff_initEns_S2.eps}
}%
\subfigure[Slice of initial difference (2nd monitor)]{ \label{subfig:X80_tstep3_mid_trace_diff_initEns_S3}
\includegraphics[scale=0.33]{X80_tstep3_mid_trace_diff_initEns_S2.eps}
}%
\subfigure[Slice of final difference (base)]{ \label{subfig:X80_tstep1_mid_trace_diff_finalEns_S3}
\includegraphics[scale=0.33]{X80_tstep1_mid_trace_diff_finalEns_S3.eps}
}%
\subfigure[Slice of final difference (1st monitor)]{ \label{subfig:X80_tstep2_mid_trace_diff_finalEns_S3}
\includegraphics[scale=0.33]{X80_tstep2_mid_trace_diff_finalEns_S3.eps}
}%
\subfigure[Slice of final difference (2nd monitor)]{ \label{subfig:X80_tstep3_mid_trace_diff_finalEns_S3}
\includegraphics[scale=0.33]{X80_tstep3_mid_trace_diff_finalEns_S3.eps}
}%
\caption{\label{fig:diff_slices_S3} As in Figure \ref{fig:diff_slices}, but for the slices (at $X=80$) of differences in scenario S3 with $c=1$. }
\end{figure*}
Similar to Figure \ref{fig:diff_slices}, in Figure \ref{fig:diff_slices_S3} we show the slices (at $X=80$) of the differences between the reconstructed far-offset AVA cubes of observed and mean simulated seismic data, at three survey times. Again, compared to the slices with respect to the initial ensemble (top), there are some visible distinctions in the slices with respect to the final ensemble (bottom). However, if one compares the bottom rows of Figures \ref{fig:diff_slices} and \ref{fig:diff_slices_S3}, these slices look very similar to each other. This is consistent with the results in Figures \ref{subfig:Brugge_boxplot_objRealIter_c1_S2} and \ref{subfig:Brugge_boxplot_objRealIter_seis_C1_S3}, where the final seismic data mismatches of S2 and S3 remain close to each other.
\begin{figure*}
\centering
\subfigure[Reference log PERMX]{ \label{subfig:PERMX_L2_true_S3}
\includegraphics[scale=0.33]{PERMX_L2_true.eps}
}%
\subfigure[Mean of initial log PERMX]{ \label{subfig:PERMX_L2_Mean_initEns_S3}
\includegraphics[scale=0.33]{PERMX_L2_Mean_initEns.eps}
}%
\subfigure[Mean of final log PERMX]{ \label{subfig:PERMX_L2_Mean_ensemble11_S3}
\includegraphics[scale=0.33]{PERMX_L2_Mean_ensemble14_S3.eps}
}%
\subfigure[Reference PORO]{ \label{subfig:PORO_L2_true_S3}
\includegraphics[scale=0.33]{PORO_L2_true.eps}
}%
\subfigure[Mean of initial PORO]{ \label{subfig:PORO_L2_Mean_initEns_S3}
\includegraphics[scale=0.33]{PORO_L2_Mean_initEns.eps}
}%
\subfigure[Mean of final PORO]{ \label{subfig:PORO_L2_Mean_ensemble11_S3}
\includegraphics[scale=0.33]{PORO_L2_Mean_ensemble14_S3.eps}
}%
\caption{\label{fig:estimation_S3} As in Figure \ref{fig:estimation_S1}, but for scenario S3 with $c=1$.}
\end{figure*}
Finally, Figure \ref{fig:estimation_S3} compares the reference, initial and final mean log PERMX and PORO fields at layer 2. Again, the final mean estimates improve over the initial mean fields in terms of the closeness to the references. In addition, a comparison of the final estimated fields (the 3rd columns) of Figures \ref{fig:estimation_S1}, \ref{fig:estimation_S2} and \ref{fig:estimation_S3} shows that the final mean estimates in S3 best capture the geological structures of the reference fields (the same observation also holds at $c = 5$). This indicates the benefits of using both production and seismic data in history matching.
\section{Discussions and conclusions}\label{sec:conclusion}
In this work, we apply an efficient, ensemble-based seismic history matching framework to the 3D Brugge field case. The seismic data used in this study are near- and far-offset amplitude versus angle (AVA) attributes, with a data size of more than 7 million. To handle this big data set, we introduce a wavelet-based sparse representation procedure that substantially reduces the data size while preserving the main features of the seismic data as far as possible. Through numerical experiments, we demonstrate the efficacy of the proposed history matching framework with the sparse representation procedure, even when the seismic data size is reduced by a factor of more than 4000. The size of the seismic data (in the form of leading wavelet coefficients) can be conveniently controlled through a threshold value. A relatively large threshold value means more reduction in data size, which is desirable for the history matching algorithm, but at the cost of extra information loss. In contrast, a relatively small threshold value results in a larger number of leading wavelet coefficients and hence better preserves the information content of the observed data. In this case, however, the history matching algorithm may become more vulnerable to certain practical issues like ensemble collapse. As a result, best practice needs to strike a trade-off between the reduction of data size and the preservation of data information. Another observation from the experimental results is that, in this particular case, a combined use of production and seismic data in history matching leads to better estimation results than using either production or seismic data only.
Ensemble collapse is clearly visible when seismic data are used in history matching. This phenomenon can be mitigated to some extent by increasing the threshold value (hence reducing the seismic data size), but it cannot be completely avoided. A possible remedy to this problem is to also introduce localization (see, for example, \citealp{Emerick2011combining,chen2010cross}) to the iterative ensemble smoother. In the presence of the sparse representation procedure, however, seismic data are transformed into the wavelet domain, and the concept of ``physical distance'' may not be valid anymore. As a result, localization will need to be adapted to this change. We will investigate this issue in our future study.
\section*{Acknowledgments}
\label{sec:acknowledgments}
\noindent
We would like to thank Schlumberger for providing us with academic software licenses to ECLIPSE$^\copyright$. XL acknowledges partial financial support from the CIPR/IRIS cooperative research project ``4D Seismic History Matching'', which is funded by the industry partners Eni, Petrobras, and Total, as well as the Research Council of Norway (PETROMAKS). All authors acknowledge the Research Council of Norway and the industry partners -- ConocoPhillips Skandinavia AS, BP Norge AS, Det Norske Oljeselskap AS, Eni Norge AS, Maersk Oil Norway AS, DONG Energy A/S, Denmark, Statoil Petroleum AS, ENGIE E\&P NORGE AS, Lundin Norway AS, Halliburton AS, Schlumberger Norge AS, Wintershall Norge AS -- of The National IOR Centre of Norway for financial support.
A differential operator $A$ on a closed manifold $M$
lifts to a differential operator $\widetilde A$ on its universal covering $\widetilde M\to M$. This is due to the locality property of differential operators, which preserve the support of the sections they act on. This lifting property does not extend to general pseudodifferential operators, which are only pseudo-local, i.e., they only preserve the singular support of the sections. Hence arises the problem of lifting the complex powers $ Q^{-z}$ involved in spectral $\zeta$-functions
\begin{equation}\zeta_{A,Q}(z):={\rm TR}(A\, Q^{-z}),\label{eq:zeta1}\end{equation}
where $A$ and $Q$ are differential operators on $M$, and ${\rm TR}$ is the canonical trace.
Nevertheless, we prove that spectral $\zeta$-invariants \begin{equation}\label{eq:zetaintro}\zeta_{A,Q}(0):={\rm fp}_{z=0}\left( {\rm TR}(A\, Q^{-z})\right)\end{equation} corresponding to the constant term of the Laurent expansion of \eqref{eq:zeta1} canonically lift to coverings (see (\ref{eq:diffliftedregtraces}) in Theorem \ref{thm:liftedregtraces}).
This results from a detailed analysis of the interplay between the pseudo-locality of the complex powers and the locality of the canonical trace on non-integer order operators. In our approach, the locality of spectral $\zeta$-invariants is only an instance of the more general locality expressed by defect formulae. Another central result of the paper is the $L^2$-counterpart (\ref{eq:introlifteddefect2}) of such defect formulae. A natural application is the locality of Atiyah's $L^2$-index, which is expressed as a $\Gamma$-Wodzicki residue (\ref{eq:indres2}). \\
Our approach can be summarised as follows. We build:
\begin{itemize}
\item a holomorphic germ of pseudodifferential (and hence pseudo-local) operators $A(z)$ (of holomorphic order $\alpha(z)$) which at zero is the differential (and hence {\it local}) operator $A$;
\item the corresponding meromorphic germ ${\rm TR}(A(z))$ of functions built from the {\it local linear form} given by the canonical trace;
\item the {\it local} invariant is obtained as the value at $z=0$ of this germ of functions, expressed in terms of the $1$-jet of the germ of operators (see the example following this list)
\[\lim_{z\to 0}{\rm TR}(A(z))=-\frac{1}{\alpha^\prime(0)}\,{\rm Res}(A^\prime(0)).\]
\end{itemize}
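As an illustration (a standard computation, spelled out here for the reader's convenience), consider the germ $A(z):=A\, Q^{-z}$ underlying \eqref{eq:zeta1}, where $A$ is a differential operator of order $a$ and $Q$ an invertible elliptic operator of positive order $q$ admitting complex powers. Then $\alpha(z)=a-q\,z$, $A(0)=A$ and $A^\prime(0)=-A\,\log Q$, so that the above $1$-jet formula reads
\[\zeta_{A,Q}(0)=\lim_{z\to 0}{\rm TR}\left(A\, Q^{-z}\right)=-\frac{1}{q}\,{\rm Res}\left(A\,\log Q\right),\]
a local quantity in spite of the pseudo-locality of the complex powers $Q^{-z}$.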
To achieve our goal, following Shubin \cite{Shub}, we view pseudodifferential operators as \lq\lq small perturbations\rq\rq{} of pseudodifferential operators with finite propagation, more precisely of those which are $\ve$-local for some small enough $\ve>0$. Such operators, which fall into the more general class of quasi-local operators introduced by Roe \cite{Roe2}, are properly supported and hence determined by their symbol. Quasi-local operators can roughly be viewed as operators with controlled propagation at infinity, see \cite{E} for a detailed discussion. They bear the advantage over finite propagation operators that they are stable under functional calculus, a property which is not needed here. An $\ve$-local pseudodifferential operator enlarges the support of the sections it acts on by at most the distance $\ve$.
The fact that a differential operator preserves supports is thus reflected in the fact that it is $0$-local.\\
Choosing $\ve$ small enough, one can lift without ambiguity an $\ve$-local operator $A_0$ to an $\ve$-local operator $\widetilde A_0$. The lifted operator is a (uniformly) properly supported pseudodifferential operator, and hence also defined in terms of its symbol $\sigma(\widetilde A_0)$, which is the lifted symbol $\sigma(\widetilde A_0)= \reallywidetilde{\sigma( A_0)}$ of the original operator.\\ To go from an $\ve$-local to a general classical pseudodifferential operator $A$ on a closed manifold, we observe that the latter differs from an $\ve$-local classical pseudodifferential operator $A_0$ by an operator with smooth kernel supported outside the diagonal (see Proposition \ref{prop:ShubinTh1}), so it lies in the equivalence class $[A_0]_{\rm diag}$ of $A_0$ for the equivalence relation on classical pseudodifferential operators on the base manifold $M$
$$A\underset{ \rm \small diag}{\sim} B\Longleftrightarrow A-B \quad \text{has a smooth kernel supported outside the diagonal}.$$
Clearly, $\sigma(A)\sim \sigma(A_0)$ and $\reallywidetilde{\sigma(A)}\sim \sigma(\widetilde A_0)$ for any $ A $ in $[A_0]_{\rm diag}.$
The following
diagram, where ${\mathcal A}$, ${\mathcal A}_0$ are $\Gamma$-invariant operators on the covering, $\Gamma$ being the fundamental group, and $\pi_\sharp {\mathcal A }_0$ the projected operator onto the base manifold, presents the notations in a compact form:
\begin{equation}\label{eq:diagramme}
\begin{xy}
\xymatrix{
\mathcal A \underset{ \rm \tiny diag}{\sim} \hspace{-1cm} & {\mathcal A}_0:= \widetilde{A_0} \ar[d]^{\pi_\sharp} & \hspace{-.7cm} (\ve-\text{local})\\
A \underset{ \rm \tiny diag}{\sim} \hspace{-1cm} & A_0:=\pi_\sharp {\mathcal A}_0 \ar@<1ex>[u]^{\pi^\sharp}& \hspace{-.7cm}(\ve-\text{local})
}
\end{xy}
\end{equation}
Note that the operator $\pi_\sharp\colon {\mathcal A}_0\longmapsto \left(s\longmapsto \pi_*\left({\mathcal A}_0\pi^*(s)\right)\right)$ is well-defined in view of the equivariance and $\ve$-locality of ${\mathcal A}_0$ (see Proposition \ref{prop:ShubinTh1}).
Alongside the pseudo-locality of pseudodifferential operators, the other essential ingredient in our approach is the use of {\bf local} linear forms defined on a class of classical pseudodifferential operators (see Definition \ref{defn:locallambda}). These only detect the symbol of an operator and are therefore constant on the equivalence classes $[A_0]_{\rm diag}$, hence constant along the horizontal lines of the above diagram.
More precisely, a local linear form $\Lambda$ reads:
\begin{equation}\label{eq:Lambdaintro}
\Lambda( [A]_{\rm diag}):=\Lambda(A)=\int_M \Lambda_x(A)\, dx:=\int_M \lambda\left({\rm tr}\left(\sigma(A)(x,\cdot)\right)\right)\,dx,
\end{equation}
$\lambda$ being a linear form on an appropriate class of scalar valued symbols, ${\rm tr}$ the fibrewise trace on the endomorphism bundle in which the symbol $\sigma(A)(\cdot,\xi)$ of $A$ lies for any $\xi$ in the cotangent bundle to the underlying manifold.\begin{itemize}
\item
Our first main result is Theorem \ref{thm:uniqueness}, which states that any continuous local linear form on the class of classical pseudodifferential operators with integer order (resp. on the class of classical pseudodifferential operators with non-integer order)
is proportional to the Wodzicki residue Res (see \eqref{eq:Resint}) (resp. the canonical trace TR (see \eqref{eq:TRint}))
$$
{\rm Res}(A)=\int_M {\rm Res}_x(A)\, dx;\quad \left(\text{resp.}\quad {\rm TR}(A)=\int_M {\rm TR}_x(A)\, dx\right),
$$
acting respectively on the algebra of classical pseudodifferential operators with integer order and on the class of classical pseudodifferential operators with non-integer order. The densities ${\rm Res}_x(A)\, dx$ (resp. ${\rm TR}_x(A)\, dx$) are defined by means of the residue res (resp. the canonical integral $-\hskip -10pt\int_{\mathbb{R}^n}$) on integer order (resp. on non-integer order) scalar symbols.
A local linear form (\ref{eq:Lambdaintro}) can be lifted from an $\ve$-local operator $ A_0$ on $M$ to its lift $\widetilde A_0$ on $\widetilde M$ by \begin{equation}
\label{eq:Lambdaintlift}
\Lambda_\Gamma(\widetilde A_0)= \int_F \lambda\left({\rm tr}\left( \sigma(\widetilde A_0)(x,\cdot)\right)\right)\,dx,
\end{equation} where $F$ is a fundamental domain for the action of the fundamental group. It further lifts to any $\Gamma$-invariant operator $\mathcal A$ on the covering; indeed $\mathcal A$ lies in the class $[\widetilde A_0]_{\rm diag}$ of some lifted $\ve$-local operator $\widetilde{A_0}$. Since $ \Lambda_\Gamma$ is constant on such a class, we set \begin{equation}\label{eq:Lambdawt}
\Lambda_\Gamma(\mathcal A):= \int_F \lambda\left({\rm tr}\left( \sigma(\widetilde A_0)(x,\cdot)\right)\right)\,dx.
\end{equation}
Prototypes are
the $\Gamma$-residue (resp. $\Gamma$-canonical trace) (see Proposition \ref{prop:ResTRliftedA})
$${\rm Res}_\Gamma(\mathcal A):= \int_F{\rm Res}_{\widetilde x}(\widetilde A_0)\,d\widetilde x,\quad \left(\text{resp.} {\rm TR}_\Gamma(\mathcal A):= \int_F{\rm TR}_{\widetilde x}(\widetilde A_0)\,d\widetilde x\,\right),$$
obtained from integrating the residue and canonical trace densities on $F$.
Whereas the canonical trace lifts to coverings due to its local feature, the {\bf regularised trace} evaluated at $z=p$ of a holomorphic family $A(z)$ of classical pseudodifferential operators on $M$, defined as the Hadamard finite part
$$
{\rm fp}_{z=p} {\rm TR}(A(z)):=\lim_{z\to p}\left({\rm TR}(A(z))-\frac{{\rm Res}_{z=p}\left({\rm TR}(A(z))\right)}{z-p} \right),
$$
(here ${\rm Res}_{z=p}$ stands for the complex residue at $p$) is generally non-local and does not a priori lift to coverings. However, {\bf defect formulae}, which express the discrepancies of regularised traces in terms of the Wodzicki residue and therefore also enjoy a local feature, do lift to coverings (Theorem \ref{thm:KVPScov}). More precisely, if $A(p)$ has a well-defined canonical trace TR$(A(p))$, the trace defect formula (Theorem \ref{thm:KVPS}, borrowed from \cite{KV} and \cite{PS}) relates the regularised trace $ {\rm fp}_{z=p} {\rm TR}(A(z)) $
with the (extended) residue ${\rm Res}\left( A^\prime(p)\right)$ of the derivative of the family at this pole (see (\ref{eq:PSclassicalop})),
\begin{equation}\label{eq:introdefect}{\rm fp}_{z=p} {\rm TR}(A(z))= {\rm TR}(A(p))+\frac{1}{q}\,{\rm Res}\left( A^\prime(p)\right),\end{equation}
where the operators $A(z)$ have order $a-qz$ for some given positive $q$.\\
If $A(p)$ is a differential operator, then ${\rm TR}(A(p))=0$ and (\ref{eq:introdefect}) reduces to a local expression of the regularised trace \begin{equation}
\label{eq:defectformulaintro}
{\rm fp}_{z=p} {\rm TR}(A(z))= \frac{1}{q}\,{\rm Res}\left( A^\prime(p)\right)\end{equation} in terms of the Wodzicki residue.\\
The trace defect formula \eqref{eq:defectformulaintro} is central to our approach since it relates regularised traces (on the l.h.s. of the above formula) with Wodzicki residues (on the r.h.s. of the above formula) and yields index type theorems as an application. Wodzicki residues, which only depend on one homogeneous component of the symbol and not on the whole symbol, are local. In contrast, regularised traces built from the canonical trace a priori depend on the whole symbol, so they are not expected to be local.
\item Our second main result is Theorem \ref{thm:KVPScov} which yields the lifted analogue of the (more general) trace-defect formula (\ref{eq:introdefect})
\begin{equation}\label{eq:introlifteddefect}{\rm fp}_{z=p} {\rm TR}_\Gamma(\mathcal A(z))= {\rm TR}_\Gamma(\mathcal A(p))+\frac{1}{q}{\rm Res}_\Gamma\left( \mathcal A^\prime(p)\right),\end{equation} for a holomorphic family $\mathcal A(z)$ of $\Gamma$-invariant operators on the covering, such that $\mathcal A(p)$ at the point $p$ has a well-defined $\Gamma$-canonical trace ${\rm TR}_\Gamma(\mathcal A(p))$. If $\mathcal A(p)$ is a differential operator, then (\ref{eq:introlifteddefect}) reduces to \begin{equation}\label{eq:introlifteddefect2}{\rm fp}_{z=p} {\rm TR}_\Gamma(\mathcal A(z))= \frac{1}{q}{\rm Res}_\Gamma\left( \mathcal A^\prime(p)\right).\end{equation}
Corollary \ref{cor:KVPScomparison}, which is useful for applications, then says that if $\mathcal A(z)$ is a holomorphic family on the covering and if there exists a holomorphic family $A(z)$ of $\ve$-local operators on $M$, such that the difference $\mathcal A (p)-\widetilde{A(p)}$ at a point $p$ has a smooth kernel, then the map $z\mapsto {\rm TR}_\Gamma\left(\mathcal A(z)\right)- {\rm TR}_\Gamma\left(\widetilde{A(z)}\right)$ is holomorphic at point $p$
and
\begin{equation}\label{eq:IntroRestildeAz}
{\rm fp}_{z=p}{\rm TR}_\Gamma\left(\mathcal A (z)\right)-{\rm fp}_{z=p}{\rm TR}\left(A(z)\right)= {\rm Tr}_\Gamma(\mathcal A(p)-\widetilde{A(p)}).
\end{equation}
We apply (\ref{eq:IntroRestildeAz}) to the holomorphic families $A(z)= P(\mathbf D)\, h( \mathbf \Delta)\, Q_\ve( \mathbf \Delta)^{-z}$ on $M$ and \hfill \break \noindent $ \mathcal A(z)=P(\widetilde {\mathbf D})\, h\left(Q_\ve(\widetilde{\mathbf \Delta})\right)\, Q_\ve(\widetilde{\mathbf \Delta})^{-z}$ on $\widetilde M$. Here $\mathbf D$ is a Dirac-type operator, and consequently $\mathbf \Delta:=\mathbf D^2$ a Laplace-type operator, $Q_\ve(\mathbf \Delta)$ defined in (\ref{eq:Qeps}) for some $\ve>0$ is a smooth deformation of $\mathbf \Delta$, $P$ is a polynomial, and $h$ some measurable function on a contour around the spectrum of $Q_\ve(\mathbf \Delta)$. \\
\item This yields our third main result, Theorem \ref{thm:liftedregtraces}, which compares the corresponding $Q_\ve(\mathbf \Delta)$-regularised trace of $P(\mathbf D)\, h\left(\mathbf \Delta\right)$ and the $Q_\ve(\widetilde {\mathbf \Delta})$-regularised trace of $P(\widetilde{\mathbf D})\, h\left(Q_\ve(\widetilde{\mathbf \Delta})\right)$. \begin{itemize}
\item In the $\mathbb{Z}_2$-graded case and for $P\equiv 1$, $h\equiv 1$ this gives back Atiyah's $L^2$-index theorem
(Corollary \ref{cor:indres2}).
\item In the non-graded case and for $P(x)=x$, $h(x)= x^{-\frac{1}{2}}$, assuming both operators $D$ and its lift $\widetilde D$ to be invertible, the above constructions show that the eta-invariant of the lifted Dirac operator
differs from the eta-invariant of the Dirac operator $D$ on the base manifold by an ordinary $\Gamma$-trace ${\rm Tr}_\Gamma( \widetilde A_0-\mathcal A)$ of the difference of two $\Gamma$-invariant operators, one of which, $\widetilde A_0$, is the lift of an $\ve$-local operator $A_0\in [A]_{\rm diag}$ (Corollary \ref{cor:eta}).
\end{itemize}
Theorem \ref{thm:geometricop} discusses the case of geometric operators, showing that the lifted $\zeta$-functions correspond to integrals of densities generated by Pontrjagin forms on the fundamental domain and Chern forms on the auxiliary bundle.\\
\end{itemize}
One advantage of our approach is that it yields the $L^2$-Atiyah theorem as an instance of the much more general {\it lifted trace defect formulae}. Here is the general scheme of the argument. Theorem \ref{thm:KVPScov} gives the lifted trace defect formulae. Theorem \ref{thm:liftedregtraces} compares $\zeta$-regularised traces of operators with the $\zeta$-regularised traces of their lifted counterparts, using the locality property of the only two {\it local linear forms} characterised in Theorem \ref{thm:uniqueness}, namely the canonical trace and the Wodzicki residue. Corollary \ref{cor:indres2} gives the $L^2$-index theorem by combining the two previous ingredients.
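Schematically, and only as a signpost towards the precise statements: since the residue density at a point only depends on finitely many homogeneous components of the symbol, which are preserved under lifting by \eqref{eq:liftedsymbol}, one has
\[{\rm Res}_\Gamma\big(\widetilde B\big)=\int_F{\rm Res}_{\widetilde x}\big(\widetilde B\big)\, d\widetilde x=\int_M {\rm Res}_x\big(B\big)\, dx={\rm Res}\big(B\big)\]
for any $\ve$-local operator $B\in \Psi_{\rm cl}(M,E)$ with $\ve$ small enough. Combined with the defect formulae (\ref{eq:defectformulaintro}) and (\ref{eq:introlifteddefect2}) applied to the families $Q_\ve(\mathbf \Delta)^{-z}$ and $Q_\ve(\widetilde{\mathbf \Delta})^{-z}$ in the $\mathbb{Z}_2$-graded case, this identification of residues is what produces Atiyah's identity ${\rm ind}_\Gamma(\widetilde{\mathbf D})={\rm ind}(\mathbf D)$ in Corollary \ref{cor:indres2}.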
The above arguments make use of functions of pseudodifferential operators, in particular their complex powers and the related logarithms. Even though the constructions are similar to the ones for operators on closed manifolds, special care is to be taken in the open manifold case. In Section \ref{sec:PDOs}, we first review various classes of pseudodifferential operators on open manifolds, soon specialising to coverings. The essential difference between the various classes lies in the smoothing part, to which we therefore dedicate the first section --Section \ref{sec:smoothing}-- of the paper. We then relate pseudodifferential operators on coverings to pseudodifferential operators on the associated groupoid (Appendix \ref{sec:groupoids}) and the associated Hilbert module bundle (Appendix \ref{sec:appHM}), thereby relating constructions of complex powers and logarithms on groupoids and Hilbert module bundles with the ones on coverings presented here.
\bigskip
\newpage
\tableofcontents
\setcounter{section}{0}
\vspace{-1.3cm}
\allowdisplaybreaks
\vfill \eject \noindent
\section{$\Gamma$-invariant operators on coverings and functional calculus}
\subsection{Operators with smooth kernels}
\label{sec:smoothing}
Let $X$ be an $n$-dimensional manifold and $F\to X$ a vector bundle over $X$ of rank $k$.
To a linear operator $A\colon C^\infty_c(X, F)\to C^\infty(X, F)$ we assign its Schwartz kernel, denoted by $K_A$, which is a distributional section of the bundle $F\boxtimes F$ over $X\times X$.
The support of $A$ is the smallest subset of $X\times X$ on the complement of which $K_A$ vanishes as a distribution.
\begin{defn} Let $\Psi^{-\infty}(X,F)$ be the space of linear operators $A\colon C^\infty_c(X, F)\to C^\infty(X, F)$ with smooth Schwartz kernel.
\end{defn}
Sobolev spaces will be useful to introduce another class of operators; in order to have Sobolev spaces at hand, we henceforth assume that our manifold $X$ has \emph{bounded geometry} \cite{Shu, MS, Kor}, a property verified by covering spaces of interest to us.
\subsubsection{Smoothing operators on manifolds of bounded geometry}\begin{defn}
A Riemannian manifold $(X,g)$ is said to have \emph{bounded geometry} if
\begin{itemize}
\item it has positive injectivity radius (there is $r>0$ s.t. the exponential map is a diffeomorphism on $B(0,r)\subset T_x X$, $\forall x\in X$);
\item every covariant derivative of the Riemannian curvature tensor is bounded.
\end{itemize}
In the same way, a Hermitian vector bundle $F\to X $ has bounded geometry if every covariant derivative of the curvature is bounded.
\end{defn}
\begin{ex}
Lie groups or homogeneous spaces with invariant metrics, compact Riemannian manifolds, regular $\Gamma$-covering of compact Riemannian manifolds endowed with the induced Riemannian structure, all provide examples of manifolds of bounded geometry.
\end{ex}
We now assume the bundle $F\to X$ to be of bounded geometry. The fundamental property is the existence of a \lq\lq good\rq\rq{} partition of unity, which allows one to define Sobolev spaces $H^s(X,F)$, see for instance \cite[Lemmas 1.3, 3.22, and (1.3)]{Shu} and \cite[Definition 3.23]{Sch}.
\begin{rk}
The Banach space structure of $H^s(X, F)$ is independent of the choices in the definition (see \cite[Lemma 3.24]{Shu}).
Just as in the case of closed manifolds, Sobolev spaces can alternatively be defined by means of a (uniformly) elliptic operator, see \cite[Lemma 4.29 and Corollary 4.30]{Sch} for the comparison with this definition.
\end{rk}
Let $H^\infty(X,F)=\cap_{k\in \mathbb{N}} H^k(X,F)$ denote the projective limit of $H^{k}(X,F), k\in \mathbb{N}$ and let $H_\iota^{-\infty} (X,F)\supset \cup_{k\in \mathbb{N}} H^{-k}(X,F)$ denote the regular inductive limit of $H^{-k}(X,F),k\in \mathbb{N}$. Then define
(see e.g. \cite[Definition 5.3]{Roe2} and \cite[Lemma 2.13]{E})
\begin{eqnarray}\label{def:Ssmooth}
{\mathcal S}\Psi^{-\infty}(X,F)&:=&{\mathcal L}\left(H_\iota^{-\infty}(X,F),H^\infty(X,F)\right)\\ \nonumber&=&\cap_{(k,l)\in \mathbb{N}^2} {\mathcal L}\left(H^{-k}(X,F),H^l(X,F)\right)\\ \nonumber
& =&\cap_{(s,t)\in \mathbb{R}^2} {\mathcal L}\left(H^s(X,F),H^t(X,F)\right),
\end{eqnarray}
where ${\mathcal L}(A,B) $ stands for continuous linear operators from a topological space $A$ to a topological space $B$.
The notation we chose is inspired by Shubin, who calls these operators $\mathcal S$-smoothing \cite[Def. 1, Ch. 3]{ShuG}.
By \cite[Theorem 3.5]{Va}, an operator which smoothens sections has a smooth kernel, which leads to the following inclusion
\begin{equation}
\label{eq:Vaillant}
{\mathcal S}\Psi^{-\infty}(X,F) \subseteq \Psi^{-\infty}(X,F).
\end{equation}
Following \cite{Shu2} and \cite{Roe2}, we set the following definition.
\begin{defn}
\label{def:Uinfty} Let ${\mathcal U}\Psi^{-\infty}(X,F)$ denote the space of linear operators $A: C^\infty_c(X,F) \to \Ci(X,F)$ with smooth kernel $K_A$ satisfying the following uniform boundedness condition: for any multiindices $\alpha, \beta$
$$\Vert \partial_x^\alpha \partial_y^\beta K_A(x,y)\Vert\leq C_{\alpha,\beta}\quad \forall (x,y)\in X\times X$$ for some positive constant $C_{\alpha,\beta}$.
\end{defn}
Proposition 2.9 in \cite{Roe2} yields a refinement of (\ref{eq:Vaillant}), namely \begin{equation}\label{eq:Roe}{\mathcal S}\Psi^{-\infty}(X,F)\subseteq {\mathcal U}\Psi^{-\infty}(X,F).\end{equation}
The uniformity follows from uniform estimates that naturally arise in the context of bounded geometry as they do for closed manifolds.
\begin{rk}If $X$ is a closed manifold, the three above spaces coincide:
$${\mathcal S}\Psi^{-\infty}(X,F)={\mathcal U}\Psi^{-\infty}(X,F)=\Psi^{-\infty}(X,F),
$$
since the equality in \eqref{eq:Vaillant} holds. Indeed, any linear operator $A\colon C^\infty_c(X, F)\to C^\infty(X, F)$ with smooth Schwartz kernel $K_A$ is smoothing when $X$ is closed, as can be seen by direct inspection of the formula $(Au)(x)=\int_X K_A(x,y)u(y)dy$.
\end{rk}
The notion of properly supported
operators recalled in Definition \ref{defn:properly supported} extends in a straightforward manner to linear operators $A:C^\infty_c(X, F)\to C^\infty(X,F)$.
The following definition is inspired by \cite{Shu}, and follows the terminology of \cite{E}.
\begin{defn} An operator $A\colon C^\infty_c(X,F)\to C^\infty( X,F)$ with Schwartz kernel $K_A$ has {\bf finite propagation} if it is $C$-local (see Definition \ref{defn:Clocal}) for some positive $C$, i.e. if there is some $C> 0$ such that $K_A(x,y)=0$
for all $x, y$ with $d(x,y)>C$ or equivalently, if
$$\forall u\in C^\infty_c(X,F); \quad \supp (Au) \subset \{x: d(x, \supp u) \leq C\}.$$
Note that requiring finite propagation is more constraining than the assumption of quasi-locality of \cite{Roe2}.
\end{defn}
Let ${\mathcal U}\Psi_{\rm fp}^{-\infty}(X,F)$ denote the subspace of ${\mathcal U}\Psi^{-\infty}(X,F)$ consisting of finite propagation operators with uniformly bounded smooth kernels. We have \cite{Shu2}
\begin{equation}\label{eq:Shu} {\mathcal U}\Psi_{{\rm fp}}^{-\infty}(X,F)\subsetneq {\mathcal S}\Psi^{-\infty}(X,F)\subseteq {\mathcal U}\Psi^{-\infty}(X,F)\subseteq \Psi^{-\infty}(X,F).\end{equation}
\begin{rk} The class $ {\mathcal U}\Psi_{{\rm fp}}^{-\infty}(X,F)$ is strictly contained in ${\mathcal S}\Psi^{-\infty}(X,F)$: the heat operator $e^{-tD^2}$ on a bounded geometry manifold belongs to the class ${\mathcal S}\Psi^{-\infty}(X,F)$ (see for example \cite[\textsection 3.2]{Va}) but it does not have finite propagation.
\end{rk}
\begin{rk}\label{rk:nocomp}
Whereas ${\mathcal S}\Psi^{-\infty}(X,F)$ is an algebra, the class $\Psi^{-\infty}(X,F)$ is not.
Indeed, the composition of two operators with smooth kernels is defined only under appropriate decay conditions at infinity and when this is the case, the Schwartz kernel of the composition might not be smooth.
\end{rk}
\subsubsection{
Coverings and classes of $\Gamma$-invariant operators with smooth kernel}
\label{sec:smoothG}
Let us now specialise to covering manifolds.
Let $M$ be a (connected) closed manifold and $\widetilde M$ a regular covering given by a $\Gamma$-principal bundle $\pi:\widetilde M\to M$ with $\Gamma = {\rm Aut}(\pi)$ the discrete Lie group of deck transformations (smooth diffeomorphisms $\phi: \widetilde M\to \widetilde M$ such that $\pi\circ \phi= \pi$).
When $\widetilde M$ is the universal cover, $\Gamma=\pi_1(M)$ is the fundamental group of $M$.
\begin{ex}$\mathbb{R}^n$ is a universal cover of $\mathbb{T}^n$ with group $\Gamma=\pi_1(\mathbb{T}^n)=\mathbb{Z}^n$.
\end{ex}
If $M$ is a Riemannian manifold, we endow the covering $\widetilde M$ with the Riemannian structure induced by $\pi$. Let $E\to M$ be a Hermitian vector bundle and $\widetilde E:=\pi^* E\to \widetilde M$ its pullback.
Both $\widetilde M$ and $\widetilde E$ are of bounded geometry \cite{Shu}.
The action of $\Gamma$ on $\widetilde M$ via diffeomorphisms $L_\gamma$
\begin{eqnarray*}
\Gamma\times\widetilde M&\longrightarrow & \widetilde M\\
(\gamma, x)&\longmapsto &L_\gamma( x)
\end{eqnarray*}
induces an action on linear operators $A:C^\infty_c(\widetilde M,\widetilde E)\to C^\infty(\widetilde M,\widetilde E)$:
$$L_\gamma^\sharp A:= L_\gamma \circ A\circ L_\gamma^{-1}.$$
The operator $A$ is said to be {\bf $\Gamma$-invariant} whenever \begin{equation}
\label{invariance}
L_\gamma ^\sharp A = A \quad\forall \gamma \in \Gamma.
\end{equation}
The action $L_\gamma^\sharp$ stabilises $\Psi^{-\infty} (\widetilde M,\widetilde E)$.
Imposing a $\Gamma$-invariance condition leads to the following subclass of operators.
\begin{defn}
\label{def:gammaclasses}
Let $\Psi_\Gamma^{-\infty}(\widetilde M, \widetilde E)$, resp. ${\mathcal U}\Psi_\Gamma^{-\infty}(\widetilde M, \widetilde E)$, resp. ${\mathcal U}\Psi_{{\rm fp},\Gamma}^{-\infty}(\widetilde M, \widetilde E)$, resp. ${\mathcal S}\Psi_\Gamma^{-\infty}(\widetilde M, \widetilde E)$ denote the space of $\Gamma$-invariant operators in $\Psi^{-\infty}(\widetilde M, \widetilde E)$, resp. ${\mathcal U}\Psi^{-\infty}(\widetilde M, \widetilde E)$, ${\mathcal U}\Psi_{\rm fp}^{-\infty}(\widetilde M, \widetilde E)$, ${\mathcal S}\Psi^{-\infty}(\widetilde M, \widetilde E)$.
\end{defn}
The following inclusions follow from (\ref{eq:Shu})
\begin{equation}\label{eq:ShuGamma} {\mathcal U}\Psi_{{\rm fp}, \Gamma}^{-\infty} (\widetilde M, \widetilde E)\subsetneq {\mathcal S}\Psi_\Gamma^{-\infty}(\widetilde M, \widetilde E)\subseteq {\mathcal U}\Psi_\Gamma^{-\infty}(\widetilde M, \widetilde E)\subseteq \Psi_\Gamma^{-\infty}(\widetilde M, \widetilde E).\end{equation}
\subsection{Classes of pseudodifferential operators}
\label{sec:PDOs}
In this section we discuss different classes of pseudodifferential operators on an open manifold of bounded geometry (we also consider very general classes, which possibly do not form algebras).
This short review which brings together and compares different approaches, follows \cite{Shu, MS, Kor, Roe2, E}.
\subsubsection{Classical pseudodifferential operators on manifolds with bounded geometry}
\label{sec:clpsibd}
Let $X$ be an $n$-dimensional manifold and $F\to X$ a vector bundle over $X$ of rank $k$. We assume $X$ and $F$ are both of bounded geometry.
\begin{defnlem}
\label{defn:modulosmoothkernels}
The following relations
\begin{itemize}
\item
$A\sim B\Longleftrightarrow A-B \;\text{ has a smooth kernel}$
\item
$A\underset{ \rm \small diag}{\sim} B\Longleftrightarrow A-B \quad\text{has a smooth kernel supported outside the diagonal} $
\end{itemize}
define equivalence relations on the space ${\mathcal L}\left(C^\infty_c(X, F),C^\infty(X,F)\right)$ of linear operators acting on the space $C^\infty_c(X, F)$ of compactly supported sections of $F$ with values in the space $C^\infty(X,F)$ of smooth sections of $F$. We write
$[ A]$ (resp. $[ A]_{\rm diag}$) for the equivalence class of $A$ with respect to $\sim$ (resp. $\underset{ \rm \small diag}{\sim}$).
\end{defnlem}
\begin{rk}\label{rk:simDelta}
\begin{itemize}
\item Clearly, we have $A\underset{ \rm \small diag}{\sim} B\Longrightarrow A\sim B$.
\item Whereas the equivalence relation $\sim$ is stable under composition of operators (whenever composable), the equivalence relation $\underset{ \rm \small diag}{\sim}$ is not. Indeed, if $A-A_1 =:R$ and $B-B_1=: S$ have smooth kernels supported outside the diagonal, then $AB- A_1B_1= A_1S+ RB_1+ RS$ has a smooth kernel but it might not be supported outside the diagonal since the supports of $A_1$ and $B_1$ intersect the diagonal.
\end{itemize}
\end{rk}
\begin{ex}
Given a linear operator $A:C^\infty_c(X, F)\to C^\infty(X,F),$ any localisation $\chi_1\, A\chi_2$ induced by two smooth functions $\chi_1$, $\chi_2$ whose compact supports have a non void intersection in a trivialising set, is properly supported.
\end{ex}
We shall make use of the existence of a \lq\lq good\rq\rq{} partition of unity, \cite[Lemmas 1.3, 3.22]{Shu} and \cite[A1.1]{Shu}, built as follows. For small enough $\rho$ (smaller than a third of the injectivity radius), there is a countable covering of $X$ by balls $B(x_i,\rho)$ centered at $x_i\in X$ with radius $\rho$ such that $d(x_i,x_j)\geq \rho$ for $i\neq j$ and any point $x\in X$ lies in at most $N$ such balls, for some integer $N$ independent of $x$. Moreover, there is a partition of unity
$1=\sum_i\chi_i$ with smooth functions $\chi_i$ whose supports $\supp \chi_i$ lie in $ B(x_i,2\rho)$ and which, together with their derivatives taken in normal coordinates, are bounded independently of $i$.
\begin{lem}\label{lem:A0S} Given a linear operator $A:C^\infty_c(X, F)\to C^\infty(X,F)$ there is a properly supported operator $A_0:C^\infty_c(X, F)\to C_c^\infty(X,F)$ of finite propagation such that \begin{equation}\label{eq:PDOBG}A\underset{ \rm \small diag}{\sim} A_0.
\end{equation}
For any $\ve>0$, the operator $A_0$ can be chosen $\ve$-local.
\end{lem}
\begin{proof}
Given a \lq\lq good\rq\rq{} countable, locally finite open cover $\mathcal U=(U_i)_{i\in I}$ of $X$ and a \lq\lq good\rq\rq{} partition of unity $(\chi_i)_{i\in I}$ subordinated to $\mathcal U$, we write the operator $A$ as
\begin{equation}
\label{Aij}
A=\sum_{i, j}\chi_iA\chi_j=\underbrace{\sum_{\supp \chi_i\cap\supp\chi_j\neq \emptyset}\chi_i A\chi_j}_{=:\sum_{\{i,j\}\in \mathcal P} A_{ij}}+\sum_{\supp \chi_i\cap\supp\chi_j= \emptyset}\chi_iA\chi_j
\end{equation}
where $\mathcal P$ is the set of pairs $\{i,j\}$ satisfying $\supp \chi_i\cap\supp\chi_j\neq \emptyset$, and $A_{ij}:=\chi_i A\chi_j$. Then, $A_0:= \sum_{\{i,j\}\in \mathcal P} A_{ij}$ is a properly supported operator of finite propagation since each $\chi_i A\chi_j$ is supported in balls with uniformly bounded radii. Moreover, by construction $S(A):=\sum_{\{i,j\}\in \complement\mathcal P}\chi_iA\chi_j$ has Schwartz kernel supported outside the diagonal. \\
For $ \ve>0$, we can choose the diameter of the partition such that $\forall i\in I$, $\diam U_i<\frac{\ve}{2}$, in which case $A_0$ is an $\ve$-local operator.
\end{proof}
\begin{defn}Given any real (resp. complex) number $m$, a linear operator $A\colon C^\infty_c(X, F)\to C^\infty(X,F)$ is a (resp. classical) {\bf pseudodifferential operator} of order $m$ if there is a properly supported operator $A_0\colon C^\infty_c(X, F)\to C_c^\infty(X,F)$ of finite propagation --- for any $\ve>0$, the operator $A_0$ can be chosen $\ve$-local--- with $A\underset{ \rm \small diag}{\sim} A_0$ as in (\ref{eq:PDOBG}) and
such that
\begin{itemize}
\item the operator $S(A):= A-A_0$ lies in $ \Psi^{-\infty}(X,F)$,
\item the operator $A_0$ is a sum $A_0=\sum_\alpha {\rm Op}(\sigma_\alpha)$ (when applied to a compactly supported section, the sum is finite due to the local finiteness of a \lq\lq good\rq\rq{} open cover) of (classical) properly supported pseudodifferential operators ${\rm Op}(\sigma_\alpha)$ of order $m$ supported on \lq\lq good\rq\rq{} open subsets of $X$ and identified via the trivialising charts with pseudodifferential operators on open subsets of $\mathbb{R}^n$. The symbol $\sigma_\alpha$ is interpreted as the symbol $\sigma(A)$ of $A$ seen in the trivialising chart indexed by $\alpha$.
\end{itemize}
By abuse of notation we shall set \begin{equation}\label{eq:abusenotation}{\rm Op}(\sigma(A)):=A_0=\sum_\alpha {\rm Op}(\sigma_\alpha),\end{equation} so that $A\underset{ \rm \small diag}{\sim} {\rm Op}(\sigma(A))$.
Let $\Psi^m(X, F)$ (resp. $\Psi_{\rm cl}^m(X, F)$) denote the class of such operators.
\end{defn}
\begin{rk}
\begin{itemize}
\item
Neither the class $\Psi(X,F):=\cup_{m\in \mathbb{R}} \Psi^m (X,F)$ nor \\$\Psi_{\rm cl}(X,F):=\cup_{m\in \C} \Psi^m_{\rm cl }(X,F)$ forms an algebra since two such operators do not generally compose, compare Remark \ref{rk:nocomp}.
\item
The equivalence relations $\sim$ and $\underset{ \rm \small diag}{\sim}$ induce equivalence relations on $\Psi_{\rm cl}^m(X,F)$ for any $m\in \C$, which preserve the symbol in any trivialising chart.
\end{itemize}
\end{rk}
\subsubsection{Uniform classical pseudodifferential operators}
\label{subsec:ups}
We now specialise to the smaller class of {\bf uniform} pseudodifferential operators, introduced by Shubin and Meladze on Lie groups in \cite{MS} and by Kordyukov \cite{Kor} in the general setting of a bounded geometry manifold (see for instance \cite[Section 3]{Shu2}). As we shall see later, it is an appropriate class to host pseudodifferential operators on coverings and consists of the usual H\"ormander properly supported pseudodifferential operators with additional uniformity conditions.
\begin{defn}
\label{def:upsi}
Given $m\in \mathbb{R}$ (resp. $m\in \C$), let $ {\mathcal U}\Psi^{m}(X, F) $ (resp. $ {\mathcal U}\Psi_{\rm cl}^{m}(X, F) $) be the class of all {\bf uniform (resp. classical) pseudodifferential operators of order $m$} i.e., operators
$A\in \Psi^m(X,F)$ (resp. $A\in \Psi_{\rm cl}(X,F)$)
which in a \lq\lq good\rq\rq{} trivialising covering
$X=\cup_i B(x_i, \rho)$ of $X$ read $A\underset{ \rm \small diag}{\sim} A_0 $ as in (\ref{eq:PDOBG}), where
\begin{itemize}
\item the operator $S(A):=A-A_0$ lies in $ {\mathcal U}\Psi^{-\infty}(X,F)$
\item and $A$ has a {\bf uniformly} bounded symbol $\sigma(A)=\sigma(A_0)$ i.e., for any multiindices $\alpha, \beta$ there is a constant $C_{\alpha,\beta}$ independent of $i$ such that
\begin{equation}
\label{eq:uni-pdo}\Vert \partial_x^\alpha \partial_\xi^\beta \sigma(A) (x,\xi)\Vert\leq C_{\alpha,\beta}\, (1+\vert \xi\vert)^{m-\vert \beta\vert}\quad \forall (x,\xi)\in T^*B(x_i,\rho),
\end{equation}
\item $\left(\right.$resp. for classical operators, with an additional {\bf uniform} bound on the remainder terms in (\ref{eq:classical}), namely
for any multiindices $\alpha, \beta$, for any $N\in \mathbb{N}$, and for any excision function $\chi$ around zero, there is a constant $C_{\alpha,\beta, N}$ independent of $i$ such that
\begin{equation}\label{eq:uni-clpdo}\left. \left\Vert \partial_x^\alpha \partial_\xi^\beta \left(\sigma(A) -\sum_{j=0}^{N-1} \chi\,\sigma_{m-j}(A)\right) (x,\xi)\right\Vert\leq C_{\alpha,\beta,N}\, (1+\vert \xi\vert)^{m-\vert \beta\vert-N}\quad \forall (x,\xi)\in T^* B(x_i,\rho) \right).\end{equation}
\end{itemize}
On the grounds of (\ref{eq:Vaillant}) we can furthermore require that $S(A)$ lies in ${\mathcal S}\Psi^{-\infty}(X,F) $ which defines the following subclasses of operators:
\begin{equation}\label{eq:Vaillant1} {\mathcal S}{\mathcal U}\Psi^m(X,F)\subseteq {\mathcal U}\Psi^m(X,F)\quad \forall m\in \mathbb{R}, \quad {\mathcal S}{\mathcal U}\Psi_{{\rm cl}}^m(X,F)\subseteq {\mathcal U}\Psi_{\rm cl}^m(X,F)\quad \forall m\in \C . \end{equation}
\end{defn}
\begin{rk}As can be seen from \eqref{eq:A1estimate}, for a vector bundle $E\to M$ on a closed manifold $M$, (\ref{eq:uni-pdo}) is verified by any pseudodifferential operator so that
$ {\mathcal U}\Psi^{m}(M, E) = \Psi^{m}(M, E) $. Similarly, (\ref{eq:uni-clpdo}) is satisfied by any classical pseudodifferential operator and we have $ {\mathcal U}\Psi_{\rm cl}^{m}(M, E) = \Psi_{\rm cl}^{m}(M, E) $.
\end{rk}
Similarly to pseudodifferential operators on closed manifolds, uniform pseudodifferential operators modify the degree of regularity of Sobolev spaces by the order of the operator \cite[Remark (c) after Def. 3.3]{Shu}.
\begin{lem}\label{lem:ShuThm1} {} \cite[Proposition 2.20]{E} An operator $A\in {\mathcal U}\Psi^m(X,F)$ extends to a bounded operator
\begin{equation}\label{eq:regm}
\overline A: H^s(X,F)\to H^{s-m}(X,F) \;, \text{ for any } s\in \mathbb{R}.
\end{equation}
\end{lem}
\begin{rk}\begin{itemize}
\item The proof of \cite[Proposition 2.20]{E}, stated for quasi-local uniform pseudodifferential operators, relies on the local finiteness of the covering and does not use quasi-locality. Hence the proof extends to elements of ${\mathcal U}\Psi^m(X,F)$.
\item Consequently, the space of uniform pseudodifferential operators of real order $m$ compares with the space $\mathcal O p ^m(X, F)$ used in \cite[p. 11]{Va} to denote the space of all \lq\lq $m$-regularising operators\rq\rq, i.e. the linear operators $A\colon C_c^\infty(X, F)\to C^\infty_c(X, F)'$ which extend as in \eqref{eq:regm}
$$
{\mathcal S}{\mathcal U}\Psi^m(X, F)\subsetneq {\mathcal U}\Psi^m(X, F)\subsetneq \mathcal O p ^m(X, F)\ .
$$
\end{itemize}
\end{rk}
This leads to the following identifications.
\begin{prop}\label{prop:Comparison} (compare with \cite[Lemma 2.22]{E})
$${\mathcal S}\Psi^{-\infty}(X,F)= \cap_{m\in \mathbb{R}} {\mathcal U}\Psi^m(X,F)={\mathcal U}\Psi^{-\infty}(X,F).$$
Consequently, ${\mathcal S}{\mathcal U}\Psi^m(X,F)= {\mathcal U}\Psi^m(X,F)$ for any real number $m$ and
${\mathcal S}{\mathcal U}\Psi_{\rm cl}^m(X,F)= {\mathcal U}\Psi_{\rm cl}^m(X,F)$ for any complex number $m$.
\end{prop}
\begin{proof}On the one hand, we have
$\cap_{m\in \mathbb{R}}{\mathcal U}\Psi^m(X,F)\subset {\mathcal S}\Psi^{-\infty}(X,F)$ as a consequence of \eqref{eq:regm}. On the other hand, we know by (\ref{eq:Roe}) that ${\mathcal S}\Psi^{-\infty}(X,F)\subset {\mathcal U}\Psi^{-\infty}(X,F)\subset{\mathcal U}\Psi^m(X,F)$ for any real number $m$. Hence ${\mathcal S}\Psi^{-\infty}(X,F)\subset \cap_{m\in \mathbb{R}} {\mathcal U}\Psi^m(X,F)$ and the first identity follows.\\
As for the second identity, we have the straightforward inclusion ${\mathcal U}\Psi^{-\infty}(X,F)\subset {\mathcal U}\Psi^m(X,F)$ for any real number $m$ which yields the inclusion from right to left. The inclusion from left to right follows from observing that the uniform estimates \eqref{eq:uni-pdo} imply the uniform boundedness of the derivatives of the Schwartz kernel of the operator.
\end{proof}
\subsection{$\Gamma$-invariant classical pseudodifferential operators on covering spaces}
\label{sec:cover}
Let $M$ be a (connected) closed manifold and $\pi\colon \widetilde M\to M$ a regular $\Gamma$-covering as in Section \ref{sec:smoothG}.
\begin{defn}
\label{def:gammaclasses2}
Imposing the $\Gamma$-invariance condition \eqref{invariance} leads to the following subclasses
$\Psi_\Gamma^m(\widetilde M,\widetilde E)$, ${\mathcal U}\Psi_\Gamma^m(\widetilde M,\widetilde E)$, $ \Psi_\Gamma^{-\infty}(\widetilde M,\widetilde E)$, ${\mathcal U}\Psi_\Gamma^{-\infty}(\widetilde M,\widetilde E)$ of $\Gamma$-invariant operators in the corresponding classes
defined in Section \ref{sec:clpsibd}. The spaces $\Psi_{ {\rm cl},\Gamma}^m(\widetilde M,\widetilde E)$ and ${\mathcal U}\Psi_{ {\rm cl}, \Gamma}^m(\widetilde M,\widetilde E)$ are defined analogously.
\end{defn}
\begin{rk}
\label{rk:Gamma-sup} As a consequence of the cocompactness of the action of $\Gamma$ on $\widetilde M$,
a $\Gamma$-invariant operator on $\widetilde M$ is properly supported if and only if its Schwartz kernel has compact support in $(\widetilde M\times \widetilde M)/\Gamma$, see \cite[Chapter 3, above Definition 3]{ShuG}.
\end{rk}
We equip $\widetilde M$ with a $\Gamma$-invariant locally finite open cover in the following way: given a finite open cover $\mathcal U_M=\{U_j, j=1, \cdots, N\}$ of $M$, we lift it to $\widetilde M$ and take all the connected components of the preimages, so as to obtain a cover by connected open subsets. We obtain a $\Gamma$-invariant locally finite open cover
$
\widetilde M=\bigcup_{{j=1,..,N \atop \gamma\in \Gamma}} \gamma\, U_j.
$
We then build a $\Gamma$-invariant partition of unity \begin{equation}\label{eq:tildechi}\widetilde\chi_j:=\{\chi_{j, \gamma}\in C^\infty_c(\gamma\, U_j), \gamma\in\Gamma\}_{ j=1,\cdots, N}\end{equation} subordinated to this cover, with $\chi_{j, \gamma}(x)=\chi_{j, e}(\gamma^{-1}x)$. This way, a partition of unity $\{\chi_j\}_{j=1,\cdots, N}$ of $M$ subordinated to the covering $\mathcal U_M$ is lifted to a $\Gamma$-invariant partition of unity $\{\widetilde \chi_j\}_{j=1,\cdots, N}$, which is a \lq\lq good\rq\rq{} partition of unity in the sense of manifolds with bounded geometry.
Such a partition of unity combined with a subordinated trivialisation of $\widetilde E$ can be used to construct Sobolev spaces $H^s(\widetilde M,\widetilde E)$ of sections on $\widetilde E$ \cite[Definition 1, \textsection 3.9]{Sch}.
As a consequence of the corresponding property on manifolds with bounded geometry, see Lemma \ref{lem:ShuThm1}, we have:
\begin{rk}
\begin{itemize}
\item An operator $A\in {\mathcal U}\Psi^m_\G(\widetilde M,\widetilde E)$ extends to a bounded operator\\ $H^s(\widetilde M,\widetilde E)\to H^{s-m}(\widetilde M,\widetilde E) $, for any $ s\in \mathbb{R}$.
\item Consequently, by Proposition \ref{prop:Comparison}, ${\mathcal U}\Psi_\G^{-\infty}(\widetilde M,\widetilde E)=\cap_{m\in \mathbb{R}}{\mathcal U}\Psi^m_\G(\widetilde M,\widetilde E)$.
\end{itemize}
\end{rk}
\subsection{Lifted operators}
As proved in \cite{ShuG}, $\ve$-local pseudodifferential operators can be lifted from $M$ to $\widetilde M$, and their lifts are uniform properly supported operators.
We include the proof of this classical fact for completeness.
\begin{lem}
{}\cite[Proposition 1, \textsection 3.9]{ShuG}
\label{prop:ep-local}
With the notation as above, let $r_0:=\inf_{x\in \widetilde M} \{d(x, \gamma x), \gamma\in \Gamma\setminus \{e\}\}>0$, where $e$ is the unit of $\,\G$, and let $A\colon C^\infty( M, E)\rightarrow C^\infty (M, E)$ be an $\ve$-local operator with $\ve<\frac{r_0}{2}$.
\begin{enumerate}
\item
There exists a unique $\ve$-local operator $\widetilde A\colon C_c^\infty (\widetilde M,\widetilde E)\to C^\infty (\widetilde M,\widetilde E)$ such that for any lift $\widetilde s $ to $\widetilde E$ of a local section $s$ of $E$ \begin{equation}\label{tildepistar}\widetilde A\, \widetilde s=\pi^*(As). \end{equation}
With the notations of the introduction, we write $\pi_\sharp (\widetilde A)= A; \quad \pi^\sharp ( A)= \widetilde A.$
\item If moreover $A $ lies in $ \Psi^m(M, E) $ for some $m\in \mathbb{R}$ (resp. $\,\Psi_{cl}^m(M, E)$ for some $m\in \C$), we have (with the notation of \eqref{eq:simsymb}, see Appendix A)
\begin{equation}
\label{eq:liftedsymbolm}
\sigma(\widetilde A) = \widetilde {\sigma (A)}, \quad \left({\rm resp.}\quad \sigma_{m-j} (\widetilde A) = \reallywidetilde {\sigma_{m-j}(A)},\quad \forall j\geq 0\right).
\end{equation} This is
to be understood as a local identity in appropriate local trivialisations around a point $ x$.
In particular, $\widetilde A$ lies in $ {\mathcal U}\Psi_\Gamma^m( \widetilde M,\widetilde E)$ (resp. $ \,{\mathcal U}\Psi_{{\rm cl}, \Gamma}^m(\widetilde M,\widetilde E)$).
\end{enumerate}
\end{lem}
\begin{proof}
We first note that if $d(x,y)<\ve$, then $d(\gamma x, y)>\ve$ for all $ e\neq \gamma\in \Gamma$: indeed, $d(\gamma x, y)\geq d(\gamma x, x)-d(x,y)> r_0-\ve>\ve$ since $\ve<\frac{r_0}{2}$. If $K_A$ denotes the Schwartz kernel of $A$, we define $\widetilde A$ as the operator with Schwartz kernel
$$
K_{\tilde A}= \left\{\begin{array}{cc}K_A(\pi(x),\pi(y)), & d(x,y)<\ve\\ 0\;\;\;\;, & \text{elsewhere}.\end{array}\right.
$$
To show (2), let $(V,\Phi)$ be a local trivialisation of $E$ where $V$ is an evenly covered open set. Recall that the symbol of $A$ on this local chart, denoted by $\sigma_V (A)(x, \xi)$, is by definition the symbol of the operator $\Phi^\sharp A_V$ acting on matrix valued functions on $\phi(V)\subset \mathbb{R}^n$, where $A_V$ is the localization of $A$ on $V$. The symbol of the lifted operator $\widetilde A$ is described as follows. Let $\pi^{-1}(V)=\bigsqcup_{\gamma\in \Gamma}U_\gamma$; on each local chart $(U_\gamma, \phi\circ \pi)$ the symbol is defined as the symbol of $(\Phi\circ \pi)^\sharp \widetilde A_{U_\gamma}$. It follows that $
\sigma_{U_\gamma}(\widetilde A)( x,\xi)=\sigma_V(A)(\pi( x),\xi)$. \\
The fact that $\widetilde A$ lies in $ {\mathcal U}\Psi_\Gamma^m( \widetilde M,\widetilde E)$ (resp. $ \,{\mathcal U}\Psi_{{\rm cl}, \Gamma}^m(\widetilde M,\widetilde E)$) then follows from \eqref{eq:liftedsymbolm}, combined with the fact that on the closed manifold $M$ one has $ {\mathcal U}\Psi^m( M, E)= \Psi^m( M, E)$ (resp. $ \,{\mathcal U}\Psi_{{\rm cl}}^m( M, E) = \Psi_{{\rm cl}}^m( M, E)$), so that the lifted symbol satisfies the uniform estimates (\ref{eq:uni-pdo}) (resp. (\ref{eq:uni-clpdo})).
\end{proof}
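The following elementary example, which we include for illustration, makes \eqref{eq:liftedsymbolm} explicit.
\begin{ex} For the covering $\pi\colon \mathbb{R}^n\to \mathbb{T}^n$ with $\Gamma=\mathbb{Z}^n$, the flat Laplacian $\Delta=-\sum_{i=1}^n\partial_{x_i}^2$ on $\mathbb{T}^n$ is a ($0$-local) differential operator which lifts to the flat Laplacian $\widetilde \Delta$ on $\mathbb{R}^n$. In the canonical charts both operators have symbol $\vert\xi\vert^2$, so that $\sigma(\widetilde\Delta)=\reallywidetilde{\sigma(\Delta)}$, in accordance with \eqref{eq:liftedsymbolm}.
\end{ex}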
We now combine Lemma \ref{prop:ep-local} with the partition of unity \eqref{eq:tildechi} to lift operators {\it modulo $\underset{ \rm \small diag}{\sim}$}, and obtain therefore a \lq\lq lifted\rq\rq{} analogue of Proposition \ref{prop:PsidoU}.
\begin{prop}
\label{prop:ShubinTh1}
Let $0<\ve<\frac{r_0}{2}$ and let $\mathcal A$ be an operator in $\Psi_\Gamma(\widetilde M, \widetilde E)$.
\begin{enumerate}
\item If $\mathcal A$ is $\ve$-local, with the notation of \eqref{tildepistar}, there exists a unique $A $ such that \begin{equation}\label{eq:tildeAtildes}\mathcal A=\pi^\sharp A.\end{equation}
\item In general, there exists an $\ve$-local operator $A\in \Psi(M, E)$
such that \begin{equation}\label{eq:mathfrakAtildeA}\mathcal A\underset{ \rm \small diag}{\sim} \widetilde A.\end{equation}
Consequently, the symbols of the two operators relate by
\begin{equation}\label{eq:liftedsymbol}\sigma(\mathcal A)\sim \reallywidetilde{\sigma(A)},\end{equation}
independently of the choice of $\widetilde A\in [\mathcal A ]_{\rm diag}$.
If $\mathcal A$ is $\ve$-local, in particular if it is a differential operator, then $\mathcal A =\widetilde {\pi_* \mathcal A}.$
\end{enumerate}
\end{prop}
\begin{proof}
Let $\{\widetilde \chi_j\}_{ j=1, \cdots, N}$ be a $\Gamma$-invariant partition of unity subordinated to a cover $X=\bigcup_{{j=1,..,N \atop \gamma\in \Gamma}} \gamma\, U_j
$ with open sets $U_j$ of diameter smaller than $\ve$. As in the proof of Lemma \ref{lem:A0S} we write a $\G$-invariant operator $\mathcal A\in \Psi_\Gamma (\widetilde M, \widetilde E)$
as
$$
\mathcal A=\sum_{i, j}\widetilde \chi_i \mathcal A\widetilde \chi_j=\sum_{\supp \widetilde \chi_i\cap\supp\widetilde \chi_j\neq \emptyset}\widetilde \chi_i\mathcal A\widetilde \chi_j+\sum_{\supp \widetilde \chi_i\cap\supp \widetilde\chi_j= \emptyset}\widetilde \chi_i\mathcal A\widetilde \chi_j,
$$
where with a slight abuse of notation, using the notations of \eqref{eq:tildechi}, we have set $\supp \widetilde \chi_i= \cup_{\gamma\in \Gamma}\supp \widetilde \chi_{i,\gamma}$.\\
Choosing the diameter of the partition small enough and applying Lemma \ref{prop:ep-local} to the $\ve$-local operators $\widetilde \chi_i\, \mathcal A\,\widetilde \chi_j $, we have $$\widetilde \chi_i\, \mathcal A\,\widetilde \chi_j =\reallywidetilde { \pi_\sharp(\widetilde \chi_i\, \mathcal A\, \widetilde \chi_j)}, $$ which yields
\begin{equation}
\label{eq:localdescriptionA}
\mathcal A= \sum_{\supp\widetilde \chi_i\cap\supp\widetilde \chi_j\neq \emptyset}\reallywidetilde { \pi_\sharp(\widetilde \chi_i\, \mathcal A\, \widetilde \chi_j)}+ S(\mathcal A)=\widetilde A+ S(\mathcal A)
\end{equation}
with
\begin{equation}
A:= \sum_{\supp\widetilde \chi_i\cap\supp\widetilde \chi_j\neq \emptyset}\pi_\sharp\left(\widetilde\chi_i \, \mathcal A\,\widetilde\chi_j\right)
\end{equation}
and
$$
S(\mathcal A):=\mathcal A- \widetilde A=\sum_{\supp \widetilde\chi_i\cap \supp\widetilde\chi_j= \emptyset}\widetilde \chi_i \,\mathcal A\,\widetilde \chi_j\ ,
$$
a linear operator with smooth kernel supported outside the diagonal. \\
If $\mathcal A$ is $\ve$-local, then the above construction reduces to
$$
\mathcal A= \reallywidetilde{\sum_{\supp \chi_i\cap\supp \chi_j\neq \emptyset}\, \chi_i\, (\pi_*\mathcal A)\, \chi_j}=
\widetilde{\pi_*\mathcal A}.
$$
This proves \eqref{eq:mathfrakAtildeA} from which \eqref{eq:liftedsymbol} then follows.
\end{proof}
\begin{rk}In view of \eqref{eq:liftedsymbol}, properties of pseudodifferential operators which only depend on the symbol, such as classicality, the order and the invertibility of the principal symbol, can be lifted without ambiguity.
\end{rk}
On the grounds of the above Remark, we set the following
\begin{defn}\label{def:elliptic_frak} With the notations of Proposition \ref{prop:ShubinTh1},
an operator $\mathcal A$ in ${\mathcal U}\Psi_{{\rm cl},\Gamma}(\widetilde M, \widetilde E)$ is {\bf elliptic} whenever the operator $A\in \Psi(M,E)$ of \eqref{eq:mathfrakAtildeA} is elliptic, i.e. whenever its principal symbol is invertible.
\end{defn}
On the grounds of Proposition \ref{prop:ShubinTh1}, we further set the following
\begin{defn}
\label{defn:classlift}
Let $ A\in \Psi_{\rm cl}(M,E)$ and let $A_0$ be $\ve$-local such that $A\underset{ \rm \small diag}{\sim} A_0$ as in \eqref{eq:PDOBG}. We define the lift of the class $[A]_{\rm diag}$ as
\begin{equation}
\label{eq:liftedOp}
\reallywidetilde {[A]_{\rm diag}}:= {[\widetilde A_0]}_{\rm diag}\ .
\end{equation}
\end{defn}
With this definition at hand, for any $\mathcal A\in \reallywidetilde {[A]_{\rm diag}}$ we have
\begin{equation}\label{eq:sigmafraktildeAzero}\sigma(\mathcal A)\sim \reallywidetilde {\sigma(A_0)}.\end{equation}
\subsection{Lifting functions of operators}
Let $E$ be a hermitian vector bundle over the closed Riemannian manifold $M$.\\
We borrow the following definition from \cite{ALNP}.
\begin{defn}
\label{defn:weight}
We call a {\bf weight} in $\Psi_{\rm cl}(M, E)$, an operator $Q\in \Psi_{\rm cl}(M, E)$ such that
\begin{enumerate}
\item $Q$ is invertible, namely its kernel is trivial or, equivalently, it admits an inverse defined on $L^2( M, E)$,
\item $Q$ has positive order $q$,
\item $Q$ has a principal angle $\theta$, which means that there exists a ray $R_\theta= \{re^{i\theta}, \; r\geq 0\}$, called a {\bf spectral cut}, which is disjoint from the spectrum of the ${\rm End}(E_x)$-valued leading symbol $\sigma_L(Q)(x,\xi)$ for any $x\in M$, $\xi \in T_x^*M \setminus \{0\}$.
\end{enumerate}
This last condition implies that the spectrum of the operator $Q$ lies outside a cone $\Lambda_\theta$ containing the ray $R_\theta$, \cite[Lemma 1.6]{ALNP}.
\end{defn}
\begin{ex}Let $D$ in $ \Psi^d_{\rm cl}(M, E)$ be an essentially self-adjoint elliptic differential operator of positive order $d$. Then $\Delta:= D^2$ is a non-negative elliptic differential operator on $M$ of positive order $q:=2d$ and the operator $\Delta+1$ is a differential operator which defines a weight.
\end{ex}
\begin{ex}\label{ex:Delta} With the same notations as in the above example, we can instead add to $\Delta$ a smoothing operator $ \chi_{[-\ve,\ve]}(\Delta)$ with $\ve>0$ chosen small enough so that it coincides with the orthogonal projection $\chi_0(\Delta)$ onto the kernel of $\Delta$. Then \begin{equation}
\label{eq:Qeps}Q_\ve(\Delta):= \Delta+\chi_{[-\ve,\ve]}(\Delta)
\end{equation} defines a weight with spectral cut $R_\pi=\mathbb{R}_{\leq 0}$ .
\end{ex}
A weight $Q\in \Psi_{\rm cl}(M,E)$ satisfies a resolvent estimate, see \cite[(9.30)]{Shub} and \cite[Cor. 1, p. 298]{Se}:
\begin{equation}
\label{eq:estimateR}
\|(Q-\lambda )^{-1}\|_{s, s+l}\leq C_{s,l} \,\vert\lambda\vert^{-1+\frac{l}{q}} \;\;\forall \,0\leq l\leq q\;\; \forall \la\in \Lambda_\theta\cap\{|\la|>R>0\}.
\end{equation}
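To give a flavour of \eqref{eq:estimateR}, here is a sketch of the estimate, carried out under the additional assumption that $Q$ is the self-adjoint non-negative weight $Q_\ve(\Delta)$ of Example \ref{ex:Delta}, with spectral cut $R_\pi$ and order $q$. For $\la\in \Lambda_\pi\cap\{\vert\la\vert>R\}$, the spectral theorem and elliptic regularity give
\begin{equation*}
\|(Q-\la )^{-1}\|_{0, 0}\leq \frac{1}{{\rm dist}\left(\la, \spec (Q)\right)}\leq C\,\vert\la\vert^{-1},
\qquad
\|(Q-\la )^{-1}\|_{0, q}\leq C'\left(\|Q\,(Q-\la)^{-1}\|_{0,0}+ \|(Q-\la)^{-1}\|_{0,0}\right)\leq C'',
\end{equation*}
the second estimate using $Q\,(Q-\la)^{-1}={\rm Id}+\la\, (Q-\la)^{-1}$; interpolation then yields the intermediate cases $0< l< q$, which is \eqref{eq:estimateR} for $s=0$.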
Let $\Gamma_\theta$ be a contour around the ray $R_\theta$, then to a measurable function $h$ on $\Gamma_\theta$, such that $\vert h(\lambda)\vert \leq \vert \lambda\vert^{-\delta} $ for some positive $\delta$, we can associate the operator \begin{equation}\label{eq:hQ}h(Q) :=-\frac{1}{2i\pi} \int_{\Gamma_\theta} h(\lambda)\, (Q-\lambda)^{-1}\, d\lambda,\end{equation}
whose symbol is given by the corresponding Cauchy integral \begin{equation}\label{eq:sigmahQ}\sigma(h(Q))\sim h_\star (\sigma(Q)):=-\frac{1}{2i\pi}\, \int_{\Gamma_\theta} h(\lambda)\, (\sigma(Q)-\lambda)^{\star -1}\, d\lambda, \end{equation}
where the exponent $\star k$ stands for the $k$-th power with respect to the $\star$-product of symbols. We refer the reader to any book on pseudodifferential operators for the precise definition of the $\star$-product, see e.g. \cite[(3.41)]{Shub}.
\begin{ex}
For a polynomial $h(x)=\sum_{k=0}^n a_k x^k$, \eqref{eq:hQ} yields the operator $h(Q):=\sum_{k=0}^n a_k \, Q^k$ with symbol $h_\star(\sigma(Q))= \sum_{k=0}^n a_k \sigma(Q)^{\star k}$.
\end{ex}
\begin{ex}
If $Q=D^2$, with $D$ an essentially self-adjoint operator, then the function $h(x)=\frac{1}{\sqrt{ x}}$ yields the operator $\vert D\vert^{-1}:= Q^{-\frac{1}{2}}$ from which we build the sign operator
\begin{equation}
\label{eq:sgn}
{\rm sgn}(D):= D\,h(Q)= D\, \vert D\vert^{-1}\quad \text{with symbol} \quad \sigma\left({\rm sgn}(D)\right)\sim h_\star (\sigma(D)):=\sigma(D)\star \left(\sigma(\Delta)\right)^{\star-\frac{1}{2}}.
\end{equation}
\end{ex}
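For orientation, let us spell out what \eqref{eq:sgn} gives at leading order in a standard special case, namely under the additional assumption that $D$ is a Dirac-type operator, i.e. that $\sigma_L(D)(x,\xi)=c(\xi)$ is given by Clifford multiplication, so that $c(\xi)^2=\vert\xi\vert^2\,{\rm Id}$. Since the leading symbol of a product (resp. of a complex power) is the product (resp. the power) of the leading symbols,
\begin{equation*}
\sigma_L\left({\rm sgn}(D)\right)(x,\xi)=\sigma_L(D)(x,\xi)\,\left(\sigma_L(\Delta)(x,\xi)\right)^{-\frac{1}{2}}=\frac{c(\xi)}{\vert\xi\vert},
\end{equation*}
which is homogeneous of degree zero, consistently with ${\rm sgn}(D)$ being of order zero.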
Definition \ref{defn:weight} carries over to ${\mathcal U}\Psi_\Gamma(\widetilde M, \widetilde E)$, up to the fact that, in contrast with the closed case (see \cite[Def. 3.6]{ALNP} for details), the spectrum is not necessarily purely discrete in the noncompact case, so we need an extra condition ensuring the existence of an Agmon angle, defined as follows.
\begin{defn}\label{def:agmonL2} For an angle $\beta$ and for $\epsilon >0$, denote
$$V_{\beta,\epsilon}:=\{z\in \C\;: |z|<\epsilon \}\cup \{z\in \C\setminus 0\,:\; {\rm arg} z \in (\beta-\epsilon, \beta+\epsilon)\} .$$
Then $\beta$ is called an {\bf Agmon angle} for $A\in {\mathcal U}\Psi_{{\rm cl},\Gamma}(\widetilde M, \widetilde E )$ if there is some $ \epsilon >0$ such that $\spec( A)\cap V_{\beta, \epsilon}=\emptyset$.
\end{defn}
\begin{defn}
We call a {\bf weight} in ${\mathcal U}\Psi_\Gamma(\widetilde M, \widetilde E)$, an operator $\mathfrak Q\in {\mathcal U}\Psi_\Gamma(\widetilde M, \widetilde E)$ such that
\begin{enumerate}
\item $ \mathfrak Q$ is invertible in the strong sense of the term, namely that it admits an inverse defined on $L^2(\widetilde M, \widetilde E)$,
\item $\mathfrak Q$ has positive order $q$,
\item $ \mathfrak Q$ has a principal angle $\theta$ as in Definition \ref{defn:weight},
\item $\theta$ is an Agmon angle for $\mathfrak Q$.
\end{enumerate}
\end{defn}
\begin{rk} \label{rk:sigmafrak}
In view of \eqref{eq:liftedsymbol}, for any weight $\mathfrak Q\in {\mathcal U}\Psi_\Gamma(\widetilde M, \widetilde E)$, there exists an operator $Q$ in $\Psi( M, E)$ such that
\begin{equation}\label{eq:sigmafrak}\sigma(\mathfrak Q)\sim \reallywidetilde{\sigma( Q)}
\end{equation} so that it has the same order and same principal angle. Moreover it can be chosen invertible modulo addition of the projection onto its kernel. Hence for any weight $\mathfrak Q$ in ${\mathcal U}\Psi_\Gamma(\widetilde M, \widetilde E)$ there is a weight $ Q$ in $\Psi( M, E)$ with the same spectral cut and
such that \eqref{eq:sigmafrak} holds.
\end{rk}
\begin{figure}[ht]
\centering
\includegraphics[]{disegno1.pdf}
\caption{Agmon angle $\beta$}
\label{fig1}
\end{figure}
\begin{lem}\label{lem:hstarQ} Let $\mathfrak Q$ be a weight in ${\mathcal U}\Psi_\Gamma(\widetilde M, \widetilde E)$ with spectral cut $R_\theta$ and let $ Q$ be a weight in $\Psi( M, E)$ with the same spectral cut as in Remark \ref{rk:sigmafrak}.
With the notations of \eqref{eq:sigmafrak}, for every measurable function $h$ on a contour $\Gamma_\theta$ around the ray $R_\theta$, such that
$\vert h(\lambda)\vert\leq\vert \lambda\vert^{-\delta} $ for some positive $\delta$,
we have:
\begin{equation}\label{eq:hstarQ}
h_\star\left(\sigma(\mathfrak Q)\right)\sim\reallywidetilde{h_\star(\sigma(Q))},\end{equation}
where we have used the notation of (\ref{eq:sigmahQ}).
\end{lem}
\begin{proof}
The star product $\star$, which is a local operation for it only involves derivatives, commutes with the lift. For two local symbols $\sigma$ and $\tau$, we have
$\reallywidetilde{\sigma\star \tau}\sim \widetilde\sigma\star \widetilde \tau$, which implies $\reallywidetilde{(\sigma-\lambda)^{\star -1}}\sim (\widetilde \sigma-\lambda)^{\star -1}$ and hence
$$\reallywidetilde{ h_\star (\sigma )}\sim h_\star (\widetilde \sigma)=-\frac{1}{2i\pi}\, \int_{\Gamma_\theta} h(\lambda)\, (\widetilde \sigma-\lambda)^{\star -1}\, d\lambda.$$ Applying $h_\star$ therefore yields
$$\sigma(\mathfrak Q )\sim\reallywidetilde{\sigma(Q)}\Longrightarrow h_\star(\sigma(\mathfrak Q))\sim h_\star(\reallywidetilde{\sigma(Q)})\Longrightarrow h_\star(\sigma(\mathfrak Q))\sim \reallywidetilde{h_\star(\sigma(Q))} .$$
\end{proof}
\begin{ex} Let $Q$ be a weight with spectral cut $R_\theta$.
For $\Re(z)>0$, the function $h(x)=x_\theta^{-z}$ with the complex power determined by the angle $\theta$, yields the complex power
\begin{equation}\label{Qz}Q_\theta^{-z}:=-\frac{1}{2i\pi}\, \int_{\Gamma_\theta} \lambda_\theta^{-z}\, (Q-\lambda)^{-1}\, d\lambda,\end{equation}
which can be extended to any complex value $z$ by setting $Q_\theta^{-z}:=Q^k\, Q_\theta^{-z-k}$ for $\Re(z)>-k$.
\end{ex}
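As a quick consistency check on orders (a sketch, carried out for the scalar weight $Q=\Delta+1$ of order $q=2$ with spectral cut $R_\pi$, where $\Delta$ is the scalar Laplacian of a closed Riemannian manifold): since the leading symbol of a complex power is the corresponding power of the leading symbol, \eqref{Qz} gives
\begin{equation*}
\sigma_L\left(Q_\pi^{-z}\right)(x,\xi)=\left(\sigma_L(Q)(x,\xi)\right)^{-z}=\vert\xi\vert^{-2z},
\end{equation*}
so that $Q_\pi^{-z}$ has order $-2z$, in accordance with the general fact that $Q_\theta^{-z}$ has order $-qz$.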
With the same notations as in Example \ref{ex:Delta}, the differential operator $\Delta$ lifts to a differential operator $\widetilde \Delta$ whose leading symbol is the lifted leading symbol of $\Delta$.
With the notation introduced in Appendix \ref{sec:appHM}, the isomorphism $\Phi$ induces a map
\begin{eqnarray*}
{\mathcal U}\Psi_{{\rm cl},\Gamma} (\widetilde M, \widetilde E) &\longrightarrow &\Psi(M; E\otimes \mathcal H)\\
A&\longmapsto &\Phi^\sharp A.
\end{eqnarray*}
Let $\Delta_{\mathcal H}:=\Phi^\sharp \widetilde \Delta$ be the corresponding elliptic operator in $\Psi_{\rm cl}(M, E\otimes {\mathcal H})$.
\begin{prop}
\label{prop:chieps} For any positive $\ve$, the operator \begin{equation}\label{eq:tildeQeps}Q_\ve(\widetilde \Delta):=\widetilde \Delta+\chi_{[-\ve,\ve]}(\widetilde \Delta),\end{equation} resp. $Q_\ve(\Delta_{\mathcal H}):=\Delta_{\mathcal H}+\chi_{[-\ve,\ve]}(\Delta_{\mathcal H})$, defines a weight in ${\mathcal U}\Psi_{{\rm cl},\Gamma} (\widetilde M, \widetilde E)$, resp. $\Psi_{\rm cl}(M, E\otimes \mathcal H)$, with spectral cut $R_\pi=\mathbb{R}_{\leq 0}$ and we have
\begin{equation}\label{eq:QHtilde}Q_\ve(\Delta_{\mathcal H})=\Phi^\sharp Q_\ve(\widetilde \Delta).
\end{equation}
\end{prop}
\begin{proof} To ensure that the operator $Q_\ve(\widetilde \Delta)$, resp. $Q_\ve(\Delta_{\mathcal H})$, defines a weight (for the latter, see also \cite{ALNP}), we need to check that
\begin{enumerate}
\item the operator $\chi_{[-\ve,\ve]}(\widetilde \Delta)$, resp. $\chi_{[-\ve,\ve]}( \Delta_{\mathcal H})$, has a smooth Schwartz kernel in $\Psi_\Gamma(\widetilde M, \widetilde E)$, resp. $\Psi(M, E\otimes \mathcal H)$, which follows from \cite[Cor 3.6]{Va}, resp. from \cite{BFKM}.
\item the operator $\widetilde \Delta+\chi_{[-\ve,\ve]}(\widetilde \Delta)$, resp. $ \Delta_{\mathcal H}+\chi_{[-\ve,\ve]}( \Delta_{\mathcal H})$ is invertible, which is an immediate consequence of its non-negativity.
\end{enumerate}
The compatibility of the map $\Phi$ with functional calculus (see Proposition \ref{prop:dict}) implies \eqref{eq:QHtilde}.
\end{proof}
Vassout's functional calculus on groupoids, recalled in Appendix \ref{sec:groupoids}, allows us to extend the notion of weight to the groupoid $G(M):=(\widetilde M\times \widetilde M)/\Gamma$ associated with ${\mathcal U}\Psi_{{\rm cl},\Gamma} (\widetilde M, \widetilde E)$, by the isomorphism $\rho$ defined in \eqref{eq:G_x}. Indeed, the map
\begin{eqnarray*}
{\mathcal U}\Psi_{{\rm cl},\Gamma} (\widetilde M, \widetilde E)&\longrightarrow& \Psi(G, E)\\
A&\longmapsto & \rho^\sharp A,
\end{eqnarray*}
thus induced preserves the properties 1)-3) of a weight and therefore transforms a weight $\mathfrak Q\in{\mathcal U}\Psi_{{\rm cl},\Gamma} (\widetilde M, \widetilde E)$ with spectral cut $\theta$ to a weight $\rho^\sharp (\mathfrak Q)$ on the associated groupoid with the same spectral cut. The map $\rho^\sharp$ preserves the estimate \eqref{eq:estimateR} on weights, which enables us to transport the related functional calculus from the groupoid to ${\mathcal U}\Psi_{{\rm cl},\Gamma} (\widetilde M, \widetilde E)$. With the notations of \eqref{eq:hQ} and for a measurable function $h$ on a contour $\Gamma_\theta$ around the ray $R_\theta$, such that $\vert h(\lambda)\vert\leq \vert\lambda\vert^{-\delta} $ for some positive $\delta$, we can define
\begin{equation}\label{eq:rhohQ}h(\mathfrak Q)=\left(\rho^\sharp\right)^{-1} \left( h(\rho^\sharp \mathfrak Q)\right),
\end{equation}
whose symbol is given by \begin{equation}\label{eq:sigmahfrakQ}\sigma(h(\mathfrak Q))\sim h_\star (\sigma(\mathfrak Q)):=-\frac{1}{2i\pi}\, \int_{\Gamma_\theta} h(\lambda)\, (\sigma(\mathfrak Q)-\lambda)^{\star -1}\, d\lambda.\end{equation}
\begin{ex}
For $\Re(z)>0$, the function $h(x)=x_\theta^{-z}$ with the complex power determined by the angle $\theta$, yields the complex power
\begin{equation}\label{frakQz}\mathfrak Q_\theta^{-z}:=-\frac{1}{2i\pi}\, \int_{\Gamma_\theta} \lambda_\theta^{-z}\, (\mathfrak Q-\lambda)^{-1}\, d\lambda,\end{equation}
where $\mathfrak Q\in{\mathcal U}\Psi_{{\rm cl},\Gamma} (\widetilde M, \widetilde E)$ is a weight with spectral cut $R_\theta$.
\end{ex}
\subsection{Lifting complex powers to coverings}
We now specialise to $h:x\mapsto x_\theta^{-z}$, where $\theta$ stands for the determination of the complex power. Applied to a weight $Q\in \Psi_{\rm cl}(M,E)$ with spectral cut $R_\theta$, this gives rise to complex powers $Q_\theta^{-z} \in \Psi_{\rm cl}(M,E)$.
\begin{rk} Let $m\in \mathbb{R}$, $z\in \C$, and let $Q\in \Psi^m_{\rm cl}(G, E)$ be a positive (with respect to the $L^2$-inner product), elliptic, invertible operator. It defines a weight with spectral cut $R_\pi=\mathbb{R}_{\leq 0}$ (we drop the mention of $\theta=\pi$ in the notation). The complex power $Q^{-z}$
is defined in \cite[p. 25]{Vas} according to \eqref{frakQz}
and proved to belong to $ \Psi_{\rm cl}^{-mz} (G, E)$ and to act as an element of $\mathcal L(H^{t-m\Re z}(\mathcal W), H^t(\mathcal W))$ for any $t\in \mathbb{R}$. By Proposition \ref{prop:Gvscov}, the inverse map ${\rho^\sharp}^{-1} $ identifies the operator $Q^{-z}$ with the operator ${\rho^\sharp}^{-1}(Q^{-z})$ in ${\mathcal U}\Psi_{\rm cl}^{-mz} (\widetilde M, \widetilde E)$. \\
Vassout's construction, which is carried out for positive operators, easily extends to any weight using an appropriate spectral cut.
\end{rk}
Families $z\mapsto Q^{-z}$ of complex powers are holomorphic, a notion we briefly recall.
\begin{defn}\label{defn:holfamilies} Let $U$ be an open
subset of $\mathbb{R}^n$, let $V$ be a linear space and let $W$ be a domain in $\C$. A holomorphic
family of classical (also called polyhomogeneous) symbols on $U$ with values in End$(V)$ parametrized by $W$ of order $\alpha:W\to \C$ is a function
$$\s(z)(x,\xi) := \s(z,x,\xi)\in\Ci(W \times U \times \mathbb{R}^n,
{\rm End} (V))$$ for which:
\begin{enumerate}
\item $\s(z)(x,\xi)$ is holomorphic at $z\in W$ as an element of
$\Ci(W \times U \times \mathbb{R}^n, {\rm End } V)$ and
\begin{equation}\label{e:logclassical}
\s(z)(x,\xi) \sim \sum_{j\geq 0}
\s_{\alpha(z)-j}(z)(x,\xi)
\end{equation}
is a classical symbol of order $\alpha(z)$, the order function $\alpha$ being holomorphic on $W$;
\item for any integer $N\geq 1$ the remainder
$\ds\sigma_{(N)}(z)(x, \xi):= \sigma(z)(x,\xi)- \sum_{j=0}^{N-1}
\sigma_{\alpha(z)-j}(z)(x, \xi)$
is holomorphic in $z\in W$ as an
element of $\Ci(W \times U \times \mathbb{R}^n, \End V)$ with $k^{\rm
th}$ $z$-derivative
\begin{equation}\label{e:kthderivlogclassical}
\sigma^{(k)}_{(N)}(z)(x, \xi) := \frac{\partial^k}{\partial z^k}(\sigma_{(N)}(z)(x, \xi))
\end{equation}
defining a locally uniform family of symbols of order $\Re(\alpha(z))-N+\ve$ in a compact neighborhood of any $z_0 \in W$, for any positive $\ve$.
\end{enumerate}
A family $z\mapsto A(z)$ in $\Psi_{\rm cl}(M,E)$ parametrised by a domain $W\subset \C$ is holomorphic
if in each local trivialisation of $E$ one has
$$A(z) = {\rm Op}(\sigma(A(z))) + S(z)
$$
with $\sigma (A(z))$ a holomorphic
family of classical symbols and $S(z)$ an
operator with Schwartz kernel $R(z,x,y)\in \Ci(W\times M\times
M,E\boxtimes E)$ holomorphic in $z$.
\end{defn}
There are at least two types of approaches to show that complex powers define holomorphic families: one due to Seeley (\cite{Se}, see also \cite[Thm. 11.2]{Shub}), which makes central use of the symbolic calculus for the resolvent of the operator, and a cohomological construction due to Guillemin \cite[Thm 5.2]{Gui}, axiomatic in nature and therefore easily transposable to more general contexts (see e.g. \cite{ALNV}). Note that Seeley and Shubin consider complex powers of elliptic differential operators, but their construction can be extended to classical pseudodifferential operators using the symbol of the resolvent of these operators. Guillemin applies it to classical pseudodifferential operators whose leading symbols admit a unique determination of the logarithm.
These constructions, which rely on basic properties of classical pseudodifferential operators, namely
\begin{enumerate}
\item an estimate of the type \eqref{eq:estimateR} on the resolvent of a weight,
leading to the existence of Cauchy integrals for weights,
\item the existence of a map $\Op:\sigma\mapsto \Op(\sigma)$ taking symbols to operators in the algebra, that \lq\lq commutes" with Cauchy integrals,
\end{enumerate} extend to very general algebras of classical pseudodifferential operators, including classical pseudodifferential operators on groupoids and uniform classical pseudodifferential operators on coverings.
In particular, a weight $\mathfrak Q\in {\mathcal U}\Psi_{\Gamma, {\rm cl}}(\widetilde M, \widetilde E)$ gives rise to a holomorphic family
\begin{equation}
\label{eq:weightonlift}
\mathfrak Q^{-z}=\left( \rho^\sharp\right)^{-1}\left( \rho^\sharp (\mathfrak Q)\right)^{-z},\end{equation}
where the map $\rho^\sharp\colon {\mathcal U}\Psi_{{\rm cl},\Gamma}(\widetilde M, \widetilde E)\longrightarrow \Psi(G, E)$ is defined in Appendix \ref{sec:groupoids}.
\section{Linear forms and trace-defect formulae on $\Gamma$-invariant operators }
\subsection{Local linear forms on classical pseudodifferential operators}
\label{section2}
Locality plays a fundamental role in the lifting procedure; in this section we single out \lq\lq local\rq\rq linear forms which can be lifted to coverings.
\subsubsection{The Wodzicki residue and canonical trace densities}
Let $X$ and $F$ be of bounded geometry as in the previous section and let $m\in\C$. To an operator $A={\rm Op}(\sigma(A))+ R(A)\in \Psi_{\rm cl}^m(X, F)$ and a point $x\in X$, with symbol $\sigma(A)(x,\cdot)\sim\sum_{j=0}^\infty \sigma_{m-j}(A) (x, \cdot)$ in a local trivialisation around $x$, where $\sigma_{m-j}(A),\; j\in \mathbb{Z}_{\geq 0}$, are the positively homogeneous components of the symbol of degree $m-j$, we assign
\begin{itemize}
\item the {\bf pointwise Wodzicki residue} $${\rm Res}_x(A):= \frac{1}{(2\pi)^n}\,\int_{\vert \xi\vert=1} {\rm tr}\left(\sigma_{-n}(A)(x,\xi)\right)\, d\xi,$$
\item the {\bf pointwise canonical trace} $${\rm TR}_x(A):= \frac{1}{(2\pi)^n}\, -\hskip -10pt\int_{\mathbb{R}^n} {\rm tr}\left(\sigma(A)(x,\xi)\right)\, d\xi:=\frac{1}{(2\pi)^n}\,{\rm fp}_{R\to \infty} \int_{\vert \xi\vert\leq R} {\rm tr}\left(\sigma(A)(x,\xi)\right)\, d\xi,$$ where the abbreviation fp for finite part means we pick the constant term in the asymptotic expansion as $R$ tends to infinity and $-\hskip -10pt\int_{\mathbb{R}^n}$ is the corresponding cut-off integral. Here, tr stands for the fibrewise trace; a worked instance of both quantities follows the list. \end{itemize}
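Before proceeding, we record this worked instance; it is a standard computation, stated here for orientation under the simplifying assumptions that $F$ is the trivial line bundle and that $X$ is closed with scalar Laplacian $\Delta$. The operator $(\Delta+1)^{-\frac{n}{2}}$ has order $-n$, and its degree $-n$ homogeneous symbol component is the leading symbol $\vert\xi\vert^{-n}$, so that
\begin{equation*}
{\rm Res}_x\left((\Delta+1)^{-\frac{n}{2}}\right)=\frac{1}{(2\pi)^n}\int_{\vert\xi\vert=1} \vert\xi\vert^{-n}\, d\xi=\frac{{\rm Vol}(S^{n-1})}{(2\pi)^n},
\end{equation*}
a density which integrates over $X$ to $\frac{{\rm Vol}(S^{n-1})\,{\rm Vol}(X)}{(2\pi)^n}$. The pointwise canonical trace, on the other hand, does not give rise to a global density here since the order $-n$ lies in $[-n,+\infty[\,\cap\,\mathbb{Z}$ (see Lemma \ref{lem:Alocalised} below).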
We recall well-known facts.
\begin{lem}
\label{lem:Alocalised} {} \cite{W, KV} Given an operator $A\in \Psi_{\rm cl}(X,F)$, both ${\rm Res}_x(A)\, dx$ and, whenever the order of $A$ does not lie in $[-n, +\infty[ \,\cap \,\mathbb{Z}$, ${\rm TR}_x(A)\, dx$ define global densities on $X$.
\end{lem}
\subsubsection{The Wodzicki residue and the canonical trace on closed manifolds}
From now on in this section we assume that {\bf $X=M$ is a closed manifold} and $F=E$ is a vector bundle over $M$. We adopt the notations of \cite{Sc} and of Lemma \ref{lem:Alocalised}.
\begin{prop}
Let $A\in \Psi_{\rm cl}(M, E)$ have local symbol $\sigma_U(A)$ over any trivialising open subset $U$, and let $(U_i, \chi_i)_{i\in I}$ be a partition of unity on $M$ subordinated to a trivialisation of $E$.
\begin{enumerate}
\item The Wodzicki residue density integrates to the Wodzicki residue \cite{W}
\begin{eqnarray}\label{eq:Resint}
{\rm Res}\left(A\right)
&:=& \int_{M} {\rm Res}_x( A) \, dx\\
&=& \sum_{\supp \chi_i\cap\supp\chi_j\neq \emptyset}\int_{U_i\cap U_j}\chi_i(x)\, {\rm Res}_x\left(\Op(\sigma_{U_i\cap U_j}(A))\right)\, \chi_j(x)\, dx. \nonumber
\end{eqnarray}
\item Provided the order of the operator does not lie in $\mathbb{Z}_n:=[-n,+ \infty[\,\cap\, \mathbb{Z}$, the canonical trace density integrates to the canonical trace \cite{KV,Le}
\begin{eqnarray}\label{eq:TRint}
{\rm TR}\left(A\right)
&:=&\int_{M}{\rm TR}_x(A)\, dx\\
&=& \sum_{\supp \chi_i\cap\supp\chi_j\neq \emptyset}\int_{U_i\cap U_j}\chi_i(x) \, {\rm TR}_x
\left(\Op(\sigma_{U_i\cap U_j}(A)) \right)\,\chi_j(x)\, dx, \nonumber
\end{eqnarray}
\end{enumerate}
which are both well-defined, independently of the choice of trivialisation and subordinated partition of unity.\\ When $A$ is trace-class, namely when the order of $A$ has real part smaller than $-n$, then ${\rm Res}(A)=0$ and $ {\rm TR}\left(A\right)= {\rm Tr}\left(A\right)$, where ${\rm Tr}$ stands for the ordinary trace of $A$.
Furthermore, we have
\begin{eqnarray*}
\label{eq:TRDelta}
A\sim B\Longrightarrow {\rm Res}\left(A\right)={\rm Res}\left(B\right)\quad &\text{and}&\quad A\underset{ \rm \small diag}{\sim} B\Longrightarrow {\rm TR}\left(A\right)={\rm TR}\left(B\right)
\\
{\rm Res}(A)={\rm Res}(A_0)= {\rm Res}([A])\quad &\text{and}&\quad {\rm TR}(A)={\rm TR}(A_0)= {\rm TR}([A]_{\rm diag}).
\end{eqnarray*}
\end{prop}
\begin{proof}We refer to \cite{W, KV} for the existence of Res and TR. The pointwise residue ${\rm Res}_x$ vanishes on operators with smooth kernels and the pointwise canonical trace ${\rm TR}_x$ vanishes on operators with smooth kernel supported outside the diagonal. Consequently, only $A_0:= \sum_{\supp \chi_i\cap\supp\chi_j\neq \emptyset}\chi_i A\chi_j$ arises in \eqref{eq:Resint} and \eqref{eq:TRint}.
\end{proof}
\subsubsection{Characterisation of local linear forms on operators}
As before, $ \pi:E\to M$ denotes a rank $k$ complex vector bundle over a closed $n$-dimensional manifold $M$.
\begin{defn}
\label{def:sigmaclass}
We call a {\bf $\Sigma$-class} ($\Sigma$ for symbol) any class $\Sigma(M, E)\subset \Psi_{\rm cl}(M, E)$ of classical pseudodifferential operators such that
\[A\in \Sigma(M, E)\, \wedge\, B\underset{\rm diag}{\sim}A \Longrightarrow B\in \Sigma(M, E),\] so that for an operator to belong to $\Sigma$ is a condition on its symbol. Let $\Sigma_{\rm symb}\subset{\rm CS}(\mathbb{R}^n, {\rm gl}_k(\C)) $ be the corresponding class of symbols, so that \[A\in \Sigma(M, E)\Longleftrightarrow A\underset{\rm diag}{\sim} \Op(\sigma (A))\;\; \wedge \;\;\sigma(A)\in \Sigma_{\rm symb}. \]
\end{defn}
\begin{ex}
The algebra $\Psi_{\rm cl}^{\mathbb{Z}}(M, E)$ of integer order classical operators and the set $\Psi_{\rm cl}^{\notin\mathbb{Z}}(M, E)$ of noninteger order classical operators form $\Sigma$-classes.
\end{ex}
We define a class of linear forms on a $\Sigma$-class $\Sigma(M, E)$ which only detect the symbol of the operator; by a linear form on $\Sigma(M, E)$, we mean a map $\Lambda$ such that $\Lambda(\alpha A+B)=\alpha \Lambda(A)+ \Lambda(B)$ for any scalar $\alpha$ and any operators $A$ and $B$ in $\Sigma(M, E)$ such that $\alpha A+B$ also lies in $\Sigma(M, E)$. For this purpose, it is useful to introduce the corresponding class of scalar valued symbols
\[\Sigma_{\rm tr, symb}:=\{x\longmapsto {\rm tr}_x\circ \sigma(x,\cdot), \; \sigma\in \Sigma_{\rm symb}\}\subset {\rm CS}(\mathbb{R}^n, \C),\]
where ${\rm tr}_x$ is the fibrewise trace on the algebra $\End(E_x)$ of endomorphisms of the fibre $E_x$ over the point $x$.
\begin{defn}\label{defn:locallambda}We call {\bf local} any linear form $\Lambda\colon\Sigma(M, E)\to \C$ on a $\Sigma$-class of classical pseudodifferential operators $\Sigma(M, E)$ which
\begin{itemize}
\item is constant on $\underset{\rm diag}{\sim}$-equivalence classes:
\[A\underset{\rm diag}{\sim} B\Longrightarrow \Lambda(A)=\Lambda(B),\]
so that \begin{equation}
\label{eq:localLambda}
\Lambda(A)=\Lambda( {\rm Op}(\sigma(A))),
\end{equation}
\item and such that
\[\Lambda({\rm Op}(\sigma))=\int_M \lambda\circ {\rm tr}(\sigma(x,\cdot))\, dx\] for some linear form $\lambda:\Sigma_{\rm tr, symb}\to \C$, so that
\begin{equation}\label{eq:Lambda} \Lambda(A)=\int_M \Lambda_x(A)\, dx:=\int_M \lambda\circ {\rm tr}\left(\sigma(A)(x, \cdot)\right)\, dx.\end{equation}
\end{itemize}
\end{defn}
\begin{rk} The adjective \lq\lq local\rq\rq{} reflects the fact that the linear form $\Lambda$ is the integral of a density\[\omega_\Lambda(A)(x):=\lambda\circ {\rm tr}\left(\sigma(A)(x, \cdot)\right)\, dx\] over $M$.
\end{rk}
\begin{ex}
The Wodzicki residue on integer order classical operators and the canonical trace on non-integer order classical operators are local.
\end{ex}
In the remaining part of this paragraph, $\Sigma(M, E)\subset \Psi_{\rm cl}(M,E)$ is a $\Sigma$-class of operators which is {\bf closed} for the Fr\'echet topology of classical pseudodifferential operators of fixed order (see e.g. \cite[p.117]{Pa} and references therein).
\begin{ex} The $\Sigma$-classes $\Psi_{\rm cl}^\mathbb{Z}(M, E)$ of integer order classical pseudodifferential operators and $\Psi_{\rm cl}^{\notin\mathbb{Z}}(M, E)$ of noninteger order classical pseudodifferential operators, both determined by conditions on their order, are closed for the Fr\'echet topology of classical pseudodifferential operators of fixed order.
\end{ex}
\begin{lem}Given a $\Sigma$-class $ \Sigma(M, E)\subset \Psi_{\rm cl}(M, E)$,
\begin{itemize}
\item the corresponding symbol class $\Sigma_{\rm symb}$ is invariant under the rescalings $\xi\longmapsto t\, \xi$ for $0<t<1$ and under the action of ${\rm O}_n(\mathbb{R})$;
\item given a {\rm local and continuous} linear form $\Lambda\colon\Sigma(M, E)\to \C$, the associated linear form $\lambda\colon\Sigma_{\rm tr, symb}\to \C$ on the corresponding class of scalar valued symbols is continuous, behaves covariantly under the rescalings $\xi\longmapsto t\, \xi$ for any $0<t<1$, and is ${\rm O}_n(\mathbb{R})$-invariant.
\end{itemize}
\end{lem}
\begin{proof}
We first observe that $\lambda$ is continuous for the Fr\'echet topology on symbols
of constant order as a result of the continuity of $\Lambda$. Let us deduce further properties of $\lambda$ from the covariance of $\Lambda$. \begin{enumerate}
\item
Since $\omega_\Lambda(A)(x)$ defines a density, for any local diffeomorphism $\kappa\colon U\to U$, $\kappa^*\omega_\Lambda(\kappa^\sharp A_U)= \omega_\Lambda( A_U)$, where $A_U$ is a localisation of $A$ in that chart.
If $A_U$ has symbol $\sigma$, following the notations of \cite{Pa}, let $\widetilde{\kappa_*\sigma}$ be the symbol of $\kappa^\sharp A_U$. The above covariance property for the form $\omega_\Lambda$ translates to \begin{equation}
\label{eq:kappa}\vert {\rm det}\kappa_* \vert\, \lambda\left(\widetilde {\kappa_*\sigma}\right)= \lambda\left( \sigma \right)
\end{equation} for any local diffeomorphism $\kappa:U\to U$.
\item Choosing $U$ to be an open ball and $\kappa= t\,R$ for any $R\in O_n(\mathbb{R})$ with $0<t<1$, it follows from the invariance property (\ref{eq:kappa}) that the symbol class $\Sigma_{\rm symb} $ is invariant and the linear form $\lambda$ behaves covariantly under the rescalings $\xi\longmapsto t\, \xi$ for any $0<t<1$ as well as under isometric transformations.\qedhere
\end{enumerate}
\end{proof}
In the following, continuity of local linear forms is defined with respect to the Fr\'echet topology of classical pseudodifferential operators of fixed order (see e.g. \cite[p.117]{Pa} and references therein).
\begin{thm}
\label{thm:uniqueness}
Any local continuous linear form on $\Psi_{\rm cl}^\mathbb{Z}(M, E)$ (resp. $\Psi_{\rm cl}^{ \notin \mathbb{Z}}(M, E)$) is proportional to the Wodzicki residue ${\rm Res}$ (resp. the canonical trace ${\rm TR}$).
\end{thm}
\begin{proof} The proof relies on results of \cite{Pa}, the basic ideas being that linear forms invariant under rescaling and isometries i) on homogeneous functions are proportional to the residue (see \cite[Lemma 3.42]{Pa}) and ii) on Schwartz functions are proportional to the ordinary integral (see \cite[Lemma 3.40]{Pa}). Let $k$ be the rank of $E$ as a complex bundle over $M$.
\begin{enumerate}
\item We first characterise $\Lambda $ on $\Psi_{\rm cl}^{\notin\mathbb{Z}}(M,E)$.
Theorem 3.43 in \cite{Pa} characterises continuous, rescaling and ${\rm O}_n(\mathbb{R})$-invariant linear forms on $CS^{\notin \mathbb{Z}}(\mathbb{R}^n)$, which turn out to be proportional to the canonical integral. Note that the proof of \cite[Lemma 3.42]{Pa}, on which \cite[Theorem 3.43]{Pa} relies, only requires an invariance under rescaling by $0<t<1$.
So there is a constant $C$ such that $\lambda=C\, -\hskip -10pt\int_{\mathbb{R}^n}$ on $CS^{\notin \mathbb{Z}}(\mathbb{R}^n)$. It follows that on $\Psi_{\rm cl}^{\notin \mathbb{Z}}(M,M\times\C)$, the linear form $\Lambda$ is proportional to the canonical trace TR. Composing with the trace on matrices yields the expected characterisation on $\Psi_{\rm cl}^{\notin\mathbb{Z}}(M,E)$.
\item Let us now characterise $\Lambda $ on $\Psi_{\rm cl}^{\mathbb{Z}}(M,E)$.
The covariance assumption of Theorem 4.21 in \cite{Pa}, which says that any covariant continuous linear form on $CS^\mathbb{Z} (\mathbb{R}^n)$ is proportional to the residue and whose proof relies on Theorem 3.43, can easily be relaxed to an invariance under rescaling and isometric transformations. Thus, any continuous linear form on $CS^\mathbb{Z}(\mathbb{R}^n)$ which is invariant under rescaling and isometric transformations is proportional to the residue ${\rm res}$. It follows that
$\Lambda$ on $\Psi_{\rm cl}^\mathbb{Z}(M, M\times\C)$ is proportional to the Wodzicki residue ${\rm Res}$. Composing with the trace on matrices yields the expected characterisation on $
\Psi_{\rm cl}^\mathbb{Z}(M, E)$. \qedhere
\end{enumerate}\end{proof}
We use this locality in an essential way to lift trace defect formulae. Here is an immediate corollary which uses the known traciality of the Wodzicki residue and the canonical trace on the appropriate classes of operators they are defined on.
\begin{cor} Any local continuous linear form on the class $\Psi_{\rm cl}^\mathbb{Z}(M, E)$ of integer order classical pseudodifferential operators, resp. on the class $\Psi_{\rm cl}^{\notin \mathbb{Z}}(M, E)$ of noninteger order classical pseudodifferential operators, is a trace, resp. vanishes on brackets.
\end{cor}
\subsection{Trace defect formulae on closed manifolds and applications}
\subsubsection{Trace defect formulae}
We recall (without proof) useful trace defect formulae for the canonical trace of holomorphic families of classical pseudodifferential operators \cite{KV,PS}.
\begin{prop}\label{thm:KVPS} For any holomorphic family $ A(z)\in \Psi_{\rm cl}(M,E)$ of classical operators parametrised by $\C$ with holomorphic order $-qz+a$ for some positive $q$ and some real number $a$,
\begin{enumerate}
\item the map $z\mapsto {\rm TR} \left(A(z)\right) $ is meromorphic with simple poles $d_j:=\frac{a+n-j}{q}$, $ j\in \mathbb{Z}_{\geq 0}$,
\item $\:$\cite{KV} the complex residue at the point $d_j$ is given by
\begin{equation}\label{eq:classicalKV}{\rm Res}_{z=d_j} {\rm TR} \left(A(z)\right)= \frac{1}{q}{ \rm
Res} (A(d_j) ).
\end{equation}
\item $\:$\cite{PS} If $A(d_j)$ has a well-defined canonical trace ${\rm TR}\left(A (d_j)\right)$, then $A^\prime (d_j)$ has a well defined
Wodzicki residue ${ \rm
Res} (A^\prime(d_j) ) $ and the constant term in the meromorphic expansion of ${\rm TR}\left(A (z)\right)$ at $d_j$ is
\begin{equation}\label{eq:PSclassicalop}
{\rm fp}_{z=d_j}{\rm TR}\left(A (z)\right)={\rm TR}\left(A (d_j)\right) +
\frac{1}{q}{ \rm
Res} (A^\prime (d_j)).
\end{equation}
In particular, this holds if $A(d_j)$ is a differential operator, in which case ${\rm TR}\left(A (d_j)\right) =0$ and we have
\begin{equation}\label{eq:PSclassicalopdiff}
{\rm fp}_{z=d_j}{\rm TR}\left(A (z)\right)=
\frac{1}{q}{ \rm
Res} (A^\prime (d_j)).
\end{equation}
\end{enumerate}
\end{prop}
\begin{rk}
Actually, the derivatives $A^\prime(d_j)$ are log-polyhomogeneous operators (see \cite{Le} and references therein) and formula (\ref{eq:PSclassicalopdiff}) yields an extension of the Wodzicki residue to this particular operator, which differs from Lesch's graded residue.
\end{rk}
\subsubsection{The index as a trace defect}\label{subsec:index}
We specialise to a Hermitian $\mathbb{Z}_2$-bundle $E=E_+\oplus E_-$ over a closed Riemannian manifold $M$ and let $D:=\left(\begin{array}{cc}0 & D_- \\D_+ & 0\end{array}\right)$ be an essentially selfadjoint elliptic differential operator
of positive order $d$. The operator $\Delta:=D_-D_+\oplus D_+D_-=D^2$ is essentially self-adjoint and has a finite dimensional kernel.
Let $\Delta+S$ be an invertible perturbation of $\Delta $ by a smoothing operator $S\in \Psi_{\rm cl}^{-\infty}(M,E)$; typically $S:=\chi_{\{0\}}(\Delta)$, the orthogonal projection onto the kernel of $\Delta$. The operator $\Delta+S$ is then a weight with principal angle $\theta=\pi$.
Applying the $\mathbb Z_2$-graded version of Proposition \ref{thm:KVPS} to the family $A(z)= \phi\, (\Delta+S)^{-z}$ yields the following corollary.
\begin{cor}\label{cor:KVPS} For any smooth function $\phi$ and any invertible perturbation $\Delta+S$ as above, the Wodzicki residue
${ \rm
sRes}(\phi\, \log (\Delta +S) )$ is well-defined and we have
\begin{equation}\label{eq:reslogdeltaR}{\rm fp}_{z=0}{\rm sTR}\left(\phi\, (\Delta+S)^{-z}\right)=- \frac{1}{2d}{ \rm
sRes}(\phi\, \log( \Delta+S)).\end{equation}
\end{cor}
\begin{proof} This follows from Proposition \ref{thm:KVPS} applied to the family $A(z)= \phi\, (\Delta+S)^{-z}$ at $z=0$, with the fibrewise trace replaced by the $\mathbb{Z}_2$-graded fibrewise trace.
\end{proof}
Applying Corollary \ref{cor:KVPS} to the constant function $\phi\equiv 1$ yields the following formula for the index.
\begin{cor}\label{cor:indexD}\cite{Sc}
\begin{equation}
\label{eq:indres}
{\rm ind} D^+=-\frac{1}{2d}{\rm sRes} ( \log ( \Delta +\chi_{\{0\}})).
\end{equation}
\end{cor}
\begin{proof}Expressing the index of $D_+$ as the supertrace of the projection operator onto the kernel we write
\begin{eqnarray}
{\rm ind} ( D_+)&=&{\rm sTr} (\chi_{\{0\}}(D))\nonumber\\
&=&{\rm sTR} \left(( \Delta+\chi_{\{0\}}(\Delta))^{-z}\right)\nonumber
\end{eqnarray}
since the nonzero eigenvalues of $ \; D_-D_+$ and $ D_+D_-$ coincide.
This last expression defines a constant meromorphic function, whose finite part at zero therefore coincides with the index:
\begin{eqnarray}
{\rm ind} ( D_+)
&=& {\rm fp}_{z=0}\, {\rm sTR} \left(( \Delta+\chi_{\{0\}})^{-z}\right)
\quad\text{taking the finite part at zero} \nonumber\\
&=&-\frac{1}{2d}{\rm sRes} ( \log ( \Delta+\chi_{\{0\}})) \quad\text{using \eqref{eq:reslogdeltaR}}. \nonumber
\end{eqnarray}
\end{proof}
\subsection{Trace defect formulae on coverings }\label{sec:Tdef-cov}
As before, let $M$ be a (connected) closed Riemannian manifold and $\widetilde M$ a regular $\Gamma$-covering.
Let $\pi\colon E\to M$ be a Hermitian vector bundle and $\widetilde E:=\pi^* E\to \widetilde M$ its pullback.
Let $\mathcal A \in \Psi_{ {\rm cl},\Gamma}(\widetilde M,\widetilde E)$ be a $\Gamma$-invariant classical pseudodifferential operator.
Let $x\in \widetilde M$, and $\sigma(\mathcal A )(x,\cdot)$ denote the symbol in a local trivialisation around $ x$.
Since $\mathcal A $ is $\Gamma$-invariant, the residue ${\rm Res}_x(\mathcal A )\, dx$ and the canonical trace ${\rm TR}_x(\mathcal A )\, dx$ densities introduced in Section \ref{section2} define $\Gamma$-invariant densities, leading to the following definitions.
Let $F\subset \widetilde M$ be a fundamental domain for the action of $\Gamma$ on $\widetilde M$.
\begin{defn}
\label{def:gammares}
To a $\Gamma$-invariant classical pseudodifferential operator $\mathcal A \in \Psi_{ {\rm cl},\Gamma}^\mathbb{Z}(\widetilde M,\widetilde E)$, resp. $\mathcal A \in \Psi_{ {\rm cl}, \Gamma}^{\notin \mathbb{Z}_n}(\widetilde M,\widetilde E)$, we assign the $\Gamma$-residue, resp. the canonical $\Gamma$-trace
\begin{equation}
\label{eq:covresiduedensity}
{\rm Res}_\Gamma (\mathcal A ):= \int_{F}{\rm Res}_x (\mathcal A )dx, \quad {\rm resp.}\quad {\rm TR}_\Gamma (\mathcal A ):= \int_{F}{\rm TR}_x (\mathcal A )dx.
\end{equation}
\end{defn}
\begin{rk}
On operators in $\Psi_{ {\rm cl}, \Gamma} (\widetilde M,\widetilde E)$ of order smaller than $-n$, which by \cite[Satz 4.4]{Va} are $\Gamma$-trace-class, the canonical $\Gamma$-trace coincides with the ordinary $L^2$-trace \begin{equation}\label{eq:Gammaordtrace}{\rm Tr}_\Gamma (\mathcal A ):= \frac{1}{(2\pi)^n}\int_{F}\int_{\mathbb{R}^n}{\rm tr}\left(\sigma(\mathcal A )(x,\xi)\right)\, d\xi\, dx=\int_{F}{\rm tr}\left(K_{\mathcal A}(x,x)\right)\,dx,\end{equation}
where $K_{\mathcal A}$ stands for the Schwartz kernel of ${\mathcal A}$.
\end{rk}
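To make \eqref{eq:Gammaordtrace} concrete, here is an elementary sketch, carried out for the covering $\mathbb{R}\to S^1=\mathbb{R}/2\pi\mathbb{Z}$ with $\Gamma=\mathbb{Z}$, the trivial line bundle, $\Delta=-\partial_x^2$, and the smoothing operator $e^{-t\widetilde \Delta}$, $t>0$, which, although not classical, is $\Gamma$-trace-class. The heat kernel of $\widetilde \Delta$ on $\mathbb{R}$ equals $(4\pi t)^{-\frac{1}{2}}$ on the diagonal, so that, with fundamental domain $F=[0,2\pi[$,
\begin{equation*}
{\rm Tr}_\Gamma\left(e^{-t\widetilde \Delta}\right)=\int_0^{2\pi}\frac{dx}{\sqrt{4\pi t}}=\sqrt{\frac{\pi}{t}},
\end{equation*}
whereas Poisson summation gives ${\rm Tr}\left(e^{-t \Delta}\right)=\sum_{k\in\mathbb{Z}}e^{-tk^2}=\sqrt{\frac{\pi}{t}}\,\left(1+O(e^{-\frac{\pi^2}{t}})\right)$: the two traces agree up to an exponentially small defect, in the spirit of the comparison formulae derived below.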
\begin{defnprop}
\label{prop:ResTRliftedA}
\begin{enumerate}
\item A $\Sigma$-class of operators $\Sigma(M, E)\subset \Psi_{\rm cl}(M,E)$ lifts to the class of $\Gamma$-invariant operators
\[\Sigma_\Gamma(\widetilde M, \widetilde E):=\{{\mathcal A}\in\Psi_{\Gamma,{\rm cl}}(\widetilde M,\widetilde E), \quad {\mathcal A}\underset{\rm diag}{\sim} \widetilde A_0\quad \text{for some}\, A_0\in \Sigma(M, E)\},\]
using the notation of (\ref{eq:abusenotation}).
\item Any local linear form $\Lambda\colon \Sigma( M, E) \to \C$ canonically lifts to a linear form $\Lambda_\Gamma\colon \Sigma_\Gamma(\widetilde M, \widetilde E)\to \C$ defined as
\[\Lambda_\Gamma({\mathcal A}):= \Lambda(A)= \int_F \pi^*\left(\lambda\left({\rm tr}_x\sigma(A)(x,\cdot)\right)\, dx\right)\, \quad {\rm if}\,
{\mathcal A}\in\reallywidetilde{ [A]_{\rm diag}}.\]
\end{enumerate}
\end{defnprop}
\begin{rk}
By construction, local linear forms are constant on the horizontal lines of the diagram in \eqref{eq:diagramme}.
\end{rk}
\begin{proof} On an operator $ \widetilde A_0$ with $A_0\in \Sigma(M, E)$ an $\e$-local operator, the $\Gamma$-invariant linear form is defined as $ \Lambda_\Gamma(\widetilde A_0):=\Lambda(A_0)$.
This determines $\Lambda_\Gamma$ on $\Sigma_\Gamma(\widetilde M, \widetilde E)$ since a local linear form is constant on $\sim_{\rm diag}$-equivalence classes and we can set
\[ \Lambda_{\Gamma}(\widetilde{[A]_{\rm diag}})=\Lambda([A]_{\rm diag}):=\Lambda(A_0)=\Lambda(A)\, ,\quad\text{for any $\e$-local operator}\, A_0\in [A]_{\rm diag}.\qedhere\]
\end{proof}
This applies in particular to the canonical trace and the Wodzicki residue.
\begin{cor}
The residue and the canonical trace densities are well-defined on lifted classes $\reallywidetilde {[A]_{\rm diag}}$ of Definition \ref{defn:classlift} and are preserved under lifts.
\[{\mathcal A}\in \reallywidetilde{[ A]_{\rm diag}}\Longrightarrow {\rm Res}_\Gamma({\mathcal A})= {\rm Res}(A)\quad A\in \Psi_{\rm cl}^\mathbb{Z}(M,E),\]
and \[{\mathcal A}\in \reallywidetilde{[ A]_{\rm diag}}\Longrightarrow {\rm TR}_\Gamma({\mathcal A})= {\rm TR}(A)\quad A\in \Psi_{\rm cl}^{\notin \mathbb{Z}}(M,E).\]
In other words, the residue and the canonical trace are constant along the horizontal lines of the diagram (\ref{eq:diagramme}) in the introduction.
\end{cor}
Consequently, the trace-defect formulae on closed manifolds recalled in Proposition \ref{thm:KVPS} induce trace-defect formulae on coverings.
\begin{thm}
\label{thm:KVPScov}
For any holomorphic family $ \mathcal A(z)\in \Psi_{{\rm cl},\Gamma}(\widetilde M, \widetilde E)$ of classical operators parametrised by $ \C$ with holomorphic order $ -qz+a$ for some positive $q$ and some real number $a$,
\begin{enumerate}
\item the map $z\mapsto {\rm TR}_\Gamma \left(\mathcal A (z)\right) $ is meromorphic with simple poles $d_k:=\frac{a+n-k}{q}$, $ k\in \mathbb{Z}_{\geq 0}$,
\item the complex residue at the point $d_k$ is given by
\begin{equation}\label{eq:classicalKVnc}{\rm Res}_{z=d_k} {\rm TR}_\Gamma \left(\mathcal A(z)\right)= \frac{1}{q}{ \rm
Res}_\Gamma (\mathcal A (d_k) );
\end{equation}
\item If $\mathcal A(d_k)$ has a well-defined $\Gamma$-canonical trace ${\rm TR}_\Gamma(\mathcal A(d_k))$, then $\mathcal A^\prime (d_k)$ has a well defined $\Gamma$-
Wodzicki residue ${ \rm
Res}_\Gamma (\mathcal A^\prime(d_k) ) $ and the constant term in the meromorphic expansion of ${\rm TR}_\Gamma\left(\mathcal A (z)\right)$ at $d_k$ is
\begin{equation}\label{eq:PSclassicalop-bis}
{\rm fp}_{z=d_k}{\rm TR}_\Gamma\left(\mathcal A (z)\right)= {\rm TR}_\Gamma(\mathcal A(d_k))+
\frac{1}{q}{ \rm
Res}_\Gamma (\mathcal A^\prime (d_k));
\end{equation}
\item If $\mathcal A(d_k)$ is a differential operator, this reduces to
\begin{equation}\label{eq:PSclassicalopdiff-bis}
{\rm fp}_{z=d_k}{\rm TR}_\Gamma\left(\mathcal A (z)\right)=
\frac{1}{q}{ \rm
Res}_\Gamma (\mathcal A^\prime (d_k));
\end{equation}
\item All the above statements actually hold for any other representative in $[\mathcal A(z)]_{\rm diag}$.
\end{enumerate}
\end{thm}
\begin{proof} We use Proposition \ref{prop:ShubinTh1} to write $\mathcal A (z)\underset{ \rm \small diag}{\sim}\reallywidetilde {A(z)}$ with $A(z)$ $\ve$-local for some positive $\ve$. On the grounds of Proposition \ref{prop:ResTRliftedA}, without loss of generality we can show the statements for $\mathcal A(z)=\reallywidetilde {A(z)}$, and we have
\begin{equation}
\label{eq:TRGamma}
{\rm TR}_\Gamma\left( \widetilde{A(z)}\right)={\rm TR} \left( A(z) \right).
\end{equation}
(1) then follows from the meromorphicity and the structure of the poles $\{d_k, k\in \mathbb{Z}_{\geq 0}\}$ of ${\rm TR} (A(z))$ discussed in Proposition \ref{thm:KVPS}.
Similarly, (\ref{eq:classicalKVnc}) follows from (\ref{eq:classicalKV}) combined with (see Proposition \ref{thm:KVPS})
\begin{equation}
\label{eq:ResGamma}
{\rm Res}_\Gamma\left( \widetilde{A(d_j)}\right)={\rm Res} \left( A(d_j) \right).
\end{equation}
To prove (3) we assume that $A(d_k)$ has a well-defined canonical trace, in which case $\widetilde{A(d_k)}$ has a well-defined $\Gamma$-canonical trace. We apply (\ref{eq:PSclassicalop}) to the family
$ A(z)$, which combined with (\ref{eq:TRGamma}) yields
\begin{eqnarray*}
{\rm fp}_{z=d_k} {\rm TR}_\Gamma(\widetilde{A(z)})&=& {\rm fp}_{z=d_k}{\rm TR} \left( A (z) \right)\\
&=&{\rm TR} \left( A(d_k) \right)+\frac{1}{q} {\rm Res} \left( A^\prime(d_k) \right)\\
&=& {\rm TR}_\Gamma \left( \reallywidetilde{ A(d_k)} \right)+\frac{1}{q} {\rm Res} \left( A^\prime(d_k) \right).
\end{eqnarray*}
Since the operator $ A^\prime(d_k)$ has a well-defined residue, the lifted derivative $\reallywidetilde{A^\prime(d_k)} $ has a well-defined $\Gamma$-residue and we have
$\ds {\rm Res}_\Gamma\left(\reallywidetilde{A^\prime(d_k)}\right)={\rm Res} \left( A^\prime (d_k) \right),$ leading to (\ref{eq:PSclassicalop-bis}); statement (4) then follows since a differential operator has vanishing canonical trace. \\
The fact that these statements depend only on the class $[\mathcal A(z)]_{\rm diag}$ and not on the representative follows from the fact that $\TR_\Gamma$ and ${\rm Res}_\Gamma$ are well defined on such classes. \qedhere
\end{proof}
\begin{cor}\label{cor:KVPScomparison} Let $A(z)$ be a holomorphic family of operators in $\Psi_{\rm cl}(M, E)$. For some positive number $\ve$, there is a holomorphic family $A_0(z) $ of $\ve$-local operators in $\Psi_{\rm cl}(M, E)$ such that $A_0(z)\in [A(z) ]_{\rm diag}$.\\
For any holomorphic family $\mathcal A(z)$ in $\Psi_{{\rm cl},\Gamma}(\widetilde M, \widetilde E)$ such that the difference $\mathcal A (0)-\reallywidetilde{A_0(0)}$ has a smooth kernel, the map $z\mapsto {\rm TR}_\Gamma\left(\mathcal A(z)\right)- {\rm TR} \left(A(z)\right)$ is holomorphic at $0$
with
\begin{equation}\label{eq:RestildeAz}
{\rm fp}_{z=0}{\rm TR}_\Gamma\left(\mathcal A (z)\right)-{\rm fp}_{z=0}{\rm TR}\left(A(z)\right)= {\rm Tr}_\Gamma(\mathcal A(0)-\reallywidetilde{A_0(0)}),
\end{equation}
where as before, ${\rm Tr}_\Gamma $ is defined in \eqref{eq:Gammaordtrace}.
\end{cor}
\begin{proof}
For some given positive $\ve$, Definition \ref{defn:classlift} yields a family of $\ve$-local operators $A_0(z)$ in $ [A(z)]_{\rm diag}$. The explicit construction of these operators by means of an appropriate partition of unity shows that this family can be chosen holomorphic as a consequence of the holomorphicity of $A(z)$. It then follows from Proposition \ref{prop:ResTRliftedA} that
$${\rm TR}_\Gamma\left(\reallywidetilde{A_0(z)}\right)= {\rm TR} \left( A_0(z)\right)={\rm TR} \left( A(z)\right).$$
The operators $\mathcal B(z):= \mathcal A(z)- \reallywidetilde{A_0(z)}$ define a holomorphic family in $\Psi_{{\rm cl},\Gamma}(\widetilde M, \widetilde E)$. It follows from the above that
\begin{equation}
\label{eq:1} {\rm fp}_{z=0}{\rm TR}_\Gamma\left(\mathcal A (z)\right)-{\rm fp}_{z=0}{\rm TR}\left(A(z)\right)= {\rm fp}_{z=0}{\rm TR}_\Gamma\left(\mathcal A (z)\right)-{\rm fp}_{z=0}{\rm TR}\left(A_0(z)\right)={\rm fp}_{z=0}{\rm TR}_\Gamma\left(\mathcal B (z)\right).
\end{equation} Assuming that the operator $ \mathcal A(0)- \reallywidetilde{A_0(0)}$ has a smooth kernel, then the operator $\mathcal B(0) $ has a smooth kernel and hence a vanishing Wodzicki residue. It then follows from (\ref{eq:classicalKVnc}) that the map $z\mapsto {\rm TR}_\Gamma\left(\mathcal B(z)\right)$ has a vanishing complex residue at zero, showing its holomorphicity at zero.
The fact that $\mathcal B(0)$ has a smooth kernel also implies that it has a well-defined canonical trace which coincides with the ordinary $\Gamma$-trace ${\rm Tr}_\Gamma\left(\mathcal B(0)\right)$ and we have
\begin{equation}
\label{eq:2}{\rm fp}_{z=0}{\rm TR}_\Gamma\left(\mathcal B (z)\right)=\lim_{z\to 0}{\rm TR}_\Gamma\left(\mathcal B (z)\right)={\rm Tr}_\Gamma(\mathcal B(0))= {\rm Tr}_\Gamma(\mathcal A(0)- \reallywidetilde{A_0(0)}).\end{equation} Combining equations (\ref{eq:1}) and (\ref{eq:2}) yields (\ref{eq:RestildeAz}).
\end{proof}
\subsection{Lifted spectral $\zeta$-invariants}
\label{sec:liftedregtr}
This section is devoted to applications of Formula (\ref{eq:RestildeAz}) in Corollary \ref{cor:KVPScomparison}.\\
Let $E$ be a hermitian vector bundle over a closed Riemannian manifold $M$, and let $\mathbf D$ be an essentially self-adjoint elliptic differential operator in $\Psi(M, E)$. Then $\mathbf \Delta:= \mathbf D^2$ is a non-negative essentially self-adjoint operator.\\
For any $\ve>0$, let $Q_\ve(\mathbf \Delta)$ be a weight as defined in \eqref{eq:Qeps} and for a measurable function $h$ on a contour $\Gamma_\pi$ around the ray $R_\pi=]-\infty, 0]$, such that
\begin{equation}
\label{eq:estimate.h}
\vert h(\lambda)\vert \leq\vert \lambda\vert^{-\delta}
\end{equation}
for some positive $\delta$, let $h(Q_\ve(\mathbf \Delta))$ be defined by \eqref{eq:hQ}.
Similarly, we consider $ h(Q_\ve(\widetilde{\mathbf \Delta}))$ with $Q_\ve(\widetilde {\mathbf \Delta})$ defined by (\ref{eq:tildeQeps}). \\
In the specific case when $h$ is a polynomial, $h(\mathbf \Delta)$ is a well-defined differential operator, which lifts to $\reallywidetilde{h(\mathbf \Delta)}=h(\widetilde{ \mathbf \Delta})$.
For a measurable function $h$ on a contour $\Gamma_\pi$ around the ray $R_\pi=]-\infty, 0]$ satisfying \eqref{eq:estimate.h} for some positive $\delta$, we set $h_\ve(\mathbf \Delta):= h(Q_\ve(\mathbf \Delta))$ and $h_\ve(\widetilde{ \mathbf \Delta}):= h(Q_\ve(\widetilde{ \mathbf \Delta}))$.
\\
For any $A\in \Psi_{\rm cl}(M, E)$ and any weight $Q\in \Psi_{\rm cl}(M, E)$, the family $A(z):= A\, Q^{-z}$ is a holomorphic perturbation of $A$ and the {\bf spectral $\zeta$-function} (or ${ Q}$-regularised trace of ${ A}$)
\begin{equation}\label{eq:zetaM} z\longmapsto\zeta_{A,Q}(z):={\rm TR}(A\, Q^{-z})
\end{equation} defines a meromorphic map on $\C$ with a known countable set of simple poles.
Similarly, for any ${\mathcal A}\in \Psi_\Gamma(\widetilde M, \widetilde E)$ and any weight $ {\mathfrak Q}\in \Psi_\Gamma(\widetilde M, \widetilde E)$, using (\ref{eq:weightonlift}) we define the holomorphic perturbation
\[{\mathcal A}(z):= {\mathcal A}\, {\mathfrak Q}^{-z}\] of ${\mathcal A}$ and thanks to Theorem \ref{thm:KVPScov}, we know that
the spectral $\zeta$-function (or ${\mathfrak Q}$-regularised trace of ${\mathcal A}$) \begin{equation}\label{zetatildeM}z\longmapsto\zeta^\Gamma_{{\mathcal A},{\mathfrak Q}}(z):={\rm TR}_\Gamma({\mathcal A}\, {\mathfrak Q}^{-z})
\end{equation}
is meromorphic with a known countable set of simple poles.
So we can take the finite parts of the Laurent expansions at zero and build (with some abuse of notation) {\bf spectral $\zeta$-invariants}
\begin{equation}\label{zetazero}
\zeta_{A,Q}(0):={\rm fp}_{z=0}\left( \zeta_{A,Q}(z)\right);\quad \zeta^\Gamma_{ {\mathcal A},{\mathfrak Q}}(0):={\rm fp}_{z=0}\left(\zeta^\Gamma_{{\mathcal A},{\mathfrak Q}}(z)\right).
\end{equation}
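As an elementary sanity check of \eqref{zetazero}, take $M=S^1=\mathbb{R}/2\pi\mathbb{Z}$ with the trivial line bundle, $A={\rm Id}$, $\Delta=-\partial_x^2$ and the weight $Q=\Delta+\chi_{\{0\}}(\Delta)$ (a sketch in the scalar case). The eigenvalues of $Q$ are $1$ on the one-dimensional kernel of $\Delta$ and $k^2$ for $k\geq 1$, each with multiplicity two, so that
\begin{equation*}
\zeta_{{\rm Id},Q}(z)=1+2\sum_{k\geq 1}k^{-2z}=1+2\,\zeta_R(2z)
\qquad\text{and}\qquad
\zeta_{{\rm Id},Q}(0)=1+2\,\zeta_R(0)=1-1=0,
\end{equation*}
where $\zeta_R$ denotes the Riemann zeta function, with $\zeta_R(0)=-\frac{1}{2}$.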
For $\ve>0$ and a polynomial $P$, set \[A_\ve ( \mathbf D ):= P(\mathbf D)\, h_\ve(\mathbf \Delta);\quad{\mathcal A}_\ve(\widetilde{\mathbf D}):= P(\widetilde{\mathbf D})\, h_\ve(\widetilde{\mathbf \Delta}) .\] \begin{rk}Recall from Remark \ref{rk:simDelta} that whereas the equivalence relation $\sim$ is stable under products of operators, the equivalence relation $\underset{ \rm \small diag}{\sim}$ is not.
\end{rk}
In particular, we do not a priori expect the regularised trace $\zeta_{A_\ve ( \mathbf D ), Q_\ve(\mathbf \Delta)}(0)$ to
coincide with the regularised $\Gamma$-trace $ \zeta^\Gamma_{{\mathcal A}_\ve(\widetilde{\mathbf D}),{\mathfrak Q}_\ve(\widetilde{\mathbf \Delta})}(0)$, which involves the product $P(\widetilde{\mathbf D})\,h_\ve(\widetilde {\mathbf \Delta}) \,Q_\ve({\widetilde {\mathbf \Delta}} )^{-z}$.\\
The following theorem nevertheless relates the two regularised traces.
\begin{thm}\label{thm:liftedregtraces} Let $\ve >0$. With the above notations
\begin{itemize}
\item The meromorphic map
$$z\longmapsto \zeta_{A_\ve ( \mathbf D ), Q_\ve(\mathbf \Delta)}(z)- \zeta^\Gamma_{{\mathcal A}_\ve(\widetilde{\mathbf D}),{\mathfrak Q}_\ve(\widetilde{\mathbf \Delta})}(z)$$
is holomorphic at zero.
\item There is some positive $\alpha$ and an $\alpha$-local operator $A_{\ve,0} ({\mathbf D})\in\left[A_\ve({\mathbf D}) \right]_{\rm diag}$ such that the difference $ \reallywidetilde{A_{\ve,0}({\mathbf D})}-{\mathcal A}_\ve(\widetilde{\mathbf D})$ has a smooth kernel and hence a well-defined trace ${\rm Tr}_\Gamma\left(\reallywidetilde{A_{\ve,0}(\mathbf D)}-{\mathcal A}_\ve(\widetilde{\mathbf D})\right)$.
We have
\begin{eqnarray}\label{eq:liftedregtraces}
\zeta_{A_\ve ( \mathbf D ), Q_\ve(\mathbf \Delta)}(0)- \zeta^\Gamma_{{\mathcal A}_\ve(\widetilde{\mathbf D}),{\mathfrak Q}_\ve(\widetilde{\mathbf \Delta})}(0) = {\rm Tr}_\Gamma(\reallywidetilde{A_{\ve,0}({\mathbf D})}-{\mathcal A}_\ve(\widetilde{\mathbf D})),
\end{eqnarray}
\item Spectral $\zeta$-invariants canonically lift to the covering. In other words, when $h\equiv 1$, identity (\ref{eq:liftedregtraces}) amounts to the coincidence of the zeta-regularised trace and its lifted counterpart:
\begin{equation}
\label{eq:diffliftedregtraces} \zeta_{P ( \mathbf D ), Q_\ve(\mathbf \Delta)}(0)= \zeta^\Gamma_{P(\widetilde{\mathbf D}),{\mathfrak Q}_\ve(\widetilde{\mathbf \Delta})}(0). \end{equation}
\item The above statements extend to super-regularised traces for operators acting on sections of a $\mathbb{Z}_2$-graded bundle, replacing the canonical trace ${\rm TR}$ by a $\mathbb{Z}_2$-graded canonical trace ${\rm sTR} $ and correspondingly $\zeta$-functions by $\mathbb{Z}_2$-graded $\zeta$-functions $s\zeta_{A,Q}(z):={\rm sTR}(A\, Q^{-z})$.
\end{itemize}
\end{thm}
\begin{proof}The operators $ Q_\ve (\mathbf \Delta)^{-z}$, resp. $ Q_\ve (\widetilde { \mathbf \Delta})^{-z}$, built from complex powers, define holomorphic families on $M$, resp. $\widetilde M$, and hence so do the operators $ B_\ve (\mathbf D)(z):= A_\ve (\mathbf D)\,Q_\ve (\mathbf \Delta)^{-z} $, resp. ${\mathcal B}_\ve(\widetilde{\mathbf D})(z):= {\mathcal A}_\ve (\widetilde{\mathbf D})\,Q_\ve (\widetilde { \mathbf \Delta})^{-z}$, to which we want to apply Corollary \ref{cor:KVPScomparison}. To simplify notations, we drop the index $\ve$ and the operator ${\mathbf D}$ from the notation, setting $ B(z):= B_\ve(\mathbf D)(z)$ and
${\mathcal B}(z):={\mathcal B}_\ve(\widetilde{\mathbf D})(z)$. With these notations
\begin{equation}\label{eq:useful1} \zeta_{A_\ve ( \mathbf D ), Q_\ve(\mathbf \Delta)}(z)- \zeta^\Gamma_{{\mathcal A}_\ve(\widetilde{\mathbf D}),{\mathfrak Q}_\ve(\widetilde{\mathbf \Delta})}(z)= {\rm TR} \left( B(z)\right)-{\rm TR}_\Gamma\left(\mathcal B(z)\right).\end{equation}
On the one hand, as in Corollary \ref{cor:KVPScomparison}, we build an $\ve$-local holomorphic family $B_0(z)\in [B(z)]_{\rm diag}$. In particular, the operator $B_0:=B_0(0)\in \left[{A}_\ve( {\mathbf D})\right]_{\rm diag}$ has symbol $ P(\sigma({\mathbf D}))\star h_\star (\sigma(Q_\ve( \mathbf \Delta)))$, so that its lift $\widetilde{B_0}$ has symbol $\reallywidetilde{P(\sigma({\mathbf D}))\star h_\star (\sigma(Q_\ve( \mathbf \Delta)))}$.
On the other hand, the operator $\mathcal{B}(0)=\mathcal{A}_\ve(\widetilde{\mathbf D})$ has symbol $P(\sigma(\widetilde{\mathbf D}))\star h_\star(\sigma(Q_\ve(\widetilde{ \mathbf \Delta})))$. By (\ref{eq:hstarQ}) applied to $\mathfrak Q=Q_\ve(\widetilde{\mathbf \Delta})$ and $Q=Q_\ve(\mathbf \Delta)$, we have $h_\star(\sigma(Q_\ve(\widetilde{ \mathbf \Delta})))\sim \reallywidetilde{h_\star (\sigma(Q_\ve( \mathbf \Delta)))}$.
Consequently, using again the locality of $\star$ as in Lemma \ref{lem:hstarQ}, we find that the operator $\mathcal{B}(0)$ has the same symbol as $\reallywidetilde{B_0}$. Hence the difference $\mathcal{B}(0)-\reallywidetilde{B_0}$ lies in $\cap_{m\in \mathbb{R}}{\mathcal U}\Psi_\Gamma^{m}(\widetilde M, \widetilde E)$, as a result of which it is $\Gamma$-trace-class and therefore has a well-defined $\Gamma$-trace ${\rm Tr}_\Gamma \left(\mathcal B(0)-\widetilde{B_0}\right)$ (see \eqref{eq:Gammaordtrace}).\\ Applying Corollary \ref{cor:KVPScomparison} yields the holomorphicity at $z=0$ of the map in (\ref{eq:useful1}), together with
$$ \zeta_{A_\ve ( \mathbf D ), Q_\ve(\mathbf \Delta)}(0)- \zeta^\Gamma_{{\mathcal A}_\ve(\widetilde{\mathbf D}),{\mathfrak Q}_\ve(\widetilde{\mathbf \Delta})}(0)= {\rm fp}_{z=0}\left( {\rm TR}_\Gamma(\reallywidetilde{B_0(z)})- {\rm TR}_\Gamma(\mathcal B(z))\right)
={\rm Tr}_\Gamma \left(\widetilde{B_0}-\mathcal B(0)\right),$$ which corresponds to (\ref{eq:liftedregtraces}).\\
If $h\equiv 1$, then $B_0=B(0)= P({\mathbf D})$ is a differential operator and we have ${\mathcal B}(0) =P(\widetilde{\mathbf D}) = \reallywidetilde{P({\mathbf D})} =\reallywidetilde{B(0)}=\reallywidetilde{B_0}$, so that ${\rm Tr}_\Gamma(\widetilde{B_0}-{\mathcal B}(0))=0$, from which the assertion \eqref{eq:diffliftedregtraces} follows.
Finally, replacing ${\rm TR}$ by its $\mathbb{Z}_2$-graded analog ${\rm sTR}$ yields the last assertion. \qedhere
\end{proof}
\subsubsection{ The $L^2$-Atiyah index theorem revisited}\label{subsec:L2index}
We apply the above construction to a hermitian $\mathbb{Z}_2$-bundle $E=E_+\oplus E_-$ over a closed Riemannian manifold $M$, so that its pull-back $F:=\widetilde E=\widetilde E_+\oplus \widetilde E_-$ by $\pi$ is a $\mathbb{Z}_2$-graded $\Gamma$-equivariant vector bundle over $X:= \widetilde M$. \\
Let $ D_+: C^\infty( M, E_+)\to C^\infty( M, E_-)$ be an elliptic differential operator of order $d$ with formal adjoint $ D_-: C^\infty( M, E_-)\to C^\infty( M, E_+)$. Let $\widetilde D_\pm: C^\infty(X,F_\pm)\to C^\infty(X, F_\mp)$ be the lifted differential operators. The operator \begin{equation}\label{eq:wtD}\widetilde D:
= \begin{bmatrix}
0 & \widetilde D_- \\
\widetilde D_+ & 0
\end{bmatrix}
\end{equation} is a $\Gamma$-invariant elliptic differential operator of positive order $d$ to which we apply the above constructions.
Even though the kernels $\{s\in C_c^\infty(\widetilde M, \widetilde E), \; \widetilde D_\pm s=0\}$ are not necessarily finite dimensional, their closures $K_{\widetilde D_\pm}$ are finitely generated $\Gamma$-modules and hence isometrically
$\Gamma$-isomorphic to Hilbert
$\Gamma$-subspaces of the Hilbert
space $\ell_2(\Gamma)^n$ for some positive integer $n$, which can be represented by idempotent matrices $P^\pm=(p_{ij}^\pm)\in {\rm gl}_n\left({\mathcal N}\Gamma\right)$. Let $\chi_{\{0\}}(\widetilde \Delta)$ denote the orthogonal projection onto $K_{\widetilde D_+}\oplus K_{\widetilde D_-}$.
The $\Gamma$-dimension (resp. the $\Gamma$-graded dimension) of $K_{\widetilde D_\pm}$ is (see Appendix \ref{sec:appHM})
$$
{\rm dim}_\Gamma K_{\widetilde D_\pm}:= \sum_{i=1}^n\langle p_{ii}^\pm(e), e\rangle, \quad {\rm resp.} \quad {\rm sdim}_\Gamma (K_{\widetilde \Delta}):= {\rm dim}_\Gamma K_{\widetilde D_+}- {\rm dim}_\Gamma K_{\widetilde D_-}={\rm sTr}_\Gamma\left(\chi_{\{0\}}( \widetilde \Delta)\right),
$$
where $e\in \C\Gamma$ is the element whose components all vanish except the first one, which equals one. The $\Gamma$-index of $\widetilde D$ is
$${\rm ind}_\Gamma(\widetilde D):= {\rm dim}_\Gamma K_{\widetilde D_+}-{\rm dim}_\Gamma K_{\widetilde D_-}.
$$
The $\Gamma$-Wodzicki residue and the $\Gamma$-canonical trace extend in a straightforward manner to a $\mathbb{Z}_2$-graded Wodzicki residue ${\rm sRes}_\Gamma$ and a $\mathbb{Z}_2$-graded canonical trace ${\rm sTR}_\Gamma$.
\begin{cor}
\label{cor:indres2}With the notation of \eqref{eq:wtD}, $ \log( Q_\ve(\widetilde\Delta) )$ has a well-defined (super) $\Gamma$-residue and we have
\begin{equation}
\label{eq:indres2}
{\rm ind}_\Gamma(\widetilde D_+)= -\frac{1}{q}\,{\rm sRes}_\Gamma \left( \log( Q_\ve(\widetilde\Delta) )\right)=-\frac{1}{q}\,{\rm sRes}\left( \log( Q_\ve(\Delta) )\right)={\rm ind}(D_+).
\end{equation}
\end{cor}
\begin{proof}
By Corollary \ref{cor:indexD}, the index is a $\mathbb{Z}_2$-graded regularised trace of the identity:
${\rm ind} (D_+)= {\rm sTR}^{Q_\ve} (Id)$ and similarly, independently of $ \ve >0$, we have
\begin{eqnarray}
{\rm ind}_\Gamma(\widetilde D_+)&=&
{\rm sTR}_\Gamma \left(\chi_{\{0\}}(\widetilde \Delta)\right) \nonumber\\
&=&{\rm sTR}_\Gamma \left(\chi_{[-\ve, \ve]}(\widetilde \Delta)\right)
={\rm sTR}_\Gamma \left( Q_\ve(\widetilde\Delta)^{-z} \right)
\quad\text{seen as meromorphic functions,}\nonumber\\
&=&{\rm fp}_{z=0}\, {\rm sTR}_\Gamma\left(\left(Q_\ve (\widetilde \Delta)\right)^{-z}\right)
\quad\text{taking the finite part at zero,}\nonumber\\
&=& {\rm sTR}^{Q_\ve(\widetilde \Delta)}(\widetilde{Id}).
\end{eqnarray}
Theorem \ref{thm:liftedregtraces} applied to $h\equiv 1$ and ${\mathbf D}\equiv 1$ then yields ${\rm ind}_\Gamma(\widetilde D_+)={\rm ind}(D_+)$. The lifted trace-defect formula (\ref{eq:PSclassicalop-bis}) derived in Theorem \ref{thm:KVPScov}, applied to the family ${\mathcal A}(z):=\left(Q_\ve (\widetilde \Delta)\right)^{-z}$, further yields the expression of ${\rm ind}_\Gamma(\widetilde D_+)$ as a Wodzicki residue.
\end{proof}
\subsubsection{The $\eta$-invariant revisited}
We now assume that both $D$ and $\widetilde D$ are essentially self-adjoint and invertible, in which case $Q:=D^2$ is a weight which lifts to $\widetilde Q= \widetilde D^2$. The operators $\vert D\vert^{-1}=\Delta^{-\frac{1}{2}}$ and $\vert\widetilde D\vert^{-1}=\widetilde\Delta^{-\frac{1}{2}}$ are defined as Cauchy integrals (see (\ref{eq:hQ})) using $h(x)= x^{-\frac{1}{2}}$ and the $\eta$-invariant of $D$ can be expressed in terms of regularised traces \cite{CDP} as
$\ds\eta(D)= {\rm Tr}^Q(D\,\vert D\vert^{-1}); \quad \eta_\Gamma(\widetilde D)= {\rm Tr}_\Gamma^{\widetilde Q}(\widetilde D\,\vert\widetilde D\vert^{-1}).$
\begin{cor}
\label{cor:eta} There is an $\ve$-local operator $A_0\in
\left[D\, \vert D\vert^{-1}\right]_{\rm diag}$ for some small enough positive $\ve$ (see Definition \ref{defn:classlift}), such that the difference $\reallywidetilde{A_0(D)}-\widetilde D\, \vert \widetilde D\vert^{-1}$ has a smooth kernel and a well-defined $\Gamma$-trace and we have
$$\eta(D)-\eta_\Gamma(\widetilde D)= {\rm TR}_\Gamma\left(\reallywidetilde{A_0(D)}-\widetilde D\, \vert \widetilde D\vert^{-1}\right).$$
\end{cor}
\begin{proof} The statement is a straightforward consequence of Theorem \ref{thm:liftedregtraces} applied to $P(x)=x$ and $h(x)=x^{-\frac{1}{2}}$, and the right hand side does not depend on the representative $A_0$ in $ [D\, \vert D\vert^{-1}]_{\rm diag}$.
\end{proof}
\subsection{Invariants built from geometric operators}
Let $F \to X$ be a vector bundle over a Riemannian manifold $\left(X,g\right)$. We assume that $X$ is spin and $F=S\otimes W$ where $S$ is the spinor bundle and $W$ an auxiliary bundle equipped with a connection $\nabla^W$. This way, $F$ can be equipped with a connection $\nabla^F:= \nabla\otimes 1+ 1\otimes \nabla^W$, where $\nabla$ is the Levi-Civita connection on $S$. \\ Following Gilkey's notations \cite[Formula (2.4.3)]{G}, for a multi-index $\alpha=(\alpha_1,\cdots, \alpha_s)$
we introduce formal variables $g_{ij/\alpha}:= \partial_\alpha g_{ij}$
for the partial derivatives of the metric
tensor $g$ on the manifold $X$
and similarly $\omega_{i/\beta}:= \partial_\beta \omega_{i}$ for the partial derivatives of the connection
$\omega$ on the external bundle. Let us set
${\rm ord}\left(g_{ij/\alpha}\right)= \vert \alpha\vert=\alpha_1+\cdots+\alpha_s$
and ${\rm ord}(\omega_{i/\beta})=\vert \beta\vert$.
Inspired by Gilkey \cite[Formulae (1.8.18) and (1.8.19)]{G} and following \cite{MP} we set the following definition.
\begin{defn}
Let $A\in \Psi_{\rm cl}(X,F)$ be a classical (resp. a log-polyhomogeneous --see \cite{Le} and references therein--) operator of order $a$ with symbol $\sigma(A)\sim \sum_{j=0}^\infty\sigma_{a-j}(A)(\xi)$ (resp. $\sigma(A)(\xi)\sim \sum_{\ell=0}^{k}\sum_{j=0}^\infty\sigma_{a-j, \ell}(A)\, \log^\ell \vert \xi\vert$) with $\sigma_{a-j}(A)(\xi)$ (resp. $\sigma_{a-j,\ell}(A)(\xi)$) homogeneous of degree $a-j$ for $\vert \xi\vert\geq 1$. We call $A$
{\bf geometric}, if
in any local trivialisation, the homogeneous components
$\sigma_{a-j}
(A) $ (resp. $\sigma_{a-j, \ell}
(A) $ for any $\ell\in \{0,\cdots, k\}$)
are homogeneous of
order
$j$
in the jets of the metric and of the connection.
\end{defn}
In particular, a differential operator
$A=\sum_{\vert \alpha\vert \leq a}c_\alpha(x)\, \partial_x^\alpha$ is geometric if $c_\alpha(x)$ is homogeneous of degree $j= a-\vert \alpha\vert$ in the jets of the metric and of the connection on the auxiliary bundle.
The Laplace-Beltrami operator associated with the metric $g$ (resp. the Dirac operator associated with $g$ and the connection on $W$) is a geometric differential operator. More generally, the Laplace operator (resp. the Dirac operator) associated with the connection $\nabla^F$ is a geometric differential operator.
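For instance, in local coordinates the Laplace operator associated with a metric $g$ reads $\Delta_g=-\frac{1}{\sqrt{\det g}}\,\partial_i\left(\sqrt{\det g}\, g^{ij}\partial_j\right)$; the following standard computation is included only as an illustration of the definition. The symbol splits as
\[\sigma(\Delta_g)(x,\xi)= \underbrace{g^{ij}(x)\,\xi_i\xi_j}_{\sigma_2(\Delta_g)}\; -\; i\,\underbrace{\frac{1}{\sqrt{\det g}}\,\partial_i\!\left(\sqrt{\det g}\, g^{ij}\right)(x)\,\xi_j}_{\sigma_1(\Delta_g)},\]
where $\sigma_2(\Delta_g)$ involves no derivatives of the metric (order $0$ in the jets) and $\sigma_1(\Delta_g)$ is linear in the first derivatives of the metric (order $1$ in the jets), in accordance with the homogeneity required of a geometric operator.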
\begin{lem}
An operator ${\mathcal A}\in \Psi_{\Gamma, {\rm cl}}(\widetilde M, \widetilde E)$ such that \[{\mathcal A}\sim \widetilde A_0\quad {\rm and}\quad A_0\sim A\]for some geometric operator $A\in\Psi_{\rm cl}(M, E)$, is itself geometric. In particular, with the notations of \eqref{eq:liftedOp} we have
\[\left(A\, {\rm geometric}\, {\rm and}\, {\mathcal A}\in \widetilde{[ A ]_{\rm diag}}\right)\Longrightarrow \left( {\mathcal A}\, {\rm geometric} \right).\]
\end{lem}
\begin{proof}
Since $\sim$ preserves symbols and symbols lift to the covering (see \eqref{eq:sigmafraktildeAzero}), we have
\[\sigma\left({\mathcal A}\right)\sim \sigma\left(\widetilde A_0\right)\sim\reallywidetilde { \sigma\left(A_0\right)}\sim \reallywidetilde { \sigma\left(A\right)}.\] The fact that the operator $A$ is geometric amounts to the components $\sigma_{a-j,\ell}\left(A\right)$ being homogeneous of degree $j$ in the jets of the metric and the connection. Its lift $ \widetilde { \sigma\left(A\right)}$ obeys the same conditions with respect to the metric $\widetilde g$ and the connection $\widetilde \nabla^W$ on the covering, and hence so does $\sigma\left({\mathcal A}\right)$, which shows that ${\mathcal A}$ is geometric.
\end{proof}
The results of \cite{MP} relative to holomorphic families of the type $A(z)= A\, Q^{-z}$ generalise to any holomorphic family. The proof of this more general statement can be carried out along the same lines as the proofs of \cite[Corollary 1 and Theorem 1]{MP}.
Adopting Gilkey's notations \cite[par. 2.4]{G}, let us denote by
${\mathcal P}^{g,\nabla^W}_{n,k,p}$
(which we write ${\mathcal P}^{g}_{n,k,p}$ in the absence of twisting) the linear space consisting of
$p$-form valued invariant polynomials that are homogeneous of order $k$
in the jets of the metric and of the connection $\nabla^W$.
\begin{thm}\label{thm:geometricop} Let $A(z)\in \Psi_{\rm cl}(M,E)$ and ${\mathcal A}(z)\in \Psi_{\Gamma,{\rm cl}}(\widetilde M, \widetilde E)$ be two holomorphic families of classical pseudodifferential operators such that\[A(z)\, {\rm geometric}\, {\rm and}\, {\mathcal A}(z)\in \reallywidetilde{[ A (z)]_{\rm diag}}.\] Then
$ {\mathcal A}(z)$ is geometric and if both $A(0)$ and ${\mathcal A} (0)$ are differential operators,
\begin{enumerate}
\item for any $x\in \widetilde M$, the residue densities defined in \eqref{eq:covresiduedensity} \[{\rm Res}_{x}({\mathcal A}^\prime(0))=\reallywidetilde{{\rm Res}_{\pi(x)}(A^\prime(0))} \]
lie in ${\mathcal P}_{n,n,n}^{\widetilde g, \widetilde \nabla^ W}$.
\item Consequently, \[ {\rm fp}_{z=0}{\rm TR}_\Gamma\left(\mathcal A (z)\right)=\frac{1}{q}\, {\rm Res}_{\Gamma}({\mathcal A}^\prime(0))=\frac{1}{ q}\, \int_{F} {\rm Res}_{ x}({\mathcal A}^\prime(0))(x)\, dx= \frac{1}{q}\, \int_{M} {\rm Res}_{x}({\mathcal A}^\prime(0))(x)\, dx\] is the integral of densities generated by Pontrjagin forms on the fundamental domain and Chern forms on the auxiliary bundle.
\end{enumerate}
\end{thm}
\begin{rk}
In \eqref{eq:covresiduedensity} the residue densities are defined for classical pseudodifferential operators whereas ${\mathcal A}^\prime(0)$ is a log-polyhomogeneous operator (see e.g. \cite{Le} and references therein) but it follows from the previous results that they extend to derivatives
${\mathcal A}^\prime(0)$ whenever ${\mathcal A} (0)$ is differential.
\end{rk}
\begin{proof}
In order to get identities on the level of densities, we need to apply the above results to families $\phi\, {\mathcal A}(z)$ for any $\Gamma$-invariant function $\phi$ on $\widetilde M$.
\begin{enumerate}\item Since ${\mathcal A}(0)$ and $A(0)$ are differential operators, we know that $\phi\, {\mathcal A}^\prime (0)$ and $\phi\, A^\prime (0)$ (which are not classical operators) have a well-defined Wodzicki residue. The fact that the Wodzicki residue canonically lifts to coverings (Proposition \ref{prop:ResTRliftedA}) applied to $\phi\, {\mathcal A}^\prime (0) $ then yields, for any smooth $\Gamma$-invariant function $ \phi$ on $\widetilde M$,
\[\int_F {\rm Res}_x\left(\phi\, {\mathcal A}^\prime(0)\right)\, dx={\rm Res}_\Gamma\left(\phi\, {\mathcal A}^\prime (0)\right)= {\rm Res} \left(\phi\, { A}^\prime (0)\right)= \int_M {\rm Res}_x\left(\phi\, A^\prime(0)\right)\, dx \]
from which we deduce the identity on the level of densities:
\[{\rm Res}_{x}({\mathcal A}^\prime(0))=\reallywidetilde{{\rm Res}_{\pi(x)}(A^\prime(0))}. \]
\item The second statement follows from combining \eqref{eq:PSclassicalopdiff-bis}, which relates the finite part of the canonical trace to the residue, applied to $\phi\, {\mathcal A}(z)$ for any $\Gamma$-invariant function $\phi$ on $\widetilde M$, with Gilkey's theory of invariants \cite[Theorem 2.6.2]{G}, since every element of ${\mathcal P}^{g,\nabla^W}_{n,n,n}$
is a polynomial in the $2$-jets of the metric and the $1$-jets of the auxiliary
connection.
\end{enumerate}
\end{proof}
Consequently, $\zeta$-spectral invariants for geometric operators can be written as integrals of densities generated by Pontrjagin forms on the underlying manifold and Chern forms on the auxiliary bundle.
\begin{cor} With the notations of \eqref{eq:diffliftedregtraces}, the spectral zeta invariants \[\zeta_{A_\ve ( \mathbf D ), Q_\ve(\mathbf \Delta)}(0)= \zeta_{{\mathcal A}_\ve(\widetilde{\mathbf D}),{\mathfrak Q}_\e(\widetilde{\mathbf \Delta})}(0)\]
can be written as integrals of densities generated by Pontrjagin forms on the underlying manifold and Chern forms on the auxiliary bundle.
\end{cor}
\begin{proof} This follows from applying Theorem \ref{thm:geometricop} to the families $ z\mapsto A_\ve (\mathbf D)\,Q_\ve (\mathbf \Delta)^{-z}\in \Psi_{\rm cl}(M, E) $, resp. $z\mapsto {\mathcal A}_\ve (\widetilde{\mathbf D})\,Q_\ve (\widetilde { \mathbf \Delta})^{-z}\in \Psi_{\Gamma,{\rm cl}}(\widetilde M, \widetilde E) $.
\end{proof}
\section{Introduction}
In the previous paper (\cite{YE97}, hereafter YE),
we determined neutral stability points of the
{\it Chandrasekhar-Friedman-Schutz} (CFS) instability of general
relativistic rotating polytropes, by which non-axisymmetric oscillations
of rotating stars are excited through the coupling with
gravitational radiation (see \cite{CH70,FS78,JF78}). In the absence of
viscosity this instability sets in at the points where eigenfrequencies
of the modes vanish as seen from the inertial observer at spatial infinity.
Thus to determine the points on equilibrium sequences of rotating stars
where the model begins to become unstable, zero frequency modes in the
asymptotically inertial frame must be found.
In YE the equations of state (EOS) of stellar matter were restricted
to the simple polytropic relation, and neutral points of the
counter-rotating f-modes were obtained. Here the counter-rotating modes
denote the oscillations whose phase propagation is retrograde with respect
to the stellar rotation as seen from the observer rotating with the star.
In this paper we investigate oscillation modes and their neutral points
of stability for more realistic EOS proposed for the neutron star matter.
The investigated modes are the same ones as in YE, which may be
the {\it spheroidal} modes most susceptible to the CFS instability.
\footnote{Recent discovery of
the CFS instability of the r-mode and its strong effect on the stellar
rotational evolution are also interesting subjects
(\cite{NA98,AKS98,LOM98}), but they are beyond the scope of this paper.}
It has been discussed that modes with azimuthal quantum number
$m=3$ to $5$ (the number by which the eigenfunctions of the modes are
decomposed into harmonics having angular $\varphi$-coordinate dependence
$\sim e^{im\varphi}$)
are the most interesting for this instability (\cite{LL86}).
Also interesting are the recently discovered
bar mode neutral points for rather soft EOS, which never appear in the
Newtonian framework (for fully general relativistic treatments, see
\cite{SF97}; also see YE).
Thus we will investigate these lower order modes in this paper.
Concerning the neutral points of the CFS instability,
Morsink et al. (1998, hereafter MSB) investigated realistic neutron star
models by applying the numerical method to find the exact neutral modes
of general relativistic rotating stars developed by Stergioulas and
Friedman (1997). They obtained f-mode neutral points for models with
various masses for several representative EOS. Therefore we will compare
our results with theirs.
The counter-rotating f-modes are not only interesting in the context of
the CFS instability of a single neutron star, but also may play an important
role in compact binary systems because they may couple strongly with the tidal
potential of the companion. The most significant effect will be the
resonant excitation of the modes by the tidal force and its back reaction
to the orbital motion of the binary system. We will study this subject
in the last section of this paper.
\section{Brief Summary of the Solving Method}
\subsection{Assumptions}
We assume that axisymmetric equilibrium stars are rotating uniformly
and that the stellar matter is described by zero temperature EOS.
Under these assumptions, equilibrium states of relativistic rotating
stars are obtained numerically by the KEH scheme (\cite{KEH89}).
Linear adiabatic perturbations may be a good approximation
in the present situation, and it is also assumed that the adiabatic index
$\gamma$ of the perturbation coincides with the local adiabatic index
of the equilibrium star as follows:
\begin{equation}
\frac{\epsilon +p}{p}\frac{\Delta p}{\Delta\epsilon}\equiv
\gamma = \frac{\epsilon +p}{p}
\left(\frac{dp}{d\epsilon}\right)_{Equil.},
\end{equation}
where $\Delta$ means the Lagrangian perturbation of the corresponding
variable. Eulerian perturbations of the metric components are totally
omitted as in the previous study (YE), i.e., the Cowling approximation
is adopted.
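As a simple illustration of the adiabatic index defined above (for a schematic EOS of the form $p\propto \epsilon^{\Gamma_1}$ with constant $\Gamma_1$, which is not one of the realistic EOS employed below), one finds
\[
\gamma = \frac{\epsilon+p}{p}\,\frac{dp}{d\epsilon}
= \frac{\epsilon+p}{p}\,\Gamma_1\,\frac{p}{\epsilon}
= \Gamma_1\,\frac{\epsilon+p}{\epsilon},
\]
so that $\gamma$ reduces to $\Gamma_1$ in the non-relativistic limit $p\ll\epsilon$.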
\subsection{Equations of State}
There exist many candidates for the EOS of real neutron stars
at zero temperature. We here examine some representative EOS
to cover a wide range of stiffness.
\footnote{See Nozawa et al.~(1998) for recent calculations and a summary of
equilibrium models with various realistic candidates for the cold EOS.}
Our choices are those of
1) Pandharipande with hyperon (denoted by EOS B in \cite{AB77}),
2) Bethe-Johnson without hyperon (\cite{BJ74}), 3) Bethe-Johnson
(EOS C in Arnett \& Bowers) and 4) more recent WFF3 (\cite{WFF88})
joined to NV (\cite{NV73}) in the low density region.
In order to compare our result with those of MSB, the EOS of Pandharipande
without hyperon (EOS A in Arnett \& Bowers) is also employed.
The extremely stiff EOS L in Arnett \& Bowers is used only in Figure 8 for the
comparison of EOSs with a wide range of stiffness.
\subsection{Numerical Treatment}
Our numerical scheme is basically the same as that in YE. Perturbed
quantities are assumed to behave as $\sim e^{-2\pi i\nu t+im\varphi}$,
where $t$ is the killing time coordinate.
A minor change that has been made is the introduction
of a function $q \equiv \delta p/(\epsilon +p)$, instead of the Eulerian
perturbation of the Emden function for polytropic stars. Coefficients
of the perturbed equations contain the background metric and its connection
coefficients as well as the pressure gradient and a function of adiabatic
index like $\gamma p/(\epsilon +p)$. \footnote{With these coefficients
introduced, our system of equations, unlike that of MSB, is free from
coefficients that diverge at the stellar surface for relatively stiff EOS.}
We have used $(r \times \theta) = (100 \times 61)$ grid points for
equilibrium models, where $(r, \theta)$ are the spherical polar
coordinates. Since fewer grid points are used for the perturbative
calculation owing to limited computational resources, values for
equilibrium states are interpolated to give values at the coarse grid
points of our surface-fitted coordinate (see YE). The interpolation is
done by employing the cubic spline scheme in two dimensions.
obtained by using $(r \times \theta) = (25 \times 12)$ grid points in the
surface-fitted coordinate.
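The interpolation step may be sketched as follows; the grid sizes match those
quoted above, but the variable names, the placeholder data and the use of
{\tt scipy} are our illustrative assumptions, not a description of the actual
code employed here.
\begin{verbatim}
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Equilibrium quantity on the fine (r, theta) grid (placeholder data).
r_fine = np.linspace(0.0, 1.0, 100)
theta_fine = np.linspace(0.0, np.pi / 2, 61)
F_fine = np.random.rand(100, 61)

# Two-dimensional cubic spline through the fine-grid values.
spline = RectBivariateSpline(r_fine, theta_fine, F_fine, kx=3, ky=3)

# Pointwise evaluation on the coarse surface-fitted grid (25 x 12):
# the radial positions depend on the angle there, so we evaluate
# spoke by spoke; the surface radius R(theta) = 1.0 is a placeholder.
theta_coarse = np.linspace(0.0, np.pi / 2, 12)
F_coarse = np.array([spline.ev(np.linspace(0.0, 1.0, 25),
                               np.full(25, th))
                     for th in theta_coarse]).T   # shape (25, 12)
\end{verbatim}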
\section{Results}
\subsection{Eigenfrequencies and Eigenfunctions}
Rotational sequences of equilibrium stars can be obtained by fixing
the central energy density $\epsilon_c$ and changing the rotational
parameter such as the ratio of the polar radius to the equatorial
radius in the meridional cross section of the star. Physical quantities
such as the gravitational mass, the $T/|W|$ value and the angular
momentum are calculated after equilibrium configurations are
computed. Here $T$ and $W$ are the rotational energy and the gravitational
energy, respectively, whose definitions can be found in Komatsu et al. (1989);
their ratio can be considered a standard indicator of the stellar
rotation. By changing the central density, many sequences
of equilibrium stars are obtained and a series of models with the same
gravitational mass and a different rotational frequency can be chosen.
For these equilibrium models the eigenproblem is solved numerically.
In this paper we will concentrate only on the counter-rotating f-modes.
They are the generalization of the Kelvin modes of Newtonian
Maclaurin spheroids, with counter-rotating phase velocity as seen by
an observer co-rotating with the star, and have indices $l=m$ of the
spheroidal harmonics. These particular modes may be the most susceptible
to the CFS instability (see \cite{BF86}) and have been the main focus of
investigations of this instability.
Figures 1 -- 7 show the dependence of the eigenfrequency $\nu$ on the
rotational frequency $f$ for the specified EOS and for fixed
gravitational mass. For slowly rotating models, the frequency becomes
higher as $m$ increases. Note that, since we are considering the behavior of
the perturbed quantities expressed as $\sim e^{-2\pi i\nu t + im\varphi}$,
the phase velocity $2\pi\nu/m$ is negative in this case.
As the rotational frequency is increased, the rotational dragging effect
on the mode ({\it not} the 'inertial-frame-dragging' effect in general
relativity)
works more strongly for the larger $m$ modes, and the order of the mode
frequencies is reversed. As a result, modes with larger $m$ pass their
neutral points earlier on the sequences.
\begin{figure}[htbp]
\centering
\psfig{file=fig1.ps,height=5cm,width=8cm,angle=-90}
\caption[fig1.ps]{Eigenfrequencies of f-modes for the neutron stars
constructed with the WFF3-NV EOS and $M = 1.8 M_\odot$. It should be
noted that both the eigenfrequency and the rotational frequency
of the stars are not the angular frequencies but the ordinary frequencies.
Symbols have the following meanings: '$\Diamond$' for $m=2$ mode,
'$+$' for $m=3$, '$\Box$' for $m=4$ and '$\times$' for $m=5$.
The vertical line in the right part is the maximum frequency which
corresponds to the mass-shedding limit of the sequence. \label{fig1}}
\end{figure}
\begin{figure}[htbp]
\centering
\psfig{file=fig2.ps,height=5cm,width=8cm,angle=-90}
\caption[fig2.ps]{Same as Figure 1 except for $M=1.4 M_\odot$. \label{fig2}}
\end{figure}
\begin{figure}[htbp]
\centering
\psfig{file=fig3.ps,height=5cm,width=8cm,angle=-90}
\caption[fig3.ps]{Same as Figure 1 except for $M=1.0 M_\odot$. \label{fig3}}
\end{figure}
\begin{figure}[htbp]
\centering
\psfig{file=fig4.ps,height=5cm,width=8cm,angle=-90}
\caption[fig4.ps]{Same as Figure 1 except for the EOS B and
$M=1.4 M_\odot$. \label{fig4}}
\end{figure}
\begin{figure}[htbp]
\centering
\psfig{file=fig5.ps,height=5cm,width=8cm,angle=-90}
\caption[fig5.ps]{Same as Figure 4 except for $M=1.0 M_\odot$. \label{fig5}}
\end{figure}
\begin{figure}[htbp]
\centering
\psfig{file=fig6.ps,height=5cm,width=8cm,angle=-90}
\caption[fig6.ps]{Same as Figure 4 except for the EOS Bethe-Johnson
(neutron).\label{fig6}}
\end{figure}
\begin{figure}[htbp]
\centering
\psfig{file=fig7.ps,height=5cm,width=8cm,angle=-90}
\caption[fig7.ps]{Same as Figure 4 except for the EOS C.\label{fig7}}
\end{figure}
Near the mass-shedding limit, eigenvalues rise sharply with the increase
of the rotational parameter. By using the more detailed data set from which
these graphs are produced, we can obtain much smoother eigenfrequency curves
at fixed central energy density and show that this behavior is also
seen there. However, the eigenfunctions of these models show sharp rises
of the amplitudes near the surface region in the equatorial plane.
Therefore, the rather coarse angular resolution may prevent us from solving
the numerical eigenvalue problem accurately near the mass-shedding limit.
\begin{figure}[htbp]
\centering
\psfig{file=fig8.ps,height=5cm,width=8cm,angle=-90}
\caption{Dimensionless eigenfrequency of $m=3$ mode is plotted
against the dimensionless rotational frequency.
The three sequences correspond to the same
gravitational mass ($M=1.4M_{\odot}$) models with
different EOSs, WFF3-NV, B and L. \label{fig8}}
\end{figure}
\begin{figure}[htbp]
\centering
\psfig{file=fig9.ps,height=5cm,width=8cm,angle=-90}
\caption{Same as Figure 8, except that the EOS is fixed (WFF3-NV)
and the gravitational masses are varied ($M=1.8,1.4,0.8 M_{\odot}$).
\label{fig9}}
\end{figure}
Next we show the typical behaviors of the eigenfrequencies
of the modes of our interest. In Figure 8 the dimensionless eigenfrequency
of the $m=3$ mode is plotted against the dimensionless rotational frequency
for three EOSs of various stiffness, with the gravitational
mass of the model being fixed (as $M=1.4M_{\odot}$).
The EOS B is the softest
among the three, and the EOS WFF3-NV and EOS L become stiffer in this order.
The normalization factor is $\sqrt{4\pi\bar{\rho}}$,
where $\bar{\rho}\equiv M/V_p$ is the averaged density, with $M$ and $V_p$
being the gravitational mass and the proper volume of the equilibrium star.
We can see that with this normalization the eigenfrequency is rather
insensitive to the
stiffness of the EOS, from cases with no rotation to those nearly at the
mass-shedding limit.
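For orientation, the size of this normalization factor in physical units can
be estimated with the following rough sketch; it restores $G$ and replaces
the proper volume $V_p$ by a Euclidean ball of the equatorial radius, both
simplifying assumptions of ours.
\begin{verbatim}
import math

G = 6.674e-8       # cm^3 g^-1 s^-2
M_SUN = 1.989e33   # g

def normalization_freq_hz(mass_msun, radius_cm):
    # sqrt(4 pi G rho_bar) / (2 pi), with rho_bar = M / V and V a
    # Euclidean ball; a crude stand-in for the proper volume V_p.
    volume = 4.0 / 3.0 * math.pi * radius_cm**3
    rho_bar = mass_msun * M_SUN / volume
    return math.sqrt(4.0 * math.pi * G * rho_bar) / (2.0 * math.pi)

# normalization_freq_hz(1.4, 1.0e6) is of order a few kHz, i.e.
# comparable to the f-mode frequencies themselves.
\end{verbatim}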
Figure 9 displays the variation of the normalized eigenfrequency with the
gravitational mass, with the EOS being fixed (as WFF3-NV).
It is seen that an increase in gravitational mass makes the mode frequency
larger, which corresponds to the effect of softening the EOS in
Figure 8.
This is reasonable, since the strong self-gravity of the star
tends to concentrate its density profile toward the
central region, which effectively realizes a configuration
with a softer EOS.
In Figures 10 and 11, we show the typical behavior of the eigenfunction
$q$. The equilibrium models compared in these two figures have the same
central density but different rotational frequencies.
In both models, the function $q$ increases monotonically
from the center to the surface of the star along the radial spokes
in the surface-fitted coordinate. For the slowly rotating model (Fig.10),
the angular dependence of $q$ is nearly that of the associated
Legendre function, $P_l^m(\cos\theta)$ (in this case $l=m=4$),
whereas rapid rotation tends to shift the distribution of the
function $q$ toward the equatorial plane (Fig.11).
\begin{figure}[htbp]
\centering
\psfig{file=fig10.ps,height=5cm,width=8cm,angle=-90}
\caption{An example of the eigenfunction $q$ (see text) for
a slowly rotating star with the WFF3-NV EOS. A quarter of the meridional
cross section of the star is shown. The mode number $m=4$.
The radial coordinate distance is normalized by using
$c/(4\pi\epsilon_c)^{1/2}$ where $c$ is the velocity of light
and $\epsilon_c$ is the central mass density of the star.
The amplitude of the eigenfunction is normalized such that the value of
$q$ at the surface point in the equatorial plane becomes unity.
The parameters of the equilibrium model are:
$\epsilon_c=1.0\times 10^{15}$g/cm$^3$,
the rotational frequency $f=285$ Hz,
$M=1.19 M_\odot$ and $T/|W|=0.053$. The eigenfrequency $\nu =-2391$ Hz.
\label{fig10}}
\end{figure}
\begin{figure}[htbp]
\centering
\psfig{file=fig11.ps,height=5cm,width=8cm,angle=-90}
\caption{The eigenfunction $q$ for a rapidly rotating star.
The same mode as in Figure 10.
The parameters of the equilibrium model are:
$\epsilon_c=1.0\times 10^{15}$g/cm$^3$,
$f=1092$ Hz, $M=1.43 M_\odot$ and $T/|W|=0.093$.
The eigenfrequency $\nu =1250$ Hz. \label{fig11}}
\end{figure}
To see the 'radial' dependence of the eigenfunction, we show the
function $q$ on the equatorial plane (Fig.12). Here the same models
are used as those in Figures 10 and 11. As a 'radial' coordinate here we
take the value $x\equiv 1-\epsilon/\epsilon_c$, which roughly represents
the matter-energy distribution of the equilibrium stars.
As seen in this figure, the distribution of the function $q$ becomes
concentrated toward the surface region as the star rotates more rapidly.
In the Newtonian theory, the same situation seems to improve the
Cowling approximation for rapidly rotating stars (\cite{SY97}),
because the role of the perturbed gravitational potential becomes less
important. Roughly speaking, a smaller part of the stellar mass participates
in the oscillation as the star rotates more rapidly. This is also
likely to be the case in general relativistic stars, though we have no
'exact' quasi-normal modes to compare with.
\begin{figure}[htbp]
\centering
\psfig{file=fig12.ps,height=5cm,width=8cm,angle=-90}
\caption{The eigenfunction $q$ on the equatorial plane
is plotted against the density coordinate $x$. The equilibrium models
are the same as in Figure 10 and 11. Mode number $m=4$.
The solid curve is that for the model with $T/|W|=0.053$, whereas the
dashed one is that for the model with $T/|W|=0.093$. \label{fig12}}
\end{figure}
In Figure 13 the differences in radial behavior in the equatorial
plane between modes with different $m$ are shown. Here the radial
coordinate is that of the surface-fitted coordinates. It can be seen
that as $m$ increases, the main part of the oscillation of the star shifts
outward, where less of the stellar mass resides. This can
make the Cowling approximation more accurate for higher order modes
(see Table 1).
\begin{figure}[htbp]
\centering
\psfig{file=fig13.ps,height=5cm,width=8cm,angle=-90}
\caption{The eigenfunction $q$ on the equatorial plane with
different mode numbers. The radial coordinate distance is that of the
surface-fitted coordinate. The equilibrium model is the same as
that in Figure 11. Symbols have the same meanings as in Figure 1.
\label{fig13}}
\end{figure}
\subsection{Neutral Points of the CFS Instability}
As is already remarked, we can estimate the neutral points of the
CFS instability by finding zeroes of the
eigenfrequencies. \footnote{Note that the neutral points here are
{\it not} determined by using the graphs shown in the previous
section. More detailed data sets have been used to obtain them.}
In Table 1 we compare the values of $T/|W|$ at the neutral points
of the instability with those obtained by MSB.
Here we summarize
the tendency of the Cowling approximation in general relativity as
follows:
(1) for the same EOS and for the same mode number, the Cowling approximation
gives better results as the central energy density of the model
increases;
(2) for the same equilibrium model, it results in more accurate values
for larger mode numbers; and
(3) the Cowling approximation has a tendency to overestimate
the stability in the case of relatively weak gravity
(cf. the Newtonian case in \cite{SY97}).
These are qualitatively consistent with the previous results in YE.
In contrast to property (3), for larger central density
(strong gravity) models, the stability of the higher order modes seems to be
underestimated by the Cowling approximation. As for the bar mode,
there seems to be no improvement as the central density increases.
\begin{table}[hbt]
\caption{Comparison of values of $T/|W|$ at neutral points with
the results by Morsink et al. (1998) \label{tabl1}}
\vspace{0.2cm}
\begin{center}
\begin{tabular}{ccccc}\tableline
EOS&mode&$\epsilon_c$($\times 10^{15}$g/cm$^3$)& present & MSB\\\tableline
A & $m=2$ & 1.0 & 0.094 & 0.082\\
& & 3.2 & 0.079 & 0.056\\
& $m=3$ & 1.0 & 0.081 & 0.066\\
& & 3.2 & 0.049 & 0.044\\
& $m=4$ & 1.0 & 0.056 & 0.054\\
& & 3.2 & 0.035 & 0.035\\
& $m=5$ & 1.0 & 0.043 & 0.044\\
& & 3.2 & 0.027 & 0.029\\
&&&&\\
C & $m=2$ & 0.74 & $-$ & 0.087\\
& & 0.90 & 0.098 & 0.082\\
& & 2.5 & 0.077 & 0.059\\
& $m=3$ & 0.70 & 0.076 & 0.066\\
& & 0.95 & 0.071 & 0.061\\
& & 2.5 & 0.048 & 0.046\\
& $m=4$ & 0.70 & 0.053 & 0.052\\
& & 1.0 & 0.049 & 0.047\\
& & 2.5 & 0.035 & 0.036\\
& $m=5$ & 1.0 & 0.036 & 0.038\\
& & 2.5 & 0.027 & 0.028\\\tableline
\end{tabular}
\end{center}
\end{table}
When the EOS is fixed, we can calculate the eigenfrequency of a mode
for the model with a given rotational frequency $f$ (Hz) and
a given gravitational mass $M$ ($M_\odot$).
Then we have a neutral stability curve of
the CFS instability for the mode in the $f - M$ plane as the set of
zeroes of the eigenfrequencies.
The neutral stability curves of the four modes are shown, for four
different EOSs, in Figures 14 -- 17.
\begin{figure}[htbp]
\centering
\psfig{file=fig14.ps,height=5cm,width=8cm,angle=-90}
\caption{Neutral stability curves of the f-mode in the
$f - M$ plane. The gravitational mass is normalized by the solar mass.
The EOS is that of Bethe-Johnson without hyperon contribution.
The solid line is the mass-shedding limit curve.
The dashed line is the approximate maximum mass curve for a given
rotational frequency (see text). Symbols used here are the same as in
Figure 1. \label{fig14}}
\end{figure}
\begin{figure}[htbp]
\centering
\psfig{file=fig15.ps,height=5cm,width=8cm,angle=-90}
\caption{Same as Figure 14 except for the WFF3-NV EOS.
\label{fig15}}
\end{figure}
\begin{figure}[htbp]
\centering
\psfig{file=fig16.ps,height=5cm,width=8cm,angle=-90}
\caption{Same as Figure 14 except for the EOS C.
The approximate maximum mass curve is omitted here. \label{fig16}}
\end{figure}
\begin{figure}[htbp]
\centering
\psfig{file=fig17.ps,height=5cm,width=8cm,angle=-90}
\caption{Same as Figure 14 except for the EOS B. \label{fig17}}
\end{figure}
Figure 14 displays those for the EOS of Bethe-Johnson without hyperon.
The region on the left-hand side of each curve is the stable region
against the CFS mechanism for the corresponding mode.
The solid line is the mass-shedding limit curve,
on which the gas on the surface in the equatorial plane rotates with the local
Keplerian velocity. The dashed line in the upper region is the approximate
maximum mass line for a given rotational frequency. The reason why we use
the word 'approximate' is that, in obtaining it, we hold constant not the
rotational frequency but the axis ratio of the configuration (\cite{KEH89}).
At first sight, it seems strange that for sufficiently rapidly rotating
cases we have a model in the region to the right of the mass-shedding
curve. It seems to imply that for a given mass there are equilibrium models
with larger rotational frequencies than the mass-shedding case.
This comes from the fact that the rotational frequency $f$ is {\it not}
a proper measure of the rotation for extremely rapidly rotating
stars (see the appendix for an explanation).
\section{Applications}
We here apply our results to two issues of interest in neutron star
physics. One is the estimation of the time scale of the
CFS instability. The other is related to the resonant excitation of
f-modes in inspiraling compact binary systems which are possible targets
of the gravitational wave detectors under construction.
\subsection{CFS Instability of Neutron Stars}
In the previous section neutral points of the CFS instability are
determined. Then what we want to know next is {\it how fast these unstable
perturbations grow beyond these points}. Unfortunately our approximation
does not provide us the answer directly.
What is expected to happen in the real process is that the stellar
free oscillations with non-zero frequencies couple to gravitational wave
radiation which carries its energy to infinity, and the frequencies inevitably
have imaginary parts. This is the problem of the so-called
{\it quasi-normal mode} which is well-known in black hole physics.
To obtain complex eigenfrequencies, we must take the metric
perturbations into account and the {\it out-going wave} conditions must be
specified to them. This task is extraordinarily difficult for rapidly
rotating stars and no investigation has been accomplished for it yet.
As an alternative, there have been several works which estimate the
growth time of the instability by applying the gravitational radiation
reaction potential of the post-Newtonian expansion (\cite{KT69}) to
Newtonian stellar oscillations. In this approximation, we only need to
know the time dependence of the mass multipoles of the oscillating star.
Here we roughly estimate the growth rates of the instability by following
Comins (1979), who investigated the secular effects of the gravitational
radiation reaction and viscosity on the oscillating Maclaurin spheroids.
According to the analysis of Comins, the effect of gravitational radiation
adds a small imaginary part to the oscillation frequency viewed from the
co-rotating frame with the star $\Sigma\equiv -2\pi (\nu-f)$ as follows:
\begin{equation}
\delta\Sigma = \frac{2iG\left[\frac{M}{\frac{4}{3}\pi R_1^3}\right]
(m+1)(m+2)R_1^{2m+1} (2\pi)^{2m}
\nu^{2m+1}}
{(m-1)\left[(2m+1)!!\right]^2
\left[(m-1)f - \nu\right]c^{2m+1}},
\end{equation}
where $c$ and $G$ are the velocity of light and the gravitational constant,
respectively, and $R_1$ is the equatorial radius of the star.
From this expression, we can estimate the e-folding time $\tau_{GR}$
for the growth of perturbations as follows:
\begin{eqnarray}
\tau_{GR} (\mbox{sec}) &=& k(m)
\left[ (m-1)\left( \frac{f}{\mbox{kHz}} \right)
- \left(\frac{\nu}{\mbox{kHz}}\right)\right]\times\nonumber\\
&& \left(\frac{M}{M_{\odot}}\right)^{-1}
\left(\frac{R_1}{10\mbox{km}}\right)^{-2m+2}
\left(\frac{\nu}{\mbox{kHz}}\right)^{-2m-1},
\end{eqnarray}
where $k(m)$ is defined as,
\begin{equation}
k(m) = \frac{10^{2m-4}\left[c/10^{10}\right]^{2m+1}
(m-1)\left[(2m+1)!!\right]^2}
{4(2\pi)^{2m-1}(m+1)(m+2)}.
\end{equation}
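For convenience, the two formulae above can be transcribed directly into a
short script. This is only a transcription; the eigenfrequencies $\nu$
entering Table 2 below are not tabulated here, so the snippet merely makes
the scalings explicit.
\begin{verbatim}
import math

def double_factorial(n):
    # (2m+1)!! = 1 * 3 * 5 * ... * n for odd n
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def k_factor(m, c=2.998e10):
    # the coefficient k(m); c is the speed of light in cm/s
    return (10**(2*m - 4) * (c / 1e10)**(2*m + 1) * (m - 1)
            * double_factorial(2*m + 1)**2
            / (4.0 * (2.0*math.pi)**(2*m - 1) * (m + 1) * (m + 2)))

def tau_gr_sec(m, f_khz, nu_khz, mass_msun=1.4, r1_10km=1.0):
    # e-folding time in seconds; meaningful in the unstable regime,
    # where nu > 0 and (m - 1) f - nu > 0
    return (k_factor(m) * ((m - 1)*f_khz - nu_khz)
            / mass_msun / r1_10km**(2*m - 2) / nu_khz**(2*m + 1))
\end{verbatim}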
This formula is applied to the neutron star models of
$M=1.4M_{\odot}$ with the WFF3-NV EOS (Table 2).
It is seen that, as the rotation rate is increased, the lower order modes
suffer a stronger destabilizing effect from gravitational radiation than the
higher order modes, whose neutral points reside at lower rotational
frequencies; this originates from the greater efficiency of gravitational
radiation from the lower order oscillations.
As a result, the modes shown here have timescales of the same order
near the mass-shedding limit.
\begin{table}[hbt]
\caption{Estimated timescale in units of second
of the CFS instability for the $M=1.4 M_{\odot}$
star with the WFF3-NV EOS. \label{tabl2}}
\vspace{0.2cm}
\begin{center}
\begin{tabular}{ccccc}\\\tableline
$f$(Hz) && $m=3$ & $m=4$ & $m=5$\\\tableline
$818$ && $-$ & $-$ & $9\times 10^{15}$\\
$952$ && $-$ & $-$ & $2\times 10^{9}$\\
$1006$ && $4\times 10^{19}$ & $4\times 10^7$ & $6\times 10^7$\\
$1053$ && $3\times 10^7$ & $1\times 10^6$ & $3\times 10^6$\\
$1117$ && $1\times 10^3$ & $1\times 10^3$ & $3\times 10^3$\\\tableline
\end{tabular}
\end{center}
\end{table}
This estimation needs to be treated as a very rough one, because we assume
that the formulae for Newtonian Maclaurin spheroids are applicable to the
relativistic neutron star models, and because the correspondence of equilibrium
quantities such as the mass and the equatorial radius is vague.
However, the qualitative behavior of the modes should remain the same
even if a more refined treatment were employed.
\subsection{Resonant Excitation of the f-modes for Inspiraling Compact
Binary Systems}
Inspiraling compact binary systems (neutron star--neutron star (NS/NS),
black hole--neutron star (BH/NS) and black hole--black hole)
are the most promising sources of gravitational waves for the large
interferometric gravitational wave detectors under construction,
such as LIGO and VIRGO (see e.g. \cite{KT94} and the references therein).
These detectors will be able to
observe the inspiraling phase, where the components of the system are well
approximated by point masses. To extract meaningful results from the
gravitational wave signals, it is indispensable to have sufficiently
accurate theoretical templates of wave forms in the frequency range
of $10-10^3$Hz to which these detectors are sensitive (\cite{CET93}).
In this context, if at least one of the components is a neutron star,
its internal hydrodynamical degrees of freedom may be a potential threat
to the template construction. Bildsten \& Cutler (1992) examined
the problem of the 'equilibrium tide' and the issue of tidal locking of the
components. Tidal locking, which would cause synchronization of the spin
and the orbital motion, seems rather unlikely to occur according to their
result, and the theoretical template suffers only a negligible correction
from it. Excitation of stellar oscillations of the inspiraling stars,
or the 'dynamical tide', is another issue to
be considered. Reisenegger \& Goldreich (1994) and Lai (1994)
investigated resonant excitations of g-modes and their effects on
the inspiral orbit. For slowly rotating stars, g-mode frequencies
may fall in a range as low as the orbital resonance frequency.
According to their results, however, g-modes affect the orbit
negligibly, since the coupling between the tidal potential and the g-mode
eigenfunctions is rather weak.
As we have seen, counter-rotating f-modes of neutron stars pass
the neutral points viewed from the asymptotic inertial frame
if the star rotates sufficiently rapidly. Thus it may
suffer resonant excitations during the inspiral phase.\footnote{As seen
in Bildsten \& Cutler, tidal synchronization
of the inspiraling components is not expected, so the initial angular velocities
of both components are preserved during the inspiral. We here consider
the case in which at least one neutron star in the system has a sufficient
angular velocity to suffer the orbital resonance. Whether such systems
actually exist is beyond the scope of this discussion.}
Once the resonance condition is fulfilled, the f-mode is likely to be
a much more dangerous obstacle to the construction of the
theoretical template, since the f-mode eigenfunction couples more
strongly with the tidal potential. We here adopt the simple oscillator
model by Reisenegger \& Goldreich (1994) and examine the effect of the
mode excitation.
A sinusoidal external force operating on a star with a mass $M_1$
whose frequency fulfills the resonance condition,
$\nu/m = n_{\mbox{\tiny orb}}$, where $n_{\mbox{\tiny orb}}$
is the orbital frequency, excites a mode whose energy amounts to,
\begin{equation}
\varepsilon = \frac{(F\delta t)^2}{8 M_1},
\end{equation}
during the resonance time interval $\delta t$, which is typically
the decay time of the orbit by gravitational radiation.
The external tidal force by the companion with a mass $M_2$ is estimated
by the following formula:
\begin{equation}
F = \frac{GM_1M_2}{R_1^2}\left(\frac{R_1}{a}\right)^{m+1} S,
\end{equation}
where $R_1$ is the stellar radius and $a$ is the separation of the
binary system. The factor $S$ is the 'overlap integral' describing
the efficiency of the tidal force on an eigenfunction defined by
\begin{equation}
S = \int_{M_1} \sqrt{-g} u^t dr d\theta d\varphi
\left(\frac{\epsilon}{c^2}\right) \vec\xi\cdot\vec P,
\end{equation}
where $g$ is the determinant of the metric of
the background spacetime, $u^t$ the time component of the 4-velocity
of the unperturbed stellar fluid, $\vec\xi$ is the Lagrangian displacement and
$\vec P = \nabla (r^mY_m^m(\theta,\varphi))$.
The vibrational energy of an excited mode $\varepsilon$ is compared with
the orbital energy decrease by
gravitational radiation $\Delta E$ in the time interval $\delta t$,
\begin{equation}
\frac{\varepsilon}{\Delta E}
= \alpha_m[M_2/M_1;R_1]
\left(\frac{n_{\mbox{\tiny orb}}}{100\mbox{Hz}}\right)^{\frac{8m-23}{6}}
S^2,
\end{equation}
where the factor $\alpha_m$ depends on the mode number, the mass ratio of
the components and the stellar radius. If this ratio is not negligible
compared with unity, the assumption that the binary orbit evolves
solely by gravitational radiation from the orbital motion should be amended.
If the stellar radius and the mass ratio of the components are
assumed to be $R_1=10{\mbox{km}}$ and $M_2/M_1 = 1$, the factors
$\alpha_m$ are computed as follows:
\begin{equation}
\alpha_m = \cases{
5\times 10^1 & (m=2) \cr
0.4 & (m=3) \cr
0.004 & (m=4) \cr
3\times 10^{-5} & (m=5) \cr}
.
\end{equation}
If the mass ratio is larger, say $M_2/M_1 = 10$ (BH/NS case),
they are:
\begin{equation}
\alpha_m = \cases{
2 & (m=2) \cr
6\times 10^{-3} & (m=3) \cr
2\times 10^{-5} & (m=4) \cr
5\times 10^{-8} & (m=5) \cr}
.
\end{equation}
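These quoted values can be combined with the ratio formula above in a few
lines; this is only a convenience wrapper around the numbers given in the
text (for $R_1=10$ km and the two tabulated mass ratios), and the function
name and example inputs are ours.
\begin{verbatim}
ALPHA = {
    1:  {2: 5e1, 3: 0.4,  4: 0.004, 5: 3e-5},   # M2/M1 = 1  (NS/NS)
    10: {2: 2.0, 3: 6e-3, 4: 2e-5,  5: 5e-8},   # M2/M1 = 10 (BH/NS)
}

def energy_ratio(m, n_orb_hz, S, mass_ratio=1):
    # epsilon / Delta E for R1 = 10 km and the tabulated mass ratios
    return (ALPHA[mass_ratio][m]
            * (n_orb_hz / 100.0)**((8*m - 23) / 6.0) * S**2)

# e.g. a bar mode resonant at n_orb = 200 Hz with S ~ 1 in an
# equal-mass binary: energy_ratio(2, 200.0, 1.0) ~ 22, clearly
# not negligible compared with unity.
\end{verbatim}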
The overlap integral $S$ may be computed by using the eigenfunction
of the mode obtained in the Cowling approximation.
We find that $S\simeq 1$ for the bar mode and that $S$ is larger than
$0.6$, $0.3$, and $0.2$ for the $m=3,4,$ and $5$ modes, respectively,
for all the models from the non-rotating state
to the mass-shedding limit. \footnote{Here we have made a rough estimate
which assumes the validity of the Newtonian, slow-rotation-limit
formulae of Reisenegger \& Goldreich in our investigation.
It is intended only to show that the overlap integral should be
nearly of order unity for the low order modes.}
\subsubsection{Neutron Stars with the Prograde Rotation to the
Orbital Motion}
When the star under consideration rotates in the same direction as
its orbital motion, the resonance condition between the 'counter-rotating'
modes and the orbital motion may be fulfilled for states
beyond the neutral points of the modes. In this case, the bar mode may
not be significant except for stars which rotate at velocities
near the mass-shedding limit.
As seen from Figs. 1 -- 3, for higher order modes, the orbital frequencies
at the resonances, $n_{\mbox{\tiny orb}}=\nu/m$, may be under
$10^3\mbox{Hz}$ for the realistic EOS \footnote{Here we
pay attention only to the models with the WFF3-NV EOS. },
even at the mass-shedding limit of the star.
Using the formula for $\varepsilon/\Delta E$ above, it is observed that
the modes with $m\geq 4$ may not affect the orbital evolution
governed by gravitational radiation during the inspiral phase
which will be observed by gravitational wave antennas like LIGO/VIRGO.
It is also seen that for systems with a large mass ratio like a BH/NS binary,
the energy deposited in the NS vibration is too small to affect
the binary orbit.
As for the $m=3$ mode, its excitation may affect the stability and
the evolution of the system if the star has a sufficiently large
rotational frequency (for example, larger than $850$Hz for
$M=1.0M_{\odot}$; larger than $1000$Hz for $M=1.4M_{\odot}$),
since the vibrational energy is no longer negligible compared with
the gravitational radiation energy loss from the system. Moreover,
since $n_{\mbox{\tiny orb}}$ at its resonance is in the sensitivity range of
the LIGO/VIRGO detectors, the excitation may be observed as a discrepancy of
the signal form from its theoretical template.
\subsubsection{Neutron Stars with the Retrograde Rotation to the
Orbital Motion}
If the star rotates in the retrograde direction against its orbital
motion, the resonance condition of the modes and the orbit requires
the rotational frequency of the star to be lower than that of the
neutral points. Thus the resonance condition can drop to the
frequency range of the LIGO/VIRGO sensitivity windows even for lower $m$ modes.
Significantly, the bar mode, which couples most strongly to
the tidal potential, can be resonant over a wide range of rotational
frequencies of the star. For $M=1.4M_{\odot}$ models, stars with
rotational frequency above $400$Hz may suffer the resonant excitation
along their inspiraling path in the gravitational wave antennas' sensitivity
window (for the $M=1.0M_{\odot}$ case, this frequency may go down to $200$Hz).
The factor $\alpha_2$ amounts to $50$ for an equal mass NS/NS binary, and
to $2$ even in BH/NS cases with the mass ratio $M_2/M_1=10$.
Since the overlap integral $S\sim 1$ for the bar mode, the resonant
excitation would affect the evolution of the binary orbit significantly.
Moreover, the $m=3$ mode can be excited in stars rotating with frequencies
as low as $20$Hz (for the $1.4M_{\odot}$ case). Thus theoretical templates of
gravitational wave signals from compact binary systems containing an NS
with retrograde rotation should almost always take the resonant excitation
of the counter-rotating f-modes into account.
\acknowledgments
SY would like to thank Prof. B. Schutz and Dr. C. Cutler for their warm
and generous hospitality at Max-Planck-Institut f\"ur Gravitationsphysik
(Albert-Einstein-Institut) in Potsdam where a part of the numerical computation
was done and this paper was prepared.
The authors are also grateful to the anonymous referee for useful suggestions.
\section{Statement of the main results}
A semisimplicial set (called $\Delta$-set by \cite{RS}) is a functor $(\Delta^{\operatorname{inj}})^{op}\to\matheurm{Set}$, where $\Delta^{\operatorname{inj}}$ is the category of the totally ordered finite sets $[n]=\{0,\dots,n\}$ and strictly monotone maps. Rourke--Sanderson \cite{RS} (see also \cite{McC}) showed that any semisimplicial set satisfying the Kan condition admits a simplicial structure. In this note we investigate under which conditions there is a simplicial structure on a semisimplicial set which merely satisfies the ``weak Kan condition'' that all \emph{inner} horns can be filled. For notational simplicity we will refer to such an object as a \emph{quasi-semicategory}. (Here ``semi'' stands for semisimplicial. We do not intend to suggest that such an object is, in general, a model for a non-unital infinity-category.)
Let $X$ be a semisimplicial set. For $f\in X_1$, we write $f\colon x\to y$ where $x=d_1f$ and $y=d_0 f$. If $f,g,h\in X_1$, we write $g\circ f\simeq h$ if there is a 2-simplex $\sigma$ such that $d_1\sigma = h$, $d_2\sigma=f$, and $d_0\sigma = g$. The symbol $\Delta^n$ will denote the semisimplicial $n$-simplex (\emph{i.e.}, the presheaf represented by $[n]$), and $\Lambda^n_i\subset \Delta^n$ the $(n,i)$-horn.
\begin{definition}
\begin{enumerate}
\item $f\colon x\to x$ in $X$ is called \emph{idempotent} if $f\circ f\simeq f$ holds.
\item A morphism $f\in X_1$ is called an \emph{equivalence} if $f$ is both cartesian and cocartesian -- that is, if for any $n\geq 2$ there is a filler for any horn $\Lambda^n_n\to X$ whose last edge is $f$ and for any horn $\Lambda^n_0\to X$ whose first edge is $f$.
\end{enumerate}
\end{definition}
\begin{examples*}
\begin{enumerate}
\item\label{item:ex1} If $X$ is a quasi-category, by Joyal \cite{Joyal} this notion of equivalence agrees with the usual notion of equivalence (or quasi-isomorphism).
\item\label{item:ex2} Let $X=N(\mathcal{C})$ be the nerve of a non-unital category (so that $X$ is a quasi-semicategory). It is not hard to see that $f\colon x\to y$ is an equivalence in our sense if and only if for any object $z$, the maps
\[-\circ f\colon \mathcal{C}(y,z)\to \mathcal{C}(x,z)\quad\mathrm{and}\quad f\circ -\colon \mathcal{C}(z,x)\to \mathcal{C}(z,y)\]
are bijective.
\end{enumerate}
\end{examples*}
If $X$ is a quasi-category and $x\in X_0$, then the degeneracy $s_0(x)$ of $x$ is an idempotent equivalence $x\to x$. Our first result is a converse to this statement. We will say that a quasi-semicategory $X$ \emph{has a simplicial structure} if it is the underlying semisimplicial set of a simplicial set (which then is automatically a quasi-category).
\begin{theorem}\label{thm:existence}
Let $X$ be a quasi-semicategory and let $s_0\colon X_0\to X_1$ be any function such that for each $x\in X_0$, $s_0(x)$ is an idempotent equivalence $x\to x$. Then $X$ has a simplicial structure whose degeneracy in degree $0$ coincides with $s_0$.
\end{theorem}
\begin{corollary}[Rourke-Sanderson]\label{cor:Kan}
Any semisimplicial set satisfying the Kan condition has a simplicial structure.
\end{corollary}
Theorem \ref{thm:existence} comes with a relative version, see Theorem \ref{thm:relative} below. From the relative version we will deduce:
\begin{theorem}\label{thm:uniqueness}
Let $\mathcal{C}$, $\mathcal{C}'$ be quasi-categories which have the same underlying semi\-simplicial set. Then $\mathcal{C}$ and $\mathcal{C}'$ are categorically equivalent.
\end{theorem}
In section \ref{sec:proofs} we will prove Theorems \ref{thm:existence} and \ref{thm:uniqueness} and deduce Corollary \ref{cor:Kan}. In section \ref{sec:generalization} we will generalize the results to semisimplicial objects in other categories.
The results of this paper will be used by the author in the proof of an analog of Waldhausen's additivity theorem in the setup of cobordism categories \cite{Steimleforward}. The point is that cobordism categories are naturally categories without identities, just as cobordism \emph{spaces} (considered by Quinn, Ranicki, Laures--McClure and others) are naturally semisimplicial sets, while it is usually more convenient to work with simplicial objects.
\section{The relative existence theorem}\label{sec:proofs}
We start by recalling some terminology. A semisimplicial map $p\colon X\to Y$ is called \emph{inner fibration} if any commutative diagram of semisimplicial sets
\begin{equation}\label{eq:lifting_diagram}
\xymatrix{
\Lambda^n_i \ar[d] \ar[r]^h & X \ar[d]^p\\
\Delta^n \ar[r]^k \ar@{.>}[ru]& Y
}
\end{equation}
has a diagonal lift as dotted in the diagram, provided $0<i<n$. An element $a\in X_1$ is called \emph{$p$-cartesian} if any commutative diagram \eqref{eq:lifting_diagram} has a diagonal lift, provided $i=n$ and the last edge of $h$ is $a$; it is called \emph{$p$-cocartesian} if it is $p^{op}$-cartesian as an element of $X_1^{op}$. These definitions are in accordance with the usual simplicial notions.
If $Y$ has a simplicial structure, and $f\colon x\to x$ is a 1-simplex in $X$, then we call $f$ \emph{$p$-idempotent} if there is a 2-simplex $\sigma\in X_2$, all of whose faces are $f$, which projects to the degenerate simplex $s_0^2(p(x))\in Y_2$.
\begin{theorem}\label{thm:relative}
Let $p\colon X\to Y$ be an inner fibration of semisimplicial sets and $f\colon A\to X$ the inclusion of a semisimplicial subset; assume that $Y$ and $A$ have simplicial structures such that $p\circ f$ is a simplicial map. Let $s_0\colon X_0\to X_1$ be a map, compatible with the degeneracies $s_0$ on $A$ and $Y$, and such that for all $x\in X_0$, $s_0(x)$ is $p$-idempotent, $p$-cartesian, and $p$-cocartesian.
Then $s_0\colon X_0\to X_1$ extends to a simplicial structure on $X$ such that $f$ and $p$ are simplicial.
\end{theorem}
\begin{addendum}\label{addendum}
If $p$ is a Kan fibration, then a map $s_0\colon X_0\to X_1$ as required in the Theorem always exists, so that a compatible simplicial structure on $X$ exists without further hypotheses.
Theorem \ref{thm:existence} is a special case of Theorem \ref{thm:relative} where $Y$ is the terminal object and $A=\emptyset$; Corollary \ref{cor:Kan} follows from the Addendum. The relative existence theorem also implies Theorem \ref{thm:uniqueness}: Let $J$ be the groupoid with two objects $0$ and $1$ and two non-identity morphisms. We apply Theorem \ref{thm:relative} with $X=\mathcal{C}\times J$, $A=\mathcal{C}\times \{0,1\}$, $Y=J$, and $p$ the projection map, where $A$ carries the simplicial structure of $\mathcal{C}$ over $0$ and of $\mathcal{C}'$ over $1$.
We conclude that $\mathcal{C}\times J$ has a simplicial structure compatible with $\mathcal{C}$ over 0 and with $\mathcal{C}'$ over 1, such that $p$ is simplicial. Now note that $p$ is a cartesian fibration over $J$ so the pull-backs over $0$ and $1$ are categorically equivalent \cite[3.3.1.3]{HTT}.
We come to the proof of Theorem \ref{thm:relative}, which is a modification of the strategy from \cite{McC}. Throughout this section $X$ and $A$ will be as in the assumption of Theorem \ref{thm:relative}. For notational brevity we will give the proof only in the case where $Y$ is the terminal object $\{*\}$ so that the datum of $p$ and $Y$ may be ignored. The proof in the general case is identical, if ``filling a horn'' is replaced by ``choosing a diagonal lift''.
Recall the simplicial identities:
\begin{align}
\label{eq:d_vs_d}
d_i d_k &= d_{k-1} d_i \quad (i<k);
\\
\label{eq:d_vs_s}
d_i s_k &=
\begin{cases}
s_{k-1} d_i & (i<k),\\
\id & (i=k, k+1),\\
s_k d_{i-1} & (i>k+1);
\end{cases}
\\
\label{eq:s_vs_s}
s_i s_k &= s_{k+1} s_i \quad (i\leq k).
\end{align}
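These identities can be checked mechanically in the combinatorial model in which a simplex is a nondecreasing integer sequence (as in the nerve of a poset), $d_i$ deletes the $i$-th entry and $s_i$ repeats it; the following sketch is only such a sanity check and plays no role in the arguments below.
\begin{verbatim}
def d(i, x):
    # i-th face: delete the i-th entry of the tuple x
    return x[:i] + x[i+1:]

def s(i, x):
    # i-th degeneracy: repeat the i-th entry of x
    return x[:i+1] + x[i:]

x = (0, 1, 1, 3, 4)   # a 4-simplex in the nerve of a poset
n = len(x) - 1

for k in range(n + 1):
    for i in range(k):                       # d_i d_k = d_{k-1} d_i
        assert d(i, d(k, x)) == d(k - 1, d(i, x))

for k in range(n + 1):
    for i in range(n + 2):                   # the three cases of d_i s_k
        if i < k:
            assert d(i, s(k, x)) == s(k - 1, d(i, x))
        elif i in (k, k + 1):
            assert d(i, s(k, x)) == x
        else:
            assert d(i, s(k, x)) == s(k, d(i - 1, x))

for k in range(n + 1):
    for i in range(k + 1):                   # s_i s_k = s_{k+1} s_i
        assert s(i, s(k, x)) == s(k + 1, s(i, x))
\end{verbatim}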
The construction of degeneracy maps is by induction. Let us call an \emph{$N$-good system} a system of maps $(s_k\colon X_n\to X_{n+1})$ ($n\geq 0$, $0\leq k\leq \min(n,N)$) that satisfies the simplicial identities whenever they apply, and that extends the given maps on $A$ and $X_0$. Clearly a $(-1)$-good system exists; we wish to prove that any $(N-1)$-good system $(s_0, \dots, s_{N-1})$ can be extended to an $N$-good one.
We proceed in two steps. Let us call an \emph{almost $N$-good system} a system of maps $(s_k\colon X_n\to X_{n+1})$ ($n\geq 0$, $0\leq k\leq \min(n,N)$) satisfying the condition for being $N$-good, except that we do not require the identity $d_{N+1}s_N=\id$ to hold.
\begin{lemma}\label{lem:step1}
Any $(N-1)$-good system extends to an almost $N$-good system.
\end{lemma}
\begin{proof}
The construction of $s_N\colon X_n\to X_{n+1}$ is by induction on $n$, starting at $n=N$. In the case $N=0$, the induction beginning is provided by the map $s_0\colon X_0\to X_1$ which exists by assumption. The induction step and, in the case $N\neq 0$, also the induction beginning, are proven by the same construction which we now explain.
Assume that we have an $(N-1)$-good system $(s_0, \dots, s_{N-1})$ and maps $s_N\colon X_\ell\to X_{\ell+1}$ for $ N\leq \ell\leq n-1$, satisfying the condition for being almost $N$-good whenever they apply. We wish to define $s_N\colon X_n\to X_{n+1}$ so that the conditions for being almost $N$-good hold whenever they apply; that is, \eqref{eq:d_vs_s} and \eqref{eq:s_vs_s} should hold for $k=N$ except we do not require $d_{N+1}s_N=\id$.
For $x\in X_n$, the equations in \eqref{eq:d_vs_s} for $k=N$, $i\neq N+1$, are $(n+1)$-many equations that together prescribe the restriction of $s_N(x)$ to the horn $\Lambda^{n+1}_{N+1}\subset \Delta^{n+1}$. Therefore we will define $s_N(x)$ as a filler for the horn $\Lambda^{n+1}_{N+1}\to X$ which is defined by the right-hand sides of the relevant equations in \eqref{eq:d_vs_s}. In more detail, we let
\[x_i=\begin{cases}
s_{N-1}d_i(x), &(i<N),\\
x, & (i=N),\\
s_N d_{i-1}(x), & (i>N+1)
\end{cases}
\]
where the operator $s_N$ in the last case acts on $X_{n-1}$ and is given by hypothesis. We claim that
\begin{equation}\label{eq:horn_condition}
d_j(x_i) = d_{i-1}(x_j), \quad (j<i,\; j,i\neq N+1)
\end{equation}
so that the sequence $x_i$ for $i\neq N+1$ defines a horn $\Lambda^{n+1}_{N+1}$ in $X$. The equations \eqref{eq:horn_condition} can be easily verified by hand using the relevant equations of \eqref{eq:d_vs_s}, making a case by case distinction.
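For instance, in the case $j<i<N$ (so that $j<N-1$ and $i-1<N-1$), the computation is
\[
d_j(x_i)=d_js_{N-1}d_i(x)=s_{N-2}d_jd_i(x)=s_{N-2}d_{i-1}d_j(x)=d_{i-1}s_{N-1}d_j(x)=d_{i-1}(x_j),
\]
using \eqref{eq:d_vs_s} in the first and last steps and \eqref{eq:d_vs_d} in the middle; the remaining cases are entirely analogous.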
If $n>N$ (the induction step case), the horn defined in this way is an inner horn so a filler exists because $X$ is a quasi-semicategory. If $n=N$ (the induction beginning case), this is a right horn, but applying \eqref{eq:d_vs_s} iteratively, we see that
\[d_0^N s_N(x) = s_0 d_0^N(x)\]
so the last edge of the horn is in the image of $s_0\colon X_0\to X_1$ and therefore a cartesian morphism by our assumptions. So the horn has a filler in this case again, by definition of being cartesian.
We would like to define $s_N(x)$ as a choice of filler for this horn; however this definition is a little too crude in that we didn't ensure that the restriction of $s_N$ to $A_n$ is as required, nor that the simplicial identities \eqref{eq:s_vs_s} hold. This can be rectified as follows: First, if $x=f(x')$ for some $x'\in A_n$, we have to and do choose $f s_N(x')$ as a filler for the horn in order to make $f$ simplicial. Second, if $x\in X_n$ is of the form $x=s_i(y)$ for some $i<N$, then the equation $s_Ns_i=s_is_{N-1}$ from \eqref{eq:s_vs_s} forces us to choose $s_N(x):=s_i s_{N-1}(y)$ as a filler for the horn.
To complete the proof, we need to show that this rule is well-defined; that is, if $x=s_i(y)=s_j(y')$ or if $x=f(x')=s_i(y)$, all of these choices lead to the same value of $s_N(x)$. To justify this, we use the following Lemma, which we prove at the end of the section.
\begin{lemma}\label{lem:pullback_property}
In an $(N-1)$-good system, and for $i<j<N$, $k<N$, the commutative squares
\[\xymatrix{
X_{n-2} \ar[r]^{s_{j-1}} \ar[d]^{s_i} & X_{n-1} \ar[d]^{s_i}
&& A_{n-1} \ar[r]^f \ar[d]^{s_k} & X_{n-1} \ar[d]^{s_k}\\
X_{n-1} \ar[r]^{s_j} & X_n
&& A_n \ar[r]^f & X_n
}\]
are pull-back squares.
\end{lemma}
Hence, if we can write $x=s_i(y) = s_j(y')$ for $i<j<N$, there exists a $z\in X_{n-2}$ such that $y=s_{j-1}(z)$ and $y'=s_i(z)$. Then we have
\[s_i s_{N-1}(y) = s_i s_{N-1} s_{j-1}(z) = s_j s_{N-1} s_i(z)= s_j s_{N-1} (y')\]
provided $i<j<N$ and the system is $(N-1)$-good, so that both possible definitions of $s_N(x)$ agree. Similarly, if $x=s_i(y)=f(x')$, then there exists $y'\in A_{n-1}$ with $y=f(y')$ and $s_i(y')=x'$ so
\[s_i s_{N-1}(y) = s_i s_{N-1} f(y') = f s_i s_{N-1}(y') = f s_N s_i (y') = f s_N(x') \]
and again the two possible definitions agree.
\end{proof}
Next we come to the second step of our construction.
\begin{lemma}\label{lem:step2}
If $(s_0, \dots, s_N)$ is an almost $N$-good system, then there is a collection of maps $\sigma_N\colon X_n\to X_{n+1}$, $n\geq N$, such that $(s_0, \dots, s_{N-1}, \sigma_N)$ is $N$-good.
\end{lemma}
\begin{proof}
We construct maps $T_N\colon X_n\to X_{n+2}$ for $n\geq N$ such that
\begin{align}
\label{eq:d_vs_T}
d_i T_N & =
\begin{cases}
s_{N-1}^2 d_i, & (i<N),\\
s_N, & (i=N+1, N+2),\\
T_N d_{i-2}, & (i> N+2);
\end{cases}
\\
\label{eq:s_vs_T}
T_N s_i & = s_N^2 s_i, & (i<N).
\end{align}
One should think of the map $T_N$ as a candidate for the double degeneracy $\sigma_N^2$. Indeed, if $s_N$ is already $N$-good, then the operators $T_N:= s_N^2$ satisfy the above properties (plus the equation $s_N=d_N T_N$). On the other hand, if we are given an almost $N$-good system $(s_0,\dots, s_N)$, and maps $T_N$ satisfying \eqref{eq:d_vs_T} and \eqref{eq:s_vs_T}, then by setting $\sigma_N:=d_NT_N$, we obtain an $N$-good system.
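Indeed, setting $\sigma_N:=d_NT_N$, the previously missing identity holds:
\[
d_{N+1}\sigma_N=d_{N+1}d_NT_N=d_Nd_{N+2}T_N=d_Ns_N=\id,
\]
by \eqref{eq:d_vs_d}, then \eqref{eq:d_vs_T}, then the almost $N$-good identity $d_Ns_N=\id$; similarly $d_N\sigma_N=d_Nd_NT_N=d_Nd_{N+1}T_N=d_Ns_N=\id$, and the remaining simplicial identities for $\sigma_N$ are verified in the same way from \eqref{eq:d_vs_T} and \eqref{eq:s_vs_T}.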
The construction of the collection $(T_N)$ is very analogous to the construction in the previous step and is by induction on $n\geq N$. In the case $N=0$, the induction beginning is given by any map $T_0\colon X_0\to X_2$ that sends $x\in X_0$ to a 2-simplex expressing the fact that $s_0(x)$ is $p$-idempotent, where we also assume that on $A_0\subset X_0$, the map is actually given by $s_0^2$. The induction beginning in the other cases and the induction step are by the same construction as follows:
Assume that we have an almost $N$-good system $(s_0, \dots, s_N)$ and maps $T_N\colon X_\ell\to X_{\ell+2}$ for $ \ell\leq n-1$, satisfying the conditions \eqref{eq:d_vs_T} and \eqref{eq:s_vs_T}. We wish to define $T_N\colon X_n\to X_{n+2}$ so that \eqref{eq:d_vs_T} and \eqref{eq:s_vs_T} are again satisfied.
For $x\in X_n$, the ($n+2$ many) equations \eqref{eq:d_vs_T} define a map $\Lambda^{n+2}_N\to X$. In more detail, if we let
\[x_i=\begin{cases}
s_{N-1}^2d_i(x), &(i<N),\\
s_N(x), & (i=N+1, N+2),\\
T_N d_{i-2}(x), & (i>N+2)
\end{cases}
\]
then again a case-by-case calculation shows that the horn equations
\begin{equation}\label{eq:n_horn_condition}
d_j(x_i) = d_{i-1}(x_j), \quad (j<i,\; j,i\neq N)
\end{equation}
hold.
If $N>0$, the horn $\Lambda^{n+2}_N$ is an inner horn which can be filled by an $(n+2)$-simplex we call $T_N(x)$. If $N=0$, then \eqref{eq:d_vs_T} shows that the first edge is $s_0$ of the first vertex, which is cocartesian by assumption. So we can fill in the horn as well to get an element $T_0(x)\in X_{n+2}$.
Again we need to modify this construction in two ways: First, if $x=f(x')$, we choose as filler the element $f s_N^2(x')$ provided by the simplicial set structure of $A$. Second, if $x\in X_n$ is degenerate, then the choice of filler $T_N(x)$ is forced on us by \eqref{eq:s_vs_T}. Again, Lemma \ref{lem:pullback_property} ensures that this is well-defined.
\end{proof}
Lemmas \ref{lem:step1} and \ref{lem:step2} together prove the induction step and therefore Theorem \ref{thm:relative}. We now give the postponed proof of Lemma \ref{lem:pullback_property}. It builds on the following Lemma, which is valid in an arbitrary category and whose proof is an easy exercise.
\begin{lemma}\label{lem:general_pullback_property}
Suppose that in the commutative square
\[\xymatrix{
A \ar[r]^i \ar[d]^f & B \ar[d]^g \\
X \ar[r]^j & Y
}\]
the morphism $i$ is a retract of $j$, and that $j$ is injective. Then the diagram is a pull-back diagram.
\end{lemma}
(Here, being a retract means that there exist morphisms $F\colon X\to A$ and $G\colon Y\to B$ such that $Ff=\id_A$, $Gg=\id_B$, and $iF=Gj$.) Lemma \ref{lem:pullback_property} follows from this result by choosing $d_i$ as vertical retractions.
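For the reader's convenience, here is the exercise: given $u\colon Z\to X$ and $v\colon Z\to B$ with $ju=gv$, set $w:=Fu\colon Z\to A$. Then $iw=iFu=Gju=Ggv=v$, and $j(fw)=giFu=gGju=gGgv=gv=ju$, so that $fw=u$ by injectivity of $j$; and if $w'$ also satisfies $fw'=u$, then $w'=Ffw'=Fu=w$, proving uniqueness.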
\begin{proof}[Proof of the Addendum]
By the Kan condition for the horn $\Lambda^1_0\subset \Delta^1$, any $x\in X_0$ is $d_1$ of some 1-simplex $e$. Then filling in the $(2,2)$-horn
\[\xymatrix{
& x \ar[rd]^e\\
x \ar[rr]^e \ar@{.>}[ru]^f && y
}\]
yields an edge $f\colon x\to x$; filling in the $(3,3)$-horn
\[\xymatrix{
& x \ar[rd]^f \ar[dd]^(.7)e\\
x \ar[ru]^f \ar[rr]^(0.3)f \ar[rd]_e && x \ar[ld]^e\\
& y
}\]
shows that $f$ is idempotent (and an equivalence, as any edge in a Kan semisimplicial set). Thus the correspondence $x\mapsto f$ provides a function $s_0$ as required.
\end{proof}
\section{A generalization}\label{sec:generalization}
The proof above works for semi-simplicial objects in other categories than the category of sets. Indeed, let $\mathcal{C}$ be any category, closed under limits, and provided with a subclass of morphisms called ``cofibrations'', satisfying axioms $\mathbf{A}$ and $\mathbf{B}$ below.
\begin{description}
\item[A] A split-injection is a cofibration.
\end{description}
For a collection of morphisms $X_i\to X$ ($i\in \{1,\dots, N\}$), their ``union'' $\bigcup_{i=1}^N X_i$ is defined to be the colimit of the objects $X_i$ over their ``intersections'' $X_{ij}:= X_i\times_X X_j$, that is, the colimit of the diagram formed by the objects $X_i$ and the objects $X_{ij}$ (for $i<j$), together with the projection maps $X_{ij}\to X_i$ and $X_{ij}\to X_j$. With this notation, the second axiom reads:
\begin{description}
\item[B] If $(c_i\colon X_i\rightarrowtail X)_{i\in \{1,\dots, N\}}$ is a finite family of cofibrations, then their ``union'' exists and the induced map $\bigcup_{i=1}^N X_i\to X$ is a cofibration.
\end{description}
In the last section we studied the case where $\mathcal{C}$ is the category of sets and the cofibrations are the injective maps. In this situation, one easily verifies that the ``union'' $\bigcup_{i=1}^N X_i$ maps injectively into $X$, with image the actual union of the subsets $c_i(X_i)\subset X$, which justifies our notation.
As usual, we call a morphism in $\mathcal{C}$ an ``acyclic fibration'' if it has the right lifting property against all cofibrations. Clearly the collection of acyclic fibrations is closed under compositions and pull-backs. In our previous example, the category of sets, a map is an acyclic fibration if and only if it is surjective.
Let $s\mathcal{C}$ denote the category of semi-simplicial objects in $\mathcal{C}$, and let $X\in s\mathcal{C}$. Since $\mathcal{C}$ is closed under limits, the contravariant functor $X$ extends along the Yoneda embedding $\Delta^{\operatorname{inj}}\to s\matheurm{Set}$, via the formula
\[X(A):=\lim_{\Delta^n\to A} X_n \quad (A\in \matheurm{Set}^{(\Delta^{\operatorname{inj}})^{op}})\]
where the limit is indexed over the category of simplices of $A$. With this definition, the canonical map $X_n\to X(\Delta^n)$ is an isomorphism.
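For example, for the inner horn $\Lambda^2_1\subset\Delta^2$ this formula produces the object of composable pairs,
\[
X(\Lambda^2_1)\cong X_1\times_{d_0,\,X_0,\,d_1}X_1,
\]
the limit being computed over the two edges of the horn and their common vertex.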
\begin{definition}
Let $X,Y\in s\mathcal{C}$ and $T\in \mathcal{C}$.
\begin{enumerate}
\item A semisimplicial map $p\colon X\to Y$ is an \emph{inner fibration} (resp., a \emph{Kan fibration}) if the canonical maps
\[X_n\to X(\Lambda^n_i)\times_{Y(\Lambda^n_i)} Y_n\]
in $\mathcal{C}$ are acyclic fibrations for $0<i<n$ (resp., for $0\leq i\leq n$).
\item Suppose further that $Y$ has a simplicial structure. A map $f\colon T\to X_1$ is \emph{$p$-idempotent} if there exists a map $T\to X_2$ which agrees with $f$ on all three boundaries, and whose image $T\to Y_2$ in $Y$ factors through the degeneracy $Y_0\to Y_2$.
\item A map $f\colon T\to X_1$ is \emph{$p$-cartesian} if for any $n>0$, the canonical map
\[T\times_{X_1} X_n\to T\times_{X_1} X(\Lambda^n_n)\times_{Y(\Lambda^n_n)} Y_n\]
in $\mathcal{C}$ is an acyclic fibration, where $X(\Lambda^n_n)$ maps to $X_1$ by the last edge map. The notion of $p$-cocartesianness is defined dually.
\end{enumerate}
\end{definition}
With these notions, we have the following generalizations of Theorem \ref{thm:relative} and Addendum \ref{addendum}.
\begin{theorem}\label{thm:general}
Let $p\colon X\to Y$ and $f\colon A\to X$ be morphisms in $s\mathcal{C}$ where $p$ is an inner fibration and $f$ an injective cofibration in each semi-simplicial degree, and where $Y$ and $A$ have simplicial structures such that $p\circ f$ is simplicial. Let $s_0\colon X_0\to X_1$ be a $p$-idempotent, $p$-cartesian and $p$-cocartesian morphism in $\mathcal{C}$ which is compatible with the degeneracies $s_0$ on $A$ and $Y$.
Then $s_0\colon X_0\to X_1$ extends to a simplicial structure on $X$ such that $f$ and $p$ are simplicial.
\end{theorem}
\begin{addendum}
If $p$ is a Kan fibration, then a map $s_0\colon X_0\to X_1$ as required in the Theorem always exists, so that a compatible simplicial structure on $X$ exists without further hypotheses.
\end{addendum}
The proof of Theorem \ref{thm:general} is identical to the proof of Theorem \ref{thm:relative}: In terms of this section, the proof of Lemma \ref{lem:step1} constructs a commutative solid square
\[\xymatrix{
B \ar[r] \ar@{>->}[d] & X_{n+1} \ar[d]\\
X_n \ar[r] \ar@{.>}[ru]^{s_N} & X(\Lambda^{n+1}_{N+1})\times_{Y(\Lambda^{n+1}_{N+1})} Y_{n+1}
}\]
where $B$ is the ``union'' of the cofibrations $s_i\colon X_{n-1}\to X_{n}$, $i<N$, and the map $f\colon A_n\to X_{n}$; note that Lemma \ref{lem:pullback_property} precisely identifies the ``intersections'' of $s_i$ with $s_j$ and with $f$; and the calculation following the Lemma shows that the map $B\to X_{n+1}$ is well-defined. Therefore $s_N\colon X_n\to X_{n+1}$ exists by definition of inner fibration (in the case $n>N$) and by definition of $p$-cartesian (in the case $n=N$). By induction this gives rise to an almost $N$-good structure just as in the proof of Theorem \ref{thm:relative}. The same re-writing can be made for Lemma \ref{lem:step2} and the second step of the proof. The proof of the Addendum is completely analogous. Here are two examples.
\subsection{Semi-Segal spaces}
We take $\mathcal{C}$ to be the category of simplicial sets, with the usual notion of cofibration (level-wise injective maps). Then a map is an acyclic fibration in our sense if and only if it is a Kan fibration and a weak equivalence (after realization). We call an object in $s\mathcal{C}$ a semisimplicial space for short.
A map $p$ in $s\mathcal{C}$ is a \emph{Reedy fibration} if for any inclusion $A\subset B$ of semi-simplicial sets, the induced map
\[X(B) \to Y(B)\times_{Y(A)} X(A)\]
is a Kan fibration. The space $X$ is called \emph{Reedy fibrant} if the projection $X\to \{*\}$ is a Reedy fibration.
Let $I_n\subset \Delta^n$ be the semi-simplicial subset spanned by the edges $(i, i+1)$, where $i=0,\dots, n-1$. By definition, $X$ is a \emph{semi-Segal space} if it is Reedy fibrant and if for each $n>1$, the map $X_n\to X({I_n})$ induced by the inclusion $I_n\to \Delta^n$ is a weak equivalence of spaces.
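Unwinding the limit, $X(I_n)$ is the iterated fibre product
\[
X(I_n)\cong \underbrace{X_1\times_{X_0}\cdots\times_{X_0}X_1}_{n \text{ factors}},
\]
since $I_n$ is glued from $n$ edges along $n-1$ vertices; the semi-Segal condition is thus the familiar Segal condition that the maps $X_n\to X_1\times_{X_0}\cdots\times_{X_0}X_1$ be weak equivalences.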
The following is a variation of \cite[3.4]{Joyal-Tierney(2007)}.
\begin{lemma}
A Reedy fibration $p\colon X\to Y$ between semi-Segal spaces is an inner fibration in our sense.
\end{lemma}
\begin{proof}
The map in question is a fibration by the fact that $p$ is a Reedy fibration. To show that it is a weak equivalence, we show that for each inner horn $\Lambda^n_k$, $0<k<n$, in the square
\[\xymatrix{
X_n \ar[d] \ar[r]^p & Y_n \ar[d]\\
X(\Lambda^n_k) \ar[r]^p & Y(\Lambda^n_k)
}\]
the vertical maps are weak equivalences.
Recall that the forgetful functor from simplicial sets to semisimplicial sets has a left adjoint $A\mapsto A^+$ which is an embedding of categories. Let $\mathcal{A}\subset \mathcal{C}$ be the class of injective semi-simplicial maps (that is, injective simplicial maps that are of the form $f^+\colon A^+\to B^+$) such that $f^*\colon Y(B)\to Y(A)$ is a weak equivalence (hence an acyclic fibration). As $Y$ is a semi-Segal space (in particular Reedy fibrant), the class $\mathcal{A}$ contains the spine inclusion $I_n\to \Delta^n$; by \cite[Lemma 3.5]{Joyal-Tierney(2007)} it contains therefore every inner horn inclusion $\Lambda^n_k\to \Delta^n$, too. Thus $Y_n\to Y(\Lambda^n_k)$ is an acyclic fibration. The same argument applies to $X$.
\end{proof}
As a consequence, we deduce from Theorem \ref{thm:general} the following result. (Recall that here ``space'' means ``simplicial set''.)
\begin{theorem}\label{thm:relative_Segal}
Let $p\colon X\to Y$ be an inner fibration of semi-Segal spaces and $f\colon A\to X$ the inclusion of a semisimplicial subspace; assume that $Y$ and $A$ have simplicial structures such that $p\circ f$ is a simplicial map. Let $s_0\colon X_0\to X_1$ be a map, compatible with the degeneracies $s_0$ on $A$ and $Y$, and such that $s_0$ is $p$-idempotent, $p$-cartesian, and $p$-cocartesian.
Then $s_0\colon X_0\to X_1$ extends to a simplicial structure on $X$ such that $f$ and $p$ are simplicial.
\end{theorem}
We close this section by giving a criterion for $p$-(co-)cartesianness. For $\sigma\in X(A)$ and $A\subset B$, we denote by $X(B)/\sigma\subset X(B)$ the subspace of all elements mapping to $\sigma\in X(A)$ under the map induced by the inclusion $A\subset B$.
\begin{lemma}\label{lem:criterion}
For a Reedy fibration $p\colon X\to Y$ of semi-Segal spaces, and $f\colon T\to X_1$, the following are equivalent:
\begin{enumerate}
\item $f$ is $p$-cartesian.
\item For any $t\in T_0$, the composite $\{*\}\xrightarrow{t} T\xrightarrow{f} X_1$ is $p$-cartesian.
\item For any $t\in T_0$, with $e:=f(t)\colon x'\to x$, the following commutative square is a homotopy pull-back:
\[\xymatrix{
X_2/e \ar[rr]^{d_1} \ar[d]^p && X_1/x \ar[d]^p\\
Y_2/p(e) \ar[rr]^{d_1} && Y_1/p(x)
}\]
\end{enumerate}
\end{lemma}
\begin{remark*}
By the Segal condition, the map $d_2\colon X_2/e\to X_1/x'$ is a weak equivalence so the horizontal maps in the diagram may be thought of as ``postcomposition by $e$ and $p(e)$'', respectively.
\end{remark*}
\begin{proof}[Proof of Lemma \ref{lem:criterion}]
(i) implies (ii) because acyclic fibrations are stable under pull-back. For the converse direction, we note that in the map under consideration,
\[T\times_{X_1} X_n\to T\times_{X_1} X(\Lambda^n_n)\times_{Y(\Lambda^n_n)} Y_n\]
both domain and target are Kan fibrations over $T$. Therefore, to show that the map is a weak equivalence, it suffices to test on all fibers over elements of $T_0$. But this is condition (ii).
For the equivalence between (ii) and (iii), we consider the following diagram (for $n>0$):
\begin{equation}\label{eq:comparison_for_criterion}
\xymatrix{
X_{n+1}/e \ar[d]^p \ar[r] & X(\Lambda^{n+1}_{n+1})/e \ar[d]^p \ar[r]^{d_n}& X_n/x \ar[d]^p\\
Y_{n+1}/p(e) \ar[r] & Y(\Lambda^{n+1}_{n+1})/p(e) \ar[r]^{d_n} & Y_n/p(x)
}
\end{equation}
and notice that condition (ii) is equivalent to the left square being a homotopy pull-back, for any $e=f(t)\in X_1$. Now we note that for $n=1$, the right horizontal maps are isomorphisms. Hence, if (ii) holds, then the total square is a homotopy pull-back for $n=1$, that is (iii) holds.
Conversely assume that (iii) holds. We consider the commutative diagram
\[\xymatrix{
X_{n+1}/e \ar[r]^{d_n} \ar[d]^\simeq & X_n/x \ar[d]^\simeq \\
X_{n-1}\times_{X_0} X_2/e \ar[r]^{\id\times d_1}_\simeq & X_{n-1}\times_{X_0} X_1/x
}\]
where the vertical arrows are equivalences by the Segal condition and the lower horizontal map is one by (iii). It follows that the total square in \eqref{eq:comparison_for_criterion} is a homotopy pull-back for all $n>0$.
We show by induction on $n$ that the left square is a homotopy pull-back, too. If $n=1$, then we remarked above that the right horizontal maps are isomorphisms which immediately implies the claim. For the induction step, we note that the inclusion $\Delta^n\subset \Lambda^{n+1}_{n+1}$ of the $n$-th face is obtained by filling in right horns $\Lambda^k_k$ for $k\leq n$, with last edge $(n, n+1)$. (By induction on $k$, fill in all pairs of type $(i_1,\dots, i_k, n)$ and $(i_1, \dots, i_k, n, n+1)$; for each such pair this corresponds to filling in a horn as required.) By the inductive assumption, it follows that the right square is a homotopy pull-back, hence so is the left.
\end{proof}
\subsection{Multi-semisimplicial sets}
We take $\mathcal{C}=s^k\matheurm{Set}$, the category of $k$-fold semisimplicial sets, where a morphism is defined to be a cofibration if it is injective in each multi-semisimplicial level. We say that an object $X$ of $\mathcal{C}$ satisfies the Kan condition if, after writing $s^k\matheurm{Set}= s(s^{k-1}\matheurm{Set})$ by singling out any of the $k$ simplicial directions, any map
\[X_n \to X(\Lambda^n)\]
induced by a horn inclusion $\Lambda^n\subset \Delta^n$ is an acyclic fibration in the sense of this section. (This is equivalent to \cite[Definition 5.2]{McC}.)
\begin{theorem}[{\cite{McC}}]
Any $k$-fold semisimplicial set which satisfies the Kan condition has a $k$-fold simplicial structure.
\end{theorem}
\begin{proof}
We show more generally that any $k$-fold semisimplicial $l$-fold simplicial set $X$, satisfying the Kan condition, has a $(k+l)$-fold simplicial structure. The proof is by induction on $k$, where the induction beginning $k=0$ is trivial. For the induction step, we view $X$ as a semisimplicial object in the category of $(k-1)$-fold semisimplicial $l$-fold simplicial sets. By Theorem \ref{thm:general} and its Addendum, this can be promoted to a simplicial object, corresponding to a $(k-1)$-fold semisimplicial $(l+1)$-fold simplicial set. But this admits a simplicial structure by the induction hypothesis.
\end{proof}
\section{Introduction} \label{sec:Intro}
The semiclassical tunneling method
\cite{Kraus:1994}-\cite{Paddy3:2002}
is an alternative approach to model particle creation by
black holes \cite{Hawk}.
The basic scheme of this method is to compute the imaginary part of
the `particle' action which gives the emission probability from the
event horizon. From the expression of the
emission probability one identifies the
temperature of the radiation. The earliest works in this context
can be found in \cite{Kraus:1994, Kraus:1996}. Following these works
an approach called the null geodesic method was developed
\cite{Wilczek:2000, Parikh2:2004}. There exists also another way
to model black hole evaporation via tunneling called complex path
analysis \cite{Paddy1:1999, Paddy2:2001, Paddy3:2002}, which we discuss here. This method involves writing down, in the semiclassical limit $\hbar \to 0$, a Hamilton-Jacobi equation from the matter equations of motion, treating the horizon as a singularity in the complex plane (which is a simple pole for all known solutions) and then integrating the equation along a complex path across that singularity to obtain an imaginary contribution to the particle action.
Both of these alternative approaches have received great attention during the last few years. It is noteworthy that
since both of
these methods deal only with the near horizon geometry,
they can be
very useful alternatives particularly when the spacetime has no
well defined asymptotic structure or infinities \cite{sb}.
As long as we neglect the backreaction of the matter under consideration, the temperature of the radiation, i.e., the Hawking temperature, should not depend upon the parameters (e.g., mass, spin, and charge) of the particle species. The Smarr formula for black hole mechanics
predicts that this temperature is proportional to the surface gravity
of the event horizon for a stationary black hole with a Killing horizon.
The complex path analysis approach has been successfully
applied to scalar emissions as well as to spinor emissions
separately for a wide class of stationary
black holes giving the expected expressions of Hawking temperatures
that were predicted by the Smarr formula. To tackle the Dirac equation in this approach, the usual method has been employed, i.e., finding a proper representation of the general $\gamma$ matrices in terms of the Minkowskian $\gamma$'s and the metric functions, and then separating variables.
For an exhaustive review and list of references on this
see e.g. \cite{rb}. See also e.g. \cite{Akhmedov:2006pg}-\cite{Belinski:2009bc} for some recent
issues concerning the tunneling approach.
Thus, the universality of the Hawking temperature has been
proved case by case for a wide variety of black holes via the
complex path method. Can we prove this universality from
a more general point of view?
In particular, in this paper we shall show that for the
Dirac spinors we do not need
to work with any
particular representation of the $\gamma$ matrices
in the semiclassical framework.
In this work we wish to point out, in a coordinate
independent way
that in any arbitrary spacetime with any number
of dimensions, the equations of motion for a
Dirac spinor, a vector, spin-$2$ meson
and spin-$\frac{3}{2}$ fields reduce to
Klein-Gordon equations in the semiclassical
limit $\hbar \to 0$ for the usual WKB ansatz.
The equations for a charged Dirac spinor reduce to those of a charged scalar. This clearly shows that at the
semiclassical level all those different equations of motion
of various particle species are equivalent and it is
sufficient to deal with the scalar equation only.
We shall also present, for a stationary spacetime with some
assumed geometrical properties,
a general coordinate independent expression
for the emission probability and the Hawking temperature
which is characterized by the black hole parameters itself (Eq. (\ref{e})).
We further consider some explicit examples to demonstrate
that our formula indeed gives the expected Hawking temperature
in terms of the horizon's surface gravity.
Thus the semiclassical complex path method gives us a way
in which we may treat the different spin fields in an
identical footing, giving the same Hawking temperature
and thereby proving the universality of the Hawking
temperature for stationary black holes from a very
general point of view.
The paper is organized as follows. In the next section
we shall deal with Dirac
spinors (neutral and then charged) to show that the equations
reduce to that of scalars in the semiclassical limit for the WKB
ansatz. In Sect. 3, we shall explicitly expand the
resultant scalar equation in a coordinate independent
way in the near horizon limit for a stationary black hole
with a Killing horizon, and shall
present a general expression that gives
the emission or absorption probabilities.
We shall illustrate the validity of this expression by
taking a few explicit examples.
In Sect. 4, we shall also
demonstrate that similar results hold also for
the vector, massive spin-$2$ and spin-$\frac{3}{2}$ fields.
Finally we shall discuss our results.
We shall take $G=1=c$, but shall retain $\hbar$ throughout.
\section{Reduction of the semiclassical Dirac equation to the Klein-Gordon equation}
Let us then start by considering a spacetime of dimension $n$,
and a metric $g_{ab}$ defined on it, at least in our
region of interest. We consider the Dirac equation
\begin{eqnarray}
i\gamma^a\nabla_a\Psi
=\frac{m}{\hbar}\Psi.
\label{s1}
\end{eqnarray}
$\nabla_a$ is the spin covariant derivative defined by $\nabla_a\Psi:=
\left(\partial_a+\Gamma_a\right)\Psi$, where $\Gamma_a$ are the spin connection matrices. The matrices
$\gamma^a(x)$ are the curved space generalization of the Minkowskian
$\gamma^{(\mu)}$. We expand
$\gamma^a$ in an orthonormal basis,
$\gamma^a=\gamma^{(\mu)}e_{(\mu)}^{a} :
\mu=0,~1,~2,\dots,~(n-1)$. Also,
$g^{ab}e^{(\mu)}_{a}e^{(\nu)}_b=\eta^{(\mu)(\nu)}$. Here the Greek
indices within bracket denote the local Lorentz indices and
$\eta^{(\mu)(\nu)}$ is the inverse metric corresponding to
the $n$-dimensional Minkowski spacetime. The
$\gamma^{(\mu)}$ satisfy the well known anti-commutation relation:
$\left\{\gamma^{(\mu)},~\gamma^{(\nu)}\right\}=2 \eta^{(\mu)(\nu)}
\bf{I}$, where $\bf{I}$ denotes the identity matrix.
The expansion of $\gamma^a$ in terms of
the orthonormal basis $\{e_{(\mu)}^{a}\}$, and the
anti-commutation relation for $\gamma^{(\mu)}$'s give
\begin{eqnarray}
\left\{\gamma^a,~\gamma^b\right\}=2g^{ab}\bf{I}.
\label{s2}
\end{eqnarray}
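Explicitly,
\[
\left\{\gamma^a,~\gamma^b\right\}=e_{(\mu)}^{a}e_{(\nu)}^{b}\left\{\gamma^{(\mu)},~\gamma^{(\nu)}\right\}=2\,\eta^{(\mu)(\nu)}e_{(\mu)}^{a}e_{(\nu)}^{b}\,{\bf{I}}=2g^{ab}\,{\bf{I}},
\]
where the last step uses the completeness relation $g^{ab}=\eta^{(\mu)(\nu)}e_{(\mu)}^{a}e_{(\nu)}^{b}$ of the orthonormal basis.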
Now we square Eq. (\ref{s1}) by acting with $i\gamma^b\nabla_b$
on both sides from the left, producing
\begin{eqnarray}
\frac{1}{2}\left(\gamma^b\gamma^a+\gamma^a\gamma^b\right)
\nabla_b\nabla_a\Psi+
\frac{1}{4}\left(\gamma^b\gamma^a-\gamma^a\gamma^b\right)
\left(\nabla_b\nabla_a-\nabla_a\nabla_b\right)\Psi+
\left(\gamma^b\nabla_b \gamma^a\right)\nabla_a \Psi=
-\frac{m^2}{\hbar^2}\Psi.
\label{s3}
\end{eqnarray}
But the commutator of two covariant derivatives acting
on $\Psi$ is proportional to the Riemann tensor,
$\left(\gamma^b\gamma^a-\gamma^a\gamma^b\right)
\left(\nabla_b\nabla_a-\nabla_a\nabla_b\right)\Psi=
\left(\gamma^a\gamma^b-\gamma^b\gamma^a\right)
R_{abcd}\left(\gamma^c\gamma^d-\gamma^d\gamma^c\right)
\Psi$.
Using this fact and
the anti-commutation relation for $\gamma^a$
(Eq. (\ref{s2})), Eq. (\ref{s3}) becomes
\begin{eqnarray}
\nabla_a\nabla^a\Psi+
\frac{1}{4}\left[\gamma^a,~\gamma^b\right]
R_{abcd}\left[\gamma^c,~\gamma^d\right]\Psi+
\left(\gamma^b\nabla_b \gamma^a\right)\nabla_a \Psi=
-\frac{m^2}{\hbar^2}\Psi.
\label{s4}
\end{eqnarray}
We will look at Eq. (\ref{s4}) semiclassically.
We choose the usual WKB ansatz for a spin-`up' particle
\begin{eqnarray}
\Psi
&=&\left[
\begin{array}{c}
A(x) \\
0 \\
B(x) \\
0\\
\end{array}
\right] e^{\frac{i I(x)}{\hbar}}.
\label{s5}
\end{eqnarray}
and substitute into Eq. (\ref{s4}).
Since we are neglecting backreaction, the
components of the Riemann tensor are independent
of $\hbar$. Then
it is clear that in the semiclassical
limit $\hbar \to 0$,
on the left
hand side only the first term survives, because only this one contains double derivatives and hence contributes at ${\cal {O}}\left(\hbar^{-2}\right)$. The
single derivative terms coming from the Laplacian
will certainly not survive in the semiclassical limit
(which is true for an actual
scalar equation also), but we shall
formally keep the Laplacian $\nabla_a\nabla^a$
intact till later when we shall discuss its expansion
explicitly. Thus in the semiclassical limit, the WKB ansatz (\ref{s5}) implies that Eq. (\ref{s4}) can effectively be represented by two
Klein-Gordon equations for spin-`up' particles
\begin{eqnarray}
\nabla_a\nabla^a\Psi
+\frac{m^2}{\hbar^2}\Psi=0.
\label{s6}
\end{eqnarray}
A similar result holds for a spin-`down' particle also.
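To make the content of Eq. (\ref{s6}) explicit at this order, note that for any component $\psi=A(x)\,e^{\frac{iI(x)}{\hbar}}$ one has
\[
\nabla_a\nabla^a\psi=\left[-\frac{1}{\hbar^2}\,\partial_aI\,\partial^aI+{\cal{O}}\left(\hbar^{-1}\right)\right]\psi,
\]
so that, with the sign conventions of Eq. (\ref{s6}), the leading order yields the Hamilton-Jacobi equation $\partial_aI\,\partial^aI=m^2$, which is exactly what the semiclassical limit of the scalar Klein-Gordon equation gives.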
If we consider
a Dirac particle with a charge $e$
coupled to a gauge field $A_a$, the spin covariant
derivative $\nabla_a$ in Eq. (\ref{s1}) is replaced
by the gauge covariant derivative
$\widetilde{\nabla}_a \equiv \nabla_a-\frac{ie}{\hbar} A_a$
such that the equation of motion becomes
\begin{eqnarray}
i\gamma^a\nabla_a\Psi +\frac{e}{\hbar}\gamma^a A_a\Psi
=\frac{m}{\hbar}\Psi.
\label{s7}
\end{eqnarray}
We now apply from the left
$\left(i\gamma^b\nabla_b+\frac{e}{\hbar}\gamma^b A_b\right)$
on both sides of this equation. Using Eq.s (\ref{s2}) and
(\ref{s4}) we obtain
\begin{eqnarray}
\nabla_a\nabla^a\Psi+
\frac{1}{4}\left[\gamma^a,~\gamma^b\right]
R_{abcd}\left[\gamma^c,~\gamma^d\right]\Psi+
\left(\gamma^b\nabla_b \gamma^a\right)\nabla_a \Psi
-\frac{e^2}{\hbar^2}A_bA^b\Psi +\frac{2ie}{\hbar}A^a
\nabla_a\Psi\nonumber \\
-\frac{ie}{\hbar}\left[\left(\gamma^b\nabla_b \gamma^a\right)
A_a +\frac{1}{4}\left[\gamma^a,~\gamma^b\right]F_{ab}
+\left(\nabla_a A^a\right)
\right]\Psi=-
\frac{m^2}{\hbar^2}\Psi,
\label{s8}
\end{eqnarray}
where $F_{ab}=\nabla_a A_b-\nabla_b A_a$. We
now substitute
the ansatz (Eq. (\ref{s5})) into Eq. (\ref{s8})
and take the semiclassical limit $\hbar \to 0$.
We see that in this
limit Eq. (\ref{s8}) can formally be represented by
\begin{eqnarray}
\nabla_a\nabla^a\Psi
-\frac{e^2}{\hbar^2}A_bA^b\Psi
+\frac{2ie}{\hbar}A^a\nabla_a\Psi+
\frac{m^2}{\hbar^2}\Psi=0,
\label{s9}
\end{eqnarray}
each of
which effectively has the form of the equation of motion of
a charged scalar.
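Explicitly, keeping only the ${\cal{O}}(\hbar^{-2})$ terms of Eq. (\ref{s9}) for the ansatz (\ref{s5}) gives
\[
-\partial_aI\,\partial^aI-e^2A_aA^a-2eA^a\partial_aI+m^2=0,
\quad \mbox{i.e.,} \quad
\left(\partial_aI+eA_a\right)\left(\partial^aI+eA^a\right)=m^2,
\]
which is the Hamilton-Jacobi equation of a charged scalar, written in terms of the gauge invariant momentum $\partial_aI+eA_a$.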
What have we seen so far? We have dealt with
neutral and charged Dirac spinors and have
explicitly shown in a coordinate independent way
that, for the semiclassical WKB ansatz
all those equations of motion are equivalent
to that of scalars
in any arbitrary spacetime of dimension $n$.
So it is clear that the single particle
Hawking radiation will be
identical for Dirac spinors and scalars for any given
black hole.
We shall also show explicitly in Sect. 4
that similar conclusions hold for Proca,
massive spin-$2$ and spin-$\frac{3}{2}$ fields.
But before that
we wish to discuss the explicit expansions
and the near horizon limits of Eq.s
(\ref{s6}), (\ref{s9}) in a stationary
spacetime containing black hole. We shall address
only the charged Dirac spinor (or equivalently, charged scalar,
Eq. (\ref{s9})). The other case will be equivalent to
setting $e=0$ in Eq. (\ref{s9}).
\section{Hawking temperature for a stationary black hole with \\
Killing horizon}
We wish to present in the following a general
coordinate independent expression
for the emission or absorption probability from a
stationary black hole with some assumed
geometrical properties.
Let us first list the definitions and
assumptions we make.
We consider an $n$-dimensional stationary spacetime
containing a black hole with a Killing
horizon ${\cal{H}}$.
We assume that the spacetime
can be foliated into a family of hypersurfaces
$\Sigma$, orthogonal to a vector field $\chi^a$.
The hypersurfaces are spacelike everywhere except
at the horizon (${\cal{H}}$), which is defined to be
an $(n-1)$ dimensional null hypersurface. So, $\chi^a$
is orthogonal to a null hypersurface over ${\cal{H}}$
and hence $\chi^a$ is itself null over ${\cal{H}}$.
Everywhere else $\chi^a$ is timelike.
Since ${\cal{H}}$ is a Killing horizon, the vector field
$\chi^a$ becomes a null Killing vector field, say
$\chi_{\rm{H}}^a$, over ${\cal{H}}$. $\chi^a$ is not
necessarily a Killing field everywhere, but it is
Killing at least over ${\cal{H}}$
%
\begin{eqnarray}
\chi^a\vert_{{\cal{H}}}=\chi_{\rm{H}}^a:~\nabla_{(a}\chi_{{\rm{H}}b)}=0,~ \chi_{\rm{H}}^a\chi_{{\rm{H}}a}=-\beta^2\vert_{{\cal{H}}}=0.
\label{ad1}
\end{eqnarray}
We now write the spacetime metric $g_{ab}$ as
\begin{eqnarray}
g_{ab}=-\beta^{-2}\chi_a\chi_b+\lambda^{-2}R_aR_b +\gamma_{ab},
\label{e1}
\end{eqnarray}
where $R^a$ is a spacelike vector field orthogonal to $\chi^a$,
and $\lambda^2$ is the norm of $R_a$.
$\gamma_{ab}$ is the non-null spacelike portion of the metric
perfectly well behaved on or in an infinitesimal neighbourhood
of the horizon.
Let us denote the Killing fields of this spacetime by
$(\xi_a,~\{\phi^{i}_a\})$, where $i=1,2\dots m$. Let $\xi_a$
be the timelike Killing field and $\{\phi^{i}_a\}$ be the
spacelike Killing field(s). We assume that the hypersurface
orthogonal vector field $\chi^a$ (which
is orthogonal to $\{\phi^{i}_a\}$ and
any other spacelike field),
can be written as a linear
combination of all the Killing fields
\begin{eqnarray}
\chi_a=\xi_a+\alpha^i(x)\phi^i_a,
\label{lm1}
\end{eqnarray}
where repeated indices are summed over and $\{\alpha^i(x)\}$
are smooth functions. Then, using
Killing's equation we have $\nabla_{(a}\chi_{b)}
=\phi^i_{(a}\nabla_{b)}\alpha^i(x)$. Thus we have
\begin{eqnarray}
\chi^a\chi^b\nabla_a\chi_b=-\frac{1}{2}\chi^a\nabla_a\beta^2
=\chi^a\chi^b\phi^i_{a}\nabla_{b}\alpha^i(x)=0.
\label{lm2}
\end{eqnarray}
Eq. (\ref{lm2}) shows that $\nabla_a\beta^2$ is
everywhere orthogonal to $\chi^a$ and hence it is
spacelike when $\chi^a$ is timelike. So, we may choose
$R_a=\nabla_a\beta^2$ in Eq. (\ref{e1}).
To look at the behaviour of $\nabla_a\beta^2$ over
the horizon, we recall that
over the Killing horizon ${\cal{H}}$
\cite{Wald:06,
Gourgoulhon:2005ng}
\begin{eqnarray}
\nabla_a\beta^2=-2\kappa\chi_{{\rm{H}}a},
\label{g1}
\end{eqnarray}
where $\kappa$ is a function. Since by definition
$\chi_{\rm{H}}^a$ is null hypersurface orthogonal
at the horizon, it turns out that $\kappa$ is
a constant over the horizon \cite{Wald:06}.
Eq. (\ref{g1}) shows that $\nabla_a \beta^2$ is
null over ${\cal{H}}$.
However, the choice $R_a=\nabla_a\beta^2$ is not unique; we could have multiplied $
\nabla_a\beta^2$ by
some non-diverging
function over ${\cal{H}}$, even some
positive power of $\beta$. But we shall retain
this choice for convenience.
Let $R$ be the parameter along $R^a$. Then using Eq.
(\ref{g1}) we have over ${{\cal{H}}}$
\begin{eqnarray}
R^a\nabla_a\beta^2=\frac{d\beta^2}{dR}
=-4\kappa^2\beta^2,
\label{g2}
\end{eqnarray}
which implies over ${{\cal{H}}}$
\begin{eqnarray}
\beta^2=e^{-4\kappa^2R}.
\label{g3}
\end{eqnarray}
With the choice of $R^a$ we have made, it is clear
that the metric (\ref{e1}) becomes doubly
degenerate over ${{\cal{H}}}$.
Note that Eq. (\ref{e1}) can readily be
realized, in its doubly degenerate form, for
a static spherically symmetric black hole by employing
the usual $(t,~r_{\star})$ coordinates, where
$r_{\star}$ is the Tortoise coordinate. We shall be more
explicit about $R$ when we shall go into
specific examples.
The assumption of stationarity and Killing horizon
would help us to provide
a meaningful notion of the `particle' energy~\cite{Wald:06}.
For $n>4$, the
uniqueness and other general properties of black
holes
are not very well understood and there may exist
more general
stationary black holes.
However, we shall show below that for known
stationary exact solutions,
those assumptions will be sufficient.
Let us now expand Eq. (\ref{s9}) with the
decomposition (\ref{e1}). The single derivative terms do
not contribute in the $\hbar \to 0$ limit we are concerned
with
and the equation explicitly becomes
\begin{eqnarray}
\lambda^2\left(\chi^a\partial_a I-ef\right)^2
-\beta^2\left(R^a\partial_a I+
eg\right)^2-\left(\beta\lambda\right)^2\left[
\gamma_{ab}\partial^aI\partial^b I
+e^2\gamma_{ab}A^aA^b-
2e\gamma_{ab}A^a\partial^b I
+m^2\right]=0,
\label{g4}
\end{eqnarray}
where $f=-\chi^aA_a$, and $g=R_aA^a$.
Here it is clear that had we multiplied $R^a$
by a function $h(x)$ non-diverging over
${{\cal{H}}}$, we would have multiplied
Eq. (\ref{g4}) only
by an overall factor $h^2(x)$.
Now we shall look at Eq. (\ref{g4}) in the near horizon
limit.
By our
assumption the metric functions $\gamma_{ab}$ are well
behaved over the horizon. So, $\gamma_{ab}A^aA^b$ is
non divergent over ${{\cal{H}}}$.
Also, examples with
$g\neq 0$ seem to be unknown in the literature.
So, we shall set $g=0$ in Eq. (\ref{g4}) and write Eq. (\ref{g4})
in the near horizon limit as
\begin{eqnarray}
\lambda^2\left(\chi^a\partial_a I-ef\right)^2
-\beta^2\left(R^a\partial_a I\right)^2-\left(\beta\lambda\right)^2\left[
\gamma_{ab}\partial^aI\partial^b I-
2e\gamma_{ab}A^a\partial^b I
\right]=0.
\label{g5}
\end{eqnarray}
To further simplify Eq. (\ref{g5}), let us choose an orthogonal
basis $\left\{m^a_{i}\right\}_{i=1}^{n-2}$ for $\gamma_{ab}$. Let
$\theta_i$ be the parameter along each $m^a_{i}$. Let us consider the
first term within the square brackets.
This is basically a sum of the
squares of $(n-2)$
Lie derivatives: $\frac{1}{m_1^2}(\pounds_{m_1}I)^2+
\frac{1}{m_2^2}(\pounds_{m_2}I)^2\dots$, where $m_i^2$
is the norm of each $m^a_{i}$. By our definition, those
norms are non-zero finite over ${\cal{H}}$. Since $I$ is
a scalar those Lie derivatives are basically partial
derivatives : $\pounds_{m_i}I=\partial_{\theta_i}I$.
We shall now check whether the terms within the
square bracket in Eq. (\ref{g5}) are divergent
over ${\cal{H}}$. Let us
suppose that close to ${{\cal{H}}}$, if possible, the following divergence occurs
%
\begin{eqnarray}
\gamma_{ab}\partial^aI\partial^bI=\frac{D(x)}{\beta^2},
\label{g6}
\end{eqnarray}
where $D(x)$ is bounded over or close to
${\cal{H}}$ and independent of
$\beta$ at leading order.
Then Eq. (\ref{g2}) implies
that $D(x)$ is also independent of $R$ over ${\cal{H}}$
%
\begin{eqnarray}
\pounds_{R}D(x)\Big\vert_{{\cal{H}}}=0.
\label{g7}
\end{eqnarray}
Also, by our choice $R_a=\nabla_a\beta^2$, the norm $\lambda^2$ vanishes over ${\cal{H}}$ as ${\cal{O}}(\beta^2)$ (Eq. (\ref{g1})). So the function $D(x)$ is
also independent of $\lambda$ in the leading order
over ${\cal{H}}$.
Since the metric functions $\gamma_{ab}$ are well
behaved over ${\cal{H}}$, the
divergence of $\gamma_{ab}\partial^aI\partial^bI$ arises
from the Lie derivatives $(\partial_{\theta_i}I)^2$.
For simplicity we shall suppose that the divergence
comes from a single Lie derivative which is the $i$-th
one. We can easily generalize our analysis to more than one diverging term.
Let us take near the horizon
\begin{eqnarray}
(\partial_{\theta_i}I)^2=\frac{C_{i}^2(x)}{\beta^2},
\label{g8}
\end{eqnarray}
where $C_{i}^2(x)$ is a non-diverging function
independent of $\beta$ in the leading order over
or close to ${\cal{H}}$,
and is independent of $R$ over
${\cal{H}}$.
The divergence of the second term within the
square bracket in Eq. (\ref{g5})
comes from $(\partial_{\theta_i}I)$ which, by
Eq. (\ref{g8}) is ${\cal{O}}(\beta^{-1})$. So this
term can be neglected with respect to the quadratic
term $(\partial_{\theta_i}I)^2$. Hence
comparing Eq.s (\ref{g6}),
(\ref{g8}) we have $D(x)=\frac{C_{i}^2(x)}{m_i^2}$.
Using Eq. (\ref{g2})
we obtain from Eq. (\ref{g8}) the following divergence over
${\cal{H}}$
\begin{eqnarray}
\frac{\partial^2I}{\partial R\partial{\theta_i}}=\pm\frac{2\kappa^2
C_{i}(x)}{\beta}.
\label{g9}
\end{eqnarray}
On the other hand
we can write Eq. (\ref{g5}) near ${\cal{H}}$ now as
\begin{eqnarray}
\left(\partial_R I\right)^2=\frac{\lambda^2}{\beta^2}\left[\left(\chi^a\partial_a I-ef\right)^2-D(x)\right].
\label{e6}
\end{eqnarray}
We shall take the Lie derivative of Eq. (\ref{e6}) with
respect to $m_i^a$ over ${\cal{H}}$.
By our choice $R_aR^a=\lambda^2=\nabla_a \beta^2
\nabla^a\beta^2$. Also, the function $\kappa$
in Eq. (\ref{g1}) is a constant over ${\cal{H}}$. This means
that $\partial_{\theta_i}\kappa=0$ over ${\cal{H}}$.
Since by our definition the vector
field
$\chi_{\rm{H}}^a$ is Killing
over ${\cal{H}}$, the term
$\left(\chi_{\rm{H}}^a\partial_a I-ef\right)$ is a conserved
quantity,
i.e., a constant \cite{Wald:06}.
We shall regard this term as the conserved effective energy $E$ of the particle.
So, using Eq.s
(\ref{g1}), (\ref{g9})
the Lie derivative of Eq. (\ref{e6})
with respect to $m_i^a$ gives the
following ${\cal{O}}(\beta^{-1})$
divergence over ${\cal{H}}$
\begin{eqnarray}
\partial_{\theta_i}D(x)
=\pm \frac{\lambda}{\beta^2} C_i(x)\left[
E^2-
D(x)\right]^{\frac{1}{2}}.
\label{e8}
\end{eqnarray}
Eq. (\ref{e8}) contradicts the fact that $D(x)$ is independent
of $\beta$, $\lambda$ or $R$ in the leading order over ${\cal{H}}$.
So, Eq. (\ref{g6}) cannot be true.
Similarly we can show that the term $\gamma_{ab}\partial^aI\partial^bI$ cannot be divergent as ${\cal{O}}(\beta^{-k})$ for any $k>2$.
Thus $\beta^2\gamma_{ab}\partial^aI\partial^bI=0$
over the horizon.
With all these, we now
integrate Eq. (\ref{g5}) across the horizon along a complex path
\begin{eqnarray}
I_{\pm}
=\pm\int_{{\cal{H}}}
\frac{\lambda}{\beta}\left(\chi_{\rm{H}}^a\partial_a I-ef\right)
dR,
\label{e}
\end{eqnarray}
where complex integration is understood. The $+(-)$ sign
stands for outgoing (incoming) solution.
Eq. (\ref{e})
gives the emission (absorption)
probability for a
stationary black hole satisfying the assumptions we have made.
In order to verify the validity of
Eq. (\ref{e}), at this point
we need some particular metrics.
We shall find out the vector fields
$\chi_{\rm{H}}^a$ and $R^a$, and then
compute $I_{\pm}$ from
Eq. (\ref{e}).
Let us start with four dimensions by considering
the charged Kerr (i.e., Kerr-Newman) black hole
\begin{eqnarray}
ds^2=-\frac{\Delta-a^2\sin^2\theta}{\Sigma}dt^2
-\frac{2a\sin^2\theta\left(r^2+a^2-\Delta\right)}{\Sigma}
dtd\phi&+& \frac{\left(r^2+a^2\right)^2-
\Delta a^2 \sin^2\theta}{\Sigma}\sin^2\theta d\phi^2 \nonumber \\
&+&\frac{\Sigma}{\Delta}dr^2+\Sigma d\theta^2,
\label{e10}
\end{eqnarray}
where $\Sigma=r^2+a^2\cos^2\theta$ and $\Delta=r^2+a^2+Q^2-2Mr$; $a$ and $Q$ are the parameters specifying
rotation and charge respectively. $\Delta=0$ defines the horizon
($r_{\rm{H}}$). The gauge field of this solution is
$A_a=-\frac{Qr}{\Sigma}\left[(dt)_a-a\sin^2\theta
(d\phi)_a\right]$.
We first define
$\chi^a=(\partial_{t})^a-\frac{g_{t\phi}}{g_{\phi\phi}}
(\partial_{\phi})^a$, such that $\chi_a(\partial_{\phi})^a=0$
everywhere.
Near the horizon we have $\chi_a\chi^a=-\beta^2\approx-\frac{\Delta \Sigma}{\left(r^2+a^2
\right)^2-\Delta a^2 \sin^2\theta}\leq 0$. So, $\beta^2=0$ over the
horizon which implies $\chi^a$ becomes null over the horizon and timelike
outside it.
Over the horizon $\chi^a$ becomes,
$\chi_{\rm{H}}^a=(\partial_{t})^a-\frac{g_{t\phi}}{g_{\phi\phi}}
(r_{\rm{H}})(\partial_{\phi})^a=(\partial_{t})^a+
\frac{a}{r_{\rm{H}}^2+a^2}(\partial_{\phi})^a
$, which is Killing and null. Thus we have specified
the required vector field $\chi^a$ which becomes null
and Killing over the horizon.
Next we need to find out $R^a$ and the parameter
$R$ along it. Using the expression $\chi^a=(\partial_{t})^a-\frac{g_{t\phi}}{g_{\phi\phi}}(\partial_{\phi})^a$, we have $\chi^a\nabla_a\beta^2=0$
everywhere. So we can
let $R_a=\nabla_a\beta^2$. Then
using the expressions for $\beta^2$ and the metric functions
(Eq. (\ref{e10})) we have near the horizon
\begin{eqnarray}
R^a\nabla_a\beta^2=\frac{d\beta^2}{dR}=\nabla_a\beta^2\nabla^a\beta^2
=\frac{\Delta \Sigma}{(r^2+a^2)^4}\Delta^{\prime2}+ {\cal{O}}(\Delta^2),
\label{e9}
\end{eqnarray}
where the prime denotes derivative with respect to $r$.
Thus we have
found out the norm $\lambda^2(=\nabla_a\beta^2\nabla^a\beta^2)$ of
the vector field $R^a$ which becomes null over the horizon.
Also, Eq. (\ref{e9}) gives near the horizon
\begin{eqnarray}
R=\int\frac{(r^2+a^2)^2d\Delta}{\Delta\Delta^{\prime2}}.
\label{null}
\end{eqnarray}
Thus we have specified the coordinate or the parameter $R$
along $R_a$. Note that Eq. (\ref{null}) implies that near
the horizon, choosing the vector field $R_a=\nabla_a\beta^2$
means a coordinate transformation $r\to R$ in the
metric (\ref{e10}).
The components of the gauge field $A_a$ on the horizon are given
by $A_{a}\chi_{\rm{H}}^a=-\frac{Q r_{\rm{H}}}{r_{\rm{H}}^2+a^2}$,
and $A_a(\partial_{\phi})^a=\frac{Qr_{\rm{H}}a^2\sin^2\theta}
{\left(r_{\rm{H}}^2+a^2\cos^2\theta\right)\left(r_{\rm{H}}^2+
a^2\right)}$. The near horizon contribution comes only from
the first one.
Substituting the near horizon norms
$\chi_a\chi^a=-\beta^2\approx-\frac{\Delta \Sigma}{\left(r^2+a^2\right)^2}$, $R^aR_a=\lambda^2=\frac{\Delta \Sigma}{(r^2+a^2)^4}\Delta^{\prime2}$, and
$dR=\frac{(r^2+a^2)^2d\Delta}{\Delta\Delta^{\prime2}}$
into Eq. (\ref{e}) we have
\begin{eqnarray}
I_{\pm}
=\pm\int_{{\cal{H}}}
\left(\chi_{\rm{H}}^a\partial_a I-ef\right)\frac{r^2+a^2}{\Delta}
dr,
\label{e11}
\end{eqnarray}
where $f=-A_{a}\chi_{\rm{H}}^a=-
\frac{Q r_{\rm{H}}}{r_{\rm{H}}^2+a^2}$.
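The collapse of the measure in Eq. (\ref{e}) can be made explicit here: near the horizon
\[
\frac{\lambda}{\beta}\,dR=\frac{\Delta^{\prime}}{r^2+a^2}\cdot\frac{(r^2+a^2)^2\,d\Delta}{\Delta\,\Delta^{\prime 2}}=\frac{(r^2+a^2)}{\Delta}\,dr,
\]
using $d\Delta=\Delta^{\prime}dr$, which is precisely how the coordinate independent expression (\ref{e}) reduces to Eq. (\ref{e11}).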
Eq. (\ref{e11}) was first obtained in
\cite{Kerner:2008qv, Li:2008zra} by explicitly
solving the semiclassical
Dirac equation by the method of separation of variables.
The emission (absorption) probabilities are given by $\sim \Big \vert e^{\frac{iI_{\pm}(r_{\rm{H}})}{\hbar}}\Big\vert^2$ \cite{Paddy1:1999}. We shall not go into the details
of the complexification of the `path',
the choice of contours and explicit evaluation of
Eq. (\ref{e11}). We refer the reader to
\cite{Paddy1:1999, Kerner:2008qv,
Li:2008zra} for this.
Explicit evaluation of Eq. (\ref{e11}) and the emission
($P_{\rm{E}}$) or
absorption ($P_{\rm{A}}$)
probabilities give the desired
temperature of the emission
from the exponential
behaviour of $\frac{P_{\rm{E}}}{P_{\rm{A}}}$.
The Hawking temperature is found to be $T_{\rm{H}}=\frac{\hbar\kappa_{\rm{H}}}{2\pi}$, where $\kappa_{\rm{H}}$ is the surface gravity of the event horizon.
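For completeness, we sketch the outcome of this evaluation. Near the horizon $\Delta(r)\approx\Delta^{\prime}(r_{\rm{H}})(r-r_{\rm{H}})$, so the integrand of Eq. (\ref{e11}) has a simple pole at $r=r_{\rm{H}}$; deforming the contour around this pole as prescribed in \cite{Paddy1:1999} gives
\[
{\rm{Im}}~I_{\pm}=\pm\frac{\pi E\left(r_{\rm{H}}^2+a^2\right)}{\Delta^{\prime}(r_{\rm{H}})},
\qquad
\frac{P_{\rm{E}}}{P_{\rm{A}}}=e^{-\frac{4\,{\rm{Im}}~I_{+}}{\hbar}}=e^{-\frac{2\pi E}{\hbar\kappa_{\rm{H}}}},
\]
with $\kappa_{\rm{H}}=\frac{\Delta^{\prime}(r_{\rm{H}})}{2\left(r_{\rm{H}}^2+a^2\right)}=\frac{r_{\rm{H}}-M}{r_{\rm{H}}^2+a^2}$, from which the Boltzmann factor identifies $T_{\rm{H}}=\frac{\hbar\kappa_{\rm{H}}}{2\pi}$.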
After this, we shall consider some examples from higher dimensions.
First, we consider the non-extremal rotating charged black hole solution
of five dimensional minimal supergravity with two different
rotation parameters in the Boyer-Lindquist
coordinates \cite{Chong:2005hr},
\begin{eqnarray}
ds^2 = &-&\left[\frac{\Delta_{\theta}\left(1+g^2r^2\right)}{\Sigma_a\Sigma_b}-
\frac{\Delta_{\theta}^2\left(2m\rho^2-q^2+2abqg^2\rho^2
\right)}{\rho^4\Sigma_a^2\Sigma_b^2}
\right]dt^2+\frac{\rho^2}{\Delta_r}dr^2+\frac{\rho^2}{\Delta_\theta}
d\theta^2 \nonumber\\
&+& \left[\frac{\left(r^2+a^2\right)\sin^2\theta}{\Sigma_a}+
\frac{a^2 \left(2m\rho^2-q^2\right)\sin^4\theta +2abq\rho^2\sin^4\theta }{\rho^4 \Sigma_a^2}\right] d\phi^2 \nonumber\\
&+&\left[\frac{\left(r^2+b^2\right)\cos^2\theta}{\Sigma_b}+
\frac{b^2 \left(2m\rho^2-q^2\right)\cos^4\theta+2abq\rho^2\cos^4\theta }
{\rho^4 \Sigma_b^2}\right] d\psi^2\nonumber\\
&-&\frac{2\Delta_{\theta}\sin^2\theta\left[a\left(2m\rho^2-q^2\right)
+ bq\rho^2\left(1+a^2g^2\right) \right]}
{\rho^4\Sigma_a^2\Sigma_b}dtd\phi \nonumber\\
&-&\frac{2\Delta_{\theta}\cos^2\theta\left[b\left(2m\rho^2-q^2\right)
+ aq\rho^2\left(1+b^2g^2\right) \right]}
{\rho^4\Sigma_a\Sigma_b^2}
dtd\psi\nonumber\\
&+&\frac{2\sin^2\theta\cos^2\theta\left[ab\left(2m\rho^2-q^2\right)
+ q\rho^2\left(a^2+b^2\right) \right]}
{\rho^4\Sigma_a\Sigma_b}
d\phi d\psi
\label{e12}
\end{eqnarray}
where $\rho^2=\left(r^2+a^2\cos^2\theta+b^2\sin^2\theta\right)$,
$\Delta_{\theta}=\left(1-a^2g^2\cos^2\theta-b^2g^2\sin^2\theta
\right)$, $\Sigma_a=(1-a^2g^2)$, $\Sigma_b=(1-b^2g^2)$ and
$\Delta_r=\left[\frac{(r^2+a^2)(r^2+b^2)
(1+g^2r^2)+q^2+2abq}{r^2}-2M\right]$.
The black hole event horizon $(r_{\rm{H}})$ is
given by $\Delta_r(r_{\rm{H}})=0$. The parameters
$(M,~a,~b,~q)$ specify respectively
the mass, angular momenta and the charge of the black hole. $g$ is
a real positive constant. The gauge field corresponding
to the charge $q$ is given by $A_a=
\frac{\sqrt{3}q}{\rho^2}\left(\frac{\Delta_{\theta}}{\Sigma_a
\Sigma_b}(dt)_a-\frac{a\sin^2\theta}{\Sigma_a}(d\phi)_a
-\frac{b\cos^2\theta}{\Sigma_b}(d\psi)_a\right)$.
The angular velocities of the comoving observers on the horizon
are given by \cite{Li:2010zzd}
\begin{eqnarray}
\Omega_{\phi}=- \frac{\left\{g_{t\phi}g_{\psi\psi}-g_{t\psi}g_{\phi\psi}\right\}}
{\left\{g_{\phi\phi}g_{\psi\psi}-(g_{\psi\phi})^2\right\}}
\Bigg\vert_{r=r_{\rm{H}}}
=\frac{a(r_{\rm{H}}^2+b^2)(1+g^2r_{\rm{H}}^2)+bq}
{(r_{\rm{H}}^2+a^2)(r_{\rm{H}}^2+b^2)+abq}, \nonumber\\
\Omega_{\psi}=-\frac{\left\{g_{t\psi}g_{\phi\phi}-g_{t\phi}g_{\phi\psi}\right\}}
{\left\{g_{\phi\phi}g_{\psi\psi}-(g_{\psi\phi})^2\right\}}
\Bigg\vert_{r=r_{\rm{H}}}=
\frac{b(r_{\rm{H}}^2+a^2)(1+g^2r_{\rm{H}}^2)+aq}
{(r_{\rm{H}}^2+a^2)(r_{\rm{H}}^2+b^2)+abq}.
\label{e13}
\end{eqnarray}
We note that the vector field
\begin{eqnarray}
\chi^a
=(\partial_{t})^a-\frac{\left\{g_{t\phi}g_{\psi\psi}-g_{t\psi}g_{\phi\psi}\right\}}
{\left\{g_{\phi\phi}g_{\psi\psi}-(g_{\psi\phi})^2\right\}}
(\partial_{\phi})^a
-\frac{\left\{g_{t\psi}g_{\phi\phi}-g_{t\phi}g_{\phi\psi}\right\}}
{\left\{g_{\phi\phi}g_{\psi\psi}-(g_{\psi\phi})^2\right\}}(\partial_{\psi})^a
\label{5d1}
\end{eqnarray}
is orthogonal to $(\partial_{\phi})^a$ and $(\partial_{\psi})^a$ everywhere.
Also, the near horizon norm of $\chi^a$ is
$\chi^a\chi_a
=-\beta^2=-\frac{\rho^2r^4\Delta_r} {\left[(r^2+a^2)(r^2+b^2)+abq
\right]^2}+{\cal{O}}({\Delta_r^2})$. Thus $\chi^a$ becomes null over
the horizon.
Also,
Eq. (\ref{e13}) shows that
$\chi^a$ becomes a Killing field $\chi_{\rm{H}}^a$
over the horizon, where
\begin{eqnarray}
\chi_{\rm{H}}^a
=(\partial_{t})^a+\Omega_{\phi}(\partial_{\phi})^a
+\Omega_{\psi}(\partial_{\psi})^a.
\label{e14}
\end{eqnarray}
So, we have specified the
required vector field $\chi^a$ which becomes
null and Killing over the horizon.
Also, in exactly the same manner as for the Kerr-Newman metric, we can specify
the other null vector field $R^a$, its norm $\lambda^2$,
and the coordinate $R$ for the metric (\ref{e12}). Choosing
$R_a=\nabla_a\beta^2$ we have near the horizon
\begin{eqnarray}
R_aR^a=\lambda^2=\nabla_a\beta^2\nabla^a\beta^2=R^a\nabla_a\beta^2
=\frac{\Delta_r\rho^2r^4}
{\left[(r^2+a^2)(r^2+b^2)+abq
\right]^4}\left(r^2\Delta_r\right)^{\prime 2},
\label{5d2}
\end{eqnarray}
which becomes null over the horizon. Also, near the horizon
the coordinate $R$ along $R^a$ is given by
\begin{eqnarray}
R=\int \frac{\left[(r^2+a^2)(r^2+b^2)+abq
\right]^2 d(r^2\Delta_r)}{(r^2\Delta_r)(r^2\Delta_r)^{\prime 2}}.
\label{5d3}
\end{eqnarray}
The gauge field $A_a$ has three components:
$(A_a\chi_{\rm{H}}^a,~A_a(\partial_{\phi})^a,~
A_a(\partial_{\psi})^a)$, of which the near horizon
contribution comes only from $A_a\chi_{\rm{H}}^a$.
Substituting the near horizon norms $\beta^2=
\frac{\rho^2r^4\Delta_r} {\left[(r^2+a^2)(r^2+b^2)+abq
\right]^2}$, $\lambda^2=\frac{\Delta_r\rho^2r^4}
{\left[(r^2+a^2)(r^2+b^2)+abq
\right]^4}\left(r^2\Delta_r\right)^{\prime 2}$
and $dR=\frac{\left[(r^2+a^2)(r^2+b^2)+abq
\right]^2 d(r^2\Delta_r)}{(r^2\Delta_r)(r^2\Delta_r)^{\prime 2}}$
into Eq. (\ref{e}) we have
\begin{eqnarray}
I_{\pm}
=\pm\int_{{\cal{H}}}
\left(\chi_{\rm{H}}^a\partial_a I-ef\right)\frac{\left[
(r^2+a^2)(r^2+b^2)+abq\right]}
{r^2\Delta_r}dr,
\label{e15}
\end{eqnarray}
where $f=-A_a\chi_{\rm{H}}^a=-\frac{\sqrt{3}q r_{\rm{H}}^2}{(r_{\rm{H}}^2+a^2)(r_{\rm{H}}^2+b^2)+abq}$.
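As in the four dimensional case, the measure in Eq. (\ref{e}) collapses near the horizon:
\[
\frac{\lambda}{\beta}\,dR=\frac{\left(r^2\Delta_r\right)^{\prime}}{\left[(r^2+a^2)(r^2+b^2)+abq\right]}\cdot\frac{\left[(r^2+a^2)(r^2+b^2)+abq\right]^2\,d\left(r^2\Delta_r\right)}{\left(r^2\Delta_r\right)\left(r^2\Delta_r\right)^{\prime 2}}=\frac{\left[(r^2+a^2)(r^2+b^2)+abq\right]}{r^2\Delta_r}\,dr,
\]
which is how Eq. (\ref{e}) reduces to Eq. (\ref{e15}).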
Eq. (\ref{e15}) was first obtained in \cite{Li:2010zzd}
by explicit solution of
the semiclassical Dirac equation by method of separation
of variables.
Complex integration of Eq. (\ref{e15}) across the horizon
and computation of the emission (absorption) probabilities
give the expected Hawking temperature in terms of the Killing
horizon's surface gravity \cite{Chong:2005hr, Li:2010zzd}.
It can be easily verified using the same methods
as above
that Eq. (\ref{e}) also applies
well and recovers the desired results for
the $5$ dimensional stationary solutions with
Killing horizons like
Kerr-G\"{o}del black hole \cite{Hashimoto:2003},
squashed Kaluza-Klein black hole
\cite{Kurita:2008mj, Ishihara:2005dp},
a black string \cite{Kurita:2008mj, Horowitz:2002ym},
black hole solutions of $z = 4$ Horava-Lifshitz gravity
\cite{Park:2009zra, Chen:2009bja} and the toroidal black
hole solutions like in \cite{Rinaldi:2002tc}.
Our scheme also applies very easily to an $n$
dimensional generalization of the Kerr black hole
with a single rotation parameter
\cite{Myers:1986un}
%
\begin{eqnarray}
ds^2 &=&-dt^2+(r^2+a^2)\sin^2\theta d\phi^2+\frac{\mu}{
r^{n-5}\Sigma}\left(dt-a\sin^2\theta d\phi\right)^2+
\frac{r^{n-5} \Sigma} {r^{n-5}(r^2+a^2)-\mu}dr^2\nonumber\\
&+& \Sigma d\theta^2+r^2\cos^2\theta d\Omega^{n-4},
\label{e16}
\end{eqnarray}
where the parameters $(\mu,~a)$ represent the mass and angular momentum of the black hole.
$\Sigma=r^2+a^2\cos^2\theta$ and $d\Omega^{n-4}$ represents
the metric over an $(n-4)$ sphere.
Eq. (\ref{e}) applies to a de Sitter horizon also, provided
the assumptions stated at the beginning of this section are
true for that case. Such an example is the Kerr-de Sitter
spacetime. The de Sitter horizon for this spacetime is a
Killing horizon \cite{Gibbons:1977mu}. One can show,
following exactly the similar way as before that all the
other assumptions are valid for this case. Explicit
evaluation of Eq. (\ref{e}) gives the expected thermal
character of the incoming radiation.
\section{Vector, spin-$2$ and spin-$\frac{3}{2}$ fields}
Now we shall show that all the approaches and conclusions made in the
preceding sections
also hold for the Proca, massive spin-$2$ and spin-$\frac{3}{2}$
fields. Let us first consider the equation of motion for a Proca field $A^b$,
\begin{eqnarray}
\nabla_{a}F^{ab} = -\frac{m^{2}}{\hbar^2} A^{b},
\label{v1}
\end{eqnarray}
where $F_{ab}=\nabla_a A_b-\nabla_b A_a$. Eq. (\ref{v1}) can be
written as
\begin{eqnarray}
\nabla_a\nabla^a A_b -R_{b}{}^{a}A_a-
\nabla_b\left(\nabla_a A^a\right)=-\frac{m^{2}}{\hbar^2} A_b.
\label{v2}
\end{eqnarray}
But Eq. (\ref{v1}) implies that $\nabla_a A^a=0$ identically (for $m\neq 0$, as one sees by taking the divergence of both sides). Now let us choose an orthonormal basis $\left\{e_{a}^{(\mu)}\right\}$. We expand the vector field $A_a$ in this basis, $A_b=e_{b}^{(\mu)}A_{(\mu)}$. With this expansion and the fact that $\nabla_a A^a=0$, Eq. (\ref{v2}) becomes
\begin{eqnarray}
e_{b}^{(\mu)}\nabla_a\nabla^a A_{(\mu)}+ A_{(\mu)}\nabla_a\nabla^a
e_{b}^{(\mu)}+2\nabla_a A_{(\mu)}\nabla^a e_{b}^{(\mu)}
-R_{b}{}^{(\mu)}A_{(\mu)}
=-\frac{m^{2}}{\hbar^2}A_{(\mu)}e_{b}^{(\mu)},
\label{v3}
\end{eqnarray}
which, after contracting both sides by $e^{b}_{(\nu)}$, reduces
to
\begin{eqnarray}
\nabla_a\nabla^a A_{(\nu)}+ A_{(\mu)}e^{b}_{(\nu)}
\nabla_a\nabla^a
e_{b}^{(\mu)}+2e^{b}_{(\nu)}\nabla_a A_{(\mu)}\nabla^a
e_{b}^{(\mu)}
-R_{(\nu)}{}^{(\mu)}A_{(\mu)}
=-\frac{m^{2}}{\hbar^2} A_{(\nu)}.
\label{v4}
\end{eqnarray}
We
choose the usual WKB ansatz for each $A_{(\nu)}$ :
$A_{(\nu)}=f_{\nu}(x) e^{\frac{i I(x)}{\hbar}}$, substitute
into Eq. (\ref{v4}), and take the semiclassical limit $\hbar
\to 0$.
Then it immediately turns out that
in the semiclassical limit Eq. (\ref{v4}) can be
effectively represented by $n$
Klein-Gordon equations for the scalars $A_{(\nu)}$
\begin{eqnarray}
\nabla_a\nabla^a A_{(\nu)}
+\frac{m^{2}}{\hbar^2} A_{(\nu)}=0,
\label{v5}
\end{eqnarray}
with $\nu=0,~1,~2,\dots,~(n-1)$. When each of the
Eq.s (\ref{v5}) is
explicitly expanded and the near horizon limit is taken,
we get back Eq. (\ref{e}) with $e=0$.
Next, we turn our attention to the massive spin-$2$ field $\pi_{ab}$
satisfying the Pauli-Fierz equation \cite{pauli}
\begin{eqnarray}
\nabla_c\nabla^c \pi_{ab}+\frac{m^2}{\hbar^2}\pi_{ab}=0,
\label{h1}
\end{eqnarray}
where $\pi_{ab}$ are symmetric tensor fields. As before we
expand $\pi_{ab}$ in an orthonormal basis,
$\pi_{ab}=e^{(\mu)}_a e^{(\nu)}_b\pi_{(\mu)(\nu)}$. In the
semiclassical limit and for the WKB ansatz
Eq. (\ref{h1}) can effectively be represented
by $\frac{n(n+1)}{2}$ Klein-Gordon equations for the
scalars $\pi_{(\mu)(\nu)}$
\begin{eqnarray}
\nabla_c\nabla^c \pi_{(\mu)(\nu)}+
\frac{m^2}{\hbar^2}\pi_{(\mu)(\nu)}=0,
\label{h2}
\end{eqnarray}
and thus similar conclusions hold for this case also.
Finally, we wish to briefly address the spin-$\frac{3}{2}$ fields satisfying the Rarita-Schwinger equation \cite{rarita}. The tunneling phenomenon for this field was addressed in \cite{Yale:2008kx}
for the Kerr black hole by explicitly solving the equations of motion in the near horizon limit.
The Rarita-Schwinger equation in a curved spacetime reads
\begin{eqnarray}
i\gamma^a\nabla_a\Psi_b =\frac{m}{\hbar}\Psi_b,
\label{h3}
\end{eqnarray}
where $\Psi_b \equiv \Psi_b^{(s)}$ is a spinor with $s$ being
the spin index. The $\gamma$'s are matrices (with matrix indices
suppressed) satisfying the anti-commutation relation similar to the
Dirac $\gamma$'s: $\left\{\gamma^a,~\gamma^b \right\}=2g^{ab}\bf{I}$.
The spin-covariant derivative $\nabla$ is defined as $\nabla_a \Psi_b:=\left(\partial_a + \Gamma_a\right)\Psi_b$, where
$\Gamma_a$ are the spin connection matrices (with suppressed matrix indices). Also, $\Psi_b$ satisfies an additional
constraint $\gamma^a\Psi_a=0$.
Due to the similarity of the spin-$\frac{3}{2}$ fields with the Dirac spinors discussed in Sect. 2, we shall apply the same method here to show
that $\Psi_b$ satisfies the Klein-Gordon equation in the semiclassical WKB
framework.
So, we square Eq. (\ref{h3}) by applying $i\gamma^c\nabla_c$ from the
left. A little computation, using the definition of the spin-covariant derivative $\nabla_a$, the anti-commutation relation satisfied by the $\gamma$'s, and the commutativity of the partial derivatives, yields
\begin{eqnarray}
\nabla_a\nabla^a\Psi_b+
\frac{1}{4}\left[\gamma^a,~\gamma^c\right]
\left(\partial_{[a}\Gamma_{c]}+\Gamma_{[a}\Gamma_{c]}\right)\Psi_b+
\left(\gamma^c\nabla_c \gamma^a\right)\nabla_a \Psi_b=
-\frac{m^2}{\hbar^2}\Psi_b.
\label{h4}
\end{eqnarray}
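For clarity, we sketch the essential steps. Applying $i\gamma^c\nabla_c$ to both sides of Eq. (\ref{h3}) and using the product rule gives, schematically,
\begin{eqnarray}
\gamma^c\gamma^a\nabla_c\nabla_a\Psi_b+\left(\gamma^c\nabla_c \gamma^a\right)\nabla_a\Psi_b
=-\frac{m^2}{\hbar^2}\Psi_b.
\nonumber
\end{eqnarray}
The anti-commutation relation splits the first term into a symmetric part, $\frac{1}{2}\left\{\gamma^c,\gamma^a\right\}\nabla_c\nabla_a\Psi_b=\nabla_a\nabla^a\Psi_b$, and an antisymmetric part, $\frac{1}{2}\left[\gamma^c,\gamma^a\right]\nabla_{[c}\nabla_{a]}\Psi_b$, in which the antisymmetrized derivatives, acting through the spin connection, produce the curvature combination $\partial_{[a}\Gamma_{c]}+\Gamma_{[a}\Gamma_{c]}$ of Eq. (\ref{h4}). Since none of the extra terms carries more than one derivative of $\Psi_b$, they contribute at most at $O(\hbar^{-1})$ under the WKB ansatz and drop out against the $O(\hbar^{-2})$ d'Alembertian and mass terms.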
So, as in the previous cases, it immediately follows that for the usual ansatz
\begin{eqnarray}
\Psi_a
&=&\left[
\begin{array}{c}
A_a(x)e^{\frac{i I_1(x)}{\hbar}}\\
B_a(x)e^{\frac{i I_2(x)}{\hbar}} \\
C_a(x)e^{\frac{i I_3(x)}{\hbar}} \\
D_a(x)e^{\frac{i I_4(x)}{\hbar}}\\
\end{array}
\right],
\label{h5}
\end{eqnarray}
Eq. (\ref{h4}) reduces to the Klein-Gordon equations in the semiclassical limit. We can easily generalize this result to a charged spin-$\frac{3}{2}$ particle coupled to a gauge field by replacing the spin-covariant derivative with the gauge spin-covariant derivative. This gives
the charged Klein-Gordon equations.
\section{Discussions}
We now summarize our results. The objective
of this work was to put the complex path approach
for stationary black holes
in a general framework. To do this,
we have dealt with some well-known physical
matter equations and shown, for an arbitrary spacetime
and in a coordinate-independent way, that in the semiclassical
limit the WKB ansatz implies that all those
equations of motion are equivalent to the Klein-Gordon
equation.
We have done this without choosing any particular
basis of the vector fields or the $\gamma$ matrices.
We needed to assume only that a metric $g_{ab}$ can be defined
on the spacetime which guarantees
the existence of the orthonormal basis $\left\{e^{(\mu)}_a
\right\}$ \cite{Wald:06}. So it is clear
that as far as the semiclassical
level is concerned it is sufficient to work only with scalars
for any arbitrary black hole. It also becomes clear from
this that the Hawking temperature is indeed independent of
the particle species we are concerned with.
We further presented a general coordinate-independent
expression for the emission probability from an
arbitrary stationary black hole with some assumed geometrical
properties
(Eq. (\ref{e})). We showed that
finding the emission probability or the Hawking temperature
for such black holes reduces to merely finding
a null coordinate or a null vector field
(which is spacelike outside
the horizon), and the norm of
the timelike vector field which is orthogonal to the
horizon and becomes null and Killing
over the horizon. At this point we can use any specific
metric for
explicit computation and we illustrated the validity of
Eq. (\ref{e}) by taking several examples.
The principal message of this work is the
following. The semiclassical method provides us a
way through which we can treat the equations of motions
of different spin fields and compute the single particle
emission probability or the Hawking temperature
for a stationary black hole
on an identical footing.
\vskip 1cm
\section*{Acknowledgment}
I wish to sincerely acknowledge Amitabha Lahiri
for useful discussions and encouragement. I also thank anonymous referees
for useful comments and questions. This work was supported by a fellowship from my Institution SNBNCBS.
\vskip 1cm
\section{Introduction} \label{sec::Introduction}
Quality assurance with respect to both functional and non-functional quality characteristics of software has become crucial to the success of software products. For example, an extra one-second delay in load time of a storefront page can cause an 11\% reduction in page views and 16\% less customer satisfaction \cite{NS8}. Moreover, banking, retailing, and airline reservation systems, as examples of mission-critical systems, are all required to be resilient against varying conditions affecting their functional performance \cite{weyuker2000experience, brunnert2015performance, grinshpan2012solving}.
Performance, which has also been called ``efficiency'' in the classification schemes of quality characteristics \cite{ISO/IEC,glinz2007non,chung2012non}, generally refers to how well a software system (service) accomplishes the expected functionalities. Performance requirements mainly describe time and resource bound constraints on the behavior of software, which are often expressed in terms of performance metrics such as response time, throughput, and resource utilization.
\textit{\textbf{Performance evaluation.}} Performance modeling and testing are common evaluation approaches to accomplish the associated objectives such as measurement of performance metrics, detection of functional problems emerging under certain performance conditions, and also violations of performance requirements \cite{jiang2015survey}. Performance modeling mainly involves building a model of the software system's behavior using modeling notations such as queueing networks, Markov processes, Petri nets, and simulation models \cite{cortellessa2011model,harchol2013performance,kant1992introduction}. Although models provide helpful insights into the performance behavior of the system, there are also many details of implementation and execution platform that might be ignored in the modeling \cite{denaro2004early}. Moreover, drawing a precise model expressing the performance behavior of the software under different conditions is often difficult.
Performance testing as another family of techniques is intended to achieve the aforementioned objectives by executing the software under the actual conditions.
Verifying robustness of the system in terms of finding performance breaking point is one of the primary purposes of performance testing. A performance breaking point refers to the status of software at
which the system becomes unresponsive or certain performance requirements get violated.
\textit{\textbf{Research challenge.}} Performance testing to find performance breaking points remains a challenge for complex software and execution platforms.
The main open issues for testing approaches are the automated and efficient generation of test cases (test conditions) that accomplish the intended objective. Common approaches for generating the performance test cases, such as using source code analysis \cite{zhang2012compositional}, linear programs and evolutionary algorithms on performance models \cite{zhang2002automated, gu2009search, di2007search} and UML models \cite{garousi2010genetic, garousi2008empirical, garousi2008traffic, costa2012generating, da2011generation}, using use case-based \cite{draheim2006realistic, lutteroth2008modeling}, and behavior-driven techniques \cite{schulz2019behavior,ferme2018declarative, ferme2017towards, walter2016asking}, mainly rely on source code or other artifacts, which might not always be available during the testing.
We propose that machine learning techniques can tackle the aforementioned issues. One category of machine learning algorithms is reinforcement learning (RL), which is mainly intended to train an agent (learner) on how to solve a problem in an environment through being rewarded or punished in a trial and error interaction with the environment. Model-free RL is a subset of RL enabling the learner to explore the environment (the behavior of the software under test (SUT) in an execution environment in our case) and learn the optimal policy, to accomplish the objective (generating performance test cases resulting in an intended performance breaking point in our case) without access to source code and a model of the system. The learner can store the learned policy and is able to replay the learned policy in future situations, which can lead to efficiency improvements.
\textit{\textbf{Goal of the paper.}} Our research goal is represented by the following question:
\textit{How can we adaptively and efficiently generate the performance test cases resulting in the performance breaking points for different software programs without access to the underlying source code and performance models?}
Finding the performance breaking point is a key purpose of robustness analysis, which is of great importance for many types of software systems, particularly in mission- and safety-critical domains \cite{fowler2009mission}. Moreover, the question above is also worth exploring in specific applications, such as resource management (scaling, provisioning and scheduling) for cloud services \cite{jennings2015resource}, performance prediction \cite{venkataraman2016ernest, kolesnikov2019tradeoffs}, and performance analysis of software services in other areas
\cite{morabito2017virtualization, babovic2016web}.
\textit{\textbf{Contribution.}} In this paper, we present the design and experimental evaluation of a self-adaptive fuzzy reinforcement learning-based (SaFReL) performance testing framework. It is intended to efficiently and adaptively generate the (platform-based) performance test conditions leading to the performance breaking point for different software programs with different performance sensitivity to resources (e.g., CPU-, memory-, and disk-intensive programs) without access to source code and performance models. \blue{An early-stage general formulation of the idea
of using RL particularly in performance testing was introduced in our prior work \cite{moghadam2019machine}. The initial formulation introduces a single smart tester agent that uses RL (simple Q-learning) in a two-phase learning together with an initial architecture in the abstract.
This paper extends the initial abstract formulation of the RL-assisted performance testing \cite{moghadam2019machine}. It uses an elaborate learning technique originally inspired by the conference paper by \cite{ibidunmoye2017adaptive}, which presents an adaptive performance (response time) control approach for cloud services using cooperative fuzzy multi-agent reinforcement learning.
However, regarding the distinguishing learning details, the proposed RL-assisted performance testing framework is based on a single smart agent, involves two distinct phases of learning, and benefits from a particular adaptive learning strategy, which plays an important role in the functionality of the agent.
The proposed smart performance testing framework is intended to conduct performance testing to meet a testing objective that is finding an intended performance breaking point. The proposed framework, SaFReL, is a two-phase RL-assisted performance testing agent that is able to learn the efficient generation of performance test cases to meet the testing objective and more importantly replay the learned policy in further similar testing situations.}
\blue{SaFReL assumes two phases of learning: initial and transfer learning. In the initial learning phase, it learns the optimal policy to generate the target performance test cases initially upon observing the behavior of the first SUT. Afterward in the transfer learning, it reuses the learned policy for the SUTs with a performance sensitivity analogous to already observed ones while still keeping the learning running in the long term.
The learning mechanism uses Q-learning augmented by fuzzy logic in one part of the learning to deal with the issue of uncertainty in defining discrete categories over continuous values, as used by \cite{ibidunmoye2017adaptive}. The single light-weight RL tester agent has the capability of transfer learning and reusing knowledge in similar situations. It benefits from an adaptive action selection strategy that adapts the learning to various testing situations and subsequently makes the agent able to act efficiently on various SUTs.}
We demonstrate that SaFReL works adaptively and efficiently on different sets of SUTs, which are either homogeneous or heterogeneous in terms of their performance sensitivity.
Our experiments are based on simulating the performance behavior of 50 instances of 12 well-known programs as the SUTs. Those instances are characterized by various initial amounts of granted resources and different values of response time requirements. We use two evaluation criteria, namely efficiency and adaptivity, to evaluate our approach.
We investigate the efficiency of the approach in generating the test cases that result in reaching the intended performance breaking point and also the behavioral sensitivity of the approach to the learning parameters.
\blue{In particular, SaFReL reaches the intended objective more efficiently compared to a typical stress testing technique, which generates the performance test cases based on changing the conditions, e.g., decreasing the availability of resources, by certain steps in an exploratory way.}
SaFReL leads to reduced cost (in terms of computation time) for performance test case generation by reusing the learned policy upon the SUTs with similar performance sensitivity. Moreover, it adapts its operational strategy to various SUTs with different performance sensitivity effectively while preserving efficiency. To summarize, our contributions in this paper are:
\begin{itemize}
\item A smart performance testing framework (agent) that learns the optimal policy (way) to generate the performance test cases meeting the testing objective without access to source code and models, and reuses the learned policy in further testing cases.
It uses fuzzy RL and an adaptive action selection strategy for the generation of test cases, and implements two phases of learning:
\begin{itemize}
\item Initial learning during which the agent learns the optimal policy for the first time,
\item Transfer learning during which the agent replays the learned policy in similar cases while keeping the learning running in the long term.
\end{itemize}
\item A two-fold experimental evaluation involving performance (efficiency and adaptivity) and sensitivity analysis of the approach.\\
\blue{The evaluation is carried out based on simulating the performance behavior of various SUTs. We use a performance simulation module instead of actually executing SUTs. The main function of the performance simulation module is estimating the performance behavior of SUTs in terms of their response time.}
\end{itemize}
\textit{\textbf{Structure of the paper.}} The rest of the paper is organized as follows: Section \ref{sec::Motivation and Background} discusses the background concepts and motivations for the proposed self-adaptive learning-based approach. Section \ref{sec::System Architecture} presents an overview of the architecture of the proposed testing framework, while the technical details of the constituent parts are described in Sections \ref{sec::Fuzzy State Detection} and \ref{sec::Adaptive Action Selection and Reward}. In Section \ref{sec::Stress Testing using Self-Adaptive}, we explain the functions of the learning phases. Section \ref{sec::Evaluation} reports on the experimental evaluation involving the experiment's setup, and the results of the experimentation. Section \ref{sec::Discussion} discusses the results, the lessons learned during the experimentation, and also the threats to the validity of the results. Section \ref{sec::Related Work} provides a review on the related work, and finally Section \ref{sec::Conclusion} concludes the paper and discusses some future directions.
\section{Motivation and Background}\label{sec::Motivation and Background}
Performance analysis, realized through modeling or testing, is important for performance-critical software systems in various domains.
Anomalies in performance behavior of a software system or violations of performance requirements are generally consequences of the emergence of performance bottlenecks at the system or platform levels \cite{ibidunmoye2015performance, chandola2009anomaly}. A performance bottleneck is a system or resource component limiting the performance of the system and hinders the system from acting as required \cite{gregg2013systems}.
The behavior of a bottleneck component is due to some limitations associated with the component such as saturation and contention. A system or resource component saturation happens upon full utilization of its capacity or when the utilization exceeds a usage threshold \cite{gregg2013systems}. Capacity expresses the maximum available processing power, service (giving) rate, or storage size. Contention occurs when multiple processes contend for accessing a limited number of shared components including resource components like CPU cycles, memory, and disk or software (application) components.
There are various application-, platform- and workload-based causes for the emergence of performance bottlenecks \cite{ibidunmoye2015performance}.
Application-based causes represent issues such as defects in the source code or system architecture faults. Platform-based causes characterize the issues related to hardware resources, operating system, and execution platform. High deviations from the expected workload intensity and similar issues such as workload burstiness are denoted by workload-based causes.
On the other hand, detecting violations of performance requirements and finding performance breaking points are challenging, particularly for complex software systems. To address these challenges, we need to find how to provide critical execution conditions that make the performance bottlenecks emerge. The focus of performance testing in our case is to assess the robustness of the system and find the performance breaking point.
The effects of the internal causes (application/architecture-based ones) could vary, e.g., due to continuous changes and updates of the software during Continuous Integration/Continuous Delivery (CI/CD), and even vary upon different execution platforms and under different workload conditions. Therefore, the complexity of SUT and a variety of affecting factors make it hard to build a precise performance model expressing the effects of all types of factors at play. This is a major barrier motivating the use of model-free learning-based approaches like model-free RL in which the optimal policy for accomplishing the objective could be learned indirectly through interaction with the environment (SUT and the execution platform). In this problem statement,
the testing system learns the optimal policy to achieve the target that is finding an intended performance breaking point, for different types of software without access to a model of the environment. The testing system explores the behavior of the SUT through varying the platform-based (and workload-based in future work) test conditions, stores the learned policy and is able to later reuse the learned policy in similar situations, i.e., other SUTs with similar performance sensitivity to resource restriction. This is the feature of the proposed learning approach that is supposed to lead to a considerable reduction in the testing system's effort, and subsequently saving computation time.
Regarding the aforementioned challenges and strong points of the model-free learning-based approach, we hypothesize that in a CI/CD process based on agile software development, performance engineers and testers can save time and resources by using SaFReL for performance (stress) testing of various releases or variants. SaFReL provides an agile efficient performance test case generation technique (See Section \ref{sec::Evaluation} and Section \ref{sec::Discussion} for efficiency evaluation) while eliminating the need for source code or system model analysis.
\subsection{Reinforcement Learning}\label{sec::Reinforcement Learning}
\blue{Reinforcement learning (RL) \cite{sutton2018reinforcement} is a fundamental category of machine learning algorithms generally intended to find the optimal behavior (way) in decision-making problems. RL is an interactive learning paradigm that is different from the common supervised and unsupervised machine learning algorithms and has been frequently applied to building many self-adaptive smart systems. It involves continuous interaction between the agent (learner) and the environment that is controlled. At each step of the interaction, the agent observes (senses) the \textit{state} of the environment, takes a possible \textit{action} and receives a reinforcement signal as a scalar \textit{reward} from the environment that shows the effectiveness of the applied action to guide the agent towards accomplishing the intended objective. There is no supervisor in RL, and the agent just receives a reward signal. RL basically involves a sequential decision-making process. The RL agent goes through the environment, decides how to behave at each step, and based on optimizing the long-term received reward, learns the optimal way of decision making. }
\blue{The agent actually decides between actions based on the history of its observations. However, considering the whole history of observations is not efficient, therefore, \textit{state} should be formulated as a concise summary of the history including all the required information. Keeping in mind this issue, a related helpful concept to formulate the state as a summary function is the \textit{Markov state}. The states of the environment are Markov by definition. Then, when the environment is fully observable to the agent, the states that the agent observes and uses for making decisions, are Markov too. The environment in our case is the SUT and the execution platform.
The state is modeled in terms of response time and resource utilization improvement. The actions are some operations for modifying/adjusting the available capacity of resources and the objective of the agent is finding an intended performance breaking point. Figure \ref{fig: RL} shows the interaction between the agent and the environment \blue{that is the composition of SUT and execution platform in our case}.}
\blue{There are three main elements in an RL agent: policy, value function, and model. The policy is the behavior function describing what actions the agent takes in a certain state. The value function indicates how good each state and/or action is, in terms of the amount of reward expected upon taking a particular action given a particular state. Finally, the model is the agent's view of the environment and describes what the environment does next, e.g., shows the state transitions of the environment.}
\blue{Model-free RL algorithms are special types of RL that are not intended to build or learn a model of the environment. Instead, they learn the optimal behavior to achieve the intended objective through multiple experiences of interaction with the environment.}
\blue{Temporal Difference (TD) \cite{sutton2018reinforcement} is one of the main types of model-free RL, which is able to learn from the incomplete episodes of the interaction with the environment. Q-learning as a model-free TD learns the optimal policy through learning the optimal value function, i.e., Q-values. It uses an action selection strategy based on a combination of trying out the available actions, namely exploration, and relying on the previously achieved experience to select the highly-valued actions, namely exploitation. It is off-policy, which means that the agent learns the optimal policy regardless of how the agent explores the environment. After learning the optimal policy, in the transfer learning phase, the agent is able to replay the learned policy while keeping the learning running, which implies occasionally exploring the action space and trying out different actions.}
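As a minimal illustration of the mechanics described above, the following Python sketch shows tabular Q-learning with an $\epsilon$-greedy action selection strategy. All names are illustrative and the Q-table is a plain dictionary over state-action pairs; this is a generic sketch, not the implementation used in SaFReL.
\begin{verbatim}
import random

def epsilon_greedy(Q, state, actions, eps):
    # Explore with probability eps; otherwise exploit the learned policy.
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.5):
    # Standard temporal-difference update of the q-value of (s, a).
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    Q[(s, a)] = ((1 - alpha) * Q.get((s, a), 0.0)
                 + alpha * (r + gamma * best_next))
\end{verbatim}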
\begin{figure}[ht]
\centering
\includegraphics[width=.40\textwidth, height=4cm]{RLNew.pdf}
\caption{\blue{Interaction between agent and SUT in RL} }
\label{fig: RL}
\end{figure}
\section{Architecture}\label{sec::System Architecture}
This section provides an overview of the architecture of the proposed smart performance testing framework, SaFReL (see Figure \ref{fig: SaFReL architecture}). The entire interaction of the smart framework with each SUT, as a learning episode, consists of a number of learning trials. The steps of learning in each trial and the components involved in each step are described as follows:
\paragraph{\textit{1. Fuzzy State Detection.}} The fuzzification, fuzzy inference, and rule base components in Figure \ref{fig: SaFReL architecture} are involved in the state detection. The agent uses the values of four quality metrics, 1) response time, and utilization improvements of 2) CPU, 3) memory, and 4) disk, to identify the state of the environment. In other words, the \textit{state} expresses the status of the environment relative to the testing target. In our case, these quality metrics are used to model (represent) the state space of the environment. An ordinary approach for state modeling in RL problems is dividing the state space into multiple mutually exclusive discrete sets. Each set represents a discrete state. At each time, the environment must be in exactly one discrete state. The challenges of such crisp categorization include deciding which value is a suitable threshold for the categories of a metric, and how to treat values on the boundaries between categories. Instead of crisp discrete states, using fuzzy logic and defining fuzzy states can help address these challenges. We use fuzzy classification as a soft labeling technique for presenting the values of the metrics used for modeling the state of the environment. Then, using a fuzzy inference engine and fuzzy rule base, the agent detects the fuzzy state of the environment. More details about the fuzzy state detection of the agent are presented in Section \ref{sec::Fuzzy State Detection}.
\paragraph{\textit{2. Action Selection and Strategy Adaptation.}} After detecting the fuzzy state of the SUT, the agent takes an action. The actions are operations modifying the factors affecting the performance, i.e., the available resource capacity, in the current prototype. The agent selects the action according to an \textit{action selection strategy} that it follows. The action selection strategy determines to what extent the agent should explore and try out the available actions, and to what extent it should rely on the learned policy and select a high-value action that has been tried and assessed before. The role of this strategy is guiding the action selection of the agent throughout the learning and is of importance for the efficiency of the learning. In order to obtain the desired efficiency, a proper trade-off between the exploration of the state action space and exploitation of the previously learned policy is critical.
In our proposed framework, the smart agent is augmented by a \textit{strategy adaptation} characteristic, as a meta-learning feature responsible for dynamically adapting the degree of exploration and exploitation in various situations. This feature makes SaFReL able to detect where it should rely on the previously learned policy and where it should make a change in the strategy to update its policy and adapt to new situations. New situations mean acting on new SUTs that are different from the previously observed ones in terms of performance sensitivity to resources.
\blue{Software programs have different levels of sensitivity to resources. SUTs with different performance sensitivity to resources, e.g., CPU-intensive, memory-intensive, or disk-intensive SUTs, will react to changes in resource availability differently. Therefore, when the agent observes a SUT that is different from the previously observed ones in terms of performance sensitivity, the strategy adaptation tries to guide the agent towards doing more exploration than exploitation. A performance sensitivity indicator showing the sensitivity of SUT to the resources (i.e., being CPU-intensive, memory-intensive or disk-intensive) is an input to the strategy adaptation mechanism (see Figure \ref{fig: SaFReL architecture}).}
The components corresponding to the action selection, the stored experience (learned policy), and the strategy adaptation are shown as yellow components in Figure \ref{fig: SaFReL architecture}. More details about the set of actions and the mechanism of strategy adaptation are described in Section \ref{sec::Adaptive Action Selection and Reward}.
\paragraph{\textit{3. Reward Computation.}} After taking the selected action, the agent receives a reward signal indicating the effectiveness of the applied action to approach the intended performance breaking point. The reward computation component (red block) in Figure \ref{fig: SaFReL architecture} calculates the received reward (see Section \ref{sec::Adaptive Action Selection and Reward}) for the taken actions.
\begin{figure}[ht]
\centering
\includegraphics[width=.99\textwidth]{SaFReLArchitectureNew.pdf}
\caption{\blue{SaFReL architecture} }
\label{fig: SaFReL architecture}
\end{figure}
\section{Fuzzy State Detection} \label{sec::Fuzzy State Detection}
The state space of the environment in our learning problem is modeled by the quality measurements, CPU, memory, and disk resource utilization improvement and response time of the SUT, which is shown in Figure \ref{fig: Fuzzy representation of quality measurements}.
\blue{The learning approach works based on detecting (discrete) states of the system. These states could be typically defined based on classifying the continuous values of the quality measurements that were mentioned above. On the other hand, defining such crisp boundaries on a number of continuous domains is an issue that might involve many uncertainties.
In order to address this issue and preserve the desired precision of the model, fuzzy classification and reasoning is used to specify the states of the system. Therefore, the states of the environment are defined in terms of some fuzzy states and the environment can be in one or more fuzzy states at the same time with different degrees of certainty. The agent detects the state of the system using a fuzzy inference engine and a rule base \cite{kuncheva2008fuzzy, Fuzzyinference} (Figure \ref{fig: SaFReL architecture}). In summary, the step of state detection is done based on making fuzzy inference about the state of the system. The fuzzy state detection consists of three main parts: normalization of the input values (quality measurements), fuzzification of the measurements, and the fuzzy inference to identify the state of the environment. The details of these parts together with the fuzzy rules, fuzzy operators, and the implication method that are used, are described in Section \ref{sec::Fuzzy State Space Modelling}.}
\subsection{State Modeling and Fuzzy Inference} \label{sec::Fuzzy State Space Modelling}
\textbf{\textit{Normalization.}} As described in the previous section, a set of quality measurements, CPU, memory, and disk utilization improvements and response time of the SUT, represents the state of the environment. The values of these measurements are not bounded, so, to simplify the inference and the exploration of the state space, we normalize the values of these parameters to the interval [0, 1] using the following functions:
\begin{equation}\label{eq:1}
{RT_n}=\frac{2}{\pi}\tan^{-1}(\frac{RT_n^\prime}{RT^q})
\end{equation}
\begin{align}\label{eq:2}
{CUI}_n&=\frac{1}{CUI_n^\prime} & MUI_n&=\frac{1}{MUI_n^\prime} & DUI_n&=\frac{1}{DUI_n^\prime}
\end{align}
where $RT_n^\prime$, \(CUI_n^\prime\), \(MUI_n^\prime\), and \(DUI_n^\prime\) are the measured values of the response time, CPU, memory and disk utilization improvements at time step \(n\) respectively, and \(RT^q\) is the response time requirement. \(CUI_n^\prime\), the CPU utilization improvement, is the ratio between the CPU utilization at time step \(n\) and its initial value (at the start of learning), that is, \({CUI_n^\prime}=\frac{CU_n}{CU^i}\); likewise, \({MUI_n^\prime}=\frac{MU_n}{MU^i}\) and \({DUI_n^\prime}=\frac{DU_n}{DU^i}\). Using the normalization function in Eq. \ref{eq:1}, when \({RT_n^\prime}=RT^q\) the normalized response time \(RT_n\) is 0.5; for \({RT_n^\prime}> RT^q\) the normalized values tend toward 1, and for \({RT_n^\prime}< RT^q\) toward 0. The tuple \((CUI_n, MUI_n, DUI_n, RT_n)\) of normalized quality measurements is the input to the fuzzy state detection.
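As a concrete illustration, the normalization step of Eqs. (\ref{eq:1}) and (\ref{eq:2}) can be sketched in Python as follows (variable names are illustrative):
\begin{verbatim}
import math

def normalize(rt, rt_req, cui, mui, dui):
    # Response time: rt == rt_req maps to 0.5, larger deviations
    # approach 1 and smaller ones approach 0.
    rt_n = (2.0 / math.pi) * math.atan(rt / rt_req)
    # Utilization improvements are mapped through their reciprocals.
    return rt_n, 1.0 / cui, 1.0 / mui, 1.0 / dui
\end{verbatim}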
\textbf{\textit{Fuzzification.}} Input fuzzification involves defining fuzzy sets and corresponding membership functions over the values of the quality measurements. A membership function is characterized by a linguistic term. A fuzzy set $L$ is defined as $L=\{(x, \mu_L(x))|\ 0<x\textrm{,}\quad x\in \mathbb{R}\}$ where a membership function $\mu_L(x)$ defines membership degrees of the values as $\mu_L:x\rightarrow[0,1]$. Figure \ref{fig: Fuzzy representation of quality measurements} shows the membership functions defined over the value domains of quality measurements. As shown in Figure \ref{fig: Fuzzy representation of quality measurements}, trapezoidal membership functions are used for \textit{High} and \textit{Low} fuzzy sets and a triangular counterpart for the \textit{Normal} fuzzy set on the response time. In Figure \ref{fig: Fuzzy representation of quality measurements}, where \(RT^q\) is the requirement, a normal (medium) fuzzy set over the values of response time implies a small range around the requirement value as normal response time values. Moreover, in this case the ranges of membership functions were selected empirically and could be updated based on the requirements.
\textbf{\textit{Fuzzy Inference.}} After input fuzzification, inferring the possible states that the environment assumes is directed by the fuzzy rules that have formed based on the domain knowledge.
\textit{Fuzzy Rules.} A fuzzy rule, as shown in Eq. \ref{eq:fuzzy rule}, consists of two parts: antecedent and consequent. The former is a combination of linguistic terms of the input normalized quality measurements and the consequent is a fuzzy set with a membership function showing to what extent the environment is in the associated state.
\begin{equation} \label{eq:fuzzy rule}
\begin{split}
\textrm{Rule 1: } & \textrm{If CUI is High AND MUI is High AND DUI is Low AND}\\
&\textrm{RT is Normal, then State is HHLN}.
\end{split}
\end{equation}
\blue{$Rule~1$ is a sample of the fuzzy rules in the rule base. The rest of the rules are defined similarly based on the fuzzy sets defined over the values of the quality measurements and the combinations of them. Based on the number of fuzzy sets, namely two fuzzy sets, \textit{High} and \textit{Low}, over the value range of each resource utilization improvement and three sets, \textit{High}, \textit{Normal}, and \textit{Low}, over the value range of the response time, we define 24 rules in our rule base to define the fuzzy states of the environment.}
\textit{Fuzzy Operators.} When the antecedents of the rules are made of multiple linguistic terms, which are associated to fuzzy sets, e.g., "High, High, Low and Normal", then fuzzy operators are applied to the antecedent to obtain one number showing the support or activation degree of the rule. Two well-known methods for the fuzzy \(AND\) operator are \(minimum (min)\) and \(product (prod)\). In our case, we use method \(min\) for the fuzzy \(AND\) operation. It shows that given a set of input parameters \(A\), the degree of support for rule \(Ri\) is given as \(\tau_{Ri}=\min\limits_j \mu_L(a_j)\) where \(a_j\) is an input parameter in A and L is its associated fuzzy set in the rule Ri.
\textit{Implication Method.} After obtaining the membership degree for the antecedent, the membership function of the consequent is reshaped using an implication method. There are also two well-known methods for implication process, \(minimum (min)\) and \(product (prod)\), which truncate and scale the membership function of the output fuzzy set respectively. The membership degree of the antecedent is given as input to the implication method. We use method \(min\) as the implication method in our case.
\blue{Finally,
the most effective rule,
the one with the maximum support degree, is selected to determine the final fuzzy state of the environment \({(S_n,\mu_n)}\). In summary, the fuzzy state with the highest likelihood is considered as the state of the system.} Figure \ref{fig: Fuzzy states of SUT} shows a representation of the fuzzy states. Each of them represents one state based on the fuzzy values (linguistic terms) assigned to quality measurements (CPU, memory and disk utilization improvement, and response time). Regarding the presentation of fuzzy states, L, H, and N stand for Low, High, and Normal terms respectively.
\begin{figure}[ht]
\centering
\includegraphics[width=.9\textwidth]{Fuzzysets.pdf}
\caption{Fuzzy representation of quality measurements}
\label{fig: Fuzzy representation of quality measurements}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=.75\textwidth, height=6cm]{Fuzzystates.pdf}
\caption{Fuzzy states of the environment}
\label{fig: Fuzzy states of SUT}
\end{figure}
\section{Adaptive Action Selection and Reward Computation} \label{sec::Adaptive Action Selection and Reward}
\textit{\textbf{Actions.}} In SaFReL, the actions are the operations changing the platform-based factors affecting the performance, i.e., the available resources such as computation (CPU), memory and disk capacity. In the current prototype, the set of actions contains operations reducing the available resource capacity with finely tuned steps, which are as follows:
\begin{equation} \label{eq:AC}
\begin{split}
AC_n=&\{\textrm{no action}\}\ \cup\ \{(CPU_n-y)\ |\ y \in CDF\}\ \cup\ \{(Mem_n-k)\ |\ k \in MDF_n \}\\
&\cup\ \{(Disk_n-k)\ |\ k \in MDF_n\}
\end{split}
\end{equation}
\begin{equation} \label{eq:CDF}
CDF=\{\frac{1}{4},\frac{2}{4},\frac{3}{4},1\}
\end{equation}
\begin{equation} \label{eq:MDF}
MDF_n=\{(x\times\frac{Mem(Disk)_n}{4})\ |\ x \in \{\frac{1}{4},\frac{2}{4},\frac{3}{4},1\}\}
\end{equation}
where \(AC_n\), \(CPU_n\), \(Mem_n\) and \(Disk_n\) represent the set of actions, the current available computation (CPU), memory and disk capacity at time step n respectively. The list of actions is as shown in Table \ref{table: Actions in SaFReL}.\\
\begin{table}[b]
\begin{center}
\caption{Actions in SaFReL}
\begin{tabular}{ |p{6 cm}|p{4cm}|}
\hline
\multicolumn{2}{|c|}{\textbf{\textit{Actions}}} \\
\hline
\textbf{\textit{Operation}} & \textbf{\textit{Decrease}} \\
\hline
Reducing memory / disk capacity & by a factor in \(MDF_n\) \\
\hline
Reducing computation (CPU) capacity & by a factor in CDF\\
\hline
No action & - \\
\hline
\end{tabular}
\label{table: Actions in SaFReL}
\end{center}
\end{table}
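For concreteness, the action set of Eqs. (\ref{eq:AC})--(\ref{eq:MDF}) can be enumerated as in the following Python sketch, where each action names the resource to reduce and the corresponding step size (illustrative names):
\begin{verbatim}
def action_set(mem, disk):
    # Each action reduces one resource by a finely tuned step,
    # or leaves the configuration unchanged ("no action").
    cdf = [0.25, 0.5, 0.75, 1.0]          # CPU steps (cores)
    mdf = [x * mem / 4.0 for x in cdf]    # memory steps
    ddf = [x * disk / 4.0 for x in cdf]   # disk steps
    return ([("no_action", 0.0)]
            + [("cpu", y) for y in cdf]
            + [("mem", k) for k in mdf]
            + [("disk", k) for k in ddf])
\end{verbatim}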
\textit{\textbf{Strategy Adaptation.}} The agent can use different strategies for selecting the actions. $\varepsilon$-greedy with different $\varepsilon$-values and Softmax are well-known methods for action selection in RL algorithms. They are intended to provide a right trade-off between exploration of the state action space and exploitation of the learned policy. In SaFReL, we use $\varepsilon$-greedy as the action selection strategy and the proposed strategy adaptation feature acts as a simple meta-learning algorithm intended to make changes to the $\varepsilon$ value dynamically to make the action selection strategy well-adapted to new situations (new SUTs). Upon observing a SUT instance with a performance sensitivity different from the already observed ones, it adjusts the value of the parameter $\varepsilon$ to direct the agent toward more exploration (setting $\varepsilon$ to higher values). On the other hand, upon interaction with SUT instances that are similar to the previous ones, the parameter $\varepsilon$ is adjusted to increase exploitation (setting $\varepsilon$ to lower values). SaFReL detects the similarity between SUT instances by calculating \textit{cosine similarity} between the performance sensitivity vectors of SUT instances, as shown in Eq. \ref{eq:Similarity}.
\begin{equation} \label{eq:Similarity}
\begin{split}
\textrm{similarity}(k,k-1)&=\frac{SV^k\cdot SV^{k-1}}{\|SV^k\|\,\|SV^{k-1}\|}\\
&=\frac{\sum_{i=1}^{3} {SV_i^{k}SV_i^{k-1}}}{\sqrt{\sum_{i=1}^{3}{(SV_i^{k})}^2}\sqrt{\sum_{i=1}^{3}{(SV_i^{k-1})}^2}}
\end{split}
\end{equation}
where \(SV^k\) represents the sensitivity vector of the \(k^{th}\) SUT instance and \(SV_i^k\) represents the \(i^{th}\) element of vector \(SV^k\). \blue{The sensitivity vector contains the values of the sensitivity indicators of the SUT instance, \(Sen^C\), \(Sen^M\) and \(Sen^D\). The performance sensitivity indicators assume values in the range $[0, 1]$ and represent the sensitivity degree of the SUT to CPU, memory and disk respectively. Their values could be set empirically or even intuitively, and SaFReL uses the approximate estimated similarity to tune the $\varepsilon$ value adaptively (See Section \ref{sec:: Experiments_and_results}).}\\
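For illustration, the similarity computation of Eq. (\ref{eq:Similarity}) together with one possible linear $\varepsilon$-adaptation rule can be sketched as follows; the adaptation rule and its bounds are assumptions made for illustration, since the framework only prescribes that lower similarity should push $\varepsilon$ toward more exploration:
\begin{verbatim}
import math

def cosine_similarity(sv_a, sv_b):
    # Cosine similarity of two performance sensitivity vectors.
    dot = sum(x * y for x, y in zip(sv_a, sv_b))
    na = math.sqrt(sum(x * x for x in sv_a))
    nb = math.sqrt(sum(x * x for x in sv_b))
    return dot / (na * nb)

def adapt_epsilon(similarity, eps_min=0.2, eps_max=0.85):
    # Illustrative rule: a dissimilar SUT pushes epsilon up
    # (more exploration), a similar one pushes it down.
    return eps_max - similarity * (eps_max - eps_min)
\end{verbatim}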
\textit{\textbf{Reward Signal.}} The agent receives a reward signal indicating the effectiveness of the applied action in each learning step to guide the agent toward reaching the intended performance breaking point. We derive a utility function as a weighted linear combination of two functions indicating the response time deviation and resource usage, which is as follows:
\blue{
\begin{equation} \label{eq:reward}
R_n=\beta U_n^r+(1-\beta)U_n^E \end{equation}}
where \(U_n^r\) represents the deviation of response time from the response time requirement, \(U_n^E\) indicates the resource usage, and \(\beta\), \(0\leq\beta\leq1\) is a parameter intended to prioritize different aspects of stress conditions, i.e., response time deviation or limited resource availability. \(U_n^r\) is defined as follows:
\begin{equation} \label{eq:Un}
U_n^r =
\begin{cases}
\text{0,} &\quad RT_n^\prime\leq RT^q\\
\frac{(RT_n^\prime-RT^q)}{(RT^b-RT^q)}, &\quad RT_n^\prime > RT^q\\
\end{cases}
\end{equation}
where $RT_n^\prime$ is the measured response time, \(RT^q\) is the response time requirement and \(RT^b\) is the threshold defining the performance breaking point. \(U_n^E\) represents the resource utilization in the reward signal, and is a weighted combination of the resource utilization values. It is defined using the following equation:
\begin{equation} \label{eq:UE}
U_n^E= Sen^C CUI_n^{\prime}+ Sen^M MUI_n^{\prime} +Sen^D DUI_n^{\prime}
\end{equation}
where \(CUI_n^\prime\), \(MUI_n^\prime\), and \(DUI_n^\prime\) represent CPU, memory and disk utilization improvements respectively, and \(Sen^C\), \(Sen^M\) and \(Sen^D\) are the performance sensitivity indicators of the SUT, and assume values in the range $[0, 1]$.
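A direct transcription of the reward computation (Eqs. (\ref{eq:reward})--(\ref{eq:UE})) into Python might look as follows; the names and the default value of \(\beta\) are illustrative assumptions:
\begin{verbatim}
def reward(rt, rt_req, rt_break, cui, mui, dui, sen, beta=0.5):
    # Response time deviation, zero below the requirement.
    u_r = 0.0 if rt <= rt_req else (rt - rt_req) / (rt_break - rt_req)
    # Sensitivity-weighted resource utilization improvements.
    u_e = sen[0] * cui + sen[1] * mui + sen[2] * dui
    return beta * u_r + (1 - beta) * u_e
\end{verbatim}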
\section{Performance Testing using Self-Adaptive Fuzzy Reinforcement Learning} \label{sec::Stress Testing using Self-Adaptive}
In this section, we describe details of the procedure of SaFReL to generate the performance test cases resulting in reaching the performance breaking points for various types of SUTs. The tester agent learns how to generate the target test cases for different types of software without access to source code or system models. The procedure of SaFReL, which includes initial and transfer learning phases, is as follows:
The agent measures the quality parameters and identifies the state--membership degree pair \((S_n,\mu_n)\) through the fuzzy state detection, where \(S_n\) is the fuzzy state of the environment and \(\mu_n\) indicates the membership degree, i.e., to what extent the environment has assumed that state. Then, according to the action selection strategy, the agent selects one action, \(a_n \in A_n\), based on the previously learned policy or through exploring the state action space. The agent takes the selected action and executes the SUT. In the next step, the agent detects the new state of the SUT, \((S_{n+1},\mu_{n+1})\), and receives a reward signal, \(r_{n+1}\in \mathbb{R}\), indicating the effectiveness of the applied action. After detecting the new state and receiving the reward, it updates the stored experience (learned policy). The whole procedure is repeated until the stopping criterion is met, that is, reaching the performance breaking point \((RT^b)\).
The experience of the agent is defined in terms of the policy that the agent learns. A policy is a mapping between each state and action and specifies the probability of taking action \(a\) in a given state \(s\). The purpose of the agent in the learning is to find a policy that maximizes the expected long-term reward achieved over the further learning trials, which is formulated as follows \cite{sutton2018reinforcement}:
\begin{equation} \label{eq:purposeofRL}
R_n=r_{n+1}+\gamma r_{n+2}+...+\gamma^k r_{n+k+1}= \sum_{k=0}^{\infty} \gamma^k r_{n+k+1}
\end{equation}
where \(\gamma\) is a discount factor specifying to what extent the agent prioritizes future rewards compared to the immediate one. We use Q-learning as a model-free RL algorithm in our framework. In Q-Learning, a utility value \(Q^\pi (s,a)\) is assigned to each pair of state and action, which is defined as follows \cite{sutton2018reinforcement}:
\begin{equation} \label{eq:Q value}
Q^\pi (s,a)=E^\pi [R_n | s_n=s,a_n=a]
\end{equation}
The q-values, \(Q^\pi (s,a)\), form the experience base of the agent, on which the agent relies for the action selection. The q-values are updated incrementally during the learning. Since we use fuzzy state modeling, we include the membership degree of the detected state of the environment, $\mu_n^s$, in the typical update equation of the q-values to take into account the uncertainty associated with the fuzzy state, which is as follows:
\begin{equation} \label{eq:Q updating}
Q(s_n,a_n)=\mu_n^s [(1-\alpha) Q(s_n,a_n)+ \alpha(r_{n+1}+ \gamma \max\limits_{a^{\prime}} Q(s_{n+1},a^{\prime}))]
\end{equation}
where \(\alpha\), \(0 \leq \alpha \leq 1\) is the learning rate, which adjusts to what extent the new utility values affect (overwrite) the previous q-values. Finally, the agent finds the optimal policy to reach the target, which suggests the action maximizing the utility value for a given state \(s\):
\begin{equation} \label{eq:action_from_learnt_policy}
a(s)= \argmax\limits_{a^{\prime}} Q(s,a^{\prime})
\end{equation}
The agent selects the action based on Eq. \ref{eq:action_from_learnt_policy} when it is supposed to exploit the learned policy.
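The membership-weighted update of Eq. (\ref{eq:Q updating}) can be sketched in Python as follows (illustrative names, dictionary-based Q-table):
\begin{verbatim}
def fuzzy_q_update(Q, s, mu, a, r, s_next, actions,
                   alpha=0.1, gamma=0.5):
    # The certainty degree mu of the detected fuzzy state scales
    # the whole updated q-value of the pair (s, a).
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    Q[(s, a)] = mu * ((1 - alpha) * Q.get((s, a), 0.0)
                      + alpha * (r + gamma * best_next))
\end{verbatim}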
SaFReL implements two learning phases: initial and transfer learning.
\textbf{\textit{Initial learning.}} Initial learning occurs during the interaction with the first SUT instance. The initial convergence of the policy takes place upon the initial learning. The agent stores the learned policy (in terms of a table containing q-values, Q-table). It repeats the learning episode multiple times on the first SUT instance to achieve the initial convergence of the policy.
\textbf{\textit{Transfer learning.}} SaFReL goes through the transfer learning phase, after the initial convergence. During this phase, the agent uses the learned policy upon observing SUT instances with similar performance sensitivity to the previously observed ones, while keeping the learning running, i.e., updating the policy upon detecting new SUT instances with different performance sensitivity. Strategy adaptation is used in the transfer learning phase and makes the agent adapt to various SUT instances. Algorithms \ref{Algorithm: SaFReL} and \ref{Algorithm: Fuzzy Q-Learning} present the procedure of SaFReL in both initial learning and transfer learning phases.
\begin{algorithm}[H]
\SetAlgoLined
\caption{SaFReL: Self-adaptive Fuzzy Reinforcement Learning-based Performance Testing}\label{Algorithm: SaFReL}
\textbf{Required:} $S, A, \alpha, \gamma$;\\
\textrm{Initialize q-values,\ \(Q(s,a)= 0\ \forall s \in \mathbb{S},\ \forall a \in \mathbb{A}\ \textrm{and}\ \epsilon=\upsilon,\ 0 < \upsilon <1\)};\\
\textrm{Observe the first SUT instance};\\
\Repeat{initial convergence}{
Fuzzy Q-Learning (with initial action selection strategy, e.g. $\epsilon$-greedy, initialized $\epsilon$)\;
}
\textrm{Store the learnt policy};\\
\textrm{Start the transfer learning phase};\\
\While{true}{
Observe a new SUT instance;\\
Measure the similarity;\\
Apply strategy adaptation to adjust the degree of exploration and exploitation (e.g. tuning parameter $\epsilon$ in $\epsilon$-greedy);\\
Fuzzy Q-Learning with adapted strategy (e.g. new value of $\epsilon$);\\
}
\end{algorithm}
\begin{algorithm}[H]
\SetAlgoLined
\caption{Fuzzy Q-Learning}\label{Algorithm: Fuzzy Q-Learning}
\Repeat{meeting the stopping criteria (reaching performance breaking point) }{
\textrm{1. Detect the fuzzy state-degree pair \((S_n,\mu_n)\) of the SUT};\\
\textrm{2. Select an action using the action selection strategy (e.g. $\epsilon$-greedy: select $a_n= \argmax_{a\in \mathbb{A}} Q(s_n,a)$ with probability (1-$\epsilon$) or a random $a_k$, $a_k \in \mathbb{A}$ with probability $\epsilon$)};\\
3. Take the selected action, execute the SUT;\\
4. Detect the new fuzzy state-degree $(S_{n+1},\mu_{n+1})$ of the environment;\\
5. Receive the reward signal, $R_{n+1}$;\\
6. Update the q-value of the pair of previous state and applied action\\
$Q(s_n,a_n)=\mu_n^s [(1-\alpha) Q(s_n,a_n)+ \alpha(r_{n+1}+ \gamma \max\limits_{a^{\prime}} Q(s_{n+1},a^{\prime}))]$\;
}
\end{algorithm}
\section{Evaluation} \label{sec::Evaluation}
In this section, we present the experimental evaluation of the proposed self-adaptive fuzzy RL-based performance testing framework, SaFReL. We assess the performance of SaFReL, in terms of efficiency in generating the performance test cases and adaptivity to various types of SUT programs, i.e., how well it can adapt its functionality to new cases while preserving its efficiency. \blue{ Therefore, we examine the efficiency of SaFReL (in the transfer learning phase) compared to a typical testing process for this target, which involves generating the performance test cases through changing the availability of the resources based on the defined actions in an exploratory (random) way, which is called \textit{typical stress testing} hereafter.}
We also evaluate the sensitivity of SaFReL to the learning parameters. The goal of the experimental evaluation is to answer the following research questions:
\begin{itemize}
\item RQ1. How efficiently can SaFReL generate the test cases leading to the performance breaking points for different software programs compared to a typical testing procedure?
\item RQ2. How adaptively can SaFReL act on various software programs with different performance sensitivity?
\item RQ3. How is the efficiency of SaFReL affected by changing the learning parameters?
\end{itemize}
The following sub-sections describe the proposed setup for conducting the experiments, the evaluation metrics, and the analysis scenarios designed for answering the above research questions.
\subsection{Experiments Setup} \label{sec:: Experiments Setup}
In this study, we implement the proposed smart testing framework (agent) along with \blue{ a performance simulation module simulating the performance behavior of SUT programs under different execution conditions.} The simulation module receives the resource sensitivity values and based on the amounts of resources demanded initially and the amounts of them granted after taking each action, estimates the program throughput using the following equation proposed by \cite{taheri2016vmbbthrpred}:
\begin{equation} \label{eu_eqn}
Thr_j=\frac{\frac{CPU_j^g}{CPU_j^i}Sen_j^C +\frac{Mem_j^g}{Mem_j^i}Sen_j^M+ \frac{Disk_j^g}{Disk_j^i}Sen_j^D} {Sen_j^C+ Sen_j^M+ Sen_j^D}\times Thr_j^N
\end{equation}
where \(CPU_j^i\), \(Mem_j^i\) and \(Disk_j^i\) are the amounts of CPU, memory and disk resources demanded by program j at the initial state and \(CPU_j^g\), \(Mem_j^g\) and \(Disk_j^g\) are the amounts of resources granted to program j after taking an action, which modifies the resource availability. \(Sen_j^C\), \(Sen_j^M\) and \(Sen_j^D\) represent the CPU, memory and disk sensitivity values of program j, and \(Thr_j^N\) represents the nominal throughput of program j in an isolated, contention-free environment. The response time of the program is calculated as \(RT_j=\frac{1}{Thr_j}\) in the simulation module.
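For concreteness, the simulation step can be sketched in Python as follows, where \texttt{granted}, \texttt{initial}, and \texttt{sen} are the per-resource triples (CPU, memory, disk) used in Eq. (\ref{eu_eqn}) and the names are illustrative:
\begin{verbatim}
def simulate(granted, initial, sen, thr_nominal):
    # Throughput estimate from the ratios of granted to initially
    # demanded resources, weighted by the resource sensitivities.
    ratios = [g / i for g, i in zip(granted, initial)]
    thr = sum(s * x for s, x in zip(sen, ratios)) / sum(sen) * thr_nominal
    return thr, 1.0 / thr   # (throughput, response time RT = 1/Thr)
\end{verbatim}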
Figure \ref{fig: Our implementation structure} presents the implementation structure including SaFReL along with the implemented performance simulation module.\blue{ In our implementation, the performance simulation module simulates the performance behavior of the SUT program and the testing agent interacts with the simulation module to capture the quality measures used for state detection.}
\begin{figure}[ht]
\centering
\includegraphics[width=.98\textwidth]{implementationstructureNew.pdf}
\caption{\blue{Implementation structure} }
\label{fig: Our implementation structure}
\end{figure}
Table \ref{table:Programs and sen values} shows the list of programs and the corresponding resource sensitivity values used in the experimentation, with the table data obtained from \cite{taheri2016vmbbthrpred}. The collection listed in Table \ref{table:Programs and sen values} includes various CPU-intensive, memory-intensive and disk-intensive types of programs and also programs with combined types of resource sensitivity.
The SUTs are instances of the programs listed in Table \ref{table:Programs and sen values} and are characterized with various initial amounts of resources and also different values of response time requirements.
Two analysis scenarios are designed to answer the evaluation research questions. The first one focuses on the efficiency and adaptivity evaluation of the framework on various SUTs. In the second analysis scenario, the sensitivity of the approach to changes of the learning parameters is studied. Efficiency and adaptivity are measured (evaluated) according to the following specification:
\begin{itemize}
\item \textit{Efficiency} is measured in terms of the number of learning trials required by the tester agent to achieve the testing target, which is reaching the intended performance breaking point. The number of learning trials is an indicator of the required computation time to generate the proper test case leading to the performance breaking point.
\item \textit{Adaptivity} is evaluated in terms of the number of additional learning trials (computation time) required to re-adapt the learned policy to new observations for achieving the target.
\end{itemize}
\begin{table}[h!]
\centering
\caption{Programs and the corresponding sensitivity values used for experimental evaluation \cite{taheri2016vmbbthrpred}}
\begin{tabu} to \textwidth{ | X[l] | X[l] | }
\hline
\textbf{Programs} & \textbf{Resource Sensitivity Values (\(Sen^C\), \(Sen^M\) and \(Sen^D\))} \\
\hline
Build-apache & (0.96, 0.04, 0.00) \\
\hline
n-queens & (0.97, 0.00, 0.00) \\
\hline
John-the-ripper &(0.96, 0.00, 0.00)\\
\hline
Apache & (0.97, 0.03, 0.00)\\
\hline
Dcraw & (0.48, 0.04, 0.00)\\
\hline
X264 & (0.41, 0.02, 0.00)\\
\hline
Unpack-linux & (0.18, 0.09, 0.35)\\
\hline
Build-php & (0.97, 0.07, 0.00)\\
\hline
Blogbench & (0.11, 0.81, 0.18)\\
\hline
Bork & (0.00, 0.53, 0.20)\\
\hline
Compress-gzip & (0.00, 0.00, 0.47)\\
\hline
Aio-stress & (0.00, 0.30, 0.80)\\
\hline
\end{tabu}
\label{table:Programs and sen values}
\end{table}
\subsection{Experiments and Results} \label{sec:: Experiments_and_results}
\subsubsection{Efficiency and Adaptivity Analysis}
To answer RQ1 and RQ2, the performance of SaFReL is evaluated based on its efficiency in generating the performance test cases leading to the performance breaking points of different SUTs and its adaptation capability to new SUTs with performance sensitivity different from previously observed ones.
We select two sets of SUT instances: i) one including SUTs similar in the aspect of performance sensitivity to resources, i.e., similar with regard to the primarily demanded resource (homogenous SUTs); and ii) the other set contains SUT instances different in performance sensitivity (heterogeneous SUTs). The SUT instances assume different initial amounts of CPU, memory and disk resources, and response time requirements. The amounts of resources, CPU, memory and disk capacity, were initialized with different values in the range [1, 10] cores, [1, 50] GB, [100, 1000] GB respectively. The response time requirements range from 500 to 3000 ms. The intended performance breaking point for the SUT instances is defined as the point in which the response time exceeds 1.5 times the response time requirement.
In the efficiency analysis, we set the learning parameters, the learning rate and the discount factor, to $0.1$ and $0.5$, respectively. During the analysis, we study how different variants of the $\varepsilon$-greedy action selection strategy affect the efficiency and adaptivity of the approach. We investigate three variants of $\varepsilon$-greedy, with $\varepsilon=0.2$, $\varepsilon=0.5$ and decaying $\varepsilon$, as well as the proposed adaptive $\varepsilon$ selection method.
\textit{Learning setup.} First, we need to set up the initial learning. To choose a proper configuration of the action selection strategy for the initial learning, we evaluate the performance of the different variants of the $\varepsilon$-greedy algorithm in terms of the number of learning trials required for initial convergence (Figure \ref{fig: Efficiency of SaFReL- initial learning}). For the initial convergence, we run the initial learning on the first SUT 100 times, namely 100 learning episodes. \blue{Table \ref{table: Efficiency of SaFReL- initial learning} presents a summarized view of the average number of learning trials during the last 10 episodes, which are considered the values achieved upon convergence of the initial learning.}
\blue{As shown in Figure \ref{fig: Efficiency of SaFReL- initial learning} and Table \ref{table: Efficiency of SaFReL- initial learning}, using $\varepsilon$-greedy with $\varepsilon=0.2$ results in the fastest initial convergence and also leads to the lowest number of trials compared to the other variants of $\varepsilon$-greedy. The number of learning trials starts converging after about 10 episodes, and during the last 10 episodes it converges to approximately 7 trials.}
\begin{figure}[h]
\centering
\includegraphics[width=.98\textwidth, height=9cm]{Initial.pdf}
\caption{Initial convergence of SaFReL in 100 learning episodes during the initial learning }
\label{fig: Efficiency of SaFReL- initial learning}
\end{figure}
\begin{table}[b]
\begin{center}
\caption{\blue{Initial convergence of SaFReL in the initial learning when using different variants of the action selection strategy}}
\begin{tabular}{ | m{4.5cm} | m{1.3cm}| m{1.3cm}|m{1.3cm}| m{1.35cm} | }
\hline
\textbf{}& \multicolumn{4}{|c|}{SaFReL - Initial Learning} \\
\hline
\textbf{\small Action Selection Strategy: $\epsilon$-greedy} & $\epsilon=0.85$ & $\epsilon=0.5$ & $\epsilon=0.2$ & \small decaying $\epsilon$ \\
\hline
\textbf{\blue{\small Number of learning trials (after convergence)}} & $22$ & $21$ & $7$ & $9$ \\
\hline
\end{tabular}
\label{table: Efficiency of SaFReL- initial learning}
\end{center}
\end{table}
Once the initial convergence occurs, SaFReL is ready to act on various SUTs and is expected to reuse the learned policy to meet the intended performance breaking points on further SUT instances, while still keeping the learning running.
The optimal policy learned in the initial learning is not influenced by the action selection strategy used, since Q-learning is an off-policy learning algorithm \cite{sutton2018reinforcement}. This implies that the learner finds the optimal policy independently of how the actions have been selected (the action selection strategy). For the sake of efficiency, we choose the variant that resulted in the fastest convergence.
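For illustration, the following minimal sketch shows a standard off-policy Q-learning update with $\varepsilon$-greedy action selection, using the learning rate and discount factor of our setup; the names and structure are hypothetical and do not reproduce SaFReL's implementation:
\begin{verbatim}
import random

def epsilon_greedy(Q, state, actions, eps):
    # Explore with probability eps, otherwise exploit the learned policy.
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, s, a, reward, s_next, actions, alpha=0.1, gamma=0.5):
    # Off-policy target: max over next actions, independent of how the
    # next action will actually be selected.
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
\end{verbatim}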
In the following sections, we first investigate the efficiency of SaFReL compared to a typical stress testing procedure when acting on homogeneous and heterogeneous sets of SUTs, and then its capability to adapt to new SUTs with different performance sensitivity.
\paragraph{\textit{I. Homogeneous set of SUTs.}} We select CPU-intensive programs and build a homogeneous set of SUT instances for the analysis in this step. We simulate the performance behavior of 50 instances of the CPU-intensive programs Build-apache, n-queens, John-the-ripper, Apache, Dcraw, Build-php and X264, and vary both the initial amounts of granted resources and the response time requirements. Figure \ref{fig: Efficiency of SaFReL-homogeneous set of SUTs-transfer learning} shows the efficiency of SaFReL on a homogeneous set of CPU-intensive SUTs compared to a typical stress testing procedure when using $\varepsilon$-greedy with different values of $\varepsilon$. Table \ref{table: Efficiency of SaFReL-homogeneous set of SUTs-transfer learning} presents the average number of trials/steps for generating the target performance test case with the proposed approach and with the typical testing procedure. As shown in Figure \ref{fig: Efficiency of SaFReL-homogeneous set of SUTs-transfer learning}, SaFReL keeps the number of required trials for $\approx 94\%$ of the SUTs below the average number of steps required in the typical stress testing.
Table \ref{table: improvement-SaFReL-homogeneous set of SUTs-transfer learning} shows the resulting improvement in the average number of trials/steps required to meet the target, which implies a reduction in the required computation time compared to the typical stress testing process.
In the transfer learning, the agent reuses the learned policy to the degree of policy reuse allowed by its action selection strategy. As shown in Table \ref{table: Efficiency of SaFReL-homogeneous set of SUTs-transfer learning}, in the transfer learning the agent performs fewer trials (depending on the degree of allowed policy reuse) to meet the target on new cases, which leads to higher efficiency. According to Table \ref{table: improvement-SaFReL-homogeneous set of SUTs-transfer learning}, on a homogeneous set of SUTs, more policy reuse leads to higher efficiency (a larger computation time improvement).
\begin{figure}[h]
\centering
\includegraphics[width=.98\textwidth, height=9cm]{Homo.pdf}
\caption{Efficiency of SaFReL on a homogeneous set of SUTs in the transfer learning }
\label{fig: Efficiency of SaFReL-homogeneous set of SUTs-transfer learning}
\end{figure}
\begin{table}[h]
\begin{center}
\caption{Average number of trials/steps for generating the target performance test case on the homogeneous set of SUTs}
\begin{tabular}{ | m{3.2cm} | m{1.5cm}| m{1.5cm}|m{1.5cm}| m{2cm} | }
\hline
\textbf{}& \multicolumn{3}{|c|}{SaFReL with $\epsilon$-greedy}& \\
\hline
\textbf{\small Approach} & $\epsilon=0.5$ & \small decaying $\epsilon$ & $\epsilon=0.2$ & Typical stress testing \\
\hline
\textbf{\small Average number of trials/steps} & $10$ & $10$ & $7$ & $12$ \\
\hline
\end{tabular}
\label{table: Efficiency of SaFReL-homogeneous set of SUTs-transfer learning}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\caption{Computation time improvement on the homogeneous set of SUTs}
\begin{tabular}{ | m{5.9cm} | m{1.45cm}| m{1.45cm}| m{1.45cm} | }
\hline
\textbf{}& \multicolumn{3}{|c|}{SaFReL} \\
\hline
\textbf{\small Action Selection Strategy: $\epsilon$-greedy} & $\epsilon=0.5$ & \small decaying $\epsilon$ & $\epsilon=0.2$ \\
\hline
\textbf{\small Improvement in the number of trials} & $16\%$ & $16\%$ & $42\%$ \\
\hline
\end{tabular}
\label{table: improvement-SaFReL-homogeneous set of SUTs-transfer learning}
\end{center}
\end{table}
\paragraph{\textit{II. Heterogeneous set of SUTs.}} In this part of the analysis, to complete the answer to RQ1 and also answer RQ2, we examine the efficiency and adaptivity of SaFReL during the transfer learning on a heterogeneous set of SUTs including various CPU-intensive, memory-intensive and disk-intensive ones. We simulate the performance behavior of 50 SUT instances from the list of programs in Table \ref{table:Programs and sen values}. We evaluate the efficiency of SaFReL on the heterogeneous set of SUTs compared to the typical stress testing procedure when using $\varepsilon$-greedy with $\varepsilon=0.2$, $0.5$ and decaying $\varepsilon$ (Figure \ref{fig:Efficiency of SaFReL-heterogeneous set of SUTs-typical epsilon}). As shown in Figure \ref{fig:Efficiency of SaFReL-heterogeneous set of SUTs-typical epsilon}, the transfer learning algorithm with a typical configuration of the action selection strategy, such as $\varepsilon=0.2$, $0.5$ or decaying $\varepsilon$, which imposes a certain degree of policy reuse based on the value of $\varepsilon$, does not work well. Not only does it fail to outperform the typical stress testing, it even degrades slightly for some values of $\varepsilon$. When the smart agent acts on a heterogeneous set of SUTs, blindly replaying the learned policy (i.e., just based on the value of $\varepsilon$) is not effective, and the tester agent needs to know where it should reuse the policy and where it requires more exploration to update the policy.
\begin{figure}[h]
\centering
\includegraphics[width=.98\textwidth, height=9cm]{Hetro1.pdf}
\caption{Efficiency of SaFReL on a heterogeneous set of SUTs regarding the use of typical configurations of $\epsilon$-greedy}
\label{fig:Efficiency of SaFReL-heterogeneous set of SUTs-typical epsilon}
\end{figure}
As described in Section \ref{sec::Adaptive Action Selection and Reward}, to solve this issue and improve the performance of SaFReL when acting on a heterogeneous set of SUTs, it is augmented with a simple meta-learning feature enabling it to detect the heterogeneity of the SUT instances and adjust the value of the parameter $\varepsilon$ adaptively. In general, when the smart tester agent observes a SUT instance that differs from the previously observed ones with respect to performance sensitivity, it changes the action selection strategy towards more exploration; upon detecting a SUT instance with the same performance sensitivity as the previous ones, it makes the action selection strategy strive for more exploitation. As illustrated in Section \ref{sec::Adaptive Action Selection and Reward}, the strategy adaptation module, which fulfills this function, measures the similarity between SUTs at two levels of observation and, based on the measured values, adjusts the value of the parameter $\varepsilon$. The threshold values of the similarity measures and the adjustments of the parameter $\varepsilon$ used in the experimental analysis are described in Algorithm \ref{Algorithm:Adaptive selection}.
\begin{algorithm}[H]
\caption{Adaptive $\epsilon$ selection}\label{Algorithm:Adaptive selection}
\begin{algorithmic}
\IF{$similarity_{k,k-1}\geq 0.8$}
\IF{$similarity_{k,k-2}\geq 0.8$}
\STATE $\epsilon\gets 0.2$
\ELSE
\STATE $\epsilon\gets 0.5$
\ENDIF
\ELSIF{$similarity_{k,k-1}< 0.8$}
\STATE $\epsilon\gets 0.5$
\ENDIF
\end{algorithmic}
\end{algorithm}
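For readability, Algorithm \ref{Algorithm:Adaptive selection} can be restated as the following sketch, assuming the two similarity measures have already been computed by the strategy adaptation module:
\begin{verbatim}
def adaptive_epsilon(sim_prev, sim_prev2, threshold=0.8):
    # More exploitation (eps = 0.2) only when the current SUT is similar
    # to both of the two previously observed ones; otherwise more
    # exploration (eps = 0.5).
    if sim_prev >= threshold and sim_prev2 >= threshold:
        return 0.2
    return 0.5
\end{verbatim}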
Figure \ref{fig:Efficiency of SaFReL-heterogeneous set of SUTs-adaptive} shows the efficiency of SaFReL when using similarity detection and the adaptive $\varepsilon$-greedy action selection strategy on a heterogeneous set of SUTs. With adaptive $\varepsilon$ selection, SaFReL achieves a considerable improvement and is able to keep the number of trials required to reach the target on approximately $82\%$ of the SUTs below the corresponding average value of the typical stress testing. Meanwhile, the overall average number of learning trials is lower than in the typical stress testing procedure.
Table \ref{table: Efficiency of SaFReL-heterogeneous set of SUTs-transfer learning} presents the average number of trials/steps for generating the target performance test case with SaFReL and with the typical stress testing when acting on a heterogeneous set of SUTs. Table \ref{table: improvement-SaFReL-heterogeneous set of SUTs-transfer learning} shows the corresponding improvement in computation time.
\begin{table}[h]
\begin{center}
\caption{Average number of trials/steps for generating the target performance test case on the heterogeneous set of SUTs}
\begin{tabular}{ | m{2.3cm} | m{1.4cm}| m{1.4cm}|m{1.4cm}|m{1.4cm}| m{1.5cm} | }
\hline
\textbf{}& \multicolumn{4}{|c|}{SaFReL with $\epsilon$-greedy}& \\
\hline
\textbf{\small Approach} & $\epsilon=0.5$ & \small decaying $\epsilon$ & $\epsilon=0.2$ & adaptive $\epsilon$ & Typical stress testing \\
\hline
\textbf{\small Average number of trials/steps} & $18$ & $17$ & $18$ & $11$ & $16$ \\
\hline
\end{tabular}
\label{table: Efficiency of SaFReL-heterogeneous set of SUTs-transfer learning}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\caption{Computation time improvement on the heterogeneous set of SUTs}
\begin{tabular}{ | m{4cm} | m{1.45cm}| m{1.45cm}| m{1.45cm}|m{1.45cm}| }
\hline
\textbf{}& \multicolumn{4}{|c|}{SaFReL} \\
\hline
\textbf{\small Action Selection Strategy: $\epsilon$-greedy} & $\epsilon=0.5$ & \small decaying $\epsilon$ & $\epsilon=0.2$ & adaptive $\epsilon$ \\
\hline
\textbf{\small Improvement in the number of trials} & No & No & No & $31\%$ \\
\hline
\end{tabular}
\label{table: improvement-SaFReL-heterogeneous set of SUTs-transfer learning}
\end{center}
\end{table}
To answer RQ2, we investigate the adaptivity of SaFReL on the heterogeneous set of SUTs when using different variants of the action selection strategy, including adaptive $\varepsilon$ selection (Figure \ref{fig:Adaptivity of SaFReL-heterogeneous set of SUTs}). As shown in Figure \ref{fig:Adaptivity of SaFReL-heterogeneous set of SUTs}, the number of required learning trials versus the detected similarity is used to depict how adaptively SaFReL can act on a heterogeneous set of SUTs under different configurations of $\varepsilon$. It shows that SaFReL with adaptive $\varepsilon$ is able to adapt to changing situations, e.g., a mixed heterogeneous set of SUTs. In other words, on around $75\%$ of the SUTs that are completely different from the previous ones (i.e., with $similarity_{k,k-1} < 0.8$), it still keeps the number of trials required to meet the target below the average value of the typical stress testing. This implies that it acts adaptively, reusing the policy wherever it is useful and doing more exploration wherever required.
\begin{figure}[h]
\centering
\includegraphics[width=.78\textwidth, height=5cm]{Hetro2.pdf}
\caption{Efficiency of SaFReL on a heterogeneous set of SUTs regarding the use of adaptive $\epsilon$-greedy action selection strategy }
\label{fig:Efficiency of SaFReL-heterogeneous set of SUTs-adaptive}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.98\textwidth, height=9cm]{adaptivity.pdf}
\caption{Adaptivity of SaFReL on a heterogeneous set of SUTs regarding the use of different variants of action selection strategy}
\label{fig:Adaptivity of SaFReL-heterogeneous set of SUTs}
\end{figure}
\subsubsection{Sensitivity Analysis}
To answer RQ3, we study the impact of the learning parameters, namely the learning rate ($\alpha$) and the discount factor ($\gamma$), on the efficiency of SaFReL on both homogeneous and heterogeneous sets of SUTs. To conduct the sensitivity analysis, we run two sets of experiments that involve changing one learning parameter while keeping the other one constant. For the experiments on the homogeneous set of SUTs, we use $\varepsilon$-greedy with $\varepsilon=0.2$ as the best-suited variant of the action selection strategy with respect to the results of the efficiency analysis (see Figure \ref{fig: Efficiency of SaFReL-homogeneous set of SUTs-transfer learning}), and on the heterogeneous set of SUTs, we use adaptive $\varepsilon$ selection (see Figure \ref{fig:Efficiency of SaFReL-heterogeneous set of SUTs-adaptive}). During the sensitivity analysis experiments, to study the impact of changes in the learning rate, we set the discount factor to 0.5; while examining the impact of changes in the discount factor, we keep the learning rate fixed at 0.1. Figure \ref{fig:Sensitivity of SaFReL-homogeneous set of SUTs} shows the sensitivity of SaFReL to changes in the learning rate and discount factor when acting on a homogeneous set of SUTs (CPU-intensive). Figure \ref{fig:Sensitivity of SaFReL-heterogeneous set of SUTs} depicts the results of the sensitivity analysis of SaFReL on a heterogeneous set of SUTs.
\begin{figure}[h]
\includegraphics[width=0.98\textwidth, height=3.5cm]{Sen-homo.pdf}
\label{fig:Sensitivity of SaFReL-homogeneous set of SUTs-DiscountFact}
\begin{center}
\begin{tabular}{ |p{2.7cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|}
\hline
\multicolumn{6}{|c|}{\small Average Efficiency of SaFReL with $\epsilon=0.2$, Discount factor $\gamma=0.5$} \\
& \small $\alpha=0.1$ &\small $\alpha=0.3$& \small $\alpha=0.5$ & \small $\alpha=0.7$ & \small $\alpha=0.9$ \\
\hline
\small Average number of learning trials & 9 & 5& 7& 7& 17\\
\hline
\hline
\end{tabular}
\begin{tabular}{ |p{2.7cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|}
\hline
\multicolumn{6}{|c|}{\small Average Efficiency of SaFReL with $\epsilon=0.2$, Learning rate $\alpha=0.1$} \\
& \small $\gamma=0.1$ &\small $\gamma=0.3$& \small $\gamma=0.5$ & \small $\gamma=0.7$ & \small $\gamma=0.9$ \\
\hline
\small Average number of learning trials & 11 & 12 & 9 & 8 & 11\\
\hline
\end{tabular}
\caption{Sensitivity of SaFReL to learning rate and discount factor on the homogeneous set of SUTs}
\label{fig:Sensitivity of SaFReL-homogeneous set of SUTs}
\end{center}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.98\textwidth, height=3.5cm]{Sen-hetro.pdf}
\label{fig:Sensitivity of SaFReL-heterogeneous set of SUTs-DiscountFact}
\begin{center}
\begin{tabular}{ |p{2.7cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|}
\hline
\multicolumn{6}{|c|}{\small Average Efficiency of SaFReL with adaptive $\epsilon$, Discount factor $\gamma=0.5$} \\
& \small $\alpha=0.1$ &\small $\alpha=0.3$& \small $\alpha=0.5$ & \small $\alpha=0.7$ & \small $\alpha=0.9$ \\
\hline
\small Average number of learning trials & 15& 14& 15& 14& 13\\
\hline
\hline
\end{tabular}
\begin{tabular}{ |p{2.7cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|}
\hline
\multicolumn{6}{|c|}{\small Average Efficiency of SaFReL with adaptive $\epsilon$, Learning rate $\alpha=0.1$} \\
& \small $\gamma=0.1$ &\small $\gamma=0.3$& \small $\gamma=0.5$ & \small $\gamma=0.7$ & \small $\gamma=0.9$ \\
\hline
\small Average number of learning trials & 15& 15& 15& 16& 15\\
\hline
\end{tabular}
\caption{Sensitivity of SaFReL to learning rate and discount factor on the heterogeneous set of SUTs}
\label{fig:Sensitivity of SaFReL-heterogeneous set of SUTs}
\end{center}
\end{figure}
\section{Discussion} \label{sec::Discussion}
\subsection{\blue{Efficiency, Adaptivity and Sensitivity Analysis}} \textbf{RQ1:} In multiple experiments, we studied the efficiency of SaFReL compared to a typical stress testing procedure, on both homogeneous and heterogeneous sets of SUTs, using different action selection strategies.
The results of the experiments on a set of 50 CPU-intensive SUT instances as a homogeneous set of SUTs (Figure \ref{fig: Efficiency of SaFReL-homogeneous set of SUTs-transfer learning} and Tables \ref{table: Efficiency of SaFReL-homogeneous set of SUTs-transfer learning} and \ref{table: improvement-SaFReL-homogeneous set of SUTs-transfer learning}) show that using $\varepsilon$-greedy with $\varepsilon=0.2$ as the action selection strategy in the transfer learning leads to the desired efficiency and an improvement in computation time (around $42\%$) compared to the typical stress testing. It makes SaFReL rely more on reusing the learned policy, which results in computation time savings. The similarity between the performance sensitivities of the SUTs in a homogeneous set makes the policy reuse strategy successful in this type of testing situation.
Furthermore, we studied the efficiency of SaFReL on a heterogeneous set of 50 SUTs containing various CPU-intensive, memory-intensive and disk-intensive ones. The results of the analysis illustrate that choosing an action selection strategy without considering the heterogeneity among the SUTs (e.g., using the typical variants of $\varepsilon$-greedy) does not lead to the desired efficiency compared to the typical stress testing (see Figure \ref{fig:Efficiency of SaFReL-heterogeneous set of SUTs-typical epsilon} and Tables \ref{table: Efficiency of SaFReL-heterogeneous set of SUTs-transfer learning} and \ref{table: improvement-SaFReL-heterogeneous set of SUTs-transfer learning}).
Then, we augmented our fuzzy RL-based approach with an adaptive action selection strategy, i.e., a heterogeneity-aware strategy for adjusting the value of $\varepsilon$. It measures the similarity between the performance sensitivities of the SUTs and adjusts the $\varepsilon$ parameter accordingly.
As shown in Figure \ref{fig:Efficiency of SaFReL-heterogeneous set of SUTs-adaptive}, using the adaptive $\varepsilon$-greedy strategy addressed the issue and led to an efficient generation of the target performance test cases and a computation time improvement (around $31\%$). It enables the agent to reuse the learned policy according to the conditions, meaning that it uses the learned policy wherever it is useful and does more exploration wherever required.
\textbf{RQ2:} In the last part of the efficiency and adaptivity analysis, we extended our study by measuring the adaptivity of SaFReL when performing on a heterogeneous set of SUTs. As shown in Figure \ref{fig:Adaptivity of SaFReL-heterogeneous set of SUTs}, with the adaptive $\varepsilon$-greedy strategy, SaFReL is able to adapt to changing testing situations while preserving efficiency.
\textbf{RQ3:} The results of the sensitivity analysis experiments on the homogeneous set of SUTs show that setting the learning rate to lower values such as 0.1 leads to better efficiency. Furthermore, regarding the sensitivity of SaFReL to the discount factor on a homogeneous set of SUTs, the experimental results show that lower values of the discount factor are suitable choices for the desired operation.
However, the results of the sensitivity analysis on the heterogeneous set of SUTs do not show a considerable effect of the learning parameters on the average efficiency of SaFReL when it acts on a heterogeneous set of SUTs using the adaptive $\varepsilon$-greedy strategy.
\subsection{Lessons Learned}
The experimental evaluation of SaFReL shows how machine learning can guide performance testing towards automation and take one step further towards autonomy. Common approaches for generating performance test cases mostly rely on source code or system models, but such development artifacts might not always be available. Moreover, drawing a precise model of a complex system that predicts the state of the system under given performance-related conditions requires a solid endeavor. This makes room for machine learning, particularly model-free learning techniques. Model-free RL is a machine learning technique that enables the learner to explore the environment (the behavior of the SUT on the execution platform in this case) and learn the optimal policy to accomplish the objective (finding the intended performance breaking point in this case) without having a model of the system. The learner stores the learned policy and is able to replay it in further suitable situations. This important characteristic of RL reduces the effort of the learner to accomplish the objective in further cases and consequently leads to improved efficiency.
\blue{Therefore, the main features that lead SaFReL to outperform an exploratory (search-based) technique are the capability of storing knowledge during exploration and reusing that knowledge in suitable situations, and the possibility of selective and adaptive control over exploration and exploitation.}
In general, automation, reduction of computation time and cost, and less dependency on source code and models are profound strengths of the proposed RL-assisted performance testing.
Regarding applicability, according to the aforementioned strengths and the results of the experimental evaluation, the proposed approach could be beneficial for performance testing of software variants in software product lines, of evolving software in continuous integration/delivery processes, and for performance regression testing.
\textit{Changes in Future Trends.} With the emergence of serverless architecture, which incorporates third-party backend services (BaaS) and/or runs the server-side logic in stateless containers that are fully managed by providers (FaaS), a slight shift in the objectives of performance evaluation, particularly of performance testing of cloud-native applications, is expected. Within the serverless architecture, the backend code runs without the need to manage and provision resources on servers. For example, in FaaS, scaling, including resource provisioning and allocation, is done automatically by the provider whenever needed, to preserve the response time requirement of the application. In general, given the capabilities of new execution platforms and deployment architectures, the objectives of performance testing might be slightly influenced. Nevertheless, performance testing remains crucial for a wide range of software systems.
\subsection{Threats to Validity} Some of the main sources of threat to the validity of our experimental evaluation results are as follows:
\blue{\textit{Construct.} One of the main sources of threat is the formulation of the RL technique to address the problem, which is very important for successful learning. The modeling of the state space, the actions, and the reward function are major factors that guide the agent throughout the learning and let it learn the optimal policy. For example, the boundaries defined in the discrete state modeling are a threat to internal validity. To mitigate this threat, we used a fuzzy labeling technique to deal with the uncertainty in defining sharp boundary values. Regarding the actions, their formulation affects the granularity of the exploration steps; thus, we tried to define the actions in a way that provides a reasonable granularity for the exploration steps.}
\textit{Internal.} There are a number of threats to the internal validity of the results. RL techniques, like many other machine learning algorithms, are influenced by their hyperparameters, such as the learning rate and discount factor. While we did not change the learning parameters during our efficiency and adaptivity analysis experiments, we conducted a separate set of controlled experiments to study the influence of the learning parameters on the efficiency of our approach.
\blue{An insufficient number of learning episodes/iterations could also act as a source of threat in the initial learning. To alleviate this threat, we iterated the initial learning sufficiently to ensure convergence.}
\blue{Moreover, using a performance simulation module instead of actually executing the SUTs is considered a source of threat to the validity of the results.}
\blue{Finally, model-free RL is mainly intended to solve a decision-making problem (to find an optimal policy to behave) without access to a model of the environment. Therefore, not considering the structure of the environment might be a source of threat in case of improper formulation of the RL technique.}
\textit{External.} Model-free RL learns the optimal policy to achieve the target through interaction with the environment. Our approach was formulated for SUTs with three types of performance sensitivity, namely CPU-intensive, memory-intensive and disk-intensive, and our results are derived from the experimental evaluation of the approach on these types of SUTs. If an experiment contains SUTs with other types of performance sensitivity, such as network-intensive programs, then the approach needs to be slightly reformulated to support the new types of performance sensitivity.
\blue{Moreover, the dependency of the performance simulation module on the performance sensitivity values could pose a threat to validity when the smart tester agent is deployed together with the performance simulation module. The performance simulation module requires the performance sensitivity values of the SUTs, as described in our experiments. However, in a real deployment of the approach, e.g., in a cloud-based testing setup without the performance simulation module, the dependency on the performance sensitivity values is lighter and their exact values are not necessary. Nonetheless, it is still considered a source of threat.}
\section{Related Work} \label{sec::Related Work}
Measurement of performance metrics under typical or stress test execution conditions, which involve both workload and platform configuration aspects \cite{menasce2002load, hill2009tools, apte2017autoperf, michael2017cloudperf, jindal2019performance}, and detection of performance-related issues, such as functional problems or violations of performance requirements emerging under certain workload or resource configuration conditions \cite{briand2005stress, zhang2011automatic, ayala2018one, schulz2019behavior}, are common objectives of different types of performance testing.
Different approaches have been proposed to design the target performance test cases for accomplishing performance-related objectives, such as finding intended performance breaking points. Performance test conditions involve both workload and resource configuration status. A general high-level categorization of the main techniques for generating performance test cases is as follows:
\textit{Source code analysis.} Deriving workload-based performance test conditions using data-flow analysis and symbolic execution are examples of techniques for designing fault-inducing performance test cases based on source code analysis to detect performance-related issues such as functional problems (like memory leaks) and performance requirement violations \cite{yang1996towards, zhang2011automatic}.
\textit{System model analysis.} Modeling the system behavior in terms of performance models like Petri nets and using constraint solving techniques \cite{zhang2002automated}, using the control flow graph of the system and applying search-based techniques \cite{gu2009search, di2007search}, and using other types of system models like UML models and using genetic algorithms \cite{garousi2010genetic, garousi2008empirical, garousi2008traffic, costa2012generating, da2011generation} to generate the performance test cases are examples of the techniques based on system model analysis for generating performance test cases.
\textit{Behavior-driven declarative techniques.} Using a Domain Specific Language (DSL) to provide declarative goal-oriented specifications of performance tests and model-driven execution frameworks for automated execution of the tests \cite{ferme2018declarative, ferme2017towards, walter2016asking}, and using a high-level behavior-driven language inspired from Behavior-Driven Development (BDD) techniques to define test conditions \cite{schulz2019behavior} in combination with a declarative performance testing framework like BenchFlow \cite{ferme2017towards} are examples of behavior-driven techniques for performance testing.
\textit{Modeling the realistic conditions.} Modeling the real user behavior through stochastic form-oriented models \cite{draheim2006realistic, lutteroth2008modeling}, extracting workload characteristics from the recorded requests and modeling the user behavior using, e.g., extended finite state machines (EFSMs) \cite{shams2006model} or Markov chains \cite{vogele2018wessbas}, sandboxing services and deriving a regression model of the deployment environment based on the data resulting from sandboxing to estimate the service capacity \cite{jindal2019performance}, end-user clustering based on the business-level attributes extracted from usage data \cite{maddodi2018generating}, and using automated GUI testing tools with capture and replay techniques to generate realistic interactive usage sequences \cite{adamoli2011automated} are examples of techniques based on modeling the realistic conditions to generate the performance test cases.
\textit{Machine learning-enabled techniques.} Machine learning techniques such as supervised and unsupervised algorithms mainly work by building models and extracting patterns (knowledge) from data, while some other techniques, such as RL algorithms, are intended to train a learner agent to solve problems (tasks); the agent learns an optimal way to achieve an objective through interacting with the system. Machine learning has been widely used for the analysis of data resulting from performance testing and also for performance preservation. Examples include anomaly detection through the analysis of performance data, e.g., resource usage, using clustering techniques \cite{syer2011identifying}, predicting reliability from testing data using Bayesian networks \cite{avritzer2008reliability}, performance signature identification based on performance data analysis using supervised and unsupervised learning techniques \cite{malik2013automatic, malik2010automatic}, and also adaptive RL-driven performance control, in particular response time control, for cloud services \cite{ibidunmoye2017adaptive, veni2016auto, jamshidi2016fuzzy} and for software on other execution platforms, e.g., PLC-based real-time systems \cite{moghadam2018adaptive}.
Machine learning has also been applied to the generation of performance test cases in some studies. Examples include using symbolic execution in combination with an RL algorithm to find the worst-case execution path within a SUT \cite{koo2019pyse}, using RL to find a sequence of input workloads leading to performance degradation \cite{ahmad2019exploratory}, and a feedback-driven learning approach to identify performance bottlenecks by extracting rules from execution traces \cite{grechanik2012automatically}. There are also some adaptive techniques slightly analogous to the concept of RL for generating performance test cases, for example, an adaptive workload generation that adapts the workload dynamically based on some pre-defined adjustment policies \cite{ayala2018one}, and a feedback-driven approach that uses search algorithms to benchmark an NFS server under varying workload parameters in order to find the workload peak rate reaching the target response time confidence level.
\section{Conclusion} \label{sec::Conclusion}
Performance testing is a family of techniques commonly used as part of performance analysis, e.g., estimating performance metrics or detecting performance violations.
One important goal of performance testing, particularly in mission-critical domains, is to verify the robustness of the SUT in terms of finding its performance breaking point. Model-driven techniques might be used for this purpose in some cases, but drawing a precise model of the performance behavior of a complex software system under different application-, platform- and workload-based affecting factors is difficult. Furthermore, such modeling might disregard important implementation and deployment details. In software testing, source code analysis, system model analysis, use-case-based design and behavior-driven techniques are some common approaches for generating performance test cases. However, source code or other artifacts might not be available during testing.
In this paper, we proposed a fuzzy reinforcement learning-based performance testing framework (SaFReL) that adaptively and efficiently generates the target performance test cases resulting in the intended performance breaking points for different software programs, without access to source code and system models. We used Q-learning augmented by fuzzy state modeling and an action selection strategy adaptation that resulted in a self-adaptive autonomous tester agent. The agent can learn the optimal policy to achieve the target (reaching the intended performance breaking point), reuse its learned policy when deployed to test similar software and adapt its strategy when targeting software with different characteristics.
We evaluated the efficiency and adaptivity of SaFReL through a set of experiments based on simulating the performance behavior of various SUT programs. During the experimental evaluation, we tried to answer how efficiently and adaptively SaFReL can perform testing of different SUT programs compared to a typical stress testing approach. We also performed a sensitivity analysis to explore how the efficiency of SaFReL is affected by changing the learning parameters.
We believe that the main strengths of using the intelligent automation offered by SaFReL are 1) efficient generation of test cases and reduction of computation time, and 2) less dependency on source code and models.
Regarding applicability, we believe that SaFReL could be beneficial for the testing of software variants, of evolving software during the CI/CD process, and for regression performance testing. Applying heuristics and techniques to speed up the exploration of the state space, such as using multiple cooperating agents, and extending the proposed approach to support workload-based performance test cases are further steps to continue this research.
\section*{Acknowledgment}
This work has been partially supported by and received funding from the TESTOMAT, XIVT, IVVES and MegaM@Rt2 European projects.
The proper diagnosis and treatment of epilepsy is one of the main public health problems according to the World Health Organization. Worldwide, more than 50 million people suffer from some type of epilepsy, almost 80\% of them in developing regions, where three quarters do not receive an appropriate diagnosis and treatment \cite{WHO2018}. Patients suffering from this disease often manifest different physiological characterizations, which result from the synchronous and excessive discharge of a group of neurons in the cerebral cortex. Epileptic seizures usually have a sudden onset, spread within seconds and, in most cases, are brief. The manifestation of a seizure depends on the region of the brain where it starts and on how fast it propagates. Correctly identifying this information is key to an adequate treatment of this disease.
Electroencephalography (EEG) is a non-invasive and widely available biomedical modality that can be used to diagnose epilepsy and to design a correct treatment for it. The EEG captures the main features that are relevant to the seizure, which helps to discriminate between normal and abnormal brain activity. The features most studied in the literature can be classified into three groups: spectral properties, morphological properties and statistical descriptors. For a comprehensive treatment of these features see \cite{NiedermeyerDaSilva2010,EpilepsyIntersection2011, EpilepticSeizures2010}.
We first briefly explain some concepts used in this work in order to develop the proposed idea.
\emph{Cross-validation} is a model validation technique used to assess how the results of a statistical analysis algorithm generalize to an independent data set. This is done by partitioning a data set as follows: one subset to train the algorithm and the remaining data for testing. Each round of cross-validation involves randomly partitioning the original data set into a \emph{training set} and a \emph{test set}. The \emph{training set} is then used to train a supervised learning algorithm and the \emph{test set} is used to evaluate its performance. This process is repeated several times, where the cross-validation \emph{loss value and apparent error} are used as a performance indicator. Applications of this method in epilepsy date back to the 1970s with \cite{Lloyd1972}, using a technique called \emph{template matching} \cite{QuinteroRincon2016a}. The importance of using cross-validation lies in the fact that in many biomedical applications the data may be very limited for the training and test stages, so if good models are to be built, as much of the available data as possible should be used for the training stage. However, if the validation set is small, it will give a relatively noisy estimate of the predictive performance \cite{Bishop2006}. In \emph{leave-one-out cross-validation}, the data partitions follow the $k$-fold approach, where $k$ equals the total number of observations in the data. See \cite{Alpaydin2014,Hastie2011} for an exhaustive treatment of the statistical properties and \cite{Combrisson2015,Sargolzaei2015,Zhang2014,Stevenson2014,Liang2013} for some examples on EEG signals.
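As an illustration, the following minimal sketch shows how a leave-one-out cross-validation loop can be set up with scikit-learn; the data are hypothetical and the use of scikit-learn is an assumption, since the paper does not specify the implementation:
\begin{verbatim}
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical feature matrix (one row per event) and labels
# (1 = seizure, 0 = non-seizure), only to illustrate the procedure.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (18, 2)),
               rng.normal(1.0, 1.0, (18, 2))])
y = np.array([0] * 18 + [1] * 18)

# Leave-one-out: k equals the total number of observations (36 folds).
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                         cv=LeaveOneOut())
print("LOOCV accuracy:", scores.mean())  # loss value = 1 - mean accuracy
\end{verbatim}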
The Pearson product-moment correlation coefficient is a test of association between pairs of data; it is used as a measure of the degree of linear correlation or dependence between two variables, \emph{seizure} and \emph{non-seizure} in our case. The coefficient is computed as the ratio between the covariance of the two variables and the product of their standard deviations. We refer the reader to \cite{Glantz2011} for a comprehensive treatment of this coefficient.
In this work we study the Pearson product-moment correlation coefficient to predict between seizure and non-seizure events, based on a linear classification of the estimated parameters of the generalized Gaussian distribution, a model studied in our previous works \cite{QuinteroRincon2014, QuinteroRincon2016a, QuinteroRincon2016b, QuinteroRincon2017,QuinteroRincon2018a}. This distribution has two parameters, \emph{scale} $\mathcal{A}$ and \emph{shape} $\mathcal{B}$, which are estimated for each brain rhythm from a wavelet decomposition. Therefore, we have a set of parameters $\mathcal{A}$ and $\mathcal{B}$ both for seizure events and for non-seizure events. These parameters are classified through a linear classifier into two classes: \emph{seizure} or \emph{non-seizure}. Then, as the contribution of this work, a Pearson product-moment correlation coefficient is estimated for each class, yielding a magnitude range within $[-1,+1]$. This scaling facilitates the prediction of epileptic seizures in EEG signals.
This paper is structured as follows. Section \ref{sec:meth} describes the proposed methodology used to describe EEG signals and to discriminate between seizure and non-seizure events in epileptic EEG signals. This methodology is applied and then compared with two similar models on real EEG signals from patients suffering epileptic seizures in Section \ref{sec:results}. These two models were chosen because they use a similar methodology and are based on the classical support vector machine (SVM) classifier. Finally, conclusions are reported in Section \ref{sec:disc}.
\section{Methodology}
\label{sec:meth}
Let $\boldsymbol X \in \mathbb{R}^{N\times M}$ be the matrix gathering $M$ EEG signals $\boldsymbol{x}_m \in \mathbb{R}^{N\times 1}$, measured simultaneously on different channels at $N$ discrete time instants. The proposed methodology consists of 5 stages.
The first stage divides the original signal $\boldsymbol X$ into a series of 2-second segments with $50\%$ overlap, using a rectangular window $\boldsymbol \Omega = \boldsymbol\Omega_0 \left( w-\frac{W-1}{2}\right)$ with $0 \leq w\leq W-1$, such that $\boldsymbol X^{(i)} = \boldsymbol \Omega^{(i)} \boldsymbol X$.
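A minimal sketch of this windowing step is shown below, assuming a 256 Hz sampling rate as in the database used later; the names and implementation details are illustrative, not the paper's actual code:
\begin{verbatim}
import numpy as np

def segment(X, fs=256, win_sec=2.0, overlap=0.5):
    """Split an (N x M) multichannel EEG matrix into rectangular
    2-second windows with 50% overlap (stage one)."""
    W = int(win_sec * fs)           # samples per window
    step = int(W * (1 - overlap))   # hop size between window starts
    return [X[s:s + W, :] for s in range(0, X.shape[0] - W + 1, step)]
\end{verbatim}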
The second stage consists in representing each segment $\boldsymbol X^{(i)}$ in its corresponding time-frequency representation using a 1D multiresolution decomposition, through the Daubechies wavelet (dB4) with 6 scales. The purpose of this decomposition is to evaluate the energy distribution across all the brain rhythms, namely: \emph{delta band: 0.5-4Hz, theta band: 4-8Hz, alpha band: 8-13Hz, beta band: 13-30Hz} and \emph{gamma band: $>$ 30Hz}.
\begin{align}
\boldsymbol X^{(i)}_b &= \left [\boldsymbol X^{(i)}_{\delta} \; \boldsymbol X^{(i)}_{\theta}\; \boldsymbol X^{(i)}_{\alpha} \; \boldsymbol X^{(i)}_{\beta} \; \boldsymbol X^{(i)}_{\gamma} \right ]^T
\end{align}
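For illustration, the decomposition can be obtained with the PyWavelets library as sketched below; the use of PyWavelets is an assumption of this sketch, and the mapping of detail scales to brain rhythms, which assumes a 256 Hz sampling rate, is approximate:
\begin{verbatim}
import pywt  # PyWavelets

def wavelet_bands(x, wavelet="db4", level=6):
    # 6-level 1D multiresolution decomposition of one windowed channel.
    # At fs = 256 Hz the scales roughly align with the rhythms:
    # cD2 ~ gamma, cD3 ~ beta, cD4 ~ alpha, cD5 ~ theta,
    # cD6 and cA6 ~ delta (approximate mapping).
    cA6, cD6, cD5, cD4, cD3, cD2, cD1 = pywt.wavedec(x, wavelet,
                                                     level=level)
    return {"delta": (cA6, cD6), "theta": cD5, "alpha": cD4,
            "beta": cD3, "gamma": cD2}
\end{verbatim}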
In the third stage, the statistical distribution of the wavelet coefficients is represented using the zero-mean generalized Gaussian distribution (GGD), studied in our previous works \cite{QuinteroRincon2014,QuinteroRincon2016a, QuinteroRincon2016b, QuinteroRincon2017,QuinteroRincon2018a}. The GGD has a probability density function (PDF) given by:
\begin{equation}
\label{eq:ggd}
f_\textnormal{GGD}(x;\mathcal{A},\mathcal{B}) = \frac{\mathcal{B}}{2\mathcal{A}\Gamma(\mathcal{B}^{-1})} \exp\left(-\left|\frac{x}{\mathcal{A}}\right|^\mathcal{B}\right)
\end{equation}
where $\mathcal{A} \in \mathbb{R}^+$ is the \emph{scale} parameter, $\mathcal{B} \in \mathbb{R}^+$ is the \emph{shape} parameter and $\Gamma\left(\cdot\right)$ is the Gamma function.
Each scale of the wavelet decomposition is reduced by estimating the statistical parameters $\mathcal{A}$ and $\mathcal{B}$ of the generalized Gaussian distribution (see Equation \eqref{eq:ggd}), in order to obtain the set of features associated with all the wavelet scales, for a 2-second segment with $50\%$ overlap.
\begin{align}
\boldsymbol {\widehat X}^{(i)}_{b} &= \left [\boldsymbol X^{(i)}_{(\mathcal{A},\mathcal{B}),{\delta}} \; \boldsymbol X^{(i)}_{(\mathcal{A},\mathcal{B}),{\theta}} \; \boldsymbol X^{(i)}_{(\mathcal{A},\mathcal{B}),{\alpha}} \; \boldsymbol X^{(i)}_{(\mathcal{A},\mathcal{B}),{\beta}} \; \boldsymbol X^{(i)}_{(\mathcal{A},\mathcal{B}),{\gamma}} \right]^T
= \argmax_{\left[\mathcal{A},\mathcal{B}\right]^T} f_\textnormal{GGD}\left (\boldsymbol X^{(i)}_b; \mathcal{A},\mathcal{B} \right )
\end{align}
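A sketch of this estimation step is shown below; it relies on SciPy's \texttt{gennorm} distribution, whose parameterization coincides with Equation \eqref{eq:ggd} (with \texttt{beta} $= \mathcal{B}$ and \texttt{scale} $= \mathcal{A}$). This is an assumption of the illustration rather than the paper's actual implementation:
\begin{verbatim}
from scipy.stats import gennorm

def ggd_params(coeffs):
    # Maximum-likelihood fit of the zero-mean GGD of Eq. (2):
    # the location is fixed at zero, matching the zero-mean assumption.
    B, _, A = gennorm.fit(coeffs, floc=0.0)
    return A, B  # scale, shape
\end{verbatim}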
In the fourth stage, a linear discriminant analysis is used to classify into two possible classes: $\omega_s$ for seizure events and $\omega_{ns}$ for non-seizure events. For a feature vector $\boldsymbol {\widehat X}^{(i)}_b$ belonging to class $\omega_s$ or to class $\omega_{ns}$, it is assumed that $\boldsymbol {\widehat X}^{(i)}_b$ has a normal distribution with mean value $\boldsymbol\mu_s$ (or $\boldsymbol\mu_{ns}$) and covariance matrix $\boldsymbol\Sigma_s$ (or $\boldsymbol\Sigma_{ns}$), then:
\begin{align}
\label{eq:linear1}
P \left(\boldsymbol {\widehat X}^{(i)}_b \middle|\omega_{s}\right) &= \frac{1}{\sqrt{ \left( 2 \pi \right)^k \left| \boldsymbol\Sigma_s \right| }} \exp \left[ -\frac12 \left( \boldsymbol {\widehat X}^{(i)}_b -\boldsymbol\mu_s \right)^T \boldsymbol\Sigma_s^{-1} \left( \boldsymbol {\widehat X}^{(i)}_b - \boldsymbol\mu_s \right) \right] \\
P \left(\boldsymbol {\widehat X}^{(i)}_b \middle|\omega_{ns}\right) &= \frac{1}{\sqrt{ \left( 2 \pi \right)^k \left| \boldsymbol\Sigma_{ns} \right| }} \exp \left[ -\frac12 \left( \boldsymbol {\widehat X}^{(i)}_b - \boldsymbol\mu_{ns} \right)^T \boldsymbol\Sigma_{ns}^{-1} \left( \boldsymbol {\widehat X}^{(i)}_b - \boldsymbol\mu_{ns} \right) \right]
\label{eq:linear2}
\end{align}
where $k$ is the dimension of the estimated vector $\boldsymbol {\widehat X}^{(i)}_b$ and $P(\cdot)$ is the probability of the particular event.
For the linear discriminant analysis, the sample means $\boldsymbol\mu_s$ (or $\boldsymbol\mu_{ns}$) of each class are computed. The sample covariance $\boldsymbol\Sigma_s$ (or $\boldsymbol\Sigma_{ns}$) is then computed by first subtracting the sample mean $\boldsymbol\mu_s$ (or $\boldsymbol\mu_{ns}$) of each class from the observations of that class, and taking the empirical covariance matrix $\boldsymbol\Sigma_s$ (or $\boldsymbol\Sigma_{ns}$) of the result. Therefore, the linear discriminant for the classification problem is given by
\begin{align}
\log \frac{ \rho \left(\boldsymbol {\widehat X}^{(i)}_b \middle|\omega_{s}\right) }{ \rho \left(\boldsymbol {\widehat X}^{(i)}_b \middle|\omega_{ns}\right) }
&= (\boldsymbol {\widehat X}^{(i)}_b - \boldsymbol\mu_s)^T \boldsymbol\Sigma_s^{-1}(\boldsymbol {\widehat X}^{(i)}_b - \boldsymbol\mu_s) + ln \big | \boldsymbol\Sigma_s \big | \nonumber \\
&~~~~ - (\boldsymbol {\widehat X}^{(i)}_b - \boldsymbol\mu_{ns})^T \boldsymbol\Sigma_{ns}^{-1}(\boldsymbol {\widehat X}^{(i)}_b - \boldsymbol\mu_{ns}) - ln \big | \boldsymbol\Sigma_{ns} \big |
\label{eq:Bayes}.
\end{align}
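The right-hand side of Equation \eqref{eq:Bayes} can be transcribed directly, as in the following minimal NumPy sketch (variable names are hypothetical):
\begin{verbatim}
import numpy as np

def discriminant(x, mu_s, cov_s, mu_ns, cov_ns):
    # Right-hand side of Eq. (6): Mahalanobis distance plus
    # log-determinant for each class; smaller values indicate that
    # x is better explained by the seizure class.
    d_s, d_ns = x - mu_s, x - mu_ns
    return (d_s @ np.linalg.inv(cov_s) @ d_s
            + np.log(np.linalg.det(cov_s))
            - d_ns @ np.linalg.inv(cov_ns) @ d_ns
            - np.log(np.linalg.det(cov_ns)))
\end{verbatim}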
Finally, in stage five, the Pearson product-moment correlation coefficient $r$ is estimated through
\begin{align}
r = \frac{\sum(\omega_{s} - \overline{\omega_{s}})(\omega_{ns} - \overline{\omega_{ns}})} {\sqrt{\sum(\omega_{ns} - \overline{\omega_{s}})^2\sum(\omega_{ns}- \overline{\omega_{ns}})^2}}
\label{eq:pearson}
\end{align}
where $\overline{\omega_{s}}$ and $\overline{\omega_{ns}}$ are the means of each class. The magnitude of $r$ describes the strength of the association between the two variables and the sign of $r$ indicates the direction of this association: $r=+1$ when the two variables increase together, and $r=-1$ when one decreases while the other increases. It also captures the most common case of two variables that are linearly correlated. The value $r=0$ indicates absence of correlation, $r=+1$ indicates total positive correlation and $r=-1$ indicates total negative correlation.
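In practice, the coefficient of Equation \eqref{eq:pearson} can be computed as in the following sketch, using SciPy's \texttt{pearsonr}; the paired sequences shown are hypothetical:
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired feature sequences for the two classes.
rng = np.random.default_rng(3)
w_s = rng.normal(size=50)
w_ns = 0.8 * w_s + rng.normal(scale=0.3, size=50)

r, p_value = pearsonr(w_s, w_ns)
print(f"r = {r:+.3f}")  # r lies in [-1, +1]; its sign gives the direction
\end{verbatim}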
\section{Results}
\label{sec:results}
The proposed methodology was evaluated using the Children's Hospital Boston database, which consists of 36 EEG recordings of pediatric subjects with intractable seizures. The EEG signals are bipolar and sampled at 256Hz for each subject. Each recording contains a seizure event with a marked onset and end, detected by an experienced neurologist. In this work we use 18 seizure events and 18 non-seizure events from 9 subjects. See \cite{Goldberger2000} for more details.
During the pre-processing stage, two cascaded Butterworth IIR filters were used: a second-order low-pass filter with a 100 Hz cutoff frequency and a first-order high-pass filter with a 30 Hz cutoff frequency; in addition, the mean value of each channel was subtracted. See \cite{QuinteroRincon2012} for a broad state of the art on different types of artifacts in EEG signals.
Seizure detection consists of two main stages, feature extraction and a machine learning-based classification stage, in order to characterize and quantify seizure or non-seizure events. Using the same input data, our model \textbf{[Q]} was compared with two similar state-of-the-art models that also work over all the brain rhythms with a 2-second window length and 50\% overlap: \textbf{[S]} Shoeb et al. \cite{Shoeb2004,Shoeb2010}: using a rectangular window, the features are extracted by computing the spectral/spatial energy differences and their spectral/temporal relationship, using a Daubechies wavelet (dB4) with 6 scales together with the power spectral density. \textbf{[C]} Chan et al. \cite{Chan2008}: using a Hamming window, the features are estimated through the power spectrum, using the Fourier transform (FFT) together with a periodogram.
Both models use a classifier based on support vector machines (SVM). It is worth noting that, although the feature extraction presents some differences and the classification stage is different, this comparison allows contrasting similar methodologies as well as the computational cost, which can be crucial in real-time implementations. For example, an optimal solution for the classical SVM classifier involves a complexity of the order of $n^2$ or $n^3$ products, where $n$ is the size of the data set, which is usually large when analyzing EEG signals \cite{Bordes2005,ShalevShwartz2008}, whereas for a linear classifier the complexity is of the order of $mn + mt + nt$, where $m$ is the number of samples, $n$ is the number of features and $t = \min(m, n)$ \cite{Cai2008}.
Figures \ref{fig:Deltas}-\ref{fig:Gammas} show the performance through the different scatter plots, for all the brain rhythms of the two classes, $\omega_s$ for seizure and $\omega_{ns}$ for non-seizure, allowing a good discrimination by visual inspection for all models. The nomenclature used on the $x$- and $y$-axes of the scatter plots is: scale $\mathcal{A}$ and shape $\mathcal{B}$ for [Q], using a linear classifier; energy $\mathcal{E}$ and power $\mathcal{P}$ for [S]; and frequency $\mathcal{F}$ and power $\mathcal{P}$ for [C], both using an SVM classifier.
\begin{figure}[htbp]
\centering
\subfigure[\text{[Q]}: Linear classifier]{\includegraphics[width=52mm]{./Figures/figure_delta2.png}}
\subfigure[\text{[S]}: SVM classifier]{\includegraphics[width=52mm]{./Shoeb/figure_delta2.png}}
\subfigure[\text{[C]}: SVM classifier]{\includegraphics[width=52mm]{./Chan/figure_delta2.png}}
\caption{Scatter plot of the delta band: (a) [Q] scale $\mathcal{A}$ on the $x$-axis and shape $\mathcal{B}$ on the $y$-axis. (b) [S] Energy $\mathcal{E}$ on the $x$-axis and power $\mathcal{P}$ on the $y$-axis. (c) [C] Frequency $\mathcal{F}$ on the $x$-axis and power $\mathcal{P}$ on the $y$-axis. All methods allow discrimination between non-seizure events (blue circles) and seizure events (red triangles).}
\label{fig:Deltas}
\end{figure}
\begin{figure}[htbp]
\centering
\subfigure[\text{[Q]}: Linear classifier]{\includegraphics[width=52mm]{./Figures/figure_theta2.png}}
\subfigure[\text{[S]}: SVM classifier]{\includegraphics[width=52mm]{./Shoeb/figure_theta2.png}}
\subfigure[\text{[C]}: SVM classifier]{\includegraphics[width=52mm]{./Chan/figure_theta2.png}}
\caption{Scatter plot of the theta band: (a) [Q] scale $\mathcal{A}$ on the $x$-axis and shape $\mathcal{B}$ on the $y$-axis. (b) [S] Energy $\mathcal{E}$ on the $x$-axis and power $\mathcal{P}$ on the $y$-axis. (c) [C] Frequency $\mathcal{F}$ on the $x$-axis and power $\mathcal{P}$ on the $y$-axis. All methods allow discrimination between non-seizure events (blue circles) and seizure events (red triangles).}
\label{fig:Thetas}
\end{figure}
\begin{figure}[htbp]
\centering
\subfigure[\text{[Q]}: Linear classifier]{\includegraphics[width=52mm]{./Figures/figure_alpha2.png}}
\subfigure[\text{[S]}: SVM classifier]{\includegraphics[width=52mm]{./Shoeb/figure_alpha2.png}}
\subfigure[\text{[C]}: SVM classifier]{\includegraphics[width=52mm]{./Chan/figure_alpha2.png}}
\caption{Scatter plot of the alpha band: (a) [Q] scale $\mathcal{A}$ on the $x$-axis and shape $\mathcal{B}$ on the $y$-axis. (b) [S] Energy $\mathcal{E}$ on the $x$-axis and power $\mathcal{P}$ on the $y$-axis. (c) [C] Frequency $\mathcal{F}$ on the $x$-axis and power $\mathcal{P}$ on the $y$-axis. All methods allow discrimination between non-seizure events (blue circles) and seizure events (red triangles).}
\label{fig:Alphas}
\end{figure}
\begin{figure}[htbp]
\centering
\subfigure[\text{[Q]}: Linear classifier]{\includegraphics[width=52mm]{./Figures/figure_beta2.png}}
\subfigure[\text{[S]}: SVM classifier]{\includegraphics[width=52mm]{./Shoeb/figure_beta2.png}}
\subfigure[\text{[C]}: SVM classifier]{\includegraphics[width=52mm]{./Chan/figure_beta2.png}}
\caption{Scatter plot of the beta band: (a) [Q] scale $\mathcal{A}$ on the $x$-axis and shape $\mathcal{B}$ on the $y$-axis. (b) [S] Energy $\mathcal{E}$ on the $x$-axis and power $\mathcal{P}$ on the $y$-axis. (c) [C] Frequency $\mathcal{F}$ on the $x$-axis and power $\mathcal{P}$ on the $y$-axis. All methods allow discrimination between non-seizure events (blue circles) and seizure events (red triangles).}
\label{fig:Betas}
\end{figure}
\begin{figure}[htbp]
\centering
\subfigure[\text{[Q]}: Linear classifier]{\includegraphics[width=52mm]{./Figures/figure_gamma2.png}}
\subfigure[\text{[S]}: SVM classifier]{\includegraphics[width=52mm]{./Shoeb/figure_gamma2.png}}
\subfigure[\text{[C]}: SVM classifier]{\includegraphics[width=52mm]{./Chan/figure_gamma2.png}}
\caption{Scatter plot of the gamma band: (a) [Q] scale $\mathcal{A}$ on the $x$-axis and shape $\mathcal{B}$ on the $y$-axis. (b) [S] Energy $\mathcal{E}$ on the $x$-axis and power $\mathcal{P}$ on the $y$-axis. (c) [C] Frequency $\mathcal{F}$ on the $x$-axis and power $\mathcal{P}$ on the $y$-axis. All methods allow discrimination between non-seizure events (blue circles) and seizure events (red triangles).}
\label{fig:Gammas}
\end{figure}
The comparison of the contingency tables, or confusion matrices, shown in Table \ref{tab:confusion} yields a sensitivity, or true positive rate (TPR), of 100\% for all models. The specificity, or true negative rate (TNR), shows better performance for the model based on the linear classifier [Q] than for the other models, [S] and [C], which are based on an SVM classifier. This suggests that our linear-classifier model obtains the best accuracy for all brain rhythms over the 36 events studied (18 non-seizure and 18 seizure). To simplify visual interpretation, we highlight in red the method that achieves the highest sensitivity, specificity and overall accuracy for each frequency band.
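For transparency, the metrics in Table \ref{tab:confusion} can be recomputed from the raw confusion counts, as in the following minimal Python sketch (our own illustration, not the original implementation; the counts shown are read off from the delta-band results of model [Q]):
\begin{verbatim}
def confusion_metrics(tp, fp, tn, fn):
    # sensitivity (TPR), false positive rate (FPR),
    # specificity (TNR) and number of correct events (ACC)
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    tnr = tn / (tn + fp)
    acc = tp + tn
    return tpr, fpr, tnr, acc

# delta band, model [Q]: 18 seizure and 18 non-seizure events
print(confusion_metrics(tp=18, fp=3, tn=15, fn=0))
# -> (1.0, 0.166..., 0.833..., 33), matching the Delta row for [Q]
\end{verbatim}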
\begin{table}[htp]
\begin{center}
\begin{tabular}{||c|| c|c|c|| c|c|c|| c|c|c|| c|c|c||}
\hline \hline
& \multicolumn{3}{|c||}{TPR} & \multicolumn{3}{|c||}{FPR} & \multicolumn{3}{|c||}{TNR} & \multicolumn{3}{|c||}{ACC} \\
\hline
\hline
Bands & Q & S & C & Q & S & C & Q & S & C & Q & S & C \\
\hline
\hline
Delta & 1 & 1 & 1 & \hl{0.16} & 0.33 & 0.38 & \hl{0.83} & 0.66 & 0.61 & \hl{33} & 30 & 29 \\
Theta & 1 & 1 & 1 & \hl{0.05} & 0.33 & 0.38 & \hl{0.94} & 0.66 & 0.61 & \hl{35} & 30 & 29 \\
Alpha & 1 & 1 & 1 & \hl{0.11} & 0.22 & 0.38 & \hl{0.88} & 0.77 & 0.61 & \hl{34} & 32 & 29 \\
Beta & 1 & 1 & 1 & \hl{0.05} & 0.33 & 0.44 & \hl{0.94} & 0.66 & 0.55 & \hl{35} & 30 & 28 \\
Gamma & 1 & 1 & 1 & \hl{0.05} & 0.33 & 0.55 & \hl{0.94} & 0.66 & 0.44 & \hl{35} & 30 & 26 \\
\hline \hline
\end{tabular}
\end{center}
\caption{\label{tab:confusion} Comparison between the proposed model [Q], based on a linear classifier, and two similar models [S] and [C] that use an SVM classifier. 36 events were studied (18 non-seizure and 18 seizure) for each brain rhythm, using the following metrics: sensitivity or true positive rate (TPR); false positive rate (FPR); specificity or true negative rate (TNR); and overall classification accuracy (ACC).}
\end{table}
For each iteration $k$, the \emph{loss value} and the \emph{apparent error} of the classifiers were computed for all observations using the trained model. The \emph{loss value} measures the classification loss on the observations not used during training. The \emph{apparent error} is the discrepancy between the training data and the predictions the classifier makes on those same training data; in other words, it is the error rate committed on the very sample used to build the model. Low values of both quantities indicate high confidence in, and accuracy of, the classifier used.
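As an illustration, both quantities can be estimated along the following lines (a hedged Python/scikit-learn sketch, not necessarily the tooling used in our experiments; \texttt{X} and \texttt{y} are placeholders for the feature matrix and event labels):
\begin{verbatim}
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

clf = LinearDiscriminantAnalysis()  # stand-in for the linear classifier [Q]
# loss value: misclassification on observations held out from training
loss_value = 1.0 - cross_val_score(clf, X, y, cv=10).mean()
# apparent (resubstitution) error: misclassification on the training sample
apparent_error = 1.0 - clf.fit(X, y).score(X, y)
\end{verbatim}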
Table \ref{tab:LRE} shows the performance of the models in terms of the \emph{loss value} and the \emph{apparent error}. The model [Q], based on a linear classifier, attains the lowest loss and error values compared with the SVM-based models [S] and [C].
\begin{table}[htp]
\begin{center}
\begin{tabular}{||c|| c|c|c|| c|c|c||}
\hline \hline
& \multicolumn{3}{|c||}{Loss value} & \multicolumn{3}{|c||}{Apparent error} \\
\hline
\hline
Bands & [Q] & [S] & [C] & [Q] & [S] & [C] \\
\hline
\hline
Delta & 0.083 & 0.166 & 0.250 & 0.083 & 0.166 & 0.194 \\
Theta & 0.027 & 0.194 & 0.222 & 0.027 & 0.166 & 0.194 \\
Alpha & 0.055 & 0.138 & 0.194 & 0.055 & 0.111 & 0.194 \\
Beta & 0.055 & 0.166 & 0.222 & 0.027 & 0.166 & 0.222 \\
Gamma & 0.083 & 0.194 & 0.277 & 0.027 & 0.166 & 0.277 \\
\hline \hline
\end{tabular}
\end{center}
\caption{\label{tab:LRE} \emph{Loss value} and \emph{apparent error} for the three compared models over the 36 events (18 non-seizure and 18 seizure), for each brain rhythm. All three models classify correctly, as the values are low; nevertheless, our model [Q], based on linear classification, achieves the best values with respect to [S] and [C].}
\end{table}
With all the above information at hand, we asked whether it is possible to relate the results on a scale between $[-1,+1]$ in such a way that it serves as a predictor of seizures. To answer this question, we used the Pearson product-moment correlation coefficient for the classes $\omega_s$ (seizure events) and $\omega_{ns}$ (non-seizure events), comparing the model [Q], based on the linear classifier, with the two SVM-based models [S] and [C]. The study, using 36 events (18 seizure and 18 non-seizure) for each brain rhythm and reported in Table \ref{tab:Pearson}, shows that our model [Q] can predict seizures on a correlation scale of $[-1,+1]$: for $\omega_{ns}$, the \emph{delta}, \emph{theta}, \emph{alpha} and \emph{beta} bands show a high correlation, with $r$ close to $1$, except for the \emph{gamma} band, where the correlation is moderate, all with $p < 0.05$, allowing a good confidence interval. For $\omega_{s}$, on the other hand, the bands are uncorrelated, with $r$ close to zero for \emph{delta}, \emph{theta}, \emph{alpha} and \emph{beta},
while for the \emph{gamma} band the absence of correlation is greatest. It is worth noting that, when comparing the two classes in the \emph{theta} band, the value of $r$ lies in the middle of the desired interval for non-seizure events, yet is not very close for seizure events. The models [S] and [C] show, in general, very similar correlation values for both classes, which does not allow a clear discrimination between seizure and non-seizure events. These results suggest that our model, based on the generalized Gaussian distribution together with a linear classifier, can be characterized by a scale value in $[-1,+1]$ to discriminate between seizure and non-seizure events in all brain rhythms, allowing thresholds to be defined from the confidence interval of the proposed scale.
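The correlation analysis can be reproduced along these lines (a minimal sketch; \texttt{scores\_q} and \texttt{scores\_ref} are placeholders for the per-event outputs of two models, and the confidence interval uses the standard Fisher $z$-transform):
\begin{verbatim}
import numpy as np
from scipy import stats

r, p = stats.pearsonr(scores_q, scores_ref)
z = np.arctanh(r)                      # Fisher z-transform
se = 1.0 / np.sqrt(len(scores_q) - 3)
ci95 = np.tanh([z - 1.96 * se, z + 1.96 * se])
\end{verbatim}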
\begin{table}[htp]
\resizebox{\textwidth}{!}{
\centering
\begin{tabular}{||c|| c|c|c|| c|c|c|| c|c|c||}
\hline \hline
& \multicolumn{3}{|c||}{[Q]} & \multicolumn{3}{|c||}{[C]} & \multicolumn{3}{|c||}{[S]} \\
\hline
\hline
Bands & r & p & CI95\% & r & p & CI95\% & r & p & CI95\% \\
\hline
\hline
Delta $\omega_{ns}$ & \hl{0.88} & $<$ 0.001 & 0.70 0.95 & 0.46 & 0.054 & 0 0.76 & 0.83 & $<$ 0.001 & 0.59 0.93 \\
Delta $\omega_s$ & 0.39 & 0.11 & -0.01 0.72 & 0.19 & 0.448 & -0.30 0.60 & 0.50 & 0.034 & 0.04 0.78 \\
\hline
Theta $\omega_{ns}$ & \hl{0.81} & $<$ 0.001 & 0.55 0.92 & 0.56 & 0.014 & 0.13 0.81 & 0.75 & $\approx$ 0 & 0.45 0.90 \\
Theta $\omega_s$ & 0.51 & 0.03 & 0.06 0.79 & 0.22 & 0.363 & -0.26 0.62 & 0.76 & $\approx$ 0 & 0.46 0.90 \\
\hline
Alpha $\omega_{ns}$ & \hl{0.80} & $<$ 0.001 & 0.53 0.92 & 0.14 & 0.558 & -0.34 0.57 & 0.33 & 0.176 & -0.15 0.69 \\
Alpha $\omega_s$ & 0.45 & 0.06 & -0.02 0.76 & 0.30 & 0.217 & -0.18 0.67 & 0.80 & $<$ 0.001 & 0.53 0.92 \\
\hline
Beta $\omega_{ns}$ & \hl{0.72} & $<$ 0.001 & 0.38 0.89 & 0.60 & 0.007 & 0.19 0.83 & 0.69 & $<$ 0.001 & 0.34 0.87 \\
Beta $\omega_s$ & 0.15 & 0.56 & -0.34 0.58 & 0.50 & 0.031 & 0.05 0.78 & 0.65 & $\approx$ 0 & 0.26 0.85 \\
\hline
Gamma $\omega_{ns}$ & \hl{0.58} & 0.01 & 0.15 0.82 & 0.34 & 0.167 & -0.15 0.69 & 0.93 & $\approx$ 0 & 0.82 0.97 \\
Gamma $\omega_s$ & -0.11 & 0.66 & -0.55 0.38 & 0.01 & 0.947 & -0.45 0.47 & 0.85 & $<$ 0.001 & 0.65 0.94 \\
\hline \hline
\end{tabular}}
\caption{\label{tab:Pearson} Pearson product-moment correlation coefficient between $\omega_s$ (seizure events) and $\omega_{ns}$ (non-seizure events) for the different classification models: [Q], based on a linear classifier, and [S] and [C], based on an SVM classifier. 36 events were used (18 seizure and 18 non-seizure) for each brain rhythm; $r=1$ denotes total correlation, $r=0$ absence of correlation, and $r=-1$ total anti-correlation. $CI95\%$ is the 95 percent confidence interval. For non-seizure events, [Q] shows a high correlation for the \emph{delta}, \emph{theta}, \emph{alpha} and \emph{beta} bands, with the exception of the \emph{gamma} band, while for seizure events it shows an absence of correlation. The models [S] and [C] do not exhibit a clear correlation pattern. This suggests that our model [Q] can estimate the changes between seizure and non-seizure events on a $[-1,+1]$ scale.}
\end{table}
Detection across all brain rhythms, based on the $CI95\%$ confidence interval of the Pearson product-moment correlation coefficient of the studied signals, occurs on average 1 second before seizure onset in $50\%$ of the cases, exactly at seizure onset in $30\%$, and with an average latency of $3.07$ seconds in the remaining $20\%$.
\section{Conclusions}
\label{sec:disc}
The preliminary results of this work suggest that detecting seizure events using the generalized Gaussian distribution with a linear classifier, together with
the Pearson product-moment correlation coefficient, makes it possible to distinguish a seizure from a non-seizure through a scale value in $[-1,+1]$ across all brain rhythms. It is therefore potentially interesting for devising automatic, real-time seizure detection algorithms.
Prospects for future work include an extensive evaluation of this methodology; improving the latency of seizure detection; a detailed study of the reliability of the prediction as the seizure unfolds; applying our methodology to long-term signals during sleep; testing with real-time signals; and comparisons with other state-of-the-art prediction methods, including high robustness to noise and artifact control, as well as the intensity, duration and propagation of the seizures per channel.
\section*{Conflicts of Interest}
The authors declare that they have no conflicts of interest.
\section*{Acknowledgements}
Part of this work was funded by \emph{ITBACyT} DP.557No.34/2015, scientific and technological activities of the research department of the Instituto Tecnol\'ogico de Buenos Aires, and by protocol \emph{07/15} of FLENI.
\bibliographystyle{elsart-num}
\section{Introduction}
Named entity recognition (NER), as a fundamental task in information extraction, aims to locate and classify named entities from unstructured natural language.
A considerable number of approaches equipped with deep neural networks have shown promising performance~\cite{chiu2016named} on fully supervised NER. Notably, pre-trained language models (e.g., BERT~\cite{devlin2018bert}) with an additional classifier achieve significant success on this task and gradually become the base paradigm. Such studies demonstrate that deep models could yield remarkable results accompanied by a large amount of annotated corpora.
\begin{figure}[t]
\centering
\includegraphics[width = 0.95\linewidth]{overview.pdf}
\caption{An overview of \textsc{Few-NERD}. The inner circle represents the coarse-grained entity types and the outer circle represents the fine-grained entity types, some types are denoted by abbreviations.}
\label{fig:overview}
\end{figure}
With the emergence of knowledge from various domains, named entities, especially those requiring professional knowledge to understand, are difficult to annotate manually on a large scale.
Under this circumstance, studying NER systems that could learn unseen entity types with few examples, i.e., few-shot NER, plays a critical role in this area. There is a growing body of literature that recognizes the importance of few-shot NER and contributes to the task~\cite{hofer2018few, fritzler2019few, yang2020simple, li2020few, huang2020few}. Unfortunately, \textit{there is still no dataset specifically designed for few-shot NER}. Hence, these methods collect previously proposed supervised NER datasets and re-organize them into a few-shot setting. Common options of datasets include OntoNotes~\cite{weischedel2013ontonotes}, CoNLL'03~\cite{sang2003introduction}, WNUT'17~\cite{derczynski2017results}, etc.
These research efforts of few-shot learning for named entities mainly face two challenges:
First, most datasets used for few-shot learning have only 4-18 coarse-grained entity types, making it hard to construct an adequate variety of ``N-way'' meta-tasks and learn correlation features; in reality, we observe that most unseen entities are fine-grained. Second, because of the lack of benchmark datasets, the settings of different works are inconsistent~\cite{huang2020few,yang2020simple}, leading to unclear comparisons.
To sum up, these methods make promising contributions to few-shot NER; nevertheless, a dedicated dataset is urgently needed to provide a unified benchmark for rigorous comparisons.
To alleviate the above challenges, we present a large-scale human-annotated few-shot NER dataset, $\textsc{Few-NERD}$, which consists of 188.2k sentences extracted from Wikipedia articles, with 491.7k entities manually annotated by well-trained annotators (Section~\ref{sec:human}). To the best of our knowledge, $\textsc{Few-NERD}$ is the first dataset specially constructed for few-shot NER and also one of the largest human-annotated NER datasets (statistics in Section~\ref{sec:datasize}). We carefully design an annotation schema of 8 coarse-grained entity types and 66 fine-grained entity types by conducting several pre-annotation rounds (Section~\ref{sec:schema}). In contrast, among the most widely used NER datasets, CoNLL has 4 entity types, WNUT'17 has 6 entity types and OntoNotes has 18 entity types (7 of them are value types). The variety of entity types makes $\textsc{Few-NERD}$ contain rich contextual features with a finer granularity for better evaluation of few-shot NER.
The distribution of the entity types in $\textsc{Few-NERD}$ is shown in Figure~\ref{fig:overview}, more details are reported in Section~\ref{sec:dist}. We conduct an analysis of the mutual similarities among all the entity types of $\textsc{Few-NERD}$ to study knowledge transfer (Section~\ref{sec:sim}). The results show that our dataset can provide sufficient correlation information between different entity types for few-shot learning.
For benchmark settings, we design three tasks on the basis of $\textsc{Few-NERD}$, including a standard supervised task (\textsc{Few-NERD (sup)}) and two few-shot tasks (\textsc{Few-NERD (intra)} and \textsc{Few-NERD (inter)}); for more details see Section~\ref{sec:bench}. \textsc{Few-NERD (sup)}, \textsc{Few-NERD (intra)}, and \textsc{Few-NERD (inter)} assess instance-level generalization, type-level generalization and knowledge transfer of NER methods, respectively. We implement models based on recent state-of-the-art approaches and evaluate them on $\textsc{Few-NERD}$ (Section~\ref{sec:exp}). Empirical results show that $\textsc{Few-NERD}$ is challenging in all three settings. We also conduct sets of subsidiary experiments to analyze promising directions for few-shot NER. Hopefully, the research of few-shot NER could be further facilitated by $\textsc{Few-NERD}$.
\section{Related Work}
As a pivotal task of information extraction, NER is essential for a wide range of technologies~\cite{cui2019kbqa, li2019chinese, ding-etal-2019-event, shen2020modeling}. And a considerable number of NER datasets have been proposed over the years. For example, CoNLL'03~\cite{sang2003introduction} is regarded as one of the most popular datasets, which is curated from Reuters News and includes 4 coarse-grained entity types. Subsequently, a series of NER datasets from various domains are proposed~\cite{balasuriya2009named, ritter-etal-2011-named, weischedel2013ontonotes, stubbs2015annotating, derczynski2017results}. These datasets formulate a sequence labeling task and most of them contain 4-18 entity types. Among them, due to the high quality and size, OntoNotes 5.0~\cite{weischedel2013ontonotes} is considered as one of the most widely used NER datasets recently.
As approaches equipped with deep neural networks have shown satisfactory performance on NER with sufficient supervision~\cite{lample-etal-2016-neural, ma2016end}, few-shot NER has received increasing attention~\cite{hofer2018few, fritzler2019few, yang2020simple, li2020few}. Few-shot NER is a considerably challenging and practical problem that could facilitate the understanding of textual knowledge by neural models~\cite{huang2020few}. Due to the lack of specific benchmarks for few-shot NER, current methods collect existing NER datasets and use different few-shot settings.
To provide a benchmark that could comprehensively assess the generalization of models under few examples, we annotate \textsc{Few-NERD}. To make the dataset practical and close to reality, we adopt a fine-grained schema of entity annotation, which is inspired and modified from previous fine-grained entity recognition studies~\cite{ling2012fine, gillick2014context, choi2018ultra, ringland2019nne}.
\section{Problem Formulation}
\subsection{Named Entity Recognition}
NER is normally formulated as a sequence labeling problem. Specifically, for an input sequence of tokens $\bm{x}=\{x_1,x_2,...,x_t\}$, NER aims to assign each token $x_i$ a label $y_i \in \mathcal{Y}$ to indicate either that the token is part of a named entity (such as \texttt{Person}, \texttt{Organization}, \texttt{Location}) or that it does not belong to any entity (denoted by the \texttt{O} class), where $\mathcal{Y}$ is a set of pre-defined entity types.
\subsection{Few-shot Named Entity Recognition}
$N$-way $K$-shot learning is conducted by iteratively constructing episodes. For each episode in training, $N$ classes ($N$-way) and $K$ examples ($K$-shot) for each class are sampled to build a support set $\mathcal{S}_{\text{train}} = \{\bm{x}^{(i)}, \bm{y}^{(i)}\}_{i=1}^{N*K}$, and $K'$ examples for each of the $N$ classes are sampled to construct a query set $\mathcal{Q}_{\text{train}} = \{\bm{x}^{(j)}, \bm{y}^{(j)}\}_{j=1}^{N*K'}$, with $\mathcal{S} \bigcap \mathcal{Q} = \varnothing$. Few-shot learning systems are trained by predicting the labels of the query set $\mathcal{Q}_{\text{train}}$ with the information of the support set $\mathcal{S}_{\text{train}}$. The supervision of $\mathcal{S}_{\text{train}}$ and $\mathcal{Q}_{\text{train}}$ is available in training. In the testing procedure, all the classes are unseen in the training phase, and by using few labeled examples of the support set $\mathcal{S}_{\text{test}}$, few-shot learning systems need to make predictions on the unlabeled query set $\mathcal{Q}_{\text{test}}$ ($\mathcal{S} \bigcap \mathcal{Q} = \varnothing$). However, in a sequence labeling problem like NER, a sentence may contain multiple entities from different classes, and it is imperative to sample examples at the sentence level, since contextual information is crucial for sequence labeling problems, especially for NER. Thus the sampling is more difficult than in conventional classification tasks like relation extraction~\cite{han2018fewrel}.
Some previous works~\cite{yang2020simple, li2020few} use greedy-based sampling strategies that iteratively judge whether a sentence can be added to the support set, but the constraints become increasingly strict as the sampling proceeds. For example, in a 5-way 5-shot setting, if the support set already has 4 classes with 5 examples each and 1 class with 4 examples, the next sampled sentence must contain only that specific entity to strictly meet the requirement of $5$ way $5$ shot. This is unsuitable for $\textsc{Few-NERD}$, since it is annotated with dense entities. Thus, as shown in Algorithm~\ref{alg:gre} (a Python sketch of the procedure follows the algorithm), we adopt an $N$-way $K$$\sim$$2K$-shot setting in our paper, the primary principle of which is to ensure that each class in $\mathcal{S}$ contains $K$$\sim$$2K$ examples, effectively alleviating these sampling limitations.
\begin{algorithm}[h]
\caption{Greedy $N$-way $K$$\sim$$2K$-shot sampling algorithm}
\LinesNumbered
\label{alg:gre}
\KwIn{Dataset $\mathcal{X}$, Label set $\mathcal{Y}$, $N$, $K$}
\KwOut{Support set $\mathcal{S}$}
$\mathcal{S}\leftarrow \varnothing$; {\tcp{Init the support set}}
{\tcp{Init the count of entity types}}
\For{$i=1$ to $N$}{
$\text{Count}[i] = 0$ \;
}
\Repeat{$\text{Count}[i] \geq K$ for $i = 1$ to $N$}{
Randomly sample $(\bm{x}, \bm{y}) \in \mathcal{X}$ \;
Compute $|\text{Count}|$ and $\text{Count}[i]$ after the update \;
\eIf{$|\text{Count}| > N$ or $\exists i,\ \text{Count}[i] > 2K$}{
Continue \;
}{
$\mathcal{S} = \mathcal{S} \bigcup (\bm{x}, \bm{y})$ \;
Update $\text{Count}[i]$ \;
}
}
\end{algorithm}
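For concreteness, Algorithm~\ref{alg:gre} can be paraphrased in Python as follows (a simplified sketch: each sentence is counted once per entity type it contains, and \texttt{x.types} is a hypothetical attribute holding the set of entity types occurring in sentence \texttt{x}):
\begin{verbatim}
import random

def sample_support(dataset, target_types, K):
    # greedy N-way K~2K-shot sampling, N = len(target_types)
    support, count = [], {t: 0 for t in target_types}
    while any(c < K for c in count.values()):
        x = random.choice(dataset)
        if not x.types <= set(target_types):
            continue                  # would make |Count| > N
        if any(count[t] + 1 > 2 * K for t in x.types):
            continue                  # would exceed the 2K cap
        support.append(x)
        for t in x.types:
            count[t] += 1
    return support
\end{verbatim}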
\vspace{-0.2cm}
\section{Collection of \textsc{Few-NERD}}
\subsection{Schema of Entity Types}
\label{sec:schema}
The primary goal of \textsc{Few-NERD} is to construct a fine-grained dataset that could specifically be used in the few-shot NER scenario.
Hence, schemas of traditional NER datasets such as CoNLL'03, OntoNotes that only contain 4-18 coarse-grained types could not meet the requirements.
The schema of \textsc{Few-NERD} is inspired by \textsc{Figer}~\cite{ling2012fine}, which contains 112 entity tags with good coverage. On this basis, we make some modifications according to the practical situation. It is worth noting that $\textsc{Few-NERD}$ focuses on named entities, omitting value/numerical/time/date entity types~\cite{weischedel2013ontonotes, ringland2019nne} like \texttt{Cardinal, Day, Percent}, etc.
First, we modify the \textsc{Figer} schema into a two-level hierarchy to incorporate simple domain information~\cite{gillick2014context}. The coarse-grained types are \{\texttt{Person}, \texttt{Location}, \texttt{Organization}, \texttt{Art}, \texttt{Building}, \texttt{Product}, \texttt{Event}, \texttt{Miscellaneous} \}.
Then we statistically count the frequency of entity types in the automatically annotated \textsc{Figer}. By removing entity types with low frequency, there are 80 fine-grained types remaining. Finally, to ensure the practicality of the annotation process, we conduct rounds of pre-annotation and make further modifications to the schema. For example, we combine the types \texttt{Country}, \texttt{Province/State}, \texttt{City}, \texttt{District} into a class \texttt{GPE}, since it is difficult to distinguish these types based only on context (especially GPEs at different times). For another example, we create a \texttt{Person-Scholar} type, because in the pre-annotation step we found that there are numerous person entities that express the semantics of research, such as mathematician, physicist, chemist, biologist, paleontologist, but the \textsc{Figer} schema does not define this kind of entity type. We also conduct rounds of manual denoising to select types with truly high frequency.
Consequently, the finalized schema of \textsc{Few-NERD} includes 8 coarse-grained types and 66 fine-grained types, which are shown in detail, accompanied by selected examples, in the Appendix.
\subsection{Paragraph Selection}
\label{sec:para}
The raw corpus we use is the entire Wikipedia dump in English, which has been widely used in constructions of NLP datasets~\cite{han2018fewrel,yang2018hotpotqa, wang2020maven}. Wikipedia contains a large variety of entities and rich contextual information for each entity.
\textsc{Few-NERD} is annotated in paragraph-level, and it is crucial to effectively select paragraphs with sufficient entity information. Moreover, the category distribution of the data is expected to be balanced since the data is applied in a few-shot scenario. It is also a key difference between $\textsc{Few-NERD}$ and previous NER datasets, whose entity distributions are usually considerably uneven. In order to do so, we construct a dictionary for each fine-grained type by automatically collecting entity mentions annotated in $\textsc{Figer}$, then the dictionaries are manually denoised.
We develop a search engine to retrieve paragraphs including entity mentions of the distant dictionary. For each entity, we choose 10 paragraphs and construct a candidate set. Then, for each fine-grained class, we randomly select 1000 paragraphs for manual annotation. Eventually, 66,000 paragraphs are selected, consisting of 66 fine-grained entity types, and each paragraph contains an average of 61.3 tokens.
\subsection{Human Annotation}
\label{sec:human}
As named entities are expected to be context-dependent, the annotation of named entities is complicated, especially with such a large number of entity types. For example, as shown in Table~\ref{tab:case}, in ``\textit{London} is the fifth album by the British rock band Jesus Jones..'', \textit{London} should be annotated as an entity of type \texttt{Art-Music} rather than \texttt{Location-GPE}. Such a situation requires that the annotator has basic linguistic training and can make reasonable judgments based on the context.
\begin{table}[]
\centering
\scalebox{0.9}{
\begin{tabular}{p{7.7cm}} \toprule
\textbf{Paragraph} \\\midrule
\ \ ${\color{Plum}\textit{London}_{\texttt{[Art-Music]}}}$ is the fifth album by the ${\color{BlueViolet}\textit{British}_{\texttt{[Loc-GPE]}}}$ rock band ${\color{PineGreen}\textit{Jesus Jones}_{\texttt{[Org-ShowOrg]}}}$ in 2001 through ${\color{Mahogany}\textit{Koch Records}_{\texttt{[Org-Company]}}}$. Following the commercial failure of 1997's "${\color{Plum}\textit{Already}_{\texttt{[Art-Music]}}}$" which led to the band and ${\color{Mahogany}\textit{EMI}_{\texttt{[Org-Company]}}}$ parting ways,
the band took a hiatus before regathering for the recording of "${\color{Plum}\textit{London}_{\texttt{[Art-Music]}}}$" for Koch/Mi5 Recordings, with a more alternative rock approach as opposed to the techno sounds on their previous albums. The album had low-key promotion, initially only being released in the ${\color{BlueViolet}\textit{United States}_{\texttt{[Loc-GPE]}}}$. Two EP's were released from the album, "${\color{Plum}\textit{Nowhere Slow}_{\texttt{[Art-Music]}}}$" and "${\color{Plum}\textit{In the Face Of All This}_{\texttt{[Art-Music]}}}$".
\\ \bottomrule
\end{tabular}}
\caption{An annotated case of $\textsc{Few-NERD}$}
\label{tab:case}
\vspace{-0.24cm}
\end{table}
Annotators of $\textsc{Few-NERD}$ include 70 annotators and 10 experienced experts. All the annotators have linguistic knowledge and are instructed with detailed and formal annotation principles. Each paragraph is independently annotated by two well-trained annotators. Then, an experienced expert goes over the paragraph for possible wrong or omissive annotations, and make the final decision. With 70 annotators participated, each annotator spends an average of 32 hours during the annotation process. We ensure that all the annotators are fairly compensated by market price according to their workload (the number of examples per hour).
The data is annotated and submitted in batches, and each batch contains 1000$\sim$3000 sentences.
To ensure the quality of $\textsc{Few-NERD}$, for each batch of data we randomly select 10\% of the sentences and conduct double-checking. If the accuracy of the annotation is lower than 95\% (measured at sentence level), the batch is re-annotated. Furthermore, we calculate Cohen's Kappa~\cite{cohen1960coefficient} to measure the agreement between the two annotators; the result is 76.44\%, which indicates a high degree of consistency.
\section{Data Analysis}
\begin{table*}[]
\centering
\scalebox{0.86}{
\begin{tabular}{lrrrrr}
\toprule \textbf{Datasets} & \# \textbf{Sentences} & \# \textbf{Tokens} & \# \textbf{Entities} & \# \textbf{Entity Types} & \textbf{Domain} \\ \midrule
CoNLL'03~\cite{sang2003introduction} & 22.1k & 301.4k & 35.1k & 4 & Newswire \\
WikiGold~\cite{balasuriya2009named} & 1.7k & 39k & 3.6k & 4 & General \\
OntoNotes~\cite{weischedel2013ontonotes}& 103.8k & 2067k & 161.8k & 18 & General \\
WNUT'17~\cite{derczynski2017results} & 4.7k & 86.1k & 3.1k & 6 & SocialMedia \\
I2B2~\cite{stubbs2015annotating} & 107.9k & 805.1k & 28.9k & 23 & Medical \\ \midrule
\textsc{Few-NERD} & \textbf{188.2k} & \textbf{4601.2k} & \textbf{491.7k} & \textbf{66} & General \\ \bottomrule
\end{tabular}}
\caption{Statistics of $\textsc{Few-NERD}$ and multiple widely used NER datasets. For CoNLL'03, WikiGold, and I2B2, we report the statistics in the original paper. For OntoNotes 5.0 (LDC2013T19), we download and count all the data (English) annotated by the NER labels, some works use different split of OntoNotes 5.0 and may report different statistics. For WNUT'17, we download and count all the data.}
\label{table:stat}
\vspace{-0.2cm}
\end{table*}
\subsection{Size and Distribution of $\textsc{Few-NERD}$}
\label{sec:datasize}
\label{sec:dist}
$\textsc{Few-NERD}$ is not only the first few-shot dataset for NER, but also one of the biggest human-annotated NER datasets. We report the statistics of the number of sentences, tokens, entity types and entities of \textsc{Few-NERD} and several widely-used NER datasets in Table~\ref{table:stat}, including CoNLL'03, WikiGold, OntoNotes 5.0, WNUT'17 and I2B2. We observe that although OntoNotes and I2B2 are considered large-scale datasets, $\textsc{Few-NERD}$ is significantly larger than all these datasets. Moreover, $\textsc{Few-NERD}$ contains more entity types and annotated entities.
As introduced in Section~\ref{sec:para}, \textsc{Few-NERD} is designed for few-shot learning, so the distribution should not be severely uneven. Hence, we balance the dataset by selecting paragraphs through a distant dictionary.
The data distribution is illustrated in Figure~\ref{fig:overview}, where $\texttt{Location}$ (especially \texttt{GPE}) and \texttt{Person} are entity types with the most examples.
Although utilizing a distant dictionary to balance the entity types cannot produce a fully balanced data distribution, it still ensures that each fine-grained type has a sufficient number of examples for few-shot learning.
\subsection{Knowledge Correlations among Types}
\label{sec:sim}
Knowledge transfer is crucial for few-shot learning~\cite{li2019large}. To explore the knowledge correlations among all the entity types of $\textsc{Few-NERD}$, we conduct an empirical study about entity type similarities in this section.
We train a BERT-Tagger (details in Section~\ref{sec:models}) on 70\% arbitrarily selected data of $\textsc{Few-NERD}$ and use 10\% of the data to select the model with the best performance (this is in fact the setting of $\textsc{Few-NERD (sup)}$ in Section~\ref{sec:bench-supner}). After obtaining a contextualized encoder, we produce entity mention representations on the remaining 20\% of \textsc{Few-NERD}. Then, for each fine-grained type, we randomly select 100 instances of entity embeddings. We compute the dot products between the entity embeddings of each pair of types and average them to obtain the similarities among types, illustrated in Figure~\ref{fig:heat}. We observe that entity types sharing the same coarse-grained type typically have larger similarities, resulting in an easier knowledge transfer. In contrast, although some fine-grained types across coarse-grained types have large similarities, most of them share little correlation due to distinct contextual features. This result is consistent with intuition. Moreover, it inspires our benchmark settings from the perspective of knowledge transfer (see Section~\ref{sec:bench-fewner}).
\begin{figure}
\centering
\includegraphics[width = 0.96\linewidth]{heat-100-2.pdf}
\caption{A heat map illustrating knowledge correlations among entity types in $\textsc{Few-NERD}$; each small colored square represents the similarity of two entity types.}
\label{fig:heat}
\vspace{-0.3cm}
\end{figure}
\section{Benchmark Settings}
\label{sec:bench}
We collect and manually annotate 188,238 sentences with 66 fine-grained entity types in total, which makes $\textsc{Few-NERD}$ one of the largest human-annotated NER datasets.
To comprehensively exploit such rich information of entities and contexts, as well as evaluate the generalization of models from different perspectives, we construct three tasks based on $\textsc{Few-NERD}$ (Statistics are reported in Table~\ref{tab:stats_bench}).
\subsection{Standard Supervised NER}
\label{sec:bench-supner}
\textbf{\textsc{Few-NERD (sup)}}\quad We first adopt a standard \textit{supervised setting} for NER by randomly splitting 70\% data as the training data, 10\% as the validation data and 20\% as the testing data.
In this setting, the training set, dev set, and test set contain the whole 66 entity types. Although the supervised setting is not the ultimate goal of the construction of \textsc{Few-NERD}, it is still meaningful to assess the instance-level generalization for NER models. As shown in Section 6.2, due to the large number of entity types, \textsc{Few-NERD} is very challenging even in a standard supervised setting.
\subsection{Few-shot NER}
\label{sec:bench-fewner}
The core intuition of few-shot learning is to learn new classes from few examples. Hence, we first split the overall entity set (denoted as $\mathcal{E}$) into three mutually disjoint subsets, respectively denoted as $\mathcal{E}_{\text{train}}, \mathcal{E}_{\text{dev}}, \mathcal{E}_{\text{test}}$, and $\mathcal{E}_{\text{train}} \bigcup \mathcal{E}_{\text{dev}} \bigcup \mathcal{E}_{\text{test}} = \mathcal{E}$, $\mathcal{E}_\text{train} \bigcap \mathcal{E}_{\text{dev}} \bigcap \mathcal{E}_{\text{test}} = \varnothing$. Note that all the entity types are fine-grained types.
Under this circumstance, instances in the train, dev and test datasets only consist of instances with entities in $\mathcal{E}_{\text{train}}, \mathcal{E}_{\text{dev}}, \mathcal{E}_{\text{test}}$, respectively. However, NER is a sequence labeling problem, and a sentence may contain several different entities. To avoid the observation of new entity types in the training phase, we replace the labels of entities that belong to $\mathcal{E}_{\text{test}}$ with \texttt{O} in the training set. Similarly, in the test set, entities that belong to $\mathcal{E}_{\text{train}}$ and $\mathcal{E}_{\text{dev}}$ are also replaced by \texttt{O}. Based on this setting, we develop two few-shot NER tasks adopting different splitting strategies.
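This replacement is a simple filter over token labels, e.g. (a sketch; a sentence is taken to be a list of \texttt{(token, label)} pairs and \texttt{split\_types} the entity set of the current split):
\begin{verbatim}
def mask_labels(sentence, split_types):
    # labels of entities outside the current split become 'O'
    return [(tok, lab if lab == "O" or lab in split_types else "O")
            for tok, lab in sentence]
\end{verbatim}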
\noindent\textbf{\textsc{Few-NERD (intra)}}\quad Firstly, we construct $\mathcal{E}_{\text{train}}$, $\mathcal{E}_{\text{dev}}$ and $\mathcal{E}_{\text{test}}$ according to the coarse-grained types. In other words, all the entities in different sets belong to different coarse-grained types. On the basis of the principle that we should replace as few entities as possible with \texttt{O}, we assign all the fine-grained entity types belonging to \texttt{People, MISC, Art, Product} to $\mathcal{E}_{\text{train}}$, all the fine-grained entity types belonging to \texttt{Event, Building} to $\mathcal{E}_{\text{dev}}$, and all the fine-grained entity types belonging to \texttt{ORG, LOC} to $\mathcal{E}_{\text{test}}$, respectively. Based on Figure~\ref{fig:heat}, in this setting the training set, dev set and test set share little knowledge, making it a difficult benchmark.
\noindent\textbf{\textsc{Few-NERD (inter)}}\quad In this task, although all the fine-grained entity types are mutually disjoint across $\mathcal{E}_{\text{train}}$, $\mathcal{E}_{\text{dev}}$ and $\mathcal{E}_{\text{test}}$, the coarse-grained types are shared. Specifically, we roughly assign 60\% of the fine-grained types of all the 8 coarse-grained types to $\mathcal{E}_{\text{train}}$, 20\% to $\mathcal{E}_{\text{dev}}$ and 20\% to $\mathcal{E}_{\text{test}}$, respectively. The intuition of this setting is to explore whether the coarse-grained information affects the prediction of new entities.
\begin{table}[]
\centering
\scalebox{0.88}{
\begin{tabular}{lccc}
\toprule
\textbf{Split} & \textbf{\#Train} & \textbf{\#Dev} & \textbf{\#Test} \\
\midrule
\textsc{Few-NERD (sup)} & 131,767 & 18,824 & 37,648\\
\textsc{Few-NERD (intra)} & 99,519 & 19,358 & 44,059\\
\textsc{Few-NERD (inter)} & 130,112 & 18,817 & 14,007\\ \bottomrule
\end{tabular}}
\caption{Statistics of the train, dev and test sets for the three tasks of $\textsc{Few-NERD}$. We remove the sentences with no entities for the few-shot benchmarks.}
\label{tab:stats_bench}
\vspace{-0.3cm}
\end{table}
\section{Experiments}
\label{sec:exp}
\subsection{Models}
\label{sec:models}
Recent studies show that pre-trained language models with deep transformers (e.g., BERT~\cite{devlin2018bert}) have become a strong encoder for NER~\cite{li2020unified}. We thus follow the empirical settings and use BERT as the backbone encoder in our experiments. We denote the parameters as $\theta$ and the encoder as $f_\theta$. Given a sequence $\bm{x} = \{x_1,...,x_n\}$,
for each token $x_i$, the encoder produces contextualized representations as:
\begin{equation}
\bm{h} = [\bm{h}_1,...,\bm{h}_n] = f_\theta([x_1,...,x_n]).
\end{equation}
Specifically, we implement four BERT-based models for supervised and few-shot NER, which are BERT-Tagger~\cite{devlin-etal-2019-bert}, ProtoBERT~\cite{snell2017prototypical}, NNShot~\cite{yang2020simple} and StructShot~\cite{yang2020simple}.
\noindent \textbf{BERT-Tagger} \quad As stated in Section~\ref{sec:bench-supner}, we construct a standard supervised task based on $\textsc{Few-NERD}$, thus we implement a simple but strong baseline BERT-Tagger for supervised NER. BERT-Tagger is built by adding a linear classifier on top of BERT and trained with a cross-entropy objective under a full supervision setting.
\noindent \textbf{ProtoBERT}\quad Inspired by the achievements of meta-learning approaches~\cite{finn2017model, snell2017prototypical,ding2021prototypical} on few-shot learning, the first baseline model we implement is ProtoBERT, a method based on the prototypical network~\cite{snell2017prototypical} with a backbone BERT~\cite{devlin2018bert} encoder. This approach derives a prototype $\bm{z}$ for each entity type by averaging the embeddings of the tokens that share that entity type. The computation is conducted in the support set $\mathcal{S}$. For the $i$-th type, the prototype is denoted as $\bm{z}_i$ and the corresponding support set as $\mathcal{S}_i$,
\begin{equation}
\bm{z}_i = \frac{1}{|\mathcal{S}_i|} \sum_{{x} \in \mathcal{S}_i} f_\theta(x).
\end{equation}
In the query set $\mathcal{Q}$, for each token $x \in \mathcal{Q}$, we first compute the distance between $x$ and all the prototypes. We use the squared $\ell_2$ distance as the metric function, $d(f_\theta(x), \bm{z}) = ||f_\theta(x) - \bm{z}||^2_2$. Then, from the distances between $x$ and all the prototypes, we compute the prediction probability of $x$ over all types. In the training step, parameters are updated in each meta-task. In the testing step, the prediction is the label of the prototype nearest to $x$. That is, for a support set $\mathcal{S}_\mathcal{Y}$ with types $\mathcal{Y}$ and a query $x$, the prediction process is given as
\begin{equation}
\begin{split}
y^* &= \text{arg} \min_{y \in \mathcal{Y}} d_y(x), \\
d_y(x) &= d(f_\theta(x), \bm{z}_y).
\end{split}
\end{equation}
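In code, prototype construction and nearest-prototype prediction reduce to a few tensor operations (a PyTorch sketch under our notation, not the exact implementation; \texttt{sup\_emb}/\texttt{query\_emb} are token embeddings $f_\theta(x)$ and \texttt{sup\_labels} their type indices):
\begin{verbatim}
import torch

def proto_predict(sup_emb, sup_labels, query_emb, num_types):
    # prototype: mean support embedding per entity type
    protos = torch.stack([sup_emb[sup_labels == i].mean(0)
                          for i in range(num_types)])
    # squared l2 distance of every query token to every prototype
    dist = torch.cdist(query_emb, protos, p=2) ** 2
    return dist.argmin(dim=-1)  # label of the nearest prototype
\end{verbatim}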
\noindent \textbf{NNShot \& StructShot} \quad NNShot and StructShot~\cite{yang2020simple} are state-of-the-art methods based on token-level nearest neighbor classification. In our experiments, we use BERT as the backbone encoder to produce contextualized representations for a fair comparison. Different from the prototype-based method, NNShot determines the tag of a query token based on the token-level distance, computed as $d(f_\theta(x), f_\theta(x')) = ||f_\theta(x) - f_\theta(x')||^2_2$. Hence, for a support set $\mathcal{S}_\mathcal{Y}$ with types $\mathcal{Y}$ and a query $x$,
\begin{equation}
\begin{split}
y^* &= \text{arg} \min_{y \in \mathcal{Y}} d_y(x), \\
d_y(x) &= \min_{x' \in \mathcal{S}_y} d(f_\theta(x), f_\theta(x')).
\end{split}
\end{equation}
With the same basic structure as NNShot, StructShot adopts an additional Viterbi decoder during the inference phase~\cite{hou2020few} (not in the training phase), where we estimate a transition distribution $p(y'|y)$ and an emission distribution $p(y|x)$ and solve the problem:
\begin{equation}
\bm{y}^* = \text{arg} \max_{\bm{y}} \prod^T_{t=1} p(y_t|x) \times p(y_t|y_{t-1}).
\end{equation}
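The decoding step can be sketched as a standard Viterbi pass (our simplified rendering; \texttt{emission[t, y]} stands for $p(y_t = y \mid x)$ derived from the nearest-neighbor scores and \texttt{trans[i, j]} for the transition probability estimated on the training data):
\begin{verbatim}
import numpy as np

def viterbi(emission, trans):
    T, Y = emission.shape
    score, back = np.log(emission[0]), []
    for t in range(1, T):
        total = (score[:, None] + np.log(trans)
                 + np.log(emission[t])[None, :])
        back.append(total.argmax(axis=0))  # best predecessor per tag
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
\end{verbatim}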
To sum up, BERT-Tagger is a well-acknowledged baseline that produces strong results on supervised NER, while ProtoBERT uses prototype-level and NNShot \& StructShot use token-level similarity scores to tackle the few-shot NER problem. These baselines are strong and representative models for the NER task. For implementation details, please refer to the Appendix.
We evaluate models by considering query sets $\mathcal{Q}_{\text{test}}$ of test episodes.
We calculate the precision (P), recall (R) and micro F1-score over all test episodes.
Instead of the popular BIO schema, we utilize the IO schema in our experiments, using $\texttt{I-type}$ to denote all the tokens of a named entity and $\texttt{O}$ to denote other tokens.
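Under the IO schema an entity is a maximal run of tokens with the same \texttt{I-type} tag, so evaluation reduces to comparing sets of typed spans (a sketch; in practice the true/predicted spans are accumulated over all test episodes before computing the micro scores):
\begin{verbatim}
def io_spans(tags):
    # (start, end, type) spans from a list of IO tags, end exclusive
    spans, start = [], None
    for i, tag in enumerate(tags + ["O"]):
        if start is not None and tag != tags[start]:
            spans.append((start, i, tags[start]))
            start = None
        if tag != "O" and start is None:
            start = i
    return set(spans)

def f1(gold_tags, pred_tags):
    g, p = io_spans(gold_tags), io_spans(pred_tags)
    tp = len(g & p)
    prec, rec = tp / max(len(p), 1), tp / max(len(g), 1)
    return 2 * prec * rec / max(prec + rec, 1e-12)
\end{verbatim}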
\begin{figure*}[ht]
\centering
\includegraphics[width = 1\linewidth]{type.pdf}
\caption{F1-scores of different entity types on \textsc{Few-NERD (sup)}; we report the average performance of each coarse-grained entity type in the legends.}
\label{fig:types}
\end{figure*}
\begin{table}[]
\centering
\scalebox{0.86}{
\begin{tabular}{lccc}
\toprule
\textbf{Datasets} & \textbf{P} & \textbf{R} & \textbf{F1} \\
\cmidrule(r){1-1} \cmidrule(r){2-4}
CoNLL'03 &90.62& 92.07& 91.34\\
OntoNotes 5.0 & 90.00 & 88.24 & 89.11\\
\cmidrule(r){1-1} \cmidrule(r){2-4}
\textsc{{Few-NERD (sup)}} & 65.56 ($\color{red}{\downarrow}$) & 68.78 ($\color{red}{\downarrow}$) & 67.13 ($\color{red}{\downarrow}$) \\ \bottomrule
\end{tabular}}
\caption{Results of BERT-Tagger on previous NER datasets and the supervised setting of $\textsc{Few-NERD}$.}
\vspace{-0.3cm}
\label{tab:sup}
\end{table}
\begin{table*}[h]
\centering
\scalebox{0.57}{
\begin{tabular}{lcccccccccccc} \toprule
\multirow{3}{*}{\textbf{\large{Model}}} & \multicolumn{12}{c}{\textbf{\large{\textsc{Few-NERD(intra)}}}} \\ \cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10} \cmidrule(r){11-13}
& \multicolumn{3}{c}{\large{\textbf{5 way 1$\sim$2 shot}}} & \multicolumn{3}{c}{\large{\textbf{5 way 5$\sim$10 shot}}} & \multicolumn{3}{c}{\large{\textbf{10 way 1$\sim$2 shot}}}& \multicolumn{3}{c}{\large{\textbf{10 way 5$\sim$10 shot}}} \\ \cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10} \cmidrule(r){11-13}
& \large{P} & \large{R} & \large{F1} & {P} & \large{R} & \large{F1} & {P} & \large{R} & \large{F1} & \large{P} & \large{R} & \large{F1} \\ \cmidrule(r){1-1} \cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10} \cmidrule(r){11-13}
Proto & 15.97±0.61
& 29.66±1.39 & 20.76±0.84 & 36.34±1.33 & \textbf{51.32±0.45} & \textbf{42.54±0.94} & 11.33±0.57 & \textbf{22.47±0.49} & 15.05±0.44 & 29.39±0.27 & \textbf{44.51±1.00} & \textbf{35.40±0.13} \\
NNShot & 24.15±0.35 & 27.65±1.63 & 25.78±0.91 & 32.91±0.62 & 40.19±1.22 & 36.18±0.79 & 16.25±0.22 & 20.90±1.38 & 18.27±0.41 & 24.86±0.30 & 30.49±0.96 & 27.38±0.53 \\
Struct & \textbf{32.99±0.76} & \textbf{27.85±0.98} & \textbf{30.21±0.90} & \textbf{46.78±1.00} & 32.06±2.17 & 38.00±1.29 & \textbf{26.05±0.53} & 17.65±1.34 & \textbf{21.03±1.13} & \textbf{40.88±0.83} & 19.52±0.49 & 26.42±0.60 \\ \bottomrule
\end{tabular}}
\caption{Performance of state-of-art models on \textsc{Few-NERD (intra)}.}
\label{tab:intra}
\end{table*}
\begin{table*}[!h]
\centering
\scalebox{0.57}{
\begin{tabular}{lcccccccccccc} \toprule
\multirow{3}{*}{\textbf{\large{Model}}} & \multicolumn{12}{c}{\textbf{\textsc{\large{Few-NERD(inter)}}}} \\ \cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10} \cmidrule(r){11-13}
& \multicolumn{3}{c}{\large{\textbf{5 way 1$\sim$2 shot}}} & \multicolumn{3}{c}{\large{\textbf{5 way 5$\sim$10 shot}}} & \multicolumn{3}{c}{\large{\textbf{10 way 1$\sim$2 shot}}}& \multicolumn{3}{c}{\large{\textbf{10 way 5$\sim$10 shot}}} \\ \cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10} \cmidrule(r){11-13}
& \large{P} & \large{R} & \large{F1} & \large{P} & \large{R} & \large{F1} & \large{P} & \large{R} & \large{F1} & \large{P} & \large{R} & \large{F1} \\ \cmidrule(r){1-1} \cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10} \cmidrule(r){11-13}
Proto & 32.04±1.75 & {49.30±0.68} & 38.83±1.49 & {52.54±1.32} & \textbf{66.76±1.01} & \textbf{58.79±0.44} & 26.02±1.32 & {43.17±0.92} & 32.45±0.79 & 46.38±0.42 & \textbf{61.60±0.36} & \textbf{52.92±0.37} \\
NNShot & 42.57±1.27 & \textbf{53.09±0.54} & 47.24±1.00 & 51.03±0.63 & 61.15±0.63 & 55.64±0.63 & 34.36±0.24 & \textbf{44.76±0.33} & 38.87±0.21 & 44.96±2.69 & 55.25±2.77 & 49.57±2.73 \\
Struct & \textbf{53.89±0.78} & 50.02±0.62 & \textbf{51.88±0.69} & \textbf{62.12±0.41} & 53.21±0.91 & 57.32±0.63 & \textbf{47.07±0.15} & 40.16±0.12 & \textbf{43.34±0.10} & \textbf{57.61±1.87} & 43.54±3.70 & 49.57±3.08 \\ \bottomrule
\end{tabular}}
\caption{Performance of state-of-art models on \textsc{Few-NERD (inter)}.}
\label{tab:inter}
\end{table*}
\subsection{The Overall Results}
We evaluate all baseline models on the three benchmark settings introduced in Section~\ref{sec:bench}, including \textsc{Few-NERD (sup)}, \textsc{Few-NERD (intra)} and \textsc{Few-NERD (inter)}.
\noindent \textbf{Supervised NER}\quad As mentioned in Section~\ref{sec:bench-supner}, we first split $\textsc{Few-NERD}$ as a standard supervised NER dataset. As shown in Table~\ref{tab:sup}, BERT-Tagger yields promising results on the two widely used supervised datasets, with F1-scores of 91.34\% and 89.11\%, respectively. However, the model suffers a severe performance drop on \textsc{Few-NERD (sup)} because the number of entity types of \textsc{Few-NERD (sup)} is much larger than in the other datasets. The results indicate that $\textsc{Few-NERD}$ is challenging even in the supervised setting and worth studying.
We further analyze the performance on different entity types (see Figure~\ref{fig:types}). We find that the model achieves the best performance on the \texttt{Person} type and the worst performance on the \texttt{Product} type. For almost all the coarse-grained types, the \texttt{Coarse-Other} type has the lowest F1-score, because the semantics of such fine-grained types are relatively sparse and difficult to recognize. A natural intuition is that the performance on each entity type is related to the proportion of that type, but surprisingly, we find that they are not linearly correlated. For example, the model performs very well on the \texttt{Art} type, although this type represents only a small fraction of \textsc{Few-NERD}.
\noindent \textbf{Few-shot NER} \quad For the few-shot benchmarks, we adopt 4 sampling settings, which are $5$ way $1$$\sim$$2$ shot, $5$ way $5$$\sim$$10$ shot, $10$ way $1$$\sim$$2$ shot, and $10$ way $5$$\sim$$10$ shot. Intuitively, $10$ way $1$$\sim$$2$ shot is the hardest setting because it has the largest number of entity types and the fewest number of examples, and similarly, $5$ way $5$$\sim$$10$ shot is the easiest setting.
All results on $\textsc{Few-NERD (intra)}$ and $\textsc{Few-NERD (inter)}$ are reported in Table~\ref{tab:intra} and Table~\ref{tab:inter}, respectively. Overall, we observe that previous state-of-the-art methods equipped with a BERT encoder do not yield promising results on $\textsc{Few-NERD}$. At a high level, models generally perform better on $\textsc{Few-NERD (inter)}$ than on $\textsc{Few-NERD (intra)}$; the latter is the more difficult task since, as we analyze in Section~\ref{sec:sim} and Section~\ref{sec:bench}, it splits the data according to the coarse-grained entity types, so that entity types in the training set and test set share less knowledge.
In a horizontal comparison, consistent with intuition, almost all the methods produce the worst results on $10$ way $1$$\sim$$2$ shot and achieve the best performance on $5$ way $5$$\sim$$10$ shot.
In the comparison across models, ProtoBERT generally achieves better performance than NNShot and StructShot, especially in the $5$$\sim$$10$ shot settings, where prototypes are averaged over more examples and thus differ more from individual token embeddings. StructShot shows a large improvement in precision on $\textsc{Few-NERD (intra)}$, which indicates that the Viterbi decoder at the inference stage can help remove false positive predictions when knowledge transfer is hard. It is also observed that NNShot and StructShot may suffer from the instability of the nearest neighbor mechanism in the training phase, while prototypical models are more stable because the calculation of prototypes essentially serves as regularization.
\subsection{Error Analysis}
\vspace{-0.1cm}
We conduct an error analysis to explore the challenges of $\textsc{Few-NERD}$; the results are reported in Table~\ref{tab:error}. We choose the setting of $\textsc{Few-NERD (inter)}$ because its test set contains all the coarse-grained types. We analyze the errors of the models from two perspectives. \textit{Span Error} denotes misclassification at the token level. If an \texttt{O} token is misclassified as part of an entity, i.e., \texttt{I-type}, it is an FP case, and if a token with the type \texttt{I-type} is misclassified as \texttt{O}, it is FN. \textit{Type Error} indicates the misclassification of entity types when the spans are correctly classified. A ``Within'' error indicates that the entity is misclassified as another type within the same coarse-grained type, while ``Outer'' indicates that the entity is misclassified as a type of a different coarse-grained type. As the statistics of type errors may be impacted by the sampled episodes in testing, we conduct 5 rounds of experiments and report the average results.
The results demonstrate that the token-level accuracy is not that low, since most \texttt{O} tokens can be detected. But an entity mention is considered wrong if any of its tokens is wrong, which becomes the main reason for the challenge of $\textsc{Few-NERD}$. If an entity span can be accurately detected, the models yield relatively good performance on entity typing, indicating the effectiveness of metric learning.
\begin{table}[]
\centering
\scalebox{0.88}{
\begin{tabular}{lllll} \toprule
\multirow{2}{*}{\textbf{Models}} & \multicolumn{2}{c}{\textbf{Span Error}} & \multicolumn{2}{c}{\textbf{Type Error}} \\ \cmidrule(r){2-3} \cmidrule(r){4-5}
& \multicolumn{1}{c}{\textbf{FP}} & \multicolumn{1}{c}{\textbf{FN}} & \multicolumn{1}{c}{\textbf{Within}} & \multicolumn{1}{c}{\textbf{Outer}} \\ \cmidrule(r){1-1} \cmidrule(r){2-3} \cmidrule(r){4-5}
ProtoNet & 4.29\% & 2.17\% & 3.87\% & 5.35\% \\
NNShot & 3.87\% & 3.67\% & 3.86\% & 6.90\% \\
StructShot & 2.84\% & 4.45\% & 3.94\% & 5.56\% \\ \bottomrule
\end{tabular}}
\caption{Error analysis of 5 way 5$\sim$10 shot on $\textsc{Few-NERD (inter)}$, ``Within'' indicates ``within the coarse types'' and ``Outer'' is ``outer the coarse types''.}
\vspace{-0.2cm}
\label{tab:error}
\end{table}
\section{Conclusion and Future Work}
We propose $\textsc{Few-NERD}$, a large-scale few-shot NER dataset with fine-grained entity types. This is the first few-shot NER dataset and also one of the largest human-annotated NER datasets. $\textsc{Few-NERD}$ provides three unified benchmarks to assess approaches to few-shot NER and could facilitate future research in this area.
By implementing state-of-the-art methods, we carry out
a series of experiments on $\textsc{Few-NERD}$, demonstrating that few-shot NER remains a challenging problem and worth exploring.
In the future, we will extend $\textsc{Few-NERD}$ by adding cross-domain annotations, distant annotations, and finer-grained entity types. \textsc{Few-NERD} also has the potential to advance the construction of continual knowledge graphs.
\section*{Acknowledgements}
This research is supported by the National Natural Science Foundation of China (Grant No. 61773229 and 6201101015), the National Key Research and Development Program of China (No. 2020AAA0106501), the Alibaba Innovation Research (AIR) programme, the General Research Project (Grant No. JCYJ20190813165003837, No. JCYJ20190808182805919), and the Overseas Cooperation Research Fund of the Graduate School at Tsinghua University (Grant No. HW2018002). Finally, we thank Ronny, Xiaozhi and Ziyu for their valuable help and the anonymous reviewers for their comments.
\section*{Ethical Considerations}
In this paper, we present a human-annotated dataset, $\textsc{Few-NERD}$, for few-shot learning in NER. We describe the details of the collection process and conditions, the compensation of annotators, and the measures taken to ensure quality in the main text. The corpus of the dataset is publicly obtained from Wikipedia and we have not modified or interfered with the content.
$\textsc{Few-NERD}$ is likely to directly facilitate the research of few-shot NER and to further advance the construction of large-scale knowledge graphs (KGs). Models and systems built on $\textsc{Few-NERD}$ may contribute to constructing KGs in various domains, including the biomedical, financial, and legal fields, and further promote the development of NLP applications in specific domains.
$\textsc{Few-NERD}$ is annotated in English, thus the dataset may mainly facilitate NLP research in English. For the sake of energy saving, we will not only open source the dataset and the code, but also release the checkpoints of our models from the experiments to reduce unnecessary carbon emission.
\bibliographystyle{acl_natbib}
\section*{Introduction}
The theory of Lie groupoids can be viewed as a blend of geometry and Lie theory, and plays an
important role in several branches of mathematics. Lie groupoids can be viewed as the global
integrations of geometrically defined Lie brackets and as such posses a natural deformation theory.
In \cite{cms}, the authors wrote down an explicit cochain complex controlling such deformations.
As expected, this ``deformation cohomology'' is isomorphic to the cohomology of the adjoint representation (up to homotopy as in \cite{ac}), but the point of \cite{cms} is that the latter involves the choice of a connection, whereas the deformation complex is intrinsically defined. Any deformation
of Lie groupoids defines a class in this deformation cohomology.
Lie groupoids also play an important role in noncommutative geometry where they give prime examples in the form of their associated convolution algebras. As is well-known, the deformation theory of an algebra is controlled by its Hochschild cohomology. Since a deformation of the underlying Lie groupoid induces a deformation of its convolution algebra, this strongly suggests a
relationship between Hochschild cohomology and the adjoint representation.
The aim of the present article is to shed light on this issue by exhibiting an explicit morphism of
cochain complexes between the deformation complex of a Lie groupoid and the Hochschild complex
of its convolution algebra which relates the deformation classes associated to a deformation of the
underlying Lie groupoid. We expect this morphism to be part of a larger picture computing the
Hochschild cohomology in terms of higher powers of the adjoint representation.
A classical theme in the theory of Lie groupoids is its relation with the infinitesimal theory of Lie
algebroids. For the deformation complex this is highlighted by the ``van Est'' morphism to the
deformation complex of the Lie algebroid of the Lie groupoid, a complex first considered in \cite{cm}.
From the point of view of noncommutative geometry, the relation with the infinitesimal theory follows
from ``quantization and the classical limit'': we show that our cochain morphism can be extended to the adiabatic groupoid
of \cite{connes} interpolating between a Lie groupoid and its Lie algebroid. The van Est-map is
then obtained by constructing a quantization map on the dual of the Lie algebroid using an exponential map on the Lie groupoid. The picture is completed by the relationship between
the deformation cohomology of a Lie algebroid and the Poisson cohomology of its dual.
This article is organized as follows: in \cref{prelim} we recall the basic set-up of Lie groupoids and their convolution algebras. In \cref{chainmap} we construct the morphism between the deformation complex of a Lie groupoid and the Hochschild complex of its convolution algebra, and explore some of its properties. Finally, \cref{quant} is devoted to obtaining the van Est map using the adiabatic groupoid together with a quantization map.
\section{Preliminaries}\label{prelim}
\subsection{Densities along the fibers of a submersion}
\label{das}
Let $V$ be an $n$-dimensional vector space. A \textit{density} on $V$ is a map $a:\Lambda^n V\to\mathbb{R}$, where we write $a(v_1,...,v_n)$ for $a(v_1\wedge\cdots\wedge v_n)$, such that for every invertible map $A\in\text{GL}(n,\mathbb{R})$ it holds that $a(Av_1,...,Av_n)=|\det(A)|\,a(v_1,...,v_n)$.
More generally, from a vector bundle $E\to M$, one constructs a bundle of densities $\mathcal{D}_E$. Then if one has a vector bundle isomorphism $\Psi: E\to E$ covered by a diffeomorphism $\Phi: M\to M$, one obtains an action on the sections of $\mathcal{D}_E$, defined by
\begin{equation*}
(\Psi^\ast a)_x(v_1,...,v_n)=a_{\Phi(x)}(\Psi v_1,...,\Psi v_n)
\end{equation*}
The case where $E=TM$ is of particular interest because the integral $\int_Ma$ is
canonically defined for a compactly supported density of $TM$. In this case one obtains an action of a vector field $X\in\mathfrak{X}(M)$ on the densities on $TM$, namely:
\begin{equation}
\label{apvf}
Xa=\left.\frac{d}{dt}\right|_{t=0} (\Phi^t_X)^\ast a,
\end{equation}
where $\Phi^t_X$ denotes the flow of $X$.
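To illustrate this formula in the simplest case: for $M=\mathbb{R}$, $X=v(x)\frac{\partial}{\partial x}$ and $a=\rho\,|dx|$, differentiating the flow gives
\begin{equation*}
Xa=\left(v\rho'+v'\rho\right)|dx|=(v\rho)'\,|dx|,
\end{equation*}
so that, in contrast with the action of $X$ on functions, the action on densities picks up the divergence term $v'\rho$.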
We will be mostly interested in densities along the fibers of a submersion. For this, let $f: M\to N$ be a submersion, and denote by $\mathcal{D}_f$ the bundle of densities of the vector bundle $\ker df$. In this case the fiber integral
\[
\int_f:\Gamma_c(M,\mathcal{D}_f)\to C^\infty_c(N)
\]
is canonically defined. A vector field $X$ acts on sections of $\mathcal{D}_f$ provided that the flow preserves the fibers of $f$. This is equivalent to there being a vector field $Y\in\mathfrak{X}(N)$ such that $df\circ X=Y\circ f$. In this case $X$ is called \textit{$f$-projectable}, and since $\Phi^t_X\circ f=f\circ\Phi^t_Y$ the flow of $X$ preserves the fibers of $f$ and in turn acts on $\ker df$, and we obtain an action of $X$ on $\Gamma(\mathcal{D}_f)$ by formula \eqref{apvf}.
We denote by $\text{Diff}_f(M)$ the diffeomorphisms of $M$ that preserve the fibers of $f$, and by $\mathfrak{X}_f(M)$ the $f$-projectable vector fields of $M$.
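As a simple illustration, let $f:\mathbb{R}^2\to\mathbb{R}$ be the projection $f(x,y)=y$. A vector field $X=a(x,y)\frac{\partial}{\partial x}+b(x,y)\frac{\partial}{\partial y}$ is $f$-projectable precisely when $b$ depends on $y$ only, and its action on a density $\rho(x,y)\,|dx|\in\Gamma(\mathcal{D}_f)$ comes out as
\begin{equation*}
X\left(\rho\,|dx|\right)=\left(a\frac{\partial \rho}{\partial x}+b\frac{\partial \rho}{\partial y}+\rho\frac{\partial a}{\partial x}\right)|dx|,
\end{equation*}
the last term being the divergence of $X$ along the fibers, as in the example above.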
In the following we shall consider $f$-projectable vector fields defined only along a single fiber of $f$ and let them act on densities to obtain a density on that one fiber. This is similar to the fact that the directional derivative $X(h)(p)$ of a function $h$ along a vector field $X$ at a point $p$ only depends on $X(p)$.
\begin{lem}
\label{nl}
Let $a\in\Gamma(\mathcal{D}_f)$, let $y\in N$, and let $X\in\mathfrak{X}_f(M)$ be an $f$-projectable vector field. If $X$ vanishes along $f^{-1}(y)$, then $(Xa)_x=0$ for all $x\in f^{-1}(y)$.
\end{lem}
\begin{proof}
If $X$ vanishes along $f^{-1}(y)$ we have $\Phi^t_X(x)=x$ for all $t$ and all $x\in f^{-1}(y)$. In particular $d(\Phi^t_X)_x(v)=v$ for all $v\in\text{ker}(df)\subset T_xM$. This means that $((\Phi^t_X)^\ast a)_x=a_x$ and hence $(Xa)_x=0$.
\end{proof}
\begin{rmk}
The previous Lemma allows us to define $(Xa)_x$ for $x\in f^{-1}(y)$, $a\in\Gamma(\mathcal{D}_f)$ and $X\in\mathfrak{X}_f(M)|_{f^{-1}(y)}$. Indeed, we can choose $Y\in\mathfrak{X}_f(M)$ to be an extension of $X$ to a global vector field and define $(Xa)_x=(Ya)_x$. The previous Lemma is then used to show that this definition is independent of the choice of $Y$.
\end{rmk}
\subsection{The convolution algebra of a Lie groupoid}
Let $\mathcal{G}\rightrightarrows M$ be a Lie groupoid. For an introduction to the theory of Lie groupoids we refer to \cite{mm}. Here we denote source and target maps by $s,t:\mathcal{G}\to M$ and will think of arrows $g\in\mathcal{G}$ as pointing from right to left, so that the product $g_1g_2$ is defined whenever $s(g_1)=t(g_2)$. The Lie algebroid $A(\mathcal{G})$ is defined as $A(\mathcal{G})=\ker(ds)|_M$. We will concern ourselves with the convolution algebra of $\mathcal{G}$. To define the convolution product, we need entities which can be integrated, and this is where densities come into play. To this end we look at densities along the source-fibers, where we note that there is a canonical isomorphism between $\ker ds$ and $t^\ast A(\mathcal{G})$ using right translations. In this way we can define the convolution product for two compactly supported densities $a_1, a_2\in\Gamma_c(\mathcal{D}_s)$ by
\begin{equation*}
(a_1\ast a_2)_g(v_1,...,v_n)=\int_{h\in s^{-1}(s(g))}(a_1)_{gh^{-1}}(v_1,...,v_n)(a_2)_h
\end{equation*}
In this notation $v_1,...,v_n\in A_{t(g)}=A_{t(gh^{-1})}$, so that the product in the integrand yields a well-defined compactly supported density along $s^{-1}(s(g))$ that can be integrated. Colloquially this product will be written as:
\begin{equation*}
(a_1\ast a_2)(g)=\int_{g_1g_2=g}a_1(g_1)a_2(g_2)=\int_{h\in s^{-1}(s(g))}a_1(gh^{-1})a_2(h)
\end{equation*}
We define the convolution algebra $\mathcal{A}_\mathcal{G}$ of $\mathcal{G}$ to be $\mathcal{A}_\mathcal{G}=(\Gamma_c(\mathcal{D}_s),\ast)$. This definition of the convolution algebra differs slightly from the more usual one in e.g.\ \cite{connes} using $1/2$-densities along source {\em and} target fibers, but yields an isomorphic algebra.
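\begin{ex}
For the pair groupoid $M\times M\rightrightarrows M$, with $s$ and $t$ the second and first projection, trivializing $\mathcal{D}_s$ by the choice of a positive density $\mu$ on $M$ identifies $\mathcal{A}_\mathcal{G}$ with the space of compactly supported smooth kernels $k(x,y)$, and the convolution product becomes the composition of integral kernels
\begin{equation*}
(k_1\ast k_2)(x,y)=\int_{z\in M}k_1(x,z)k_2(z,y)\,\mu(z),
\end{equation*}
a continuous analogue of matrix multiplication.
\end{ex}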
\subsection{The deformation complex of a Lie groupoid}
Let $\mathcal{G}\rightrightarrows M$ be a Lie groupoid and write $\overline{m}$ for the map $\overline{m}(g,h)=gh^{-1}$. Note that $\overline{m}$ has as domain $\mathcal{G}\hspace*{1mm}^s\!\times^s\mathcal{G}:=\{(g_1,g_2)\in\mathcal{G}\times\mathcal{G},~s(g_1)=s(g_2)\}$. Furthermore we write $\mathcal{G}^{(k)}$ for the $k$'th nerve of $\mathcal{G}$:
\begin{equation*}
\mathcal{G}^{(k)}=\{(g_1,...,g_k)\in\mathcal{G}^k|s(g_i)=t(g_{i+1})\}
\end{equation*}
In \cite{cms} the deformation complex is defined as follows:
\begin{defi}
For $k\geq 1$ define
$C^k_\text{def}(\mathcal{G})$ to be the set of smooth maps $c:\mathcal{G}^{(k)}\to T\mathcal{G}$ such that $c(g_1,...,g_k)\in T_{g_1}\mathcal{G}$ and such that there is a section $s_c$ of the vector bundle $t^*TM$ over $\mathcal{G}^{(k-1)}$ such that
\[
ds(c(g_1,...,g_k))=s_c(g_2,...,g_k).
\]
The differential $\delta: C^k_\text{def}(\mathcal{G})\to C^{k+1}_\text{def}(\mathcal{G})$ is defined by setting:
\begin{align*}
(\delta c)(g_1,...,g_{k+1})=&-d\overline{m}(c(g_1g_2,g_3,...,g_{k+1}),c(g_2,...,g_{k+1}))\\
&+\sum_{i=2}^k(-1)^ic(g_1,...,g_ig_{i+1},...,g_{k+1})
+(-1)^{k+1}c(g_1,...,g_k).
\end{align*}
The {\em deformation complex} is defined by the graded vector space $C^\bullet_\text{def}(\mathcal{G}):=\bigoplus_{k\geq 1} C^k_\text{def}(\mathcal{G})$ equipped with the differential $\delta$, its cohomology is denoted $H^\bullet_\text{def}(\mathcal{G})$.
\end{defi}
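To illustrate the formula: in degree one a cochain is simply an $s$-projectable vector field $X\in\mathfrak{X}_s(\mathcal{G})$, and the differential reduces to
\begin{equation*}
(\delta X)(g,h)=-d\overline{m}\big(X(gh),X(h)\big)+X(g),
\end{equation*}
so that $\delta X=0$ is precisely the multiplicativity equation $dm_{(g,h)}(X(g),X(h))=X(gh)$ of \cref{mv} below.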
\begin{rmk}
\label{deg0}
It is possible, as in \cite{cms}, to extend the deformation complex in degree zero by putting $C^0_\text{def}(\mathcal{G})=\Gamma(M,A(\mathcal{G}))$
with differential defined for $\alpha\in\Gamma(M,A(\mathcal{G}))$ by
\begin{equation*}
(\delta \alpha)(g)=(dr_g)(\alpha(t(g)))+(d(l_g\circ\iota))(\alpha(s(g)))
\end{equation*}
We exclude these elements in degree $0$ because, as we will see, these elements cannot correspond to Hochschild $0$-cochains.
\end{rmk}
\begin{rmk}
\label{mv}
It follows from the definition above that the closed elements in degree $1$ are exactly the multiplicative vector fields, c.f.\ \cite[\S 4.3]{cms}. These are vector fields $X\in\mathfrak{X}(\mathcal{G})$ that are $s$- and $t$-projectable to the same image in $\mathfrak{X}(M)$, satisfying the following equation:
\begin{equation*}
dm_{(g,h)}(X(g),X(h))=X(gh)
\end{equation*}
\end{rmk}
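For example, on the pair groupoid $M\times M\rightrightarrows M$ every vector field $V\in\mathfrak{X}(M)$ defines a multiplicative vector field $X(x,y)=(V(x),V(y))$: indeed,
\begin{equation*}
dm_{((x,y),(y,z))}\big((V(x),V(y)),(V(y),V(z))\big)=(V(x),V(z))=X(x,z).
\end{equation*}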
For certain purposes, most importantly applying the van Est map, it is often necessary to impose stricter relations on elements $c\in C^k_\text{def}(\mathcal{G})$ and their symbols $s_c$. To this end we also introduce the normalized deformation complex:
\begin{defi}
The \textit{normalized deformation complex} is the subcomplex $\hat{C}^\bullet_\text{def}(\mathcal{G})$ of $C^\bullet_\text{def}(\mathcal{G})$ consisting of those elements $c\in C^k_\text{def}(\mathcal{G})$ which satisfy
\begin{equation*}
c(1_x,g_2,...,g_k)=du(s_c(g_2,...,g_k))
\end{equation*}
and
\begin{equation*}
s_c(g_2,...,1_x,...,g_k)=0
\end{equation*}
where the unit is put in any of the $k-1$ slots.
\end{defi}
It is shown in \cite[Prop 11.8]{cms} that the inclusion of the normalized deformation complex into the whole deformation complex is a quasi-isomorphism.
\section{From deformation to Hochschild cohomology}\label{chainmap}
\subsection{The cochain map}
\label{cmh}
In this section we define a cochain map from the deformation complex of $\mathcal{G}$ to the Hochschild complex of the convolution algebra $\mathcal{A}_\mathcal{G}$. As a first hint for the existence of such a morphism, we make the following observation:
\begin{prop}\label{multvfderiv}
Let $\mathcal{G}\rightrightarrows M$ be a Lie groupoid.
Multiplicative vector fields on $\mathcal{G}$ act as derivations on the convolution algebra.
\end{prop}
\begin{proof}
Recall the definition of a multiplicative vector field from \cref{mv}. Since a multiplicative vector field on $\mathcal{G}$ is by definition $s$-projectable to $M$, its action on an $s$-density is well-defined by
the discussion in \cref{das}, c.f.\ equation \eqref{apvf}.
The key ingredient in the proof is the observation that the flow of a multiplicative vector field is a groupoid map, that is, if $X\in\mathfrak{X}(\mathcal{G})$ is a multiplicative vector field then $\Phi^t_X(gh^{-1})=\Phi^t_X(g)\Phi^t_X(h)^{-1}$. A simple calculation then shows
\begin{align*}
X(a_1 \ast a_2)(g)&=\left.\frac{d}{dt}\right|_{t=0}(a_1\ast a_2)(\Phi^t_Xg)\\
&=\left.\frac{d}{dt}\right|_{t=0}\int_{h\in s^{-1}(s(\Phi^t_X g))}a_1((\Phi^t_X g)h^{-1})a_2(h)\\
&=\left.\frac{d}{dt}\right|_{t=0}\int_{h\in s^{-1}(s(g))}a_1((\Phi^t_X g)(\Phi^t_X h)^{-1})a_2(\Phi^t_Xh)\\
&=\left.\frac{d}{dt}\right|_{t=0}\int_{h\in s^{-1}(s(g))}a_1(\Phi^t_X(gh^{-1}))a_2(\Phi^t_Xh)\\
&=\left.\frac{d}{dt}\right|_{t=0}\int_{h\in s^{-1}(s(g))}a_1(\Phi^t_X(gh^{-1}))a_2(h)\\ &\hspace{2.5cm}+\left.\frac{d}{dt}\right|_{t=0}\int_{h\in s^{-1}(s(g))}a_1(gh^{-1})a_2(\Phi^t_Xh)\\
&=(Xa_1\ast a_2)(g)+(a_1\ast Xa_2)(g)
\end{align*}
which proves the proposition.
\end{proof}
In the following we write $C^\bullet_\text{Hoch}(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})$ for the Hochschild complex of the convolution algebra $\mathcal{A}_\mathcal{G}$ with values in the bimodule $\mathcal{A}_\mathcal{G}$ with differential $\delta_{\rm Hoch}$.
We now describe the cochain map $C^\bullet_\text{def}(\mathcal{G})\to C^\bullet_\text{Hoch}(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})$.
\begin{defi}
\label{defchainmap}
The map $\Phi:C^\bullet_\text{def}(\mathcal{G})\to C^\bullet_\text{Hoch}(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})$ is defined by
\begin{equation*}
(\Phi c)(a_1,...,a_k)(g)=\int_{g_1\cdots g_k=g}(c(-,g_2,...,g_k)a_1)(g_1)a_2(g_2)\cdots a_k(g_k),\qquad\mbox{for}~c\in C^k_\text{def}(\mathcal{G}).
\end{equation*}
This formula should be read as an inductive convolution (first over $g_1g_2=h_1$, then over $h_1g_3=h_2$, et cetera).
\end{defi}
\begin{rmk}
The formula for $\Phi$ above is justified by \cref{nl}: $c\in C^k_\text{def}(\mathcal{G})$ is $s$-projectable and therefore the action of $c(-,g_2,...,g_k)$ on $a\in \mathcal{A}_\mathcal{G}$ along $s^{-1}(t(g_2))$ is well-defined. In particular $(c(-,g_2,...,g_k)a)(g_1)$ is a well-defined density at $g_1$ for $(g_1,...,g_k)\in\mathcal{G}^{(k)}$.
\end{rmk}
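In the lowest degrees the formula specializes as follows: for $X\in C^1_\text{def}(\mathcal{G})$ we simply get the action $\Phi(X)(a)=Xa$ of the $s$-projectable vector field $X$ on densities, while for $c\in C^2_\text{def}(\mathcal{G})$ it reads
\begin{equation*}
\Phi(c)(a_1,a_2)(g)=\int_{h\in s^{-1}(s(g))}\big(c(-,h)a_1\big)(gh^{-1})\,a_2(h).
\end{equation*}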
Showing that $\Phi$ is a chain map is done by a calculation similar to the one in \cref{multvfderiv}. In particular we need to deal with the term $\Phi^t_X(g)(\Phi^t_X(h))^{-1}$ for divisible $g$ and $h$ when $t$ goes to $0$. In the multiplicative case this is precisely $\Phi^t_X(gh^{-1})$, but for general deformation elements we need a more general description.
For this we abbreviate the term in $\delta c$ involving $d\overline{m}$ by $\overline{m}c$, that is:
\begin{equation*}
(\overline{m}c)(g_1,...,g_{k+1})=d\overline{m}(c(g_1g_2,...,g_{k+1}),c(g_2,...,g_{k+1}))
\end{equation*}
We remark that this notation is compatible with fixing all but the first two entries, i.e.:
\begin{equation*}
\overline{m}(c(-,g_3,...,g_{k+1}))(g_1,g_2)=(\overline{m}c)(g_1,...,g_{k+1})
\end{equation*}
The key Lemma is then as follows:
\begin{lem}
Let $x\in M$, $X\in\mathfrak{X}_s(\mathcal{G})|_{s^{-1}(x)}$ and $a_1,a_2\in \mathcal{A}_\mathcal{G}$. Then for all $h\in s^{-1}(x)$ we have $\overline{m}X(-,h)\in\mathfrak{X}_s(\mathcal{G})|_{s^{-1}(t(h))}$ and for $g\in s^{-1}(x)$:
\begin{equation*}
X(a_1\ast a_2)(g)=(a_1\ast Xa_2)(g)+\int_{h\in s^{-1}(x)}((\overline{m}X(-,h))a_1)(gh^{-1})a_2(h)
\end{equation*}
\end{lem}
\begin{proof}
By definition we have:
\begin{equation*}
\overline{m}X(gh^{-1},h)=d\overline{m}(X(g),X(h))\in T_{gh^{-1}}\mathcal{G}
\end{equation*}
with $s$-projection
\begin{equation*}
ds(\overline{m}X(gh^{-1},h))=dt(X(h))
\end{equation*}
so indeed $\overline{m}X(-,h)\in\mathfrak{X}_s(\mathcal{G})|_{s^{-1}(t(h))}$.
Next we assume that $X$ is a globally defined $s$-projectable vector field (otherwise, we choose an extension at this point). Then we know that $\overline{m}X(-,h)$ is generated by the path $\Phi_t$ through $\text{Diff}_s(\mathcal{G})$, which along $s^{-1}(t(h))$ looks like:
\begin{equation*}
\Phi_t(gh^{-1})=\Phi^t_X(g)(\Phi^t_X(h))^{-1}
\end{equation*}
so that we see that:
\begin{equation*}
\int_{h\in s^{-1}(x)}((\overline{m}X(-,h))a_1)(gh^{-1})a_2(h)=\left.\frac{d}{dt}\right|_{t=0}\int_{h\in s^{-1}(x)}a_1(\Phi^t_X(g)(\Phi^t_X(h))^{-1})a_2(h)
\end{equation*}
Using this we calculate $X(a_1\ast a_2)(g)$:
\begin{align*}
X(a_1\ast a_2)(g)=&\left.\frac{d}{dt}\right|_{t=0} (a_1\ast a_2)(\Phi^t_Xg)\\
=&\left.\frac{d}{dt}\right|_{t=0}\int_{h\in s^{-1}(s(\Phi^t_Xg))}a_1((\Phi^t_Xg)h^{-1})a_2(h)\\
=&\left.\frac{d}{dt}\right|_{t=0}\int_{h\in s^{-1}(x)}a_1((\Phi^t_Xg)(\Phi^t_Xh)^{-1})a_2(\Phi^t_X(h))\\
=&\left.\frac{d}{dt}\right|_{t=0}\int_{h\in s^{-1}(x)}a_1((\Phi^t_Xg)(\Phi^t_Xh)^{-1})a_2(h)+\left.\frac{d}{dt}\right|_{t=0}\int_{h\in s^{-1}(x)}a_1(gh^{-1})a_2(\Phi^t_Xh)\\
=&\int_{h\in s^{-1}(x)}((\overline{m}X(-,h))a_1)(gh^{-1})a_2(h)\\
&+(a_1\ast Xa_2)(g)
\end{align*}
which finishes the proof.
\end{proof}
\begin{prop}
The map $\Phi: C^\bullet_\text{def}(\mathcal{G})\to C^\bullet_\text{Hoch}(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})$ is a morphism of cochain complexes.
\end{prop}
\begin{proof}
This proof is essentially writing out all the parts of the Hochschild differential and applying some bookkeeping. We start with $c\in C^k_\text{def}(\mathcal{G})$ for $k\geq 1$, and write down the definition of the various parts of $\delta_\text{Hoch}(\Phi c)$.
\begin{equation*}
(a_1\ast(\Phi c)(a_2,...,a_{k+1}))(g)=\int_{g_1\cdots g_{k+1}=g}a_1(g_1)(c(-,g_3,...,g_{k+1})a_2)(g_2)a_3(g_3)\cdots a_{k+1}(g_{k+1})
\tag{$\star$}
\end{equation*}
\begin{equation*}
-(\Phi c)(a_1\ast a_2,a_3,...,a_{k+1})(g)=-\int_{h\cdot g_3\cdots g_{k+1}=g}(c(-,g_3,...,g_{k+1})(a_1\ast a_2))(h)a_3(g_3)\cdots a_{k+1}(g_{k+1})\tag{$\star\star$}
\end{equation*}
\begin{multline*}
\sum_{i=2}^k(-1)^i(\Phi c)(a_1,...,a_i\ast a_{i+1},...,a_{k+1})(g)=\\
\sum_{i=2}^{k}\int_{g_1\cdots g_{k+1}=g}(-1)^i (c(-,g_2,...,g_ig_{i+1},...,g_{k+1})a_1)(g_1)a_2(g_2)\cdots a_{k+1}(g_{k+1})
\end{multline*}
\begin{equation*}
(-1)^{k+1}((\Phi c)(a_1,...,a_k)\ast a_{k+1})(g)=(-1)^{k+1}\int_{g_1\cdots g_{k+1}=g}(c(-,g_2,...,g_k)a_1)(g_1)a_2(g_2)\cdots a_{k+1}(g_{k+1})
\end{equation*}
The latter two terms we recognize from the differential of the deformation complex, while the first two terms can be rewritten as:
\begin{multline*}
(\star)+(\star\star)=\int_{hg_3\cdots g_{k+1}=g}\left((a_1\ast (c(-,g_3,...,g_{k+1}) a_2)-c(-,g_3,...,g_{k+1})(a_1\ast a_2)\right)(h)a_3(g_3)\cdots a_{k+1}(g_{k+1})
\end{multline*}
Then by the key Lemma we can rewrite this as
\begin{align*}
(\star)+(\star\star)=&-\int_{g_1\cdots g_{k+1}=g}((\overline{m}c)(-,g_2,...,g_{k+1})a_1)(g_1)a_2(g_2)\cdots a_{k+1}(g_{k+1})
\end{align*}
Putting this all together we conclude that:
\begin{align*}
(\delta_\text{Hoch}(\Phi c))(a_1,...,a_{k+1})(g)
&=(\Phi(\delta c))(a_1,...,a_{k+1})(g)
\end{align*}
So we see that $\Phi$ is indeed a chain-map.
\end{proof}
\subsection{Comparing deformation classes}
In this section we compare the deformation classes in $H^2_\text{def}(\mathcal{G})$ and $H^2_{\rm Hoch}(\mathcal{A}_\mathcal{G})$ coming from deformations of the Lie groupoid $\mathcal{G}$.
Recall from \cite[\S 5.2]{cms} that an $s$-constant deformation of $\mathcal{G}$ is a smooth family $\overline{m}_\epsilon: \mathcal{G}\hspace*{1mm}^s\!\times^s\mathcal{G}\to\mathcal{G}$ of division maps, parameterized by $\epsilon$ in an open interval in $\mathbb{R}$ containing $0$, such that $\overline{m}_0=\overline{m}$. Such a deformation deforms the associative convolution product, and hence induces a deformation cocycle $\beta\in C^2_\text{Hoch}(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})$:
\begin{equation*}
\beta(a_1,a_2)(g)=\left.\frac{d}{d\epsilon}\right|_{\epsilon=0}\int_{h\in s^{-1}(s(g))}a_1(\overline{m}_\epsilon(g,h))a_2(h)
\end{equation*}
On the other hand the deformation also induces a deformation element $\xi\in C^2_\text{def}(\mathcal{G})$, defined for $(g,g')\in\mathcal{G}^{(2)}$ by:
\begin{equation*}
\xi(g,g'):=\left.\frac{d}{d\epsilon}\right|_{\epsilon=0}\overline{m}_\epsilon(gg',g').
\end{equation*}
By \cite[Lemma 5.3]{cms}, this cochain is closed: $\delta\xi=0$.
\begin{prop}
The chain map $\Phi$ sends $\xi$ to $\beta$.
\end{prop}
\begin{proof}
This follows from the observation that if $s(h)=s(g)$, then
\begin{equation*}
\xi(gh^{-1},h)=\left.\frac{d}{d\epsilon}\right|_{\epsilon=0}\overline{m}_\epsilon(g,h)
\end{equation*}
With this we see that
\begin{equation*}
\beta(a_1,a_2)(g)=\int_{h\in s^{-1}(s(g))}(\xi(-,h)a_1)(gh^{-1})a_2(h)=\Phi(\xi)(a_1,a_2)(g),
\end{equation*}
exactly as needed.
\end{proof}
\begin{rmk}
In \cite[Prop 5.12]{cms} a deformation cocycle $\xi\in C^2_\text{def}(\mathcal{G})$ is assigned to any deformation (in particular those that are not $s$-constant), whose cohomology class is canonical. Then $\Phi(\xi)$ induces a Hochschild cohomology class of degree 2, which is not immediately linked to a deformation of the convolution product, since if the source map changes, the underlying space of the convolution algebra changes as well, as it consists of densities along the $s$-fibers. Indeed, in \cite{cms} the authors need an auxiliary choice of a vector field on the larger deformation space to define the cocycle. This choice of an auxiliary vector field is precisely what is needed to compare the various convolution algebras when the source map varies, and in this way $\Phi$ maps $[\xi]\in H^2_\text{def}(\mathcal{G})$ to the Hochschild class of the deformation of the convolution product thus defined.
\end{rmk}
\subsection{Compatibility with the characteristic map to cyclic cohomology}
Denote by $(C^\bullet_{\rm diff}(\mathcal{G}),\delta)$ the cochain complex of inhomogeneous groupoid cochains given by $C^k_{\rm diff}(\mathcal{G}):=C^\infty(\mathcal{G}^{(k)})$ with differential
\begin{align*}
\delta\varphi(g_1,\ldots,g_{k+1})&=\varphi(g_2,\ldots,g_{k+1})+\sum_{i=1}^k(-1)^i\varphi(g_1,\ldots,g_ig_{i+1},\ldots, g_{k+1})+(-1)^{k+1}\varphi(g_1,\ldots, g_k).
\end{align*}
We can turn this cochain complex into a DGA by introducing the product $\cup:C^k_{\rm diff}(\mathcal{G})\times C^l_{\rm diff}(\mathcal{G})\to C^{k+l}_{\rm diff}(\mathcal{G})$ given by
\[
(\varphi\cup\psi)(g_1,\ldots,g_{k+l}):=\varphi(g_1,\ldots,g_k)\psi(g_{k+1},\ldots, g_{k+l}).
\]
In \cite{cms} it is shown that by replacing $\varphi$ by a deformation cochain $c\in C^k_{\rm def}(\mathcal{G})$ in the above formula, $C^\bullet_{\rm def}(\mathcal{G})$ becomes a right module over $C^\bullet_{\rm diff}(\mathcal{G})$. On the other hand, in \cite{ppt}, the smooth groupoid cohomology was used to construct cyclic cocycles. In this section we shall show that these two structures are compatible with each other under the cochain map $\Phi$ to Hochschild cohomology of \cref{cmh}. We start by re-writing the map to cyclic cohomology of \cite{ppt} in the following way.
First recall that the Hochschild cochain complex $C^\bullet(\mathcal{A}_{\mathcal{G}},\mathcal{A}_{\mathcal{G}})$ can be given a DGA structure by introducing the product $\cup:C^k(\mathcal{A}_{\mathcal{G}},\mathcal{A}_{\mathcal{G}})\times C^l(\mathcal{A}_{\mathcal{G}},\mathcal{A}_{\mathcal{G}})\to C^{k+l}(\mathcal{A}_{\mathcal{G}},\mathcal{A}_{\mathcal{G}})$
\[
(D\cup E)(a_1,\ldots,a_{k+l}):=D(a_1,\ldots,a_k)*E(a_{k+1},\ldots,a_{k+l}).
\]
Construct a map $\Phi_0:C^\bullet_{\rm diff} (\mathcal{G})\to C^\bullet(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})$ by
\begin{equation}
\label{dgm}
\Phi_0(\varphi)(a_1,\ldots,a_k)(g):=\int_{g_1\cdots g_k=g}\varphi(g_1,\ldots,g_k)a_1(g_1)\cdots a_k(g_k).
\end{equation}
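For $k=1$, formula \eqref{dgm} is simply the operator of pointwise multiplication:
\begin{equation*}
\Phi_0(\varphi)(a)(g)=\varphi(g)a(g).
\end{equation*}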
\begin{lem}
The map $\Phi_0:(C^\bullet_{\rm diff} (\mathcal{G}),\delta,\cup)\to (C^\bullet(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G}),\delta_{\rm Hoch},\cup)$ is a morphism of DGA's.
\end{lem}
\begin{proof}
Both $\Phi_0(\varphi\cup\psi)$ and $\Phi_0(\varphi)\cup\Phi_0(\psi)$ applied to $(a_1,\ldots,a_{k+l})$ yield the density
\begin{equation*}
g\mapsto\int_{g_1\cdots g_{k+l}=g}\varphi(g_1,\ldots,g_k)\psi(g_{k+1},\ldots,g_{k+l})a_1(g_1)\cdots a_{k+l}(g_{k+l}),
\end{equation*}
so $\Phi_0$ is multiplicative. Compatibility with the differentials is a bookkeeping exercise as in the proof that $\Phi$ is a cochain map.
\end{proof}
With this Lemma we can also equip the Hochschild complex $C^\bullet(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})$ with a module structure over $C^\bullet_{\rm diff}(\mathcal{G})$ by using the cup-product on Hochschild cochains introduced above.
Explicitly, this module structure is given by
\[
D\cdot\varphi:=D\cup\Phi_0(\varphi).
\]
We then have:
\begin{prop}
The cochain map $\Phi: C^\bullet_\text{def}(\mathcal{G})\to C^\bullet_\text{Hoch}(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})$ is a morphism of $C^\bullet_{\rm diff}(\mathcal{G})$-modules.
\end{prop}
\begin{proof}
Let us start with the following case: For $c\in C^k_\text{def}(\mathcal{G})$ and $f \in C^\infty(\mathcal{G})=C^1_{\rm diff}(\mathcal{G})$ we have
\begin{equation*}
\Phi(c\cup f)(a_1,...,a_{k+1})=\Phi(c)(a_1,...,a_k)\ast (f\cdot a_{k+1})
\end{equation*}
The claim follows by carefully writing out the definition
\begin{align*}
\Phi(c\cup f)(a_1,...,a_{k+1})(g)&=\int_{g_1\cdots g_{k+1}=g}((c\cup f)(-,g_2,...,g_{k+1})a_1)(g_1)a_2(g_2)\cdots a_{k+1}(g_{k+1})\\
&=\int_{g_1\cdots g_{k+1}=g}(f(g_{k+1})c(-,g_2,...,g_k)a_1)(g_1)a_2(g_2)\cdots a_{k+1}(g_{k+1})\\
&=\int_{g_1\cdots g_{k+1}=g}(c(-,g_2,...,g_k)a_1)(g_1)a_2(g_2)\cdots a_k(g_k)\left(f(g_{k+1})a_{k+1}(g_{k+1})\right)\\
&=\int_{hg_{k+1}=g}\int_{g_1\cdots g_k=h}(c(-,g_2,...,g_k)a_1)(g_1)a_2(g_2)\cdots a_k(g_k)\left(f(g_{k+1})a_{k+1}(g_{k+1})\right)\\
&=\int_{hg_{k+1}=g}\Phi(c)(a_1,...,a_k)(h)\left(f(g_{k+1})a_{k+1}(g_{k+1})\right)\\
&=\left(\Phi(c)(a_1,...,a_k)\ast (f\cdot a_{k+1})\right)(g)
\end{align*}
Hence by induction we obtain
\begin{equation*}
\Phi(c\cup (f_1\otimes\cdots\otimes f_l))(a_1,...,a_{k+l})=\Phi(c)(a_1,...,a_k)\ast (f_1\cdot a_{k+1})\ast\cdots\ast (f_l\cdot a_{k+l})
\end{equation*}
Writing $f\cdot a=\Phi_0(f)(a)$, we can rewrite this as
\[
\Phi(c\cup (f_1\otimes\cdots\otimes f_l))=\Phi (c)\cup\Phi_0(f_1)\cup\ldots\cup\Phi_0(f_l).
\]
From this the general statement of the proposition follows.
\end{proof}
Now, analogous to the action of vector fields on differential forms in geometry, the Hochschild cochains act on Hochschild chains by contraction:
\[
C^k(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})\times C_l(\mathcal{A}_\mathcal{G})\longrightarrow C_{l-k}(\mathcal{A}_\mathcal{G}),\qquad (D,a)\mapsto\iota_Da,
\]
given explicitly by
\[
\iota_D(a_0\otimes\ldots\otimes a_{k+l}):=a_0D(a_1,\ldots,a_k)\otimes a_{k+1}\otimes\ldots\otimes a_{k+l}.
\]
This action satisfies the properties
\begin{align*}
\iota_D\circ\iota_E&=\iota_{D\cup E}\\
[b,\iota_D]&=\iota_{\delta D}.
\end{align*}
The analogue of the Cartan formula for the ``Lie derivative'' $L_D:=B\circ \iota_D+\iota_D\circ B$ in noncommutative geometry also holds true on the level of Hochschild homology.
Next, recall from \cite{ppt} that when $\mathcal{G}$ is unimodular we can define a trace on the convolution algebra $\mathcal{A}_\mathcal{G}$ by
\[
\tau(a):=\int_Ma\Omega,
\]
where on the right hand side $\Omega$ is a $\mathcal{G}$-invariant section of the bundle $\mathcal{D}_{A^*}\otimes\mathcal{D}_{TM}$, and we use the duality $\mathcal{D}_A\times\mathcal{D}_{A^*}\to\mathbb{R}$ together with the isomorphism $\mathcal{D}_s|_M=\mathcal{D}_A$ to obtain a density on $M$ that can be integrated. With this trace (a degree $0$ cyclic cocycle), the cochain map
\begin{equation}
\label{chain-diff}
\Psi_\tau:(C^\bullet_{\rm diff}(\mathcal{G}),\delta)\longrightarrow (C^\bullet(\mathcal{A}_\mathcal{G}),b_{\rm Hoch}),
\end{equation}
constructed in \cite{ppt} is simply given by $\Psi_\tau(c):=\iota_{\Phi_0(c)}\tau$.
\begin{cor}
Let $c\in C^k_{\rm def}(\mathcal{G})$ and $f\in C^l_{\rm diff}(\mathcal{G})$. Then the following
identity holds true:
\[
\iota_{\Phi(c\cup f)}\tau=\iota_{\Phi(c)}\Psi_\tau(f).
\]
\end{cor}
With this Corollary, we can construct new cyclic cocycles on the convolution algebra. First of all, if we start with a smooth
groupoid cocycle $\varphi\in C^k_{\rm diff}(\mathcal{G})$, we obtain a Hochschild cocycle by applying $\Psi_\tau$ as in \eqref{chain-diff}. A small computation shows that this cocycle is closed under the $B$-differential, i.e., $B\Psi_\tau(\varphi)=0$, when $\varphi$ is cyclic:
\[
\varphi(g_1,\ldots,g_k)=(-1)^k\varphi((g_1\cdots g_k)^{-1},g_1,\ldots,g_{k-1})
\]
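For $k=1$ the cyclicity condition is simply the antisymmetry $\varphi(g)=-\varphi(g^{-1})$.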
We can work out similar conditions for elements $c\in C^k_\text{def}(\mathcal{G})$, but they are more involved. For example, for $k=2$ we find
\begin{equation*}
(d\iota)(c(g,g^{-1}))=-c(g^{-1},g).
\end{equation*}
\begin{rmk}
It is proved in \cite[\S 9]{cms} that $H^\bullet_\text{def}(\mathcal{G})\cong H^\bullet(\mathcal{G},{\rm Ad})$, where ${\rm Ad}$ denotes the adjoint representation up to homotopy constructed in \cite{ac}. Taking into account the morphism \eqref{dgm}, this strongly suggests to relabel the morphism of \cref{chainmap} as $\Phi_1$ and conjecture the existence of a map $\Phi_p:H^\bullet(\mathcal{G},{\rm Sym}^p({\rm Ad}))\to H^\bullet_{\rm Hoch}(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})$ extending the cases $p=0,1$ described in this paper. This would naturally fit with the infinitesimal theory (see also the next section) and the computation in \cite{blom} of the Hochschild cohomology of the universal enveloping algebra $\mathcal{U}(A)$ of the Lie algebroid $A$:
\[
H^\bullet_{\rm Hoch}(\mathcal{U}(A),\mathcal{U}(A))\cong \bigoplus_{p\geq 0} H^\bullet_{CE}(A,{\rm Sym}^pA).
\]
\end{rmk}
\subsection{The case $k=0$}
For the chain map between $C^\bullet_\text{def} (\mathcal{G})$ and $C^\bullet_\text{Hoch}(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})$ we have just defined, a natural question is whether it can be extended to degree $k=0$, c.f.\ \cref{deg0}. For this, one must find a map $\Phi^0: \Gamma(A)\to \mathcal{A}_\mathcal{G}$ which extends the chain map $\Phi$. This is only possible if $\Phi(\delta(\alpha))\in\text{Der}(\mathcal{A}_\mathcal{G})$ is an inner derivation for every $\alpha\in \Gamma(A)$.
Intuitively it is clear that this should not always be possible, since the derivation $\Phi(\delta(\alpha))$ involves taking derivatives, while an inner derivation $\partial_H(a)$ only involves integrations. The following example presents a concrete counterexample:
\begin{ex} Consider the pair groupoid $\mathbb{R}\times\mathbb{R}\rightrightarrows \mathbb{R}$. For this groupoid, the bundle of densities $\mathcal{D}_s$ is trivialized by $|dx|$, so that every compactly supported density is of the form $f|dx|$ for a compactly supported smooth function $f$. Furthermore, a section of the algebroid is simply a vector field $X\in\mathfrak{X}(\mathbb{R})$, and for this example we take $X=\frac{\partial}{\partial x}$. We have
\begin{equation*}
\delta(X)(x,y)=(X(x),X(y))
\end{equation*}
so that in this case $\delta(X)=\frac{\partial}{\partial x}+\frac{\partial}{\partial y}$. This vector field has flow
\begin{equation*}
\Phi^t_{\delta(X)}(x,y)=(x+t,y+t)
\end{equation*}
Next we consider $\Phi\left(\delta\left(\frac{\partial}{\partial x}\right)\right)$, so we look at the action of $\frac{\partial}{\partial x}+\frac{\partial}{\partial y}$ on a density $f(x,y)|dx|\in \mathcal{A}_{\mathbb{R}\times\mathbb{R}}$. We see
\begin{equation*}
(\Phi^t_{\delta(X)})^\ast (f(x,y)|dx|)=f(x+t,y+t)|d(x+t)|=f(x+t,y+t)|dx|
\end{equation*}
So that:
\begin{equation*}
\Phi(\delta(X))(f|dx|)=\left(\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}\right)|dx|
\end{equation*}
Now suppose that there is some $g|dx|\in \mathcal{A}_{\mathbb{R}\times\mathbb{R}}$, such that $\Phi(\delta(X))=\partial_H(g|dx|)$. Then since always $\partial_H(g|dx|)(g|dx|)=0$, we see that:
\begin{equation*}
\frac{\partial g}{\partial x}+\frac{\partial g}{\partial y}=0
\end{equation*}
so that
\begin{equation*}
g(x+t,y+t)=g(x,y)
\end{equation*}
Since $g$ has to be compactly supported, the only possibility is that $g=0$, which is obviously not a solution to $\Phi(\delta(X))=\partial_H(g|dx|)$. We conclude that $\Phi(\delta(X))$ is not an inner derivation.
\end{ex}
In fact, using supports as an argument, we can deduce that $\Phi(X)$ can never be an inner derivation for any non-zero $X\in\mathfrak{X}_s(\mathcal{G})$.
\begin{prop}
Let $D\in\text{Hom}(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})$ be a non-zero Hochschild-1-cochain. If $D$ satisfies $\text{supp}(Da)\subset\text{supp}(a)$, then there is no $b\in \mathcal{A}_\mathcal{G}$ such that $D=[-,b]$.
\end{prop}
\begin{proof}
Suppose to the contrary that there is a $b$ such that $D=[-,b]$. Let $g\in\mathcal{G}$ and let $a\in \mathcal{A}_\mathcal{G}$ be supported arbitrarily close to $g$. For $h\in t^{-1}(s(g))$ outside of the isotropy of $s(g)$ we obtain:
\begin{equation*}
(a\ast b)(gh)=\int_k a(gk^{-1})b(kh)\sim a(g)b(h)
\end{equation*}
where we use that $a$ is only non-zero close enough to $g$. For the other part of the commutator we have
\begin{equation*}
(b\ast a)(gh)=\int_k b(gk^{-1})a(kh)=0
\end{equation*}
since $kh$ cannot come arbitrarily close to $g$, as $h$ is not in the isotropy of $s(g)$.
Since $\text{supp} (Da)\subset \text{supp}(a)$ we see that $(a\ast b)(gh)$ also has to be supported arbitrarily close to $g$, so that $b$ is identically zero outside of the isotropy of $\mathcal{G}$.
If $h$ is an isotropy element of $\mathcal{G}$, we see that the second term acts like $b(ghg^{-1})a(g)$, so that $b$ is invariant under conjugation. However, if $b$ is invariant under conjugation we conclude that $b\in Z(\mathcal{A}_\mathcal{G})$, contradicting the fact that $D$ is non-zero. We conclude that there is no $b$ that solves $D=[-,b]$.
\end{proof}
\begin{rmk}
It is possible to define the map $\Phi^0$ if one allows for distributions to be cochains of degree $0$, that is if one defines $C^0_\text{Hoch}(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G}):=\Gamma^{-\infty}_c(\mathcal{D}_s)$.
\end{rmk}
\begin{cor} If $X\in C^1_\text{def}(\mathcal{G})$ is non-zero, then $\Phi(X)$ can never be an inner derivation.
\end{cor}
\begin{proof}
This follows from the previous proposition, using that $\Phi(X)$ is local (it involves taking derivatives) and that $\Phi$ is easily seen to be injective.
\end{proof}
\subsection{Examples}
In this section we discuss how the chain map $\Phi$ links the deformation cohomology of $\mathcal{G}$ and the Hochschild cohomology of $\mathcal{A}_\mathcal{G}$ in certain examples.
\begin{ex}[Trivial groupoid]
We consider the trivial groupoid $\mathcal{G}=M\rightrightarrows M$. On the density side we simply have $(\mathcal{A}_\mathcal{G},\ast)=(C^\infty_c(M),\cdot)$, with $H^\bullet_\text{Hoch}(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})=\Lambda^\bullet\mathfrak{X}(M)$. At the side of the deformation complex we note that the $k$-nerve of the trivial groupoid is $M$ for every $k$ and $s$-projectability is a void property, so that for $k>0$ we have $C^k_\text{def}(\mathcal{G})=\mathfrak{X}(M)$, with differential alternating between the identity and the zero map:
\begin{equation*}
C^\bullet_\text{def}(\mathcal{G})=\left[ 0\to \mathfrak{X}(M)\xrightarrow{0}\mathfrak{X}(M)\xrightarrow{\rm id}\mathfrak{X}(M)\to\cdots\right]
\end{equation*}
So the deformation cohomology equals:
\begin{equation*}
H^k_\text{def}(\mathcal{G})\cong\left\{\begin{matrix}
\mathfrak{X}(M) &\text{ if } k=1\\
0 & \text{ else}
\end{matrix}\right.
\end{equation*}
The chain map $\Phi: C^\bullet_\text{def}(\mathcal{G})\to C^\bullet_\text{Hoch}(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})$ simply becomes:
\begin{equation*}
\Phi(X)(f_1,...,f_k)=(Xf_1)\cdot f_2\cdots f_k
\end{equation*}
and we simply see that:
\begin{equation*}
H^k(\Phi)=\left\{\begin{matrix}
\text{id} &\text{ if }k=1\\
0 & \text{ else}
\end{matrix}\right.
\end{equation*}
We should also remark that, by the classical Hochschild--Kostant--Rosenberg theorem, taking exterior powers of deformation elements retrieves the whole Hochschild cohomology of $C^\infty_c(M)$.
\end{ex}
\begin{ex}[\'Etale groupoids]
In the case of an \'etale groupoid $\mathcal{G}\rightrightarrows M$, we have $\mathcal{A}_\mathcal{G}=C^\infty_c(\mathcal{G})$, since the distribution $\ker(ds)$ is the trivial distribution. The convolution product in this case is commonly written as
\begin{equation*}
(f_1\ast f_2)(g)=\sum_{g_1g_2=g}f_1(g_1)f_2(g_2)
\end{equation*}
In this case the action of vector fields on densities is just the normal action of vector fields on functions, and the map $\Phi$ reduces to
\begin{equation*}
\Phi(c)(f_1,...,f_k)(g)=\sum_{g_1\cdots g_k=g}(c(g_1,...,g_k)f_1)\cdot f_2(g_2)\cdots f_k(g_k)
\end{equation*}
Since the source map of $\mathcal{G}$ is a local diffeomorphism, we see that there is a 1-1 correspondence between deformation elements $c\in C^k_\text{def}(\mathcal{G})$ and their symbols $s_c\in\Gamma(t^\ast TM\to\mathcal{G}^{(k-1)})$, as we have
\begin{equation*}
c(g_1,...,g_k)=(ds_{g_1})^{-1}(s_c(g_2,...,g_k))
\end{equation*}
In fact, the correspondence establishes an isomorphism between $C^\bullet_\text{def}(\mathcal{G})$ and $C^\bullet(\mathcal{G},TM)[-1]$, where we see $TM$ as a representation of $\mathcal{G}$ on which $g$ acts as $T_{s(g)}M\to T_{t(g)}M$ via
\begin{equation*}
g\cdot v=dt_g((ds_g)^{-1}(v))
\end{equation*}
The shift by $1$ we see here also justifies why the case $k=0$ is tricky (although for \'etale groupoids we of course have $C^0_\text{def}(\mathcal{G})=0$).
In the case of a proper \'etale groupoid (over a connected base $M$) we can calculate the cohomology on both sides. On the side of the deformation complex we use \cite[Thm 6.1]{cms} to obtain:
\begin{align*}
H^0_\text{def}(\mathcal{G})&\cong\{0\}\\
H^1_\text{def}(\mathcal{G})&\cong\mathfrak{X}(M)_\text{inv}\\
H^k_\text{def}(\mathcal{G})&\cong\{0\}\hspace*{1cm}(k\geq 2)
\end{align*}
For the Hochschild cohomology of the convolution algebra we refer to \cite[Thm 3.11]{nppt} to obtain
\begin{align*}
H^k(\mathcal{A}_\mathcal{G},\mathcal{A}_\mathcal{G})\cong \bigoplus_{\mathcal{O}\in\text{Sec}(\mathcal{G})}\Gamma_\text{inv}(\Lambda^{k-\text{codim}(\mathcal{O})}T\mathcal{O})
\end{align*}
where the sum is over the sectors $\mathcal{O}$ of $\mathcal{G}$. The action of the chain map $\Phi$ on the cohomology of degree $1$ is the inclusion of $\mathfrak{X}(M)_\text{inv}$ into this sum as the term for the sector $\mathcal{O}=M$.
\end{ex}
\section{Deformation quantization and the van Est map}\label{quant}
\subsection{The adiabatic groupoid}\label{adiabatic}
In the theory of deformation quantization and its applications, there is an inherent place for replacing a groupoid by its adiabatic groupoid, as first described in \cite{connes}. In view of our discussion of the deformation complex, we describe it using the division map:
\begin{defi}
Let $\mathcal{G}\rightrightarrows M$ be a Lie groupoid, with Lie algebroid $A\xrightarrow{\pi} M$. We define the \textit{adiabatic groupoid} $\mathcal{G}_{\text{ad}}\rightrightarrows M\times\mathbb{R}$ by:
\begin{equation*}
\mathcal{G}_{\text{ad}}=A\times\{0\}\sqcup \mathcal{G}\times\mathbb{R}^\ast
\end{equation*}
The source and target are defined by:
\begin{equation*}
s(v,0)=(\pi(v),0)
\end{equation*}
\begin{equation*}
s(g,\tau)=(s(g),\tau)
\end{equation*}
\begin{equation*}
t(v,0)=(\pi(v),0)
\end{equation*}
\begin{equation*}
t(g,\tau)=(t(g),\tau)
\end{equation*}
Then we define the inversion map by
\begin{equation*}
\iota(v,0)=(-v,0)
\end{equation*}
\begin{equation*}
\iota(g,\tau)=(\iota(g),\tau)
\end{equation*}
Lastly, to define the division map, we note that pairs of divisible arrows come in two types, namely pairs $(v,0)$ and $(w,0)$ with $\pi(v)=\pi(w)$, and pairs $(g,\tau)$ and $(h,\tau)$ where $g$ and $h$ are divisible. We then define the division map by:
\begin{equation*}
\overline{m}((v,0),(w,0))=(v-w,0)
\end{equation*}
\begin{equation*}
\overline{m}((g,\tau),(h,\tau))=(\overline{m}(g,h),\tau)
\end{equation*}
\end{defi}
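\begin{ex}
For the pair groupoid $\mathcal{G}=M\times M\rightrightarrows M$ we have $A(\mathcal{G})\cong TM$, and the construction above yields Connes' tangent groupoid
\begin{equation*}
\mathcal{G}_{\text{ad}}=TM\times\{0\}\sqcup M\times M\times\mathbb{R}^\ast
\end{equation*}
of \cite{connes}; at $\tau=0$ the division map is simply the fiberwise difference of tangent vectors.
\end{ex}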
This is just the set-theoretical description, but the remarkable feature is that the adiabatic groupoid can be given a smooth
structure. Here we briefly recall this smooth structure and show how to extend normalized deformation elements to deformation elements of the adiabatic groupoid. Both will be done in the context of the procedure known as the {\em deformation to the normal cone}.
\subsubsection{Deformation to the normal cone}
The discussion below concerning the smooth structure and the smooth maps on the deformation to the normal cone follows \cite[\S 4]{higson} and \cite[\S 1.1]{ds}.
\begin{defi}\label{nms}
Let $S\hookrightarrow M$ be a submanifold with normal bundle $N\to S$. The \textit{deformation to the normal cone} $N(M,S)$ is the manifold defined by:
\begin{equation*}
N(M,S)=N\times\{0\}\sqcup M\times\mathbb{R}^\ast
\end{equation*}
\end{defi}
The deformation to the normal cone can be given a topology and smooth structure in two ways. Either it is characterized by the fact that the following two types of maps
\begin{itemize}
\item The map $N(M,S)\to M\times\mathbb{R}$ that sends $(x,\tau)$ for $\tau\neq 0$ to $(x,\tau)$ and sends $(v,0)$ with $v\in N_x$ to $(x,0)$.
\item For every $f\in C^\infty(M)$ such that $f|_S=0$, the map $\delta f: N(M,S)\to\mathbb{R}$ defined by
\begin{align*}
(\delta f)(x,\tau)&=\frac{f(x)}{\tau}\,\,\,(x\in M,\,\tau\neq 0),\\
(\delta f)(v,0)&=d_nf(v)\,\,\,(v\in N)
\end{align*}
\end{itemize}
are smooth. Here by $d_nf$ we mean the smooth map on $N$ that for $v\in TM|_S$ sends $[v]$ to $df(v)$ and which is well-defined since $f|_S=0$.
Equivalently, one uses an exponential map, that is a map $\theta: U\to M$ from an open neighbourhood $U\subset N$ of the zero-section, with the property that for all $p\in S$ and $v\in N_p$ it holds that
\begin{equation*}
\theta(0_p)=p,\qquad
\left.\frac{d}{d\tau}\right|_{\tau=0}\theta(\tau v)=v ~\text{ mod }T_pS
\end{equation*}
The smooth structure on $N(M,S)$ can then also be characterized by the fact that the maps
\begin{equation*}
i_1: M\times\mathbb{R}^\ast\to N(M,S):\,\,(x,\tau)\mapsto (x,\tau)
\end{equation*}
\begin{equation*}
i_2: U'=\{(v,\tau)\in N\times\mathbb{R}: \tau v\in U\}\to N(M,S):\,\,\begin{matrix}(v,\tau)&\mapsto& (\theta(\tau v),\tau)\\
(v,0)&\mapsto& (v,0)\end{matrix}
\end{equation*}
are open smooth embeddings.
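For instance, for $S=\{0\}\subset M=\mathbb{R}^n$ we have $N=\mathbb{R}^n$, and taking $\theta=\mathrm{id}$ the map $i_2$ becomes a global chart
\begin{equation*}
\mathbb{R}^n\times\mathbb{R}\xrightarrow{\ \sim\ }N(\mathbb{R}^n,\{0\}),\qquad (v,\tau)\mapsto(\tau v,\tau)\ \,(\tau\neq 0),\qquad (v,0)\mapsto(v,0).
\end{equation*}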
Important in considering deformations to normal cones is the action of $\mathbb{R}^\ast$ on $N(M,S)$, which is given by:
\begin{align*}
\lambda\cdot (x,\tau)&=(x,\lambda\tau),\\
\lambda\cdot (v,0)&=\left(\frac{v}{\lambda},0\right)
\end{align*}
where $\lambda,\tau\in\mathbb{R}^\ast$, $x\in M$ and $v\in N$.
We will describe how to extend a vector field on $M$ that is parallel to $S$ to a vector field on $N(M,S)$ that is invariant under the $\mathbb{R}^\ast$-action.
This will be done by writing down a vector field on the normal bundle and combining it with a vector field over $M\times\mathbb{R}^\ast$ into a discrete (i.e., not necessarily smooth) vector field on $N(M,S)$, and then using an explicit description of the smooth functions on $N(M,S)$ to show that it is in fact a {\em smooth} vector field.
\begin{defi} \cite{higson}
Let $X$ be a set and $\mathcal{F}=\{f_\alpha: X\to V_\alpha\}$ be a family of functions from $X$ into smooth manifolds. We say that a function $f: X\to\mathbb{R}$ is \textit{smoothly composed from the family $\mathcal{F}$} if there is a finite collection $(f_{\alpha_1},...,f_{\alpha_n})\subset\mathcal{F}$ and a smooth map $h: V_{\alpha_1}\times\cdots\times V_{\alpha_n}\to\mathbb{R}$ such that
\begin{equation*}
f(x)=h(f_{\alpha_1}(x),...,f_{\alpha_n}(x))
\end{equation*}
\end{defi}
The smooth structure of $N(M,S)$ then means that all smooth functions on $N(M,S)$ are smoothly composed from the types of functions described after \cref{nms}. Applying Taylor's theorem, we conclude the following.
\begin{lem}\label{smoothvfnms}
A discrete vector field $X$ on $N(M,S)$ is smooth if and only if for every $f\in C^\infty(M)$ with $f|_S=0$ and every $g\in C^\infty(M\times\mathbb{R})$ the maps $\delta f,\tilde{g}\in C^\infty(N(M,S))$ defined by:
\begin{equation*}
\begin{matrix}
(\delta f)(x,\tau)=\frac{f(x)}{\tau}&\,\,\,(\tau\neq 0)\\
(\delta f)(v,0)=d_nf(v)&\,\,\,(v\in N)\\
\\
\tilde{g}(x,\tau)=g(x,\tau)&\,\,\,(\tau\neq 0)\\
\tilde{g}(v,0)=g(x,0)&\,\,\,(v\in N_x)
\end{matrix}
\end{equation*}
satisfy that $X(\delta f),X(\tilde{g})\in C^\infty(N(M,S))$.
\end{lem}
We start by writing down the vector field on $N$. This is the {\em linearization}, as in \cite[\S 4.1]{az}, which we describe in detail below:
\begin{prop}\label{vectorfieldnormalbundle}
Let $S\hookrightarrow M$ be a submanifold with normal bundle $\pi:N\to S$ and $X\in\mathfrak{X}(M)$ a vector field that is parallel to $S$. Then:
\begin{itemize}
\item[\textbf{a)}] The map that sends a smooth function $f\in C^\infty(M)$ satisfying $f|_S=0$ to the map $d_nf\in C^\infty_\text{lin} (N)$ is a surjection onto $C^\infty_\text{lin}(N)$.
\item[\textbf{b)}] If $f\in C^\infty(M)$ satisfies that $f|_S=0$ and $d_nf=0$, then $Xf$ satisfies that $d_n(Xf)=0$.
\item[\textbf{c)}] The maps $(X_N)_{\text{lin}}: C^\infty_\text{lin}(N)\to C^\infty(N)$ and $(X_N)_{\text{cst}}: C^\infty(S)\to C^\infty(N)$ defined by
\begin{equation*}
(X_N)_\text{lin}(d_nf)=d_n(Xf)
\end{equation*}
\begin{equation*}
(X_N)_\text{cst}(g)=X|_S (g)\circ \pi
\end{equation*}
define a smooth vector field $X_N\in\mathfrak{X}(N)$.
\end{itemize}
\end{prop}
\begin{proof} Working down the list:
\begin{itemize}
\item[\textbf{a)}] By using a partition of unity this reduces to the local case $M=\mathbb{R}^m\times\mathbb{R}^n$ with $S=\mathbb{R}^m\times\{0\}$. In this local case there is a canonical diffeomorphism between $M$ and $N$, and pushing a linear map on $N$ through this canonical diffeomorphism yields a smooth map on $M$ whose normal derivative equals the linear map on $N$ we started with.
\item[\textbf{b)}] This is again a computation in the local case $M=\mathbb{R}^m\times\mathbb{R}^n$ with $S=\mathbb{R}^m\times\{0\}$. Write
\begin{equation*}
X=\sum_{i=1}^m \alpha_i(x,y)\frac{\partial}{\partial x_i}+\sum_{j=1}^n\beta_j(x,y)\frac{\partial}{\partial y_j}
\end{equation*}
The fact that $X$ is parallel to $S$ means that $\beta_j(x,0)=0$ for all $j=1,...,n$. The fact that $d_nf=0$ is equivalent to the fact that $\frac{\partial f}{\partial y_j}(x,0)=0$ for all $j=1,...,n$. Then we have
\begin{equation*}
Xf=\sum_{i=1}^m \alpha_i\frac{\partial f}{\partial x_i}+\sum_{j=1}^n \beta_j\frac{\partial f}{\partial y_j}
\end{equation*}
So that for $k=1,...,n$ we have
\begin{equation*}
\frac{\partial (Xf)}{\partial y_k}=\sum_{i=1}^m\frac{\partial \alpha_i}{\partial y_k}\frac{\partial f}{\partial x_i}+\sum_{i=1}^m\alpha_i\frac{\partial^2 f}{\partial y_k\partial x_i}+\sum_{j=1}^n\frac{\partial\beta_j}{\partial y_k}\frac{\partial f}{\partial y_j}+\sum_{j=1}^n\beta_j\frac{\partial^2 f}{\partial y_k\partial y_j}
\end{equation*}
Then since respectively $\frac{\partial f}{\partial x_i}(x,0)=0$ (since $f(x,0)=0$), $\frac{\partial^2 f}{\partial y_k\partial x_i}(x,0)=\left(\frac{\partial}{\partial x_i}\frac{\partial f}{\partial y_k}\right)(x,0)=0$ (since $\frac{\partial f}{\partial y_k}(x,0)=0$), $\frac{\partial f}{\partial y_j}(x,0)=0$ (by assumption) and $\beta_j(x,0)=0$ (by assumption), we see that
\begin{equation*}
\frac{\partial(Xf)}{\partial y_k}(x,0)=0
\end{equation*}
which implies that $d_n(Xf)=0$.
\item[\textbf{c)}] First note that (by restriction) a smooth vector field $Y\in\mathfrak{X}(E)$ on a vector bundle $\pi: E\to M$ is the same as a pair of maps $Y_\text{lin}: C^\infty_\text{lin}(E)\to C^\infty(E)$ and $Y_\text{cst}: C^\infty(M)\to C^\infty(E)$ such that for all $f,g\in C^\infty(M)$ and $h\in C^\infty_\text{lin}(E)$ it holds that
\begin{equation*}
Y_\text{cst}(fg)=(f\circ\pi)\cdot Y_\text{cst}(g)+(g\circ\pi)\cdot Y_\text{cst}(f)
\end{equation*}
\begin{equation*}
Y_\text{lin}((f\circ\pi)\cdot h)=(f\circ\pi)\cdot Y_\text{lin}(h)+h\cdot Y_\text{cst}(f)
\end{equation*}
We show that these properties hold for the maps $(X_N)_\text{cst}$ and $(X_N)_\text{lin}$.
First we note that $(X_N)_\text{lin}$ is well-defined by parts a) and b). To show that they define a smooth vector field we check for $f,g\in C^\infty(S)$
\begin{align*}
(X_N)_\text{cst}(fg)&=(X|_S(fg))\circ\pi=(f\cdot X|_S(g)+g\cdot X|_S(f))\circ\pi\\
&=(f\circ\pi)\cdot (X|_S(g)\circ\pi)+(g\circ\pi)\cdot (X|_S(f)\circ\pi)\\
&=(f\circ\pi)(X_N)_\text{cst}(g)+(g\circ\pi)(X_N)_\text{cst}(f)
\end{align*}
Secondly, let $f\in C^\infty(S)$ and $h\in C^\infty_\text{lin}(N)$ given by $h=d_ng$ with $g\in C^\infty(M)$ such that $g|_S=0$. Then first we need to find $g'\in C^\infty(M)$ with $g'|_S=0$ such that $(f\circ\pi)h=d_n(g')$. This can be done by choosing an extension of $f$ which is `constant in the normal direction', which is only well-defined locally or if we choose an exponential map.
We resort to the local case $M=\mathbb{R}^m\times\mathbb{R}^n$ with $S=\mathbb{R}^m\times\{0\}$. Then the map $g'(x,y)=f(x)g(x,y)$ clearly satisfies that $d_ng'=fh$. Then writing $X$ in coordinates as
\begin{equation*}
X=\sum_{i=1}^m\alpha_i\frac{\partial}{\partial x_i}+\sum_{j=1}^n\beta_j\frac{\partial}{\partial y_j}
\end{equation*}
we have
\begin{equation*}
(Xg')(x,y)=\sum_{i=1}^m\alpha_i(x,y)\frac{\partial f}{\partial x_i}(x)g(x,y)+f(x)(Xg)(x,y)
\end{equation*}
so that we see
\begin{equation*}
\frac{\partial (Xg')}{\partial y_k}(x,0)=\sum_{i=1}^m\alpha_i(x,0)\frac{\partial f}{\partial x_i}(x)\frac{\partial g}{\partial y_k}(x,0)+\sum_{i=1}^m\frac{\partial \alpha_i}{\partial y_k}(x,0)\frac{\partial f}{\partial x_i}(x)g(x,0)+f(x)\frac{\partial (Xg)}{\partial y_k}(x,0)
\end{equation*}
Then $g(x,0)=0$ so that the middle term vanishes. Then recognizing terms we obtain
\begin{equation*}
d(Xg')_{(x,0)}\left(\frac{\partial}{\partial y_k}\right)=X|_S(f)(x)\cdot(dg)_x\left(\frac{\partial}{\partial y_k}\right)+f(x)\cdot d(Xg)_{(x,0)}\left(\frac{\partial}{\partial y_k}\right)
\end{equation*}
so that globalizing we have
\begin{align*}
(X_N)_\text{lin}((f\circ \pi)\cdot d_ng)&=(X_N)_\text{lin}(d_ng')\\
&=d_n(Xg')\\
&=(X|_S(f)\circ\pi)d_ng+(f\circ\pi)d_n(Xg)\\
&=(X_N)_\text{cst}(f)d_ng+(f\circ\pi)(X_N)_\text{lin}(d_ng)
\end{align*}
So we see that we obtain a smooth vector field $X_N\in\mathfrak{X}(N)$.
\end{itemize}
This completes the proof.
\end{proof}
We are now ready to define the $\mathbb{R}^\ast$-invariant extension of the vector field $X$.
\begin{prop}\label{vfonnms}
Let $S\hookrightarrow M$ be a submanifold with normal bundle $N\to S$. Let $X\in\mathfrak{X}(M)$ be a vector field that is parallel to $S$. Then the discrete vector field $X_\text{inv}$ on $N(M,S)$ defined by
\begin{align*}
X_\text{inv}(x,\tau)&=X(x),\quad(\tau\neq 0)\\
X_\text{inv}|_{N\times\{0\}}&=X_N
\end{align*}
is a smooth vector field $X_\text{inv}\in\mathfrak{X}(N(M,S))$, which is the unique vector field on $N(M,S)$ that equals $X$ on $M\times\mathbb{R}^\ast$, and the unique $\mathbb{R}^\ast$-invariant vector field on $N(M,S)$ that equals $X$ along $M\times\{1\}$.
\end{prop}
\begin{proof}
The invariance and uniqueness are clear assuming that $X_\text{inv}$ is smooth. To show that it is smooth, by \cref{smoothvfnms} the only thing we have to check is that $X_\text{inv}(\delta f)$ and $X_\text{inv}(\tilde{g})$ are smooth for $f\in C^\infty(M)$ with $f|_S=0$ and $g\in C^\infty(M\times\mathbb{R})$. The definition of $X_N$ ensures that
\begin{equation*}
X_\text{inv}(\delta f)=\delta(Xf)
\end{equation*}
\begin{equation*}
X_\text{inv}(\tilde{g})=\widetilde{Xg}
\end{equation*}
where in the second equation $X$ acts on $C^\infty(M\times\mathbb{R})$ as the vector field $X(x,\tau)=X(x)$ on $M\times\mathbb{R}$. By definition $\delta(Xf)$ and $\widetilde{Xg}$ are smooth and so the result follows.
\end{proof}
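To illustrate \cref{vfonnms}: for $M=\mathbb{R}$ and $S=\{0\}$, in the global chart $(v,\tau)\mapsto(\tau v,\tau)$ of $N(\mathbb{R},\{0\})$ described earlier, the vector field $X=x\frac{\partial}{\partial x}$ has invariant extension
\begin{equation*}
X_\text{inv}=v\frac{\partial}{\partial v},
\end{equation*}
while $X=x^2\frac{\partial}{\partial x}$ yields $X_\text{inv}=\tau v^2\frac{\partial}{\partial v}$; restricting to $\tau=0$ recovers in both cases the linearization $X_N$ of \cref{vectorfieldnormalbundle}.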
\subsubsection{The adiabatic groupoid as a deformation to the normal cone}
We can now apply this to the case $M\hookrightarrow\mathcal{G}$ with normal bundle $A=\ker ds|_M$. The fact that the source, target and division maps are smooth follows from the fact that away from $\tau=0$ they are just the respective maps of the original groupoid, while along $\tau=0$ they are the normal derivatives of the respective maps; a general principle for deformations to normal cones then implies that they are smooth. We note that an exponential map can be obtained by choosing a connection on $A$, see \cite{nwx} and \cite{landsmanboek}.
Next we want to describe the nerve of the adiabatic groupoid. As a set it equals $(\mathcal{G}_{\text{ad}})^{(k)}=\mathcal{G}^{(k)}\times\mathbb{R}^\ast\sqcup A^{\oplus k}\times\{0\}$. From the viewpoint of defining vector fields on the nerve of the adiabatic groupoid, this set-theoretic description leads us to search for a relation between $A^{\oplus k}$ and the normal bundle of $M$, embedded in $\mathcal{G}^{(k)}$ as the diagonal of units.
\begin{lem}\label{normalbundenerve}
Let $\mathcal{G}\rightrightarrows M$ be a Lie groupoid with $\Delta: M\to\mathcal{G}^{(k)}$ the diagonal inclusion via the units. The vector bundle map $\nu: A^{\oplus k}\to\Delta^\ast T\mathcal{G}^{(k)}$ given by
\begin{equation*}
\nu(v_1,...,v_k)=(v_1+\sum_{i=2}^k du(dt(v_i)),v_2+\sum_{i=3}^k du(dt(v_i)),...,v_{k-1}+du(dt(v_k)),v_k)
\end{equation*}
induces an isomorphism between $A^{\oplus k}$ and the normal bundle of $M$ inside $\mathcal{G}^{(k)}$.
\end{lem}
\begin{proof}
First one checks that $\nu$ indeed maps into the tangent space of $\mathcal{G}^{(k)}\subset\mathcal{G}^{\times k}$, which is a simple calculation. Next, to show that it induces an isomorphism onto the normal bundle of $\Delta$, we first use the decomposition $T_{1_x}\mathcal{G}=A_x\oplus T_xM$ to see that if $\nu(v_1,...,v_k)\in T_xM\subset T_{\Delta(x)}\mathcal{G}^{(k)}$ then $(v_1,...,v_k)=0$, so that the map into the normal bundle is injective. Dimension counting then implies that the induced map is an isomorphism.
\end{proof}
\begin{cor}
There is a natural isomorphism between $N(\mathcal{G}^{(k)},M)$ and $\mathcal{G}_{\text{ad}}^{(k)}$ which away from $\tau=0$ links $((g_1,...,g_k),\tau)$ and $((g_1,\tau),...,(g_k,\tau))$.
\end{cor}
\subsubsection{Haar systems on the adiabatic groupoid}
We intend to link deformation quantizations of the Poisson manifold $A^\ast$ with the van Est map $\mathcal{V}:\hat{C}^\bullet_\text{def}(\mathcal{G})\to C^\bullet_\text{def}(A)$. To make the syntax line up, we need to explicitly write down isomorphisms between smooth functions on $\mathcal{G}$ and elements of the convolution algebra. This is done via Haar systems, which we describe here in terms of densities.
\begin{defi}A Haar system on a groupoid $\mathcal{G}\rightrightarrows M$ is a collection $\lambda=\{\lambda_x\}_{x\in M}$ of positive sections $\lambda_x\in\Gamma(\mathcal{D}_s|_{s^{-1}(x)})$ that are invariant under right translations $R_g: s^{-1}(t(g))\to s^{-1}(s(g))$ and such that for every compactly supported function $f\in C^\infty_c(\mathcal{G})$ the map $\lambda(f): M\to\mathbb{R}$ given by
\begin{equation*}
\lambda(f)(x)=\int_{s^{-1}(x)}f(g)\lambda_x(g)
\end{equation*}
is smooth.
\end{defi}
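For example, on the pair groupoid $M\times M\rightrightarrows M$, with $s$ the second projection, any choice of a positive density $\mu$ on $M$ defines a Haar system by setting $\lambda_x=\mu$ under the identification $s^{-1}(x)\cong M$; since right translations act as the identity in the first coordinate, invariance is automatic.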
We know that every Lie groupoid admits a Haar system (\cite[Prop 3.4]{landsman}) and if we have a Haar system $\lambda$ on a Lie groupoid $\mathcal{G}\rightrightarrows M$ with $s$-fibers of dimension $d$, we can (\cite[p.19]{landsman}) induce a Haar system $\hat{\lambda}$ on $\mathcal{G}_{\text{ad}}$ given by
\begin{equation*}
\hat{\lambda}(g,\tau)=|\tau|^{-d}\lambda(g)
\end{equation*}
\begin{equation*}
\hat{\lambda}(v,0)=\lambda(\pi(v))
\end{equation*}
Here $\pi: A\to M$ is the projection and we take the canonical isomorphism $\ker(d\pi)\cong\pi^\ast(A)$ as a given.
Note that in particular we obtain a Haar system on the vector bundle $A\to M$, seen as a groupoid in the canonical way.
The choice of a Haar system induces an isomorphism between the sheaf of smooth functions on $\mathcal{G}$ and the sheaf of densities along the source fibers, and hence we can transport the convolution product over to the compactly supported functions where it is given by:
\begin{equation*}
(f_1\ast f_2)(g)=\int_{s^{-1}(s(g))}f_1(gh^{-1})f_2(h)\lambda_{s(g)}(h)
\end{equation*}
In particular on the adiabatic groupoid $\mathcal{G}_{\text{ad}}$ if we have two compactly supported functions $f_1,f_2$ we obtain:
\begin{equation*}
(f_1\ast f_2)(g,\tau)=|\tau|^{-d}\int_{s^{-1}(s(g))}f_1(gh^{-1},\tau)f_2(h,\tau)\lambda_{s(g)}(h)\,\,\,\,(\tau\neq 0)
\end{equation*}
\begin{equation*}
(f_1\ast f_2)(v,0)=\int_{A_{\pi(v)}}f_1(v-w,0)f_2(w,0)\lambda_{\pi(v)}(w)
\end{equation*}
At this point we notice that the convolution at $\tau=0$ does not require the functions to be compactly supported on $A_x$; being Schwartz is enough (c.f.\ the usual theory of the Fourier transform on $\mathbb{R}^n$). This allows us, in the case of $\mathcal{G}_{\text{ad}}$, to enlarge the class of functions/densities on which we let the deformation complex act.
To this end we refer to the work of \cite{cr}, where a Fr\'echet algebra $\mathscr{S}_c(\mathcal{G}_{\text{ad}})$ is constructed with evaluations
\[
\mathscr{S}_c(\mathcal{G}_{\text{ad}})_t=\begin{cases} \mathscr{S}_c(A)& t=0\\ C_c^\infty(\mathcal{G})&t\not = 0.\end{cases}
\]
Here $\mathscr{S}_c(A)$ denotes the space of functions that are Schwartz along the fibers of the Lie algebroid and have compact support along $M$. This Schwartz type algebra should be thought of as a dense subalgebra of the reduced $C^*$-algebra $C^*_r(\mathcal{G}_{\text{ad}})$.
By the discussion above, the convolution product is perfectly well-defined on $\mathscr{S}_c(\mathcal{G}_{\text{ad}})$, and we can extend our viewpoint of the map $\Phi: C^\bullet_\text{def}(\mathcal{G}_{\text{ad}})\to C^\bullet_\text{Hoch}(\mathcal{A}_{\mathcal{G}_{\text{ad}}})$ to let $\Phi(c)$ (for $c\in C^k_\text{def}(\mathcal{G}_{\text{ad}})$) act on functions in $\mathscr{S}_c(\mathcal{G}_{\text{ad}})$. At this point it should be remarked that the isomorphism between functions and densities induced by a Haar system does not preserve the action of vector fields (indeed, on the level of densities one also needs to compare $\mathcal{L}_X\lambda$ with $\lambda$!). So really we should introduce, in parallel to $\mathscr{S}_c(\mathcal{G}_{\text{ad}})$, the notion of densities which are of Schwartz type along $\tau=0$, but for the sake of not being overly pedantic we will not do this and just be careful when writing down the action of $\Phi(c)$.
In what follows for a smooth family $\{f_t\}_{t\neq 0}$ of compactly supported functions on $\mathcal{G}$ and $f'\in\mathscr{S}_c(A)$ we will use the notation
\begin{equation*}
\lim_{t\to 0} f_t=f'
\end{equation*}
if the function $F:\mathcal{G}_{\text{ad}}\to\mathbb{R}$ given by
\begin{equation*}
F(g,t)=f_t(g)
\end{equation*}
\begin{equation*}
F(v,0)=f'(v)
\end{equation*}
is an element of $\mathscr{S}_c(\mathcal{G}_{\text{ad}})$.
\subsection{Fourier transform on vector bundles}
We briefly discuss the notion of Fourier transform on a vector bundle $E\to M$ given the choice of a Haar system on $E$. This discussion follows the results of Landsman and Ramazan \cite[\S 7]{landsman}. Recall that a vector bundle $\pi: E\to M$ can be seen as a groupoid over $M$ where both the source and the target map are the projection $\pi$ and the multiplication is the fiberwise addition. Since $\ker(d\pi)\cong\pi^\ast E$, a Haar system amounts to a choice, at every $v\in E$, of a density on $E_{\pi(v)}$ that is invariant, where invariance in this case means that the choice is constant along each fiber.
If we choose such a Haar system $\{\mu_x\}_{x\in M}$, in \cite{landsman} the Fourier transform $\mathcal{F}_\mu:\mathscr{S}(E)\to\mathscr{S}(E^\ast)$ was defined by
\begin{equation*}
(\mathcal{F}_\mu f)(\xi_x)=\int_{E_x} f(v)e^{-i\langle\xi_x,v\rangle}d\mu_x(v)
\end{equation*}
Furthermore, it was shown that this map is a linear isomorphism which intertwines the $\mu$-convolution product on $E$ and the pointwise product on $E^\ast$, and when $(x,v)$ are coordinates on $E$ induced by a frame with dual coordinates $(x,\xi)$ we have for $f\in\mathscr{S}(E)$, $g\in\mathscr{S}(E^\ast)$ and $a\in C^\infty(M)$ that
\begin{align*}
\mathcal{F}_\mu((a\circ\pi)f)&=(a\circ \pi)\mathcal{F}_\mu(f)\\
\frac{\partial\mathcal{F}_\mu(f)}{\partial x_j}&=\mathcal{F}_\mu\left(\frac{\partial f}{\partial x_j}\right)+\left(\frac{\partial\log(\mu_e)}{\partial x_j}\circ\pi\right)\mathcal{F}_\mu(f)\\
\frac{\partial\mathcal{F}_\mu(f)}{\partial \xi_j}&=-i\mathcal{F}_\mu(v_j f)\\
\frac{\partial\mathcal{F}_\mu^{-1}(g)}{\partial v_j}&=i\mathcal{F}_\mu^{-1}(\xi_jg)
\end{align*}
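As a quick sanity check (ours, not from \cite{landsman}), the second of these identities can be verified symbolically on the trivial bundle $\mathbb{R}_x\times\mathbb{R}_v\to\mathbb{R}_x$, with a hypothetical Haar density $m(x)\,dv$ and a Gaussian test function:
\begin{verbatim}
# Symbolic check of d/dx F(f) = F(df/dx) + (d log(m)/dx) F(f) for the
# fiberwise Fourier transform with respect to the Haar density m(x) dv.
import sympy as sp

x, v, xi = sp.symbols('x v xi', real=True)
m = sp.exp(x)                    # hypothetical positive density m(x)
f = sp.sin(x) * sp.exp(-v**2)    # Schwartz along the fibers

def F(g):
    return sp.integrate(g * sp.exp(-sp.I * xi * v) * m, (v, -sp.oo, sp.oo))

lhs = sp.diff(F(f), x)
rhs = F(sp.diff(f, x)) + sp.diff(sp.log(m), x) * F(f)
print(sp.simplify(lhs - rhs))    # 0
\end{verbatim}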
Note that after the choice of a Haar system $\mu$ we obtain an isomorphism between the algebra of functions $C^\infty_c(E)$ with the $\mu$-convolution product and the convolution algebra $\mathcal{A}_E$ of densities with the (intrinsic) convolution product. In particular if $X\in\mathfrak{X}(E)$, we can see $\Phi(X)$ as defined on functions (which is, again, not equal to the usual action of vector fields on functions), and we can extend the action to Schwartz functions.
Now using the Fourier transform, we can transport the action on the convolution algebra of $E$ to an action on the usual algebra with the pointwise product on $E^\ast$.
\begin{prop}
Let $X$ be a linear vector field on $E$. Then the map $\hat{X}:\mathscr{S}(E^\ast)\to\mathscr{S}(E^\ast)$ given by
\begin{equation*}
\hat{X}(f)=\mathcal{F}_\mu(\Phi(X)(\mathcal{F}_\mu^{-1}(f)))
\end{equation*}
defines a linear vector field on $E^\ast$. Here $\Phi$ is the natural chain map we defined before, applied to the vector bundle $E$ seen as a groupoid.
\end{prop}
\begin{proof}
First we show that $\hat{X}$ is indeed a vector field, i.e. a derivation with respect to the pointwise product. Since $\hat{X}$ is the conjugation of $\Phi(X)$ with an isomorphism which intertwines the convolution product on $\mathscr{S}(E)$ and the pointwise product on $\mathscr{S}(E^\ast)$ this is equivalent to showing that $\Phi(X)$ is a derivation for the convolution product. When we see $E\to M$ as a groupoid, this is equivalent to showing that $X$ is a multiplicative vector field, and it is easy to see that on a vector bundle the multiplicative vector fields are precisely the linear vector fields.
To see that $\hat{X}$ is a linear vector field we do a local computation on a trivial vector bundle $E=\mathbb{R}^m_x\times\mathbb{R}^n_v\to\mathbb{R}^m_x$ with Haar system $f(x)dv_1\wedge\cdots\wedge dv_n$. Using the properties of the Fourier transform stated before it follows that if
\begin{equation*}
X(x,v)=\sum_{i=1}^mX_i(x)\frac{\partial}{\partial x_i}+\sum_{j=1}^n\sum_{k=1}^n Y_{jk}(x)v_j\frac{\partial}{\partial v_k}
\end{equation*}
then
\begin{equation*}
\hat{X}(x,\xi)=\sum_{i=1}^mX_i(x)\frac{\partial}{\partial x_i}-\sum_{j=1}^n\sum_{k=1}^nY_{jk}(x)\xi_k\frac{\partial}{\partial\xi_j}
\end{equation*}
which indeed shows that $\hat{X}$ is a linear vector field.
\end{proof}
Recall that a linear vector field $X\in\mathfrak{X}(E)$ is the same as a linear map $X:\Gamma(E^\ast)\to\Gamma(E^\ast)$ with a symbol $s_X\in\mathfrak{X}(M)$ such that
\begin{equation*}
X(f\alpha)=fX(\alpha)+s_X(f)\alpha\qquad(f\in C^\infty(M),~\alpha\in\Gamma(E^\ast)).
\end{equation*}
Furthermore, recall the canonical pairing $\langle-,-\rangle:\Gamma(E^\ast)\times\Gamma(E)\to C^\infty(M)$. Then for a linear vector field $X$, the local calculation from the proof above generalizes to the following.
\begin{prop}\label{xhat}
Let $X\in\mathfrak{X}(E)$ be a linear vector field, then the linear vector field $\hat{X}\in\mathfrak{X}(E^\ast)$ is uniquely determined by the fact that for $\beta\in\Gamma(E^\ast)$ and $\alpha\in\Gamma(E)$
\begin{equation*}
\langle\beta,\hat{X}(\alpha)\rangle+\langle X(\beta),\alpha\rangle=s_X(\langle\beta,\alpha\rangle)
\end{equation*}
\end{prop}
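In local coordinates this characterization can be verified directly; the following snippet (our illustration, using the rank-one local formulas $X=X_1(x)\partial_x+Y(x)v\partial_v$ and $\hat{X}=X_1(x)\partial_x-Y(x)\xi\partial_\xi$ obtained in the proof above) checks the identity symbolically:
\begin{verbatim}
# Check <beta, Xhat(alpha)> + <X(beta), alpha> = s_X(<beta, alpha>) in rank one.
import sympy as sp

x, v, xi = sp.symbols('x v xi')
X1, Y, a, b = (sp.Function(s)(x) for s in ('X1', 'Y', 'a', 'b'))

beta  = b * v     # section of E^*, viewed as a fiberwise linear function on E
alpha = a * xi    # section of E,   viewed as a fiberwise linear function on E^*

X_beta     = (X1 * sp.diff(beta, x)  + Y * v * sp.diff(beta, v)).coeff(v)
Xhat_alpha = (X1 * sp.diff(alpha, x) - Y * xi * sp.diff(alpha, xi)).coeff(xi)

lhs = b * Xhat_alpha + X_beta * a    # <beta, Xhat(alpha)> + <X(beta), alpha>
rhs = X1 * sp.diff(a * b, x)         # s_X(<beta, alpha>), with s_X = X1 d/dx
print(sp.simplify(lhs - rhs))        # 0
\end{verbatim}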
We can play a similar game, albeit slightly more involved in notation, for higher order deformation elements of the vector bundle. So consider an element $X\in\tilde{C}^k_\text{def}(E)$ given by
\begin{equation*}
X(v_1,...,v_k)=X_1(v_1)\langle \beta_2,v_2\rangle\cdots\langle \beta_k,v_k\rangle
\end{equation*}
where $X_1$ is a linear vector field on $E$ and $\beta_2,...,\beta_k\in\Gamma(E^\ast)$. One immediately checks that this is a closed element of $\tilde{C}^k_\text{def}(E)$, so that the Fourier transform
\begin{equation*}
\hat{X}(f_1,...,f_k)=\mathcal{F}_\mu(\Phi(X)(\mathcal{F}_\mu^{-1}(f_1),...,\mathcal{F}_\mu^{-1}(f_k)))
\end{equation*}
is a closed element of the Hochschild complex of $C^\infty(E^\ast)$. By the specific form of $X$ it is easy to see that
\begin{equation*}
\Phi(X)(a_1,...,a_k)=\Phi(X_1)(a_1)\ast (\beta_2a_2)\ast\cdots\ast(\beta_k a_k)
\end{equation*}
where we see the $\beta_i$ as fiberwise linear maps on $E$. In particular we see that
\begin{equation*}
\hat{X}=\hat{X_1}\otimes\hat{\beta_2}\otimes\cdots\otimes\hat{\beta_k}
\end{equation*}
where for $\beta\in\Gamma(E^\ast)$, $\hat{\beta}$ is the vector field on $E^\ast$ given by
\begin{equation*}
\hat{\beta}(f)=\mathcal{F}_\mu(\beta\mathcal{F}_\mu^{-1}(f))
\end{equation*}
A local computation shows that $\hat{\beta}$ is identically zero on fiberwise constant functions, and for the map induced by a section $\alpha\in\Gamma(E)$ we have
\begin{equation*}
\hat{\beta}(\alpha)=\frac{1}{i}\langle \beta,\alpha\rangle
\end{equation*}
In particular, we see that if we anti-symmetrize, we obtain the linear multivector field $\hat{X_1}\wedge\hat{\beta_2}\wedge\cdots\wedge\hat{\beta_k}$ on $E^\ast$.
\subsection{Deformation quantization of $A^\ast$ and the Van Est map}
Now, fix a choice of a Haar system on $\mathcal{G}$, which by the discussion above induces a Haar system on $\mathcal{G}_{\text{ad}}$ and a Haar system $\mu$ on $A\to M$. The latter ensures that we can talk about a Fourier transform $\mathcal{F}_\mu:\mathscr{S}(A)\to\mathscr{S}(A^\ast)$.
Slightly tweaking the results of \cite{landsman} we obtain {\em quantization maps} $q_t:\mathscr{S}_c(A^*)\to C^\infty_c(\mathcal{G}),~t\not = 0$ given by
\[
q_t(f)(g):=\chi(g)\mathcal{F}_\mu^{-1}(f)(\frac{1}{t}\exp^{-1}(g)),
\]
which satisfy
\begin{equation}
\label{quant}
\lim_{t\to 0}(q_t(f_1 f_2)-q_t(f_1)\ast q_t(f_2))=0,\quad \lim_{t\to 0}(\frac{1}{it}[q_t(f_1),q_t(f_2)]-q_t(\{f_1,f_2\}))=0.
\end{equation}
Here $\chi\in C^\infty_c(\mathcal{G})$ is a cut-off function that equals $1$ in a neighborhood of $M\subset\mathcal{G}$
with support inside an open neighbourhood of the units onto which the exponential map is a diffeomorphism. The Poisson
bracket $\{~,~\}$ is the bracket associated to the so-called Lie--Poisson structure on $A^*$.
Explicitly, one of the differences with the results of \cite{landsman} is that we do not need the property $q_t(f^\ast)=q_t(f)^\ast$, for which the Weyl exponential map $\text{exp}^{\text{W}}$ is used, and instead we can use the normal exponential map. Secondly, we do not need to restrict to Paley-Wiener functions, as we allow for Schwartz-type functions at $t=0$ and use the cut-off function on the level of $\mathcal{G}$ instead of $A$, the deviation vanishing as $t$ approaches $0$. Lastly, as the relevant calculations on the local forms in $A$ and $A^\ast$ are valid for all Schwartz functions and not just Paley-Wiener functions, the relevant propositions in \cite{landsman} still hold in this situation. The variety of quantizations obtained by using different types of exponential maps is also reflected on the more algebraic level in \cite{nw} by using different orderings in the Fedosov construction of {\em formal} deformation quantizations of $A^*$.
We now briefly recall the van Est map as given in \cite[\S 10]{cms}. First, the deformation complex of the algebroid $C^k_\text{def}(A)$ is given by antisymmetric multilinear maps $D: \Gamma(A)^k\to\Gamma(A)$ that have a symbol $s_D:\Gamma(A)^{k-1}\to\mathfrak{X}(M)$ such that
\begin{equation*}
D(\alpha_1,...,f\alpha_k)=fD(\alpha_1,...,\alpha_k)+s_D(\alpha_1,...,\alpha_{k-1})(f)\alpha_k
\end{equation*}
Note that we can, and will, see elements of $C^k_\text{def}(A)$ as linear multivector fields on $A^\ast$ (by noting that sections of $A$ are the same as fiberwise linear maps on $A^\ast$) and in turn see the deformation complex of $A$ as the linear Poisson complex of the Poisson manifold $A^\ast$.
Then for $\alpha\in\Gamma(A)$ there are maps $R_\alpha:\tilde{C}^k_\text{def}(\mathcal{G})\to\tilde{C}^{k-1}_\text{def}(\mathcal{G})$ which are given for $k=1$ by
\begin{equation*}
R_\alpha(c)=[c,\overrightarrow{\alpha}]|_M
\end{equation*}
and for $k>1$ by
\begin{equation*}
R_\alpha(c)(g_1,...,g_{k-1})=(-1)^{k-1} \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} c(g_1,...,g_{k-1},\Phi^\epsilon_{\overrightarrow{\alpha}}(s(g_{k-1}))^{-1})
\end{equation*}
The van Est map $\mathcal{V}:\tilde{C}^k_\text{def}(\mathcal{G})\to C^k_\text{def}(A)$ is then given by
\begin{equation*}
\mathcal{V}(c)(\alpha_1,...,\alpha_k)=\sum_{\sigma\in S_k}(-1)^\sigma (R_{\alpha_{\sigma(k)}}\circ\cdots\circ R_{\alpha_{\sigma(1)}})(c)
\end{equation*}
The connection between the van Est map and the quantization maps is then as follows.
\begin{thm}
Let $k\geq 1$ and $c\in \tilde{C}^k_{\text{def}}(\mathcal{G})$, and suppose we have chosen a Haar system on $\mathcal{G}$ inducing a Haar system $\mu$ on the algebroid $A$. Given $f_1,\ldots,f_k\in \mathscr{S}_c(A^*)$, the following
equality holds true:
\[
\mathcal{V}(c)(f_1,\ldots,f_k)=\mathcal{F}_\mu\left(\lim_{t\to 0}\left(\sum_{\sigma\in S_k}(-1)^\sigma\frac{1}{(it)^{k-1}}\Phi(c)(q_t(f_{\sigma(1)}),\ldots,q_t(f_{\sigma(k)}))\right)\right)
\]
\end{thm}
\begin{rmk}
Note that the right hand side of the equation above is well-defined, since the sum of which we take the limit is a function on the groupoid. The limit is then a Schwartz function on the algebroid and so if we take the Fourier transform we obtain a Schwartz function on the dual of the algebroid.
\end{rmk}
\begin{proof}
We start with the case $k=1$. First note that for $f\in\mathscr{S}_c(A^\ast)$ the map $q(f):\mathcal{G}_{\text{ad}}\to\mathbb{R}$ given by
\begin{align*}
q(f)(g,t)&=q_t(f)(g)\\
q(f)(v,0)&=\mathcal{F}_\mu^{-1}(f)(v)
\end{align*}
is an element of $\mathscr{S}_c(\mathcal{G}_{\text{ad}})$. Then note that the family $\{c\}_{t\neq 0}$ is a family of vector fields on $\mathcal{G}$ which can be extended to a vector field on $\mathcal{G}_{\text{ad}}$, namely to the vector field $c_\text{inv}$ obtained by \cref{vfonnms}. Then notice that $\Phi(c_\text{inv})(q(f))$ is an element of $\mathscr{S}_c(\mathcal{G}_{\text{ad}})$ consisting of
\begin{align*}
\Phi(c_\text{inv})(q(f))_t&=\Phi(c)(q_t(f)),\qquad(t\neq 0)\\
\Phi(c_\text{inv})(q(f))_0&=\Phi(c_0)(\mathcal{F}_\mu^{-1}(f))
\end{align*}
where $c_0$ is the linear vector field that is the restriction of $c_\text{eqv}$ to $t=0$. Note that it is linear, since it is the application of \cref{vectorfieldnormalbundle} to the vector field $c$ on $\mathcal{G}$. In particular we see that
\begin{equation*}
\lim_{t\to 0}\Phi(c)(q_t(f))=\Phi(c_0)(\mathcal{F}_\mu^{-1}(f))
\end{equation*}
and so we need to show that $\mathcal{V}(c)=\hat{c_0}$.
By \cref{xhat} this means that we need to show that for $\beta\in\Gamma(A^\ast)$ and $\alpha\in\Gamma(A)$ we have
\begin{equation*}
\langle\beta,\mathcal{V}(c)(\alpha)\rangle+\langle c_0(\beta),\alpha\rangle=s_c(\langle\beta,\alpha\rangle)
\end{equation*}
We note two things. First, every $\beta\in\Gamma(A^\ast)=C^\infty_\text{lin}(A)$ can be written as $d_nh$ for $h\in C^\infty(\mathcal{G})$ with $h|_M=0$. Second, since we have an explicit inclusion of $A$ into the tangent bundle of $\mathcal{G}$, we have:
\begin{equation*}
\langle d_nh,\alpha\rangle(x)=\alpha(x)(h)
\end{equation*}
We are now ready to show the equality. First we have
\begin{align*}
\langle d_nh,\mathcal{V}(c)(\alpha)\rangle(x)&=[c,\overrightarrow{\alpha}](1_x)(h)\\
&=c(1_x)(\overrightarrow{\alpha}(h))-\alpha(x)(c(h))
\end{align*}
\begin{equation*}
\langle c_0(d_n h),\alpha\rangle(x)=\langle d_n(ch),\alpha\rangle(x)=\alpha(x)(c(h))
\end{equation*}
and since $c_{1_x}=du(s_c(x))$ combined with $\overrightarrow{\alpha}|_M=\alpha$ we have
\begin{equation*}
s_c(\langle d_nh,\alpha\rangle)(x)=s_c(\overrightarrow{\alpha}(h)|_M)(x)=c(1_x)(\overrightarrow{\alpha}(h))
\end{equation*}
For $k>1$ we restrict to the case where $c=c_1\otimes h_2\otimes\cdots\otimes h_k$ with $c_1\in\mathfrak{X}(\mathcal{G})$ and $h_2,...,h_k\in C^\infty(\mathcal{G})$. For $c$ to be an element of $\tilde{C}^k_\text{def}(\mathcal{G})$ it is necessary and sufficient to have $c_1\in\tilde{C}^1_\text{def}(\mathcal{G})$ and $h_i|_M=0$. Similar to the case $k=1$ we note that
\begin{equation*}
\frac{1}{(it)^{k-1}}\Phi(c)(q_t(f_1),...,q_t(f_k))=\Phi(\frac{1}{(it)^{k-1}}c)(q_t(f_1),...,q_t(f_k))
\end{equation*}
which, as $t\to 0$, converges to
\begin{equation*}
\Phi(c_0)(\mathcal{F}_\mu^{-1}(f_1),...,\mathcal{F}_\mu^{-1}(f_k))
\end{equation*}
if we find an element $c_0$ on $A$ that together with the family $\{\frac{1}{(it)^{k-1}}c\}_{t\neq 0}$ defines a smooth deformation element of $\mathcal{G}_{\text{ad}}$.
To calculate this localization we remark that we can do the calculation in $\mathcal{G}^k$ using the Cartesian product of the exponential map $A\to\mathcal{G}$, instead of working in $\mathcal{G}^{(k)}$ and using the machinery of the previous section. This is for two reasons. Firstly, our definition of $c$ extends to $\mathcal{G}^k$. Secondly, the difference between $(v_1,...,v_k)\in A^{\oplus k}$ seen as tangent vectors on $\mathcal{G}^k$ and $(v_1,...,v_k)\in A^{\oplus k}$ seen as tangent vectors in $\mathcal{G}^{(k)}$ normal to the units (using the isomorphism of \cref{normalbundenerve}) consists of tangent vectors in $\mathcal{G}^k$ along the units. Since $c$ vanishes along the units, we can neglect this difference.
Now to do the actual calculation we consider the chart $\theta: A^{\oplus k}\times\mathbb{R}^\ast\to\mathcal{G}^k\times\mathbb{R}^\ast$ given by
\begin{equation*}
\theta(v_1,...,v_k,t)=(\exp(tv_1),...,\exp(tv_k),t)
\end{equation*}
Then if we look at the family $\{\frac{1}{(it)^{k-1}}c\}_{t\neq 0}$, we see that if we take the pullback along $\theta$ we obtain:
\begin{equation*}
\theta^\ast(\{\frac{1}{(it)^{k-1}}c\}_{t\neq 0})(v_1,...,v_k,t)=\frac{1}{i^{k-1}t^k}c_1(\exp(tv_1))h_2(\exp(tv_2))\cdots h_k(\exp(tv_k))
\end{equation*}
Distributing the $k$ powers of $\frac{1}{t}$ over the $k$ different terms we see that
\begin{equation*}
c_0(v_1,...,v_k)=\frac{1}{i^{k-1}}(c_1)_0(v_1)d_nh_2(v_2)\cdots d_nh_k(v_k)
\end{equation*}
since
\begin{equation*}
\frac{1}{t}c_1(\exp(tv_1))\to (c_1)_0(v_1)
\end{equation*}
\begin{equation*}
\frac{1}{t}h(\exp(tv))\to d_nh(v)
\end{equation*}
as $t\to 0$, so we see that $c_0=\frac{1}{i^{k-1}}(c_1)_0\otimes d_nh_2\otimes\cdots\otimes d_nh_k$, which is a linear deformation element, and we want to show that $\mathcal{V}(c)$ is the anti-symmetrization of the Fourier transform $\hat{c_0}$. By the discussion at the end of the previous subsection we see that $\hat{c_0}$ is determined for $\alpha_1,...,\alpha_k\in\Gamma(A)$ by
\begin{equation*}
\hat{c_0}(\alpha_1,...,\alpha_k)=\frac{1}{i^{2(k-1)}}\hat{(c_1)_0}(\alpha_1)\langle d_nh_2,\alpha_2\rangle\cdots\langle d_nh_k,\alpha_k\rangle
\end{equation*}
Next we investigate $R_\alpha(c)$; we obtain:
\begin{align*}
R_\alpha(c)(g_1,...,g_{k-1})&=(-1)^{k-1}\left.\frac{d}{d\epsilon}\right|_{\epsilon=0}c_1(g_1)h_2(g_2)\cdots h_{k-1}(g_{k-1})h_k(\Phi^\epsilon_{\overrightarrow{\alpha}}(s(g_{k-1}))^{-1})\\
&=(-1)^{k-1}c_1(g_1)h_2(g_2)\cdots h_{k-1}(g_{k-1})dh_k(d\iota(\alpha(s(g_{k-1}))))
\end{align*}
Then since $h_k|_M=0$ and, for $v\in A_x$, $d\iota v=-v+d(u\circ t)(v)$, we obtain
\begin{equation*}
R_\alpha(c)(g_1,...,g_{k-1})=(-1)^k c_1(g_1)h_2(g_2)\cdots h_{k-1}(g_{k-1})d_nh_k(\alpha(s(g_{k-1})))
\end{equation*}
Doing this inductively, and using that the flow of $\overrightarrow{\alpha}$ preserves source fibers, we see
\begin{equation*}
(R_{\alpha_2}\circ\cdots\circ R_{\alpha_k})(c)(g)=(-1)^{\frac{(k-1)(k-2)}{2}}c_1(g)d_nh_2(\alpha_2(s(g)))\cdots d_nh_k(\alpha_k(s(g)))
\end{equation*}
Then since this is simply $c_1$ multiplied with a function that is constant along the $s$-fibers, we obtain:
\begin{align*}
(R_{\alpha_1}\circ\cdots \circ R_{\alpha_k})(c)&=(-1)^{\frac{(k-1)(k-2)}{2}}\mathcal{V}(c_1)(\alpha_1)\langle d_nh_2,\alpha_2\rangle\cdots\langle d_nh_k,\alpha_k\rangle\\
&=i^{(k-1)(k-2)}\mathcal{V}(c_1)(\alpha_1)\langle d_nh_2,\alpha_2\rangle\cdots\langle d_nh_k,\alpha_k\rangle
\end{align*}
Since we already know, by the calculation in the case $k=1$, that $\mathcal{V}(c_1)(\alpha_1)=\hat{(c_1)_0}(\alpha_1)$, we see that
\begin{equation*}
(R_{\alpha_1}\circ\cdots \circ R_{\alpha_k})(c)=i^{k(k-1)}\hat{c_0}(\alpha_1,...,\alpha_k)
\end{equation*}
Then note that there is a mismatch between the summation over $S_k$ in $\mathcal{V}(c)$ and the one on the right hand side of the theorem. In particular, the left hand side of the last equation corresponds to the identity permutation in the statement of the theorem, while in the definition of $\mathcal{V}(c)$ it corresponds to the permutation that sends $j$ to $k+1-j$. The sign of this permutation is $(-1)^{\frac{k(k-1)}{2}}$, for which we have to correct, so that we obtain
\begin{align*}
\mathcal{V}(c)(\alpha_1,...,\alpha_k)&=\sum_{\sigma\in S_k}(-1)^\sigma (R_{\alpha_{\sigma(k)}}\circ\cdots\circ R_{\alpha_{\sigma(1)}})(c)\\
&=\sum_{\sigma\in S_k}(-1)^\sigma i^{k(k-1)}(R_{\alpha_{\sigma(1)}}\circ\cdots\circ R_{\alpha_{\sigma(k)}})(c)\\
&=\sum_{\sigma\in S_k}(-1)^\sigma i^{2k(k-1)}\hat{c_0}(\alpha_{\sigma(1)},...,\alpha_{\sigma(k)})\\
&=\sum_{\sigma\in S_k}(-1)^\sigma \hat{c_0}(\alpha_{\sigma(1)},...,\alpha_{\sigma(k)})
\end{align*}
So we see that $\mathcal{V}(c)$ equals the linear multivector field that is the antisymmetrization of $\hat{c_0}$. In particular this means that for $f_1,...,f_k\in\mathscr{S}_c(A^\ast)$ we have
\begin{align*}
\mathcal{V}(c)(f_1,...,f_k)&=\sum_{\sigma\in S_k}(-1)^\sigma\hat{c_0}(f_{\sigma(1)},...,f_{\sigma(k)})\\
&=\frac{1}{i^{k-1}}\mathcal{F}_\mu\left(\lim_{t\to 0}\left(\sum_{\sigma\in S_k}(-1)^\sigma\frac{1}{(it)^{k-1}}\Phi(c)(q_t(f_{\sigma(1)}),\ldots,q_t(f_{\sigma(k)}))\right)\right)
\end{align*}
This completes the proof.
\end{proof}
\begin{rmk}
This theorem, restricted to multiplicative vector fields, can be viewed as a statement about the ``classical limit'' of certain derivations of the convolution algebra, and looks very similar to certain aspects of the proof of the Atiyah--Singer index theorem given in \cite{enn}. Indeed, it would be interesting to investigate its use in index theory for Lie groupoids, as it
exactly fits into the framework of relating the van Est map to the classical limit, as shown in the index theorem of \cite{ppt}
for smooth groupoid cohomology $H^\bullet_{\rm diff}(\mathcal{G})$.
\end{rmk}
In the previous proof we have only used the fact that $q_t(f)$ converges to $\mathcal{F}_\mu^{-1}(f)$ in $\mathscr{S}_c(\mathcal{G}_{\text{ad}})$ as $t$ goes to $0$; we have not used the properties which make the family $\{q_t\}_{t\neq 0}$ a family of quantization maps, namely their compatibility with the Poisson bracket. However, we have not introduced these specific maps without reason: we will use the fact that
\begin{equation*}
\lim_{t\to 0}(\frac{1}{it}[q_t(f_1),q_t(f_2)])=\lim_{t\to 0}q_t(\{f_1,f_2\})
\end{equation*}
to give an alternative proof of the fact that the van Est map is a {\em chain map}, i.e., compatible with the differentials:
\begin{cor}
The van Est map $\mathcal{V}:\tilde{C}_\text{def}^\bullet(\mathcal{G})\to C^\bullet_{\text{Pois,lin}}(A^\ast)$ is a chain map.
\end{cor}
\begin{proof}
Let $c\in\tilde{C}^k_\text{def}(\mathcal{G})$ for $k\geq 1$; we start by dissecting $\mathcal{V}(\delta c)$. Using the previous theorem we obtain
\small
\begin{align*}
\mathcal{V}(\delta c)(f_1,...,f_{k+1})=&\mathcal{F}_\mu\left(\lim_{t\to 0}\left(\sum_{\sigma\in S_{k+1}}(-1)^\sigma\frac{1}{(it)^{k}}\Phi(\delta c)(q_t(f_{\sigma(1)}),\ldots,q_t(f_{\sigma(k+1)}))\right)\right)\\
=&\mathcal{F}_\mu\left(\lim_{t\to 0}\left(\sum_{\sigma\in S_{k+1}}(-1)^\sigma\frac{1}{(it)^{k}}(\delta_\text{Hoch}\Phi(c))(q_t(f_{\sigma(1)}),\ldots,q_t(f_{\sigma(k+1)}))\right)\right)\\
=&\mathcal{F}_\mu\left(\lim_{t\to 0}\left(\sum_{\sigma\in S_{k+1}}(-1)^\sigma\frac{1}{(it)^k}[q_t(f_{\sigma(1)}),\Phi(c)(q_t(f_{\sigma(2)}),...,q_t(f_{\sigma(k+1)}))]\right)\right)\\
&+\mathcal{F}_\mu\left(\lim_{t\to 0}\left(\sum_{j=1}^k\sum_{\substack{\sigma\in S_{k+1}\\\sigma^{-1}(j)<\sigma^{-1}(j+1)}}(-1)^\sigma (-1)^j\frac{1}{(it)^k}\Phi(c)(q_t(f_{\sigma(1)}),...,[q_t(f_{\sigma(j)}),q_t(f_{\sigma(j+1)})],...,q_t(f_{\sigma(k+1)}))\right)\right)
\end{align*}
\normalsize
Now, by the relation (\ref{quant}) between the commutator, the Poisson bracket and the quantization maps, we can use one power of $\frac{1}{it}$ to turn the commutators into Poisson brackets. Also using the fact that $q_t(f)\to \mathcal{F}_\mu^{-1}(f)$ as $t\to 0$, this results in
\small
\begin{align*}
\mathcal{V}(\delta c)(f_1,...,f_{k+1})=&\sum_{\sigma\in S_{k+1}}(-1)^\sigma\left\{f_{\sigma(1)},\mathcal{F}_\mu\left(\lim_{t\to 0}\left(\frac{1}{(it)^{k-1}}\Phi(c)(q_t(f_{\sigma(2)}),...,q_t(f_{\sigma(k+1)}))\right)\right)\right\}\\
&+\mathcal{F}_\mu\left(\lim_{t\to 0}\left(\sum_{j=1}^k\sum_{\substack{\sigma\in S_{k+1}\\\sigma^{-1}(j)<\sigma^{-1}(j+1)}}(-1)^\sigma (-1)^j\frac{1}{(it)^{k-1}}\Phi(c)(q_t(f_{\sigma(1)}),...,q_t(\{f_{\sigma(j)},f_{\sigma(j+1)}\}),...,q_t(f_{\sigma(k+1)}))\right)\right)
\end{align*}
\normalsize
Then using the previous Theorem in reverse order we see that this leads to
\begin{align*}
\mathcal{V}(\delta c)(f_1,...,f_{k+1})&=\sum_{j=1}^{k+1}(-1)^{j+1}\left\{f_j,\mathcal{V}(c)(f_1,...,\hat{f_j},...,f_{k+1})\right\}\\
&+\sum_{j_1<j_2}(-1)^{j_1+j_2}\mathcal{V}(c)(\{f_{j_1},f_{j_2}\},f_1,...,\hat{f_{j_1}},\hat{f_{j_2}},...,f_{k+1})
\end{align*}
which shows that the van Est map is a chain map.
\end{proof}
\section{Introduction}
Splittings were first considered in \cite{St67} in connection with the problem of tiling the Euclidean space by translates of certain polytopes composed of unit cubes, called $k$-crosses and $k$-semicrosses, see also \cite{HS86} and \cite{ St84, SS94, Sz86, Sz87, SzS09}. Splitter sets are equivalent to codes correcting
single limited magnitude errors in flash memories (see \cite{BE13}, \cite{EB10, HP03, KBE11, KLNY11, KLY12, M96, OSW18, Sc12, Sc14, YKB13} and the references
therein).
Let $G$ be a finite group, written additively, $M$ a set of integers, and $S$ a subset of $G$.
We say that $M$ and $S$ form a \textsl{splitting} of $G$ if every nonzero element $g$ of $G$ has a unique representation of the form $g=ms$ with $m\in M$ and $s\in S$, while $0$ has no such representation.
(Here $ms$ denotes the sum of $m$ $s$'s if $m\geq 0$, and $-((-m)s)$ if $m<0$.)
We write $G\setminus \{0\}=MS$ to indicate that $M$ and $S$ form a splitting of $G$.
$M$ is referred to as the multiplier set and $S$ as the splitter set.
We also say that $M$ splits $G$ with a splitter set $S$, or simply that $M$ splits $G$. A splitting $G\setminus \{0\}=MS$ of a finite group $G$ is called {\it nonsingular} if every element of $M$ is relatively prime to $|G|$, otherwise the splitting is called {\it singular}.
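For concreteness, a splitting of a cyclic group can be checked by brute force; for instance (our toy example), $M=\{1,2\}$ splits $\mathbb{Z}_5$ with splitter set $S=\{1,4\}$:
\begin{verbatim}
# Brute-force check that M and S form a splitting of Z_q: every nonzero
# element of Z_q equals m*s (mod q) for exactly one pair (m, s).
def is_splitting(M, S, q):
    products = sorted((m * s) % q for m in M for s in S)
    return products == list(range(1, q))

print(is_splitting([1, 2], [1, 4], 5))   # True
\end{verbatim}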
The following notations are fixed throughout this paper.
$\bullet$ For an odd prime $p$, a primitive root $g$ modulo $p$, and an
integer $b$ not divisible by $p$, there exists a unique integer
$l\in [0, p - 2]$ such that $g^l\equiv b \pmod{ p}$. It is known as
the index of $b$ relative to the base $g$, and it is denoted by
$ind_g(b)$.
$\bullet$ For any positive integer $q$, let $\mathbb{Z}_q$ be the ring of integers
modulo $q$ and $\mathbb{Z}_q ^\ast= \mathbb{Z}_q \backslash \{0\}$. For $a\in\mathbb{Z}_q ^\ast$, $o(a)$ denotes the order of $a$ in the multiplicative group $\mathbb{Z}_q ^\ast$.
$\bullet$ Let $a, b$ be integers such that $a\le b$, denote
$$[a, b] = \{a, a + 1, a + 2, \ldots , b\}\,\, \mbox{and}$$
$$[a, b]^\ast = \{a, a + 1, a + 2, \ldots, b\}\backslash\{0\}.$$
$\bullet$ Unless additionally defined, we assume that $aT=a\cdot T = \{a\cdot t : t\in T \}$, \, $A+B=\{a+b, a\in A, b\in B\}$ and $AB=A\cdot B = \{a\cdot b, a\in A, b\in B\}$ for any element
$a$ and any sets $A$ and $B$, where $\cdot$ and $+$ are binary operators.
$\bullet$ For a nonempty set $M$, $|M|$ denotes the number of elements in $M$.
\begin{defn}Let $(G, \cdot)$ be an abelian group (written multiplicatively). If each element $g\in G$ can be expressed uniquely in the form
$$g = a \cdot b, a \in A,\,\, b \in B,$$
then the equation $G = A \cdot B$ is called a {\it factorization} of $G$. A non-empty subset $A$ of $G$ is called a {\it direct factor} of $G$ if there exists a subset $B$ such that $G=A\cdot B$ is a factorization. \end{defn}
In 1983, Hickerson \cite{H83} proved the following result.
\begin{prop}{\rm (\cite{H83}, Theorem 2.2.3)}\label{nonsingular}
Let $G$ be a finite group and $M$ a set of nonzero integers.
Then $M$ splits $G$ nonsingularly if and only if $M$ splits $\mathbb{Z}_p$ for each prime divisor $p$ of $|G|$.
\end{prop}
By the above proposition of Hickerson \cite{H83}, for the study of nonsingular splittings of abelian groups we can restrict to the cyclic groups $\mathbb{Z}_p$ of prime order. In this paper, we focus our study on nonsingular splittings of $\mathbb{Z}_p$ for primes $p$.
The arrangement of the paper is as follows: In Section 2, we introduce a new notion. By using a powerful theorem of Kummer and Mills, we prove the main results of this paper in Section 2. In Section 3, we present the related results when $M=[-k_1, k_2]^*$. In Section 4, we give a characterization of the possible splitter sets $B$ such that $[-1, 5]^*$ splits $\mathbb{Z}_p$
with the splitter set $B$.
\section{Direct KM logarithm and nonsingular splittings}
We apply a powerful theorem first proved by Kummer and
generalized by Mills \cite{[M]} to the splittings of cyclic groups.
A $k$-character $\chi$ on $\mathbb{Z}_p$ ($p$ is a prime) is a homomorphism from $\mathbb{Z}_p^\star$ to
$\mathbb{Z}_k$. Let $p_1, \ldots, p_t$ be distinct primes and let $b_1, \ldots, b_t$ be elements of $\mathbb{Z}_k$.
Mills \cite{[M]} obtained the following necessary and sufficient conditions for the existence
of a prime $p$ and corresponding $k$-character $\chi$ such that $\chi(p_i)= b_i, 1\le i\le t$.
\begin{thm}\label{th21} (Kummer-Mills) Let $p_1, \ldots, p_t$ be distinct primes and let
$b_1, \ldots, b_t\in \mathbb{Z}_k$. There is an infinite number of primes $p$ and $k$-characters
$\chi : \left(\mathbb{Z}_p\right)^\ast\to \mathbb{Z}_k$ such that $\chi(p_i) = b_i$ for $1 \le i \le t$ if and only if one of the following holds:
(1) $k$ is odd;
(2) $k = 2m$ where $m$ is odd and (a) for each $p_i$ such that $p_i\equiv1\pmod{4}$ and $p_i$
divides $m$, $b_i$ is even, and (b) for all $p_i\equiv 3\pmod{4}$ which divide $m$, the corresponding $b_i$ all have the same parity;
(3) $k = 4m$ and for each $p_i$ that divides $m$, $b_i$ is even.
Moreover, if there is one such prime $p$ for which a $k$-character exists with prescribed
values at $p_1, \ldots, p_t$, then there is an infinite number. \end{thm}
If $p_1=-1$, then we have the following result.
\begin{thm}\label{th22} (Kummer-Mills) Let $p_1=-1$, let $p_2, \ldots, p_t$ be distinct primes and let
$b_1, \ldots, b_t\in \mathbb{Z}_k$. There is an infinite number of primes $p$ and $k$-characters
$\chi : \left(\mathbb{Z}_p\right)^\ast\to \mathbb{Z}_k$ such that $\chi(p_i) = b_i$ for $1 \le i \le t$ if and only if one of the following holds:
(1) $k = 2m$ where $m$ is odd and (a) for each $p_i$ such that $p_i\equiv1\pmod{4}$ and $p_i$
divides $m$, $b_i$ is even, and (b) $b_1\equiv k/2\pmod{k}$ is odd and for all $p_i\equiv 3\pmod{4}$ which divide $m$, all the corresponding $b_i$ are odd;
(2) $k = 4m$ where $m$ is odd and (a) for each odd prime $p_i$ that divides $m$, $b_i$ is even and (b) $b_1\equiv k/2\pmod{k}$ and $b_i$ is odd for $p_i=2$;
(3) $k = 8m$ and for each prime $p_i$ that divides $m$, $b_i$ is even.
Moreover, if there is one such prime $p$ for which a $k$-character exists with prescribed
values at $p_1, \ldots, p_t$, then there is an infinite number. \end{thm}
Let $k_1$ and $k_2$ be non-negative integers with $k_1\le k_2$ and $k_1+k_2=k\ge3$; a {\it logarithmic function} (of length $k$) is a function $f: [-k_1, k_2]^\ast\to \mathbb{Z}_k$ such that
$f(xy) = f(x)+ f(y)$ whenever $ x, y, xy \in[-k_1, k_2]^\ast$. A {\it logarithm} is a bijective logarithmic function. Logarithms are used in lattice tilings, group theory, number theory, coding theory, and $k$-radius sequences, see Blackburn and McKee \cite{BM12} and the references therein.
Let $p$ be a prime such that $p\equiv1\pmod{k}$; a $k$-character $\chi$ on $\mathbb{Z}_p$ is a homomorphism from $\mathbb{Z}_p^\star$ to
$\mathbb{Z}_k$. If the $k$-character $\chi$ is primitive, i.e., the homomorphism is an epimorphism, then there is a primitive $k$th root of unity $\zeta$ in $\mathbb{Z}_p$ such that $\chi$ is determined by the equation $x^{(p-1)/k}\equiv \zeta^{\chi(x)}\pmod{p}$. Observe that the function $\chi$ determined in this way satisfies $\chi(ab)\equiv \chi(a)+\chi(b)\pmod{k}$ for any $a, b\in\mathbb{Z}_p^\ast$ and the image of $\chi$ is $\mathbb{Z}_k$. We say (following Galovich and
Stein [6], Blackburn and McKee \cite{BM12}) that a logarithm $f$ is a Kummer-Mills-logarithm, or KM-logarithm, if it arises in this
way for some prime $p$.
Now we introduce the following general definitions.
\begin{defn}Let $M$ be a finite subset of nonzero integers. An injective function $f: M\to \mathbb{Z}_k$ is called a {\it direct logarithm} if
$f(xy) = f(x)+ f(y)$ when $ x, y, xy \in M$ and $f(M)$ is a direct factor of $\mathbb{Z}_k$. A {\it direct logarithm} that meets the
conditions of Theorems \ref{th21} and \ref{th22} is called a {\it direct KM-logarithm}.\end{defn}
It is easy to see that $|M|\mid k$ and that a direct KM-logarithm with $|M|=k$ is the usual KM-logarithm. Hence it is a generalization of the usual KM-logarithm. We first prove the following useful proposition.
\begin{prop}\label{mainprop} Let $M$ be a finite set of nonzero integers. Suppose that there is a direct logarithm $f: M\to \mathbb{Z}_k$ for some positive integer $k$ with $|M|\mid k$; then the function $g:
M\to \mathbb{Z}_{8k}$ defined by $g(m)\equiv8f(m)\pmod{8k}$ is a direct KM logarithm. \end{prop}
\begin{proof} Observe that $g(m)$ is even for any $m\in M$, so it suffices to show that $g$ is a direct logarithm. Since $f$ is a direct logarithm, $f$ is injective and there is a subset $B$ of $\mathbb{Z}_k$ such that $f(M)+B=\mathbb{Z}_k$ is a factorization. Let $B_1=\{8b\pmod{8k}, b\in B\}$. We will show that
$$g(M)+B_1+\{0, 1, 2, 3, 4, 5, 6, 7\}=\mathbb{Z}_{8k}$$
is a factorization. For any element $a\in \mathbb{Z}_{8k}$, $0\le a<8k$, it is easy to see that
$$a=a_1+8t, \quad a_1\in \{0, 1, 2, 3, 4, 5, 6, 7\}, \,\, 0\le t<k$$ and the representation is unique. Recall that $f(M)+B=\mathbb{Z}_k$ is a factorization, so we have $t=f(m)+b, b\in B, m\in M$ and the representation is unique. Hence
$$a=a_1+8f(m)+8b=a_1+g(m)+b_1,\quad a_1\in \{0, 1, 2, 3, 4, 5, 6, 7\},\quad b_1\in B_1,$$ and the representation is unique, as required.\end{proof}
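As a toy illustration of the construction in the proof (ours): the direct logarithm $f:\{1,2\}\to\mathbb{Z}_2$, $f(1)=0$, $f(2)=1$, with complement $B=\{0\}$, yields $g(1)=0$, $g(2)=8$ and the factorization $g(M)+\{0\}+\{0,1,\ldots,7\}=\mathbb{Z}_{16}$:
\begin{verbatim}
# Toy check of the proposition: g(m) = 8 f(m) in Z_16, with B_1 = {0}.
f = {1: 0, 2: 1}
g = {m: (8 * val) % 16 for m, val in f.items()}
sums = sorted((g[m] + b + r) % 16 for m in f for b in (0,) for r in range(8))
print(sums == list(range(16)))   # True
\end{verbatim}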
\begin{thm}\label{main} Let $M$ be a finite set of nonzero integers. Suppose that there is a direct KM logarithm $f: M\to \mathbb{Z}_k$ for some positive integer $k$ with $|M|\mid k$, then there are infinitely
many primes $p$ such that $M$ splits $\mathbb{Z}_p$. \end{thm}
\begin{proof} By the definition of a direct KM logarithm and Kummer-Mills Theorem, there are infinitely
many primes $p$ such that the restriction of the $k$-th power character map $\varphi: \mathbb{Z}_p^*\to \mathbb{Z}_p^*$ defined by $\varphi(a)\equiv a^{\frac{p-1}{k}t}\pmod{p}, \gcd(t, k)=1$ to $M$ is injective and $\varphi(M)$ is a direct factor of the cyclic group $\varphi(\mathbb{Z}_p^*)$ of order $k$. We will show that for all these primes $p$, $M$ splits $\mathbb{Z}_p$.
Since $\varphi(M)$ is a direct factor of the cyclic group $\varphi(\mathbb{Z}_p^*)$, so there is a finite subset $B$ of $\varphi(\mathbb{Z}_p^*)$ such that $1\in B$ and $\varphi(M)B=\varphi(\mathbb{Z}_p^*)$ is a factorization. Let $B_1=\{a\in\mathbb{Z}_p^*, \varphi(a)\in B\}$. Then $|B_1|=|B|\cdot\frac{p-1}{k}$ and $p-1=|M||B_1|$. Therefore it suffices to show that $MB_1$ is direct. Suppose now that $m_1b_1=m_2b_2, m_1, m_2\in M$ and $b_1, b_2\in B_1$, then
$$ \varphi(m_1)\varphi(b_1)=\varphi(m_2)\varphi(b_2).$$
Notice that $\varphi(m_1), \varphi(m_2)\in \varphi(M)$ and $\varphi(b_1), \varphi(b_2)\in \varphi(B_1)=B$ and $\varphi(M)B=\varphi(\mathbb{Z}_p^*)$ is a factorization. Hence $\varphi(m_1)=\varphi(m_2)$, which implies that $m_1=m_2$ since the restriction of $\varphi$ to $M$ is injective, and thus $b_1=b_2$. This completes the proof. \end{proof}
\begin{thm} Let $M$ be a set of integers. Suppose that there is a prime $q$ such that $M$ splits $\mathbb{Z}_q$, then there are infinitely
many primes $p$ such that $M$ splits $\mathbb{Z}_p$. \end{thm}
\begin{proof}Since $q$ is a prime and $M$ splits $\mathbb{Z}_q$, the restriction to $M$ of the $(q-1)$-th power character map $\varphi: \mathbb{Z}_q^*\to \mathbb{Z}_q^*$ defined by $\varphi(a)\equiv a\pmod{q}$ is injective and $\varphi(M)$ is a direct factor of the cyclic group $\varphi(\mathbb{Z}_q^*)$ of order $q-1$. This gives a direct logarithm, and therefore the result follows from Proposition \ref{mainprop} and Theorem \ref{main}.\end{proof}
To illustrate, let us give two examples.
{\bf Example 2.1:} For the set $[-4, 4]^*$, we first show that there is no KM logarithm $f:[-4, 4]^*\to \mathbb{Z}_8$. The reason is as follows: for any logarithm $f:[-4, 4]^*\to \mathbb{Z}_8$, we have $f(-1)=4$ and $f(1)=0$. If $f$ were a KM logarithm, then $f(2)$ would be even. It follows that $f(2), f(-2), f(4), f(-4)$ are even, so all of $f(2), f(-2), f(4), f(-4), f(-1), f(1)$ are even, which contradicts the bijectivity of $f$ (at most the two values $f(\pm3)$ could be odd). However, we have the following direct KM logarithm $g:[-4, 4]^*\to \mathbb{Z}_{16}$ given by $g(1)=0$, $g(-1)=8$, $g(2)=2$, $g(4)=4$, $g(-2)=10$, $g(-4)=12$, $g(3)=6$, $g(-3)=14$. We have $g([-4, 4]^*)\oplus\{0, 1\}=\mathbb{Z}_{16}$ and $g(2)$ is even, so $g$ is a direct KM logarithm, which implies that there are infinitely many primes $p$ such that $[-4, 4]^*$ splits $\mathbb{Z}_p$ by Theorem \ref{main}.
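Since all claims in this example are finite checks, they can be verified mechanically; the following snippet (ours) confirms the logarithm property of $g$ and the factorization $g([-4, 4]^*)\oplus\{0, 1\}=\mathbb{Z}_{16}$:
\begin{verbatim}
# Verify Example 2.1: g is a logarithm on M = [-4,4]^* and g(M) is a
# direct factor of Z_16 with complement {0, 1}.
g = {1: 0, -1: 8, 2: 2, 4: 4, -2: 10, -4: 12, 3: 6, -3: 14}
M = list(g)

assert all((g[x] + g[y]) % 16 == g[x*y]            # g(xy) = g(x) + g(y)
           for x in M for y in M if x*y in g)
sums = sorted((g[m] + b) % 16 for m in M for b in (0, 1))
assert sums == list(range(16))                     # factorization of Z_16
print("Example 2.1 verified")
\end{verbatim}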
{\bf Example 2.2:} For other cases with $k_1+k_2=8$, we have
$$(-1, 1, 2, 3, 4, 5, 6, 7)\mapsto(4, 0, 1, 6, 2, 3, 7, 5); (-2, -1, 1, 2, 3, 4, 5, 6)\mapsto(5, 4, 0, 1, 6, 2, 3, 7);$$
$$(-3, -2, -1, 1, 2, 3, 4, 5)\mapsto(7, 5, 4, 0, 1, 3, 2, 6)$$
are logarithms from $[-k_1, k_2]^*$ to $\mathbb{Z}_8$. It is easy to see that none of them is a KM logarithm. By the same argument as in Example 2.1, there are direct KM logarithms from $[-k_1, k_2]^*$ to $\mathbb{Z}_{16}$. Therefore there are infinitely many primes $p$ such that $[-k_1, k_2]^*$ ($[-k_1, k_2]^*=[-1, 7]^*$ or $[-2, 6]^*$ or $[-3, 5]^*$) splits $\mathbb{Z}_p$.
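These claims are again finite checks; the following snippet (ours) verifies that all three maps are bijective logarithms onto $\mathbb{Z}_8$:
\begin{verbatim}
# Verify Example 2.2: each map is a logarithm from [-k1, k2]^* onto Z_8.
maps = [
    dict(zip([-1, 1, 2, 3, 4, 5, 6, 7], [4, 0, 1, 6, 2, 3, 7, 5])),
    dict(zip([-2, -1, 1, 2, 3, 4, 5, 6], [5, 4, 0, 1, 6, 2, 3, 7])),
    dict(zip([-3, -2, -1, 1, 2, 3, 4, 5], [7, 5, 4, 0, 1, 3, 2, 6])),
]
for f in maps:
    dom = list(f)
    assert sorted(f.values()) == list(range(8))        # bijective onto Z_8
    assert all((f[x] + f[y]) % 8 == f[x*y]             # logarithm property
               for x in dom for y in dom if x*y in f)
print("all three maps are logarithms")
\end{verbatim}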
We say that a prime $p$ is a $k$-radius prime if the following two conditions both hold:
(a) $p\equiv1\pmod{2k}$;
(b) the elements $1^{(p-1)/k}, 2^{(p-1)/k}, \ldots, k^{(p-1)/k}$ in $\mathbb{Z}_p^*$ are pairwise distinct.
In \cite{BM12}, Blackburn and McKee proved that there is a special KM-logarithm of length $k$ if and only if there are infinitely many $k$-radius primes. Let $k$ be a fixed positive integer, and let $f_{spec}(k)$ be the number of special KM-logarithms of length $k$; then there exist two positive constants $d_k$ and $A_k$ such that the number of $k$-radius primes less
than or equal to $x$ is
$$d_kf_{spec}(k)\frac{x}{\log x} + O(x \exp(-A_k\log x)),$$
as $x\to\infty$, where the implied constants depend only on $k$, by Theorem 1 in Elliott \cite{E70}.
We have
\begin{prop}Let $k$ be a positive integer. Suppose that the prime $p$ is a $k$-radius prime, then both $[1, k]$ and $[-k, k]^\star$ split $\mathbb{Z}_p$.\end{prop}
\begin{proof}Let $\varphi:\mathbb{Z}_p^*\to\mathbb{Z}_p^*, \varphi(a)\equiv a^{(p-1)/k}$ be the power residue map. Then we have $\varphi(ab)=\varphi(a)\varphi(b)$ for $a, b\in\mathbb{Z}_p^*$. It is easy to check that $[1, k]\ker(\varphi)=\mathbb{Z}_p^*$ is a factorization, since $p$ is a $k$-radius prime. Note that $\{-1,1\}$ is a subgroup of $\ker(\varphi)$, so $\ker(\varphi)=\{-1, 1\}B$ is a factorization for some $B\subseteq\mathbb{Z}_p^*$, which implies that $[-k, k]^*B=\mathbb{Z}_p^*$ is also a factorization. This completes the proof.\end{proof}
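For example, $p=7$ is the smallest $3$-radius prime (the residues $1$, $2^2=4$ and $3^2=2$ are pairwise distinct modulo $7$), so both $[1,3]$ and $[-3,3]^\star$ split $\mathbb{Z}_7$; such primes are easy to search for (our illustration):
\begin{verbatim}
# Find the smallest k-radius prime: p = 1 (mod 2k) and the elements
# 1^((p-1)/k), ..., k^((p-1)/k) are pairwise distinct mod p.
from sympy import isprime

def is_k_radius(p, k):
    if p % (2 * k) != 1:
        return False
    return len({pow(a, (p - 1) // k, p) for a in range(1, k + 1)}) == k

print(next(p for p in range(3, 10**4)
           if isprime(p) and is_k_radius(p, 3)))   # 7
\end{verbatim}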
\section{Existence and nonexistence of split $[-k_1, k_2]^*$ sets}
The cases $M=[1, k]$ and $[-k, k]^*$ arise in the case of tiling Euclidean space by certain star bodies (see \cite{St67}). Motivated by an application to error-correcting codes for non-volatile memories, Schwartz suggested in \cite{Sc14} to consider the cases $[-k_1, k_2]^*, 1\le k_1\le k_2$. In \cite{H73}, Hamaker proved that $M=\{1, 3, 27\}$ splits no finite abelian group.
In this section, we consider the case when $M=[-k_1, k_2]^*$, where $k_1\le k_2$ are non-negative integers.
By Proposition \ref{mainprop} and Theorem \ref{main}, we have the following.
\begin{thm}\label{th31}Let $k_1$ and $k_2$ be non-negative integers with $k_1\le k_2$ and $k_1+k_2\ge3$. Then the following are equivalent:
(a) $M=[-k_1, k_2]^*$ splits $\mathbb{Z}_p$ for some prime $p$ with $p\equiv1\pmod{(k_1+k_2)}$.
(b) There are infinitely
many primes $p$ such that $[-k_1, k_2]^*$ splits $\mathbb{Z}_p$.
(c) There is a direct logarithm from $[-k_1, k_2]^*$ to $\mathbb{Z}_k$ for some positive integer $k$ with $(k_1+k_2)|k$.
\end{thm}
As immediate consequences of Theorem \ref{th31}, we have
\begin{coro}Let $k_1$ and $k_2$ be non-negative integers with $k_1\le k_2$ and $k_1+k_2=k\ge3$. Then
(i) If $k_1+k_2+1$ is a prime, then there are infinitely
many primes $p$ such that $[-k_1, k_2]^*$ splits $\mathbb{Z}_p$.
(ii) If $2k+1$ is a prime, then there are infinitely
many primes $p$ such that $[1, k]$ splits $\mathbb{Z}_p$.\end{coro}
For the nonexistence of certain nonsingular splittings, we also need the following result.
\begin{prop}\label{kfac}{\rm (\cite{SzS09}, Theorem 7.12)} If $ G = A \cdot B$ is a factorization of the finite abelian group $G$ (written multiplicatively) and $k$ is an integer relatively prime to $|A|$, then $ G = A^k\cdot B$ is a factorization of the abelian group $G$, where $A^k=\{a^k : a \in A\}$.\end{prop}
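As a small additive illustration (our own toy example) in $\mathbb{Z}_{15}$: take $A=\{0,1,2\}$, $B=\{0,3,6,9,12\}$ and $k=2$, so that $\gcd(k,|A|)=1$:
\begin{verbatim}
# A + B = Z_15 is a factorization; since gcd(2, |A|) = 1, so is 2A + B.
n, k = 15, 2
A, B = [0, 1, 2], [0, 3, 6, 9, 12]

def is_factorization(A, B):
    return sorted((a + b) % n for a in A for b in B) == list(range(n))

print(is_factorization(A, B), is_factorization([(k * a) % n for a in A], B))
\end{verbatim}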
We first prove the following more general theorem.
\begin{thm}\label{th32} Let $n=2m$ be an even positive integer. Suppose that $N$ is a subset of the cyclic group $\mathbb{Z}_{2m}$ such that
$\{0, m\}\subseteq N$ and $|N|$ is odd, then $N$ is not a direct factor of $\mathbb{Z}_{2m}$. \end{thm}
\begin{proof} If $N$ is a direct factor of $\mathbb{Z}_{2m}$, then there exists a subset $A$ of $\mathbb{Z}_{2m}$ such that $N+A=\mathbb{Z}_{2m}$ is a factorization. Since $|N|$ is odd, by Proposition \ref{kfac}, $2N+A $ is also a factorization of $\mathbb{Z}_{2m}$, which implies that $|2N|=|N|$. However, $2\cdot0=2\cdot m=0$ in $\mathbb{Z}_{2m}$, it follows that $|2N|\le|N|-1$, a contradiction. \end{proof}
\begin{thm}Let $M$ be a finite set of nonzero integers. Suppose that $\{-1, 1\}\subset M$ and $|M|$ is odd, then $M$ does not split $\mathbb{Z}_p$ for any prime $p$. In particular, if $1\le k_1\le k_2$ and $k_1+k_2$ is odd, then $[-k_1, k_2]^*$ does not split $\mathbb{Z}_p$ for any prime $p$.\end{thm}
\begin{proof}Assume that $M$ splits $\mathbb{Z}_p$ with the splitter set $B$ for some prime $p$. Let $g$ be a primitive root of the prime $p$, $N=\{ind_g (m), m\in M\}$, $S=\{ind_g(m), m\in B\}$, then we have $\{0, (p-1)/2\}\subset N$ and $N+S=\mathbb{Z}_{p-1}$ is a factorization, which contradicts Theorem \ref{th32}. This completes the proof.\end{proof}
\section{Some presentations for the splitter set $B$}
In this section, we will give some presentations for the possible splitter set. We first present the following useful results.
\begin{lem}\label{kongji}Let $M$ be a finite subset of nonzero integers and $1\in M$, and let $X=M/M\backslash\{1\}=\{\frac{m_1}{m_2}: m_1\ne m_2, \, m_1,m_2\in M\}$. Let $p$ be an odd prime with $p\equiv1\pmod{|M|}$. Then we have
(a) If $M$ splits $\mathbb{Z}_p$ with the splitter set $B$, then $B\cap BX=\emptyset$.
(b) If $B$ is a subset of $\mathbb{Z}_p$ such that $1\in B$, $MB=\mathbb{Z}_p\backslash\{0\}$ and $B\cap BX=\emptyset$, then $M$ splits $\mathbb{Z}_p$ with the splitter set $B$.\end{lem}
\begin{proof}(a) On the contrary, there are elements $b_1, b_2\in B$ and $x\in X$ such that $b_1=b_2x$. By the definition of $X$, we have $x=m_1/m_2, m_1, m_2\in M$ and $m_1\ne m_2$. Hence $b_1m_2=b_2m_1$, which contradicts $MB$ being a splitting of $\mathbb{Z}_p$.
(b) By the assumptions, it suffices to prove that for any $ m_1, m_2\in M, b_1, b_2\in B$ with $m_1b_1=m_2b_2$, we have $b_1=b_2$ and $m_1=m_2$. If $m_1b_1=m_2b_2$, then $b_2=b_1\cdot\frac{m_1}{m_2}\in BX$ when $m_1\ne m_2$, so $m_1=m_2$, and hence $b_1=b_2$. It follows that $M$ splits $\mathbb{Z}_p$ with the splitter set $B$, as required. \end{proof}
\begin{lem}(\cite{[PK]}, Theorem IV.1.) \label{only if subgroup}
Let $k_1$, $k_2$ be positive integers with $1\leq k_1\leq k_2$ and let $p$ be an odd prime with $p\equiv 1\pmod{k_1 + k_2}$. Then $M = [-k_1,k_2]^*$ is a direct factor of $\mathbb{Z}_p^*$ if and only if $M$ is a direct factor of the subgroup $H =<-1,2,\cdots ,k_2 >$ of $\mathbb{Z}_p^*$.
\end{lem}
In \cite{[PK]}, we obtained a necessary and sufficient condition on the prime $p$ for $[-1, 3]^*$ to split $\mathbb{Z}_p$ with some splitter set $B$. Now we give a presentation of $B$ for $[-1, 5]^*$, as follows.
\begin{thm} \label{|T_1/T_2|}
For a prime $p$ with $p\equiv1\pmod{6}$, if $[-1, 5]^*$ splits $\mathbb{Z}_p^*$ with a splitter set $B$, and $B_1=B\cap <-1, 2, 3, 5>$, then
$$B_1=\bigcup_{k=0}^\infty\varepsilon_k8^k<(-\frac{2}{5}), (-\frac{4}{3})>, \quad \varepsilon_k=1 \, \mbox{or} \, -1,$$ or $$B_1=\bigcup_{k=0}^\infty\varepsilon_k8^k<(-\frac{4}{5}), (-\frac{2}{3})>, \quad \varepsilon_k=1 \, \mbox{or} \, -1.$$
\end{thm}
\begin{proof}
Suppose $[-1, 5]^*$ splits $\mathbb{Z}_p^*$ with a splitter set $B$, then
$$X=\frac{[-1,5]^*}{[-1,5]^*}\setminus \{1\}=\left\{\pm2, \pm3, \pm4, \pm5, \pm\frac{1}{2}, \pm\frac{1}{3}, \pm\frac{1}{4}, \pm\frac{1}{5}, \frac{2}{3}, \frac{3}{2},\frac{2}{5}, \frac{5}{2},\frac{3}{4}, \frac{4}{3}, \frac{3}{5}, \frac{5}{3}, \frac{4}{5}, \frac{5}{4}\right\}.$$
Since $$-2=(-1)\times 2=1\times (-2)=2\times (-1)=3\times (-\frac{2}{3})=4\times (-\frac{1}{2})=5\times (-\frac{2}{5}),$$
by Lemma \ref{kongji} (a) we see that
$$-\frac{2}{3}\in B, \ or \ -\frac{2}{5}\in B. $$
By $$-4=(-1)\times 4=1\times (-4)=2\times (-2)=3\times (-\frac{4}{3})=4\times (-1)=5\times (-\frac{4}{5}),$$
similarly, by Lemma \ref{kongji} (a) again we get $$-\frac{4}{3}\in B, \ or \ -\frac{4}{5}\in B. $$
Note that $-\frac{4}{3}=-\frac{2}{3}\times 2$ and $-\frac{4}{5}=-\frac{2}{5}\times 2$, hence we have
$$-\frac{2}{3},-\frac{4}{5}\in B \ or \ -\frac{2}{5},-\frac{4}{3}\in B.$$
For $-2=-\frac{2}{3}\times3 =-\frac{2}{5}\times5$, $-4=-\frac{4}{3}\times3 =-\frac{4}{5}\times 5$, $-\frac{8}{3}=-\frac{2}{3}\times 4=-\frac{4}{3}\times 2$ and $-\frac{8}{5}=-\frac{4}{5}\times 2=-\frac{2}{5}\times 4$, by Lemma \ref{kongji} (a) we have $-4,$ $-2$, $-\frac{8}{3}$, $-\frac{8}{5} \not\in B$.
Since $$-8=(-1)\times 8=1\times (-8)=2\times (-4)=3\times (-\frac{8}{3})=4\times (-2)=5\times (-\frac{8}{5}),$$
so $$-8, \ or \ 8\in B. $$
Now we first consider the case where $-\frac{2}{5},$ $-\frac{4}{3}\in B$.
We will show that $$<-\frac{2}{5},-\frac{4}{3}>\subseteq B,$$
that is, $(-\frac{2}{5})^a(-\frac{4}{3})^b \in B$ for any non-negative integers $a$ and $b$.
The proof is by induction on $a+b$.
We have proved that $(-\frac{2}{5})^a(-\frac{4}{3})^b \in B$ for $a+b\leq 1.$ (Since $1\in B$.)
Assume that the result holds whenever the sum of the exponents is $a+b-1$.
We show that the result then holds for $a+b$.
This can be done by induction on $b$.
If $b=0$, set $w=(-\frac{2}{5})^{a-1}$;
then by induction $w,$ $(-\frac{4}{3})(-\frac{2}{5})^{a-2} \in B$. By Lemma \ref{kongji} (a), we have $-w, 2w, -2w, -\frac{1}{2}w \not\in B$, and also $-\frac{2}{3}w\not\in B$ since $-\frac{2}{3}w\times 5=(-\frac{4}{3})(-\frac{2}{5})^{a-2}\times (-1)$. Since
$$-2w=(-1)\times (2w)=1\times (-2w)=2\times (-w)=3\times (-\frac{2}{3}w)=4\times (-\frac{1}{2}w)=5\times (-\frac{2}{5}w),$$
we conclude that $-\frac{2}{5}w=(-\frac{2}{5})^a\in B$.
Now suppose that $b>0$ and the result holds for $b-1$.
Set $w=(-\frac{2}{5})^{a}(-\frac{4}{3})^{b-1}$.
Thus by induction $w,$ $(-\frac{2}{5})^{a+1}(-\frac{4}{3})^{b-1} \in B$. By Lemma \ref{kongji} (a), we have $4w, -4w, -2w, -w \not\in B$, and also $-\frac{4}{5}w\not\in B$ since $-\frac{4}{5}w=(-\frac{2}{5})^{a+1}(-\frac{4}{3})^{b-1}\times 2$.
Since $$-4w=(-1)\times (4w)=1\times (-4w)=2\times (-2w)=3\times (-\frac{4}{3}w)=4\times (-w)=5\times (-\frac{4}{5}w),$$
we conclude that $-\frac{4}{3}w=(-\frac{2}{5})^a(-\frac{4}{3})^{b}\in B$, and the proof is complete.
Next, we will use induction on $k$ to prove that
$$-8^k(-\frac{2}{5})^a(-\frac{4}{3})^b\in B \,\, \mbox{or}\,\, 8^k(-\frac{2}{5})^a(-\frac{4}{3})^b\in B$$
for any non-negative integers $a$, $b$ and $k$.
Set $z=8(-\frac{2}{5})^a(-\frac{4}{3})^b$, then we have
$$z=(-1)\times (-z)=1\times z=2\times (\frac{1}{2}z)=3\times (\frac{1}{3}z)=4\times (\frac{1}{4}z)=5\times (\frac{1}{5}z).$$
Observe that $$\frac{1}{2}z\times (-1)=(-\frac{2}{5})^a(-\frac{4}{3})^{b+1}\times 3,$$
$$\frac{1}{3}z\times (-1)=(-\frac{2}{5})^a(-\frac{4}{3})^{b+1}\times 2,$$
$$\frac{1}{4}z=(-\frac{2}{5})^a(-\frac{4}{3})^{b}\times 2$$
and
$$\frac{1}{5}z\times (-1)=(-\frac{2}{5})^{a+1}(-\frac{4}{3})^{b}\times 4.$$
Combining these results with $<-\frac{2}{5},-\frac{4}{3}>\subseteq B$ yields that $-z\in B$ or
$z\in B$. Hence we have proved that it is true for $k=1$. Suppose that $k>1$ and that it is true for $k-1$.
$N\in B$. Hence we have proved that it is true for $k=1$. Suppose that $k>1$ and that it is true for $k-1$.
For $8^k(-\frac{2}{5})^a(-\frac{4}{3})^b$, by the same argument as above, we have
$$8^k(-\frac{2}{5})^a(-\frac{4}{3})^b\in B \,\, \mbox{or}\,\,-8^k(-\frac{2}{5})^a(-\frac{4}{3})^b\in B.$$
On the other hand, we observe that if $8^k(-\frac{2}{5})^a(-\frac{4}{3})^b\in B$, then $8^k(-\frac{2}{5})^a(-\frac{4}{3})^b\times\frac{2}{5}\not\in B$ and $8^k(-\frac{2}{5})^a(-\frac{4}{3})^b\times\frac{4}{3}\not\in B$. That is, $-8^k(-\frac{2}{5})^{a+1}(-\frac{4}{3})^b\not\in B$ and $-8^k(-\frac{2}{5})^a(-\frac{4}{3})^{b+1}\not\in B$. Hence
$8^k(-\frac{2}{5})^{a+1}(-\frac{4}{3})^b\in B$ and $8^k(-\frac{2}{5})^a(-\frac{4}{3})^{b+1}\in B$. Therefore we have proved that
$$\varepsilon_k8^k<(-\frac{2}{5}), (-\frac{4}{3})>\subset B\quad\mbox{for some }\varepsilon_k=1 \, \mbox{or} \, -1.$$
Let
$$B_1=\bigcup_{k=0}^\infty\varepsilon_k8^k<(-\frac{2}{5}), (-\frac{4}{3})>\subseteq B, \quad \varepsilon_k=1 \, \mbox{or} \, -1.$$
For any element $(-1)^{\sigma}2^u3^v5^s\in <-1,2,3,5>$,
$$(-1)^{\sigma}2^u3^v5^s=(-\frac{2}{5})^{-s}(-\frac{4}{3})^{-v}(-1)^{\sigma+s+v}2^{s+2v+u}.$$
Set $s+2v+u=3k+t$ with $t\in \{0,1,2\}$, then $$(-1)^{\sigma}2^u3^v5^s=b\cdot r, \quad b\in B_1, \,\,r\in\{\pm1, \pm2, \pm4\}.$$
If $(-1)^{\sigma}2^u3^v5^s=b\cdot (-2), b\in B_1$, then $(-1)^{\sigma}2^u3^v5^s=b\cdot(-\frac{2}{5})\cdot5$. If $(-1)^{\sigma}2^u3^v5^s=b\cdot (-4), b\in B_1$, then $(-1)^{\sigma}2^u3^v5^s=b\cdot(-\frac{4}{3})\cdot3$. Hence, we have $[-1, 5]^*B_1=<-1,2,3,5>$. By Lemma \ref{only if subgroup}, $B$ exists if and only if $[-1, 5]^*B_1=<-1,2,3,5>$ is a factorization. Hence $B_1=\bigcup_{k=0}^\infty\varepsilon_k8^k<(-\frac{2}{5}), (-\frac{4}{3})>\subseteq B, \quad \varepsilon_k=1 \, \mbox{or} \, -1$.
If $-\frac{2}{3},-\frac{4}{5}\in B$, we obtain a similar result by the same argument. This completes the proof.
\end{proof}
{\bf Remark:} We can also obtain some similar results for other $M=[-k_1, k_2]^*$.
For $B_1=\bigcup_{k=0}^\infty\varepsilon_k8^k<(-\frac{2}{5}), (-\frac{4}{3})>\subseteq B, \quad \varepsilon_k=1 \, \mbox{or} \, -1$, we take $k=6$, $r=4$, $a_1=-1$, $a_2=2$, $a_3=3$, $a_4=5$, $\varepsilon_1=-1$, $\varepsilon_2=e^{i\pi/3}$, $\varepsilon_3=-e^{2i\pi/3}$, $\varepsilon_4=-e^{i\pi/3}$. By the Lemma in Mills \cite{[M]}, we have $a_1^{v_1}a_2^{v_2}a_3^{v_3}a_4^{v_4}=\beta^6, \beta\in\mathbb{Q}(e^{i\pi/3}), 1\le v_i\le6, i=1,2, 3, 4$ if and only if $a_1^{v_1}a_3^{v_3}=-3^3$ and $v_2=v_4=6$, hence $N(6, 4)=3(-(-e^{2i\pi/3})^3+1)=6>0$, where $N(6, 4)=\sum_{v_1=1}^6\sum_{v_2=1}^6\sum_{v_3=1}^6\sum_{v_4=1}^6(\varepsilon_1^{v_1}\varepsilon_2^{v_2}\varepsilon_3^{v_3}\varepsilon_4^{v_4})$, and the summation is over all $(v_1, v_2, v_3, v_4)$ such that $a_1^{v_1}a_2^{v_2}a_3^{v_3}a_4^{v_4}=\beta^6, \beta\in\mathbb{Q}(e^{i\pi/3})$. Therefore by Theorem 1 in Elliott \cite{E70}, there are infinitely many primes $p$ such that
$$\left(\frac{a_i}{p}\right)_6\equiv\varepsilon_i\pmod{p}, \quad i=1,2, 3, 4.$$
Finally, we prove that for the above prime $p$, $[-1, 5]^*$ splits $\mathbb{Z}_p$. To do this, we first prove that $[-1, 5]^*B'=<-1, 2, 3, 5>$ is a factorization, where $B'=\bigcup_{k=0}^\infty(-8)^k<(-\frac{2}{5}), (-\frac{4}{3})>$. If $m_1b_1=m_2b_2, m_1, m_2\in[-1, 5]^*, b_1, b_2\in B'$, then we have
$$\left(\frac{m_1b_1}{p}\right)_6=\left(\frac{m_2b_2}{p}\right)_6\pmod{p}.$$
A simple computation shows that $\left(\frac{m_1}{p}\right)_6=\left(\frac{m_2}{p}\right)_6$, which yields $m_1=m_2$, and thus $b_1=b_2$. This proves that $[-1, 5]^*B'=<-1, 2, 3, 5>$ is a factorization, so $[-1, 5]^*B_1=<-1, 2, 3, 5>$ is also a factorization. Therefore $[-1, 5]^*$ splits $\mathbb{Z}_p$ by Lemma \ref{only if subgroup}.
For $B_1=\bigcup_{k=0}^\infty\varepsilon_k8^k<(-\frac{4}{5}), (-\frac{2}{3})>\subseteq B, \quad \varepsilon_k=1 \, \mbox{or} \, -1$, we take $k=6$, $r=4$, $a_1=-1$, $a_2=2$, $a_3=3$, $a_4=5$, $\varepsilon_1=-1$, $\varepsilon_2=e^{2i\pi/3}$, $\varepsilon_3=-e^{2i\pi/3}$, $\varepsilon_4=e^{i\pi/3}$. We obtain that $[-1, 5]^*B_1=<-1, 2, 3, 5>$ is a factorization, and hence $[-1, 5]^*$ splits $\mathbb{Z}_p$.
{\bf Example 4.1:} For the set $[-4, 4]^*$, we take $k=16$, $r=3$, $a_1=-1$, $a_2=2$, $a_3=3$, $\varepsilon_1=-1$, $\varepsilon_2=e^{i\pi/4}$, $\varepsilon_3=e^{3i\pi/4}$. By the Lemma in Mills \cite{[M]}, we have $a_1^{v_1}a_2^{v_2}a_3^{v_3}=\beta^{16}, \beta\in\mathbb{Q}(e^{i\pi/8}), 1\le v_i\le16, i=1,2, 3$ if and only if $a_1^{v_1}=1$, $v_2=8$ or $16$ and $v_3=16$, hence $N(16, 3)=8((e^{i\pi/4})^8+1)=16>0$. Therefore by Theorem 1 in Elliott \cite{E70}, there are infinitely many primes $p$ such that
$$\left(\frac{a_i}{p}\right)_{16}\equiv\varepsilon_i\pmod{p}, \quad i=1,2, 3.$$
Let $g$ be a primitive root modulo $p$, put $\zeta=g^{(p-1)/16}$, and let $B$ be the preimage $\varphi^{-1}(\{1,\zeta\})$ under the homomorphism $\varphi: \mathbb{Z}_p^*\to\mathbb{Z}_p^*$ given by $\varphi(a)\equiv a^{(p-1)/16}$; it is easy to check that $[-4, 4]^*B=\mathbb{Z}_p^*$ is a factorization, so $[-4, 4]^*$ splits $\mathbb{Z}_p$ with splitter set $B$.
\section*{Acknowledgments}
The author thanks Professor Qing Xiang
for useful comments and suggestions.
\section*{Introduction}
The MiniBooNE low-energy excess (LEE) is a long-standing anomaly in neutrino physics.
This excess of electron-like events was observed in the muon-neutrino dominated flux from the Booster Neutrino Beam (BNB), and is most significant between $\SI{200}\MeV$ and $\SI{600}\MeV$ in reconstructed neutrino energy.
Initially reported in 2007~\cite{MiniBooNE:2007uho}, the excess reached a significance of $4.8\sigma$ in the energy range $\SI{200}\MeV < E_\nu^\text{QE} < \SI{1250}\MeV$ with the full MiniBooNE $\nu$ and $\bar \nu$ data set~\cite{MiniBooNE:2020pnu}.
We note that this significance is derived from a direct comparison between MiniBooNE data and the Standard Model (SM) prediction, and is thus independent of any physics model, including the $3+1$ model explored in this paper.
A wide range of explanations for the excess have been put forward, but the initial, and still most-referenced, new physics explanations invoke $\nu_\mu \rightarrow \nu_e$ oscillations.
The BNB flux is produced through $\SI{8}\GeV$ protons impinging on a beryllium target that is located inside a magnetic focusing horn, which can reverse polarity to run in neutrino or antineutrino mode, followed by a $\SI{50}\m$ meson decay pipe.
The MiniBooNE detector, which is a $\SI{450}\tonne$ fiducial mass, mineral-oil-based Cherenkov detector, is located $\SI{541}\m$ downstream of the beryllium target.
The detector is sensitive to neutrinos with energies between $\SI{100}\MeV$ and $\SI{3}\GeV$.
This combination of energy and baseline makes MiniBooNE an ideal experiment to probe the appearance of electron-neutrinos from $\nu_\mu\rightarrow\nu_e$ oscillations in a mass-squared splitting region greater than $\SI{1e-2}{\square\eV}$.
The full data set taken in a series of runs between 2002 and 2019 yields a $1\sigma$ allowed region in $\Delta m^2$ between $\SI{0.04}{\square\eV}$ and $\SI{0.4}{\square\eV}$, with mixing angles varying from $1.0$ to $0.01$~\cite{MiniBooNE:2020pnu}.
These mass-squared splittings are more than an order of magnitude larger than the splitting of atmospheric neutrino oscillations, $\Delta m^2_{atmos}\approx \SI{2.5e-3}{\square\eV}$~\cite{ParticleDataGroup:2020ssz}--associated with the largest mass splitting in three neutrino oscillation models.
Therefore, to accommodate such oscillations, it is necessary to postulate the existence of a fourth neutrino mass, and a fourth neutrino flavor that must be non-weakly-interacting (or ``sterile'') to avoid constraints from $Z$ decay~\cite{ALEPH:2005ab}.
In such a model, the sterile neutrino flavor and the three active flavors are connected to a fourth mass state through an extension of the PMNS mixing matrix.
Such a model introduces a combination of three possible experimental signatures: 1) electron flavor disappearance to other flavors, leading to fewer $\nu_e$ events than expected (``$\nu_e\rightarrow\nu_e$"); 2) muon flavor disappearance to other flavors (``$\nu_\mu\rightarrow\nu_\mu$") reducing the $\nu_\mu$ rate; and 3) $\nu_\mu\rightarrow\nu_e$ appearance, where an excess of $\nu_e$ events is observed.
Past MiniBooNE $\nu_\mu\rightarrow\nu_e$ appearance analyses have assumed that the $\nu_e$ and $\nu_\mu$ disappearance effects were negligible\footnote{This assumption was justifiable in the context of the best-fit LSND two-neutrino oscillation scenario, and the unitarity constraints of the model.}.
However, this approach has been considered to be overly simplified, since MiniBooNE uses the $\nu_\mu$ data to predict the $\nu_e$ backgrounds in the beam, while disappearance will affect these predictions.
In response to this, in this paper we expand the analyses of the full MiniBooNE data sets and simulation samples, to present the first full 3+1 sterile-neutrino oscillation model by the collaboration.
In 2015, the MicroBooNE experiment joined the MiniBooNE experiment as a user of the BNB beamline.
The MicroBooNE experiment was designed with the primary goal of investigating the LEE by using the detailed information from its liquid-argon time-projection-chamber (LArTPC) to distinguish between electron induced events and photon induced events.
This allows the rejection of many mis-identified backgrounds in the MiniBooNE data set.
MicroBooNE has recently released results of a search for a generic $\nu_e$ excess, assuming the median shape of the MiniBooNE excess, in a strategy that is agnostic to particular oscillation models.
External analyses have applied more focused studies, placing limits on $\nu_e$ disappearance~\cite{Denton:2021czb}, expanding the MicroBooNE analysis to all systematically allowed shapes of the MiniBooNE excess~\cite{Arguelles:2021meu}, and considering how the MicroBooNE data constrain the parameters of a 3+1 sterile neutrino model~\cite{Arguelles:2021meu}.
However, until now, there has been no MiniBooNE-MicroBooNE joint analysis.
Because MiniBooNE and MicroBooNE share the same beamline, we can use MiniBooNE tools to perform a joint fit to the data of the two experiments.
On the other hand, because the detectors are substantially different, the two experiments have complementary capabilities.
MicroBooNE is an $\SI{85}\tonne$ active mass LArTPC~\cite{MicroBooNE:2016pwy}, which allows for detailed reconstruction of neutrino interactions that is not possible using the MiniBooNE Cherenkov detector.
The MiniBooNE experiment has a large sample size, but relatively high mis-identification backgrounds, which dominate MiniBooNE's electron-neutrino sample.
The MicroBooNE experiment uses a relatively small detector, but can remove most mis-identification backgrounds~\cite{MicroBooNE:2021jwr}.
Also, the imaging capability of the LArTPC has allowed the MicroBooNE experiment to select a high purity sample of charged current quasi-elastic (CCQE) interactions, which has low systematic uncertainty compared to a semi-inclusive or fully inclusive cross section~\cite{MicroBooNE:2021rmx,MicroBooNE:2021jwr,MicroBooNE:2021nxr,MicroBooNE:2021sne}.
Thus, in principle, the MicroBooNE data allow for a clean test, albeit with a small sample, of the hypothesis that the MiniBooNE excess events are due to $\nu_e$ charged-current quasi-elastic interactions.
Therefore, we consider only MicroBooNE's exclusive CCQE $\nu_e$ search~\cite{MicroBooNE:2021jwr} in this paper, which presents the first MiniBooNE/MicroBooNE joint fit.
\section*{Fit Details}
The model of interest is a three-active plus one-sterile neutrino model called ``3+1.''
This model expands the $3\times 3$ neutrino mixing matrix to $4\times 4$:
\begin{equation}
U_{3+1} = \begin{bmatrix}
U_{e1} & U_{e2} & U_{e3} & U_{e4} \\
U_{\mu 1} & U_{\mu 2} & U_{\mu 3} & U_{\mu 4} \\
U_{\tau 1} & U_{\tau 2} & U_{\tau 3} & U_{\tau4} \\
U_{s1} & U_{s2} & U_{s3} & U_{s4}
\end{bmatrix}. \label{4mixmx}
\end{equation}
In such a model, both $\nu_\mu$ and $\nu_e$ disappearance are expected to occur with the same $\Delta m^2$ as the $\nu_\mu \rightarrow \nu_e$ appearance signal, as long as both $U_{e4}$ and $U_{\mu 4}$ are non-zero.
The three processes are related through their effective mixing angles, which are expressed as:
\begin{eqnarray}
\sin^2 (2\theta_{\mu \mu })&=& 4(1-|U_{\mu4}|^2)|U_{\mu4}|^2, \nonumber \\
\sin^2 (2\theta_{ee})&=& 4(1-|U_{e4}|^2)|U_{e4}|^2, \nonumber \\
\sin^2 (2\theta_{e \mu}) &=& 4 |U_{e4}|^2 |U_{\mu 4}|^2,
\label{mixings}
\end{eqnarray}
which appear within the oscillation probability formulae:
\begin{eqnarray}
P(\nu_\mu \to \nu_e) &=&\sin^2 2\theta_{\mu e} \sin^2 (\Delta m_{41}^2 L/E), \nonumber \\
P(\nu_{e} \to \nu_e) &=& 1-\sin^2 2\theta_{ee} \sin^2 (\Delta m_{41}^2 L/E), \nonumber
\\
P(\nu_{\mu} \to \nu_\mu) &=& 1-\sin^2 2\theta_{\mu \mu} \sin^2 (\Delta m_{41}^2 L/E).
\label{osc}
\end{eqnarray}
There are three physics parameters in the 3+1 model relevant to these two experiments: the sterile mass splitting $\Delta m_{4i}^2 \equiv \Delta m^2$ (where we assume degeneracy for $i \in \{1,2,3\}$) and the two mixings of the new mass eigenstate to the electron weak eigenstate $|U_{e4}|^2$ and muon weak eigenstate $|U_{\mu4}|^2$.
Different combinations of these parameters will induce different rates of $\nu_e$ appearance as well as $\nu_\mu$ and $\nu_e$ disappearance in the MiniBooNE and MicroBooNE detectors.
In each case the oscillation probability depends upon the true neutrino energy, $E$, and baseline of each event, $L$.
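As a minimal illustration of Eqs.~(\ref{mixings}) and (\ref{osc}), the following Python sketch evaluates the three probabilities, assuming the conventional phase factor $1.267\,\Delta m^2\,[\text{eV}^2]\,L\,[\text{km}]/E\,[\text{GeV}]$; the function and argument names are ours and purely illustrative.
\begin{verbatim}
import numpy as np

def osc_probs(dm2_eV2, Ue4_sq, Umu4_sq, L_km, E_GeV):
    # Effective mixing angles, Eqs. (mixings).
    s2_mue  = 4.0 * Ue4_sq * Umu4_sq
    s2_ee   = 4.0 * (1.0 - Ue4_sq) * Ue4_sq
    s2_mumu = 4.0 * (1.0 - Umu4_sq) * Umu4_sq
    # 1.267 converts eV^2 km / GeV into a dimensionless phase.
    osc = np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2
    return (s2_mue * osc,          # P(nu_mu -> nu_e)
            1.0 - s2_ee * osc,     # P(nu_e  -> nu_e)
            1.0 - s2_mumu * osc)   # P(nu_mu -> nu_mu)
\end{verbatim}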
The oscillation prediction in MiniBooNE is determined by a simple reweighting of the MiniBooNE $\nu_\mu\rightarrow\nu_e$ simulation using the oscillation formulae (Eqs.~\ref{osc}).
This direct method is not possible for the MicroBooNE $1e1p$ CCQE analysis, as only limited simulation information for this analysis is available~\cite{hepdata:1953568}.
Instead, for MicroBooNE, we use the MiniBooNE BNB simulation to obtain a ratio between the nominal intrinsic $\nu_e$ background prediction and the $\nu_e$ appearance prediction at the MicroBooNE baseline as a function of true neutrino energy, using the BNB flux prediction at the MicroBooNE location.
This ratio, combined with the intrinsic $\nu_e$ simulation provided by MicroBooNE allows us to obtain a $\nu_e$ appearance prediction in MicroBooNE.
We use the same procedure to account for $\nu_e$ disappearance.
However, we neglect $\nu_\mu$ disappearance in the MicroBooNE prediction, as the $\nu_\mu$ background contamination in MicroBooNE's $1e1p$ analysis is sub-dominant and the simulation information for the $\nu_\mu$ contribution is not provided by MicroBooNE.
We also note that $\nu_\mu \to \nu_\tau$ neutral-current backgrounds in MiniBooNE's electron neutrino measurement are not included in the prediction; however, this effect is expected to be small.
An example of this oscillation prediction is shown in Figure~\ref{fig:osc_ex}.
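A sketch of the ratio reweighting used for the MicroBooNE prediction is given below; all array names are illustrative, and in practice the ratio is built from the MiniBooNE BNB simulation evaluated at the MicroBooNE location, binned in true neutrino energy.
\begin{verbatim}
import numpy as np

def ub_nue_appearance(ub_intrinsic, mb_app_at_ub, mb_intrinsic_at_ub):
    # Ratio of appearance to intrinsic nu_e predictions, per energy bin,
    # taken from MiniBooNE simulation at the MicroBooNE baseline.
    ratio = np.divide(mb_app_at_ub, mb_intrinsic_at_ub,
                      out=np.zeros_like(mb_app_at_ub),
                      where=mb_intrinsic_at_ub > 0)
    # Scale MicroBooNE's provided intrinsic nu_e prediction by this ratio.
    return ub_intrinsic * ratio
\end{verbatim}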
\begin{figure}
\subfloat[\label{fig:mb_mu_ex}MiniBooNE $\nu_\mu$ + $\bar{\nu}_\mu$]{
\includegraphics[width=0.8\linewidth]{MiniBooNE_numu_numubar.pdf}} \\
\subfloat[\label{fig:mb_e_ex}MiniBooNE $\nu_e$ + $\bar{\nu}_e$]{
\includegraphics[width=0.8\linewidth]{MiniBooNE_nue_nuebar.pdf}} \\
\subfloat[\label{fig:ub_ex}MicroBooNE $\nu_e$]{
\includegraphics[width=0.8\linewidth]{MicroBooNE_nue_nuebar.pdf}}
\caption{Comparison of the data with predictions assuming the SM and the 3+1 ``Combination'' fit parameters of Table~\ref{tbl:bestfit} for each experiment.
Black crosses show the observed data and the statistically allowed band of per bin expectations.
The SM prediction is represented as a dashed purple line, and the 3+1 prediction shown with either purple crosses or a solid line.
The top panel shows unconstrained predictions and errors.
The middle panel shows predictions and errors in purple after being constrained by the $\nu_\mu+\bar\nu_\mu$ data, and shows the unconstrained 3+1 prediction as the stacked histogram.
The displayed errors on the predictions contain only systematic and finite Monte-Carlo errors.
The bottom panel shows predictions after the allowed systematic variations have been fit to data, and thus does not have systematic error bars shown.
}
\label{fig:osc_ex}
\end{figure}
For the MiniBooNE likelihood we compare the fixed observation to the theoretical expectation with a multivariate normal distribution that includes systematic uncertainties, Poisson statistical uncertainties on the expectation, and finite Monte-Carlo statistical uncertainties.
With the large MiniBooNE sample size, the multivariate normal distribution is a reasonable approximation for the likelihood.
The MiniBooNE systematic errors of this analysis remain the same as in~\cite{MiniBooNE:2020pnu}, with one exception.
The correlated systematic errors from uncertainties in the MiniBooNE optical model are limited to the three principal components of the corresponding covariance matrix with the largest eigenvalues, and the remaining optical model errors are assumed to be uncorrelated with no covariance among energy bins.
For MicroBooNE, we use a Poisson-derived likelihood that accounts for finite Monte-Carlo size~\cite{Arguelles:2019izp}; additionally, the expectation in each bin is treated as a nuisance parameter that is constrained by the systematics covariance matrix~\cite{hepdata:1953568/t3}.
The total likelihood is then composed of these two experimental likelihoods.
We note that the fit presented here does not account for systematic correlations between MiniBooNE and MicroBooNE.
Additionally, we allow the MicroBooNE $\nu_\mu$ measurement to constrain the MicroBooNE $\nu_e$ prediction and uncertainties, and do not account for oscillations in MicroBooNE's $\nu_\mu$ prediction.
Ignoring $\nu_\mu$ disappearance in MicroBooNE is a reasonable assumption for small $U_{\mu4}$ given the limited sample statistics from MicroBooNE.
\section*{Results}
With the methods described in the preceding section we can examine the MiniBooNE LEE in the context of a 3+1 sterile neutrino model, both with the MiniBooNE data alone and together with the MicroBooNE electron-neutrino data.
We show the no-oscillation SM prediction as a dashed purple line in Figure~\ref{fig:osc_ex}.
In the SM case the MiniBooNE prediction lies substantially below the data in the electron-neutrino channel.
For MicroBooNE, the data lie scattered above and below the SM prediction, in part due to the small sample size.
The disparity between the data and SM prediction in MiniBooNE shows the inability of the SM to accommodate the MiniBooNE low energy excess in the electron-neutrino data while remaining in agreement with the MiniBooNE muon-neutrino data.
In contrast to the SM, the 3+1 oscillation model provides the additional freedom necessary to potentially better accommodate the MiniBooNE muon neutrino data and low energy excess within systematic errors.
In the 3+1 scenario we expect $\nu_\mu\rightarrow\nu_e$ oscillations to increase the prediction in the electron-neutrino channels of both experiments, while $\nu_e$ disappearance will reduce the intrinsic electron-neutrino backgrounds, and $\nu_\mu$ disappearance will reduce the muon-neutrino prediction as well as the contribution of misidentified events in the electron-neutrino observable channel.
The prediction for the best-fit 3+1 scenario across both experiments is shown in Figure~\ref{fig:osc_ex}, separated by component, experiment, and observable channel.
Figure~\ref{fig:mb_mu_ex} compares the MiniBooNE unconstrained muon neutrino and antineutrino prediction to observed data, where the crosses denote the unconstrained 3+1 prediction and the dashed line denotes the unconstrained SM prediction.
Figure~\ref{fig:mb_e_ex} compares the MiniBooNE electron neutrino and antineutrino prediction to data; the prediction and errors are shown after being constrained by the muon neutrino data for the $3+1$ and SM scenarios in purple, whereas the unconstrained $3+1$ prediction is shown by the stacked histogram.
While the best-fit 3+1 scenario is preferred to the no-oscillation scenario, it still cannot perfectly describe MiniBooNE's low energy excess, especially at the lowest energies.
This is consistent with the recent MicroBooNE results, which indicate that the low energy excess cannot be explained entirely by electron neutrinos~\cite{MicroBooNE:2021rmx}.
This is also consistent with previous MiniBooNE studies indicating a forward-peaked angular distribution of the low energy excess~\cite{MiniBooNE:2020pnu}.
The best-fit 3+1 parameters and the $\Delta\chi^2$ between the SM and 3+1 scenarios are given in Table~\ref{tbl:bestfit}.
We obtain a best-fit that includes substantial sterile-electron mixing, with $|U_{e4}|^2$ near $0.5$, and moderate sterile-muon mixing, with $|U_{\mu 4}|^2$ near $0.02$, for both the MiniBooNE only and combined fits.
The best-fit $\Delta m^2$ is near $\SI{0.2}{\square\eV}$ as well.
The large sterile-electron mixing at the best-fit point is in tension with constraints on unitarity in the neutrino sector~\cite{Ellis:2020hus}.
However, a broad region in parameter space is allowed within the estimated $1\sigma$ confidence region, as is visualized in Figure~\ref{fig:fit_contours}, extending to regions of parameter space which are not in tension with unitarity constraints.
The $1\sigma$ allowed region in $\Delta m^2$ and $\sin^2(2\theta_{\mu e})$ is similar to that reported in~\cite{MiniBooNE:2020pnu}, and takes the form of a diagonal band because the MiniBooNE LEE spans a broad energy range and extends down to the $\SI{200}{\MeV}$ boundary.
The excess drives the allowed values of $\sin^2(2\theta_{\mu e})$, but large deviations from the best-fit in $|U_{e4}|^2$ and $|U_{\mu 4}|^2$ are allowed, provided the combination produces enough $\nu_\mu\rightarrow\nu_e$ appearance to describe the excess.
This freedom is present in part because the systematic errors of the prediction allow large changes to the muon-neutrino channel with little penalty, which in turn provides only a weak constraint on $|U_{\mu 4}|^2$ through $\nu_\mu$ disappearance.
MicroBooNE's electron-neutrino data do not exhibit an excess at the lower end of their energy spectrum, as MiniBooNE's electron-neutrino data do, and MicroBooNE overall observes a lower event rate than predicted by the nominal no-oscillation model~\cite{MicroBooNE:2021rmx}.
However, the data sample from MicroBooNE does not have the statistical power needed to rule out a 3+1 $\nu_\mu\rightarrow\nu_e$ explanation of the MiniBooNE low-energy-excess.
The observed event-rate from MicroBooNE's electron-neutrino channel precludes very large $\nu_\mu\rightarrow\nu_e$ appearance at values of $\Delta m^2$ and $\sin^2(2\theta_{\mu e})$ higher than the MiniBooNE allowed region.
This manifests in Figure~\ref{fig:fit_contours} as a small shift in the allowed region to lower $\Delta m^2$ and lower $\sin^2(2\theta_{\mu e})$.
In Figure~\ref{fig:ub_ex}, the best-fit 3+1 oscillation prediction increases the expected number of events in a region where MicroBooNE observes a deficit, suggesting that the fit is primarily driven by the larger MiniBooNE data sample, in line with our expectation.
The 3+1 scenario is preferred over the no-oscillation model in both the MiniBooNE-only and joint-fit cases.
In the MiniBooNE-only fit we obtain a $\Delta \chi^2=\mbchi$ between the two models, whereas in the joint fit we obtain a $\Delta \chi^2=\jchi$ for 3 additional degrees of freedom introduced in the fit.
If we assume the asymptotic approximation to the test-statistic distribution provided by Wilks' theorem~\cite{wilks1938} with a difference of three degrees of freedom between the models, then we obtain p-values of $\mbpval$ and $\jpval$ in favor of the 3+1 scenario for the MiniBooNE-only and joint analyses, respectively.
However, we expect the true difference in degrees of freedom between the models to be less than three, based on both the degeneracy inherent in the 3+1 model and the smaller difference in degrees of freedom observed in the two-neutrino MiniBooNE oscillation study~\cite{MiniBooNE:2020pnu}.
A reduction in the difference in degrees of freedom between the models would increase the significance of these two statistical tests.
Therefore, we conservatively estimate that the MiniBooNE-only 3+1 model test prefers the 3+1 model to the SM at approximately $\mbsigma\sigma$, and the addition of the MicroBooNE electron-neutrino CCQE data reduces this significance to approximately $\jsigma\sigma$.
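For reference, the p-values quoted above follow from the $\chi^2$ survival function; a one-line Python sketch, with the assumed number of degrees of freedom left as an input:
\begin{verbatim}
from scipy.stats import chi2

def wilks_pvalue(delta_chi2, dof=3):
    # Probability of a test statistic at least this large under the
    # null (no-oscillation) hypothesis, per Wilks' theorem.
    return chi2.sf(delta_chi2, dof)
\end{verbatim}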
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fit_contours.pdf}
\caption{The results of the MiniBooNE-only and combined fits.
The likelihood is obtained by profiling over all parameters except $\Delta m^2$ and $\sin^2(2\theta_{\mu e})$.
The two best-fit points are shown as appropriately colored stars, and the contours are obtained by comparing the profile-likelihood-ratio test-statistic to the asymptotic distribution provided by Wilks' theorem, and assuming a difference of two degrees of freedom.}
\label{fig:fit_contours}
\end{figure}
\section*{Conclusion}
\begin{table}[t!]
\newcolumntype{Y}{>{\centering\arraybackslash}X}
\begin{center}
\begin{tabularx}{\linewidth}{l Y Y Y Y}
\toprule
3+1 Fit & $|U_{e4}|^2$ & $|U_{\mu 4}|^2$ & $\Delta m^2$ & $\Delta \chi^2$/ dof \\
\midrule
MiniBooNE only & \mbue & \mbuu & \mbdmbare & \mbchidof \\
Combination & \jue & \juu & \jdmbare & \jchidof \\
\bottomrule
\end{tabularx}
\caption{Summary of results. The $\Delta \chi^2/\text{dof}$ in the last column compares the $3+1$ model to the no-oscillation model.}
\label{tbl:bestfit}
\end{center}
\end{table}
This letter has explored a full 3+1 sterile-neutrino oscillation model within the context of results from the MiniBooNE and MicroBooNE experiments.
In the MiniBooNE electron-like analysis, we consider $\nu_\mu\rightarrow\nu_e$ appearance alongside both $\nu_e$ and $\nu_\mu$ disappearance.
In the MicroBooNE CCQE analysis, we consider $\nu_e$ appearance and $\nu_e$ disappearance.
In an analysis of the MiniBooNE-only data, we find a best-fit to the 3+1 model of $\Delta m^2 = \mbdm$, $|U_{e4}|^2=\mbue$, $|U_{\mu 4}|^2=\mbuu$, and $\sin^2(2\theta_{\mu e})=\mbsin$.
A joint-fit to both analyses finds a best fit to the 3+1 model at oscillation parameters of $\Delta m^2 = \jdm$, $|U_{e4}|^2 = \jue$, $|U_{\mu 4}|^2 = \juu$, and $\sin^2(2\theta_{\mu e})=\jsin$.
In the MiniBooNE only analysis, the 3+1 scenario is preferred over the no-oscillation case with a $\Delta \chi^2$/dof of $\mbchidof$, whereas in the combined analysis we obtain $\Delta \chi^2/\text{dof} = \jchidof$.
Although the 3+1 model is not a perfect description of the low-energy MiniBooNE electron neutrino data, we find that a 3+1 sterile neutrino oscillation scenario is a better description of the MiniBooNE data than the no-oscillation scenario and is not in tension with MiniBooNE's muon neutrino data.
We also find that the MicroBooNE electron-neutrino data do not rule out the allowed 3+1 interpretations for the MiniBooNE data, but do slightly reduce the significance of the result and make only a small modification to the allowed regions.
\begin{acknowledgments}
We acknowledge the support of Fermilab, the Department of Energy,
and the National Science Foundation, and
we acknowledge Los Alamos National Laboratory for LDRD funding.
\vspace{-0.01in}
\end{acknowledgments}
\section{Likelihood} \label{app:likelihood}
The physics parameters of the model are the mass squared splitting $\Delta m^2$, electron-sterile mixing $\left|U_{e4}\right|^2$, and muon-sterile mixing $\left|U_{\mu 4}\right|^2$.
The mixing parameters ($\left|U_{e4}\right|^2$, $\left|U_{\mu 4}\right|^2$) are allowed to vary between $0$ and $1$ while maintaining unitarity of the mixing matrix through the condition $\left|U_{e4}\right|^2 + \left|U_{\mu 4}\right|^2 \leq 1$.
The additional nuisance parameters of the model are the MicroBooNE per-bin systematic scalings $\alpha_i$.
Here the set of physics parameters are denoted by $\vec{\theta}$, and the set of nuisance parameters denoted by $\vec{\eta}$.
The combined MiniBooNE-MicroBooNE likelihood is the product of the two experimental likelihoods such that
\[
\mathcal{L}(\vec{\theta},\vec{\eta}|\vec{x}) = \mathcal{L}_\text{MB}(\vec{\theta}|\vec{x}_\text{MB}) \times \mathcal{L}_\text{uB}(\vec{\theta},\vec{\eta}|\vec{x}_\text{uB}),
\]
where $\mathcal{L}_\text{MB}$ is the MiniBooNE likelihood, $\mathcal{L}_\text{uB}$ is the MicroBooNE likelihood, $\vec{x}_\text{MB}$ is the collection of MiniBooNE data counts, $\vec{x}_\text{uB}$ is the collection of MicroBooNE data counts, and $\vec{x}=\vec{x}_\text{MB}\cup\vec{x}_\text{uB}$ is the collection of all data counts.
The MiniBooNE likelihood is approximated as a multivariate normal distribution
\[
\mathcal{L}_\text{MB}(\vec{\theta}|\vec{x}_\text{MB}) = \mathcal{N}(\vec{x}_\text{MB}|\vec{\mu}_\text{MB}(\vec{\theta}),\bm{\Sigma}_\text{MB}(\vec{\theta})),
\]
where $\vec{\mu}_\text{MB}$ is the predicted number of data counts in each bin, and $\bm{\Sigma}_\text{MB}$ is the MiniBooNE covariance matrix.
In this case the MiniBooNE covariance matrix includes systematic errors, Poisson statistical errors, and Monte-Carlo statistical errors.
The MicroBooNE likelihood is given by
\[
\mathcal{L}_\text{uB}(\vec{\theta},\vec{\eta}|\vec{x}_\text{uB}) = \mathcal{N}(\vec{\alpha}|1,\bm{\Sigma}_\text{uB}) \times \prod_i \mathcal{L}^\text{Eff}(\alpha_i\mu_i^\text{uB}(\vec{\theta}),\sigma_{i,\text{mc}}^2(\vec{\theta},\alpha_i)|x_{i,\text{uB}}),
\]
where $\mathcal{N}(\vec{\alpha}|1,\bm{\Sigma}_\text{uB})$ is the multivariate normal prior on the MicroBooNE systematics scalings, $\bm{\Sigma}_\text{uB}$ is the MicroBooNE fractional covariance matrix, $\mu_i^\text{uB}$ is the predicted number of data counts in each bin before systematic modifications, and $\sigma_{i,\text{mc}}^2$ is the Monte-Carlo statistical error on the per-bin data count prediction after the systematics scalings have been applied.
The MicroBooNE fractional covariance matrix, $\bm{\Sigma}_\text{uB}$, is the constrained fractional covariance matrix from~\cite{MicroBooNE:2021jwr,hepdata:1953568/t3}.
The likelihood $\mathcal{L}^\text{Eff}$ is a Poisson-based likelihood that accounts for finite Monte-Carlo sample errors, and is described in~\cite{Arguelles:2019izp}.
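Schematically, the total log-likelihood is assembled as follows. This is a sketch only: the effective likelihood of~\cite{Arguelles:2019izp} is passed in as a callable (with its Monte-Carlo variance dependence folded into that callable), and all array names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def log_L_total(x_mb, mu_mb, cov_mb, x_ub, mu_ub, alpha, cov_ub, log_L_eff):
    # MiniBooNE: multivariate normal in the observed counts; cov_mb bundles
    # systematic, Poisson-statistical, and finite-MC errors.
    ll = multivariate_normal.logpdf(x_mb, mean=mu_mb, cov=cov_mb)
    # MicroBooNE: normal prior on the per-bin scalings alpha_i ...
    ll += multivariate_normal.logpdf(alpha, mean=np.ones_like(alpha), cov=cov_ub)
    # ... times the per-bin effective likelihood at the scaled prediction.
    ll += sum(log_L_eff(a * m, x) for a, m, x in zip(alpha, mu_ub, x_ub))
    return ll
\end{verbatim}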
\section{Peering Schemes under Optimal Strategies}
In the previous section, we constructed a network model under which the IAP can offer CPs differentiated peering types.
Based on this model, we explore the optimal strategies that maximize the IAP's profit or end-users' welfare in this section. We will show that the peering schemes under the optimal strategies (pure paid, pure free, or hybrid peering) largely depend on the type of data traffic, e.g., text or video, and the characteristic of network capacity.
In particular, we first show the embodiments of traffic type and capacity characteristic in our model as a preliminary (Section \ref{subsec:type and characteristic}).
We then study the IAP's profit-optimal strategy and its corresponding peering scheme under various network conditions (Section \ref{subsec:profit-optimal pricing}).
Finally, we study the welfare-optimal strategy and compare it with the profit-optimal counterpart to derive regulatory implications (Section \ref{subsec:welfare-optimal pricing}).
\subsection{Traffic Type and Capacity Characteristic}\label{subsec:type and characteristic}
End-users, as the demanders of network service, have their data usage discounted under network congestion. In Section \ref{subsec:model user}, we captured the discount of usage by the gain function \(G(\phi)\) and the congestion sensitivity of usage by the elasticity of gain \(\epsilon^G_\phi\). Furthermore, when the end-users' usage is for different types of data traffic, e.g., text or video, the congestion elasticity of gain may have very different properties.
In particular, text traffic such as file sharing is usually insensitive to mild congestion but cannot tolerate severe congestion. Conversely, video traffic such as online video streaming is quite sensitive to mild congestion. Based on these characteristics, we assume that the usage gain \(G(\phi)\) of text (video) traffic decreases concavely (convexly) with the congestion level \(\phi\) and its elasticity \(\epsilon^G_\phi\) increases (decreases) with the congestion level \(\phi\). Under such assumptions, the type of data traffic can be embodied in the monotonicity of the elasticity of gain.
As the supplier of network service, the IAP is required to provide enough capacity to guarantee the service quality. In Section \ref{subsec:model CP}, we captured the capacity requirement by the capacity function \(H(\phi)\). Because it always requires more capacity to maintain lower congestion level, we assumed that \(H(\phi)\) decreases with the congestion level \(\phi\). Besides the monotonicity of the capacity function, characteristics of network capacity are also reflected in the monotonicity of the congestion elasticity \(\epsilon^H_\phi\), which depends on the network technology adopted by the IAP. An increasing (decreasing) elasticity function \(\epsilon^H_\phi\) of the congestion level \(\phi\) means that as \(\phi\) is higher, the required capacity is more (less) sensitive to the congestion.
In summary, the gain function \(G(\phi)\) and the capacity function \(H(\phi)\) characterize the impact of network congestion on the end-users and the IAP, respectively.
With the development of CPs' content services and IAPs' network technologies, end-users' data traffic type and IAPs' capacity characteristic are continuously changing. These changes can be reflected in the monotonicity of the congestion elasticities of the gain and capacity functions which affect the peering schemes of the optimal strategies as shown in the following two subsections.
\subsection{Profit-optimal Strategy}\label{subsec:profit-optimal pricing}
In this subsection, we study the optimal strategy and the corresponding peering scheme which maximize the IAP's profit. We consider that the IAP incurs a cost of \(k \in (0,+\infty)\) by serving per-unit data traffic, which models the recurring maintenance and utility cost like electricity. We define the IAP's profit by \(U(\theta,c,k) \triangleq (p+q-k) d_h(\theta,c) + (p-k) d_l(\theta,c)\),
where the first and second terms are its profits from the data traffic of paid and settlement-free peering, respectively. Under any given total capacity \(c\) and per-unit traffic cost \(k\), the IAP can maximize its profit \(U\) by determining the strategy \(\theta\), i.e., the triple \((p,q,r)\), that solves the optimization problem:
\begin{align*}
\underset{\theta}{\text{maximize}} \quad & U(\theta,c,k) = (p\!+\!q\!-\!k) d_h(\theta,c) + (p\!-\!k) d_l(\theta,c) \\
\text{subject to} \quad & \theta\in (0,+\infty) \!\times\! (0,+\infty) \!\times\! [0,1].
\end{align*}
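Numerically, this is a box-constrained maximization over the triple \((p,q,r)\); a minimal sketch is given below, where the equilibrium data loads \(d_h\) and \(d_l\) are supplied as callables (in our model they come from solving the congestion equilibrium; the placeholder forms below are purely illustrative).
\begin{verbatim}
from scipy.optimize import minimize

def profit(theta, c, k, d_h, d_l):
    p, q, r = theta
    return (p + q - k) * d_h(theta, c) + (p - k) * d_l(theta, c)

# Purely illustrative placeholders for the equilibrium data loads.
d_h = lambda th, c: max(th[2] * c * (1 - th[0]) * (1 - th[1]), 0.0)
d_l = lambda th, c: max((1 - th[2]) * c * (1 - th[0]), 0.0)

res = minimize(lambda th: -profit(th, c=0.2, k=0.2, d_h=d_h, d_l=d_l),
               x0=[0.5, 0.5, 0.5],
               bounds=[(1e-6, None), (1e-6, None), (0.0, 1.0)])
p_star, q_star, r_star = res.x
\end{verbatim}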
By solving the above optimization problem, we characterize the IAP's profit-optimal strategy as follows.
\begin{theorem}\label{the:profit maximization}
If a strategy \(\theta = (p,q,r)\) maximizes the IAP's profit \(U\), then its capacity allocation decision must not be zero, i.e., \(r\neq 0\), and the following conditions must hold:\\
1) the IAP's revenue from end-users satisfies that
\begin{equation}\label{eq:profit maximization average}
\displaystyle p d_t = (p+q-k)d_h\epsilon^{d_h}_p + (p - k) d_l \epsilon^{d_l}_p.
\end{equation}
2) the ratio of the IAP's profits from the paid and settlement-free peering types satisfies that
\begin{equation}\label{eq:profit maximization ratio}
\frac{(p+q-k)d_h}{(p-k)d_l} \!=\! \frac{-pd_t\epsilon^{d_l}_q+qd_h\epsilon^{d_l}_p}{pd_t\epsilon^{d_h}_q-qd_h\epsilon^{d_h}_p} \!\!
\begin{cases}
\!=\! -\displaystyle\frac{\epsilon^{d_l}_r}{\epsilon^{d_h}_r} \ \text{if} \ r\in (0,1);\vspace{0.05in}\\
\!\ge\! -\displaystyle\frac{\epsilon^{d_l}_r}{\epsilon^{d_h}_r}\ \text{if} \ r\! =\! 1.
\end{cases}
\end{equation}
\end{theorem}
Theorem \ref{the:profit maximization} states that the capacity allocation decision of a profit-maximizing IAP must not be zero, i.e., the peering scheme under any profit-optimal strategy must not be the pure free peering.
In fact, the IAP could always be better off by switching from the pure free peering scheme to the hybrid peering scheme.
Intuitively, when the IAP adopts the pure free peering scheme, i.e., \(r=0\), its revenue is all from end-users.
However, when the IAP switches to the hybrid peering scheme, i.e., \(r\in (0,1)\), it simultaneously offers the paid and settlement-free peering types and some high-value CPs would shift from the settlement-free peering type to the paid peering type. As a result, the IAP could extract additional revenue from the high-value CPs and thus obtain higher total profit.
Notice that from the perspective of profit maximization, although the hybrid peering scheme is always better than the pure free peering scheme, this kind of relationship does not exist between the pure paid and pure free peering schemes.
When the IAP shifts from the pure free peering scheme to the pure paid peering scheme, i.e., \(r=1\), some low-value CPs which cannot afford the paid peering type have to exit the network platform. Consequently, the IAP can no longer charge end-users for their data usages on the low-value CPs, although it can generate additional revenue from the high-value CPs using the paid peering type. As a result, the comprehensive effect of these changes may either increase or decrease the IAP's total profit.
Theorem \ref{the:profit maximization} also gives the necessary conditions, i.e., Equation (\ref{eq:profit maximization average}) and (\ref{eq:profit maximization ratio}), that the profit-optimal strategies need to meet.
Equation (\ref{eq:profit maximization average}) shows the relationship among the IAP's revenue from end-users (left-hand side), its profits from the data traffic of paid and settlement-free peering types, and the elasticities of the data loads with respect to the user-side price.
Equation (\ref{eq:profit maximization ratio}) characterizes the relationship among the ratio of the profits from the data traffic of paid and settlement-free peering types and the elasticities of the data loads with respect to the price and allocation decisions.
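Verifying these conditions numerically only requires the elasticities, which can be estimated by central differences; a small helper in our notation:
\begin{verbatim}
def elasticity(f, x, h=1e-6):
    # Numerical elasticity eps_x^f = (x / f(x)) * df/dx, central difference.
    return x * (f(x + h) - f(x - h)) / (2.0 * h * f(x))
\end{verbatim}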
By far we have shown that the peering scheme of a profit-maximizing IAP must not be the pure free peering. Furthermore, Corollary \ref{cor:profit maximization congestion responses} and \ref{cor:profit maximization hazard rate responses} tell when the scheme can only be the pure paid peering and the hybrid peering, i.e., \(r=1\) and \(r\in (0,1)\), respectively.
\begin{corollary}\label{cor:profit maximization congestion responses}
For any profit-optimal strategy, its capacity allocation decision must be one, if \(H(\epsilon^H_\phi/\epsilon^G_\phi+1)\) is an increasing function of the congestion level \(\phi\).
\end{corollary}
Corollary \ref{cor:profit maximization congestion responses} provides a sufficient condition for the peering scheme under any profit-optimal strategy to be the pure paid peering: the function \(H(\epsilon^H_\phi/\epsilon^G_\phi+1)\) is increasing in the congestion level \(\phi\). In this condition, \(\epsilon^H_\phi/\epsilon^G_\phi\) can be interpreted as a metric of {\em relative gain elasticity of capacity}, i.e., the ratio of the percentage increases in the required capacity and usage gain in response to the percentage decrease in the congestion level. Because the gain function \(G\) and the capacity function \(H\) both decrease with the congestion level \(\phi\), if \(H(\epsilon^H_\phi/\epsilon^G_\phi+1)\) is an increasing function of \(\phi\), the relative gain elasticity of capacity \(\epsilon^H_\phi/\epsilon^G_\phi\) must increase with \(\phi\). This means that as the congestion level is lower, the required capacity is less elastic to the usage gain, or equivalently, the usage gain is more elastic to the required capacity.
Under such a condition, when the IAP increases the capacity allocated to the paid peering type that maintains a lower congestion level, the increase of the profit from the data traffic of paid peering type is larger than the decrease of the profit from the data traffic of settlement-free peering type. As a result, the IAP's total profit would increase. Thus, the IAP should only offer the paid peering type for optimizing its total profit.
Furthermore, we see that whether the sufficient condition in Corollary \ref{cor:profit maximization congestion responses} holds largely depends on the monotonicity of the elasticity of capacity \(\epsilon^H_\phi\) and the monotonicity of the elasticity of gain \(\epsilon^G_\phi\). The former and the latter reflect the characteristic of system capacity and the type of data traffic, respectively, as we assumed in Section \ref{subsec:type and characteristic}.
When \(\epsilon^H_\phi\) decreases with congestion and \(\epsilon^G_\phi\) increases with congestion, e.g., data traffic is mainly for text content, the sufficient condition must not hold. Under such a case, the profit-optimal peering schemes are often the hybrid peering, which will be shown in Section \ref{sec:simulation}.
Conversely, when \(\epsilon^H_\phi\) increases with congestion or \(\epsilon^G_\phi\) decreases with congestion, e.g., data traffic is mostly for online video, the sufficient condition may hold, under which the profit-optimal peering schemes are the pure paid peering.
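For given functional forms, the sufficient condition of Corollary \ref{cor:profit maximization congestion responses} can be checked on a grid, reusing the elasticity helper sketched above and taking elasticities as magnitudes (i.e., \(\epsilon^G_\phi = -\phi G'(\phi)/G(\phi)\), and similarly for \(H\), since both functions decrease in \(\phi\)); the example forms below are illustrative.
\begin{verbatim}
import numpy as np

def condition_increasing(G, H, grid):
    # Evaluate H(phi) * (eps_H / eps_G + 1) and test monotonicity on the grid.
    vals = []
    for phi in grid:
        eps_G = -elasticity(G, phi)  # magnitude for decreasing G
        eps_H = -elasticity(H, phi)  # magnitude for decreasing H
        vals.append(H(phi) * (eps_H / eps_G + 1.0))
    return all(a <= b for a, b in zip(vals, vals[1:]))

grid = np.linspace(0.05, 0.95, 50)
# A concave (text-like) gain with H(phi) = 1/phi: per the discussion
# above, the sufficient condition should fail (prints False).
print(condition_increasing(lambda t: 1 - t ** 2, lambda t: 1.0 / t, grid))
\end{verbatim}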
\begin{definition}
The hazard rate of a cumulative distribution function \(F(x)\) is defined by \(\tilde{F}(x) = F'(x)/\left[1-F(x)\right]\).
\end{definition}
The hazard rate captures the proportion of the complementary cumulative distribution \(1-F(x)\) that is reduced due to a marginal increase of \(x\) and measures the rate of decrease in \(1-F(x)\) at the value \(x\).
In particular, for the cumulative distributions of users' value \(F_u(x)\) and CPs' value \(F_v(y)\), the hazard rates \(\tilde{F}_u(x)\) and \(\tilde{F}_v(y)\) measure the rates of decrease in the population of users and CPs whose values are higher than \(x\) and \(y\), respectively.
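A direct numerical check of the condition in Corollary \ref{cor:profit maximization hazard rate responses} below, for a hypothetical pair of value distributions in which the user side has the lower hazard rate:
\begin{verbatim}
import numpy as np

def hazard(F, dF, x):
    # Hazard rate F'(x) / (1 - F(x)) of a cdf F with density dF.
    return dF(x) / (1.0 - F(x))

# Illustrative only: exponential user values (constant hazard 1) versus
# uniform CP values on [0, 1] (hazard 1 / (1 - q) > 1 for q > 0).
Fu, dFu = lambda x: 1.0 - np.exp(-x), lambda x: np.exp(-x)
Fv, dFv = lambda y: min(max(y, 0.0), 1.0), lambda y: 1.0 if 0.0 <= y < 1.0 else 0.0
for p, q in [(0.5, 0.2), (1.0, 0.8)]:
    print(hazard(Fu, dFu, p) < hazard(Fv, dFv, q))  # True, True
\end{verbatim}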
\begin{corollary}\label{cor:profit maximization hazard rate responses}
For any profit-optimal strategy, its capacity allocation decision must be in the interval \((0,1)\), if the inequality \(\tilde{F}_u(p) < \tilde{F}_v(q)\) holds for any prices \(p,q\in (0,+\infty)\).
\end{corollary}
Corollary \ref{cor:profit maximization hazard rate responses} provides a sufficient condition for the peering scheme under any profit-optimal strategy to be the hybrid peering: the hazard rate of the distribution of users' value \(\tilde{F}_u(p)\) is lower than that of CPs' value \(\tilde{F}_v(q)\) for any prices \(p\) and \(q\). This condition means that with the increases in any two-sided prices \(p\) and \(q\), the rate of decrease of the population of the users whose values are higher than \(p\) is always slower than that of the CPs whose values are higher than \(q\). Under this condition, the user side has a higher value on data usages than the CP side. When the IAP adopts the pure paid peering scheme, some low-value CPs will quit the IAP's network platform and the IAP will lose lots of charges from the high-value users for their data usages on the low-value CPs. As a result, the IAP's profit cannot be maximized. By contrast, when the IAP adopts the hybrid peering scheme, all CPs will connect to the IAP by the paid or settlement-free peering type and the IAP can fully charge the high-value users, under which the IAP's profit can be maximized.
\vspace{0.03in}
\textbf{Summary of Implications}: The theoretical results in this subsection could guide IAPs in choosing peering schemes.
In particular, IAPs should always provide the paid peering type for optimizing their profits (by Theorem \ref{the:profit maximization}), because it can extract additional revenue by charging the high-value CPs.
Furthermore, when the CP side is more sensitive to price than the user side or network traffic is mainly for text content, the profit-optimal strategies usually offer the paid and settlement-free peering types simultaneously (by Corollary \ref{cor:profit maximization hazard rate responses}).
However, as online video streaming grows constantly and users become more sensitive to network congestion, the optimal scheme may shift to offering only the paid peering type (by Corollary \ref{cor:profit maximization congestion responses}).
\subsection{Welfare-optimal Strategy}\label{subsec:welfare-optimal pricing}
In this subsection, we explore the welfare-optimal strategy and the corresponding peering scheme. We also contrast them with the profit-optimal counterparts so as to draw implications on desirable regulations from a regulatory perspective.
For any user-side price \(p\), a user of value \(u\) gets the surplus \((u-p)\) for one unit data usage, and therefore, the total surplus of all users, when each of them generates one unit data usage, can be defined by
\[S(p) \triangleq \int_p^{+\infty} (u-p) dF_u\]
where \(F_u(\cdot)\) is the cumulative distribution function of users' value. Furthermore, the per-user average surplus for one unit data usage can be defined by \(s(p) \triangleq S(p)/M(p)\) where \(M(p)\) is the user population defined by Equation (\ref{eq:user population}). Accordingly, we define the total user welfare by
\begin{equation}\label{eq:user welfare}
W(\theta,c) \triangleq s(p)d_t(\theta,c) = s(p)\big[d_h(\theta,c)+d_l(\theta,c)\big],
\end{equation}
i.e., the users' average per-unit usage surplus \(s(p)\) multiplied by the total data usages \(d_t(\theta,c)\).
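The per-user average surplus \(s(p)\) can be computed by direct integration; a sketch for a value distribution with density \(f_u\) supported on \([0,u_{\max}]\), where both the density and the support bound are placeholders:
\begin{verbatim}
from scipy.integrate import quad

def avg_surplus(p, f_u, F_u, u_max):
    # s(p) = S(p) / M(p): integrated surplus over the active users,
    # normalized by the active population 1 - F_u(p).
    S, _ = quad(lambda u: (u - p) * f_u(u), p, u_max)
    return S / (1.0 - F_u(p))
\end{verbatim}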
Under any given capacity \(c\), we can maximize the user welfare \(W\) by determining the strategy \(\theta\), i.e., the triple \((p,q,r)\), that solves the optimization problem:
\begin{align*}
\underset{\theta}{\text{maximize}} \quad & W(\theta,c) = s(p) d_t(\theta,c) \\
\text{subject to} \quad & \theta\in (0,+\infty) \!\times\! (0,+\infty) \!\times\! [0,1].
\end{align*}
By solving the above optimization problem, we characterize the welfare-optimal strategy as follows.
\begin{theorem}\label{the:welfare maximization}
If a strategy \(\theta = (p,q,r)\) maximizes the user welfare \(W\), then its capacity allocation decision must not be one, i.e., \(r\neq 1\), and the elasticities of the total data load \(d_t\) with respect to the decisions \(p,q,\) and \(r\) must satisfy:
\begin{equation}\label{eq:welfare maximization}
\epsilon^{d_t}_p = - \epsilon^s_p,\ \epsilon^{d_t}_q = 0,\ \text{and}\ \, \epsilon^{d_t}_r
\begin{cases}
= 0 & \text{if} \ \, r\in (0,1);\\
\ge 0 & \text{if} \ \, r = 0.
\end{cases}
\end{equation}
\end{theorem}
Theorem \ref{the:welfare maximization} shows that the capacity allocation decision of a welfare-maximizing IAP must not be one, i.e., the peering scheme under any welfare-optimal strategy must not be the pure paid peering.
By Equation (\ref{eq:user welfare}), under any given user-side price, the IAP's capacity allocation should maximize its total data load for optimizing user welfare. As mentioned before, however, when the IAP adopts the pure paid peering scheme (i.e., only offers the paid peering type), it will lose the data usages on the low-value CPs which cannot afford the paid peering type and thus the total data load cannot be maximized.
This result implies that to protect user welfare, the settlement-free peering type should be retained for guaranteeing that all CPs connect to the IAP. In other words, the IAP should be encouraged to employ the pure free or hybrid peering scheme.
Besides, Equation (\ref{eq:welfare maximization}) in Theorem \ref{the:welfare maximization} gives necessary conditions for strategies to be welfare-optimal by characterizing the elasticities \(\epsilon^{d_t}_p, \epsilon^{d_t}_q\) and \(\epsilon^{d_t}_r\) of the total data load with respect to the strategy parameters. In particular, the user-side price elasticity \(\epsilon^{d_t}_p\) is opposite to the elasticity \(\epsilon^s_p\) of the user average surplus and the CP-side price elasticity \(\epsilon^{d_t}_q\) is zero. The capacity allocation elasticity \(\epsilon^{d_t}_r\) is zero or non-negative, when the peering scheme is the hybrid or pure free peering, respectively.
Theorem \ref{the:welfare maximization} has implied that the peering scheme of a welfare-maximizing IAP can only be the pure free or hybrid peering. Furthermore, Corollary \ref{cor:welfare maximization congestion responses} tells when the scheme must be the pure free peering, i.e., \(r=0\).
\begin{corollary}\label{cor:welfare maximization congestion responses}
For any welfare-optimal strategy, its capacity allocation decision must be zero, if \(H(\epsilon^H_\phi/\epsilon^G_\phi+1)\) is a decreasing function of the congestion level \(\phi\).
\end{corollary}
Corollary \ref{cor:welfare maximization congestion responses} provides a sufficient condition for the peering scheme of any welfare-optimal strategy to be the pure free peering: the function \(H(\epsilon^H_\phi/\epsilon^G_\phi+1)\) is decreasing in the congestion level \(\phi\). Specifically, if the relative gain elasticity of capacity \(\epsilon^H_\phi/\epsilon^G_\phi\) decreases with the congestion level \(\phi\), the sufficient condition must hold because the capacity function \(H\) decreases with \(\phi\).
Under such a condition, when the IAP allocates more capacity to the paid peering type, the increase of the data traffic of paid peering type is smaller than the decrease of the data traffic of settlement-free peering type. As a result, without changing the user-side price, the IAP's total data load will decrease and thus user welfare will decrease. Therefore, for optimizing user welfare, the peering scheme must be the pure free peering.
Moreover, we see that if the elasticity \(\epsilon^H_\phi\) decreases with congestion and \(\epsilon^G_\phi\) increases with congestion, e.g., data traffic is mainly for text content, the sufficient condition in Corollary \ref{cor:welfare maximization congestion responses} must be established and the peering schemes under the welfare-optimal strategies must be the pure free peering. This result is consistent with our intuitions: the paid peering type that maintains a lower congestion level requires more capacity than the settlement-free peering type under the same data load. However, it only brings a small increase of data usage for the congestion-insensitive text traffic. Thus, from the perspective of usage and welfare maximizations, it is worthless to offer the paid peering type for the IAP with limited total capacity, i.e., the pure free peering scheme is better than the hybrid peering scheme. Conversely, if the elasticity \(\epsilon^H_\phi\) increases with congestion and \(\epsilon^G_\phi\) decreases with congestion, e.g., data traffic is mostly for online video, the sufficient condition in Corollary \ref{cor:welfare maximization congestion responses} may not hold and the peering schemes under the welfare-optimal strategies may be the hybrid peering.
\vspace{0.03in}
\textbf{Summary of Implications}: The theoretical results in this subsection could help regulators to make regulatory policies on IAPs' peering schemes. In particular, to protect user welfare, regulators should discourage IAPs from offering only the paid peering type (by Theorem \ref{the:welfare maximization}), since some low-value CPs would have to exit the network platform under the pure paid peering scheme and users would lose the data usage and welfare generated from these CPs.
Furthermore, when network traffic is mostly for text (video), the peering schemes under the welfare-optimal strategies are often to offer only the settlement-free peering type (both the paid and settlement-free peering types) (by Corollary \ref{cor:welfare maximization congestion responses});
however, the peering schemes under the profit-optimal strategies are often to offer both the paid and settlement-free peering types (only the paid peering type) (by Corollary \ref{cor:profit maximization congestion responses}).
The discrepancies suggest that regulators might want to encourage IAPs to allocate more capacity to the settlement-free peering type.
\section{Sensitivities of Optimal Strategies}\label{sec:simulation}
In the previous section, we analyzed the profit-optimal and welfare-optimal strategies, especially their corresponding peering schemes under certain network conditions. With the rapid development of the Internet, characteristics of the market participants, i.e., users, IAPs, and CPs, are continuously changing. For instance, as online video streaming grows fast, users generate higher data demand on CPs and become more sensitive to network congestion, and IAPs are adopting new technologies, e.g., 4G and 5G, to update their capacity.
In this section, we study the sensitivities (dynamics) of the optimal strategies under these varying characteristics of the market participants.
\subsection{Setup of Model Parameters}
We first extend the demand distribution function \(F_w(w)\) and the gain function \(G(\phi)\) to characterize the varying characteristics of the market participants. To capture the growing data demand, we adopt a family of demand distribution function parameterized by \(\alpha\): \(F_w(w,\alpha) = w^\alpha\) for \(0\le w\le 1, \alpha>0\). Under this polynomial function form, as the parameter \(\alpha\) becomes larger, users' data demands are leaning towards higher values. To capture the increasing congestion sensitivity, we choose a family of gain function parameterized by \(\beta\): \(G(\phi,\beta) = 1 - \phi^{\frac{1}{\beta}}\) for \(0\le \phi\le 1, \beta>0\), which satisfies \(G(\phi,\beta_1) > G(\phi,\beta_2)\) for any \(\beta_1<\beta_2\). The parameter \(\beta\) can be regarded as a metric of users' congestion sensitivity, i.e., under the same congestion level, when the parameter \(\beta\) is larger, users' usage gain is smaller and they are more sensitive to network congestion.
\begin{figure}[t]
\centering
\includegraphics[width=0.19\textwidth]{figs/figure_1.pdf}
\includegraphics[width=0.19\textwidth]{figs/figure_2.pdf}
\includegraphics[width=0.19\textwidth]{figs/figure_3.pdf}
\caption{Optimal strategies under varying demand \(\alpha\) when \(\beta = 1, c=0.2\).}
\label{figure:demand}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.19\textwidth]{figs/figure_7.pdf}
\includegraphics[width=0.19\textwidth]{figs/figure_8.pdf}
\includegraphics[width=0.19\textwidth]{figs/figure_9.pdf}
\caption{Optimal strategies under varying sensitivity \(\beta\) when \(\alpha = 1, c=0.2\).}
\label{figure:sensitivity}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.19\textwidth]{figs/figure_4.pdf}
\includegraphics[width=0.19\textwidth]{figs/figure_5.pdf}
\includegraphics[width=0.19\textwidth]{figs/figure_6.pdf}
\caption{Optimal strategies under varying capacity \(c\) when \(\alpha=\beta=1\).}
\label{figure:capacity}
\end{figure}
We then choose the forms of the capacity function \(H(\phi)\) and the value distribution functions of users \(F_u(u)\) and CPs \(F_v(v)\). In particular, we use the capacity function \(H(\phi) = 1/\phi\) for \(\phi>0\), under which the required capacity is inversely proportional to the congestion level \(\phi\). This function form was widely used in prior work \cite{gibbens2000internet,jain2001analysis,richard2014pay}. We adopt the value distribution functions \(F_u(u) = u^{0.33}\) and \(F_v(v) = v^{0.33}\) for \(0\le u,v \le 1\), under which the values of users and CPs are leaning towards low values. Besides, we set the cost \(k\) of per-unit data traffic to be \(0.2\).
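For reference, the full parameterization of this section in code form (a direct transcription of the functional forms above):
\begin{verbatim}
F_w = lambda w, alpha: w ** alpha                  # demand cdf on [0, 1]
G   = lambda phi, beta: 1.0 - phi ** (1.0 / beta)  # usage gain under congestion
H   = lambda phi: 1.0 / phi                        # capacity per unit data load
F_u = lambda u: u ** 0.33                          # users' value cdf on [0, 1]
F_v = lambda v: v ** 0.33                          # CPs' value cdf on [0, 1]
k   = 0.2                                          # per-unit traffic cost
\end{verbatim}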
\subsection{Profit-optimal Strategy}
Under any given parameter of demand distribution \(\alpha\), congestion sensitivity of users \(\beta\), and capacity of the IAP \(c\), we denote the optimal strategy that maximizes the IAP's profit by \(\theta^* = (p^*,q^*,r^*)\).
Next, we explore how this profit-optimal strategy \(\theta^*\) changes with the parameters \(\alpha,\beta\), and \(c\).
Figures \ref{figure:demand}, \ref{figure:sensitivity}, and \ref{figure:capacity} plot the capacity allocation \(r^*\) and prices \(p^*,q^*\) as functions (solid curves) of the parameter of demand distribution \(\alpha\), congestion sensitivity \(\beta\), and IAP's capacity \(c\), respectively.
From Figures \ref{figure:demand} and \ref{figure:sensitivity}, we observe that as \(\alpha\) or \(\beta\) increases, \(r^*\), \(p^*\), and \(q^*\) all increase.
This observation indicates that when users have larger data demand or higher congestion sensitivity, under the profit-optimal strategy, more capacity would be allocated to the paid peering type and the prices of user side and paid peering type would be higher.
Intuitively, as users become more sensitive to network congestion, the IAP should add the capacity allocated to the paid peering type which maintains a lower congestion level (better data delivery quality) than the settlement-free peering type. The increase of congestion sensitivity also makes the network service more valuable for users and CPs; and therefore, the IAP should raise the prices for extracting more revenue from them.
From Figure \ref{figure:capacity}, we observe that larger values of the capacity \(c\) induce lower values of \(r^*\), \(p^*\), and \(q^*\). This observation implies that when the IAP's capacity is expanded, the fraction of the capacity allocated to the paid peering type and the prices of user side and paid peering type under the profit-optimal strategy would all decrease. Intuitively, if the IAP extends its capacity, the supply of data traffic becomes more abundant and thus the profit-optimal prices would decrease, as the basic economic principles of demand and supply imply.
\subsection{Welfare-optimal Strategy}
To protect user welfare, the IAP's strategy usually needs to be regulated. There are two points to note when making regulatory policies.
First, it is unwise to directly regulate the capacity allocation of the IAP which is often not visible to the public, and the regulatory policies should focus on the prices of user side and paid peering type.
Second, to ensure the feasibility of the policies, the IAP should not incur a loss under the regulated strategy.
Owing to these, we consider welfare-optimal strategies with two constraints: 1) the capacity allocation is profit-optimal, i.e., \(r = r^*\), and 2) the profit of the IAP is always positive, i.e., \(U>0\). Under these constraints, we denote the welfare-optimal prices of user side and paid peering type by \(p^\circ\) and \(q^\circ\), respectively. Next, we study the relationship between the welfare-optimal prices \(p^\circ, q^\circ\) and the profit-optimal prices \(p^*, q^*\).
The second and third subfigures of Figures \ref{figure:demand} to \ref{figure:capacity} plot the prices \(p^\circ\) and \(q^\circ\) as functions (dotted curves) of the parameter of demand distribution \(\alpha\), congestion sensitivity \(\beta\), and IAP's capacity \(c\).
We observe that the curves of the welfare-optimal prices \(p^\circ\) and \(q^\circ\) are always lower and higher than those of the profit-optimal prices \(p^*\) and \(q^*\), respectively.
This observation indicates that when the objective changes from profit maximization to welfare maximization, one should lower the price of user side but raise the price of paid peering type. Thus, to protect user welfare, regulators might limit the user-side price and encourage IAPs to change their profit source from the user side to the CP side.
When comparing the trends between the welfare-optimal and profit-optimal prices, we further observe that \(p^\circ\) and \(q^\circ\) have the opposite and same trends as \(p^*\) and \(q^*\), respectively, when \(\alpha\), \(\beta\), or \(c\) varies. In particular, the user-side prices \(p^\circ\) and \(p^*\) decrease and increase with \(\alpha\) or \(\beta\), but increase and decrease with \(c\), respectively. This observation suggests that to protect user welfare, when users' data demand or congestion sensitivity increases, regulators might want to tighten the price regulation on the user side; however, when IAPs expand their capacities, the price regulation might be relaxed moderately.
\vspace{0.03in}
\textbf{Summary of Implications}: The simulation results in this section could help IAPs and regulators to adjust the strategies and the corresponding regulatory policies under varying characteristics of the market participants.
In particular, with the rapid growth of online video streaming, data demand and congestion sensitivity of users are ever-increasing. For optimizing the profits, IAPs should allocate more capacity to the paid peering type and raise the prices of user side and paid peering type. For protecting user welfare, regulators might tighten the price regulation on the user side.
However, when IAPs expand their network capacities, IAPs and regulators should take the opposite operations.
\section{Introduction}
Internet access providers (IAPs) have constructed massive network platforms to make end-users access the Internet and obtain data from content providers\footnote{We use the term `content provider' in a broad sense that it includes Internet companies, e.g., Facebook \cite{facebook} and Netflix \cite{netflix}, content delivery networks (CDNs), e.g., Akamai \cite{akamai}, and transit ISPs, e.g., Sprint \cite{sprint}.} (CPs).
Traditionally, data delivery between IAPs and CPs was often based on settlement-free peering agreements \cite{faratin2007complexity}, under which the providers exchange traffic without any form of compensation. As a result, IAPs' revenues were mainly from charges on end-users.
In recent years, however, Internet traffic has been growing more than 50\% per annum \cite{labovitz2010internet} with the rapid popularity of data-intensive services, e.g., online video streaming and cloud-based applications.
To sustain the traffic growth, IAPs need to upgrade their network infrastructures but they feel that the revenues from end-users are often not sufficient to recoup the corresponding costs.
Meanwhile, such rapid traffic growth has caused serious network congestion, especially during peak hours. Consequently, the best-effort data delivery obtained by the settlement-free peering often does not satisfy the QoS requirements of congestion-sensitive CPs, e.g., Netflix \cite{netflix}.
Owing to these, some IAPs have begun to offer CPs a new type of peering agreement, called paid peering \cite{faratin2007complexity}, under which they provide CPs with a better delivery quality for a fee.
For instance, Comcast \cite{ComcastXFINITY} and Netflix reached a paid peering agreement in 2014 \cite{wyatt2014comcast}, where Comcast offers Netflix a direct connection that requires compensation.
Although the paid peering may increase the revenues of IAPs and improve the data delivery quality of CPs, it has caused concerns over net neutrality \cite{wu2003network}, i.e., whether IAPs should be allowed to charge CPs and differentiate their data traffic.
In 2015, the U.S. Federal Communications Commission (FCC) \cite{FCC} passed the Open Internet Order \cite{FCC_Open} to protect net neutrality. However, existing paid peering agreements were exempt from the ruling, because the FCC felt that it lacked in-depth background ``in the Internet traffic exchange context''.
Although several prior works \cite{faratin2007complexity,lodhi2014open,ma2017pay} have studied the settlement-free or paid peering, or their impacts on the market participants, e.g., IAPs, CPs, and end-users, some important questions remain unanswered, including the following.
\begin{itemize}
\item What is the optimal peering scheme for IAPs? More specifically, for maximizing the profits, should IAPs offer CPs both the settlement-free and paid peering to choose from, or only one of them?
\item How should regulators make policies on IAPs' peering schemes for protecting the welfare of end-users?
\item How should IAPs and regulators adjust peering schemes and regulatory policies under varying market parameters, e.g., data demand and congestion sensitivity of end-users and capacities of IAPs?
\end{itemize}
There are two challenges in addressing the above questions.
First, the level of network congestion, which reflects the data delivery quality of a peering type, is an endogenous variable in a network platform and cannot be directly set by an IAP.
On the one hand, the congestion level is influenced by the data load of network platform. On the other hand, the data usage of end-users and load of network platform are also influenced by the network congestion.
It is crucial to accurately capture the endogenous congestion so as to faithfully characterize the data delivery quality of a peering type.
Second, although the peering agreements are between IAPs and CPs, it is not enough to only characterize the IAPs' decisions towards CPs, e.g., pricing on the paid peering and capacity allocation between the settlement-free and paid peering.
We also need to consider the IAPs' charge on end-users because it directly affects the optimal objectives of IAPs and regulators, i.e., the profits and user welfare. Meanwhile, it also affects the data usage of end-users which impacts the level of network congestion and the delivery quality of peering types.
In this work, we model a network platform built by an IAP to answer the above questions.
We consider that the IAP can offer CPs the settlement-free and paid peering to choose from.
We measure the quality of a peering type by its congestion level, which is modeled as a function of network capacity and data load.
We consider that the IAP decides the network capacity allocated to the peering types and charges the CPs that use paid peering and end-users for accessing the Internet.
We capture CPs' choices over the peering types and end-users' data usage under the congestion and pricing parameters.
We derive an endogenous system congestion in an equilibrium.
Based on the equilibrium model, we characterize the optimal strategies, i.e., capacity allocation and pricing decisions, that maximize the IAP's profit or user welfare. We analyze the peering schemes under the optimal strategies. We also evaluate the changes in the optimal strategies under varying system parameters, e.g., data demand and congestion sensitivity of end-users and capacity of the IAP. Our main contributions and results are as follows.
\begin{itemize}
\item We model a network platform in which paid peering, settlement-free peering, or both are offered. We show the existence and uniqueness of a congestion equilibrium (Theorem \ref{the:uniqueness}) and study its changes under varying capacity allocation and pricing parameters (Corollary \ref{cor:pc effect} to \ref{cor:r effect}).
\item We analyze the peering schemes under the profit-optimal strategies (Theorem \ref{the:profit maximization} and Corollary \ref{cor:profit maximization congestion responses} and \ref{cor:profit maximization hazard rate responses}).
We find that to maximize the profits, IAPs need to always offer paid peering. When data traffic is mostly for text (video), they might (might not) want to simultaneously offer settlement-free peering.
\item We analyze the peering schemes under the welfare-optimal strategies (Theorem \ref{the:welfare maximization} and Corollary \ref{cor:welfare maximization congestion responses}).
We find that to maximize user welfare, settlement-free peering should always be provided. When data traffic is mostly for video (text), paid peering needs (needs not) to be simultaneously provided.
The results suggest that regulators might want to encourage IAPs to allocate more capacity to settlement-free peering.
\item We observe the changes of the optimal strategies under varying system parameters. We find that with growing user demand and congestion sensitivity, IAPs should allocate more capacity to paid peering and regulators might want to tighten user-side price regulation. However, as IAPs expand their capacities, IAPs and regulators should take the opposite operations.
\end{itemize}
We believe that our model and analysis could help IAPs to choose peering schemes and guide regulators to legislate desirable regulations.
\section{Conclusions}
In this paper, we model a network platform under which an IAP can offer CPs the paid and settlement-free peering to choose from, charging the CPs that use paid peering as well as the end-users.
We capture the data delivery qualities of the peering types by their levels of network congestion and derive the endogenous system congestion at equilibrium.
Based on the model, we characterize the IAP's peering schemes under the profit-optimal and welfare-optimal strategies.
We find that the profit and welfare objectives always drive the IAP to offer the paid and settlement-free peering, respectively.
However, whether to simultaneously offer the other peering type largely depends on the type of data traffic, e.g., text or video.
In particular, when data traffic is mostly for text (video), the IAP might (might not) want to offer the settlement-free peering for maximizing its profit and should not (should) offer the paid peering for optimizing user welfare.
This result suggests that regulators might want to encourage the IAP to allocate more capacity to the settlement-free peering.
We also explore the changes of the optimal strategies under varying data demand and congestion sensitivity of users and capacity of the IAP.
We find that with growing data demand and congestion sensitivity, the IAP needs to allocate more capacity to the paid peering under the profit-optimal strategies. However, when the IAP expands the capacity, it should do the opposite.
\section{Network Model}\label{sec:model}
In this section, we model a network platform built by an IAP, which transmits data traffic between CPs and end-users.
In particular, the IAP can offer CPs two types of peering agreements, i.e., settlement-free peering and paid peering.
Since CPs' utilities and preferences on the peering types depend on the data usage of end-users, we first capture the impact of IAP's charge on end-users' population and usage (Section \ref{subsec:model user}).
We then characterize the settlement-free and paid peering types and CPs' choices over them (Section \ref{subsec:model CP}).
Finally, we describe the IAP's capacity allocation to the peering types and derive a congestion equilibrium of the network platform (Section \ref{subsec:model congestion}).
\subsection{User Population and Data Usage}\label{subsec:model user}
We assume that the IAP adopts usage-based pricing \cite{hande10pricing} to charge end-users, and the per-unit usage charge is denoted by \(p \in (0,+\infty)\). This pricing scheme is widely used by most wireless IAPs, e.g., AT\&T \cite{att} and T-Mobile \cite{tmobile}, and some wired IAPs, e.g., Comcast \cite{comcastusage}. We consider a continuum of end-users, such that each user is modeled by her average value \(u\) of per-unit data usage. We denote the cumulative distribution function of users' value by \(F_u(\cdot)\) and assume it is continuously differentiable over \(\mathbb{R}_+\)\footnote{In this paper, the sign \(\mathbb{R}_+\) expresses the range \([0,+\infty)\).}. Intuitively, a user would benefit from and subscribe to the Internet access service if and only if her value \(u\) is higher than the price \(p\). Therefore, the population of the active users of the IAP is a function of \(p\), defined by
\begin{equation}\label{eq:user population}
M(p) \triangleq \int_p^{+\infty} dF_u = 1 - F_u(p).
\end{equation}
We consider a continuum of CPs and model each CP by two orthogonal characteristics: its average value \(v\) of per-unit data usage and end-users' average data demand \(w\) on it. We denote the cumulative distribution functions of CPs' value and demand by \(F_v(\cdot)\) and \(F_w(\cdot)\), respectively, and assume they are both continuously differentiable over \(\mathbb{R}_+\).
The data demand of an end-user is the amount of data she would consume under a congestion-free network. In fact, when there exists network congestion, e.g., packet delay or drop, end-users' data demand might not be fully fulfilled. We denote the level of network congestion for end-users retrieving CPs' contents by \(\phi\in [0,1]\). We define the end-users' average data usage on a CP of demand \(w\) by \(T(\phi,w) \triangleq wG(\phi)\), i.e., the data demand \(w\) multiplied by a gain factor \(G(\phi)\).
\begin{assumption}\label{ass:gain}
\(G(\phi)\colon [0,1] \mapsto [0,1]\) is a decreasing and continuously differentiable function of \(\phi\).
It satisfies \(G(0) = 1\) and \(G(1) = 0\).
\end{assumption}
Assumption \ref{ass:gain} states that the {\em usage gain} or simply {\em gain} \(G(\phi)\) decreases monotonically when the network congestion \(\phi\) deteriorates. In particular, the gain is one and the end-users' data usage \(T(w,\phi)\) equals the demand \(w\) under no congestion, i.e., \(\phi = 0\).
To characterize the rate of decrease of gain with respect to congestion, {\em elasticity} is often considered.
\begin{definition}
The elasticity of $y$ with respect to $x$, or $x$-elasticity of $y$, is defined by
$\displaystyle\epsilon_x^y \triangleq -\frac{x}{y}\frac{\partial y}{\partial x}$.
\label{def:elasticity}
\end{definition}
Elasticity can be expressed as $\epsilon_x^y = (-{\partial y}/y)/({\partial x}/x)$ and interpreted as the percentage decrease in $y$ (numerator) in response to the percentage increase in $x$ (denominator). In particular, $\epsilon_{\phi}^{G}$ characterizes the percentage decrease in the usage gain in response to the percentage increase in the congestion level. Based on this characterization, different forms of gain functions \(G(\phi)\) can be used to model different congestion sensitivities of data demand. For example, when end-users' data demand is more or less sensitive, i.e., decreases more sharply or gently, with respect to congestion, gain functions with higher or lower elasticities can be adopted, respectively.
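To make the role of the elasticity concrete, the following Python sketch evaluates the simple parametric gain family \(G(\phi) = (1-\phi)^{\alpha}\) -- an illustrative choice on our part, not one imposed by the model, which satisfies Assumption \ref{ass:gain} for \(\alpha \geq 1\) -- whose \(\phi\)-elasticity works out to \(\epsilon_{\phi}^{G} = \alpha\phi/(1-\phi)\), so that larger \(\alpha\) models more congestion-sensitive demand.
\begin{verbatim}
import numpy as np

def gain(phi, alpha):
    # Candidate gain family G(phi) = (1 - phi)**alpha with G(0)=1, G(1)=0.
    return (1.0 - phi) ** alpha

def elasticity(phi, alpha):
    # phi-elasticity of G: -(phi/G) * dG/dphi = alpha * phi / (1 - phi).
    return alpha * phi / (1.0 - phi)

phi = 0.3
for alpha in (1.0, 2.0, 4.0):
    print(f"alpha={alpha}: G={gain(phi, alpha):.3f}, "
          f"elasticity={elasticity(phi, alpha):.3f}")
\end{verbatim}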
\subsection{Paid Peering and Settlement-Free Peering} \label{subsec:model CP}
We consider that the IAP can offer CPs two types of peering agreements, i.e., paid peering and settlement-free peering, to choose from. The former provides a better data delivery quality than the latter but requires compensation. This peering differentiation scheme has been adopted by many IAPs, e.g., Comcast \cite{ComcastXFINITY} and Verizon \cite{VerizonFios}, in recent years.
In our model, we use the congestion level of a peering type to reflect its data delivery quality.
We denote the congestion levels of paid and settlement-free peering by \(\phi_h\) and \(\phi_l\), respectively. Because paid peering has better delivery quality than settlement-free peering, we assume the congestion level of paid peering is no higher than that of settlement-free peering, i.e., \(\phi_h\le \phi_l\).
We define the vector of the congestion levels by \(\bm{\phi} = (\phi_h,\phi_l)\).
We assume the IAP charges the CPs using paid peering a price \(q\in (0,+\infty)\) per unit of data usage.
For any CP with value \(v\) and demand \(w\), we define its utility of using paid peering by
\begin{align*}
\Pi_h(v,w;p,q,\phi_h) \triangleq (v-q)M(p)T(\phi_h,w) = (v-q)M(p)wG(\phi_h),
\end{align*}
i.e., the CP's surplus \((v-q)\) of per-unit data usage multiplied by end-users' aggregate data usage \(M(p)T(\phi_h,w)\) on the CP. Similarly, we define its utility of using settlement-free peering by
\begin{align*}
&\Pi_l(v,w;p,\phi_l) \!\triangleq\! vM(p)T(\phi_l,w) \!=\! vM(p)wG(\phi_l).
\end{align*}
We assume any CP would choose the peering type which induces the higher utility\footnote{Without loss of generality, we assume that a CP would choose paid peering when both peering types induce the same utility.}. Therefore, a CP would choose to use paid peering if and only if
\begin{equation}\label{eq:gain ratio}
v \ge \underline{v}(q,\bm{\phi}) \triangleq \frac{qG(\phi_h)}{G(\phi_h)-G(\phi_l)}
\end{equation}
where \(\underline{v}(q,\bm{\phi})\) is the boundary value separating the CPs that choose the two peering types; the inequality follows from \(\Pi_h \ge \Pi_l\) after dividing both sides by \(M(p)w\) and rearranging.
Based on the CPs' distributions \(F_v(\cdot)\) and \(F_w(\cdot)\), we define the data loads of paid and settlement-free peering types, i.e., the end-users' aggregate data usages on the CPs that choose them, by
\begin{align}\label{eq:congestion load}
\begin{cases}
\displaystyle D_h(p,q,\bm{\phi}) \triangleq \int_0^{+\infty}\!\!\!\!\int_{\underline{v}(q,\bm{\phi})}^{+\infty}M(p)T(\phi_h,w)dF_vdF_w \vspace{0.05in}\\
\displaystyle D_l(p,q,\bm{\phi}) \triangleq \int_0^{+\infty}\!\!\!\!\int_0^{\underline{v}(q,\bm{\phi})}\!M(p)T(\phi_l,w)dF_vdF_w.
\end{cases}
\end{align}
\subsection{Capacity Allocation and Congestion Equilibrium}\label{subsec:model congestion}
As shown in Equation (\ref{eq:congestion load}), given congestion levels \(\bm{\phi}\), the peering types have induced data loads.
To accommodate its data load under a certain congestion, each peering type needs to have enough network capacity. We model the capacity needed by a peering type as a function \(C(d,\phi)\) of its data load \(d\) and congestion level \(\phi\).
\begin{assumption}\label{ass:capacity}
We assume that \(C(d,\phi) = H(\phi)d\), where \(H(\phi)\colon (0,1] \mapsto [0,+\infty)\) is a decreasing and continuously differentiable function of \(\phi\) and satisfies \(H(\phi)\rightarrow +\infty\) as \(\phi \rightarrow 0\).
\end{assumption}
The form of function \(C(d,\phi)\) in Assumption \ref{ass:capacity} captures the capacity sharing \cite{chau2010viability} nature of network services, under which the needed capacity is proportional to the data load, i.e., \(kC(d,\phi) = C(kd,\phi)\). This guarantees CPs will not perceive any difference in terms of congestion level after service partitioning or multiplexing \cite{chau2010viability}. Therefore, the IAP can arbitrarily partition its capacity to multiple service classes (in our case, two peering types) without degradation of congestion level.
In Assumption \ref{ass:capacity}, we also define a {\em capacity} function \(H(\phi)\) to measure the capacity needed by a peering type that has a unit data load and maintains a congestion level \(\phi\). Intuitively, a peering type needs less capacity as its congestion deteriorates, i.e., \(H(\phi)\) decreases with \(\phi\).
In particular, when \(H(\phi) = 1/\phi\), the capacity needed by a peering type is \(C(d,\phi) = d/\phi\), a form which has been widely used in prior work \cite{gibbens2000internet,jain2001analysis,richard2014pay,wang2017optimal}.
We denote the inverse function of \(C(d,\phi)\) with respect to \(d\) by \(C^{-1}(c,\phi)\), which can be interpreted as the implied data load under a network capacity \(c\) and observed congestion level \(\phi\). By Assumption \ref{ass:capacity}, \(C^{-1}(c,\phi)\) increases with both \(c\) and \(\phi\).
We assume the IAP has a total capacity \(c\in (0,+\infty)\) and that it allocates fractions \(r\) and \((1-r)\) of the capacity to the paid and settlement-free peering, respectively, where \(r\in [0,1]\).
When the IAP makes exogenous price decisions \(p,q\) and capacity allocation decision \(r\), the congestion levels \(\bm{\phi}\) of the peering types can be endogenously determined. We define such a congestion equilibrium of the network platform as follows.
\begin{definition}[Congestion Equilibrium]\label{def:congestion equilibrium}
For any fixed prices \(p,q\), total capacity \(c\) and allocation decision \(r\), the congestion level \(\bm{\phi} = (\phi_h,\phi_l)\) is an equilibrium if and only if
\begin{align}\label{eq:congestion equilibrium}
\begin{cases}
D_h(p,q,\bm{\phi}) = C^{-1}\big(rc,\phi_h\big)\vspace{0.05in}\\
D_l(p,q,\bm{\phi}) = C^{-1}\big((1-r)c,\phi_l\big).
\end{cases}
\end{align}
\end{definition}
In Equation (\ref{eq:congestion equilibrium}) of Definition \ref{def:congestion equilibrium}, the left-hand sides are the induced data loads of the peering types given the price decisions \(p,q\) and congestion levels \(\bm{\phi}\), and the right-hand sides are their implied data loads under the capacity \(c\) and allocation decision \(r\). Under an equilibrium, both equal the end-users' aggregate data usages on the CPs using the peering types.
\begin{theorem}\label{the:uniqueness}
Under Assumption \ref{ass:gain} and \ref{ass:capacity}, for any fixed price decisions \(p,q\), total capacity \(c\) and allocation decision \(r\), there always exists a unique equilibrium for the network platform.
\end{theorem}
Theorem \ref{the:uniqueness} states that under minor assumptions of the usage gain (Assumption \ref{ass:gain}) and the required capacity (Assumption \ref{ass:capacity}), the existence and uniqueness of equilibrium can be guaranteed. Based on Theorem \ref{the:uniqueness}, we denote the unique equilibrium congestion by \(\bm{\varphi} = (\varphi_h,\varphi_l)\).
We denote the IAP's {\em strategy} by \(\theta \triangleq (p,q,r)\), i.e., the triple of the price and allocation decisions. Because the equilibrium congestion is determined by the strategy \(\theta\) and the total capacity \(c\), we also denote it by \(\bm{\varphi}(\theta,c)\). We denote the corresponding data loads of the paid and settlement-free peering types, respectively, by
\[d_h(\theta,c) \!\triangleq D_h\big(p,q,\bm{\varphi}(\theta,c)\big) \ \text{and} \ \ d_l(\theta,c) \!\triangleq D_l\big(p,q,\bm{\varphi}(\theta,c)\big).\]
We define the total data load of the network platform by \(d_t(\theta,c) \triangleq d_h(\theta,c) + d_l(\theta,c)\), i.e., the summation of the data loads of the two peering types.
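To illustrate how the equilibrium of Definition \ref{def:congestion equilibrium} can be computed numerically, the following Python sketch instantiates the model with illustrative primitives that are {\em not} part of the analysis above: exponential distributions \(F_u\) and \(F_v\) (so that \(M(p)=\mathrm{e}^{-p}\)), demand \(w\) of unit mean (only \(\mathbb{E}[w]\) matters here since \(T\) is linear in \(w\)), \(G(\phi)=1-\phi\) and \(H(\phi)=1/\phi\). A damped fixed-point iteration on Equation (\ref{eq:congestion equilibrium}) is one simple way to locate the unique equilibrium guaranteed by Theorem \ref{the:uniqueness}; the damping factor and initial guess may require tuning.
\begin{verbatim}
import numpy as np

def loads(p, q, phi_h, phi_l):
    # Induced data loads D_h, D_l under the illustrative primitives,
    # assuming phi_h < phi_l so that the boundary value is well defined.
    v_bar = q * (1.0 - phi_h) / (phi_l - phi_h)   # boundary value v(q, phi)
    d_h = np.exp(-p) * (1.0 - phi_h) * np.exp(-v_bar)
    d_l = np.exp(-p) * (1.0 - phi_l) * (1.0 - np.exp(-v_bar))
    return d_h, d_l

def equilibrium(p, q, c, r, damping=0.5, tol=1e-10, max_iter=100_000):
    # Damped fixed-point iteration: map observed congestion to the implied
    # congestion d / (allocated capacity), since C^{-1}(c, phi) = c * phi.
    phi_h, phi_l = 0.3, 0.6
    for _ in range(max_iter):
        d_h, d_l = loads(p, q, phi_h, phi_l)
        new_h = np.clip(d_h / (r * c), 1e-9, 1.0)
        new_l = np.clip(d_l / ((1.0 - r) * c), 1e-9, 1.0)
        if abs(new_h - phi_h) + abs(new_l - phi_l) < tol:
            break
        phi_h += damping * (new_h - phi_h)
        phi_l += damping * (new_l - phi_l)
    return phi_h, phi_l

phi_h, phi_l = equilibrium(p=0.5, q=0.5, c=1.0, r=0.5)
print(f"phi_h = {phi_h:.4f}, phi_l = {phi_l:.4f}")  # paid peering less congested
\end{verbatim}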
The following three corollaries show how the equilibrium congestion and data loads change when the strategy \(\theta\) changes.
\begin{corollary}\label{cor:pc effect}
For any fixed price \(q\in (0,+\infty)\) and allocation decision \(r\in (0,1)\), the congestion levels \(\varphi_h,\varphi_l\) and data loads \(d_h,d_l\) all decrease with the user-side price \(p\).
\end{corollary}
Corollary \ref{cor:pc effect} states that the congestion levels and data loads of the peering types will decrease if the user-side price increases. Intuitively, when the IAP raises the user-side price, the population of active end-users declines, which in turn reduces the equilibrium congestion levels and the corresponding data loads of the peering types.
\begin{corollary}\label{cor:q effect}
For any fixed price \(p\in (0,+\infty)\) and allocation decision \(r\in (0,1)\), the congestion level \(\varphi_h\) (\(\varphi_l\)) and data load \(d_h\) (\(d_l\)) both decrease (increase) with the price \(q\) of paid peering.
\end{corollary}
Corollary \ref{cor:q effect} states that the congestion level and data load of the paid (settlement-free) peering type decrease (increase) if the price \(q\) of paid peering increases. This is because some CPs that would otherwise choose paid peering shift to settlement-free peering under a higher price \(q\). This result also implies that the IAP could use the price of paid peering as a means to adjust the service segments of low and high congestion, i.e., the sets of CPs that choose paid and settlement-free peering, respectively.
\begin{corollary}\label{cor:r effect}
For any fixed prices \(p,q\in (0,+\infty)\), the congestion level \(\varphi_l\) and data load \(d_h\) increase with the allocation decision \(r\) and the data load \(d_l\) decreases with the allocation decision \(r\). Besides, it satisfies 1) \(\varphi_l = \varphi_h\) if and only if \(r=0\) and 2) \(\varphi_l = 1\) if and only if \(r=1\).
\end{corollary}
Corollary \ref{cor:r effect} states that the congestion level \(\varphi_l\) of settlement-free peering type will increase if the capacity fraction \(r\) allocated to paid peering increases. In particular, when the IAP allocates all its capacity to the settlement-free peering type, i.e., \(r=0\), the congestion levels \(\varphi_l, \varphi_h\) of the two peering types are equal, under which no CPs will use paid peering and the price \(q\) does not play a role.
In such a case, we say the IAP's peering scheme is {\em pure free peering}, i.e., to only offer the settlement-free peering type.
At the other extreme, when the IAP allocates all its capacity to the paid peering type, i.e., \(r=1\), the congestion level \(\varphi_l\) of the settlement-free peering type is one, which can be interpreted as a termination of the settlement-free peering type.
In such a case, we say the IAP's peering scheme is {\em pure paid peering}, i.e., to only offer the paid peering type.
Otherwise, when the paid and settlement-free peering types are both allocated with positive capacity, i.e., \(r\in (0,1)\), they both include active CPs.
In such a case, we say the IAP's peering scheme is {\em hybrid peering}, i.e., to simultaneously offer the paid and settlement-free peering types.
\section{Related Work}
Some previous work \cite{reed2014current,lodhi2014open,lodhi2015complexities,ma2017pay,faratin2007complexity} has studied the peering agreements and their impacts on the market participants and evolution of the Internet.
Ma \cite{ma2017pay} adopted a choice model to analyze the peering strategies between IAPs and CPs and showed that whether CPs choose paid peering to connect with IAPs depends on their user stickiness and market shares.
Lodhi {\em et al.} \cite{lodhi2014open,lodhi2015complexities} built an agent-based model to evaluate the peering decisions of transit and tier-2 providers. In \cite{lodhi2015complexities}, they also explored sources of complexity in peering and the limitations tier-2 providers face in accurately forecasting the effects of peering decisions.
Reed {\em et al.} \cite{reed2014current} captured the evolution of peering and interconnection to understand the changing characteristics of network traffic.
Unlike these efforts, our work studies the IAP's capacity allocation between the paid and settlement-free peering types, based on which we characterize the IAP's optimal peering schemes that maximize its profit or user welfare.
In our work, as the IAP simultaneously offers the paid and settlement-free peering types, its pricing mechanism on the CP side is a type of Paris Metro Pricing (PMP) \cite{odlyzko1999paris}, which has been studied and used by some prior work \cite{chau2010viability,ma2013public}.
Chau {\em et al.} \cite{chau2010viability} provided sufficient conditions for the viability of the PMP based on a general model of congestion externality.
Ma and Misra \cite{ma2013public} used the PMP to capture the pricing schemes of non-neutral ISPs and claimed that introducing a public option ISP always benefits end-users.
Both works focused on one-sided pricing models, i.e., they only consider the charges of IAPs on end-users \cite{chau2010viability} or CPs \cite{ma2013public}. In the current Internet, however, many IAPs charge both end-users and CPs. To reflect this status quo, we build a more general two-sided pricing model \cite{Rochet2003}, under which IAPs use the PMP on the CP side and pure usage-based pricing on the user side.
\section{INTRODUCTION}
\noindent The high cross section for heavy flavour production at the Large Hadron Collider (LHC) offers a fantastic opportunity to study the known heavy flavour hadrons and to search for the many ``missing'' states predicted in the quark model. In particular, hadrons containing a $b$ quark are expected to be copiously produced at the LHC. Further to this, many heavy hadron states (bottom baryons in particular) are only accessible at hadron colliders, and with the end of data taking at the Tevatron, the LHC is fast becoming one of the best facilities to study heavy hadron spectroscopy. The LHC has already delivered its first two discoveries of new $b$ hadrons within the first three years of running. The ATLAS collaboration has observed a new state in radiative transitions to ${\Ups}(1S)$ and ${\Ups}(2S)$ and interprets this as the first observation of the $\chib(3P)$ states~\cite{ATLAS_chib}. The CMS collaboration observes a new $b$ baryon decaying to $\Xi_{b}^{-}\pi^{+}$ (plus charge conjugates), interpreted as a neutral $\Xi_{b}^{*}$ baryon~\cite{CMS_Xi}.
\section{OBSERVATION OF A NEW $\chib$ STATE AT ATLAS}
\noindent The $\chib$ states represent the $S=1$ (parallel quark spins) $P$-wave states of the bottomonium ($\bbbar$) system. The $\chib$ comprise a triplet of states, $\chibj0$, $\chibj1$, $\chibj2$, with quantum numbers $J^{PC}=0^{++}, 1^{++}, 2^{++}$. The three states are characterised by a small hyperfine mass splitting of ${\cal{O}}(10 \MeV)$. The branching fractions for the radiative decays $\chib \to {\Ups} \, \gamma$ are large, ${\cal{O}}(10 \%)$. The $\chib(1P)$ and $\chib(2P)$ triplets (with spin-weighted masses of around $9.90$ and $10.26$ GeV respectively) have been studied in detail at $\ee$ colliders, providing precise measurements of the hyperfine mass structure and radiative branching fractions \cite{chib_ee1, chib_ee2, chib_ee3}. Additionally, a third triplet of states, the $\chib(3P)$, is expected below the open beauty threshold at a mass around $10.525\GeV$~\cite{predictions1, predictions2, predictions3}.
Given the complex hadronic environment of the LHC, the radiative decays $\chib \to {\Ups} \, \gamma$ with ${\Ups}\to\mumu$ represent a very clean experimental channel to study the $\chib$ states, where the presence of two muons offers a clear signature to trigger upon. The aim of the ATLAS analysis is to reconstruct the radiative decays $\chib\to{\Ups}(1S)\gamma$ and $\chib\to{\Ups}(2S)\gamma$ with two independent analyses that reconstruct the photon with either a direct calorimetric measurement or through the reconstruction of $\ee$ conversions in the ATLAS tracker. The two photon reconstruction methods have their own advantages and disadvantages. In particular, converted photons offer better invariant mass resolution compared to photons reconstructed by the calorimeter but at the expense of a much lower reconstruction efficiency. The ATLAS collaboration has recently reported the observation of a new structure decaying to ${\Ups}(1S)\gamma$ and ${\Ups}(2S)\gamma$ consistent with the $\chib(3P)$ system~\cite{ATLAS_chib}. The following sections summarise the ATLAS analysis and results; a detailed description of the ATLAS detector can be found in \cite{ATLAS_Paper}.
\subsection{Data Sample and Selection of ${\Ups} \to \mumu$ decays}
\label{sec_ATLAS_Upsi}
\noindent The ATLAS analysis uses a data sample, recorded by the ATLAS experiment during the 2011 LHC proton-proton collision run at a centre of mass energy $\rts = 7 \TeV$, representing an integrated luminosity of $4.4\,\ifb$. The data sample was collected by a set of triggers designed to select events containing di-muon candidates or single high transverse momentum muons.
Muon candidates are reconstructed by combining stand-alone tracks from the ATLAS muon spectrometer with tracks reconstructed in the ATLAS Inner Detector (ID). Two oppositely charged muon candidates, both with transverse momentum $p_{T} > 4 \GeV$ and pseudorapidity $|\eta| < 2.3$, are fitted to a common vertex to form di-muon candidates. The di-muon vertex fit quality is required to satisfy $\chi^{2}/d.o.f < 20$. Finally, di-muon candidates are required to have transverse momentum $p_{T} > 12 \GeV$ and rapidity $|y|< 2.0$. The invariant mass distribution of di-muon candidates is shown in Figure~\ref{Upsilon}. ${\Ups}(1S)$ and ${\Ups}(2S)$ candidates are selected from di-muon candidates with an invariant mass within the range $9.25 < m({\mumu}) < 9.65 \GeV$ and $9.80 < m({\mumu}) < 10.10 \GeV$ respectively.
\begin{figure*}[t]
\centering
\includegraphics[width=135mm]{fig_01.pdf}
\caption{The invariant mass distribution of di-muon candidates selected as described in Section~\ref{sec_ATLAS_Upsi}. The shaded areas A and B represent the invariant mass selections for ${\Ups}(1S)$ and ${\Ups}(2S)$ candidates respectively~\cite{ATLAS_chib}.} \label{Upsilon}
\end{figure*}
\subsection{Converted Photon Selection}
\noindent Candidate photon conversions are reconstructed from two oppositely charged tracks reconstructed in the ATLAS inner detector (ID) that intersect at a common vertex. The two tracks are fitted to a common conversion vertex where the fit is required to converge with a $\chi^{2}$ probability greater than 0.01. Each electron track is required to have transverse momentum $p_{T} > 500 \MeV$ and pseudorapidity $|\eta| < 2.3$ and to be reconstructed from at least 4 hits in the silicon layers of the ID. Conversion candidates reconstructed from tracks associated with the di-muon candidate are rejected. To reduce background contamination from Dalitz decays and fake conversions, conversion vertices are required to be reconstructed with a radial distance from the beam axis of greater than $40\,\mathrm{mm}$. Converted photons not associated with the di-muon vertex are rejected by demanding that the impact parameter of the converted photon candidate with respect to the di-muon vertex be less than $2\,\mathrm{mm}$.
\subsection{Unconverted Photon Selection}
\noindent Unconverted photons are reconstructed from energy deposits in the ATLAS electromagnetic calorimeter that are not matched to any ID track. Unconverted photon candidates are required to have a transverse energy greater than $2.5\GeV$ and pseudorapidity $|\eta| < 2.37$. Photon candidates are also required to satisfy the ``loose'' photon identification selection described in \cite{ATLAS_photon} to reject backgrounds from $\pi^{0}\to\gamma\gamma$ decays and narrow jets. Unconverted photons reconstructed within the transition region ($1.37 < |\eta| < 1.52$) between the barrel and end cap calorimeters are not selected. To improve the momentum resolution of unconverted photons, the polar angle of the unconverted photon is corrected to point back to the di-muon vertex, exploiting the longitudinal segmentation of the ATLAS electromagnetic calorimeter. This procedure also allows photons that are not compatible with having originated from the di-muon vertex to be rejected through a loose cut on the fit quality of $\chi^{2}/d.o.f < 200$. This procedure is described in detail in \cite{ATLAS_pointing}.
\subsection{Selection of $\chib$ Candidates}
\noindent Reconstructed ${\Ups} \to \mumu$ candidates are associated with reconstructed photons to form $\chib$ candidates. To minimise the effects of the experimental di-muon mass resolution, the invariant mass difference $\Delta m = m(\mumu\gamma)-m(\mumu)$ is calculated. The $\Delta m$ distributions for ${\Ups}(1S)\gamma$ and ${\Ups}(2S)\gamma$ candidates can be shown on the same mass scale through the definition of the $\Delta m + m_{{\Ups}(kS)}$ distribution (the $m_{{\Ups}(k=1,2S)}$ represent the current world average masses of the ${\Ups}(1S)$ and ${\Ups}(2S)$ states)~\cite{PDG}. The $\Delta m + m_{{\Ups}(1,2S)}$ distributions for $\chib$ candidates reconstructed from unconverted and converted photons are shown in Figures \ref{chib_calo} and \ref{chib_conv} respectively. A final selection requirement on the transverse momentum of the di-muon system of $p_{T} > 20 \GeV$ is imposed for unconverted photon candidates to maximise the signal significance of the $\chib(1P)$ and $\chib(2P)$ peaks.
\begin{figure*}[t]
\centering
\includegraphics[width=135mm]{fig_02a.pdf}
\caption{The $\Delta m + m_{{\Ups}(1S)}$ distribution for $\chib\to{\Ups}(1S)\gamma$ candidates reconstructed from unconverted photons~\cite{ATLAS_chib}.} \label{chib_calo}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=135mm]{fig_02b.pdf}
\caption{The $\Delta m + m_{{\Ups}(1S)}$ (filled points) and $\Delta m + m_{{\Ups}(2S)}$ (open triangles) distributions for $\chib\to{\Ups}(1S,2S)\gamma$ candidates reconstructed from converted photons. The data points have not been corrected for energy losses due to bremsstrahlung~\cite{ATLAS_chib}.} \label{chib_conv}
\end{figure*}
The mass distributions for ${\Ups}(1S)\gamma$ candidates reconstructed from both unconverted and converted photons shown in Figures \ref{chib_calo} and \ref{chib_conv} exhibit clear peaks at approximately $9.9 \GeV$ and $10.2 \GeV$ consistent with $\chib(1P)\to{\Ups}(1S)\gamma$ and $\chib(2P)\to{\Ups}(1S)\gamma$ decays. In addition to these peaks, a third structure is also observed at a mass of approximately $10.5 \GeV$. This additional structure is also observed in the mass distribution for ${\Ups}(2S)\gamma$ candidates. The masses and decay modes of these additional structures are consistent with the expectations for the $\chib(3P)$ states decaying in the modes $\chib(3P)\to{\Ups}(1S)\gamma$ and $\chib(3P)\to{\Ups}(2S)\gamma$. The higher transverse momentum threshold for the reconstruction of unconverted photons prohibits the reconstruction of soft photons from $\chib(2P,3P)\to{\Ups}(2S)\gamma$ decays.
\subsection{Fit Description and Results}
\noindent Unbinned maximum likelihood fits are performed to the $\Delta m + m_{{\Ups}(1,2S)}$ distributions for both the converted and unconverted $\chib$ candidates to measure the mass of the new structure under its interpretation as the $\chib(3P)$ states; both fits are described in detail in~\cite{ATLAS_chib}.
The $\Delta m + m_{{\Ups}(1S)}$ distribution for unconverted photon candidates is described by three Gaussian probability density functions (PDFs), each with independent mean value, width and normalisation parameters. The background distribution is described by the smooth function, $\exp{\left(A\Delta m + B {\Delta m}^{-2}\right)}$ with two free parameters, $A$ and $B$. The mass barycenter of the $\chib(3P)$ signal is measured to be $\overline{m}_{3} = 10.541 \pm 0.011 \stat \pm 0.030 \syst \GeV$ from the fit to unconverted photon candidates alone~\cite{ATLAS_chib}. The systematic uncertainty on the unconverted photon mass measurement is dominated by the modelling of the background distribution and the uncertainty associated with the unconverted photon energy scale.
Both the $\Delta m + m_{{\Ups}(1S)}$ and $\Delta m + m_{{\Ups}(2S)}$ distributions for converted photon candidates are fitted together in a simultaneous fit. Each $\chib(nP)$ peak is described by a pair of Crystal Ball (CB) functions. This is motivated by the fact that the converted photon mass resolution is comparable to the hyperfine splitting between the $J=1,2$ states. The background distribution for both the $\Delta m + m_{{\Ups}(1S)}$ and $\Delta m + m_{{\Ups}(2S)}$ distributions is described by the function $(\Delta m - q_{0})^{\alpha}\cdot \exp\left \{ (\Delta m - q_{0})\cdot \beta \right \}$ where $q_{0}$, $\alpha$ and $\beta$ are all free parameters (an independent set of parameters for both the $\Delta m + m_{{\Ups}(1,2S)}$ distributions). The mass barycenter of the $\chib(3P)$ signal, determined from converted photon candidates alone, is measured to be $\overline{m}_{3} = 10.530 \pm 0.005\stat \pm 0.009\syst \GeV$~\cite{ATLAS_chib}. The systematic uncertainty associated with the mass measurement is dominated by the various assumptions made in the simultaneous fit.
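As a schematic illustration of such an unbinned fit -- deliberately reduced to a single Gaussian peak over a flat background on toy data, rather than the three-peak and double-Crystal-Ball models actually used in the analysis -- one could proceed as follows in Python:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy stand-in for the Delta m + m_Y(1S) distribution (GeV); the peak
# position, resolution and yields below are placeholders, not ATLAS values.
rng = np.random.default_rng(0)
lo, hi = 10.3, 10.8
data = np.concatenate([
    rng.normal(10.53, 0.02, 300),              # toy chi_b(3P)-like peak
    lo + (hi - lo) * rng.uniform(size=2000),   # toy flat background
])
data = data[(data > lo) & (data < hi)]

def nll(theta):
    frac, mu, sigma = theta
    # Gaussian signal normalised over the fit window, plus flat background.
    sig = norm.pdf(data, mu, sigma) / (norm.cdf(hi, mu, sigma)
                                       - norm.cdf(lo, mu, sigma))
    bkg = 1.0 / (hi - lo)
    return -np.log(frac * sig + (1.0 - frac) * bkg).sum()

res = minimize(nll, x0=[0.1, 10.5, 0.03],
               bounds=[(1e-3, 0.999), (lo, hi), (1e-3, 0.2)])
print("fitted peak position:", res.x[1])        # close to 10.53 GeV
\end{verbatim}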
The mass measurements using both converted and unconverted photons are compatible with each other and with the theoretical expectations for the $\chib(3P)$ system. The mass measurement from the converted photon analysis with the lower statistical and systematic uncertainty is chosen to represent the final measurement of the mass barycenter of the $\chib(3P)$ system.
The significance of the $\chib(3P)$ signal is assessed from the logarithmic likelihood ratio $\log{\left(L_{max}/L_{0}\right)}$, where $L_{max}$ and $L_{0}$ represent likelihood values calculated from fits with and without a $\chib(3P)$ signal included respectively. The significance is re-assessed for each set of systematic variations and is consistently found to be in excess of $6$ standard deviations for both the unconverted and converted photon analyses independently.
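Concretely, the conversion of such a likelihood ratio into a Gaussian significance can be sketched as below; the log-likelihood values are placeholders, and the \(\sqrt{2\,\Delta\ln L}\) conversion assumes the asymptotic (Wilks) regime with one effective degree of freedom.
\begin{verbatim}
import numpy as np

# ln_L_max and ln_L_0 would come from fits with and without the signal.
ln_L_max = -1234.5   # hypothetical fit including the chi_b(3P) signal
ln_L_0   = -1258.9   # hypothetical background-only fit
Z = np.sqrt(2.0 * (ln_L_max - ln_L_0))
print(f"significance ~ {Z:.1f} sigma")
\end{verbatim}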
The observation of a new structure in the ${\Ups}(1S)\gamma$ spectrum was recently confirmed by the $D\O$ collaboration~\cite{D0}. $D\O$ measure the mass of the structure to be $\overline{m}_{3} = 10.551 \pm 0.014\stat \pm 0.017\syst \GeV$, consistent with the ATLAS measurement.
\subsection{Conclusion}
\noindent The ATLAS collaboration has observed the known $\chib(1P)$ and $\chib(2P)$ states in radiative transitions to ${\Ups}(1S)\gamma$ in proton-proton collisions at the LHC. In addition to this, ATLAS observe a new structure in radiative decays to ${\Ups}(1S)\gamma$ and ${\Ups}(2S)\gamma$ consistent with theoretical expectations for the $\chib(3P)$ system. The mass barycenter of this structure, under the interpretation as the $\chib(3P)$ system, is measured to be $\overline{m}_{3} = 10.530 \pm 0.005\stat \pm 0.009\syst \GeV$~\cite{ATLAS_chib}. Further measurements by ATLAS and the other LHC experiments will hopefully shed more light on this new state in the near future.
\section{OBSERVATION OF A NEW $\Xi_{b}$ BARYON AT CMS}
\noindent Until recently, the $\Xi_{b}$ states represented the only experimentally observed baryons to contain one strange and one bottom valence quark within the quark model of baryons. The first direct observation of the $\Xi_{b}$, of which there exist both neutral $usb$ and negatively charged $dsb$ varieties, came from the Tevatron experiments~\cite{Tev1,Tev2,Tev3}, although indirect evidence for the $\Xi_{b}^{-}$ was seen at LEP~\cite{LEP1,LEP2}. In addition to the $J^{P} = 1/2^{+}$ $\Xi_{b}$ ground states, the quark model predicts the $J^{P} = 1/2^{+}$ $\Xi_{b}^{\prime}$, $J^{P} = 3/2^{+}$ $\Xi_{b}^{*}$ and two further states with $L=1$ orbital angular momentum between the $b$ quark and the light di-quark system~\cite{XiTheory1,XiTheory2,XiTheory3,XiTheory4,XiTheory5}. The strong decay $\Xi_{b}^{\prime}\to \Xi_{b}\pi$ is expected to be kinematically forbidden due to an expected $\Xi_{b}^{\prime}-\Xi_{b}$ mass difference below the pion mass. However, the $\Xi_{b}^{*}\to \Xi_{b}\pi$ decay is expected to be kinematically allowed, in analogy with the better studied $\Xi_{c}$ baryon system.
The CMS analysis represents a search for $\Xi_{b}^{*0}$ baryons in $\Xi_{b}^{-}\pi^{+}$ (plus charge conjugate) decays with $\Xi_{b}^{-}\to\Jpsi\,\Xi^{-}$, $\Jmumu$, $\Xi^{-}\to \Lambda^{0}\,\pi^{-}$ and $\Lambda^{0}\to p\pi^{-}$ (plus charge conjugates). For brevity, the reconstruction of the charge conjugate decay modes will be implied throughout this summary. The following sections summarise the CMS analysis and results presented in \cite{CMS_Xi}. A detailed description of the CMS detector can be found in~\cite{CMS_Paper}.
\subsection{Data Sample and Event Selection}
\noindent The CMS analysis is based on a sample of $pp$ collision data representing an integrated luminosity of $5.3\,\ifb$ collected during the 2011 LHC run at a centre of mass energy $\rts = 7\TeV$. The data used by the CMS analysis are collected by specialised triggers designed to record events containing two oppositely charged muons that are compatible with having been produced in the decay of a $\Jpsi$. Separate triggers are used to select events containing $\Jmumu$ candidates that are promptly produced and those that are displaced from the primary vertex.
\subsection{Selection of $\Xi_{b}^{-}$ candidates}
\noindent The foundation of the analysis is the reconstruction of a pure sample of $\Xi_{b}^{-}\to\Jpsi\,\Xi^{-}$ decays (with $\Xi^{-}\to \Lambda^{0}\,\pi^{-}$ and $\Lambda^{0}\to p\pi^{-}$). Candidate $\Jpsi$ are formed from pairs of oppositely charged muons reconstructed from tracks in the silicon tracker, matched to independent tracks in the muon detectors. The two muons are required to pass the trigger selection and the di-muon candidate must have an invariant mass within $150\MeV$ of the world average $\Jpsi$ mass~\cite{PDG}.
The reconstruction of $\Xi^{-}$ decays begins with the identification of candidate $\Lambda^{0}\to p\pi$ decays from pairs of oppositely charged tracks. The track with the higher momentum is taken to be the proton. The two tracks are required to have been reconstructed from at least six hits in the silicon tracker and to have a track fit $\chi^{2}/d.o.f < 5$. The two tracks are fitted to a common decay vertex and the fit result is required to satisfy $\chi^{2}/d.o.f < 7$. The decay vertex is required to be displaced from the beam line by more than ten times the uncertainty on the displacement. To remove possible contamination from $K^{0}_{s}$ decays, the candidate is rejected if its invariant mass (with both tracks assigned the pion mass) lies within $20\MeV$ of the $K^{0}_{s}$ mass. $\Xi^{-}$ candidates are then reconstructed through the combination of a candidate $\Lambda^{0}$ with a track (denoted $\pi_{\Xi}$) with the same charge as the pion from the $\Lambda^{0}\to p\pi$ candidate (denoted $\pi_{\Lambda}$). The three tracks are then subjected to a kinematic vertex fit where the invariant mass of the two tracks from the $\Lambda^{0}$ candidate is constrained to the world average $\Lambda^{0}$ mass. To reject backgrounds from mis-reconstructed $\Omega^{-}\to\Lambda^{0}\,K^{-}$ decays, the $\Xi^{-}$ candidate is rejected if its invariant mass, with the $\pi_{\Xi}$ track given the charged kaon mass, lies within $20\MeV$ of the $\Omega^{-}$ mass.
Finally, $\Xi_{b}^{-}$ candidates are formed by combining candidate $\Xi^{-}$ and $\Jpsi$ decays in a kinematic vertex fit where the masses of the $\Xi^{-}$and $\Jpsi$ candidates are constrained to the world average values.
To select a high-yield, high-purity sample of $\Xi_{b}^{-}$ candidates, the $\Xi_{b}^{-}$ selection (characterised by the values of thirty selection variables) is optimised by an iterative algorithm that maximises both the signal yield and significance. The thirty optimised variables include the transverse momentum thresholds of the $\Jpsi$, $p$, $\pi_{\Lambda}$, $\pi_{\Xi}$, $\Xi^{-}$, $\Xi_{b}^{-}$ and muon candidates. The values of the latter two thresholds ($\Xi_{b}^{-}$ and both muons) can take different values depending on whether the candidates were reconstructed in the barrel or endcap regions of the detector. The requirements on the difference between the reconstructed invariant masses and the world average values for the $\Lambda^{0}$, $\Xi^{-}$ and $\Jpsi$ candidates are also optimised. Other important optimised variables include the impact parameter significance of the $p$, $\pi_{\Lambda}$ and $\pi_{\Xi}$ tracks and the transverse decay length (and its significance) of the $\Lambda^{0}$, $\Xi^{-}$ and $\Xi_{b}^{-}$ decay vertices. The details of the optimisation procedure are described in \cite{CMS_Xi}.
Figure~\ref{XiB} shows the invariant mass of selected $\Xi_{b}^{-}$ candidates. The $m(\Jpsi\,\Xi^{-})$ distribution exhibits a clear peak at approximately $5.8\GeV$, consistent with a $\Xi_{b}^{-}$ signal. The $m(\Jpsi\,\Xi^{-})$ distribution is fit with a Gaussian PDF to describe the signal and a second order polynomial to describe the background. The fit results in a yield of $108\pm14$ $\Xi_{b}^{-}$ candidates and a fitted $\Xi_{b}^{-}$ mass of $5795.0\pm3.1\stat \MeV$, a value which agrees well with the current world average~\cite{PDG}.
\begin{figure*}[t]
\centering
\includegraphics[width=135mm]{XiB_best6.pdf}
\caption{The $\Jpsi\,\Xi^{-}$ invariant mass distribution for $\Xi_{b}^{-}$ candidates (filled points). The invariant mass distribution of $\Jpsi\,\Xi^{-}$ pairs where the proton and $\pi_{\Xi}$ have the same charge is also shown (open squares)~\cite{CMS_Xi}. } \label{XiB}
\end{figure*}
\subsection{Search for $\Xi_{b}^{*0}$ baryons}
\noindent $\Xi_{b}^{-}$ candidates with a mass within $2.5\sigma$ of the fitted $\Xi_{b}^{-}$ mass are associated with a track, given the pion mass, with a charge opposite to that of the $\pi_{\Xi}$. The tracks are required to be compatible with having been produced at the primary vertex and to have a transverse momentum greater than $0.25\GeV$. The events in the CMS data sample contain on average eight primary vertices. The primary vertex reconstructed closest to the $\Xi_{b}^{-}$ line of flight is assumed to be associated with the production of the $\Xi_{b}^{-}$.
A potential $\Xi_{b}^{*0}$ signal is expected to appear as a peak in the $Q = m( \Jpsi\,\Xi^{-}\pi^{+}) - m(\Jpsi\,\Xi^{-}) - m(\pi)$ distribution. In order to search for such peaks, the background contribution to the $Q$ distribution must first be reliably estimated. This is done through the preparation of a background sample of $\Xi_{b}^{-}$ candidates associated with prompt pion tracks of the same charge as the $\Xi_{b}^{-}$. The $Q$ distribution for this background sample is shown in Figure~\ref{XiB_SameSign}. The momentum distributions of the $\Xi_{b}^{-}$ and pions ($p(\Xi_{b})$ and $p(\pi)$) and the distribution of the angle between them ($\alpha$) from the (same sign) background sample are used to randomly generate an uncorrelated set of values for $p(\Xi_{b})$, $p(\pi)$ and $\alpha$. The uncorrelated set of values is used to calculate a value for $Q$; this is repeated $10^8$ times to give a $Q$ distribution which is expected to predict the shape of the combinatorial background. This distribution is then fitted with the function $Q^{c_{1}}\left( e^{-c_{2}Q} + e^{-c_{3}Q} + e^{-c_{4}Q} \right)$ where the $c_{i}$ are all free parameters. The fit result is shown by the red dashed line in Figure~\ref{XiB_SameSign}.
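A minimal Python sketch of this decorrelated resampling is given below; the three input spectra are toy placeholder shapes standing in for the measured same-sign distributions, and only the two-body kinematics is taken from first principles (masses in GeV).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M_XIB, M_PI = 5.7950, 0.13957      # Xi_b^- fitted mass, charged pion mass

n = 10**6                          # 10**8 in the analysis; reduced here
p_xib = rng.gamma(shape=4.0, scale=5.0, size=n)        # toy p(Xi_b) [GeV]
p_pi  = 0.25 + rng.exponential(scale=0.5, size=n)      # toy p(pi) > 0.25 GeV
alpha = rng.uniform(0.0, 0.2, size=n)                  # toy opening angle

# Drawing the three quantities independently is the decorrelation step;
# Q follows from the two-body invariant mass of the (Xi_b, pi) pair.
e_xib = np.sqrt(M_XIB**2 + p_xib**2)
e_pi  = np.sqrt(M_PI**2 + p_pi**2)
m2 = M_XIB**2 + M_PI**2 + 2.0 * (e_xib * e_pi
                                 - p_xib * p_pi * np.cos(alpha))
Q = (np.sqrt(m2) - M_XIB - M_PI) * 1e3                 # MeV

q25, q50, q75 = np.percentile(Q, [25, 50, 75])
print(f"toy background Q quartiles: {q25:.1f}, {q50:.1f}, {q75:.1f} MeV")
\end{verbatim}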
\begin{figure*}[t]
\centering
\includegraphics[width=135mm]{xib0_WL.pdf}
\caption{The $Q$ distribution for $\Xi_{b}^{-}$ candidates associated with prompt pions of the same charge~\cite{CMS_Xi}.} \label{XiB_SameSign}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=135mm]{xib0_Zoom.pdf}
\caption{The $Q$ distribution for $\Xi_{b}^{-}\pi^{+}$ candidates where the $\Xi_{b}^{-}$ and prompt pion candidates have opposite charges~\cite{CMS_Xi}.} \label{XiBStar}
\end{figure*}
The $Q$ value distribution for opposite sign $\Xi_{b}^{-}\pi^{+}$ combinations is shown in Figure~\ref{XiBStar} and exhibits a clear excess above the background expectation in the region $12 < Q < 18 \MeV$. An unbinned maximum likelihood fit is performed to this distribution, where the excess is modelled by a Breit-Wigner distribution convolved with a Gaussian resolution function (with a width fixed to $1.91\pm0.11\MeV$, a value derived from Monte Carlo (MC) simulation). The background model discussed earlier is used to describe the combinatorial background, where the $c_{i}$ parameters are free to vary within their total uncertainties. The fitted mean value of the signal is $14.84\pm0.74\stat\MeV$ with a Breit-Wigner width of $2.1\pm1.7\stat\MeV$~\cite{CMS_Xi}.
The significance of the signal is assessed through the calculation of $\sqrt{\ln{\left(L_{s+b}/L_{b}\right)}}$ where $L_{s+b}$ is the likelihood value of the nominal (signal and background) fit and $L_{b}$ is the likelihood value for a fit performed with a background-only model. The statistical significance calculated in this way is $6.9$ standard deviations. The significance is also evaluated through a number of other approaches (including a treatment of the ``look elsewhere effect''), all of which result in a significance in excess of $5$ standard deviations.
Various cross checks are performed using MC simulation samples of $B^{+}$, $B^{0}$, $B_{s}$ and $\Lambda_{b}$ decays. The analysis procedure performed on these samples does not exhibit any excess in the $Q$ distribution that might arise from the mis-reconstruction of known $b$ hadrons.
The systematic uncertainty on the measured $Q$ value of the signal receives contributions from the observation of a small upwards shift in the $Q$ distribution in MC simulation and uncertainties associated with the background model. Together these effects constitute a systematic uncertainty of $0.28\MeV$.
With the charged pion and $\Xi_{b}^{-}$ masses taken from the PDG the mass of the new baryon is measured to be $5945.0 \pm 0.7\stat \pm 0.3\syst \pm 2.7~(\mathrm{PDG}) \MeV$~\cite{CMS_Xi}.
\subsection{Conclusion}
\noindent The CMS collaboration has observed a new $\Xi_{b}$ baryon, decaying to $\Xi_{b}^{-}\pi^{+}$ (plus charge conjugate) with a statistical significance in excess of $5$ standard deviations. The mass of the new baryon is measured to be $5945.0 \pm 0.7\stat \pm 0.3\syst \pm 2.7~(\mathrm{PDG}) \MeV$~\cite{CMS_Xi}. The measured mass and decay mode are consistent with theoretical expectations for the $\Xi_{b}^{*0}$ state, predicted to have quantum numbers $J^{P} = 3/2^{+}$.
\clearpage
\begin{acknowledgments}
\noindent The author is grateful for the financial support provided by the Science and Technology Facilities Council (STFC) in the UK.
\end{acknowledgments}
\section{Introduction}
Sampling is a cornerstone of probabilistic modelling, in particular in the Bayesian framework where statistical inference is rephrased as the estimation of the posterior distribution given the data~\citep{robert2007bayesian,murphy2012machine}: the representation of this distribution through samples is both flexible, as most interesting quantities can be computed from them (e.g., various moments or quantiles), and practical, as there are many sampling algorithms available depending on the various structural assumptions made on the model. Beyond one-dimensional distributions, a large class of these algorithms are iterative and update samples with a Markov chain which eventually converges to the desired distribution, such as Gibbs sampling or Metropolis-Hastings (or more general Markov chain Monte-Carlo algorithms~\citep{gamerman2006markov,gilks1995markov,durmus2017nonasymptotic}) which are adapted to most situations, or Langevin's~algorithm~\citep{durmus2017nonasymptotic,raginsky2017non,Welling2011BayesianLV,Mandt:2017:SGD:3122009.3208015,lelievre_stoltz_2016,bakry2014}, which is adapted to sampling from densities in $\mathbb{R}^d$.
While these sampling algorithms are provably converging in general settings when the number of iterations tends to infinity, obtaining good explicit convergence rates has been a central focus of study, and is often related to the mixing time of the underlying Markov chain~\citep{meyn2012markov}. In particular, for sampling from positive densities in $\mathbb{R}^d$, the Markov chain used in Langevin's algorithm can classically be related to a diffusion process, thus allowing links with other communities such as molecular dynamics~\citep{lelievre_stoltz_2016}. The main objective of molecular dynamics is to infer macroscopic properties of matter from atomistic models via averages with respect to probability measures dictated by the principles of statistical physics. Hence, it relies on high dimensional and highly multimodal probabilistic models.
When the density is log-concave, sampling can be done in polynomial time with respect to the dimension~\citep{ma2018sampling,durmus2017,durmus2017nonasymptotic}. However, in general, sampling with generic algorithms does not scale well with respect to the dimension. Furthermore, the multimodality of the target measure can trap the iterates of the algorithm in some regions for long durations: this phenomenon is known as metastability. To accelerate the sampling procedure, a common technique in molecular dynamics is to resort to importance sampling strategies where the target probability measure is biased using the image law of the process for some low-dimensional function, known as a ``reaction coordinate'' or ``collective variable''. Biasing by this low-dimensional probability measure can improve the convergence rate of the algorithms by several orders of magnitude \citep{Lelievre_2008,Lelievre2013}. Usually, in molecular dynamics, the choice of a good reaction coordinate is based on physical intuition about the model, but this approach has limitations, particularly in the Bayesian context \citep{Chopin2012}. There have been efforts to numerically find these reaction coordinates \citep{gkeka2019ml}. Computing spectral gaps by directly approximating the diffusion operator works well in low-dimensional settings but scales poorly with the dimension. One popular method is based on diffusion maps \citep{COIFMAN20065,Boaz2006,Rohrdanz2011}, for which reaction coordinates are built by approximating the entire infinite-dimensional diffusion operator and selecting its first eigenvectors.
In order to assess or find a reaction coordinate, it is necessary to understand the convergence rate of diffusion processes. We first introduce in Section \ref{sec:PI} Poincaré inequalities and Poincaré constants that control the convergence rate of diffusions to their equilibrium. We then derive in Section \ref{sec:Estimation} a kernel method to estimate the Poincaré constant, and optimize over it in Section \ref{sec:learning_RC} to find good low-dimensional representations of the data for sampling. Finally, we present in Section \ref{sec:experiments} synthetic examples for which our procedure is able to find good reaction coordinates.
\paragraph{Contributions.} In this paper, we make the following contributions:
\begin{itemize}
\item We show both theoretically and experimentally that, given sufficiently many samples of a measure, we can estimate its Poincaré constant and thus quantify the rate of convergence of Langevin dynamics.
\item By finding projections whose marginal laws have the largest Poincaré constant, we derive an algorithm that captures a low dimensional representation of the data. This knowledge of ``difficult to sample directions'' can be then used to accelerate dynamics to their equilibrium measure.
\end{itemize}
\section{Poincaré Inequalities }
\label{sec:PI}
\subsection{Definition}
We introduce in this part the main object of this paper, which is the Poincaré inequality \citep{bakry2014}. Let us consider a probability measure $d\mu$ on $\mathbb{R}^d$ which has a density with respect to the Lebesgue measure. Consider $H^1(\mu)$, the space of functions in $L^2(\mu)$ (i.e., which are square integrable) that also have all their first-order derivatives in $L^2(\mu)$, that is, $H^1(\mu) = \{f \in L^2(\mu),\ \int_{\mathbb{R}^d}f^2 d\mu+\int_{\mathbb{R}^d}\|\nabla f\|^2 d\mu < \infty \}$.
\begin{defi} [Poincaré inequality and Poincaré constant]
\label{defi:PI}
The Poincaré constant of the probability measure $d\mu$ is the smallest constant $\mathcal{P}_{\mu}$ such that for all $f \in H^1(\mu) $ the following Poincaré inequality \textbf{(PI)} holds:
\begin{align}
\label{eq:Poicare_constant}
\int_{\mathbb{R}^d}f(x)^2 d\mu(x) - &\left(\int_{\mathbb{R}^d}f(x) d\mu(x)\right)^2 \leqslant \mathcal{P}_{\mu} \int_{\mathbb{R}^d}\|\nabla f(x)\|^2 d\mu(x).
\end{align}
\end{defi}
In Definition \ref{defi:PI} we took the largest possible and the most natural functional space $ H^1(\mu)$ for which all terms make sense, but Poincaré inequalities can be equivalently defined for subspaces of test functions $\mathcal{H}$ which are dense in $ H^1(\mu)$. This will be the case when we derive the estimator of the Poincaré constant in Section \ref{sec:Estimation}.
\begin{rmk}[A probabilistic formulation of the Poincaré inequality.]
Let $X$ be a random variable distributed according to the probability measure $d\mu$. \textbf{(PI)} can be reformulated as: for~all~$f \in H^1(\mu)$,
\begin{align}
\mathrm{Var}_\mu\,(f(X)) \leqslant \mathcal{P}_{\mu}\,\mathbb{E}_\mu \left[\,\|\nabla f(X)\|^2\right].
\end{align}
Poincaré inequalities are hence a way to bound the variance from above by the so-called \textit{Dirichlet energy} $ \mathbb{E} \left[\,\|\nabla f(X)\|^2\right]$ (see \emph{\citep{bakry2014}}).
\end{rmk}
\subsection{Consequences of \textbf{(PI)}: convergence rate of diffusions}
Poincaré inequalities are ubiquitous in various domains such as probability, statistics or partial differential equations (PDEs). For example, in PDEs they play a crucial role for showing the existence of solutions of Poisson equations or Sobolev embeddings \citep{Gilbarg2001}, and they lead in statistics to concentration of measure results \citep{gozlan2010}. In this paper, the property that we are the most interested in is the convergence rate of diffusions to their stationary measure $d\mu$. In this section, we consider a very general class of measures: $d\mu (x) = \mathrm{e}^{-V(x)}dx$ (called Gibbs measures with potential $V$), which allows for a clearer explanation. Note that all measures admitting a positive density can be written like this and are typical in Bayesian machine learning \citep{robert2007bayesian} or molecular dynamics \citep{lelievre_stoltz_2016}. Yet, the formalism of this section can be extended to more general cases \citep{bakry2014}.
Let us consider the overdamped Langevin diffusion in $\mathbb{R}^d$, that is the solution of the following stochastic differential equation (SDE):
\begin{align}
\label{eq:langevin}
\mathrm{d}X_t = -\nabla V (X_t) \mathrm{d}t + \sqrt{2}\,\mathrm{d} B_t,
\end{align}
where $(B_t)_{t\geqslant0}$ is a $d$-dimensional Brownian motion. It is well-known \citep{bakry2014} that the law of $(X_t)_{t\geqslant0}$ converges to the Gibbs measure $d\mu$ and that the Poincaré constant controls the rate of convergence to equilibrium in $L^2(\mu)$. Let us denote by $P_t (f) $ the Markovian semi-group associated with the Langevin diffusion $(X_t)_{t\geqslant0}$. It is defined in the following way: $P_t (f) (x) = \mathbb{E}[f(X_t)| X_0 = x]$. This semi-group satisfies the dynamics
$$ \frac{d } {dt} P_t (f)= \mathcal{L} P_t (f), $$
where $\mathcal{L} \phi = \Delta^L \phi - \nabla V \cdot \nabla \phi$ is a differential operator called the infinitesimal generator of the Langevin diffusion \eqref{eq:langevin} ($\Delta^L$ denotes the standard Laplacian on $\mathbb{R}^d$). Note that by integration by parts, the semi-group $(P_t)_{t \geqslant 0}$ is reversible with respect to $d\mu$, that is: $-\int f(\mathcal{L}g)\,d\mu = \int \nabla f \cdot \nabla g\, d\mu = -\int (\mathcal{L}f)g\,d\mu$. Let us now state a standard convergence theorem (see e.g.~\citep[Theorem 2.4.5]{bakry2014} ), which proves that $\mathcal{P}_\mu$ is the characteristic time of the exponential convergence of the diffusion to equilibrium in~$L^2(\mu)$.
\begin{thm}[Poincaré and convergence to equilibrium]
\label{thm:Poinca_diffusions}
With the notation above, the following statements are equivalent:
\begin{enumerate}[label=$(\roman{*})$]
\item $\mu$ satisfies a Poincaré inequality with constant $\mathcal{P}_\mu$;
\item For all $f$ smooth and compactly supported, $\mathrm{Var}_\mu (P_t (f)) \leqslant \mathrm{e}^{-2t / \mathcal{P}_\mu} \mathrm{Var}_\mu (f)$ for all $t \geqslant 0$.
\end{enumerate}
\end{thm}
\begin{proof}
The proof is standard. Note that upon replacing $f$ by $f-\int\!f d\mu$, one can assume that $\int\!f d\mu = 0$. Then, for all $t \geqslant 0$,
\begin{align*}
\label{eq:variance_poinca}
\frac{d}{dt}\mathrm{Var}_\mu (P_t (f)) &= \frac{d}{dt}\int(P_t (f))^2 d\mu = 2 \int P_t (f) (\mathcal{L} P_t (f)) d\mu = -2 \int \|\nabla P_t (f)\|^2 d\mu \tag{$\ast$}
\end{align*}
Let us assume $(i)$. With equation \eqref{eq:variance_poinca}, we have $$ \frac{d}{dt}\mathrm{Var}_\mu (P_t (f)) = -2 \int \|\nabla P_t (f)\|^2 d\mu \leqslant -2\, \mathcal{P}_\mu ^{-1}\int(P_t (f))^2 d\mu = -2\, \mathcal{P}_\mu ^{-1} \mathrm{Var}_\mu (P_t (f)). $$ The proof is then completed by using Grönwall's inequality.
Let us assume $(ii)$. We write, for $t > 0$,
\begin{align*}
&-t^{-1}(\mathrm{Var}_\mu (P_t (f)) - \mathrm{Var}_\mu (f)) \geqslant -t^{-1}(\mathrm{e}^{-2t / \mathcal{P}_\mu} - 1)\mathrm{Var}_\mu (f).
\end{align*}
By letting $t$ go to $0$ and using equation \eqref{eq:variance_poinca},
\begin{align*}
2 \mathcal{P}_\mu^{-1} \mathrm{Var}_\mu (f) &\leqslant \frac{d}{dt}\mathrm{Var}_\mu (P_t (f))_{t=0} = 2 \int \|\nabla f \|^2 d\mu,
\end{align*}
which shows the converse implication.
\end{proof}
\begin{rmk}
Let $f$ be a centered eigenvector of $-\mathcal{L}$ with eigenvalue $\lambda \neq 0$. By the Poincaré inequality,
\begin{align*}
\int f^2d\mu \leqslant \mathcal{P}_\mu \int \|\nabla f\|^2 d\mu &= \mathcal{P}_\mu \int f(-\mathcal{L}f) d\mu = \mathcal{P}_\mu \lambda \int f^2 d\mu,
\end{align*}
from which we deduce that every non-zero eigenvalue of $-\mathcal{L}$ is larger than $1/\mathcal{P}_\mu$. The best Poincaré constant is thus the inverse of the smallest non-zero eigenvalue of $-\mathcal{L}$. The finiteness of the Poincaré constant is therefore equivalent to a \emph{spectral gap} property of $-\mathcal{L}$. Similarly, a discrete space Markov chain with transition matrix $P$ converges at a rate determined by the spectral gap of $I-P$.
\end{rmk}
There have been efforts in the past to estimate spectral gaps of Markov chains \citep{hsu2015mixing,levin2016estimating,qin2019estimating,wolfer2019estimating,combes2019computationally} but these have been done with samples from trajectories of the dynamics. The main difference here is that the estimation will only rely on samples from the stationary measure.
\paragraph{Poincaré constant and sampling.} In high dimensional settings in Bayesian machine learning~\citep{robert2007bayesian} or molecular dynamics~\citep{lelievre_stoltz_2016} (where~$d$ can be large -- from~$100$ to~$10^7$), one of the standard techniques to sample $d\mu(x) = \mathrm{e}^{-V(x)}dx$ is to build a Markov chain by discretizing in time the overdamped Langevin diffusion \eqref{eq:langevin} whose law converges to $d\mu$. According to Theorem~\ref{thm:Poinca_diffusions}, the typical time to wait to reach equilibrium is~$\mathcal{P}_\mu$. Hence, the larger the Poincaré constant of a probability measure $d\mu$ is, the more difficult the sampling of $d\mu$ is. Note also that~$V$ need not be convex for the Markov chain to converge.
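A minimal sketch of such a discretization -- the Euler--Maruyama scheme, stated here without the Metropolis correction that is often added in practice to remove the discretization bias -- reads as follows in Python, illustrated on the standard Gaussian target \(V(x) = \|x\|^2/2\):
\begin{verbatim}
import numpy as np

def langevin_chain(grad_V, x0, n_steps, h, rng):
    # Euler--Maruyama discretization of dX_t = -grad V(X_t) dt + sqrt(2) dB_t;
    # for small h the chain approximately targets the Gibbs measure exp(-V).
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps, x.size))
    for k in range(n_steps):
        x = x - h * grad_V(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.size)
        traj[k] = x
    return traj

rng = np.random.default_rng(0)
traj = langevin_chain(lambda x: x, x0=np.zeros(2),
                      n_steps=50_000, h=1e-2, rng=rng)
print("empirical variance:", traj[10_000:].var(axis=0))  # close to (1, 1)
\end{verbatim}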
\subsection{Examples}
\label{subsec:examples}
\paragraph{Gaussian distribution.} For the Gaussian measure on $\mathbb{R}^d$ of mean $0$ and variance $1$: $d\mu (x) = \frac{1}{(2\pi)^{d/2}} \mathrm{e}^{-\|x\|^2/2} dx$, it holds for all $f$ smooth and compactly supported,
\begin{align*}
\mathrm{Var}_{\mu}(f) \leqslant \int_{\mathbb{R}^d} \|\nabla f\|^2 d\mu,
\end{align*}
and one can show that $\mathcal{P}_\mu = 1$ is the optimal Poincaré constant (see \citep{chernoff1981}). More generally, for a Gaussian measure with covariance matrix $\Sigma$, the Poincaré constant is the spectral radius of $\Sigma$.
Other examples of analytically known Poincaré constants are $1/d$ for the uniform measure on the unit sphere in dimension $d$ \citep{Ledoux2014} and $4$ for the exponential measure on the real line \citep{bakry2014}. There also exist various criteria ensuring that a \textbf{(PI)} holds. We will not give an exhaustive list, as our aim is rather to emphasize the link between sampling and optimization. Let us however finish this part with some particularly important results.
\paragraph{A measure of non-convexity.} Let $d\mu(x) = \mathrm{e}^{-V(x)}dx$. It has been shown in the past decades that the ``more convex'' $V$ is, the smaller the Poincaré constant is. Indeed, if $V$ is $\rho$-strongly convex, then the Bakry-Emery criterion \citep{bakry2014} tells us that $\mathcal{P}_\mu \leqslant 1/\rho$. If $V$ is only convex, it has been shown that $d\mu$ also satisfies a \textbf{(PI)} (with a possibly very large Poincaré constant) \citep{Kannan1995,Bobkov1995}. Finally, the case where $V$ is non-convex has been explored in detail in a one-dimensional setting, and it is shown that for potentials $V$ with an energy barrier of height $h$ between two wells, the Poincaré constant explodes exponentially with respect to the height $h$ \citep{menz2014}. In that spirit, the Poincaré constant of $d\mu(x) = \mathrm{e}^{-V(x)}dx$ provides a quantitative measure of how multimodal the distribution $d\mu$ is, and hence of how non-convex the potential $V$ is \citep{jain2017non,raginsky2017non}.
\section{Statistical Estimation of the Poincaré Constant}
\label{sec:Estimation}
The aim of this section is to provide an estimator of the Poincaré constant of a measure $\mu$ when we only have access to $n$ samples of it, and to study its convergence properties. More precisely, given $n$ independent and identically distributed (i.i.d.) samples $(x_1,\hdots,x_n)$ of the probability measure $d\mu$, our goal is to estimate $\mathcal{P}_{\mu}$. We will denote this estimator (function of $(x_1,\hdots,x_n)$) by the standard notation $\widehat{\mathcal{P}}_\mu$.
\subsection{Reformulation of the problem in a reproducing kernel Hilbert Space}
\paragraph{Definition and first properties.} Let us suppose here that the space of test functions of the \textbf{(PI)}, $\mathcal{H}$, is a reproducing kernel Hilbert space (RKHS) associated with a kernel $K$ on $\mathbb{R}^d$ \citep{smola-book,Cristianini2004}. This has two important consequences:
\begin{enumerate}
\item $\mathcal{H}$ is the linear function space $\mathcal{H} = \mathrm{span}\{ K(\cdot,x), \ x \in \mathbb{R}^d \}$, and in particular, for all $ x \in \mathbb{R}^d$, the function $y \mapsto K(y,x)$ is an element of $\mathcal{H}$ that we will denote by $K_x$.
\item The reproducing property: $\forall f \in \mathcal{H}$ and $\forall x \in \mathbb{R}^d$, $f(x) = \langle f,K(\cdot,x) \rangle_\mathcal{H}$. In other words, function evaluations are equal to dot products with canonical elements of the RKHS.
\end{enumerate}
We make the following mild assumptions on the RKHS:
\begin{assshort}\label{asm:density}
\hspace*{.2cm} The RKHS $\mathcal{H}$ is dense in $ H^1(\mu)$.
\end{assshort}
Note that this is the case for most of the usual kernels: Gaussian, exponential \citep{micchelli2006universal}. As \textbf{(PI)} involves derivatives of test functions, we will also need some regularity properties of the RKHS. Indeed, to represent $\nabla f$ in our RKHS we need a partial derivative reproducing property of the kernel space.
\begin{assshort}\label{asm:regularity_RKHS}
\hspace*{.2cm} $K$ is a Mercer kernel such that $K \in C^2(\mathbb{R}^d \times \mathbb{R}^d)$.
\end{assshort}
Let us denote by $\partial_i = \partial_{x^i}$ the partial derivative operator with respect to the $i$-th component of $x$. It has been shown \citep{Zhou2008} that under assumption \asm{asm:regularity_RKHS}, $\forall i \in \llbracket1, d\rrbracket$, $\partial_i K_x \in \mathcal{H}$ and that a partial derivative reproducing property holds true: $\forall f \in \mathcal{H}$ and $\forall x \in \mathbb{R}^d$, $\partial_i f(x) = \langle \partial_i K_x , f \rangle_\mathcal{H} $. Hence, thanks to assumption \asm{asm:regularity_RKHS}, $\nabla f$ is easily represented in the RKHS. We also need some boundedness properties of the kernel.
\begin{assshort}\label{asm:bounded_kernel}
\hspace*{.2cm} $K$ is a kernel such that $\forall x \in~\mathbb{R}^d, \,K (x,x) \leqslant \mathcal{K}$ and\footnote{The subscript $d$ in $\mathcal{K}_d$ accounts for the fact that this quantity is expected to scale linearly with $d$ (as is the case for the Gaussian kernel).} $ \left\|\nabla K_x\right\|^2 \leqslant \mathcal{K}_d$, where $\left\|\nabla K_x \right\|^2: = \sum_{i=1}^d\langle \partial_i K_x, \partial_i K_x \rangle = \sum_{i=1}^d \frac{\partial^2 K}{\partial x^i \partial y^i} (x,x)$ (see calculations below), $x$ and $y$ standing respectively for the first and the second variables of $(x,y) \mapsto K(x,y)$.
\end{assshort}
The equality mentioned in the expression of $\|\nabla K_x\|^2$ arises from the following computation: $\partial_i K_y (x) = \langle \partial_i K_y, K_x \rangle = \partial_{y^i} K (x,y) $ and we can write that for all $x, y \in \mathbb{R}^d$, $\langle \partial_i K_x, \partial_i K_y \rangle = \partial_{x^i} \left( \partial_i K_y (x) \right) = \partial_{x^i} \partial_{y^i} K(x,y) $. Note that, for example, the Gaussian kernel satisfies \asm{asm:density}, \asm{asm:regularity_RKHS}, \asm{asm:bounded_kernel}.
\paragraph{A spectral point of view.} Let us define the following operators from $\mathcal{H}$ to $\mathcal{H}$:
\begin{align*}
\Sigma &= \mathbb{E} \left[K_x \otimes K_x \right],\hspace*{1.5cm} \Delta = \mathbb{E} \left[\nabla K_x \otimes_d \nabla K_x \right],
\end{align*}
and their empirical counterparts,
\begin{align*}
\widehat{\Sigma} = \frac{1}{n} \sum_{i=1}^n K_{x_i} \otimes K_{x_i}, \hspace*{0.5cm} \widehat{\Delta} = \frac{1}{n} \sum_{i=1}^n \nabla K_{x_i} \otimes_d \nabla K_{x_i},
\end{align*}
where $\otimes$ is the standard tensor product: $\forall f,g, h \in \mathcal{H}$, $(f \otimes g) (h) = \langle g, h \rangle_{_\mathcal{H}} f$ and $\otimes_d$ is defined as follows: $\forall f,g \in \mathcal{H}^d$ and $h \in \mathcal{H}$, $(f \otimes_d g) (h) = \sum_{i=1}^d \langle g_i, h\rangle_{_\mathcal{H}} f_i $.
\begin{prop}[Spectral characterization of the Poincaré constant]
\label{prop:Spectral_characterization}
Suppose that assumptions \asm{asm:density}, \asm{asm:regularity_RKHS}, \asm{asm:bounded_kernel} hold true. Then the Poincaré constant $\mathcal{P}_\mu$ is the maximum of the following Rayleigh ratio:
\begin{align}
\label{eq:spectral_poinca}
\mathcal{P}_\mu = \sup_{f \in \mathcal{H} \setminus \mathrm{Ker}(\Delta) } \frac{\langle f,C f \rangle_\mathcal{H}}{\langle f, \Delta f \rangle_\mathcal{H} } = \left\|\Delta^{-1/2} C \Delta^{-1/2}\right\|,
\end{align}
with $\|\cdot\|$ the operator norm on $\mathcal{H}$, and $C = \Sigma - m \otimes m$ the covariance operator, where $ m = \int_{\mathbb{R}^d} K_x \,d\mu(x) \in \mathcal{H} $ is the mean embedding of $\mu$, considering $\Delta^{-1}$ as the inverse of $\Delta$ restricted to $\left(\mathrm{Ker} (\Delta) \right)^\perp$.
\end{prop}
Note that $C$ and $\Delta$ are symmetric positive semi-definite trace-class operators (see Appendix~\ref{subsec:operators}). Note also that $\mathrm{Ker} (\Delta)$ is the set of constant functions, which suggests introducing $\mathcal{H}_0 := (\mathrm{Ker} (\Delta))^\perp = \mathcal{H} \cap L^2_0(\mu)$, where $L^2_0(\mu)$ is the space of $L^2(\mu)$ functions with mean zero with respect to $\mu$. Finally note that $\mathrm{Ker} (\Delta) \subset \mathrm{Ker} (C)$ (see Section \ref{sec:proofs_of_prop_1/2} of the Appendix). With the characterization provided by Proposition \ref{prop:Spectral_characterization}, we can easily define an estimator of the Poincaré constant~$\widehat{\mathcal{P}}_\mu$, following standard regularization techniques from kernel methods \citep{smola-book,Cristianini2004,fukumizu2007statistical}.
\begin{defi}
The estimator $\widehat{\mathcal{P}}_\mu^{n,\lambda}$ of the Poincaré constant is the following:
\begin{align}
\label{eq:Empirical_rayleigh_ratio}
\widehat{\mathcal{P}}_\mu^{n,\lambda} := \sup_{f \in \mathcal{H} \setminus \mathrm{Ker} (\Delta)} \frac{\langle f,\widehat{C} f \rangle_{\mathcal{H}}}{\langle f, (\widehat{\Delta} + \lambda I) f \rangle_{\mathcal{H}} } = \left\|\widehat{\Delta}_\lambda^{-1/2} \widehat{C} \widehat{\Delta}_\lambda^{-1/2}\right\|,
\end{align}
with $\widehat{C} = \widehat{\Sigma} - \widehat{m} \otimes \widehat{m}$ and where $ \widehat{m} = \frac{1}{n}\sum_{i=1}^n K_{x_i}$. $\widehat{C}$~is the empirical covariance operator and $\widehat{\Delta}_\lambda = \widehat{\Delta} + \lambda I$ is a regularized empirical version of the operator $\Delta$ restricted to $\left(\mathrm{Ker} (\Delta) \right)^\perp$ as in Proposition~\ref{prop:Spectral_characterization}.
\end{defi}
Note that regularization is necessary as the nullspace of $\widehat{\Delta}$ is no longer included in the nullspace of $\widehat{C}$, so that the Poincaré constant estimate blows up when $\lambda \to 0$. The problem in Equation \eqref{eq:Empirical_rayleigh_ratio} has a natural interpretation in terms of Poincaré inequality as it corresponds to a regularized \textbf{(PI)} for the empirical measure $\widehat{\mu}_n = \frac{1}{n} \sum_{i=1}^n \delta_{x_i}$ associated with the i.i.d. samples $x_1,\hdots,x_n$ from $d\mu$. To alleviate the notation, we will simply denote the estimator by $\widehat{\mathcal{P}}_\mu$ until the end of the paper.
\subsection{Statistical consistency of the estimator}
We show that, under some assumptions and by choosing carefully $\lambda$ as a function of $n$, the estimator~$\widehat{\mathcal{P}}_\mu$ is statistically consistent, i.e., almost surely:
$$\widehat{\mathcal{P}}_\mu \xrightarrow{n \rightarrow \infty} \mathcal{P}_\mu.$$
As we regularized our problem, we prove the convergence in two steps: first, the convergence of $\widehat{\mathcal{P}}_\mu$ to the regularized problem $\mathcal{P}^\lambda_\mu = \sup_{f \in \mathcal{H} \setminus \{0\}} \frac{\langle f,C f \rangle}{\langle f, (\Delta + \lambda I) f \rangle } = \|\Delta_\lambda^{-1/2} C \Delta_\lambda^{-1/2}\|$, which corresponds to controlling the statistical error associated with the estimator $\widehat{\mathcal{P}}_\mu$ (variance); second, the convergence of $\mathcal{P}^\lambda_\mu$ to $\mathcal{P}_\mu$ as $\lambda$ goes to zero which corresponds to the bias associated with the estimator $\widehat{\mathcal{P}}_\mu$. The next result states the statistical consistency of the estimator when $\lambda$ is a sequence going to zero as $n$ goes to infinity (typically as an inverse power of $n$).
\begin{thm}[Statistical consistency]
\label{thm:statistical_consistency}
Assume that \asm{asm:density}, \asm{asm:regularity_RKHS}, \asm{asm:bounded_kernel} hold true and that the operator $\Delta^{-1/2} C \Delta^{-1/2}$ is compact on $\mathcal{H}$. Let $(\lambda_n)_{n \in \mathbb{N}}$ be a sequence of positive numbers such that $\lambda_n \rightarrow 0$ and $\lambda_n\sqrt{n} \rightarrow + \infty$. Then, almost surely,$$\widehat{\mathcal{P}}_\mu \xrightarrow{n \rightarrow \infty} \mathcal{P}_\mu.$$
\end{thm}
As already mentioned, the proof is divided into two steps: the analysis of the statistical error for which we have an explicit rate of convergence in probability (see Proposition~\ref{prop:hat_P_to_P_lambda} below) and which requires $n^{-1/2}/\lambda_n \rightarrow 0$, and the analysis of the bias for which we need $\lambda_n \rightarrow 0$ and the compactness condition (see Proposition \ref{prop:P_lambda_to_P}). Notice that the compactness assumption in Proposition~\ref{prop:P_lambda_to_P} and Theorem~\ref{thm:statistical_consistency} is stronger than \textbf{(PI)}. Indeed, it can be shown that satisfying \textbf{(PI)} is equivalent to having the operator $\Delta^{-1/2} C \Delta^{-1/2}$ bounded whereas to have convergence of the bias we need compactness. Note also that $\lambda_n = n^{-1/4}$ matches the two conditions stated in Theorem \ref{thm:statistical_consistency} and is the optimal balance between the rate of convergence of the statistical error (of order $\frac{1}{\lambda \sqrt{n}}$, see Proposition \ref{prop:hat_P_to_P_lambda}) and of the bias we obtain in some cases (of order~$\lambda$, see Section \ref{sec:analysis_of_bias} of the Appendix). Note that the rates of convergence do not depend on the dimension $d$ of the problem which is a usual strength of kernel methods and differ from local methods like diffusion maps \cite{COIFMAN20065,hein2007graph}.
For the statistical error term, it is possible to quantify the rate of convergence of the estimator to the regularized Poincaré constant as shown below.
\begin{prop}[Analysis of the statistical error]
\label{prop:hat_P_to_P_lambda}
Suppose that \asm{asm:density}, \asm{asm:regularity_RKHS}, \asm{asm:bounded_kernel} hold true. For any $\delta \in (0,1/3)$, and $\lambda > 0$ such that $\lambda \leqslant\|\Delta\| $ and any integer $n \geqslant 15 \frac{\mathcal{K}_d}{\lambda} \log \frac{4\,\mathrm{Tr } \Delta }{\lambda \delta }$, with probability at least $1-3\delta$,
\begin{align}
\label{eq:hat_P_to_P_lambda}
\left|\widehat{\mathcal{P}}_\mu-\mathcal{P}^\lambda_\mu\right| \leqslant \frac{8 \mathcal{K}}{\lambda \sqrt{n}}\log (2/\delta) + \mathrm{o}\left(\frac{1}{\lambda \sqrt{n}}\right).
\end{align}
\end{prop}
Note that in Proposition~\ref{prop:hat_P_to_P_lambda} we are only interested in the regime where $\lambda \sqrt{n}$ is large. Lemmas \ref{lemma:concentration_C} and \ref{lemma:concentration_delta} of the Appendix give explicit and sharper bounds under refined hypotheses on the spectra of $C$ and $\Delta$. Recall also that under assumption \asm{asm:bounded_kernel}, $C$ and $\Delta$ are trace-class operators (as proved in the Appendix, Section \ref{subsec:operators}) so that $\|\Delta\|$ and $\mathrm{Tr}(\Delta)$ are indeed finite. Finally, remark that \eqref{eq:hat_P_to_P_lambda} implies the almost sure convergence of the statistical error by applying the Borel-Cantelli lemma.
\begin{prop}[Analysis of the bias]
\label{prop:P_lambda_to_P}
Assume that \asm{asm:density}, \asm{asm:regularity_RKHS}, \asm{asm:bounded_kernel} hold true, and that the bounded operator $\Delta^{-1/2} C \Delta^{-1/2}$ is compact on $\mathcal{H}$. Then,
\begin{align*}
\underset{\lambda \rightarrow 0}{\lim} \ \mathcal{P}^\lambda_\mu = \mathcal{P}_\mu.
\end{align*}
\end{prop}
As mentioned above, the compactness condition (similar to the one used in convergence proofs of kernel Canonical Correlation Analysis \citep{fukumizu2007statistical}) is stronger than satisfying \textbf{(PI)}. The compactness condition adds constraints on the spectrum of $\Delta^{-1/2} C \Delta^{-1/2}$: it is discrete and accumulates at~$0$. We give more details on this condition in Section \ref{sec:analysis_of_bias} of the Appendix and derive explicit rates of convergence under general conditions. We also derive a rate of convergence for more specific structures (Gaussian case or under an assumption on the support of $\mu$) in Sections \ref{sec:analysis_of_bias} and \ref{sec:gaussian_bias} of the Appendix.
\section{Learning a Reaction Coordinate}
\label{sec:learning_RC}
If the measure $\mu$ is multimodal, the Langevin dynamics~\eqref{eq:langevin} is trapped for long times in certain regions (modes), preventing an efficient exploration of the space. This phenomenon is called \textit{metastability} and is responsible for the slow convergence of the diffusion to its equilibrium \citep{Lelievre2013,Lelievre_2008}. Some efforts in the past decade \citep{Lelievre2015} have focused on understanding this multimodality by capturing the behavior of the dynamics at a coarse-grained level, which often has a low-dimensional nature. The aim of this section is to take advantage of the estimation of the Poincaré constant to give a procedure to unravel these dynamically meaningful slow variables, called reaction coordinates.
\subsection{Good Reaction Coordinate}
\label{subsec:GRC}
From a numerical viewpoint, a good reaction coordinate can be defined as a low-dimensional function $\xi: \mathbb{R}^d \to \mathbb{R}^p\ (p \ll d) $ such that the family of conditional measures $\left(\mu(\cdot \,|\, \xi(x) = z)\right)_{z \in \mathbb{R}^p}$ is ``less multimodal'' than the measure $d\mu$. This can be fully formalized in particular in the context of free energy techniques such as the adaptive biasing force method, see for example \cite{Lelievre_2008}. For more details on mathematical formalizations of metastability, we also refer to \cite{Lelievre2013}. The point of view we follow in this work is to choose $\xi$ in order to maximize the Poincaré constant of the pushforward distribution $\xi * \mu$. The idea is to capture in $\xi * \mu$ the essential multimodality of the original measure, in the spirit of the two-scale decomposition of Poincaré or logarithmic Sobolev inequalities \citep{Lelievre2009,menz2014,OTTO2007121}.
\subsection{Learning a Reaction Coordinate}
\label{subsec:learning_RC}
\paragraph{Optimization problem.} Let us assume in this subsection that the reaction coordinate is an orthogonal projection onto a linear subspace of dimension $p$. Hence $\xi$ can be written as $\xi(x) = A x$ for all $x \in \mathbb{R}^d$, with $A \in \mathcal{S}^{p,d}$ where $\mathcal{S}^{p,d} = \{A \in \mathbb{R}^{p\times d}\ \mathrm{s.\ t.}\ A A^\top = I_p \}$ is the Stiefel manifold~\citep{edelman1998geometry}. As discussed in Section~\ref{subsec:GRC}, to find a good reaction coordinate we look for $\xi$ for which the Poincaré constant of the pushforward measure $\xi * \mu$ is the largest. Given $n$ samples, let us define the matrix $X = (x_1,\hdots,x_n)^\top \in \mathbb{R}^{n \times d}$. We denote by $\widehat{\mathcal{P}}_{X}$ the estimator of the Poincaré constant using the samples $(x_1,\hdots,x_n)$. Hence $\widehat{\mathcal{P}}_{AX^\top}$ defines an estimator of the Poincaré constant of the pushforward measure $\xi * \mu$. Our aim is to find $\underset{A \in \mathcal{S}^{p,d}}{\mathrm{argmax}}\ \ \widehat{\mathcal{P}}_{AX^\top}$.
\paragraph{Random features.} One computational issue with the estimation of the Poincaré constant is that building $\widehat{C}$ and $\widehat{\Delta}$ requires constructing $n \times n$ and $nd \times nd$ matrices respectively. Random features~\citep{Rahimi2008} avoid this problem by explicitly building features that approximate a translation-invariant kernel $K(x,x') = K(x-x')$. More precisely, let $M$ be the number of random features, $(w_m)_{1 \leqslant m\leqslant M}$ be independent random variables distributed according to the spectral measure of the kernel, $\mathbb{P}(dw) \propto \left(\int_{\mathbb{R}^d} \mathrm{e}^{-\mathrm{i} w^\top \delta} K(\delta)\, d \delta\right) dw$ (a probability measure up to normalization, by Bochner's theorem), and $(b_m)_{1 \leqslant m\leqslant M}$ be independently and identically distributed according to the uniform law on $[0,2\pi]$; then the feature vector $\phi^M (x) = \sqrt{\frac{2}{M}} \left(\cos(w_1^\top x + b_1), \hdots, \cos(w_M^\top x + b_M)\right)^\top \in \mathbb{R}^M$ satisfies $K(x,x') \approx \phi^M (x)^\top \phi^M (x') $. Therefore, random features allow us to approximate $\widehat{C}$ and $\widehat{\Delta}$ by $M \times M$ matrices $\widehat{C}^M$ and $\widehat{\Delta}^M$ respectively. Finally, when these matrices are constructed using the projected samples, i.e.~$\left(\cos(w_m^\top A x_i + b_m)\right)_{_{{ \substack{1\leq m\leq M \\ 1\leq i\leq n}}}}$ , we denote them by $\widehat{C}^M_A$ and $\widehat{\Delta}^M_A$ respectively. Hence, the problem reads
\begin{align}
\label{eq:optimization_problem}
\mathrm{Find} \ \underset{A \in \mathcal{S}^{p,d}}{\mathrm{argmax}}\ \ \widehat{\mathcal{P}}_{AX^\top} = \underset{A \in \mathcal{S}^{p,d}}{\mathrm{argmax}}\ \max_{v \in \mathbb{R}^M \setminus \{0\}} \ F(A,v)\ ,\quad \textrm{where}\ F(A,v):=\frac{v^\top \widehat{C}^M_A v}{v^\top (\widehat{\Delta}^M_A + \lambda I) v }.
\end{align}
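As an illustration, the following Python sketch builds the feature map above for the Gaussian kernel $K(x,x') = \exp(-\|x-x'\|^2)$ used in our experiments, whose spectral measure is the Gaussian $\mathcal{N}(0, 2 I_d)$; the sample sizes and the seed are arbitrary.
\begin{verbatim}
import numpy as np

def random_fourier_features(X, M, rng):
    # phi(x) = sqrt(2/M) cos(W x + b); for K(x, x') = exp(-||x - x'||^2)
    # the spectral measure is Gaussian, w_m ~ N(0, 2 I_d).
    n, d = X.shape
    W = rng.normal(scale=np.sqrt(2.0), size=(M, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=M)
    return np.sqrt(2.0 / M) * np.cos(X @ W.T + b)   # shape (n, M)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
Phi = random_fourier_features(X, M=200, rng=rng)
# Phi @ Phi.T approximates the n x n kernel Gram matrix.
\end{verbatim}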
\paragraph{Algorithm.} To solve the non-concave optimization problem \eqref{eq:optimization_problem}, our procedure alternates between one step of Riemannian gradient ascent on the Stiefel manifold to update $A$, and the solution of a generalized eigenvalue problem to update $v$. More precisely, the algorithm~reads:
\vspace*{0.25cm}
\begin{algorithm}[H]
\KwResult{Best linear Reaction Coordinate: $A_* \in \mathcal{S}^{p, d}$}
$A_0$ random matrix in $\mathcal{S}^{p, d}$, $\eta_t > 0$ step-size\;
\For{$t = 0, \hdots, T-1$}{
\begin{itemize}
\item Solve the generalized largest eigenvalue problem with matrices $\widehat{C}^M_{A_t}$ and $\widehat{\Delta}^M_{A_t}$ to get~$v^* (A_t)$: $$ v^* (A_t) = \underset{v \in \mathbb{R}^M \setminus\{0\}}{\mathrm{argmax}}\ \ \frac{v^\top \widehat{C}^M_{A_t} v}{v^\top (\widehat{\Delta}^M_{A_t} + \lambda I) v }. $$
\item Do one gradient ascent step: $A_{t+1} = A_t + \eta_t\ \mathrm{grad}_A\, F(A,v^* (A_t)).$
\end{itemize}
}
\caption{Algorithm to find best linear Reaction Coordinate.}
\end{algorithm}
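For concreteness, the following Python sketch shows the two updates of one iteration: the generalized eigenvalue problem is solved with \texttt{scipy.linalg.eigh}, and the Stiefel step uses a tangent-space projection followed by a QR retraction. The Euclidean gradient \texttt{euclid\_grad\_F} of $F(\cdot, v^*(A_t))$ is assumed to be provided (e.g.~by automatic differentiation); this is a sketch under these assumptions, not the Pymanopt-based implementation used in the experiments.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh, qr

def best_v(C_M, D_M, lam):
    # Largest generalized eigenvector of the pencil (C_M, D_M + lam I).
    vals, vecs = eigh(C_M, D_M + lam * np.eye(D_M.shape[0]))  # ascending
    return vecs[:, -1]

def stiefel_ascent_step(A, G, eta):
    # Project the Euclidean gradient G onto the tangent space at A
    # (rows of A orthonormal: A A^T = I_p), then retract by QR.
    xi = G - 0.5 * (G @ A.T + A @ G.T) @ A
    Q, _ = qr((A + eta * xi).T, mode='economic')
    return Q.T  # back on the Stiefel manifold (up to column signs)

# One iteration, given the random-feature matrices C_M, D_M built from the
# projected samples A_t X^T and an assumed gradient oracle euclid_grad_F:
#   v_star = best_v(C_M, D_M, lam)
#   A_next = stiefel_ascent_step(A_t, euclid_grad_F(A_t, v_star), eta)
\end{verbatim}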
\section{Numerical experiments}
\label{sec:experiments}
We divide our experiments into two parts: the first one illustrates the convergence of the estimated Poincaré constant as given by Theorem~\ref{thm:statistical_consistency} (see Section~\ref{subsec:experiments_poincare}), and the second one demonstrates the interest of the reaction coordinates learning procedure described in Section~\ref{subsec:learning_RC} (see Section~\ref{subsec:experiments_RC}).
\subsection{Estimation of the Poincaré constant}
\label{subsec:experiments_poincare}
In our experiments we choose the Gaussian kernel $K(x,x') = \exp\, (-\|x-x'\|^2)$. This induces an RKHS satisfying \asm{asm:density}, \asm{asm:regularity_RKHS}, \asm{asm:bounded_kernel}. Estimating $\widehat{\mathcal{P}}_\mu$ from $n$ samples $(x_i)_{i \leqslant n}$ is equivalent to finding the largest eigenvalue of an operator from $\mathcal{H}$ to $\mathcal{H}$. Indeed, we have
$$ \widehat{\mathcal{P}}_\mu = \left\| (\widehat{Z}_n^* \widehat{Z}_n + \lambda I)^{-\frac{1}{2}} \widehat{S}_n^* \left(I - \frac{1}{n}\mathds{1}\mathds{1}^\top\right)\, \widehat{S}_n\, (\widehat{Z}_n^* \widehat{Z}_n + \lambda I)^{-\frac{1}{2}} \right\|_\mathcal{H}, $$
where $\widehat{Z}^i_n$ is the operator from $\mathcal{H}$ to $\mathbb{R}^n$ defined by $\widehat{Z}^i_n(g) = \frac{1}{\sqrt{n}} \left( \langle g, \partial_i K_{x_j} \rangle \right)_{1 \leqslant j\leqslant n}$ for all $g \in \mathcal{H}$, $\widehat{Z}_n : \mathcal{H} \to \mathbb{R}^{nd}$ is obtained by stacking the operators $(\widehat{Z}^i_n)_{1 \leqslant i \leqslant d}$ (so that $\widehat{Z}_n^* \widehat{Z}_n = \sum_{i=1}^d (\widehat{Z}^i_n)^* \widehat{Z}^i_n = \widehat{\Delta}$), and $\widehat{S}_n$ is the operator from $\mathcal{H}$ to $\mathbb{R}^n$ defined by $\widehat{S}_n(g) = \frac{1}{\sqrt{n}} \left( \langle g, K_{x_j} \rangle \right)_{1 \leqslant j\leqslant n}$ for all $g \in \mathcal{H}$.
By the Woodbury operator identity, $(\lambda I+\widehat{Z}_n^*\widehat{Z}_n)^{-1}=\frac{1}{\lambda}\left(I- \widehat{Z}_n^*(\lambda I+\widehat{Z}_n\widehat{Z}_n^*)^{-1}\widehat{Z}_n\right)$, and the fact that $\|T^* T\| = \|T T^*\|$ for any bounded operator $T$, we obtain
\begin{align*}
\widehat{\mathcal{P}}_\mu &= \left\| (\widehat{Z}_n^* \widehat{Z}_n + \lambda I)^{-\frac{1}{2}} \widehat{S}_n^* \left(I - \frac{1}{n}\mathds{1}\mathds{1}^\top\right)\, \widehat{S}_n\, (\widehat{Z}_n^* \widehat{Z}_n + \lambda I)^{-\frac{1}{2}} \right\|_\mathcal{H} \\
&= \left\| (\widehat{Z}_n^* \widehat{Z}_n + \lambda I)^{-\frac{1}{2}} \widehat{S}_n^* \left(I - \frac{1}{n}\mathds{1}\mathds{1}^\top\right) \left(I - \frac{1}{n}\mathds{1}\mathds{1}^\top\right)\, \widehat{S}_n\, (\widehat{Z}_n^* \widehat{Z}_n + \lambda I)^{-\frac{1}{2}} \right\|_\mathcal{H} \\
&= \left\| \left(I - \frac{1}{n}\mathds{1}\mathds{1}^\top\right)\, \widehat{S}_n\, (\widehat{Z}_n^* \widehat{Z}_n + \lambda I)^{-1} \widehat{S}_n^* \left(I - \frac{1}{n}\mathds{1}\mathds{1}^\top\right) \right\|_2 \\
&= \frac{1}{\lambda} \left\| \left(I - \frac{1}{n}\mathds{1}\mathds{1}^\top\right) \left(\widehat{S}_n \widehat{S}_n^* - \widehat{S}_n \widehat{Z}_n^* \, (\widehat{Z}_n \widehat{Z}_n^* + \lambda I)^{-1} \widehat{Z}_n \widehat{S}_n^*\right) \left(I - \frac{1}{n}\mathds{1}\mathds{1}^\top\right) \right\|_2,
\end{align*}
which is now the largest eigenvalue of an $n \times n$ matrix built as the product of matrices involving the kernel $K$ and its derivatives. The above calculation also uses the idempotency $\left(I - \frac{1}{n}\mathds{1}\mathds{1}^\top\right)^2 = I - \frac{1}{n}\mathds{1}\mathds{1}^\top$.
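The following NumPy sketch transcribes this formula for the Gaussian kernel $K(x,y) = \exp(-\|x-y\|^2)$, whose derivatives are available in closed form. It is intended for small $n$ and $d$ only, since the matrix $\widehat{Z}_n\widehat{Z}_n^*$ it builds is of size $nd \times nd$.
\begin{verbatim}
import numpy as np

def poincare_estimate(X, lam):
    # Gaussian kernel K(x, y) = exp(-||x - y||^2) and closed-form derivatives.
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]        # diff[j, l] = x_j - x_l
    K = np.exp(-(diff ** 2).sum(-1))            # Gram matrix
    SS = K / n                                  # S S^*, n x n
    # Z S^*: (1/n) <d_i K_{x_j}, K_{x_l}> = -(2/n) diff[j, l, i] K[j, l]
    ZS = (-2.0 / n) * np.einsum('jli,jl->ijl', diff, K).reshape(d * n, n)
    # Z Z^*: (1/n) <d_i K_{x_j}, d_p K_{x_l}>
    #      = (1/n) (2 delta_{ip} - 4 diff_i diff_p) K
    ZZ = (2.0 * np.einsum('ip,jl->ijpl', np.eye(d), K)
          - 4.0 * np.einsum('jli,jlp,jl->ijpl', diff, diff, K))
    ZZ = ZZ.reshape(d * n, d * n) / n
    mid = SS - ZS.T @ np.linalg.solve(ZZ + lam * np.eye(d * n), ZS)
    Pi = np.eye(n) - np.ones((n, n)) / n        # centering projector
    return np.linalg.eigvalsh(Pi @ mid @ Pi)[-1] / lam

rng = np.random.default_rng(0)
print(poincare_estimate(rng.standard_normal((200, 1)), lam=1e-2))
\end{verbatim}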
\begin{figure}[ht]
\includegraphics[width=0.49\textwidth]{comparison.pdf}
\hspace{0.1cm}%
\includegraphics[width=0.49\textwidth]{mixture.pdf}
\caption{ \textbf{(Left)} Comparison of the convergences of the kernel-based method described in this paper and diffusion maps in the case of a Gaussian of variance~$1$ (for each $n$ we took the mean over $50$ runs). The dotted lines correspond to standard deviations of the estimator. \textbf{(Right)} Exponential growth of the Poincaré constant for a mixture of two Gaussians $\mathcal{N}(\pm \frac{a}{2},\sigma^2)$ as a function of the distance $a$ between the two Gaussians ($\sigma = 0.1$ and $n = 500$).}
\label{fig:poinca}
\end{figure}
We illustrate in Figure~\ref{fig:poinca} the rate of convergence of the estimated Poincaré constant to $1$ for the Gaussian $\mathcal{N}(0,1)$ as the number of samples $n$ grows. Recall that in this case the Poincaré constant is equal to~$1$ (see Subsection \ref{subsec:examples}). We compare our prediction to the one given by diffusion maps techniques \citep{COIFMAN20065}. For our method, in all the experiments we set $\lambda_n = \frac{C_\lambda}{n}$, which is smaller than what is given by Theorem \ref{thm:statistical_consistency}, and optimize the constant $C_\lambda$ with a grid search. Following \citep{hein2007graph}, to find the correct bandwidth $\varepsilon_n$ of the kernel involved in diffusion maps, we performed a similar grid search on the constant $C_\varepsilon$ with the scaling $\varepsilon_n = \frac{C_\varepsilon}{n^{1/4}}$. In addition to converging faster as $n$ becomes large, the kernel-based method is more robust with respect to the choice of its hyperparameter, which is of crucial importance for the quality of diffusion maps. Note also that we derive an explicit convergence rate for the bias in the Gaussian case in Section \ref{sec:gaussian_bias} of the Appendix. In Figure~\ref{fig:poinca}, we also show the growth of the Poincaré constant for a mixture of two Gaussians (with $\sigma = 0.1$, see the caption of Figure~\ref{fig:poinca}) as a function of the distance between the two means. This is a situation for which our method provides an estimate although, to our knowledge, no precise value of the Poincaré constant is known (even if lower and upper bounds are available \citep{Chafai2010}).
\subsection{Learning a reaction coordinate}
\label{subsec:experiments_RC}
We next illustrate the algorithm described in Section \ref{sec:learning_RC} to learn a reaction coordinate which, we recall, encodes directions that are difficult to sample. To perform the gradient step over the Stiefel manifold we used Pymanopt \citep{Pymanopt2016}, a Python library for manifold optimization derived from Manopt~\citep{manopt} (Matlab). We show here a synthetic two-dimensional example. We first preprocessed the samples with ``whitening'', i.e., rescaling the data to have variance~$1$ in all directions, to avoid scaling artifacts. We took $M = 200 $ random features and $n = 200 $ samples.
\begin{figure}[ht]
\footnotesize
\includegraphics[width=0.49\textwidth]{mixture_samples.pdf}
\hspace{0.1cm}%
\includegraphics[width=0.49\textwidth]{mixture_whiten_samples.pdf}
\\
\begin{center}
\includegraphics[width=0.49\textwidth]{mixture_optim.pdf}
\end{center}
\caption{ \textbf{(Top left)} Samples of a mixture of three Gaussians. \textbf{(Top right)} Whitened samples of the Gaussian mixture on the left. \textbf{(Bottom)} Poincaré constant of the samples projected on a line of angle $\theta$.}
\label{fig:three_gaussians}
\end{figure}
We show (Figure~\ref{fig:three_gaussians}) one synthetic example for which our algorithm found a good reaction coordinate. The samples are taken from a mixture of three Gaussians of means $(0,0), (1,1) $ and $(2,2)$ and covariance $\Sigma = \sigma^2 I$ where $\sigma = 0.1$. The three means are aligned along a line which makes an angle $\theta = \pi/4$ with respect to the $x$-axis: one expects the algorithm to identify this direction as the most difficult one to sample (see the top plots of Figure~\ref{fig:three_gaussians}). With a few restarts, our algorithm indeed finds the largest Poincaré constant for a projection onto the line parametrized by $\theta = \pi / 4$.
\section{Conclusion and Perspectives}
In this paper, we have presented an efficient method to estimate the Poincaré constant of a distribution from independent samples, paving the way to learning low-dimensional marginals that are hard to sample (corresponding to the image measure of so-called reaction coordinates). While we have focused on linear projections, learning non-linear projections is important in molecular dynamics; this can readily be done with a well-chosen parametrization of the non-linear function, and then applied to real data sets, where it would lead to accelerated sampling~\citep{Lelievre2015}. Finally, it would be interesting to apply our framework to Bayesian inference~\citep{Chopin2012} and leverage the knowledge of reaction coordinates to accelerate sampling methods.
\bibliographystyle{plain}
\section{Introduction}
Object detection is an essential problem in computer vision which aims to detect the locations of semantic objects in videos or digital images. Object detection is widely used in areas such as image retrieval, object identification, video surveillance, etc. Pedestrian detection is a branch of object detection problem which deals with detecting the specific human class. It has applications in various topics such as advanced driver assistance systems, person identification, face recognition, etc.
The pedestrian detection problem can be decomposed into region proposal generation, feature extraction, and pedestrian verification. In general, object detection involves generating candidates for bounding boxes enclosing the objects of interest, extracting robust features as high level representations of the candidates, and verifying each candidate to be a true or a false positive. In recent years, convolutional neural network based techniques have successfully been applied to pedestrian detection and achieved better performances in many challenging scenarios. Li et al. \cite{safcnn} trained multiple Fast R-CNN \cite{fastrcnn} based networks to detect pedestrians with different scales and combined results from all networks to generate the final results. Hosang et al. \cite{SCF+AlexNet} used the SquaresChnFtrs \cite{tenyears} method to generate pedestrian proposals and trained AlexNet \cite{alexnet} to perform pedestrian verification. Zhang et al. \cite{rpn} used a Region Proposal Network (RPN) \cite{fasterrcnn} to compute pedestrian candidates and a cascaded Boosted Forest \cite{bf} to perform sample re-weighting to classify the candidates.
\begin{figure*}
\begin{center}
\includegraphics[width=0.6\linewidth]{whole_new.png}
\end{center}
\caption{Proposed Fused DNN Architecture}
\label{fig:pipeline}
\end{figure*}
In this paper, we propose a deep neural network fusion architecture to address the pedestrian detection problem, called Fused Deep Neural Network.
Compared to previous methods, our proposed system is faster while achieving better detection accuracy. The architecture consists of a pedestrian candidate generator, a deep convolutional neural network trained as a single shot detector (SSD) to achieve a high detection rate, albeit with a large false positive rate.
A novel soft-rejection strategy is used to adjust the confidence in the detector candidates by fusion with a classification network employing an ensemble learning approach, and with a semantic segmentation network that provides pixel-wise labeling. The classification network deploys an ensemble of deep neural networks trained independently as verification networks, and their results are softly fused together with the detection results using the novel soft-rejection method. To prepare the training data for the verification networks, we devise a novel soft-label method to assign floating point labels to the detected candidates. Unlike the traditional hard-label method for object verification, where binary labels are used, the value of our soft-label is set to the largest overlap ratio between the current detected bounding box and all the ground-truth bounding boxes, and is adjusted to saturate to the binary values.
Additional accuracy improvements can be achieved at the expense of speed by the parallel semantic segmentation network which provides pixel-wise labels to generate a segmentation mask that delivers another soft confidence vote on the generated pedestrian candidates, and are further fused within the soft fusion framework.
The proposed network architecture is shown in Figure~\ref{fig:pipeline}.
Some of the ideas in this paper were presented at the 2017 IEEE WACV \cite{fdnn_xianzhidu}. In this paper, we provide a more detailed analysis of these ideas, show results on more datasets, and provide additional enhancements that improve the performance over that presented at the 2017 IEEE WACV \cite{fdnn_xianzhidu}. The new techniques presented here, which provide the additional gains over \cite{fdnn_xianzhidu}, are the soft-label method for training classification networks, learning the parameters of soft-rejection fusion by an additional fusion network, and the new kernel-based method to fuse the results of the semantic segmentation system and the detection system. These new techniques significantly improve the detection performance on the Caltech dataset, reducing the log-average miss rate from $8.18\%$ to $7.65\%$. We also extend the model to work on more classes besides pedestrians, such as cars and cyclists. Besides the Caltech Pedestrian dataset, we evaluate on three more popular pedestrian detection datasets: INRIA, ETH, and KITTI. Our techniques perform better than all previous state-of-the-art methods on Caltech, INRIA, and ETH in both accuracy and speed, and achieve comparable results on KITTI. More ablation analysis is conducted to explain the effectiveness of our system.
The rest of this paper is organized as follows. Section $2$ provides a detailed description of the pedestrian detection system. Section $3$ describes the semantic segmentation system and how it helps to refine the detection results. Section $4$ discusses the experiment results and explores the effectiveness of each component of the system. Section $5$ draws conclusions on this work.
\section{Pedestrian Detection system}
\subsection{Pedestrian Candidate Generator}
In order to quickly obtain pedestrian candidates of various sizes and aspect ratios at all possible locations of the input image, we use a single shot multi-box detector (SSD) \cite{SSD} as the candidate generator. The main reason we select SSD instead of other systems is that it uses multiple feature maps as the output layers. By lowering the accepting threshold of the confidence score, it outputs a large number of pedestrian candidates which are very likely to cover all the ground-truth pedestrians. Since it has a fully convolutional framework, it is also very fast.
The SSD network is a feed-forward convolutional network which consists of a VGG16 base, 8 convolutional layers above it, and a global pooling layer at the top. In the VGG16 base, the kernel size of the pool5 layer is set to $3\times 3$ and the stride is set to one, and the fc6 and fc7 layers are converted to dilated convolutional layers. Bounding box regression and classification are performed on the feature maps of 'conv4\textunderscore 3', 'fc7', 'conv6\textunderscore 2', 'conv7\textunderscore 2', 'conv8\textunderscore 2', 'conv9\textunderscore 2', and 'pool6' to generate the pedestrian candidates.
Since the output of 'conv4\textunderscore 3' has a much larger feature scale than that of other filters, an $L_2$ normalization technique is used to scale down the feature magnitudes \cite{ParseNetLW}. After each output layer, bounding box (BB) regression and classification are performed to generate pedestrian candidates.
The network architecture of the deployed pedestrian SSD is shown in Figure~\ref{fig:ssd}.
\begin{figure}[thb]
\begin{center}
\includegraphics[width=0.7\linewidth]{ssd_new.png}
\end{center}
\caption{Network architecture of the fully convolutional SSD-based pedestrian candidate generator}
\label{fig:ssd}
\end{figure}
To generate the pedestrian candidates, a set of default bounding boxes is slid over each output feature map. For each output layer of size $m \times n \times p$, a set of default BBs at different scales and aspect ratios are placed at each location. At every pixel location of the 7 output layers, we place 6 default bounding boxes (BBs) with aspect ratios $[0.1, 0.2, 0.41, 0.8, 1.6, 3.0]$ and relative heights $[0.05, 0.1, 0.24, 0.38, 0.52, 0.66, 0.80]$. Since $0.41$ is the average aspect ratio of all the pedestrian annotations, we place another set of default bounding boxes with that aspect ratio and relative heights $[0.1, 0.24, 0.38, 0.52, 0.66, 0.80, 0.94]$. In the training stage, we further categorize all the pedestrians into three classes: `Full pedestrian', `Occluded pedestrian', and `People'. The `People' class is defined as a group of people that are very close to or overlap with each other. The aspect ratios $1.6$ and $3.0$ are designed for `People'. At each default bounding box, a $3 \times 3 \times p$ convolutional kernel is applied to produce classification scores, and to perform bounding box regression by calculating the location offsets with respect to the default bounding box.
The multi-task training objective of SSD is given by Equation~\eqref{eq:1}
\begin{equation}
L=\frac{1}{N}(L_{conf}+\alpha L_{loc})
\label{eq:1}
\end{equation}
where $L_{conf}$ is the softmax loss for the classification task and $L_{loc}$ is the smooth $L_1$ localization loss \cite{fastrcnn}, $N$ is the number of positive default boxes, and $\alpha$ is a constant weighting factor to keep a balance between the two losses. Since SSD uses 7 output layers to generate multi-scale BB outputs, it provides a large pool of pedestrian candidates varying in scales and aspect ratios.
When training the SSD as the primary candidate generator, a default detection BB is labeled as positive if it has a Jaccard overlap (intersection over union ratio) greater than 0.5 with any ground truth BB, otherwise it is labeled as negative.
The SSD-generated pedestrian candidates should cover almost all ground truth pedestrians, while being as accurate as possible. This is essential to our fused DNN architecture, where the subsequent classification networks attempt to improve the precision by decreasing the confidence in many of the false positives introduced by the SSD, while preserving the recall rate. Being a fully convolutional network, SSD is a fast candidate generator.
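For illustration, a minimal Python sketch of the objective in Equation~\eqref{eq:1} is given below; the smooth $L_1$ form follows \cite{fastrcnn}, while default box matching and hard negative mining, which a full SSD implementation also performs, are omitted here.
\begin{verbatim}
import numpy as np

def smooth_l1(x):
    # Smooth L1 loss: 0.5 x^2 if |x| < 1, |x| - 0.5 otherwise.
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x ** 2, ax - 0.5)

def ssd_loss(conf_logits, labels, loc_pred, loc_target, pos_mask, alpha=1.0):
    # conf_logits: (B, n_classes); labels: (B,); loc_*: (B, 4);
    # pos_mask: (B,) boolean mask of positive default boxes.
    n_pos = max(int(pos_mask.sum()), 1)
    z = conf_logits - conf_logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    l_conf = -log_p[np.arange(len(labels)), labels].sum()  # softmax loss
    l_loc = smooth_l1(loc_pred[pos_mask] - loc_target[pos_mask]).sum()
    return (l_conf + alpha * l_loc) / n_pos
\end{verbatim}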
\subsection{Classification System \label{FNN}}
\subsubsection{Classification networks with soft-label method}
In this section, we describe the soft-label method we devised for preparing the training data for the classification network. The hard-label used in common object detection networks assigns a binary label to each pedestrian candidate bounding box by thresholding the Jaccard overlap ratio between this bounding box and the ground-truth bounding boxes. However, this is not the optimal strategy, especially when the overlap ratio is close to the threshold. In this work, we introduce the soft-label method to label pedestrian candidates for further classification. The soft-label method will assign a floating point label to each pedestrian candidate using the largest overlap ratio between the current pedestrian candidate and all the ground-truth bounding boxes. Suppose we have a pedestrian candidate and the ground-truth bounding box it overlaps with the most. The soft-labels for the pedestrian class $\mbox{label}_{ped}$ and the background class $\mbox{label}_{bg}$ are respectively calculated by Equations~\eqref{eq:2} and~\eqref{eq:3}, where $A_{BB_d}$ and $A_{BB_g}$ represent the areas covered by the detection BB and the ground truth BB, respectively
\begin{equation} \label{}
\mbox{label}_{ped}= \frac{A_{BB_{d}}\cap A_{BB_{g}}}{A_{BB_{d}}}
\label{eq:2}
\end{equation}
\begin{equation} \label{}
\mbox{label}_{bg}=1- \frac{A_{BB_{d}}\cap A_{BB_{g}}}{A_{BB_{d}}}
\label{eq:3}
\end{equation}
However, we found that it is better to use a soft-label in cases of mild confidence, and a hard-label in cases when the confidence in the label exceeds a threshold. Hence, we devised a hybrid soft-label method as follows: if the overlap ratio between the pedestrian candidate and the ground truth bounding box is lower than a threshold $th_a$ or greater than a threshold $th_b$, it is safe to label the current sample as background or pedestrian with probability one. For the candidates with intermediate confidence, the soft-label method is used to assign floating point labels, and we normalize the range [$th_a$, $th_b$] to [$0$, $1$]. Moreover, for convenience of implementation, the soft-label is made continuous over its range from $0$ to $1$. The final soft-label method is given by Equations~\eqref{eq:6} and~\eqref{eq:7}.
\begin{equation}
\mbox{label}_{ped}=
\begin{cases}
1,& \text{if } \frac{A_{BB_{d}}\cap A_{BB_{g}}}{A_{BB_{d}}}>th_b \\
0,& \text{if } \frac{A_{BB_{d}}\cap A_{BB_{g}}}{A_{BB_{d}}}<th_a \\
\frac{\frac{A_{BB_{d}}\cap A_{BB_{g}}}{A_{BB_{d}}}-th_a}{th_b-th_a}, & \text{otherwise,}
\end{cases}
\label{eq:6}
\end{equation}
\begin{equation}
\mbox{label}_{bg}=1-\mbox{label}_{ped}.
\label{eq:7}
\end{equation}
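A minimal Python sketch of the labeling rule in Equations~\eqref{eq:6} and~\eqref{eq:7} is given below, with the thresholds $th_a = 0.4$ and $th_b = 0.6$ adopted later in our experiments; boxes are assumed to be in $(x_1, y_1, x_2, y_2)$ format.
\begin{verbatim}
def soft_labels(bb_det, bb_gts, th_a=0.4, th_b=0.6):
    # Largest (intersection / detection area) ratio over all ground truths.
    x1, y1, x2, y2 = bb_det
    area = (x2 - x1) * (y2 - y1)
    ratio = 0.0
    for gx1, gy1, gx2, gy2 in bb_gts:
        iw = max(min(x2, gx2) - max(x1, gx1), 0.0)
        ih = max(min(y2, gy2) - max(y1, gy1), 0.0)
        ratio = max(ratio, iw * ih / area)
    if ratio > th_b:
        label_ped = 1.0
    elif ratio < th_a:
        label_ped = 0.0
    else:
        label_ped = (ratio - th_a) / (th_b - th_a)
    return label_ped, 1.0 - label_ped  # (label_ped, label_bg)

print(soft_labels((0, 0, 10, 20), [(2, 0, 12, 20)]))  # -> (1.0, 0.0)
\end{verbatim}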
The classification networks are trained to minimize the cross entropy loss objective function,
\begin{equation} \label{}
\epsilon=-\sum^{c}_{i=1}l_i \log{y_i}
\label{eq:4}
\end{equation}
where $c$ is the number of classes, $l_i$ and $y_i$ are the soft-label and the softmax probability for class $i$. Note that $\sum_i l_i=1$.
Note that for the conventional binary labeling method, $l_i$ is the indicator function, which is $1$ for the correct class and $0$ otherwise. Minimizing the cross entropy is equivalent to maximizing the softmax probability of the correct class. In our soft-label case, the softmax probabilities of all the classes are used to contribute to the cross entropy loss. The floating point soft-labels will determine how much each class contributes.
During back-propagation, the derivative of the cross entropy cost function with respect to the softmax input $z_i$ of class $i$ is calculated as in Equation~\eqref{eq:5}.
\begin{equation} \label{}
\begin{aligned}
\frac{\partial \epsilon}{\partial z_i}&=-\sum^{c}_{j=1}l_j\frac{\partial \log{y_j}}{\partial z_i}=-\frac{l_i}{y_i}\frac{\partial y_i}{\partial z_i}-\sum^c_{j\neq i}\frac{l_j}{y_j}\frac{\partial y_j}{\partial z_i}\\
&=-\frac{l_i}{y_i}y_i(1-y_i)-\sum^c_{j\neq i}\frac{l_j}{y_j}(-y_j y_i)\\
&=-l_i+y_i\sum^c_{j=1}l_j=y_i-l_i
\end{aligned}
\label{eq:5}
\end{equation}
We note that Equation~\eqref{eq:5} holds for both the conventional hard-label method and the proposed soft-label method. For the conventional hard-label method, a training sample has label $1$ for the correct class, and label $0$ for the incorrect classes. For the soft-label method, the identity holds as long as the sum of the soft-labels over all classes is $1$.
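This identity is easy to verify numerically; the short Python check below compares the analytic gradient $y_i - l_i$ of Equation~\eqref{eq:5} with a finite-difference approximation of Equation~\eqref{eq:4} (the particular logits and soft-labels are arbitrary).
\begin{verbatim}
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def xent(z, l):
    # Cross entropy of Equation (4) with soft-labels l (summing to one).
    return -(l * np.log(softmax(z))).sum()

rng = np.random.default_rng(0)
z = rng.standard_normal(3)
l = np.array([0.2, 0.5, 0.3])
analytic = softmax(z) - l                      # Equation (5): y - l
eps = 1e-6
numeric = np.array([(xent(z + eps * np.eye(3)[i], l) - xent(z, l)) / eps
                    for i in range(3)])
assert np.allclose(analytic, numeric, atol=1e-4)
\end{verbatim}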
As shown in Figure~\ref{fig:fusion}, the classification network consists of multiple networks run in parallel, following the idea of ensemble learning. The constituent networks are deep classification neural networks which have different network structures, but are trained on the same input data.
All pedestrian candidates generated by the primary candidate generator with confidence score greater than 0.01 and height greater than 40 pixels are collected as the new training data for the classification network. To handle candidates with different aspect ratios and sizes, all candidate BBs are rescaled to a fixed input size. The goal of these secondary classification networks is to further classify the detections from the first-stage single shot detector as true or false detections.
\begin{figure*}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.55\linewidth]{fusionnet2.png}
\label{fig:sfig1}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.55\linewidth]{fusionnet1.png}
\label{fig:sfig2}
\end{subfigure}
\caption{The two fusion network designs described in the paper. The left structure is an end-to-end training scheme. The right structure trains all the networks separately.}
\label{fig:fusion}
\end{figure*}
\subsubsection{Soft-rejection based fusion \label{subsection:SNF1} }
The opinions of all the constituent classification networks are fused with those of the candidate generator (CG). By doing so, the fused system is more likely to achieve a lower error than any single network. Since it is hard for any single network to bias the result, the fusion is also less likely to over-fit. Also, since the networks run in parallel, the speed of the classification network is limited by its slowest constituent network.
There are several conventional methods commonly used for opinion fusion, such as computing the mean of all results, majority voting, or the hard-rejection method. The hard-rejection method eliminates a pedestrian candidate based on a single negative vote from one classification network. Instead, we introduce the soft-rejection network fusion (SNF) method, as we use classification networks with different structures, and we expect each network to work well in some of the subcategories while performing only moderately well in others. The soft-rejection based fusion method can be described as follows: for one pedestrian candidate, the $k_{th}$ classification network gives us a classification probability $p_k$. If $p_k$ is higher than a threshold $t_1$, we generate a scaling factor $s_k$ greater than one to boost the initial confidence score generated by the SSD candidate generator. Otherwise, we generate a scaling factor less than one to decrease the initial confidence score. To prevent any of the classification networks from dominating the final results, we set a lower bound $t_2$ on the scaling factors. The scaling factors coming from all the classification networks are multiplied together with the initial confidence score to produce the final score. The idea is that, instead of accepting or rejecting any candidate outright, we boost or decrease its score. A poor classification network can be compensated by other good ones with SNF, whereas a wrong elimination of a true pedestrian by hard-rejection cannot be corrected. The SNF method is given by Equations~\eqref{eq:8} and~\eqref{eq:9}.
\begin{equation}
s_{k}=\max \left(p_{k}\times \frac{1}{t_1},t_2 \right)
\label{eq:8}
\end{equation}
\begin{equation}
S_{FDNN}=S_{CG}\times \prod_{k=1}^{K}s_k.
\label{eq:9}
\end{equation}
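A minimal Python sketch of Equations~\eqref{eq:8} and~\eqref{eq:9} is given below; the particular values of $t_1$ and $t_2$ here are illustrative placeholders, since in practice they are chosen by cross validation.
\begin{verbatim}
import numpy as np

def soft_rejection_fusion(s_cg, probs, t1=0.7, t2=0.1):
    # s_k = max(p_k / t1, t2); the final score is s_cg times all s_k.
    scales = np.maximum(np.asarray(probs) / t1, t2)
    return s_cg * scales.prod()

# One boosting vote (0.9) and one attenuating vote (0.4):
print(soft_rejection_fusion(0.8, [0.9, 0.4]))
\end{verbatim}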
\subsubsection{Soft-rejection based fusion network \label{subsection:SNF2} }
The values of all the parameters in the soft-rejection based fusion method as described in Subsection \ref{subsection:SNF1} were selected by cross-validation in our previous work \cite{fdnn_xianzhidu}. However, we found that the parameters when optimized on one dataset do not easily generalize to other datasets. Instead of hand-crafting such parameters, we propose in this paper to use a neural network to learn the optimal parameters, while keeping the idea of the soft-rejection based fusion method. We call this new method soft-rejection based fusion network.
Let $p_1, p_2, ..., p_K$ be the inputs to the fusion network, where $p_k$ is the softmax output of the $k_{th}$ classification network. The input layer also deploys a log layer to get the classification log probabilities of the individual networks. The input layer is followed by two fully connected layers, each has $500$ neurons, and one softmax layer to predict the weights for each classification network. The output of the neural network is the exponent of the weighted sum of all classification log probabilities. This results in a soft-fusion scheme which scales the candidate generator's confidence scores with the product of (exponentially) weighted classification probabilities of all individual classification networks, where the weights $w_k$ are optimized by the fusion network, and adheres to our soft network fusion framework described by Equation~\eqref{eq:9},
\begin{equation}
\begin{aligned}
S_{FDNN2}&=S_{CG}*\exp\left(\sum_{k=1}^K w_k*\log(p_k)\right) \\
&=S_{CG}*\prod_{k=1}^K p_k^{w_k}.
\end{aligned}
\label{eq:10}
\end{equation}
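Once the fusion network is trained, applying Equation~\eqref{eq:10} at test time reduces to a weighted geometric mean of the classification probabilities, as in the sketch below; the weights are assumed to be the softmax outputs of the trained fusion network.
\begin{verbatim}
import numpy as np

def fusion_network_score(s_cg, probs, weights):
    # S = S_CG * prod_k p_k^{w_k}, computed through log probabilities.
    return s_cg * np.exp(np.dot(weights, np.log(np.asarray(probs))))

print(fusion_network_score(0.8, [0.9, 0.4], weights=[0.6, 0.4]))
\end{verbatim}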
\subsubsection{Training the classification system}
There are two ways to train the classification system. The first method is to train an end-to-end system. For all the classification networks, we remove their loss layers and concatenate the output neurons for the pedestrian class from the softmax layers to form the input layer of the fusion network. This system has the classification networks as branches, merged together by the fusion network at the end, as shown on the left of Figure~\ref{fig:fusion}. However, since all the networks are trained together, the structure grows very large and is prone to overfitting. Moreover, since all the branches have different structures, they require different optimal hyper-parameter settings and converge at different speeds.
The second method is to train the classification networks first, and then use their output probabilities as inputs to train the fusion network separately. Since this method is straightforward and easy to implement, as shown on the right of Figure~\ref{fig:fusion}, it is the one used in this paper.
\section{Pixel-wise Semantic Segmentation System }
Based on the idea of fusion of multiple experts, we propose to enhance the accuracy of the system by adding an expert network which is trained independently from the candidate generator. In this implementation, we chose the independent network to be a semantic segmentation network trained to provide pixel-wise classification of pixels, in contrast to the region classification done by the candidate generator.
In our work, we utilize a semantic segmentation (SS) network based on deep dilated convolutions and context aggregation \cite{sspaper} running in parallel with the pedestrian detection system to further refine the end detection results of the whole system. The SS network is trained on the Cityscapes dataset for driving scene segmentation \cite{cityscapes}. To perform dense prediction, the network consists of a fully convolutional VGG16 network, adapted with dilated convolutions as the front end prediction module, whose output is fed to a multi-scale context aggregation module, consisting of a fully convolutional network whose convolutional layers have increasing dilation factors. We consider both the "person" and "rider" classes in Cityscapes dataset as pedestrians, and the remaining classes as background.
\subsection{Soft Fusion with a Pixel-wise Semantic Segmentation System \label{subsection:FSS}}
The results of the semantic segmentation system are fused with those of the pedestrian detection system as follows: First, the same original input image is processed by the semantic segmentation system. This produces a binary mask where the foreground pixels are set to $1$ to represent the class of interest (e.g. pedestrian) and the background pixels are set to $0$. Then, for each of the bounding boxes produced by the candidate generator, we analyze the pixels at the same locations in the binary mask. The soft fusion scaling factor is computed as the weighted sum of the foreground pixels within the bounding box, where the weighting factors are calculated from a weight matrix, called the Kernel, within the bounding box. To enable this weighted sum, all detected bounding boxes and their overlapped masks are rescaled to the same size as the Kernel. The Kernel is trained as the mean of the semantic segmentation binary masks of all ground-truth pedestrian bounding boxes in the training set, and is normalized to have a sum of $1$. This fusion method is given by Equations~\eqref{eq:11} and~\eqref{eq:12}.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\linewidth]{sstest.png}
\end{center}
\caption{The idea of kernel-based method to fuse the semantic segmentation system and the detection system.}
\label{fig:kernel}
\end{figure*}
\begin{equation}
S_{SS}=\sum_{(i,j) \in A_{BB}} \mbox{mask}(i,j) \times \mbox{Kernel}(i,j),
\label{eq:11}
\end{equation}
\begin{equation}
S_{FDNN2 + SS}=S_{FDNN2} \times S_{SS}
\label{eq:12}
\end{equation}
where $A_{BB}$ is the (rescaled) area of the bounding box, and $\mbox{mask}(i,j)$ and $\mbox{Kernel}(i,j)$ are the values of the binary mask and the Kernel at location $(i, j)$. From the visualization of the weight mask, illustrated in Figure~\ref{fig:kernel}, we see that the pixels at the center of the Kernel tend to have higher values than the pixels at the boundary. This agrees with the fact that in a perfect detection, the object of interest tends to appear at the center of the bounding box. The Kernel thus has the effect of boosting the score of a detection whose bounding box fits the object of interest (e.g. pedestrian), and decreasing the score of a detection whose bounding box is not well located. This model will be referred to as `FDNN2 + SS'.
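A minimal Python sketch of Equations~\eqref{eq:11} and~\eqref{eq:12} is given below; rescaling the mask crop under each detection to the Kernel's size is assumed to be done by the caller (e.g.~with an image library) and is omitted here.
\begin{verbatim}
import numpy as np

def train_kernel(gt_mask_crops):
    # Mean of the rescaled SS mask crops over all ground-truth boxes,
    # normalized to sum to one.
    kernel = np.mean(np.asarray(gt_mask_crops, dtype=float), axis=0)
    return kernel / kernel.sum()

def ss_score(mask_crop, kernel):
    # Equation (11): Kernel-weighted sum of the foreground pixels in the
    # box; mask_crop must already be rescaled to kernel.shape.
    return float((mask_crop * kernel).sum())

def fused_score(s_fdnn2, mask_crop, kernel):
    # Equation (12).
    return s_fdnn2 * ss_score(mask_crop, kernel)
\end{verbatim}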
It is worth noting that in our previous work \cite{fdnn_xianzhidu}, the soft fusion of the semantic segmentation results with those of the object detection system, labeled `FDNN + SS', was done differently.
For the sake of completeness and comparison of the results, we describe it here.
The SS mask is intersected with all detected BBs produced by the CG, and the degree to which each candidate BB overlaps with the pedestrian category in the SS activation mask gives a measure of the confidence of the SS network in the candidate generator's results. If the pedestrian pixels occupy at least $20\%$ of the candidate BB area, we accept the candidate and keep its score unaltered; otherwise, the candidate generator's scores are softly fused by scaling as in Equation~\eqref{eq:FusedDNNSS},
\begin{equation} \label{eq:FusedDNNSS}
S_{FDNN+SS}=
\begin{cases}
S_{FDNN}, & \text{if } \frac{A_M}{A_{BB}}>0.2 \\
S_{FDNN}\times \max\left(\frac{A_M}{A_{BB}} \times a_{ss},\, b_{ss}\right), & \text{otherwise}
\end{cases}
\end{equation}
where $A_{BB}$ represents the area of the BB, $A_M$ represents the area within $A_{BB}$ covered by the semantic segmentation mask, $a_{ss}$, and $b_{ss}$ are chosen as $4$ and $0.35$ by cross validation.
We also note that SNF of the CG network with an SS network is slightly different from SNF with a classification network. The reason is that the SS network can generate new detections which have not been produced by the CG, which is not the case for the classification networks. To address this, the proposed SNF methods `FDNN2 + SS' and `FDNN + SS' eliminate new detections from the SS network that do not overlap with any CG detection.
\section{Experiments and result analysis}
\subsection{Training settings}
The proposed method is trained on the training sets of the Caltech Pedestrian dataset, the ETH dataset, and the TudBrussels dataset.
To train the pedestrian candidate generator, both the original images and the horizontally flipped images which contain at least one annotated bounding box are used, which results in around $68,000$ training images in total. Among all the annotated bounding boxes, there are about $109,000$ annotated bounding boxes in the `Person\textunderscore full' class, $60,000$ annotated bounding boxes in the `Person\textunderscore occluded' class, and $35,000$ bounding boxes in the `People' class. All the images are of size $480\times 640$. The model is fine-tuned from the Microsoft COCO \cite{coco} pre-trained SSD model for $40,000$ iterations using the standard stochastic gradient descent (SGD) algorithm and the back-propagation algorithm at a learning rate of $10^{-5}$.
To train the classification system, all the ground-truth annotations and the pedestrian candidates generated in the previous stage with height greater than $40$ pixels and confidence score larger than $0.01$ are selected and rescaled to a fixed size of $250\times 250$ to form the training samples. For data augmentation, a $224\times 224$ patch is randomly cropped out of each training sample and horizontally flipped with probability $0.5$. To label the training samples, the soft-label method described by Equations~\eqref{eq:6} and~\eqref{eq:7} is used. The thresholds $th_a$ and $th_b$ are set to $0.4$ and $0.6$, respectively. One ResNet-50 \cite{res50} and one GoogLeNet \cite{googlenet} are used as the classification networks. Both classifiers are fine-tuned from ImageNet pre-trained models using the standard SGD algorithm and the back-propagation algorithm at a learning rate of $10^{-4}$.
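A minimal sketch of the soft-label assignment is given below; the linear ramp between the two thresholds is an assumption made here for illustration, the exact mapping being defined by Equations~\eqref{eq:6} and~\eqref{eq:7}.
\begin{verbatim}
def soft_label(overlap, th_a=0.4, th_b=0.6):
    # overlap: ratio between candidate box and ground-truth annotation
    if overlap <= th_a:
        return 0.0
    if overlap >= th_b:
        return 1.0
    return (overlap - th_a) / (th_b - th_a)   # assumed linear ramp
\end{verbatim}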
As pixel-level labels are not available for the pedestrian detection datasets, the dilated convolution model \cite{sspaper} trained on the Cityscapes dataset \cite{cityscapes} is used directly to incorporate the semantic segmentation network. All classes are treated as background, except the 'Person' and 'Rider' classes, which are merged into the `Pedestrian' class. Due to the lack of suitably labeled pedestrian datasets, no fine-tuning is involved in this step. All input images are rescaled from $480\times 640$ to $1024\times 2048$. To preserve the aspect ratio, and hence the human body shape, the image height is first scaled to $1024$ and blank patches are then padded on both the left and right sides.
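This rescaling step can be sketched as follows (OpenCV is used here for resizing as an illustrative assumption; any equivalent routine works):
\begin{verbatim}
import cv2

def rescale_pad(img, out_h=1024, out_w=2048):
    # scale the height to out_h keeping the aspect ratio, then pad
    # blank (zero) pixels equally on the left and right up to out_w
    h, w = img.shape[:2]
    new_w = int(round(w * out_h / float(h)))   # 480x640 -> 1024x1365
    resized = cv2.resize(img, (new_w, out_h))
    pad = out_w - new_w
    return cv2.copyMakeBorder(resized, 0, 0, pad // 2, pad - pad // 2,
                              cv2.BORDER_CONSTANT, value=0)
\end{verbatim}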
All the above mentioned models are built with the Caffe deep learning framework \cite{caffe}.
\subsection{Evaluation settings and results}
We evaluate the proposed method on the four most popular pedestrian detection datasets: the Caltech Pedestrian dataset, the INRIA dataset, the ETH dataset, and the KITTI dataset. The log-average miss rate (L-AMR) is used as the performance evaluation metric \cite{caltech}. L-AMR is computed by averaging the miss rate at nine false positives per image (FPPI) values evenly spaced in log-space in the range $10^{-2}$ to $10^0$ \cite{caltech}. Multiple evaluation settings are defined based on the height and visible fraction of the bounding boxes. The most popular settings are listed in Table~\ref{tab:cal_settings}.
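For reference, a sketch of the metric computation is shown below; following the common Caltech evaluation code, the average is taken as a geometric mean, and the FPPI curve is assumed to be sorted in ascending order.
\begin{verbatim}
import numpy as np

def log_average_miss_rate(fppi, miss_rate):
    refs = np.logspace(-2.0, 0.0, num=9)   # nine log-spaced FPPI points
    mrs = []
    for r in refs:
        idx = np.where(fppi <= r)[0]
        mrs.append(miss_rate[idx[-1]] if idx.size else 1.0)
    return float(np.exp(np.mean(np.log(np.maximum(mrs, 1e-10)))))
\end{verbatim}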
\begin{table}[h!]
\begin{center}
\begin{tabular}{|l|l|}
\hline
Setting & Description\\
\hline\hline
Reasonable & 50+ pixels. Occ. none or partial\\
All & 20+ pixels. Occ. none, partial, or heavy\\
Far & 30- pixels\\
Medium & 30-80 pixels\\
Near & 80+ pixels\\
Occ. none & 0\% occluded\\
Occ. partial & 1-35\% occluded\\
Occ. heavy & 35-80\% occluded\\
\hline
\end{tabular}
\end{center}
\caption{Evaluation settings for Caltech Pedestrian dataset.}
\label{tab:cal_settings}
\end{table}
We refer to the new models of this paper as F-DNN2, the proposed pedestrian detection system with a fusion network, and F-DNN2+SS, the F-DNN2 system fused with the semantic segmentation system; the models of our previous work \cite{fdnn_xianzhidu} are denoted F-DNN and F-DNN+SS, for fusion of the CG with the classification system only, or with both the classification system and the SS network, respectively, as described in Subsections~\ref{subsection:SNF1},~\ref{subsection:SNF2}, and~\ref{subsection:FSS}. Descriptions of each dataset and the corresponding evaluation results are given below.
\begin{table*}[ht]
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
Method & Reasonable & All & Far & Medium & Near & Occ. none & Occ. partial & Occ. heavy\\
\hline\hline
SCF+AlexNet \cite{SCF+AlexNet} & 23.32\% & 70.33\% & 100\% & 62.34\% & 10.16\% & 19.99\% & 48.47\% & 74.65\%\\
SAF R-CNN \cite{safcnn} & 9.68\% & 62.6\% & 100\% & 51.8\% & \textbf{0\%} & 7.7\% & 24.8\% & 64.3\%\\
MS-CNN \cite{mscnn} & 9.95\% & 60.95\% & 97.23\% & 49.13\% & 2.60\% & 8.15\% & 19.24\% & 59.94\%\\
DeepParts \cite{DeepParts2015} & 11.89\% & 64.78\% & 100\% & 56.42\% & 4.78\% & 10.64\% & 19.93\% & 60.42\%\\
CompACT-Deep \cite{CompACT2015} & 11.75\% & 64.44\% & 100\% & 53.23\% & 3.99\% & 9.63\% & 25.14\% & 65.78\%\\
RPN+BF \cite{rpn} & 9.58\% & 64.66\% & 100\% & 53.93\% & 2.26\% & 7.68\% & 24.23\% & 69.91\%\\
F-DNN+SS \cite{fdnn_xianzhidu} & 8.18\% & 50.29\% & 77.47\% & \textbf{33.15\%} & 2.82\% & 6.74\% & \textbf{15.11\%} & 53.76\% \\
F-DNN2 (ours) & 8.12\% & 51.86\% & 77.99\% & 36.72\% & 1.68\% & 6.75\% & 17.51\% & 40.84\% \\
F-DNN2+SS (ours) & \textbf{7.67\%} & \textbf{49.80\%} & \textbf{75.83\%} & 35.09\% & 1.51\% & \textbf{6.35\%} & 16.17\% & \textbf{39.84\%} \\
\hline
\end{tabular}
\end{center}
\caption{Detailed breakdown performance comparisons of our models and other state-of-the-art models on the 8 evaluation settings. All numbers are reported in L-AMR.}
\label{tab:cal}
\end{table*}
\begin{table*}[ht!]
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
Method & RPN+BF & SketchTokens & SpatialPooling & RandForest & VJ & HOG & F-DNN2+SS (ours) \\
\hline\hline
L-AMR & 6.88\% & 13.32\% & 11.22\% & 15.37\% & 72.48\% & 63.49\%& \textbf{6.78}\%\\
\hline
\end{tabular}
\end{center}
\caption{Performance comparisons of our models and other state-of-the-art models on the INRIA dataset.}
\label{tab:inria}
\end{table*}
\begin{table*}[ht]
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
Method & RPN+BF & TA-CNN & SpatialPooling & RandForest & VJ & HOG & F-DNN2+SS (ours) \\
\hline\hline
L-AMR & 30.32\% & 34.98\% & 37.37\% & 45.04\% & 74.69\% & 89.89\%& \textbf{30.02}\%\\
\hline
\end{tabular}
\end{center}
\caption{Performance comparisons of our models and other state-of-the-art models on the ETH dataset.}
\label{tab:eth}
\end{table*}
\begin{table*}[ht]
\begin{center}
\begin{tabular}{|l|l|}
\hline
Setting & Description\\
\hline\hline
Easy & Min. BB height: 40 Px, Max. occlusion level: Fully visible, Max. truncation: 15\%\\
Moderate & Min. BB height: 25 Px, Max. occlusion level: Partly occluded, Max. truncation: 30\%\\
Hard & Min. BB height: 25 Px, Max. occlusion level: Difficult to see, Max. truncation: 50\%\\
\hline
\end{tabular}
\end{center}
\caption{Evaluation settings for KITTI object detection dataset.}
\label{tab:kitti_settings}
\end{table*}
\textbf{Evaluation on the Caltech Pedestrian data}: The Caltech Pedestrian dataset contains 11 sets (S0-S10), where each set consists of 6 to 13 one-minute long videos collected from a vehicle driving through an urban environment. There are about $250,000$ frames with about $350,000$ annotated BBs and $2,300$ unique pedestrians. Each bounding box is assigned one of three labels: 'Person', 'People' (large groups of individuals), and 'Person?' (unclear identifications). The detailed breakdown performance of our two models (detection system only, and detection system plus semantic segmentation system) on this dataset is shown in Table~\ref{tab:cal}. We compare with all the state-of-the-art methods reported on the Caltech Pedestrian website. Both of our models significantly outperform the others on almost all evaluation settings. On the 'Reasonable' setting, our previous best model `F-DNN+SS' achieves $8.18\%$ L-AMR, a $14.6\%$ relative improvement over the previous best result of $9.58\%$ by RPN+BF. Even better accuracy is obtained by the proposed `F-DNN2' and `F-DNN2+SS' models, which achieve $8.12\%$ and $7.67\%$ L-AMR, respectively. On the 'All' evaluation setting, we achieve $50.29\%$ with `F-DNN+SS', a relative improvement of $17.5\%$ over $60.95\%$ by MS-CNN \cite{mscnn}; `F-DNN2+SS' performs even better, with an L-AMR of $49.8\%$. The L-AMR vs. FPPI plots for the 'Reasonable' and 'All' evaluation settings are shown in Figure~\ref{fig:cal_res} and Figure~\ref{fig:cal_all}. Except for the VJ \cite{Viola} and HOG \cite{HOG} methods, which are plotted as baselines, all other results are CNN-based methods.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{plot_res.png}
\end{center}
\caption{L-AMR vs. FPPI plot under the 'Reasonable' evaluation setting on the Caltech Pedestrian dataset.}
\label{fig:cal_res}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{plot_all.png}
\end{center}
\caption{L-AMR vs. FPPI plot under the 'All' evaluation setting on the Caltech Pedestrian dataset.}
\label{fig:cal_all}
\end{figure}
\textbf{Evaluation on INRIA}: We evaluate the proposed method using the converted INRIA pedestrian dataset provided by the Caltech Pedestrian group. There are $614$ positive training images and $288$ positive testing images in the INRIA dataset, with at least one pedestrian annotated in each image. To test the generalization capability of our model, we directly test our Caltech-pretrained model on the INRIA test set without any fine-tuning on the INRIA training set. On the 'Reasonable' setting, our `F-DNN2+SS' method achieves $6.78\%$ L-AMR, outperforming the previous best result of $6.88\%$ by RPN+BF. Table~\ref{tab:inria} shows the best results reported on the INRIA dataset and Figure~\ref{fig:inria} shows the L-AMR vs. FPPI plot. Results from VJ and HOG are plotted as baselines.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{plot_inria.png}
\end{center}
\caption{L-AMR vs. FPPI plot under the 'Reasonable' evaluation setting on the INRIA dataset.}
\label{fig:inria}
\end{figure}
\textbf{Evaluation on ETH}: There are $1,804$ images from three video sequences in the ETH pedestrian dataset. Since ETH images were part of our training data, we removed all of them from the training set and retrained our model before testing on the ETH dataset. On the 'Reasonable' setting, our method achieves $30.02\%$ L-AMR, outperforming the previous best result of $30.32\%$ by RPN+BF. Table~\ref{tab:eth} shows the best results reported on the ETH dataset and Figure~\ref{fig:eth} shows the L-AMR vs. FPPI plot. Results from VJ and HOG are plotted as baselines.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{plot_eth.png}
\end{center}
\caption{L-AMR vs. FPPI plot under the 'Reasonable' evaluation setting on the ETH dataset.}
\label{fig:eth}
\end{figure}
\textbf{Evaluation on KITTI}: We further generalize our method to the multi-class detection problem and test it on the KITTI object detection dataset. The KITTI object detection dataset contains $7,481$ training images and $7,518$ test images. The annotations are split into 7 classes: cars, vans, trucks, pedestrians, cyclists, trams, and 'Don't care'. Only cars, pedestrians, and cyclists are evaluated. There are three evaluation settings, shown in Table~\ref{tab:kitti_settings}. Following \cite{mscnn}, we split the training data into a training set and a validation set. We fine-tune three models using all training annotations for the three evaluated classes, respectively. For each model, we set the main aspect ratio to the mean aspect ratio of the corresponding class: $0.4$ for pedestrians, $0.7$ for cyclists, and $1.6$ for cars. Table~\ref{tab:kitti} shows the results on the KITTI object detection dataset. We achieve comparable results on all classes. Since the Caltech Pedestrian dataset does not distinguish between pedestrians and cyclists, while the KITTI object detection dataset does, our performance on the pedestrian and cyclist classes on KITTI is degraded.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Benchmark&Easy&Moderate&Hard\\
\hline\hline
Car& 89.68 \% & 85.11 \% & 70.35 \%\\
Pedestrian& 74.05 \% & 61.17 \% & 57.15 \%\\
Cyclist& 67.06 \% & 51.85 \% & 46.67 \%\\
\hline
\end{tabular}
\end{center}
\caption{Evaluation results on KITTI object detection dataset.}
\label{tab:kitti}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|l|l|}
\hline
Method & Reasonable\\
\hline\hline
CG & 13.06\%\\
CG+GglNet & 8.64\%\\
CG+ResNet & 8.38\%\\
CG+GglNet+ResNet+Fusion net (F-DNN2) & 8.12\%\\
CG+GglNet+ResNet+Fusion net+SS (F-DNN2+SS)& 7.67\%\\
\hline
\end{tabular}
\end{center}
\caption{Ablation study of our system.}
\label{tab:netfusion}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Method & Hard-label & Soft-label\\
\hline\hline
CG + ResNet & 8.97\% & 8.38\%\\
CG + GglNet & 9.41\% & 8.64\%\\
CG + GglNet + ResNet & 8.65\% & 8.12\%\\
\hline
\end{tabular}
\end{center}
\caption{Effectiveness of the soft-label method compared to the conventional hard-label method on Caltech Pedestrian dataset using the reasonable setting.}
\label{tab:slhl}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|l|l|}
\hline
Method & Speed on TITAN X \\
&(seconds per image)\\
\hline\hline
CompACT-Deep & 0.5\\
SAF R-CNN & 0.59\\
F-DNN2 & \textbf{0.3}\\
F-DNN2 (Reasonable) & \textbf{0.16} \\
CG+GglNet (Reasonable) & \textbf{0.11} \\
CG+SqueezeNet (Reasonable) & \textbf{0.09} \\
F-DNN2+SS & 2.48\\
\hline
\end{tabular}
\end{center}
\caption{A comparison of speed among the state-of-the-art models.}
\label{tab:speed}
\end{table}
\begin{table*}[h!]
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
Method & Reasonable & All & Far & Medium & Near & Occ. none & Occ. partial & Occ. heavy\\
\hline\hline
CG+GglNet & 8.64\% & 51.59\% & 76.87\% & 37.75\% & 1.72\% & 7.18\% & 18.05\% & 41.19\%\\
\hline
CG+ResNet & 8.38\% & 49.58\% & 74.60\% & 34.88\% & 1.70\% & 6.94\% & 19.26\% & 42.71\%\\
\hline
\end{tabular}
\end{center}
\caption{Breakdown comparisons between CG+GglNet and CG+ResNet on Caltech Pedestrian dataset.}
\label{tab:gglres}
\end{table*}
\subsection{Result analysis}
\subsubsection{An ablation study: effectiveness of the network fusion technique}
In this subsection, we analyze the performance improvements step by step, from the candidate generator (CG) to the final system. The L-AMR is $13.06\%$ when using the candidate generator alone, due to the large number of false positives. Fusing the candidate generator with GoogLeNet improves the performance to $8.64\%$; fusing it with ResNet-50 improves the L-AMR to $8.38\%$. Furthermore, fusing the candidate generator with both GoogLeNet and ResNet-50 using our proposed fusion net achieves the lowest L-AMR so far, at $8.12\%$. Finally, fusing the semantic segmentation network into our system yields the best performance, at $7.67\%$. This analysis demonstrates the capability of the network fusion framework and the advantages of ensemble learning. The results of the ablation study are given in Table~\ref{tab:netfusion}.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\linewidth]{visualizations.png}
\end{center}
\caption{Detection comparisons on five challenging pedestrian detection scenarios. Each row represents one challenging scenario and the four columns show the bounding boxes from the ground truth, RPN+BF, F-DNN, and F-DNN2, respectively.}
\label{fig:visualize}
\end{figure*}
\subsubsection{GoogLeNet vs. ResNet-50}
We explore how each part of the classification system contributes to our final results. The breakdown performance comparisons over all evaluation settings on the Caltech Pedestrian dataset, between fusing with GoogLeNet alone and fusing with ResNet-50 alone, are given in Table~\ref{tab:gglres}. From the results we see that GoogLeNet works better on partially/heavily occluded pedestrians, while ResNet-50 works better on non-occluded pedestrians. Analyzing the weights learned in Equation~\eqref{eq:10}, we find that the weight for GoogLeNet is $1.11$ and the weight for ResNet-50 is $2.22$, which means that our fusion network values ResNet-50 more than GoogLeNet. This is reasonable, since there are more non-occluded pedestrians in the training data.
\subsubsection{Soft-label method versus hard-label method}
To test how effectively the soft-label method improves performance, we compare it with the conventional hard-label method on the Caltech Pedestrian dataset. Since we use the overlap ratio between the candidate bounding box and the ground-truth annotation to assign labels, the soft-label method provides not only information about the existence of a pedestrian in each candidate's bounding box, but also about how much of the bounding box belongs to the pedestrian. This is particularly beneficial when the overlap ratio is around $0.5$: e.g., it is risky to directly assign a hard label of $1$ or $0$ to a bounding box with an overlap ratio of $0.49$ or $0.51$. Performance comparisons between the hard-label and soft-label methods are given in Table~\ref{tab:slhl}.
\subsubsection{Results visualization on challenging scenarios and failure cases}
We visualize the detection results generated by our system, compared with previous state-of-the-art systems, on several challenging scenarios: far pedestrians, crowded scenes, occluded pedestrians, and pedestrians overlapping each other. Figure~\ref{fig:visualize} visualizes detection results on five challenging scenarios; each row represents one scenario and the four columns show the bounding boxes from the ground truth, RPN+BF, F-DNN, and F-DNN2, respectively. The visualizations show that our model is more robust and accurate in various challenging cases.
\subsubsection{Speed analysis}
We use one NVIDIA TITAN X GPU to analyze the processing speed of each component and of the overall architecture of our network. There are four main components: the candidate generator, GoogLeNet, ResNet-50, and the semantic segmentation network. Since the candidate generator has a fully convolutional architecture, which removes the fully connected layers of the original VGG net, it takes $0.06\,$s to process one image. To test the processing time of the classification system, we use two settings: the first test runs on all pedestrian candidates; the second runs only on candidates taller than $40$ pixels, which matches the 'Reasonable' evaluation setting. Since the number of pedestrian candidates varies from image to image, we report the mean processing time over all images. Running the candidate generator followed by GoogLeNet and ResNet-50 in parallel on one GPU, the speed of the classification system is $0.3\,$s and $0.16\,$s per image for the two tests, respectively. Towards real-time pedestrian detection, we also fuse the candidate generator with SqueezeNet~\cite{SqueezeNet}: the processing time drops to $0.09\,$s per image, at the cost of an L-AMR of $10.8\%$. Processing the semantic segmentation network in parallel with the pedestrian detection system and fusing them together, the processing time per image of our final model is $2.48\,$s. The speed comparisons of our models and other methods are given in Table~\ref{tab:speed}.
\section{Conclusion and Discussion}
We present an effective solution to the pedestrian detection problem in this paper. The proposed system consists of two parallel subsystems: the main pedestrian detection system, which generates all detections, and the semantic segmentation system, which helps refine the results. The pedestrian detection system further consists of a pedestrian candidate generator and a classification system. To provide more bounding box information to the classification networks, we propose a new soft-label method that assigns floating-point labels to all classes. We use the idea of ensemble learning to design the classification system, and we propose a soft-rejection network fusion methodology to combine the opinions of all classification networks and the semantic segmentation network with that of the candidate generator network.
The performance of our system is evaluated on four popular pedestrian detection datasets: the Caltech Pedestrian dataset, the INRIA dataset, the ETH dataset, and the KITTI dataset. We achieve state-of-the-art performance on the first three datasets and comparable results on the KITTI dataset. Our system also runs faster than other state-of-the-art methods under the same experimental settings. The results and analysis show that our system detects pedestrians and other object classes accurately, efficiently, and robustly under various challenging scenarios.
\bibliographystyle{ieeetran}
|
1,116,691,500,962 | arxiv |
\section{Introduction}
Due to proximity effects, a hybrid device made of a superconductor coupled to a mesoscopic normal conductor allows the study of a wide range of quantum phenomena. In the Coulomb blockade regime these include, in particular, supercurrent transport carried by Cooper pairs~\cite{glaz89,bas99,roz01,Doh05,dam06,Jar06}, coherent electron transport in terms of multiple Andreev reflections~\cite{Sch01,Bui03,andersen11,deon11}, as well as quasiparticle transport~\cite{levy97,GoLo04,dam06,eich07,grove09,dirk09,fran10,Pfaller13,Gaass14}. Andreev reflections lead to subgap structures with steps at bias voltages $2\Delta/ne$ ($n\in \mathbb{N}^+$) in the current-voltage characteristics~\cite{levy97,joh99,Bui03,eich07,andersen11,deon11,gunel12}, which are smeared out with increasing temperature~\cite{eich07,deon11}. In contrast, temperature favors quasiparticle transport, as it increases the probability of thermal activation of quasiparticles across the gap. The emergence of a zero-bias peak inside the Coulomb diamond with increasing temperature~\cite{eich07,deon11} was explained in terms of resonant tunneling~\cite{levy97} of thermal quasiparticles. Recently, the additional possibility of observing transport features due to sequential tunneling of thermally excited quasiparticles was theoretically proposed in Ref.~\cite{Pfaller13} and experimentally confirmed in Ref.~\cite{Gaass14}. Such processes lead to thermal resonance lines within the Coulomb blockade region, parallel to the Coulomb diamond edges. Cotunneling processes due to quasiparticles, however, have so far only been reported for bias voltages above the superconducting energy gap~\cite{grove09,dirk09}. In this work we present measurements in complete agreement with theoretical predictions of thermally excited quasiparticle transport in the cotunneling regime.
Cotunneling is a transport process in which the quantum dot (QD) is either excited (inelastic cotunneling) or left in its initial state (elastic cotunneling) by means of tunneling events through an intermediate virtual state. Thus, in the inelastic case a bias threshold corresponding to the excitation energy is required to enable charge transfer~\cite{averin90}. In contrast to sequential tunneling processes, cotunneling in lowest order is expected to be independent of the gate voltage.
We report on elastic and inelastic cotunneling spectroscopy of individual carbon nanotube (CNT) devices coupled to Nb superconducting leads. In the low-temperature limit, transport theory predicts superconductivity-enhanced transport features for a CNT quantum dot at bias voltages $\pm2\Delta/e$ and $\pm(2\Delta+\delta_m)/e$, due to elastic and inelastic cotunneling of quasiparticles, respectively~\cite{grove09}. Here $\{\delta_m\}$ is the set of excitation energies of the CNT from an $N$-particle ground state. With increasing temperature, we predict and observe the appearance of elastic and inelastic cotunneling features in the subgap region (i.e., for bias voltage amplitudes smaller than $2\Delta/e$) due to thermally excited quasiparticles. In particular, the emergence of a zero-bias peak, corresponding to the thermal replica of the elastic cotunneling resonance, is expected. Our theoretical predictions are in good quantitative agreement with our experimental findings.
Individual single-wall carbon nanotubes were grown on a highly p-doped Si/SiO$_2$ substrate by chemical vapor deposition~\cite{kong98}. The substrate, acting as a global back gate, is used to tune the electron occupation of the CNT. The source and drain electrodes were patterned on an individual single-wall carbon nanotube by standard electron beam lithography and lift-off techniques. Here we report on measurements of two distinct samples. For sample \textsf{A}, Fig.~\ref{fig:sampleA}, electrodes made of $3\,$nm Pd and $45\,$nm sputtered Nb with an electrode spacing of the order of $300\,$nm were used (see Fig.~\ref{fig:sampleA}(a)); for sample \textsf{B}, Fig.~\ref{fig:amit} in the appendix, a metalization of $3\,$nm Pd and $60\,$nm sputtered Nb with a contact spacing of the order of $430\,$nm was applied (see Fig.~\ref{fig:amit}(a)). In order to perform four-point measurements, and as a resistive on-chip element, each superconducting electrode was connected to two leads made of AuPd to damp oscillations at the plasma frequency of the Josephson junction~\cite{pall08,mart89}. Low-temperature electrical transport measurements were performed inside a $^3$He/$^4$He dilution refrigerator with a base temperature of $25\,$mK.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{Fig1}
\caption{(a) Scanning electron micrograph of device \textsf{A}. The gray line indicates the approximate location of the nanotube (not itself visible). (b) Differential conductance at $T=24\,$mK as a function of bias voltage and back gate voltage. (c),(d) Zoom into the Coulomb blockade region of the third Coulomb diamond for temperatures $T=300\,$mK and $T=1700\,$mK, respectively. The dashed white box marks the range of gate voltages over which the differential conductance is averaged to obtain the curve shown on the left side of each panel.
}\label{fig:sampleA}
\end{figure}
In both samples we observe regular CB diamonds over a large gate voltage range. Signatures of four-fold periodicity are observed in the measured gate range only for sample \textsf{A}. Figs.~\ref{fig:sampleA}(b) and \ref{fig:amit}(b) show high resolution measurements for a selected gate range with the contacts in the superconducting state at temperatures $T=25\,$mK and $30\,$mK, respectively. In both samples lines of high conductivity are observed well inside the Coulomb diamonds; all these lines are horizontal, i.e., independent of gate voltage. To clearly identify them we restrict the gray scale for the differential conductance below the maximum conductance. Fig.~\ref{fig:sampleA}(c) shows a zoom into the region inside the diamond denoted \textcircled{\bf\footnotesize{3}} in Fig.~\ref{fig:sampleA}(b)~\bibnote{The discrepancy in gate voltage range between Fig.~\ref{fig:sampleA}(b) and Figs.~\ref{fig:sampleA}(c),(d) is caused by a long-time-scale drift of all Coulomb blockade features. Sequential tunneling features of this data set have already been discussed in Ref.~\cite{Gaass14}}. Horizontal lines are clearly visible and indicated by arrows next to the conductance curve.
One set of lines occurs at bias voltage $V_\textsuperscript{SD}\sim\pm 0.52\,$mV (gray arrows). We ascribe it to elastic cotunneling processes at $V_\textsuperscript{SD}=\pm2\Delta/e$. We extract $\Delta\sim0.26\,$meV for our superconducting film, compared to the expected value of $\Delta=1.5\,$meV for bulk Nb. A mismatch of about a factor of five has already been reported in similar Nb-based devices~\cite{kulm72,may72,grove09,Kum14}. The reason for the gap reduction is still an open question; possible explanations are the formation of niobium oxide, the thin composite of Nb and Pd, or the contamination of the lower Nb interface. For the deposited Nb/Pd strip a critical temperature of about $8\,$K was measured, and the resonant features remain present up to temperatures of about 4--5$\,$K. The transition temperature of the thin film is thus comparable to that of bulk Nb, in contrast to the observed small value of $\Delta$ and the BCS relation $\Delta=1.76\,k_BT_c$. The inelastic part of the cotunneling spectrum reveals excitations of the CNT quantum dot. Our data show a broad inelastic feature at a distance $\delta=1.3\,$meV from the elastic line (black arrows). From additional stability diagrams for sample \textsf{A}, recorded at higher temperature and finite magnetic field to suppress superconductivity, we extract a charging energy $E_\textsuperscript{C} \simeq 15\,$meV, implying $E_\textsuperscript{C}/\Delta\sim50$ for sample \textsf{A}. Similarly, from the elastic and inelastic lines we extract $\Delta\sim0.23\,$meV and $\delta\sim0.11\,$meV for sample \textsf{B}. From additional stability diagrams in a regime in which superconductivity is largely suppressed, we identify a smaller charging energy $E_\textsuperscript{C}\simeq3.2\,$meV. The two samples have roughly the same superconducting gap $\Delta$ but differ in the charging energy $E_\textsuperscript{C}$, leading to different transport regimes. In both samples charging effects and the small coupling strength $\hbar\Gamma<\Delta$ suppress Andreev processes, such that the current is carried by quasiparticles. In sample \textsf{A} the large charging energy further suppresses multiple quasiparticle processes, so transport is dominated by sequential and cotunneling events. For sample \textsf{B} a simple description in terms of resonant tunneling of quasiparticles~\cite{levy97} may be conceived.
As the temperature is increased, new horizontal lines are observed. In sample \textsf{A} the novel lines arise for temperatures above $T\approx 600\,$mK, at zero bias and at bias voltages $V_\textsuperscript{SD}=\pm\delta/e$. Fig.~\ref{fig:sampleA}(d) shows the same gate region as Fig.~\ref{fig:sampleA}(c), but now at temperature $T=1.7\,$K. The additional lines, marked by stars, become more and more pronounced with increasing temperature. Andreev reflections do not provide an explanation for the thermal behavior of such transition lines~\cite{Sch01,Bui03,Doh05,Jar06,Kum14}. The Kondo effect can also be ruled out as the origin of the resonant peak at zero bias, as it shows the opposite thermal behavior~\cite{bui02,sia04,Cle06,Kim13,Lee14,Chang13}.
A zero-bias conductance peak is also observed in sample \textsf{B}, as shown in Fig.~\ref{fig:amit}(c) in the appendix. The bias trace is taken in the middle of the Coulomb blockade valley at gate voltage $V_\textsuperscript{gate}\approx-11.71\,$V. Upon increasing the temperature, one observes a rising conductance peak at zero bias, and pairs of symmetrically displaced elastic and inelastic cotunneling peaks at finite bias. The feature at bias voltage $V_\textsuperscript{SD}=\pm\Delta/e$ and the thermal zero-bias peak resemble data already reported in Refs.~\cite{eich07,deon11}. In analogous fashion, we expect them to be reproducible within the simple resonant model of Ref.~\cite{levy97}. The more complex behavior of sample \textsf{A}, where several cotunneling and sequential lines are observed within the CB diamond, clearly goes beyond the capabilities of the simple resonant picture, which excludes Coulomb interaction. As shown below, a full transport theory including all tunneling processes up to second order in the coupling strength $\hbar\Gamma$ to the leads captures the experimental behavior in great detail.
\section{Transport theory for S-CNT-S junctions}
To understand the experimental observations, we consider a minimal model for a CNT quantum dot connected to two BCS-type superconducting leads. For the back-gated CNT we consider a single longitudinal mode incorporating orbital, $m$, and spin, $\sigma$, degrees of freedom. Coulomb interaction effects are considered within a constant interaction model, with $U$ being the charging energy. The quadruplet CNT Hamiltonian thus reads
\begin{equation}
\hat H_\textsuperscript{CNT}=\sum_{m\sigma} E_{m\sigma}\hat d^\dagger_{m\sigma}\hat d_{m\sigma}+\frac{U}{2}\hat N(\hat N-1)-\alpha eV_\textsuperscript{gate}\hat N,
\end{equation}
where $\hat N$ is the charge number operator of the dot and $\alpha$ a conversion factor for the gate voltage. Finally, $E_{m\sigma}=\epsilon_d+\frac{1}{2}m\sigma\delta$ (with $m=\pm1$, $\sigma=\pm1$), where $\delta$ accounts for the breaking of the fourfold degeneracy of a longitudinal mode with energy $\epsilon_d$ due to spin-orbit interaction and valley mixing~\cite{laird14}.
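The many-body spectrum entering, e.g., Fig.~\ref{fig:thermal}(b) follows directly from this Hamiltonian. A minimal sketch (single longitudinal mode only; \texttt{alpha\_eVg} stands for the product $\alpha eV_\textsuperscript{gate}$) is:
\begin{verbatim}
import itertools

def cnt_spectrum(eps_d, delta, U, alpha_eVg):
    # single-particle levels E_{m,sigma} = eps_d + (1/2) m sigma delta
    levels = [eps_d + 0.5 * m * s * delta
              for m in (+1, -1) for s in (+1, -1)]
    spec = {}
    for occ in itertools.product((0, 1), repeat=4):
        N = sum(occ)
        E = (sum(e * n for e, n in zip(levels, occ))
             + 0.5 * U * N * (N - 1) - alpha_eVg * N)
        spec.setdefault(N, []).append(E)
    return {N: sorted(Es) for N, Es in spec.items()}
\end{verbatim}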
The BCS superconducting leads are described by a conventional pair-interaction Hamiltonian on a mean-field level with respect to an offset energy $E_l^0$:
\begin{equation}
\hat H_l=E_l^0+\sum_{\vec k\sigma}E_{l\vec k}\hat\gamma^\dagger_{l\vec k\sigma}\hat\gamma_{l\vec k\sigma}+\mu_l\hat N_l.
\end{equation}
It can be obtained by means of a particle conserving Bogoliubov-Valatin transformation~\cite{Bog58,Val58}
\begin{eqnarray}
\hat c^\dagger_{l\vec k\sigma}&=&u_{l\vec k}\hat\gamma^\dagger_{l\vec k\sigma}+\sigma v^*_{l\vec k}\hat S^\dagger_l\hat\gamma_{l-\vec k\bar\sigma},\nonumber \\*
\hat c_{l\vec k\sigma}&=&u^*_{l\vec k}\hat\gamma_{l\vec k\sigma}+\sigma v_{l\vec k}\hat S_l\hat\gamma^\dagger_{l-\vec k\bar\sigma},
\end{eqnarray}
for the leads' electron creation and annihilation operators $\hat c^\dagger_{l\vec k\sigma}$ and $\hat c_{l\vec k\sigma}$, respectively. The electron operators are represented in terms of quasiparticle operators $\hat\gamma^{(\dagger)}_{l\vec k\sigma}$ and of Cooper pair operators $\hat S^{(\dagger)}_l$ with the corresponding prefactors $u^{(*)}_{l\vec k}$ and $v^{(*)}_{l\vec k}$~\cite{Jos62,Bar62}. Furthermore, the quasiparticles have an excitation energy $E_{l\vec k}=\sqrt{(\epsilon_{\vec k}-\mu_l)^2+\Delta^2}$ measured with respect to the electrochemical potential $\mu_l$. Finally, the BCS gap is defined by
\begin{eqnarray}
\Delta\equiv |V|\sum_{\vec k}\left\langle \hat S^\dagger_l\hat c_{l-\vec k\downarrow}\hat c_{l\vec k \uparrow}\right\rangle\label{gap},
\end{eqnarray}
where $|V|$ characterizes the interaction potential between a pair of electrons.
The connection with the superconducting leads is realized by a single-particle tunneling Hamiltonian $\hat H_{T,l}=T_l\sum_{\vec k \sigma m}\left(\hat d^\dagger_{m\sigma}\hat c_{l\vec k\sigma}+h.c.\right)$ where, for the sake of simplicity, the tunnel coefficient $T_l$ of lead $l$ is considered to be spin, wave vector and valley independent. The tunnel coupling strength can then be defined as $\hbar\Gamma_l\equiv2\pi|T_l|^2\sum_{\vec k}\delta(\omega-\epsilon_{\vec k})$, which is assumed to be energy independent.
We describe the time evolution of the system with the generalized master equation~\cite{Blu12}:
\begin{eqnarray}
\dot{\hat\rho}_\textsuperscript{red}(t)=-\frac{i}{\hbar}\left[\hat H_\textsuperscript{CNT},\hat\rho_\textsuperscript{red}(t)\right]+\int^t_{t_0}d\tau\hat K(t,\tau)\hat\rho_\textsuperscript{red}(\tau),
\end{eqnarray}
for the dynamics of the reduced density operator $\hat\rho_\textsuperscript{red}$. This (still exact) equation allows a systematic perturbation expansion of the kernel superoperator $\hat K(t,\tau)$ in powers of the coupling strength $\hbar\Gamma$~\cite{weymann05,Koller10}. In the steady state limit and charge conserved regime the master equation can be simplified further by applying the Laplace transform $f(\lambda)\equiv\int^\infty_0 d\tau'\,e^{-\lambda \tau'}f(\tau')$ and its properties:
\begin{eqnarray}
0=-\frac{i}{\hbar}\sum_{\chi_i\chi'_i}\delta_{\chi_i\chi_f}\delta_{\chi'_i\chi'_f}(E_{\chi_i}-E_{\chi'_i})\rho_{\chi_i\chi'_i}+\sum_{\chi_i\chi'_i}K^{\chi_i\chi'_i}_{\chi_f\chi'_f}\rho_{\chi_i\chi'_i},
\end{eqnarray}
with $K^{\chi_i\chi'_i}_{\chi_f\chi'_f}\equiv \langle \chi_f|\hat K(\lambda=0^+)[|\chi_i\rangle\langle\chi'_i|]|\chi'_f\rangle$ and $\rho_{\chi_i\chi'_i}\equiv\langle\chi_i|\hat\rho_\textsuperscript{red}(t\to\infty)|\chi'_i\rangle$. The matrix elements are evaluated in the basis $\{|\chi\rangle\}$ of the eigenstates of the Hamiltonian $\hat H_\textsuperscript{CNT}$. Noticeably, each term in the perturbation expansion of $K^{\chi_i\chi'_i}_{\chi_f\chi'_f}$ can be represented in a diagrammatic language in which simple rules exist to directly obtain the corresponding analytical expression. In Ref. \cite{gov08} these rules are derived and discussed in detail for the case of hybrid S-QD-S nanostructures.
An expression for the steady state current in terms of a perturbative expansion can be obtained in the same way. In particular, the net current of lead $l$ is described by
\begin{eqnarray}
I_l(t\to\infty)=e\sum_{\chi_f}\sum_{\chi_i\chi'_i}(K_{I_l})^{\chi_i\chi'_i}_{\chi_f\chi'_f}\rho_{\chi_i\chi'_i}.
\end{eqnarray}
In the charge conserved regime the reduced density matrix $\rho_{\chi_i\chi'_i}$ is block diagonal (see appendix D). Thus the kernel element $K^{\chi_i\chi'_i}_{\chi_f\chi'_f}$ up to second order also represents the physical rate for processes transferring 0, 1, or 2 charge(s), depending on the charge difference between the states $|\chi_i\rangle$ and $|\chi_f\rangle$.
The problem of non-equilibrium hybrid superconducting-quantum dot junctions with an applied bias voltage is intrinsically time dependent. This can lead to time-dependent harmonic contributions to the stationary current associated to Andreev tunneling~\cite{andersen11}. However, in the charge conserved regime considered in this work, these harmonics are absent, and hence $\dot{\hat\rho}_\textsuperscript{red}(t)\to 0$ at long times. This is because the expectation values $\langle\hat c^\dagger_{l\vec k\sigma}(t)\hat c^\dagger_{l'\vec k'\sigma'}(\tau)\rangle$ and $\langle\hat c_{l\vec k\sigma}(t)\hat c_{l'\vec k'\sigma'}(\tau)\rangle$ vanish since they break the conservation of total charge. Let us emphasize that, according to Eq.~(\ref{gap}), we still have a finite superconducting gap and superconducting features (see appendix C for a detailed discussion).
Thermally assisted quasiparticle transport has so far only been discussed in the context of sequential~\cite{Pfaller13,Gaass14} and resonant~\cite{deon11,eich07} tunneling. The energy distribution of the fermionic quasiparticles is governed by the Fermi function together with the BCS density of states (DOS). At high enough temperatures the Fermi function is thermally smeared, such that quasiparticles also occupy the high-energy branch of the DOS and can thus contribute an additional transport channel.
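The underlying exponential activation can be illustrated numerically: the weight of thermally excited quasiparticles, $\int_\Delta^\infty d\omega\, D(\omega)f(\omega)$, scales as $\exp[-\Delta/(k_BT)]$ for $k_BT\ll\Delta$. A minimal sketch (energies in meV, $\mu=0$):
\begin{verbatim}
import numpy as np

def thermal_qp_weight(Delta, kT, n=200001):
    # integral of (BCS DOS) x (Fermi function) above the gap
    w = np.linspace(Delta * (1 + 1e-6), Delta + 40 * kT, n)
    dos = w / np.sqrt(w**2 - Delta**2)
    return np.trapz(dos / (np.exp(w / kT) + 1.0), w)

# e.g. Delta = 0.26 meV, kB = 8.617e-2 meV/K:
# compare thermal_qp_weight(0.26, 8.617e-2 * 1.7)
# with    thermal_qp_weight(0.26, 8.617e-2 * 0.6)
\end{verbatim}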
\begin{figure}[tb]
\centering
\includegraphics[width=.9\columnwidth]{cot}
\caption{a) Theoretically expected transition lines in the stability diagram of a CNT for one specific Coulomb diamond. Solid and dashed blue lines correspond to standard sequential tunneling and cotunneling processes, respectively. The thermal replicas of these transition lines are shown as solid and dashed orange lines. b) Many-body spectrum of the 2, 3 and 4 electron subspace for a gate voltage corresponding to the center of the Coulomb diamond \textcircled{\bf\scriptsize{3}}. The tunneling events contributing to the elastic cotunneling lines are shown.}\label{fig:thermal}
\end{figure}
In the sequential tunneling regime, this gives rise to thermal replicas of the sequential tunneling transitions, displaced by $\pm4\Delta/e$ in bias voltage (solid orange lines in Fig.~\ref{fig:thermal}(a)). When cotunneling processes are also taken into account, the number of expected thermal lines increases considerably, as sketched in Fig.~\ref{fig:thermal}(a). In the figure we restrict ourselves to the exemplary Coulomb diamond denoted \textcircled{\bf\footnotesize{3}}. Gate-dependent lines, induced by sequential processes, can be clearly distinguished from gate-independent, cotunneling-induced lines. Blue solid and dashed lines are transitions due to ``standard'' sequential tunneling and cotunneling processes, respectively, i.e., contributions that are also present at low temperatures. Orange solid and dashed lines, in contrast, are due to thermally excited quasiparticles; hence, they are present only at sufficiently high temperatures.
As already mentioned, standard elastic cotunneling lines are expected at bias $V_\textsuperscript{SD}=\pm2\Delta/e$, while the inelastic cotunneling features occur at bias $V_\textsuperscript{SD}=\pm(2\Delta+\delta)/e$, reflecting the excitation energy $\delta$. Fig.~\ref{fig:thermal}(b) visualizes the elastic cotunneling events in the many-body spectrum, where the 3-particle ground state is used as the reference energy. At the gate voltage corresponding to the center of diamond \textcircled{\bf\footnotesize{3}}, the 2-particle and the 4-particle ground states have the same energy. Thus, transitions from the 3-particle ground state to the 2-particle ground state and back have the same probability as those from the 3-particle ground state to the 4-particle ground state and back, leading to elastic cotunneling. As shown below, thermal excitation of the lead quasiparticles yields thermal replicas at a bias $2\Delta/e$ lower than the standard cotunneling features. We thus predict, in particular, the emergence of a cotunneling line at zero bias, being the thermal replica of the standard elastic lines at $\pm2\Delta/e$.
One exemplary contribution to elastic cotunneling in the diagrammatic language is shown in Fig.~\ref{fig:cotunnel}(a).
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{fig3}
\caption{(a) Exemplary diagrammatic representation of one main contribution to elastic cotunneling. (b) Energy-DOS diagram explaining the transport mechanism for thermally assisted elastic cotunneling. The time ordering of the tunnel processes is the same as in diagram (a). A measurable elastic cotunneling current is observed if thermally occupied quasiparticle states in the source are simultaneously aligned with empty quasiparticle states in the drain. (c) Integrand of Eq.~(\ref{eqcot}) for the parameter regime of panel (b). Blue corresponds to the low-temperature regime $T\ll\Delta/k_B$, where the product of Fermi functions and densities of states is finite. Orange represents the area where the product has to be taken into account at higher temperatures $T<\Delta/k_B$. }\label{fig:cotunnel}
\end{figure}
Using the diagrammatic rules~\cite{gov08,Koller10} the analytic expression is given by the kernel element
\begin{eqnarray}
(\hat K_\textsuperscript{EC})^{\chi\chi}_{\chi\chi}&\equiv&-i\hbar\Gamma_\textsuperscript{S}\Gamma_\textsuperscript{D}\sum_\nu\int \frac{d\omega}{2\pi} \frac{d\omega'}{2\pi} D_\textsuperscript{S}(\omega,\Delta)D_\textsuperscript{D}(\omega',\Delta) \nonumber \\*
&&\times\frac{f_\textsuperscript{S}(\omega)(1-f_\textsuperscript{D}(\omega'))}{(-\omega+\delta E+i0^+)(\omega'-\omega+i0^+)(\omega'-\delta E+i0^+)} \nonumber \\*
&\equiv&-\frac{i}{\hbar}\Gamma_\textsuperscript{S}\Gamma_\textsuperscript{D}\sum_\nu\int \frac{d\omega}{2\pi} \frac{d\omega'}{2\pi}\,I(\omega,\omega'),\hspace{4.8em}\label{eqcot}
\end{eqnarray}
including $f_l(\omega)\equiv 1/[\exp((\omega-\mu_l)/k_BT)+1]$, the DOS $D_l(\omega,\Delta)\equiv\sqrt{\frac{(\omega-\mu_l)^2}{(\omega-\mu_l)^2-\Delta^2}}$ $\times\Theta(|\omega-\mu_l|-\Delta)$, and the energy difference $\delta E=E_\nu-E_\chi$ between the energy $E_\nu$ of the virtual dot state $|\nu\rangle$ and $E_\chi$ of the dot state $|\chi\rangle$. Notice that in the example of Fig.~\ref{fig:cotunnel}(a) the state $|\nu\rangle$ has one unit of charge more than state $|\chi\rangle$. The charges entering and leaving the dot carry the energies $\omega$ and $\omega'$, respectively. An analysis of the double integral shows that, at low temperatures, it gives one pronounced contribution only in the case
$V_\textsuperscript{SD}\ge2\Delta/e$ (see appendix E). The bias threshold $V_\textsuperscript{SD}=2\Delta/e$ corresponds to the resonant case in which the highest occupied quasiparticle states in the source are aligned with the lowest empty quasiparticle states in the drain, such that elastic cotunneling onto and out of the CNT is possible. At higher temperatures, however, thermally excited quasiparticles enable cotunneling transport also at zero bias. This mechanism is visualized in Fig.~\ref{fig:cotunnel}(b), where the numbers \textsf{1}, \textsf{2}, \textsf{3}, \textsf{4} correspond to the tunneling events occurring at times $\tau\equiv t_1\le t_2 \le t_3 \le t_4\equiv t$ shown in Fig.~\ref{fig:cotunnel}(a). As seen in Fig.~\ref{fig:cotunnel}(b), if the thermally occupied quasiparticle states of the source are in resonance with the unoccupied quasiparticle states of the drain, elastic cotunneling through the dot can occur even at zero bias. The tunneling rate $\Gamma_\textsuperscript{EC}^{\chi\to\chi}\equiv2\textnormal{Re}(\hat K_\textsuperscript{EC})^{\chi\chi}_{\chi\chi}$ for such a process is obtained by adding the Hermitian conjugate to the expression in Eq.~(\ref{eqcot}).
Mathematically, the condition for the onset of elastic cotunneling can be obtained from an analysis of the integrand $I(\omega,\omega')$ of Eq.~(\ref{eqcot}). This integrand is schematically depicted in Fig.~\ref{fig:cotunnel}(c) for the case of zero bias and $\Delta+\mu_\textsuperscript{S/D}\ll\delta E$, such that the system is in the Coulomb blockade regime and no sequential transport occurs. Due to the product $D_\textsuperscript{S} D_\textsuperscript{D} f_\textsuperscript{S}(1-f_\textsuperscript{D})$, the integrand $I(\omega,\omega')$ in Eq.~(\ref{eqcot}) is, at low temperatures, non-vanishing only in the blue region of the $\omega$-$\omega'$ plane depicted in Fig.~\ref{fig:cotunnel}(c). Upon increasing temperature, the product is also non-vanishing along the orange stripes and on the orange spot.
In Fig.~\ref{fig:cotunnel}(c) the roots of the denominators are represented by dashed lines. It is evident that the integral of $\hat K_\textsuperscript{EC}$ has a large magnitude only when, upon varying the bias voltage, the root line $\omega=\omega'$ meets the colored regions. Thus, at low temperatures and $V_\textsuperscript{SD}=0$ no transport is possible, as the corner of the blue region and the $\omega=\omega'$ line cannot touch. Upon increasing temperature, transport becomes accessible through the orange regions at $\omega'=\mu_\textsuperscript{D}-\Delta$ and $\omega=\mu_\textsuperscript{S}-\Delta$, see the scheme in Fig.~\ref{fig:cotunnel}(c). This corresponds to the gate-independent resonance at zero bias. In this simple resonance picture we obtain the elastic forward cotunneling rate (see appendix E) in the middle of a Coulomb diamond by a first approximation of the integrand in Eq.~(\ref{eqcot}):
\begin{eqnarray}
\Gamma_\textsuperscript{EC}^{\chi\to\chi}&=&N_\chi\hbar\left(\frac{2}{U}\right)^2 \Gamma_S \Gamma_D \int \frac{d\omega}{2\pi} D(\omega,\Delta)D(\omega+eV_\textsuperscript{SD},\Delta) \nonumber \\*
&&\times f(\omega)[1-f(\omega+eV_\textsuperscript{SD})],\label{cotton}
\end{eqnarray}
where we directly pointed out the bias dependence of the rate and introduced a degeneracy factor $N_\chi$ that depends on the state $|\chi\rangle$. Including also the backward process, the linear conductance is then approximated by
\begin{eqnarray}
G=\frac{dI}{dV_\textsuperscript{SD}}\Bigg|_{V_\textsuperscript{SD}=0}\approx N_\chi\frac{e^2}{\hbar}\left(\frac{2}{U}\right)^2 \frac{\hbar^2\Gamma_S \Gamma_D}{k_BT} \int \frac{d\omega}{2\pi} D^2(\omega,\Delta)f(\omega)f(-\omega).
\end{eqnarray}
This expression already shows a Boltzmann-like behavior $\exp[-\Delta/(k_BT)]$ at low temperatures $T\ll\Delta/k_B$ and reproduces the normal-conducting result $G=N_\chi\frac{e^2}{h}\frac{\hbar^2\Gamma_S\Gamma_D}{U^2}$ in the limit $\Delta\ll k_BT$. In particular, the former asymptotic behavior indicates a transport mechanism based on thermal excitation.
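Both limits can be checked numerically; the sketch below evaluates the conductance formula above in units of $e^2/h$ (energies in meV), with a small Dynes-like broadening regularizing the DOS as introduced in the next section. The unit bookkeeping, with $\hbar\Gamma_\textsuperscript{S/D}$ passed in meV, is our own convention here.
\begin{verbatim}
import numpy as np

def linear_conductance(Delta, kT, U, hGammaS, hGammaD,
                       N_chi=1, gamma=5e-3):
    a = Delta + 60 * kT                # symmetric grid: f(-w) = f(w)[::-1]
    w = np.linspace(-a, a, 200001)
    z = (w + 1j * gamma) / Delta
    dos = np.abs(np.real(z / np.sqrt(z**2 - 1.0)))   # Dynes-regularized DOS
    f = 1.0 / (np.exp(w / kT) + 1.0)
    integral = np.trapz(dos**2 * f * f[::-1], w) / (2 * np.pi)
    return N_chi * 2 * np.pi * (2.0 / U)**2 * hGammaS * hGammaD / kT * integral
\end{verbatim}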
Analogously, subgap thermal replicas of the standard inelastic cotunneling lines are expected. We present a detailed analysis of the inelastic processes in appendix F and quote here the approximate result for the inelastic cotunneling rate
\begin{eqnarray}
\Gamma_\textsuperscript{EC}^{\chi\to\chi'}&=&N_\chi\hbar\left(\frac{2}{U}\right)^2 \Gamma_S \Gamma_D \int \frac{d\omega}{2\pi} D(\omega,\Delta)D(\omega-\delta+eV_\textsuperscript{SD},\Delta) \nonumber \\*
&&\times f(\omega)[1-f(\omega-\delta+eV_\textsuperscript{SD})],
\end{eqnarray}
similar to what was found in Ref.~\cite{grove09}.
\section{Comparison of theoretical and experimental predictions}
In the following we use the BCS gap $\Delta$, the excitation energy $\delta$, and the charging energy $E_\textsuperscript{C}$ extracted from the measured differential conductance plots to calculate the current through the CNT by means of the generalized master equation. Since the measured data revealed a relatively large critical temperature, we can assume a temperature-independent gap size in the considered temperature regime $T<T_c/2$. The calculations are performed by approximating the divergent DOS $D_l(\omega, \Delta)$ with a smoothed function~\bibnote{We replace the Heaviside function by a blurred step function, $\Theta(|\omega|-\Delta)\to\frac{1}{\exp(\gamma^{-1}(\omega+\Delta))+1}+\frac{1}{\exp(\gamma^{-1}(-\omega+\Delta))+1}$. Although $\gamma$ is introduced empirically in this work, it can be shown that higher-order processes involving quasiparticles lead to level broadening in the quantum dot, and thus also to a regularization of the divergence caused by the BCS density of states~\cite{levy97}, similar to that provided by $\gamma$ here.} controlled by an empirical parameter $\gamma$, similar to the Dynes parameter~\cite{Dynes78}. A good fit to the experimental data for sample \textsf{A} is obtained with $\gamma\approx5.0\,\mu$eV, a coupling strength $\hbar\Gamma=0.01\,$meV, and a conversion factor $\alpha=0.1$ for the gate voltage. The results of our transport calculations for sample \textsf{A} are shown in Figs.~\ref{fig:stab}(a)-(c) for temperature $T=1.7\,$K, such that $k_BT/\Delta=0.56$.
Fig.~\ref{fig:stab}(d) shows the corresponding experimental data for diamond \textcircled{\bf\footnotesize{3}}; a short analysis of diamond \textcircled{\bf\footnotesize{2}} is given in appendix B.
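A sketch of this smoothed DOS, with the blurred step function of the footnote (energies in meV), reads:
\begin{verbatim}
import numpy as np

def fermi_step(x):
    # 1/(exp(x)+1), written via tanh for numerical stability
    return 0.5 * (1.0 - np.tanh(0.5 * x))

def smooth_dos(w, Delta=0.26, gamma=5.0e-3):
    # Theta(|w| - Delta) replaced by the blurred double step
    step = (fermi_step((w + Delta) / gamma)
            + fermi_step((-w + Delta) / gamma))
    with np.errstate(invalid="ignore", divide="ignore"):
        bcs = np.abs(w) / np.sqrt(np.abs(w * w - Delta * Delta))
    return np.nan_to_num(bcs, posinf=0.0) * step
\end{verbatim}
On a discrete energy grid avoiding $\omega=\pm\Delta$, the remaining integrable square-root singularity stays finite.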
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{Fig4new}
\caption{(a) Calculated differential conductance of a CNT with level splitting $\delta=1.3\,$meV and charging energy $E_\textsuperscript{C}=15\,$meV. The temperature is $T=1.7\,$K and the BCS gap $\Delta=0.26\,$meV. The onset of inelastic and elastic cotunneling at $V_\textsuperscript{SD}=\pm(2\Delta+\delta)/e$ and $V_\textsuperscript{SD}=\pm2\Delta/e$, respectively, yields horizontal transition lines. Gate-independent features at bias voltages $V_\textsuperscript{SD}=\pm\delta/e$ and at zero bias can also be identified. (b) Right panel: Zoom into the right corner of diamond \textcircled{\bf\scriptsize{3}} indicated in (a). Left panel: Bias trace corresponding to the gate voltage marked by the dashed white line in the right panel. In the bias trace the peaks indicated by stars are due to thermally activated quasiparticles. (c) Calculated bias traces for different temperatures. The peaks marked by stars correspond to thermal replicas of the standard cotunneling processes. To compare with the experiment, we add a conductance offset of about $0.002\,e^2/h$ to our numerical data. (d) Equivalent experimental data for comparison. The bias-dependent background results from the gradual increase of the conductance in the vicinity of the diamond edges.}\label{fig:stab}
\end{figure}
In the bias and gate voltage range of Fig.~\ref{fig:stab}(a), pronounced sequential tunneling lines as well as elastic and inelastic cotunneling features are seen. For better resolution we restrict the gray scale of the differential conductance below the maximum value. In Fig.~\ref{fig:stab}(b) we focus on the Coulomb diamond denoted \textcircled{\bf\footnotesize{3}}. Beside the density plot we show the bias trace taken at the gate voltage marked by the white line, which demonstrates the good quantitative agreement with the experimental data of Fig.~\ref{fig:sampleA}(d). The standard cotunneling peaks (arrows) as well as their thermal replicas (stars) can be clearly recognized. The thermal behavior of the cotunneling features is illustrated in Figs.~\ref{fig:stab}(c),(d), where the calculated and measured differential conductance curves for different temperatures are presented. For the calculated curves we choose the same gate voltage as for the white dashed line in Fig.~\ref{fig:stab}(b). For the experimental data we averaged over the series of gate voltages marked by the box in Fig.~\ref{fig:sampleA}(d). In both cases the standard cotunneling peaks are almost temperature independent, whereas the thermal replicas at zero bias and at $V_\textsuperscript{SD}=\pm\delta/e$ rise with increasing temperature.
\section{Conclusions}
In summary, we report new cotunneling transport features of a CNT contacted by two superconducting Nb leads, based on thermally assisted quasiparticle tunneling. We observe thermal replicas of the elastic and inelastic cotunneling resonances with increasing temperature above $600\,$mK. These lead to an extra zero-bias peak and to an inelastic peak corresponding to the lowest excitation energy in the $dI/dV$ characteristics. To explain these non-equilibrium phenomena we derive a generalized master equation based on the reduced density matrix (RDM) approach in the charge-conserved regime, applicable to any intradot interaction and finite superconducting gap. Modeling the CNT with a low-energy interacting spectrum, we find remarkable agreement with the experimental results concerning the thermal behavior of the additional cotunneling peaks.
\ack
\input acknowledgement.tex
|
1,116,691,500,963 | arxiv | \section{Introduction}
\label{sec:Intro}
The lattice Boltzmann method (LBM) \cite{chen1998lattice} has grown into an alternative tool for fluid flow simulation. Unlike other numerical methods, which are based on a direct discretization of the conservation equations, the LBM is based on a discretized form of the Boltzmann transport equation known as the lattice Boltzmann equation (LBE) \cite{shan1998discretization}.
In particular, for phase change phenomena and multiphase flow simulation, several models
were developed within the LBM framework \cite{gunstensen1991lattice,swift1996lattice,shan1993lattice,luo1998unified}.
One of the most popular is the pseudopotential method \cite{shan1993lattice,shan1994simulation}.
It consists in defining an artificial interaction potential capable of inducing phase separation. In this way it is not necessary to track the interface between the phases, as they are maintained by the short-range attraction force imposed on the fluid. This type of procedure is called diffuse interface modeling \cite{anderson1998diffuse}, since the density field varies continuously between the different phases due to the action of the force field, instead of there being an exact interface location.
The original pseudopotential method was developed by \citet{shan1993lattice}.
The authors proposed an interaction force that could maintain different phases in equilibrium. The drawbacks of this procedure are the lack of thermodynamic consistency and a non-adjustable surface tension.
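For concreteness, a minimal sketch of such a nearest-neighbor pseudopotential force on a periodic D2Q9 lattice is shown below, with the classic choice $\psi=\rho_0[1-\exp(-\rho/\rho_0)]$; using the equilibrium lattice weights $w_i$ in the force stencil is one common convention, and the value of $G$ here is purely illustrative. In this sign convention, a negative $G$ produces the short-range attraction that induces phase separation.
\begin{verbatim}
import numpy as np

# D2Q9 velocities and weights
C = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def shan_chen_force(rho, G=-5.0, rho0=1.0):
    # F(x) = -G psi(x) sum_i w_i psi(x + c_i) c_i   (periodic grid)
    psi = rho0 * (1.0 - np.exp(-rho / rho0))
    F = np.zeros(rho.shape + (2,))
    for (cx, cy), w in zip(C, W):
        nb = np.roll(np.roll(psi, -cx, axis=0), -cy, axis=1)
        F[..., 0] += w * nb * cx
        F[..., 1] += w * nb * cy
    return -G * psi[..., None] * F
\end{verbatim}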
In a subsequent work, \citet{shan1994simulation} focused on the macroscopic behavior of their method, addressing the effects of the proposed interaction force on the pressure tensor. With this approach, the authors were able to study the equilibrium properties of a fluid governed by the resulting pressure tensor. It is known that in diffuse interface models the pressure tensor plays a key role in the phase-change process, controlling the liquid-gas density ratio and the surface tension \cite{li2016lattice}.
A different interaction force was proposed by \citet{zhang2003lattice}, but this model suffers from the same issues as the \citeauthor{shan1993lattice} approach. A first improvement was achieved by \citet{kupershtokh2009equations}, who were able to adjust the liquid-gas coexistence curve by combining the previous interaction forces. However, this technique still did not allow controlling the surface tension without affecting the liquid-gas densities.
A similar procedure was also proposed later \cite{gong2012numerical}.
This technique allowed successful applications of LBM to multiphase simulations,
such as simulations of pool boiling \cite{gong2017direct,ma20193d}.
The procedures mentioned above are classified as nearest-neighbor interaction forces, since their implementation requires only information on the fluid properties at the nodes adjacent to the node of interest. A further attempt to enhance the multiphase behavior consists in the multirange interaction forces \cite{shan2006analysis}, which use larger numerical stencils involving nodes at greater distances.
\citet{sbragaglia2007generalized} proposed a multirange model capable of adjusting the liquid-gas density curve and the surface tension. However, \citet{li2013achieving} noticed that this model had some issues, since the density ratio of the system varied considerably with the change in surface tension. Recently, \citet{kharmiani2019alternative} proposed a consistent interaction potential that permits independent control of the liquid-gas density ratio and the surface tension. However, one of the terms that constitute the proposed force is calculated in two steps, and it can be argued that this procedure is equivalent to a multirange approach, since it requires information from distances greater than the adjacent nodes.
The disadvantages of the multirange model are a higher computational cost and the need to modify the boundary conditions \cite{kruger2017lattice}.
Besides that,
considering a first-principles approach that maps a Molecular Dynamics simulation onto the lattice Boltzmann framework \cite{parsa2017lattice}, we will argue below
that interactions should only involve adjacent nodes in the vast majority of practical simulations.
In order to incorporate the effects of an external force field into the LBE,
whether through a nearest-neighbor or a multirange approach,
one may use numerical procedures known as forcing schemes.
Very common examples from the literature are the forcing schemes developed by \citet{guo2002discrete},
\citet{shan1993lattice}, \citet{kupershtokh2004new} and \citet{wagner2006thermodynamic}.
The use of a suitable forcing scheme in a numerical solution has been shown to be of great importance,
since several authors have observed distinct
behaviors for different schemes, even when the same external force field was applied
\cite{li2012forcing,huang2011forcing}.
\citet{li2012forcing} identified that such distinct behaviors were caused by
the different terms that each forcing scheme introduces into the pressure tensor, which
affected the multiphase properties of the method.
Based on this finding, the authors proposed a source term for the LBE in order to
change the pressure tensor and to control the liquid-gas coexistence curve of the pseudopotential method.
Later, the procedure was extended to allow the surface tension control without affecting the liquid
and vapor densities \cite{li2013achieving}. This procedure is very attractive because the numerical scheme
involves only properties at the adjacent nodes, resulting in a computationally efficient method.
Most subsequent approaches in the literature followed this reasoning
\cite{lycett2015improved,huang2016third,zhai2017pseudopotential}.
Also, it was discovered that the higher order discretization errors caused by the forcing schemes
play a significant role in multiphase flows \cite{wagner2006thermodynamic,lycett2015improved}, and these errors must be taken into
account for a proper determination of the pressure tensor.
These procedures based on the work of \citet{li2012forcing} allowed many applications of the pseudopotential method
\cite{li2015lattice,li2018enhancement,hu20192d}.
Even though many theoretical developments in forcing schemes were achieved concerning the design of
pressure tensors that allow the control of the desired equilibrium multiphase properties,
this knowledge had still not been properly employed to devise interaction forces that overcome the limitations of previous models \cite{shan1993lattice,zhang2003lattice,kupershtokh2009equations}.
Some attempts were made, but they involve the use of multirange interactions, which reduces the computational efficiency of the method.
Based on the current developments in the pseudopotential literature, in this work, we developed a strategy to control the liquid-gas density ratio and the surface
tension by means of an appropriate interaction force field using only nearest-neighbor interactions, without resorting to a change in the forcing scheme. The procedure starts by considering the desired pressure tensor, which allows for the control of the equilibrium properties of the pseudopotential method. We then derive an external force field which replicates the effects of this pressure tensor in the momentum conservation equation. The final step of our procedure is implementing
this external force in the LB method by using the classical forcing scheme developed by \citet{guo2002discrete}.
The present paper is organized as follows. In Sec.~\ref{sec:TB}, the theoretical background related to the LBM and the pseudopotential method will be briefly discussed, with particular focus on the role of the pressure tensor.
In Sec.~\ref{sec:GIPFP}, a fundamental approach to analyzing the form of the interaction force will be discussed. This analysis is used as a foundation for the argument that using adjacent nodes in the pseudopotential method suffices for practical simulations.
Then, in Sec.~\ref{sec:MOTED}, it will be shown how to discretize the terms of the desired pressure tensor using finite differences. Later, an interaction force will be devised to replicate the effect of the desired pressure tensor in the conservation equations, as shown in Sec.~\ref{sec:FA}. Numerical simulations will be presented in Sec.~\ref{sec:NS} to validate the proposed interaction force. Finally, a brief conclusion drawn from the theoretical and numerical studies will be given in Sec.~\ref{sec:Conclusion}.
\section{Theoretical Background}
\label{sec:TB}
\subsection{The Lattice Boltzmann Equation} \label{sec:TLBE}
The LBE can be written as:
\begin{equation} \label{eq:TLBE}
f_i(t+1,\bm{x}+\bm{c}_i) - f_i(t,\bm{x}) = \Omega_i(\bm{f},\bm{f}^{eq}) + S_i,
\end{equation}
where $f_i$ are the particle distribution functions associated with the velocities $\bm{c}_i$
and $\bm{f}$ is a vector with components $[\bm{f}]_i=f_i$.
Also, $t$ and $\bm{x}$ are the time and space coordinates, respectively.
The term $\Omega_i(\bm{f},\bm{f}^{eq})$ is the collision operator and it is, in general, dependent
on $\bm{f}$ and the equilibrium distribution function, $\bm{f}^{eq}$.
For the two-dimensional nine-velocity set (D2Q9), the velocities $\bm{c}_i$
are given by:
\begin{equation}
\label{eq:VelocitySet}
\bm{c}_i =
\begin{cases}
(0,0), ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ i = 0, \\
(1,0), (0,1), (-1,0), (0,-1), ~~~~~ i = 1,...,4, \\
(1,1), (-1,1), (-1,-1), (1,-1), ~ i = 5,...,8. \\
\end{cases}
\end{equation}
The simplest form of $\Omega_i(\bm{f},\bm{f}^{eq})$ is the single-relaxation-time form,
also known as the BGK collision operator \cite{bhatnagar1954model}, described in Eq.~(\ref{eq:BGKCO}).
One can improve stability and to some extent accuracy by allowing different relaxation times for different modes. This is known as the
multi-relaxation time (MRT) collision operator, shown in Eq.~(\ref{eq:MRTCO}).
\begin{subequations}
\begin{equation}
\label{eq:BGKCO}
\Omega_i(\bm{f},\bm{f}^{eq}) = - \frac{1}{\tau} (f_i - f_i^{eq}),
\end{equation}
\begin{equation}
\label{eq:MRTCO}
\Omega_i(\bm{f},\bm{f}^{eq}) = - \left[ \bm{M}^{-1} \bm{\Lambda} \bm{M} \right]_{ij} (f_j - f_j^{eq}),
\end{equation}
\end{subequations}
where the parameter $\tau$, in Eq.~(\ref{eq:BGKCO}), is the relaxation time. In Eq.~(\ref{eq:MRTCO}),
$\bm{\Lambda}$ is the relaxation matrix and $\bm{M}$ is the matrix that converts $(\bm{f} - \bm{f}^{eq})$
into a set of moments. The particular form of these matrices can vary, as discussed by \citet{kaehler2013derivation}, but the hydrodynamic modes of mass, momentum, and stress tensor have to be eigenvectors of the collision matrix. The eigenvalues of this matrix then represent now a set of relaxation times that can be different for the different eigenvectors.
The MRT collision operator has been widely used in multi-phase simulations \cite{li2013lattice,li2015lattice,mu2017nucleate}. Note that the MRT equation recovers the BGK collision operator if all relaxation times of the MRT collision operator are equal.
The form of the matrices $\bm{M}$ and $\bm{\Lambda}$ are presented in
Appendix~\ref{sec:MRT}.
A popular form of the equilibrium distribution function is:
\begin{equation}
\label{eq:SEDF}
f_i^{eq} = w_i \bigg( \rho + \frac{c_{i \alpha}}{c_s^2} \rho u_{\alpha}
+ \frac{(c_{i \alpha}c_{i \beta} - c_s^2 \delta_{\alpha \beta})}{2 c_s^4}
\rho u_{\alpha}u_{\beta} \bigg),
\end{equation}
where the terms $w_i$ are the weights associated with each velocity $\bm{c}_i$, and $c_s$ is
the lattice sound speed. For the D2Q9 set, the weights $w_i$
are given by $w_0 = 4/9$, $w_{1,2,3,4} = 1/9$ and $w_{5,6,7,8} = 1/36$, and $c_s$ is equal to $1/\sqrt{3}$.
Also, $\rho$ and $\bm{u}$ are the fluid density and velocity, respectively given by (\ref{eq:FD}) and (\ref{eq:FMD}).
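For concreteness, the velocity set of Eq.~(\ref{eq:VelocitySet}), the weights $w_i$ and the equilibrium distribution of Eq.~(\ref{eq:SEDF}) can be evaluated with the short Python/NumPy sketch below (illustrative only; the array layout and function names are our own choices, not part of any standard library):
\begin{verbatim}
import numpy as np

# D2Q9 velocity set, Eq. (eq:VelocitySet), weights and sound speed
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0/3.0  # c_s^2 = 1/3

def f_eq(rho, u):
    """Equilibrium distributions, Eq. (eq:SEDF), for scalar rho, 2-vector u."""
    feq = np.empty(9)
    for i in range(9):
        cu = c[i] @ u          # c_i . u
        uu = u @ u             # u . u
        feq[i] = w[i]*(rho + rho*cu/cs2
                       + rho*(cu*cu - cs2*uu)/(2*cs2*cs2))
    return feq

rho, u = 1.0, np.array([0.05, 0.0])
assert abs(f_eq(rho, u).sum() - rho) < 1e-14  # zeroth moment recovers rho
\end{verbatim}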
The last term on the right-hand side of Eq.~(\ref{eq:TLBE}), $S_i$, is what defines the forcing scheme,
i.e., this term is responsible for adding the effects of an external force field, $F_{\alpha}$,
to the recovered macroscopic conservation equations.
One of the most widely used forcing schemes in the literature was developed by \citet{guo2002discrete},
and it can be described as follows:
\begin{equation}
\label{eq:GFS}
S_i = C_{ij} w_j \bigg( \frac{c_{j \alpha}}{c_s^2} F_{\alpha}
+ \frac{(c_{j \alpha}c_{j \beta} - c_s^2 \delta_{\alpha \beta})}{c_s^4}
F_{\alpha}u_{\beta} \bigg),
\end{equation}
where the term $C_{ij}$ depends on whether the BGK, Eq.~(\ref{eq:GFSBGK}), or the MRT,
Eq.~(\ref{eq:GFSMRT}), collision operator is being used. Both definitions can, respectively,
be given by:
\begin{subequations}
\begin{equation}
\label{eq:GFSBGK}
C_{ij} = \bigg( 1 - \frac{1}{2 \tau} \bigg) \delta_{ij},
\end{equation}
\begin{equation}
\label{eq:GFSMRT}
C_{ij} = \left[ \bm{M}^{-1} \bigg( \bm{I} - \frac{\bm{\Lambda}}{2} \bigg) \bm{M} \right]_{ij},
\end{equation}
\end{subequations}
where $\bm{I}$ is the identity matrix. The relation between particle distribution
functions $f_i$ and the actual fluid velocity $\bm{u}$ depends on the forcing scheme.
For the \citeauthor{guo2002discrete} forcing scheme, density and velocity
fields are given by:
\begin{subequations}
\begin{equation}
\label{eq:FD}
\rho = \sum_i f_i,
\end{equation}
\begin{equation}
\label{eq:FMD}
\rho \bm{u} = \sum_i f_i \bm{c}_i + \frac{\bm{F}}{2}.
\end{equation}
\end{subequations}
The momentum density shown in Eq.~(\ref{eq:FMD}) needs to take into account the force field
term, $\bm{F}/2$, in order for the numerical scheme to recover second-order
accurate conservation equations under the influence of an external force field.
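As an illustrative sketch (our own naming, reusing the D2Q9 arrays \texttt{c}, \texttt{w} and \texttt{cs2} defined in the previous listing), the Guo source term for the BGK operator, Eqs.~(\ref{eq:GFS}) and (\ref{eq:GFSBGK}), and the moments of Eqs.~(\ref{eq:FD})--(\ref{eq:FMD}) could be coded as:
\begin{verbatim}
def guo_source_bgk(F, u, tau):
    """Guo forcing term S_i, Eqs. (eq:GFS) + (eq:GFSBGK)."""
    S = np.empty(9)
    for i in range(9):
        cF = c[i] @ F
        cu = c[i] @ u
        uF = u @ F
        S[i] = (1.0 - 0.5/tau) * w[i] * (cF/cs2
                                         + (cu*cF - cs2*uF)/(cs2*cs2))
    return S

def macroscopic(f, F):
    """Density, Eq. (eq:FD), and half-force-shifted velocity, Eq. (eq:FMD)."""
    rho = f.sum()
    u = (f @ c + 0.5*F)/rho
    return rho, u
\end{verbatim}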
The LBE describes the evolution of particle distribution functions, however, the variables
of interest are the macroscopic flow fields. The correspondence between the LBE and the
macroscopic behavior that it simulates can be shown through different approaches.
The standard procedure is the Chapman-Enskog analysis, and one alternative is the
recursive substitution developed by \citet{wagner1997theory} and further developed by \citet{holdych2004truncation} and \citet{kaehler2013derivation}.
Up to second order terms, both procedures result in the same behavior, and it is not known if differences at higher orders will occur.
Either approach recovers the mass and momentum conservation equations to second order:
\begin{subequations}
\begin{equation}
\label{eq:MCE}
\partial_t \rho + \partial_{\alpha} (\rho u_{\alpha}) = 0,
\end{equation}
\begin{equation}
\label{eq:NSMCE}
\partial_t (\rho u_{\alpha}) + \partial_{\beta} (\rho u_{\alpha} u_{\beta}) =
- \partial_{\beta} p_{\alpha \beta}
+ \partial_{\beta} \tau_{\alpha \beta} + F_{\alpha},
\end{equation}
\end{subequations}
where the stress tensor, $\tau_{\alpha \beta}$, is given by
$\tau_{\alpha \beta} = \rho c_s^2 (\tau - 0.5) (\partial_{\beta} u_{\alpha} + \partial_{\alpha} u_{\beta})$
for the BGK collision operator, Eq.~(\ref{eq:BGKCO}). The pressure tensor is given by
$p_{\alpha \beta}=\rho c_s^2 \delta_{\alpha \beta}$. When the MRT collision operator is applied,
it is possible to adjust the bulk and shear viscosities independently in the stress tensor,
since a greater number of relaxation times is used.
A more thorough analysis applying the MRT collision operator can be seen in the work
of \citet{kaehler2013derivation}.
Even though the LBE recovers the correct form of the Navier-Stokes equations up to second order terms,
several studies have shown that the third order spatial discretization errors due to the
forcing scheme play an important role in pseudopotential methods. These errors must be
taken into account for the correct prediction of the multiphase behavior of the method.
Third order analyses of the LBE considering different forcing schemes have been carried out in the LB literature.
\citet{zhai2017pseudopotential}, through a Chapman-Enskog analysis, evaluated
the recovered macroscopic equations up to third order, considering the \citeauthor{guo2002discrete} forcing scheme.
\citet{lycett2015improved} also investigated the third order terms of a generic
forcing scheme, using the technique developed by \citeauthor{holdych2004truncation}.
From the results of these studies, one can show that the third order discretization error produced
by the \citeauthor{guo2002discrete} forcing scheme is given by:
\begin{equation}
\label{eq:TOSDEFS}
E_{\alpha}^{3rd} = \frac{c_s^2}{12} \partial_{\beta} \Big[
(\partial_{\gamma} F_{\gamma}) \delta_{\alpha \beta}
+ \partial_{\alpha} F_{\beta} + \partial_{\beta} F_{\alpha} \Big].
\end{equation}
This term should be added to the right-hand side of Eq.~(\ref{eq:NSMCE}) in
order to take into account the influence of the higher order errors in pseudopotential methods.
\subsection{Pressure Tensor and Phase Change} \label{sec:PMPC}
A common approach to multiphase lattice Boltzmann simulations is to define the force
field to be implemented in the LBE, and then to analyze the resulting pressure tensor, from
which it is possible to draw conclusions about key multiphase features, such as the equation of
state, the liquid-gas coexistence curve and the surface tension.
In this work, we use a general pressure tensor as the starting point and show how it is
related to multiphase flow properties. Afterwards, in the next sections, it is shown how it
can be implemented through a discrete force in the LBE, and how it is possible to devise a better method
when compared to the original Shan-Chen formulation.
A general pressure tensor from a single-phase pseudopotential method can be written as:
\begin{equation}
\label{eq:GPTEF}
\begin{aligned}
p_{\alpha \beta} &= \left( c_s^2 \rho + G \psi^2 + C_1 G (\partial_\gamma \psi)(\partial_\gamma \psi)
+ C_2 G \psi \partial_{\gamma} \partial_{\gamma} \psi \right) \delta_{\alpha \beta} \\
&+ C_3 G (\partial_{\alpha} \psi) (\partial_{\beta} \psi)
+ C_4 G \psi \partial_{\alpha}\partial_{\beta} \psi,
\end{aligned}
\end{equation}
where $C_{1,2,3,4}$ are arbitrary coefficients, $\psi$ is a density-dependent interaction
potential, $\psi=\psi(\rho)$, and $G$ is a parameter that controls the strength of interaction.
One should notice that for a uniform state, the pressure tensor is simplified to
$p_{\alpha \beta}=\left( c_s^2 \rho + G \psi^2 \right) \delta_{\alpha \beta}$.
This term plays the role of the equation of state and, based on this fact, \citet{yuan2006equations}
proposed the following definition:
\begin{equation}
\label{eq:DED}
\psi = \sqrt{\frac{P_{EOS}-c_s^2 \rho}{G}},
\end{equation}
where the term $P_{EOS}$ represents any desired equation of state to be introduced into the method.
When this technique is used, the parameter $G$ no longer controls the interaction strength,
and it can be seen as an auxiliary parameter that keeps the term inside the square root
positive.
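As a minimal sketch of this definition (illustrative only; the EOS callable and its parameter values below are placeholders of our own choosing), Eq.~(\ref{eq:DED}) can be implemented as:
\begin{verbatim}
import numpy as np

cs2 = 1.0/3.0

def psi_from_eos(rho, p_eos, G=-1.0):
    """Interaction potential of Eq. (eq:DED) for a given EOS callable.

    Here G is only an auxiliary parameter: it must keep the argument
    of the square root positive."""
    arg = (p_eos(rho) - cs2*rho)/G
    return np.sqrt(arg)

# example with a simple van der Waals-like placeholder EOS
p_vdw = lambda rho, a=0.5, b=4.0, RT=0.09: rho*RT/(1 - b*rho/3) - a*rho**2
print(psi_from_eos(0.1, p_vdw))
\end{verbatim}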
Observing the recovered momentum conservation equation in Eq.~(\ref{eq:NSMCE}), one
may notice that what affects momentum balance is the divergence of the pressure tensor,
$-\partial_{\beta}p_{\alpha \beta}$, and not the pressure tensor itself.
Therefore, as pointed out by \citet{sbragaglia2007generalized},
different pressure tensors can reproduce identical hydrodynamic behaviors, as long as their
divergences are equal to each other.
By applying the following tensor identity (for more details refer to Appendix~\ref{sec:TI}):
\begin{eqnarray}
\label{eq:TI}
\partial_{\beta} \big[ \psi \partial_{\alpha} \partial_{\beta} \psi - \big( \psi \partial_{\gamma} \partial_{\gamma} \psi \big) \delta_{\alpha \beta} \big] = &&~
\partial_{\beta} \big [ (\partial_{\gamma} \psi) (\partial_{\gamma} \psi) \delta_{\alpha \beta} \nonumber \\
&& - (\partial_{\alpha} \psi) (\partial_{\beta} \psi) \big],
\end{eqnarray}
it is possible to show that the divergence of the tensor given by
Eq.~(\ref{eq:GPTEF}) is equivalent to the divergence of the following pressure tensor:
\begin{equation}
\label{eq:GPTRF}
\begin{aligned}
p_{\alpha \beta} &= \left( c_s^2 \rho + G \psi^2 + A_1 G (\partial_\gamma \psi)(\partial_\gamma \psi)
+ A_2 G \psi \partial_{\gamma} \partial_{\gamma} \psi \right) \delta_{\alpha \beta} \\
&+ A_3 G \psi \partial_{\alpha}\partial_{\beta} \psi,
\end{aligned}
\end{equation}
where $A_{1,2,3}$ are arbitrary coefficients that obey the following relations:
$A_1=C_1+C_3$, $A_2=C_2+C_3$ and $A_3=C_4-C_3$.
The reduced form of the pressure tensor, Eq.~(\ref{eq:GPTRF}), will be used
throughout the text.
A suitable problem for checking the liquid-gas coexistence curve and the thermodynamic consistency obtained
from the pressure tensor presented above is the planar interface between two phases in
mechanical equilibrium \cite{shan2008pressure}. Taking $x$ and $y$ as the coordinates in,
respectively, the normal and tangential directions to the interface, one may simplify the pressure
tensor, since there are no gradients in the $y$-direction, to:
\begin{subequations}
\begin{equation}
\label{eq:PTNNPI}
p_{x x} = c_s^2 \rho + G \psi^{2}
+ G \Big[ A_1 \Big( \frac{d \psi}{dx} \Big)^{2}
+ (A_2+A_3) \psi \frac{d^{2} \psi}{dx^{2}} \Big],
\end{equation}
\begin{equation}
\label{eq:PTTTPI}
p_{y y} = c_s^2 \rho + G \psi^{2}
+ G \Big[ A_1 \Big( \frac{d \psi}{dx} \Big)^{2}
+ A_2 \psi \frac{d^{2} \psi}{dx^{2}} \Big],
\end{equation}
\begin{equation}
\label{eq:PTNTPI}
p_{x y} = p_{y x} = 0.
\end{equation}
\end{subequations}
The mechanical equilibrium condition implies that the pressure tensor component $p_{xx}$ must be
constant and equal to the bulk pressure $p_{0}$ along the $x$ axis. By imposing this condition,
\citet{shan2008pressure} deduced that the gas and liquid densities obtained by the pseudopotential
method must satisfy the following relation:
\begin{equation}
\label{eq:PGLDR}
\int_{ \rho_{g} }^{ \rho_{l} } \left( p_{0} - c_s^2 \rho - G \psi^{2} \right) \frac{\dot{\psi}}{\psi^{1+\epsilon}} d \rho = 0,
\end{equation}
where $\epsilon=-2A_1/(A_2+A_3)$ and $\rho_{l}$, $\rho_{g}$ are the densities of the liquid and vapor phases, respectively. The dot, as in $\dot{\psi}$, denotes the derivative with respect to density $\rho$.
Another consequence of the equilibrium condition is that the bulk pressure of liquid and
vapor regions far away from the interface must also be equal to $p_0$:
\begin{subequations}
\begin{equation}
\label{eq:BPLP}
p_{0} = c_s^2 \rho_{l} + G [\psi(\rho_{l})]^{2},
\end{equation}
\begin{equation}
\label{eq:BPVP}
p_{0} = c_s^2 \rho_{g} + G [\psi(\rho_{g})]^{2}.
\end{equation}
\end{subequations}
Together, Eqs.~(\ref{eq:PGLDR}), (\ref{eq:BPLP}) and (\ref{eq:BPVP}) compose a
well-posed problem that can be solved for $p_{0}$, $\rho_{l}$ and $\rho_{g}$.
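A possible numerical sketch of this problem (our own construction, using SciPy root finding and quadrature; the interaction potential \texttt{psi} and its density derivative \texttt{dpsi} are assumed given as callables) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

cs2, G = 1.0/3.0, -1.0

def coexistence_densities(psi, dpsi, eps, guess=(0.1, 2.4)):
    """Solve Eqs. (eq:PGLDR), (eq:BPLP), (eq:BPVP) for (rho_g, rho_l)."""
    p_bulk = lambda r: cs2*r + G*psi(r)**2     # Eqs. (eq:BPLP)/(eq:BPVP)
    def residuals(x):
        rg, rl = x
        p0 = p_bulk(rl)
        integrand = lambda r: (p0 - cs2*r - G*psi(r)**2) \
                              * dpsi(r)/psi(r)**(1.0 + eps)
        I, _ = quad(integrand, rg, rl)
        # equal bulk pressures in both phases + Eq. (eq:PGLDR)
        return [p_bulk(rg) - p0, I]
    return fsolve(residuals, guess)
\end{verbatim}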
In fact, this problem resembles the Maxwell equal-area rule, which states that, for
a given temperature, a thermodynamically consistent phase change obeys the
following gas-liquid density relation:
\begin{equation}
\label{eq:MLVDR}
\int_{ \rho_{g} }^{ \rho_{l} } \left( p_{0} - P_{EOS} \right) \frac{d \rho}{\rho^{2}} = 0.
\end{equation}
By comparing Eqs.~(\ref{eq:MLVDR}) and (\ref{eq:PGLDR}), \citeauthor{lycett2015improved}
concluded that thermodynamic consistency is achieved when:
\begin{equation}
\label{eq:CTC}
\frac{\dot{\psi}}{\psi^{1+\epsilon}} d \rho \propto \frac{d \rho}{\rho^{2}}.
\end{equation}
From Eq.~(\ref{eq:CTC}) it is clear that the thermodynamic consistency of the
pseudopotential method depends on the equation of state used to define the interaction potential
and on the parameter $\epsilon$, which, in turn, is related to the coefficients of the pressure tensor.
Another important aspect of multiphase simulation is to properly control the surface tension.
According to \citet{rowlinson2013molecular}, the surface tension in diffuse interface models
can be defined as:
\begin{equation}
\label{eq:STDIM}
\gamma = \int_{-\infty}^{\infty} \Big( p_{xx} - p_{yy} \Big) dx,
\end{equation}
where, again, $x$ and $y$ are the interface normal and tangential directions, respectively.
Equation (\ref{eq:STDIM}) implies that the surface tension depends only on the anisotropic
part of the pressure tensor.
As a consequence, it can be adjusted by the parameter $A_3$.
For a planar interface, Eqs.~(\ref{eq:PTNNPI}) and (\ref{eq:PTTTPI}) must be inserted into Eq.~(\ref{eq:STDIM}), resulting in the following relation:
\begin{equation}
\label{eq:STPI}
\gamma_{pi} = G \int_{-\infty}^{\infty} A_3 \psi \frac{d^{2} \psi}{dx^{2}} dx.
\end{equation}
In order to compute the surface tension for the planar interface case, $\gamma_{pi}$,
one can obtain the density profile that solves Eq.~(\ref{eq:PTNNPI}) (for specific values of the parameters $A_1$, $A_2$ and $A_3$) using a numerical method.
This differential equation can be solved by replacing the derivatives
with finite difference approximations,
for example, second order central
differences.
The resulting nonlinear set of equations can be solved using the Newton-Raphson method, with
the phase densities (obtained by solving Eq.~(\ref{eq:PGLDR})) at the borders
as boundary conditions.
With the knowledge of the density profile $\rho(x)$, the interaction potential profile
is determined as $\psi(x)=\psi(\rho(x))$.
After that, the surface tension can be computed by integrating Eq.~(\ref{eq:STPI}) with a numerical quadrature procedure.
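A compact sketch of this procedure (our own construction; \texttt{psi} is assumed to be a vectorized callable, and SciPy's \texttt{fsolve} stands in for the Newton-Raphson solve) could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

cs2, G = 1.0/3.0, -1.0

def planar_profile(psi, rho_g, rho_l, p0, A1, A23, L=30.0, n=301):
    """Profile solving p_xx = p0, Eq. (eq:PTNNPI), by central differences."""
    dx = L/(n - 1)
    def residual(inner):
        rho = np.concatenate(([rho_g], inner, [rho_l]))  # boundary values
        s = psi(rho)
        ds = (s[2:] - s[:-2])/(2*dx)
        d2s = (s[2:] - 2*s[1:-1] + s[:-2])/dx**2
        return (cs2*rho[1:-1] + G*s[1:-1]**2
                + G*(A1*ds**2 + A23*s[1:-1]*d2s) - p0)
    inner0 = np.linspace(rho_g, rho_l, n)[1:-1]          # initial guess
    rho = fsolve(residual, inner0)
    return np.concatenate(([rho_g], rho, [rho_l])), dx

def gamma_planar(psi, rho, A3, dx):
    """Surface tension of Eq. (eq:STPI) by trapezoidal integration."""
    s = psi(rho)
    d2s = np.gradient(np.gradient(s, dx), dx)
    return np.trapz(G*A3*s*d2s, dx=dx)
\end{verbatim}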
\subsection{Shan-Chen method} \label{sec:EBPT}
The pseudopotential method originated when \citet{shan1993lattice} proposed
an interaction force similar to:
\begin{equation}
\label{eq:SCF}
F_\alpha^{SC} = - \psi ( \bm{x} ) \frac{2G}{c_s^2} \sum_i w_i \psi ( \bm{x} + \bm{c}_i ) c_{i\alpha}.
\end{equation}
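On a periodic grid, the discrete force of Eq.~(\ref{eq:SCF}) can be evaluated with a shifted-array sum; a NumPy sketch (our own array conventions, reusing \texttt{c}, \texttt{w} and \texttt{cs2} from the earlier listings) is:
\begin{verbatim}
import numpy as np

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2, G = 1.0/3.0, -1.0

def shan_chen_force(psi_field):
    """Shan-Chen force, Eq. (eq:SCF), on a periodic 2D field psi(x)."""
    acc = np.zeros(psi_field.shape + (2,))
    for i in range(1, 9):
        # psi(x + c_i): shift the array by -c_i with periodic wrapping
        nb = np.roll(psi_field, shift=(-c[i, 0], -c[i, 1]), axis=(0, 1))
        acc += w[i]*nb[..., None]*c[i]
    return -(2.0*G/cs2)*psi_field[..., None]*acc
\end{verbatim}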
Using a Taylor series expansion, a continuum form of the \citeauthor{shan1993lattice} force is obtained:
\begin{equation}
\label{eq:CFSCF}
F_\alpha^{SC} = - G \Big( \partial_{\alpha} \psi^2 + c_{s}^{2} \psi \partial_{\alpha} \Delta \psi + ... \Big).
\end{equation}
The momentum conservation equation, Eq.~(\ref{eq:NSMCE}), shows that the natural pressure tensor
of the LBM is $p_{\alpha \beta}=c_s^2 \rho \delta_{\alpha \beta}$.
Neglecting the higher order terms in Eq.~(\ref{eq:CFSCF}), it is possible to introduce this force
into the pressure tensor using the relation
$-\partial_{\beta} p_{\alpha \beta}^{SC}=-\partial_{\alpha} (\rho c_s^2\delta_{\alpha \beta}) + F_{\alpha}^{SC}$.
The following relation is obtained:
\begin{align}
\label{eq:SCPTWDE}
p_{\alpha \beta}^{SC} = & \bigg( c_s^2 \rho + G \psi^2 - \frac{c_s^2 G}{2} (\partial_\gamma \psi)(\partial_\gamma \psi) \bigg) \delta_{\alpha \beta} \nonumber \\
& + c_s^2 G \psi \partial_{\alpha} \partial_{\beta} \psi.
\end{align}
This pressure tensor alone does not yield the correct coexistence
curve: it is necessary to take into account the effect of the third order spatial discretization
errors of the forcing scheme. For the \citeauthor{guo2002discrete} forcing scheme,
this error is given by Eq.~(\ref{eq:TOSDEFS}).
It is possible to evaluate the new pressure tensor by substituting Eq.~(\ref{eq:CFSCF})
into Eq.~(\ref{eq:TOSDEFS}). For simplicity, we consider here
$F_{\alpha}^{SC} \approx -G \partial_{\alpha} \psi^2$. In this way, using Eq.~(\ref{eq:TI}),
the discretization errors assume the form:
\begin{align}
E_{\alpha}^{3rd} = & - \partial_{\beta} \left( \frac{c_s^2 G}{2} (\partial_\gamma \psi)(\partial_\gamma \psi)
+ \frac{c_s^2 G}{2} \psi \partial_{\gamma} \partial_{\gamma} \psi \right) \delta_{\alpha \beta} \nonumber \\
& = -\partial_{\beta} p_{\alpha \beta}^{3rd},
\end{align}
where $p_{\alpha \beta}^{3rd}$ is the effect caused by the third order discretization errors
in the pressure tensor. Adding $p_{\alpha \beta}^{3rd}$ to Eq.~(\ref{eq:SCPTWDE}), the correct
form of the pressure tensor for the pseudopotential method using the \citeauthor{shan1993lattice} force and the
\citeauthor{guo2002discrete} forcing scheme is obtained:
\begin{align}
\label{eq:CFPTSF}
p_{\alpha \beta}^{SC} = & \Big( c_s^2 \rho + G \psi^2 + \frac{c_s^2 G}{2} \psi \partial_{\gamma} \partial_{\gamma} \psi \Big) \delta_{\alpha \beta} \nonumber\\
& + c_s^2 G \psi \partial_{\alpha} \partial_{\beta} \psi.
\end{align}
This highlights a key limitation of the Shan-Chen method: it is not possible to adjust
the coexistence density curve and the surface tension independently, since both are determined by the interaction potential $\psi$ through the pressure tensor.
\section{Incorporating the Pressure Tensor into the LBM}
\label{sec:DPTPM}
We propose a top-down approach to overcome the limitations inherent in the Shan-Chen method.
The starting point is the complete pressure tensor, Eq.~(\ref{eq:GPTRF}).
Suitable force fields are devised to add the effect of the desired terms
of this tensor into the recovered macroscopic conservation equations.
Then, these interaction forces are directly discretized and incorporated into the LBE.
In Sec.~\ref{sec:GIPFP}, we present a general inter-particle force for the pseudopotential model.
After that, in Sec.~\ref{sec:MOTED}, we discuss how to obtain numerical approximations to discretize the force terms.
In Sec.~\ref{sec:FA}, we discuss the method used to incorporate
the effect of the desired pressure tensor into the recovered macroscopic conservation equations.
\subsection{Fundamental inter-particle force calculation}
\label{sec:GIPFP}
Originally the Shan-Chen method was developed with a microscopic interaction picture in mind. We review here an approach to make this relation more direct. A fundamental approach to analyze a lattice Boltzmann method is given by the Molecular Dynamics Lattice Boltzmann (MDLG) approach developed by Parsa \textit{et al.} in \cite{parsa2017lattice}. The key idea here is to map a Molecular Dynamics (MD) simulation onto a lattice gas. The Boltzmann average of this lattice gas is then, in some sense, the most fundamental definition of a lattice Boltzmann method. This approach has proven useful in analyzing the fluctuations in non-ideal systems \cite{parsa2019large}. Here we use a theoretical approach to write down a fundamental representation of the lattice Boltzmann forcing term.
In a MD simulation, the conservative force $\bm{F}$ on one particle is computed considering the
potential energy $V_{jk}$ between particles $j$ and $k$ by:
\begin{eqnarray}
\label{eq:CFOP}
\bm{F}_{j} = - \sum_{k} \partial_{\bm{x}_{j}} V_{jk} ( | \bm{x}_{j} - \bm{x}_{k} | ).
\end{eqnarray}
Formally, we can write this as a continuous force field obtained from an integral over densities:
\begin{eqnarray}
\label{eq:CFF}
\bm{F}(\bm{x},t) = \rho(\bm{x},t) \int d\bm{x}' \rho(\bm{x}',t) \partial_{\bm{x}'} V( | \bm{x} - \bm{x}' | ),
\end{eqnarray}
where we define the density as $\rho(\bm{x},t)=\sum_{j} \delta(\bm{x}-\bm{x}_{j}(t))$. The key here is that in
LB we have lattice cells that receive momentum from their neighboring cells. This is why we now coarse-grain the MD simulation onto a lattice. We define a discrete covering of lattice cells, and an indicator function $\Delta_\zeta(\bm{x})$ which indicates whether the position $\bm{x}$ is contained in the lattice cell $\zeta$. We then integrate the force
field over a lattice site to give:
\begin{eqnarray}
\label{eq:FFOLS}
&&~ \tilde{\bm{F}}(\zeta,t) = \int d\bm{x} \bm{F}(\bm{x},t) \Delta_{\zeta} (\bm{x}) \nonumber \\
&& = \int d\bm{x} \rho(\bm{x},t)\Delta_{\zeta} (\bm{x}) \int d\bm{x}' \rho(\bm{x}',t) \partial_{\bm{x}'} V( | \bm{x} - \bm{x}' | ),
\end{eqnarray}
where the indicator function is defined by:
\[
\Delta_{\zeta}(\bm{x}) =
\begin{cases}
1, & \mbox{if $\bm{x}$ is in } \zeta, \\
0, & \mbox{otherwise}.
\end{cases}
\]
Now, in order to account for the interaction between particles from different lattice sites,
the last integral is decomposed into a sum over lattice sites, so that the space is fully covered by
lattice cells:
\begin{eqnarray}
\label{eq:FCIBDLS}
&&~ \int d\bm{x}' \rho(\bm{x}',t) \partial_{\bm{x}'} V( | \bm{x} - \bm{x}' | ) \nonumber \\
&& = \sum_{\eta} \int d\bm{x}' \rho(\bm{x}',t) \Delta_{\eta}(\bm{x}') \partial_{\bm{x}'} V( | \bm{x} - \bm{x}' | ).
\end{eqnarray}
Thus, this sum over $\eta$ is introduced into Eq.~(\ref{eq:FFOLS}) to obtain a force
representation, related to the lattice Boltzmann force, that also involves a sum over
neighboring lattice sites:
\begin{eqnarray}
\label{eq:MT}
&&~ \tilde{\bm{F}}(\zeta,t) = \int d\bm{x} \rho(\bm{x})\Delta_{\zeta} (\bm{x}) \nonumber \\
&& \times \sum_{\eta} \int d\bm{x}' \rho(\bm{x}') \Delta_{\eta}(\bm{x}') \partial_{\bm{x}'} V( | \bm{x} - \bm{x}' | ).
\end{eqnarray}
However, this is only an instantaneous force. A lattice Boltzmann (or lattice gas) method has a finite time step $\Delta t$, and the forcing term includes all the momentum absorbed during this finite time-step \cite{li2007symmetric}. The total amount of momentum $\bm{a}$ obtained is then:
\begin{equation}
\bm{a}(\zeta,T) = \int_{T\Delta t}^{(T+1)\Delta t} \tilde{\bm{F}}(\zeta,t) \; dt,
\end{equation}
where $T$ is the integer time of the simulation.
This is a fluctuating quantity, since it depends on the microscopic details of the initial
particle occupation. The next step is the application of the Boltzmann average, i.e. an average
over all possible particle distributions that are compatible with the given macroscopic state.
The probability of finding a particular configuration is then assumed
to follow some local equilibrium assumption:
\begin{eqnarray}
\label{eq:FFCIBDLS}
\tilde{\bm{a}} = \langle \bm{a}(\zeta,T) \rangle = \int d\rho(\bm{x},t) P(\rho(\bm{x},t)) \bm{a}(\zeta,T).
\end{eqnarray}
In an isothermal equilibrium system, the probability for a configuration $\rho(\bm{x})$ would be given by:
\begin{eqnarray}
\label{eq:DP}
P(\rho(\bm{x})) = \frac{1}{Z} e^{ - \frac{H(\rho(\bm{x}))}{k_{B}T} },
\end{eqnarray}
where $H(\rho(\bm{x}))$ is the energy associated with the configuration $\rho(\bm{x})$, $k_B$ is the Boltzmann constant and $T$ the temperature. $Z$ is the partition function.
In a general non-equilibrium situation, however, finding the probability of a density configuration is more challenging. Nonetheless, it is usual to make the assumption of local equilibrium at each lattice site, i.e. to assume that the particle configurations are in (or very close to) a local-equilibrium configuration constrained by the coarse-grained lattice densities, which at different lattice sites will be out of equilibrium.
Analytically deriving a force using Eq.~(\ref{eq:FFCIBDLS}) is a difficult computational task which we leave to a future publication. Here we want to point to a feature that appears when the size of the lattice is much larger than the interparticle interaction range and the mean-square displacement during a timestep $\Delta t$ is likewise much smaller than a lattice site, as is common in macroscopic and mesoscopic lattice Boltzmann applications: in this case the term $\bm{a}$ only depends on a close neighborhood of lattice sites around the site under consideration.
We therefore propose, as an Ansatz, a general force for LB as expressed by Eq.~(\ref{eq:Ansatz}) which preserves the locality of the forcing term predicted by (\ref{eq:FFCIBDLS}).
Like the standard Shan-Chen approach, this force contains the interaction potential function $\psi = \psi(\rho)$, which can be adjusted
to obtain the desired pressure tensor given by Eq.~(\ref{eq:GPTEF}):
\begin{eqnarray}
\label{eq:Ansatz}
F_{\alpha} = \sum_{i} \sum_{j} A_{ij} \psi(\rho(\bm{x}+\bm{c}_i)) \psi(\rho(\bm{x}+\bm{c}_j)).
\end{eqnarray}
As in the original Shan-Chen approach, the function $\psi$ and the tensor $A_{ij}$ are then adjusted such that we obtain the desired expression of the pressure tensor, Eq.~(\ref{eq:GPTEF}).
It should be noted that, in contrast to the approaches of Refs.~\cite{sbragaglia2007generalized,kharmiani2019alternative}, the general force field represented by Eq.~(\ref{eq:Ansatz})
is formulated considering the nearest-neighbor nodes only.
\subsection{Interaction potential moments}
\label{sec:MOTED}
The pressure tensor, Eq.~(\ref{eq:GPTRF}), is composed of the interaction potential function and its
spatial derivatives. Thus, any attempt to evaluate it inevitably involves numerical approximations for these derivatives. One of the simplest procedures is to use finite difference stencils
for these approximations.
A deeper and more thorough explanation can be found in classic finite difference method textbooks \cite{leveque2007finite}.
By analyzing Eq.~(\ref{eq:SCF}), one may realize that the numerical scheme of the Shan-Chen force
can be interpreted as the discrete first order moment of the term $\psi(\bm{x}+\bm{c}_i)$.
In this section, a procedure to obtain the finite difference schemes written in the notation
of these moments is presented.
As only nearest-neighbor interactions are being considered, for a position $\bm{x}$ the
operations must involve only the values of the interaction potential $\psi(\bm{x}+\bm{c}_i)$.
The Taylor series expansion of this term is given as follows:
\begin{eqnarray}
\label{eq:TSEED}
\psi(\bm{x}+\bm{c}_i) = &&~ \psi(\bm{x}) + c_{i \alpha} \partial_{\alpha} \psi(\bm{x})
+ \frac{1}{2} c_{i \alpha}c_{i \beta} \partial_{\alpha} \partial_{\beta} \psi(\bm{x}) \nonumber \\
&& + \frac{1}{6}c_{i \alpha}c_{i \beta}c_{i \gamma} \partial_{\alpha} \partial_{\beta} \partial_{\gamma} \psi(\bm{x}) + ...
\end{eqnarray}
In Eq.~(\ref{eq:TSEED}), one may observe that each term on the right-hand side contains a
polynomial in the lattice velocities.
For example, the first three terms involve, respectively, $1$, $c_{i \alpha}$ and $c_{i \alpha}c_{i \beta}$.
Since it is common practice in the LBM literature to represent functions by discrete Hermite expansions,
it is convenient to rewrite the terms of Eq.~(\ref{eq:TSEED}) in the following form:
\begin{align}
\label{eq:HEED}
w_i \psi(\bm{x}+\bm{c}_i) = & w_i \bigg[ M^0 + \frac{ c_{ i\alpha } }{ c_s^2 } M_\alpha^1
+ \frac{ c_{i\alpha} c_{i\beta} - c_s^2 \delta_{ \alpha \beta } }{ 2c_s^4 } M_{ \alpha \beta }^2 \nonumber \\
& + ... \bigg].
\end{align}
The moments of $w_i \psi(\bm{x}+\bm{c}_i)$ are given by the following relations:
\begin{subequations}
\begin{equation}
\label{eq:ZOMED}
M^0 = \sum_i w_i \psi(\bm{x}+\bm{c}_i) \approx \psi(\bm{x}) + \frac{c_s^2}{2} \Delta \psi(\bm{x}),
\end{equation}
\begin{equation}
\label{eq:FOMED}
M_{\alpha}^1 = \sum_i w_i c_{i \alpha} \psi(\bm{x}+\bm{c}_i) \approx
c_s^2 \partial_{\alpha} \psi(\bm{x})
+ \frac{c_s^4}{2} \partial_{\alpha} \Delta \psi(\bm{x}),
\end{equation}
\begin{equation}
\label{eq:SOMED}
M_{\alpha \beta}^2 = \sum_i w_i (c_{i\alpha} c_{i\beta} - c_s^2 \delta_{ \alpha \beta }) \psi(\bm{x}+\bm{c}_i) \approx c_s^4 \partial_{\alpha} \partial_{\beta} \psi(\bm{x}).
\end{equation}
\end{subequations}
It is worth mentioning that, when using the D2Q9 lattice, there are nine linearly independent discrete Hermite polynomials.
In Eq.~(\ref{eq:HEED}), only six of them were used, since they suffice for the purposes of the present work.
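These moments are straightforward to evaluate on a periodic grid; the following sketch (our own conventions, reusing \texttt{c}, \texttt{w} and \texttt{cs2} from the previous listings) returns $M^0$, $M^1_\alpha$ and $M^2_{\alpha\beta}$ at every node:
\begin{verbatim}
def moments(psi_field):
    """Discrete moments of w_i psi(x + c_i), Eqs. (eq:ZOMED)-(eq:SOMED)."""
    nx, ny = psi_field.shape
    M0 = np.zeros((nx, ny))
    M1 = np.zeros((nx, ny, 2))
    M2 = np.zeros((nx, ny, 2, 2))
    for i in range(9):
        nb = np.roll(psi_field, shift=(-c[i, 0], -c[i, 1]), axis=(0, 1))
        M0 += w[i]*nb
        M1 += w[i]*nb[..., None]*c[i]
        H2 = np.outer(c[i], c[i]) - cs2*np.eye(2)  # 2nd Hermite polynomial
        M2 += w[i]*nb[..., None, None]*H2
    return M0, M1, M2
\end{verbatim}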
\subsection{Developing a force approach}
\label{sec:FA}
As discussed in Sec.~\ref{sec:PMPC}, the terms of the pressure tensor control the
multiphase properties of the pseudopotential method. In particular, the following terms
are useful:
\begin{subequations}
\begin{equation}
\label{eq:PTCCC}
p^{(1)}_{\alpha \beta} = (\partial_\gamma \psi)(\partial_\gamma \psi) \delta_{\alpha \beta},
\end{equation}
\begin{equation}
\label{eq:PTCST}
p^{(2)}_{\alpha \beta} = \psi \partial_\alpha \partial_\beta \psi - (\psi \partial_\gamma \partial_\gamma \psi) \delta_{\alpha \beta}.
\end{equation}
\end{subequations}
The term $p_{\alpha \beta}^{(1)}$ directly affects the value of the parameter $\epsilon$
in Eq.~(\ref{eq:PGLDR}), influencing the shape of the saturation curve. On the other
hand, the term $p_{\alpha \beta}^{(2)}$ is related to the surface tension, because the
first term on the right-hand side of Eq.~(\ref{eq:PTCST}) is anisotropic. Note that this
term does not affect the $\epsilon$ parameter, and thus does not change the density relation
for the planar interface. In this way, by introducing these terms, the deficiencies of the
Shan-Chen method can be corrected. These pressure terms can be converted into equivalent
forces in the macroscopic governing equations using the relations:
\begin{subequations}
\begin{equation}
\label{eq:CCFT}
F_{\alpha}^{(1)} = - \partial_{\beta} p_{\alpha \beta}^{(1)} = - 2 (\partial_{\beta} \psi)(\partial_{\alpha} \partial_{\beta} \psi),
\end{equation}
\begin{equation}
\label{eq:STFT}
F_{\alpha}^{(2)} = - \partial_{\beta} p_{\alpha \beta}^{(2)} = - \partial_{\beta} \left[ \psi \partial_{\alpha} \partial_{\beta} \psi - (\psi \partial_{\gamma} \partial_{\gamma} \psi) \delta_{\alpha \beta} \right].
\end{equation}
\end{subequations}
Using the tensor identity, Eq.~(\ref{eq:TI}), it is possible to rewrite Eq.~(\ref{eq:STFT}) to
the following form:
\begin{equation}
\label{eq:SSTFT}
\begin{aligned}
F_{\alpha}^{(2)} =& - \partial_{\beta} \left[ (\partial_{\gamma} \psi) (\partial_{\gamma} \psi) \delta_{\alpha \beta}
- (\partial_{\alpha} \psi) (\partial_{\beta} \psi) \right] \\
=&~(\partial_{\alpha} \psi)(\partial_{\beta} \partial_{\beta} \psi) - (\partial_{\beta} \psi)(\partial_{\alpha} \partial_{\beta} \psi).
\end{aligned}
\end{equation}
Now that explicit expressions for the interaction forces have been obtained, it is necessary
to replace the spatial derivatives with numerical approximations. This can be done with
the moments defined by Eqs.~(\ref{eq:ZOMED}), (\ref{eq:FOMED}) and (\ref{eq:SOMED}).
Truncating the series at their first terms and replacing the derivatives, the following
expressions are obtained:
\begin{subequations}
\begin{equation}
\label{eq:DFTCCC}
F^{(1)}_{\alpha} = - 2 \frac{M^{1}_{\beta}}{c_s^2} \frac{M^{2}_{\alpha \beta}}{c_s^4},
\end{equation}
\begin{equation}
\label{eq:DFTCST}
F^{(2)}_{\alpha} = \frac{M^{1}_{\alpha}}{c_s^2} \frac{M^{2}_{\beta \beta}}{c_s^4} -
\frac{M^{1}_{\beta}}{c_s^2} \frac{M^{2}_{\alpha \beta}}{c_s^4}.
\end{equation}
\end{subequations}
Comparing Eqs.~(\ref{eq:ZOMED}) and (\ref{eq:SOMED}), it can be concluded that another
option is to use $M^{2}_{\beta \beta} = 2 c_s^2 (M^0 - \psi)$. Equations (\ref{eq:DFTCCC}) and
(\ref{eq:DFTCST}) are very useful and can be used to improve the Shan-Chen pseudopotential
method so as to achieve thermodynamic consistency and an adjustable surface tension. Based
on this finding, the force term shown in Eq.~(\ref{eq:IFT}) is proposed:
\begin{equation}
\label{eq:IFT}
F_\alpha = F^{SC}_{\alpha} - \frac{3}{4} \epsilon c_s^2 G F^{(1)}_{\alpha}
+ \left( \sigma - 1 \right) c_s^2 G F^{(2)}_{\alpha}.
\end{equation}
This force represents the general force proposed in Sec.~\ref{sec:GIPFP}. The tensor $A_{ij}$
is derived as follows. Expressing the interaction potential at the local node as
$\psi(\bm{x})=\psi(\bm{x}+\bm{c}_0)$ with $\bm{c}_0=0$, one can write:
\begin{eqnarray}
\psi(\bm{x}+\bm{c}_0) = \sum_j\psi(\bm{x}+\bm{c}_j)\delta_{j0}.
\end{eqnarray}
The Shan-Chen force, Eq.~(\ref{eq:SCF}), can be rewritten as:
\begin{eqnarray}
F_{\alpha}^{SC} = \sum_i \sum_j \left[ -\frac{2G}{c_s^2} w_i \delta_{j0} \right]
\psi(\bm{x}+\bm{c}_i) \psi(\bm{x}+\bm{c}_j).
\end{eqnarray}
Combining Eqs.~(\ref{eq:FOMED}) and (\ref{eq:SOMED}) with Eq.~(\ref{eq:DFTCCC}) results in:
\begin{eqnarray}
F_{\alpha}^{(1)} = &&~ -2 \left[ \sum_i \frac{w_i}{c_s^2} c_{i\beta} \psi(\bm{x}+\bm{c}_i) \right] \nonumber \\
&& \times \left[ \sum_j \frac{w_j}{c_s^4} (c_{j\alpha}c_{j\beta}-c_s^2 \delta_{\alpha \beta})
\psi(\bm{x}+\bm{c}_j) \right], \nonumber \\
F_{\alpha}^{(1)} = &&~ \sum_i \sum_j \left[ -2 \frac{w_i}{c_s^2} \frac{w_j}{c_s^4}
c_{i\beta}(c_{j\alpha}c_{j\beta}-c_s^2\delta_{\alpha\beta}) \right] \nonumber \\
&& \times~ \psi(\bm{x}+\bm{c}_i) \psi(\bm{x}+\bm{c}_j).
\end{eqnarray}
Now, note that the second term on the right-hand side of Eq.~(\ref{eq:DFTCST}) is equal to
$F_{\alpha}^{(1)}/2$, while the first term can be calculated with the help of
Eqs.~(\ref{eq:FOMED}) and (\ref{eq:SOMED}) as follows:
\begin{eqnarray}
\frac{M_{\alpha}^{1}}{c_s^2} \frac{M_{\beta\beta}^{2}}{c_s^4} =
\left[ \sum_i \frac{w_i}{c_s^2} c_{i\alpha} \psi(\bm{x}+\bm{c}_i) \right] \nonumber \\
\times \left[ \sum_j \frac{w_j}{c_s^4} (c_{j\beta}c_{j\beta}-c_s^2\delta_{\beta\beta}) \psi(\bm{x}+\bm{c}_j) \right], \nonumber \\
= \sum_i \sum_j \left[ \frac{w_i}{c_s^2} \frac{w_j}{c_s^4}
c_{i\alpha} (c_{j\beta}c_{j\beta}-c_s^2\delta_{\beta\beta}) \right] \nonumber \\
\times~\psi(\bm{x}+\bm{c}_i) \psi(\bm{x}+\bm{c}_j).
\end{eqnarray}
The term $F_{\alpha}^{(2)}$ is formulated as:
\begin{eqnarray}
F_{\alpha}^{(2)} = \sum_i \sum_j \frac{w_i}{c_s^2} \frac{w_j}{c_s^4}
\bigg[ c_{i\alpha} (c_{j\beta}c_{j\beta}-c_s^2\delta_{\beta\beta}) \nonumber \\
- c_{i\beta}(c_{j\alpha}c_{j\beta}-c_s^2\delta_{\alpha\beta}) \bigg]
\psi(\bm{x}+\bm{c}_i) \psi(\bm{x}+\bm{c}_j),
\end{eqnarray}
substituting the above relations into Eq.~(\ref{eq:IFT}),
the tensor $A_{ij}$ from Eq.~(\ref{eq:Ansatz}) can be determined:
\begin{eqnarray}
A_{ij} = -\frac{2G}{c_s^2} w_i \delta_{j0} \nonumber \\
+ \left[ \frac{3}{2} \epsilon - ( \sigma - 1 ) \right] c_s^2 G
\frac{w_i}{c_s^2} \frac{w_j}{c_s^4} c_{i\beta} (c_{j\alpha}c_{j\beta}-c_s^2\delta_{\alpha\beta}) \nonumber \\
+ ( \sigma - 1 ) c_s^2 G
\frac{w_i}{c_s^2} \frac{w_j}{c_s^4} c_{i\alpha} (c_{j\beta}c_{j\beta}-c_s^2\delta_{\beta\beta}).
\end{eqnarray}
When the above force is incorporated into the lattice Boltzmann equation using the
\citeauthor{guo2002discrete} forcing scheme, it results in the following pressure tensor in the momentum
conservation equation:
\begin{eqnarray}
\label{eq:PTIF}
p_{\alpha \beta} = p^{SC}_{\alpha \beta} - \frac{3}{4} \epsilon c_s^2 G p^{(1)}_{\alpha \beta}
+ \left( \sigma - 1 \right) c_s^2 G p^{(2)}_{\alpha \beta},
\end{eqnarray}
using Eqs.~(\ref{eq:CFPTSF}), (\ref{eq:PTCCC}) and (\ref{eq:PTCST}) in the above relation, the
final expression is obtained:
\begin{eqnarray}
\label{eq:PTIFT}
p_{\alpha \beta} = &&~ \bigg( c_s^2 \rho + G \psi^2 - \frac{3}{4} \epsilon c_s^2 G (\partial_\gamma \psi)(\partial_\gamma \psi) \nonumber \\
&&~ + \left( \frac{3}{2} - \sigma \right) c_s^2 G \psi \partial_{\gamma} \partial_{\gamma} \psi \bigg) \delta_{\alpha \beta}
\nonumber \\
&&~ + \sigma c_s^2 G \psi \partial_{\alpha} \partial_{\beta} \psi.
\end{eqnarray}
The force was written in such a way that the parameter $\epsilon$ of the consistency condition,
Eq.~(\ref{eq:CTC}), appears explicitly. This makes it easy to adjust the coexistence curve and then control
the surface tension through the coefficient $\sigma$ of the anisotropic term of the
pressure tensor, Eq.~(\ref{eq:PTIFT}).
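Putting the pieces together, a sketch of the complete proposed force, Eq.~(\ref{eq:IFT}), built from the moment-based terms of Eqs.~(\ref{eq:DFTCCC}) and (\ref{eq:DFTCST}) and the Shan-Chen part (both defined in earlier listings; again, an illustration under our own conventions rather than a reference implementation) is:
\begin{verbatim}
def total_force(psi_field, eps, sigma):
    """Proposed interaction force, Eq. (eq:IFT)."""
    M0, M1, M2 = moments(psi_field)
    A = M1/cs2                 # ~ grad(psi) to leading order
    B = M2/cs2**2              # ~ Hessian of psi to leading order
    F1 = -2.0*np.einsum('...b,...ab->...a', A, B)   # Eq. (eq:DFTCCC)
    trB = np.einsum('...bb->...', B)
    F2 = A*trB[..., None] + 0.5*F1                  # Eq. (eq:DFTCST)
    return (shan_chen_force(psi_field)
            - 0.75*eps*cs2*G*F1
            + (sigma - 1.0)*cs2*G*F2)
\end{verbatim}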
\section{Numerical Simulations}
\label{sec:NS}
\subsection{Static droplet and coexistence curve}
\label{sec:SDCC}
The first aspect of the presented model to be tested is the ability to control
the coexistence curve of the pseudopotential method through the choice of the parameter
$\epsilon$. Numerical simulations were performed using the Carnahan-Starling (C-S) equation
of state:
\begin{eqnarray}
\label{eq:CSEOS}
P_{EOS} = k \bigg[ c \rho T \frac{1 + b\rho + (b\rho)^2 - (b\rho)^3}{(1-b\rho)^3} - a \rho^2 \bigg],
\end{eqnarray}
where the parameters were chosen as $a=3.852462257$, $b=0.1304438842$ and $c=2.785855166$, which
are the same values used in Ref.~\cite{kupershtokh2009equations}. These authors
introduced the scaling factor $k$ in Eq.~(\ref{eq:CSEOS}), which can also be used to increase
the stability of the pseudopotential method \cite{hu2013equations}. This parameter was set to $k=0.01$.
Following \cite{huang2011forcing}, the computational domain is given by a mesh of 200 $\times$ 200 nodes
with periodic boundary condition. A liquid droplet is initialized in the center of the domain using
the function:
\begin{eqnarray}
\label{eq:ISD}
\rho(x,y) = \frac{\rho_{l}+\rho_{g}}{2} - \frac{\rho_l-\rho_g}{2} \text{tanh}
\bigg[ \frac{2 (R-R_0)}{W} \bigg],
\end{eqnarray}
where $W=5$ and $R=\sqrt{(x-x_0)^2+(y-y_0)^2}$, with $(x_0,y_0)$ being the central position
of the computational domain. For a given temperature, the values of $\rho_g$ and $\rho_l$
are initialized as the saturation densities obtained with the Maxwell equal area rule.
The velocity field was set to zero everywhere.
After the initialization of the macroscopic fields, the equilibrium distribution function,
Eq.~(\ref{eq:SEDF}), is determined at each lattice node and the particle distribution function
is set equal to the equilibrium function. The lattice Boltzmann equation was solved using
the BGK collision operator, Eq.~(\ref{eq:BGKCO}), with $\tau = 0.8$. Simulations were carried out until the following convergence criterion was satisfied:
\begin{equation}
\label{eq:SSC}
\frac{ \sum \mid [\rho(t)-\rho(t-100)] \mid }{ \sum \mid \rho(t) \mid } < 10^{-6}.
\end{equation}
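The initialization of Eq.~(\ref{eq:ISD}) and the convergence criterion of Eq.~(\ref{eq:SSC}) can be written, for instance, as (our own naming conventions):
\begin{verbatim}
import numpy as np

def init_droplet(nx, ny, rho_g, rho_l, R0, W=5.0):
    """Diffuse droplet density field, Eq. (eq:ISD), centered in the domain."""
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing='ij')
    R = np.hypot(x - nx/2.0, y - ny/2.0)
    return (0.5*(rho_l + rho_g)
            - 0.5*(rho_l - rho_g)*np.tanh(2.0*(R - R0)/W))

def converged(rho_now, rho_prev, tol=1e-6):
    """Steady-state criterion, Eq. (eq:SSC); rho_prev is 100 steps old."""
    return np.abs(rho_now - rho_prev).sum()/np.abs(rho_now).sum() < tol
\end{verbatim}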
Setting $G=-1$, $\epsilon=0$ and $\sigma=1$ in Eq.~(\ref{eq:IFT}), simulations were performed
for different temperatures. One can notice that using these parameters is equivalent to
using the original Shan-Chen force, Eq.~(\ref{eq:SCF}). Then, the value of $\epsilon$ was
adjusted until the saturation curve of the pseudopotential method matched the one given
by the equal area rule. The value $\epsilon=1.73$ was found to provide this adjustment.
The results can be seen in Fig.~(\ref{fig:SDCC}).
\begin{figure}
\includegraphics[width=80mm]{./Coexistence_Curve.eps}
\caption{Coexistence densities curves for planar interface. Comparison between
results obtained with the lattice Boltzmann simulations for two values of
the $\epsilon$ parameter and the results obtained analytically with the Maxwell
equal area rule.}
\label{fig:SDCC}
\end{figure}
\subsection{Young-Laplace Test}
\label{sec:YLT}
In order to evaluate the influence of the parameter $\sigma$ on the surface tension,
further static droplet simulations were performed.
The numerical procedure is similar to the one
used to produce the coexistence curve, with the difference that all tests are conducted
at a fixed temperature of $T_r=0.8$ and a fixed $\epsilon$ of value 1.73. Then the parameter
$\sigma$ is specified and, when the droplet reaches an equilibrium state, the surface tension
is measured using:
\begin{equation}
\label{eq:YLR}
\Delta P = \gamma \bigg( \frac{1}{R_1} + \frac{1}{R_2} \bigg),
\end{equation}
also known as the Young-Laplace relation. For a fixed $\sigma$, simulations
are performed for different radii, and the surface tension is expected to remain
constant. This whole procedure is then repeated for different values of $\sigma$.
In Eq.~(\ref{eq:YLR}), $\gamma$ is the surface tension and $\Delta P$ is the pressure difference across
the interface. The parameters $R_1$ and $R_2$ are the radii of curvature of the
interface. In the present case of a planar (two-dimensional) droplet, there is only
one radius of curvature, equal to the radius of the droplet. In order to measure the
radius, the interface of the droplet was defined as the location where
the density equals $\rho_m=(\rho_l+\rho_g)/2$.
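A simple way to extract $\Delta P$, the radius, and hence $\gamma$ from a converged density field is sketched below (our own construction; \texttt{p\_bulk} is assumed to evaluate the bulk pressure $c_s^2\rho + G\psi^2$, and the radius is read off at the nearest node, without sub-lattice interpolation):
\begin{verbatim}
import numpy as np

def young_laplace_gamma(rho, p_bulk):
    """Estimate gamma = dP * R from Eq. (eq:YLR) for a 2D droplet."""
    nx, ny = rho.shape
    rho_l = rho[nx//2, ny//2]          # droplet center
    rho_g = rho[0, 0]                  # far field
    rho_m = 0.5*(rho_l + rho_g)        # interface density
    line = rho[nx//2, ny//2:]          # radial cut from the center
    R = int(np.argmax(line < rho_m))   # first node below rho_m (approx.)
    dP = p_bulk(rho_l) - p_bulk(rho_g)
    return dP*R
\end{verbatim}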
In order to obtain a reference for the lattice Boltzmann simulation results,
the surface tension for the planar interface case was computed using the procedure described at the end of Sec.~\ref{sec:PMPC}.
Eq.~(\ref{eq:PTNNPI}) was solved numerically to obtain the density profile.
One should note that this equation depends only on the values of $A_1$ and $A_2+A_3$, which are given by Eq.~(\ref{eq:PTIFT}) as $A_1=-3 \epsilon c_s^2/4$ and $A_2 + A_3=3c_s^2/2$.
In this way, the density profile does not depend on the parameter $\sigma$, which influences only the surface tension value through the
coefficient $A_3$ in Eq.~(\ref{eq:STPI}), given by $A_3=\sigma c_s^2$.
The boundary conditions used are the phase densities. For a reduced temperature
$T_r=0.8$ and $\epsilon=1.73$, Eq.~(\ref{eq:PGLDR}) provides $\rho_{g}\approx0.1580$ and $\rho_{l}\approx2.3530$
as the vapor and liquid densities, respectively.
A spatial domain of length $L=30$ was used, and the differential equation was solved using three different mesh sizes: $\Delta x=0.1$, $\Delta x=0.05$ and $\Delta x=0.025$.
The density profile is shown in Fig.~(\ref{fig:TDP}). Convergence of the results can be observed, since the profiles remain very close under mesh refinement.
\begin{figure}
\includegraphics[width=80mm]{./Theoretical_Density_Profile.eps}
\caption{Theoretical density profile for a planar interface case. It was adopted $\epsilon=1.73$ and a reduced temperature of $T_r=0.8$.
Boundary conditions were $\rho_g=0.1580$ and $\rho_l=2.3530$.}
\label{fig:TDP}
\end{figure}
After that, the surface tension was computed using Eq.~(\ref{eq:STPI}).
For the specified conditions, the surface tension for a planar interface was found to be
$\gamma_{pi}\approx0.0148\sigma$.
A comparison between these results and the Young-Laplace test can be seen in Fig.~(\ref{fig:YLT}).
A small difference between the surface tension
values obtained with the droplet and the planar interface tests is expected,
because in the second case the density profile is obtained
by considering that the pressure is constant along the direction normal to the interface, which is not true for the
static droplet case. However, for large droplet radii one may expect
better agreement, since the interface curvature tends to zero, bringing the case closer to a planar
interface problem. This is exactly the behavior observed in Fig.~(\ref{fig:YLT}): for a radius of $50$ lattice sites,
which corresponds to $1/R=0.02$, the results of both cases are
very close. It was also observed that the method succeeds in controlling the surface tension by adjusting the parameter $\sigma$.
\begin{figure}
\includegraphics[width=80mm]{./Young_Laplace_Test.eps}
\caption{Young-Laplace tests for a static droplet with $\sigma$ varying from 0.2 to 1.4, performed for $\epsilon=1.73$ and
reduced temperature $T_r=0.8$. The solid lines represent
the theoretical surface tension for a planar interface, $\gamma_{pi}(\sigma)$.}
\label{fig:YLT}
\end{figure}
The force term, Eq.~(\ref{eq:IFT}), was devised in such a way that the surface tension
can be adjusted without affecting the coexistence densities. In order to test this property,
further simulations were performed. The static droplet was simulated in a similar way as in the previous examples,
but now with $T_r=0.8$, $\epsilon=1.73$, an initial droplet radius $R_0=50$,
and only $\sigma$ varied. For each test, the surface tension and the densities of the
liquid and gas phases were measured. The results can be seen in Table~(\ref{tab:DVT}).
The surface tension obtained
from the simulations ($\gamma$) was compared with the planar interface theoretical value ($\gamma_{pi}$)
for the same reduced temperature and $\epsilon$ parameter. Also, the phase densities were compared
with the ones obtained from the Maxwell equal area rule, which are $\rho_{gm}=0.1665$ and $\rho_{lm}=2.3550$ for the vapor and liquid phases, respectively.
In Table~(\ref{tab:DVT}), it can be observed that the surface tension can be varied widely without significantly affecting the phase densities.
\begin{table*}
\caption{Comparison between the variation of the surface tension and the variation of the
liquid and vapor densities, obtained by adjusting the parameter $\sigma$
for a droplet of radius $R=50$ modelled by the C-S equation of state at a reduced
temperature $T_r=0.8$. Also presented are the theoretical surface tension value for
a planar interface, $\gamma_{pi}$, and a comparison between the phase densities obtained
from the simulations and the vapor and liquid densities consistent with the Maxwell equal area
rule, given by $\rho_{gm}=0.1665$ and $\rho_{lm}=2.3550$.} \label{tab:DVT}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
$\sigma$ & $\gamma$ & $\gamma_{pi}$ & $100\cdot\gamma/\gamma_{pi}$ & $\rho_g$ & $100\cdot\rho_g/\rho_{gm}$ & $\rho_l$ & $100\cdot\rho_l/\rho_{lm}$ \\ \hline
4 & 0.0603 & 0.0592 & 101.86 & 0.1595 & 95.80 & 2.3725 & 100.74 \\
2 & 0.0290 & 0.0296 & 97.97 & 0.1658 & 99.58 & 2.3644 & 100.40 \\
1 & 0.0145 & 0.0148 & 97.97 & 0.1688 & 101.40 & 2.3603 & 100.23 \\
1/2 & 0.0074 & 0.0074 & 100.00 & 0.1704 & 102.34 & 2.3583 & 100.14 \\
1/4 & 0.0039 & 0.0037 & 105.41 & 0.1711 & 102.76 & 2.3573 & 100.10 \\
1/8 & 0.0020 & 0.00185 & 108.11 & 0.1715 & 103.00 & 2.3568 & 100.08 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\subsection{Droplet Oscillation}
\label{sec:DO}
The next case is a dynamic test. It consists of an elliptic droplet oscillating in a vapor
medium. Here, the C-S equation of state was used again.
Two simulations were conducted, for the reduced temperatures $T_r=0.6$ and $T_r=0.7$.
The surface tension values and the phase densities for these reduced temperatures
are given in Table~(\ref{tab:PRCases}).
The Young-Laplace test was applied to obtain the values of the surface tension.
\begin{table}[htbp]
\caption{\label{tab:PRCases}
Saturation densities and surface tension obtained through static droplet test for the
reduced temperatures $T_r=0.6$ and $T_r=0.7$ using the Carnahan-Starling (C-S) equation
of state.
}
\begin{ruledtabular}
\begin{tabular}{cccc}
\textrm{$T_r$}&
\textrm{$\rho_g$}&
\textrm{$\rho_l$}&
\textrm{$\gamma$}\\
\colrule
0.6 & 0.0224 & 3.1192 & 0.0461\\
0.7 & 0.0700 & 2.7504 & 0.0267\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}
\includegraphics[width=80mm]{./Droplet_Oscillation_1.eps}
\caption{Oscillation of an elliptic droplet for a fluid modelled by
the C-S equation of state with reduced temperature Tr = 0.6.}
\label{fig:DO1}
\end{figure}
\begin{figure}
\includegraphics[width=80mm]{./Droplet_Oscillation_2.eps}
\caption{Oscillation of an elliptic droplet for a fluid modelled by
the C-S equation of state with reduced temperature Tr = 0.7.}
\label{fig:DO2}
\end{figure}
An elliptic profile of major radius $R_{max}=30$ and minor radius
$R_{min}=27$ is initialized in a $200 \times 200$ grid. As the pseudopotential method is a diffuse interface technique,
a diffuse profile is initialized using Eq.~(\ref{eq:ISD}), but now $R_0$ is a function of the space coordinates,
$R_0=R_0(x,y)$, given by the following relations:
\begin{subequations}
\begin{equation}
\label{eq:REL}
R_0(\theta) = \frac{R_{min}}{ \sqrt{ 1-(e\cos(\theta))^2 } },
\end{equation}
\begin{equation}
\theta(x,y) = \arctan \left( \frac{y-y_0}{x-x_0} \right),
\end{equation}
\begin{equation}
e = \sqrt{1- \left( \frac{R_{min}}{R_{max}} \right)^2},
\end{equation}
\end{subequations}
with $(x_0,y_0)$ being the central position of the computational domain. The initial
distribution function field is set equal to the equilibrium function,
$f_i(t=0,\bm{x})=f_i^{eq}(t=0,\bm{x})$. Clearly, the initial state is not in equilibrium,
so some error due to the chosen initialization procedure is expected. To solve this case,
the lattice Boltzmann equation with the Gram-Schmidt based MRT collision operator, Eq.~(\ref{eq:MRTCO}), is used.
This choice is based on the fact that this collision term is more stable at low viscosity, which
is necessary in a dynamic test, as viscosity dissipates perturbations rapidly. The forcing
scheme used is given by Eqs.~(\ref{eq:GFS}) and (\ref{eq:GFSMRT}). The relaxation matrix
(more details in Appendix~\ref{sec:MRT}) is given by:
\begin{eqnarray}
\label{eq:RM}
\bm{\Lambda} = diag\left( 1,1,1,1,1,1,1,\tau^{-1},\tau^{-1} \right),
\end{eqnarray}
where $\tau=0.65$ was used, resulting in a kinematic viscosity $\nu=(\tau-0.5)/3=0.05$.
The droplet oscillation period is given analytically, according to \cite{lamb1932hydrodynamics},
by the relation:
\begin{eqnarray}
\label{eq:DOP}
T_a = 2 \pi \left[ n(n^2-1) \frac{\gamma}{\rho_l R_m^3} \right]^{-\frac{1}{2}},
\end{eqnarray}
where $R_m=\sqrt{R_{max}R_{min}}$ and $n=2$ for an initial elliptic shape \cite{li2013lattice,mukherjee2007pressure}.
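For reference, the analytical periods quoted below follow directly from Eq.~(\ref{eq:DOP}) with the data of Table~(\ref{tab:PRCases}); a short check in Python:
\begin{verbatim}
import numpy as np

def analytical_period(gamma, rho_l, R_max=30.0, R_min=27.0, n=2):
    """Oscillation period of Eq. (eq:DOP), with R_m = sqrt(Rmax*Rmin)."""
    R_m = np.sqrt(R_max*R_min)
    return 2.0*np.pi/np.sqrt(n*(n**2 - 1)*gamma/(rho_l*R_m**3))

print(analytical_period(0.0461, 3.1192))  # ~ 3204 (T_r = 0.6)
print(analytical_period(0.0267, 2.7504))  # ~ 3953 (T_r = 0.7)
\end{verbatim}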
The analytical result for $T_r=0.6$ is $T_a\approx3204$. The simulation was conducted for $4000$ time steps.
The distance between the right extremity of the ellipse and its center was measured every $100$ time steps.
The results are shown in Fig.~(\ref{fig:DO1}).
The numerical period of oscillation obtained is $T_n=3200$, which represents an absolute relative error
of $0.1\%$ with respect to the analytical solution.
For the case with reduced temperature $T_r=0.7$, the droplet has a thicker interface in
comparison with the case $T_r=0.6$, so a larger deviation in the
solution is expected. The analytical result using the data from
Table~(\ref{tab:PRCases}) is $T_a\approx3953$. Again, the distance between the right extremity of the
ellipse and its center was measured every $100$ time steps. The numerical period of oscillation is $T_n = 3600$,
which represents an absolute relative error of $9\%$ with respect to the analytical solution. Results are shown in Fig.~(\ref{fig:DO2}).
\section{Conclusion}
\label{sec:Conclusion}
In the present work, an interaction force able to control the liquid-gas density ratio
and the surface tension in the pseudopotential LBM was devised.
First, the pressure tensor was written in a generic form and the role of each term was
analyzed. Attention was paid to the property that different pressure tensors can result in the same divergence, reducing the number of terms necessary to describe
the pressure tensor.
Next, the \citeauthor{shan1993lattice} model was studied by means of an
equivalent pressure tensor including the third-order spatial discretization errors caused by the Guo forcing scheme.
Later, finite difference approximations were presented for the terms that constitute
the pressure tensor. These approximations were written in the same notation as the
moments of the distribution function.
To devise the new interaction force, suitable terms of the generic pressure tensor were
selected to complement the \citeauthor{shan1993lattice} model. Then an external force field
was derived that replicates the effects of these pressure tensor terms in
the conservation equations. This force field was converted into a numerical scheme using
the finite difference approximations presented in Sec.~\ref{sec:MOTED}. The result
is a numerical force to be implemented into the LBM with the Guo forcing scheme.
Numerical simulations of a static droplet showed the ability of the method to control
the liquid-gas density ratio and the surface tension. Good results were also obtained in dynamic tests.
The proposed numerical scheme is versatile, as the force was tested with both the
BGK and MRT collision operators with no change in the procedure to calculate the external
force. The new feature of this force is that it permits the control of these multiphase properties while considering only nearest-neighbor interactions, which provides computational efficiency in comparison with the interaction forces currently available in the literature.
\begin{acknowledgments}
The authors acknowledge the support received from CAPES (Coordination for the Improvement of Higher Education Personnel, Finance Code 001), from CNPq (National Council for Scientific and Technological Development, process 304972/2017-7) and FAPESP (S\~ao Paulo Foundation for Research Support, 2016/09509-1 and 2018/09041-5), for developing research that have contributed to this study.
\end{acknowledgments}
\section{Introduction}
The Landau equation from plasma physics models the evolution of a particle density $f(t,x,v)\geq 0$ in phase space, see e.g. \cite{chapmancowling, lifschitzpitaevskii}. In spatial dimension $d$, the equation is given by
\begin{align}
\partial_t f + v\cdot \nabla_x f &= Q_L(f,f) := \nabla_v \cdot \left( \int_{\mathbb R^d} a(v-w)[f(w)\nabla f(v) - f(v)\nabla f(w)] \, \mathrm{d} w \right),\label{e:main}\\
a(z) &= a_{d,\gamma}|z|^{\gamma+2}\left(I - \frac z {|z|}\otimes \frac z{|z|}\right).
\end{align}
Here, $t\in [0,T_0]$, $x\in\mathbb R^d$, $v\in\mathbb R^d$, $\gamma \geq -d$, and $a_{d,\gamma}>0$ is a physical constant. The Landau equation arises as the limit of the Boltzmann equation as grazing collisions predominate \cite{alexandre2004landau}. We are interested in both the case of \emph{moderately soft potentials}, $\gamma\in (-2,0)$, and \emph{very soft potentials}, $\gamma \in [-d,-2]$. The case $d=3$, $\gamma = -3$, corresponds to Coulomb interaction between particles at small scales.
As opposed to the Boltzmann collision operator, which is a purely integro-differential operator of fractional order, $Q_L$ is an operator of diffusion type whose coefficients depend nonlocally on $f$. In particular, the Landau equation \eqref{e:main} can be written in divergence form
\begin{equation}\label{e:divergence}
\partial_t f + v\cdot \nabla_x f = \nabla_v\cdot \left[\overline a(t,x,v)\nabla_v f\right] + \overline b(t,x,v)\cdot\nabla_v f + \overline c(t,x,v) f,
\end{equation}
or in nondivergence form
\begin{equation}\label{e:nondivergence}
\partial_t f + v\cdot \nabla_x f = \mbox{tr}\left[\overline a(t,x,v)D_v^2 f\right] + \overline c(t,x,v) f,
\end{equation}
with the coefficients $\overline a(t,x,v)\in \mathbb R^{d\times d}$, $\overline b(t,x,v) \in \mathbb R^d$, and $\overline c(t,x,v)\in \mathbb R$ defined by
\begin{align}
\overline a(t,x,v) &:= a_{d,\gamma}\int_{\mathbb R^d} \left( I - \frac w {|w|} \otimes \frac w {|w|}\right) |w|^{\gamma + 2} f(t,x,v-w) \, \mathrm{d} w,\label{e:a}\\
\overline b(t,x,v) &:= b_{d,\gamma}\int_{\mathbb R^d} |w|^\gamma w f(t,x,v-w)\, \mathrm{d} w,\label{e:b}\\
\overline c(t,x,v) &:= c_{d,\gamma}\int_{\mathbb R^d} |w|^\gamma f(t,x,v-w)\, \mathrm{d} w, \label{e:c}
\end{align}
for some constants $a_{d,\gamma}$, $b_{d,\gamma}$, and $c_{d,\gamma}$. When $\gamma = -d$, the expression for $\overline c$ must be replaced by $c_{d,\gamma} f$. We use both formulations \eqref{e:divergence} and \eqref{e:nondivergence}, which are equivalent as long as, say, $f\in H_{loc}^1$ and $f$ has enough decay so that $\overline a$, $\overline b$, and $\overline c$ are well-defined.
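The equivalence rests on the pointwise identity
\[\sum_{j=1}^d \partial_{z_j}\left[|z|^{\gamma+2}\left(\delta_{ij} - \frac{z_iz_j}{|z|^2}\right)\right] = -(d-1)|z|^\gamma z_i,\]
so that $\nabla_v\cdot \overline a$ is a constant multiple of $\overline b$ (indeed $\nabla_v\cdot\overline a = -\overline b$ for the appropriate normalization of $b_{d,\gamma}$), and the first-order term in the expansion $\nabla_v\cdot(\overline a\nabla_v f) = \mbox{tr}(\overline a D_v^2 f) + (\nabla_v \cdot \overline a)\cdot \nabla_v f$ is absorbed into $\overline b\cdot \nabla_v f$.

Since the coefficients are nonlocal in $v$, it may also help to see how \eqref{e:a}--\eqref{e:c} would be evaluated in practice. The following minimal sketch computes the three coefficients at a single point by a crude midpoint quadrature on a velocity grid; the constants \texttt{a\_dg}, \texttt{b\_dg}, \texttt{c\_dg} and the cutoff \texttt{eps} are placeholders, not the physical values.
\begin{verbatim}
import numpy as np

d, gamma = 3, -1.0
a_dg = b_dg = c_dg = 1.0    # placeholder constants
eps = 1e-10                 # cutoff avoiding the singularity at w = 0

def coefficients(v, V, f_vals, dv):
    # V: (N, d) velocity nodes; f_vals: f(t, x, .) on V; dv: cell volume.
    w = v - V                                # integration variable w = v - v'
    r = np.maximum(np.linalg.norm(w, axis=1), eps)
    proj = (np.eye(d)[None]
            - w[:, :, None] * w[:, None, :] / r[:, None, None]**2)
    a_bar = a_dg * np.einsum("n,nij,n->ij", r**(gamma + 2), proj, f_vals) * dv
    b_bar = b_dg * ((r**gamma * f_vals)[:, None] * w).sum(axis=0) * dv
    c_bar = c_dg * (r**gamma * f_vals).sum() * dv
    return a_bar, b_bar, c_bar
\end{verbatim}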
We make the following assumptions on the mass density, energy density, and entropy density:
\begin{align}
0<m_0\leq \int_{\mathbb R^d} f(t,x,v)\, \mathrm{d} v \leq M_0,\label{e:M0}\\
\int_{\mathbb R^d} |v|^2 f(t,x,v)\, \mathrm{d} v \leq E_0,
\qquad \text{and}\label{e:E0}\\
\int_{\mathbb R^d} f(t,x,v) \log f(t,x,v) \, \mathrm{d} v \leq H_0,\label{e:H0}
\end{align}
uniformly in $t\geq 0$ and $x\in\mathbb R^d$. In the spatially homogeneous case, i.e.~when $f$ is assumed to be independent of $x$, the mass and energy are conserved, and the entropy is monotonically decreasing; hence, in this case, it would suffice to assume that the initial data have finite mass, energy, and entropy. It is not currently known whether these hydrodynamic quantities stay under control for $t>0$ in the inhomogeneous case, so we include \eqref{e:M0}, \eqref{e:E0}, and \eqref{e:H0} as \emph{a priori} assumptions.
We are interested in the regularity of weak solutions to \eqref{e:main}. We use the following notion of weak solution, which is implicitly used in \cite{golse2016} and \cite{cameron2017landau}:
\begin{definition}
We say $f:[0,T_0]\times \mathbb R^d\times\mathbb R^d\to\mathbb R_+$ is a weak solution of \eqref{e:divergence} if $f$, $\nabla_v f$, $\partial_t f + v\cdot \nabla_x f \in L^2_{loc}(\mathbb R^{2d+1})$, the coefficients $\overline a$, $\overline b$, and $\overline c$ are well-defined, and
\[\int_{\mathbb R^{2d+1}} (\partial_t f + v\cdot \nabla_x f)\phi \, \mathrm{d} v\, \mathrm{d} x \, \mathrm{d} t = \int_{\mathbb R^{2d+1}} \left(- \langle \overline a \nabla_v f, \nabla_v \phi\rangle + (\overline b \cdot \nabla_v f + \overline c f)\phi\right)\, \mathrm{d} v \, \mathrm{d} x \, \mathrm{d} t \]
for all $\phi \in H_0^1(\mathbb R^{2d+1})$.
\end{definition}
Our main result states that weak solutions immediately become smooth, for any initial data that is bounded by a Gaussian and regular enough for a weak solution to exist:
\begin{theorem}\label{t:main}
Let $\gamma \in (-2,0)$, and let $f:[0,T_0]\times\mathbb R^d\times\mathbb R^d\to\mathbb R_+$ be a bounded weak solution of the Landau equation \eqref{e:main} satisfying the bounds \eqref{e:M0}, \eqref{e:E0}, and \eqref{e:H0}. There exists $\mu_0>0$ depending on $d$, $\gamma$, $m_0$, $M_0$, $E_0$, and $H_0$ such that if the initial data $f_{\rm in}$ satisfies
\[f_{\rm in}(x,v) \leq C_0 e^{-\mu|v|^2}\]
for some $C_0>0$ and $\mu>0$, then $f\in C^\infty((0,T_0]\times \mathbb R^d\times\mathbb R^d)$, and for any $\mu'<\min\{\mu_0,\mu\}$, any integer $j \geq 0$, and any multi-indices $\beta$ and $\eta$ with non-negative integer coordinates, the partial derivatives of $f$ satisfy the pointwise estimates
\begin{equation}\label{e:pointwise}
|\partial_t^j\partial_x^\beta\partial_v^\eta f(t,x,v)| \leq C \left(1+t^{-q}\right)e^{-\mu'|v|^2}.
\end{equation}
The constants $C, q\geq 0$ depend on $d$, $\gamma$, $m_0$, $M_0$, $E_0$, $H_0$, $\mu'$, $j$, $|\beta|$, $|\eta|$, and $C_0$.
For $\gamma\in [-d,-2]$, if we make the additional assumption that for all $t\in[0,T_0]$ and $x\in \mathbb R^d$,
\begin{equation}\label{e:4thmoment}
\int_{\mathbb R^d}|v|^p f(t,x,v)\, \mathrm{d} v \leq P_0,
\end{equation}
where $p$ is the smallest integer such that $p>\dfrac{d|\gamma|}{2+\gamma+d}$, then the same conclusion holds, with all constants depending additionally on $P_0$ and $\|f\|_{L^\infty([0,T_0]\times \mathbb R^d\times \mathbb R^d)}$. If $\gamma \leq -d/2-1$, the constants also depend on $T_0$.
\end{theorem}
The question of global-in-time existence of smooth solutions to \eqref{e:main} for non-perturbative initial data remains a challenging open problem. In the case of moderately soft potentials, Theorem \ref{t:main} implies a physically meaningful continuation criterion: any loss of smoothness of $f$ can be detected at the macroscopic level by a breakdown of the bounds on the mass, energy, or entropy density.
Our proof of Theorem \ref{t:main} relies on three elements:
\begin{enumerate}
\item[1.] The local H\"older continuity of solutions to \eqref{e:main}, which was established in \cite{wang2011ultraparabolic} and \cite{golse2016}.
\item[2.] Decay of the solution $f$ for large velocities, and corresponding decay in the local estimates, which is needed to pass regularity of $f$ to regularity of the coefficients $\overline a$ and $\overline c$ in \eqref{e:nondivergence}.
\item[3.] Local Schauder-type estimates for kinetic Fokker-Planck equations with H\"older continuous coefficients, which we prove in Section \ref{s:schauder} and apply iteratively in Section \ref{s:landau}.
\end{enumerate}
The second point is where our assumption that $f_{\rm in}$ is bounded by a Gaussian comes in. In \cite{cameron2017landau}, it was shown that this upper bound is propagated for all $t\in (0,T_0]$ when $\gamma\in (-2,0)$. We extend this to $\gamma\in [-d,-2]$ in Theorem \ref{t:gaussian}, under more restrictive assumptions; however, if we could guarantee by any other method that sufficiently high moments of the solution are finite (as in the hypotheses of \cite{chen2009smoothing} and \cite{liu2014regularization}, see below), our proof would still go through. It was shown in \cite{cameron2017landau} that solutions of \eqref{e:main} satisfying the hydrodynamic bounds \eqref{e:M0}, \eqref{e:E0}, and \eqref{e:H0} satisfy \emph{a priori} pointwise decay proportional to $(1+|v|)^{-1}$ for arbitrary initial data, but this is not strong enough for our purposes because of the slowly decaying kernels in \eqref{e:a} and \eqref{e:c}. It was also shown in \cite{cameron2017landau} that \emph{a priori} Gaussian decay cannot hold without any decay assumption on $f_{\rm in}(x,v)$.
\subsection{Related work}
In \cite{chen2009smoothing}, the authors show that classical solutions of \eqref{e:main} defined on a three-dimensional torus are $C^\infty$ in all three variables, provided that infinitely many moments of the solution and its first eight derivatives in $x$ and $v$ remain bounded uniformly in time and provided that the solution remains bounded away from vacuum.
A corresponding result for solutions defined on $\mathbb R^3$ was shown in \cite{liu2014regularization}, in the case $\gamma\in [-3,-2)$. Our Theorem \ref{t:main} extends these results in the case where $f_{\rm in}$ is bounded by a Gaussian.
The assumptions \eqref{e:M0}, \eqref{e:E0}, and \eqref{e:H0} are much weaker than the \emph{a priori} regularity hypotheses of \cite{chen2009smoothing} and \cite{liu2014regularization}, and are defined in terms of physically relevant hydrodynamic quantities. At least in the case $\gamma\in (-2,0)$, our estimates do not depend quantitatively on the $L^\infty$ norm of $f$.
Local H\"older estimates for kinetic equations with rough coefficients were proven by Wang-Zhang \cite{wang2011ultraparabolic} and Golse-Imbert-Mouhot-Vasseur \cite{golse2016}, and this is the starting point for the application of our Schauder estimates. The first global regularity estimates for \eqref{e:main} in this setting (weak solutions with bounded mass, energy, and entropy) were established in \cite{cameron2017landau}. The ellipticity constants of the diffusion operator $Q_L$ degenerate as $|v|\to \infty$ in a non-isotropic way (see Appendix \ref{s:A}). To deal with this, we use a change of variables derived in \cite{cameron2017landau} to obtain an equation with universal ellipticity constants in a small cylinder (see Lemma \ref{l:T}).
Regarding the existence theory for \eqref{e:main}, global-in-time classical solutions have only been constructed in the close-to-equilibrium setting: see the work of Guo \cite{guo2002periodic} in the $x$-periodic case, and Mouhot-Neumann \cite{mouhot2006equilibrium} in the whole space. For general initial data, Villani \cite{villani1996global} constructed so-called renormalized solutions with defect measure for the Landau equation. More recently, He-Yang \cite{he2014boltzmannlandau} established the short-time existence of spatially periodic classical solutions to \eqref{e:main} in the Coulomb case ($\gamma = -d$) with initial data in a weighted $H_{x,v}^7$ space, by taking the grazing collisions limit in their estimates on the Boltzmann collision operator. They assume that the mass density of the initial data is uniformly bounded away from zero. Since this lower bound along with the bounds \eqref{e:M0}, \eqref{e:E0}, \eqref{e:H0}, and \eqref{e:4thmoment} can be shown to propagate for a short time, our Theorem \ref{t:main} combined with \cite{he2014boltzmannlandau} provides a $C^\infty$ solution to the Cauchy problem for suitable initial data. However, on physical grounds, the equation should be expected to be well-posed even with vacuum regions in the initial data. We explore this issue, as well as short-time existence for a broader range of $\gamma$, in a forthcoming paper.
For the spatially homogeneous Landau equation, $C^\infty$ smoothing was established in \cite{desvillettes2000landau} in the case $\gamma>0$ and \cite{villani1998landau} in the $\gamma = 0$ case. For $\gamma \in (-2,0)$, the upper bounds of \cite{silvestre2015landau} also imply smoothing via parabolic regularity theory. For $\gamma\in [-d,-2]$, the result of Theorem \ref{t:main} is new even in the space homogeneous case, to the best of our knowledge.
\subsection{Schauder estimates}
Our main technical tools are local Schauder-type estimates for linear kinetic Fokker-Planck equations of the form
\begin{equation}\label{e:holder}
\partial_t u + v\cdot \nabla_x u = \mbox{tr}(AD_v^2 u) + g,
\end{equation}
with $A$ and $g$ H\"older continuous (see Theorem \ref{t:weak-schauder} below). Schauder estimates have been established in the more general setting of ultraparabolic equations by Manfredini \cite{manfredini1997ultraparabolic}, DiFrancesco-Polidoro \cite{difrancesco2006schauder}, and Bramanti-Brandolini \cite{bramanti2007schauder}, among others. However, there are two complications involved in bootstrapping regularity estimates in this context: based on the natural scaling of the equation, Schauder estimates should be expected to bound two derivatives in $v$, one derivative in $t$, and two-thirds of a derivative in $x$ (i.e. the $\frac 2 3$-H\"older norm in $x$) of $u$, which is not enough to directly conclude $u$ is a classical solution. Even worse, Schauder estimates do not provide $C^\alpha$ estimates on $\partial_t u$, but rather on $\partial_t u + v\cdot\nabla_x u$. This is related to the non-symmetric Lie group structure of the equation, which shows up in the representation formula \eqref{e:convolution} of the solution.
To get around this, we prove a second estimate that bounds $\partial_t u$ and $\nabla_x u$ in terms of the $C^{1+\alpha}$-norm of $g$. We give elementary proofs of the estimates we need, using the explicit fundamental solution for constant-coefficient equations.
\subsection{Organization of the paper} In Section \ref{s:schauder}, we prove regularity estimates for kinetic equations with H\"older continuous coefficients. In Section \ref{s:landau}, we apply these estimates iteratively to weak solutions of the Landau equation.
In Appendix \ref{s:A}, we review the bounds on the coefficients $\overline a$, $\overline b$, and $\overline c$ in \eqref{e:nondivergence}.
\subsection{Notation}
We let $z=(t,x,v)$ denote a point in $\mathbb R_+\times \mathbb R^d\times \mathbb R^d$. For any $z_0=(t_0,x_0,v_0)$, define the Galilean transformation
\[\mathcal S_{z_0}(t,x,v) := (t_0+t, x_0 + x +tv_0,v_0+v).\]
We also have
\[\mathcal S_{z_0}^{-1}(t,x,v) := (t-t_0, x - x_0 -(t-t_0)v_0,v-v_0).\]
For $r>0$, define the scaling $\delta_r$ by
\[\delta_r(t,x,v) = (r^2t,r^3x,rv).\]
The class of equations of the form \eqref{e:holder} is invariant under $\mathcal S_{z_0}$ and $\delta_r$; the identities recorded below make this precise. We also define the quasimetric
\[\rho(z,z') := \|\mathcal S_{z}^{-1} z'\| = |t'-t|^{1/2} + |x'-x - (t'-t)v|^{1/3} + |v'-v|,\]
where
\[\|z-z'\| := |t-t'|^{1/2} + |x-x'|^{1/3} + |v-v'|.\]
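For later use, we note how these operations interact; directly from the definitions,
\[\mathcal S_{\delta_r z}^{-1}(\delta_r z') = \delta_r\left(\mathcal S_{z}^{-1} z'\right), \qquad \|\delta_r z\| = r\|z\|, \qquad \text{so } \rho(\delta_r z,\delta_r z') = r\,\rho(z,z').\]
In the same way, if $u$ solves \eqref{e:holder}, then $u_r(z) := u(\mathcal S_{z_0}\delta_r z)$ satisfies
\[\partial_t u_r + v\cdot\nabla_x u_r = \mbox{tr}\left[A(\mathcal S_{z_0}\delta_r z)D_v^2 u_r\right] + r^2 g(\mathcal S_{z_0}\delta_r z),\]
so the rescaled function solves an equation of the same form, with the same ellipticity constants.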
For any $r>0$ and $z_0 = (t_0,x_0,v_0)$, let
\[ Q_r(z_0) :=
(t_0-r^2,t_0] \times \{x : |x-x_0 - (t-t_0) v_0| < r^3 \}\times B_r(v_0), \]
and $Q_r = Q_r(0,0,0)$.
We say a constant is universal if it depends only on $\gamma$, $d$, $m_0$, $M_0$, $E_0$, and $H_0$ when $\gamma \in (-2,0)$. When $\gamma\in [-d,-2]$, we also allow universal constants to depend on $P_0$ and $\|f\|_{L^\infty([0,T_0]\times \mathbb R^d\times \mathbb R^d)}$. The notation $A\lesssim B$ means that $A\leq CB$ for a constant $C$ that depends on the quantities listed in the statement of the given lemma or theorem, and $A\approx B$ means that $A\lesssim B$ and $B\lesssim A$.
\section{Schauder estimates for linear kinetic equations}\label{s:schauder}
In this section, we obtain regularity estimates for equations of the form \eqref{e:holder}. We begin by defining H\"older norms and semi-norms that correspond to $\rho$.
\begin{definition} Let $Q\subseteq \mathbb R^{2d+1}$. For $u:Q\to \mathbb R$, define
\begin{align*}
[u]_{\alpha,Q} &:= \sup_{\substack{z,z'\in Q,\\ z\neq z'}} \frac{|u(z) - u(z')|}{\rho(z,z')^\alpha}\\
[u]_{\alpha,x,Q} &:= \sup_{\substack{(t,x,v),(t,x',v)\in Q,\\ x\neq x'}} \frac{|u(t,x,v) - u(t,x',v)|}{|x-x'|^{\alpha}},\\
[u]_{\alpha,t,Q} &:= \sup_{\substack{(t,x,v),(t',x,v)\in Q,\\ t\neq t'}}\frac{|u(t,x,v) - u(t',x,v)|}{|t-t'|^{\alpha}+|(t'-t)v|^{2\alpha/3}}\\
|u|_{0,Q} &:= \sup_{z\in Q} |u(z)|\\
|u|_{\alpha,Q} &:= |u|_{0,Q} + [u]_{\alpha,Q}\\
[u]_{1+\alpha,Q} &:= [\nabla_v u]_{\alpha,Q} + [u]_{(1+\alpha)/2,t,Q} + [u]_{(1+\alpha)/3,x,Q}\\
|u|_{1+\alpha,Q} &:= |u|_{0,Q} + |\nabla_v u|_{0,Q} + [u]_{1+\alpha,Q}\\
[u]_{2+\alpha,Q} &:= [D_v^2 u]_{\alpha,Q} + \left[\partial_t u\right]_{\alpha,Q} + [u]_{(2+\alpha)/3,x,Q}\\
|u|_{2+\alpha,Q} &:= |u|_{0,Q} + |\partial_t u|_{0,Q} + |\nabla_v u|_{0,Q} + |D_v^2 u|_{0,Q} + [u]_{2+\alpha,Q}.
\end{align*}
For $\beta\in (0,3)$, if $|u|_{\beta,Q} <\infty$, we say $u\in C^{\beta}(Q)$.
\end{definition}
If $u$ is in $C^\alpha(Q)$ by this definition, then in particular, $u$ is $\frac \alpha 3$-H\"older continuous in the Euclidean metric on $\mathbb R^{2d+1}$.
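Indeed, if $Q$ is bounded, then for $z,z'\in Q$,
\[\rho(z,z') \leq |t-t'|^{1/2} + \left(|x-x'| + |t-t'|\,|v|\right)^{1/3} + |v-v'| \lesssim |z-z'|^{1/3},\]
with implied constant depending on the diameter of $Q$, so that $|u(z)-u(z')| \leq [u]_{\alpha,Q}\,\rho(z,z')^\alpha \lesssim |z-z'|^{\alpha/3}$. We use the following lemma repeatedly: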
\begin{lemma}[Interpolation Inequalities]\label{l:interp}
Let $Q = Q_r(z_0)$ for some $z_0\in \mathbb R^{2d+1}$ and $r>0$, and let $u\in C^{2+\alpha}(Q)$. There exists a constant $C$, depending only on the dimension, such that for any $\varepsilon>0$,
\begin{align*}
[u]_{\alpha,Q} &\leq \varepsilon^{2}[u]_{2+\alpha,Q} + C\varepsilon^{-\alpha}|u|_{0,Q},\\
|\partial_t u|_{0,Q} &\leq \varepsilon^\alpha [\partial_t u]_{\alpha,Q} + C\varepsilon^{-2}|u|_{0,Q},\\
|\nabla_v u|_{0,Q} &\leq \varepsilon^{1+\alpha}[u]_{2+\alpha,Q} + C\varepsilon^{-1}|u|_{0,Q},\\
[\nabla_v u]_{\alpha,Q} &\leq \varepsilon [u]_{2+\alpha,Q} + C\varepsilon^{-(1+\alpha)}|u|_{0,Q},\\
|D_v^2 u|_{0,Q} &\leq \varepsilon^\alpha [u]_{2+\alpha,Q} + C\varepsilon^{-2}|u|_{0,Q}.
\end{align*}
If $D_v^3 u, \nabla_x u\in C^{\alpha}(Q)$, we also have
\begin{align*}
|D_v^3 u|_{0,Q} &\leq \varepsilon^\alpha[D_v^3 u]_{\alpha,Q} + C\varepsilon^{-3}|u|_{0,Q}\\
|\nabla_x u|_{0,Q} &\leq \varepsilon^{\alpha} [\nabla_x u]_{\alpha,Q} + C\varepsilon^{-3}|u|_{0,Q}.
\end{align*}
\end{lemma}
The method of proving inequalities of this type is standard. (See, for example, \cite{manfredini1997ultraparabolic} or \cite[Theorem~8.8.1]{krylov_holder}). Briefly, it suffices to prove the case $\varepsilon = 1$ by scaling. To prove the first inequality, one estimates $|u(z) - u(z')|$ by writing $z-z'$ as a sum of segments parallel to the coordinate axes, and applying the mean value inequality. The details are omitted.
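For illustration, here is the one-dimensional model computation behind the third inequality: for $0<h\leq 1$, Taylor's theorem with integral remainder gives
\[u(v\pm h) = u(v) \pm h\, u'(v) + \frac{h^2}{2}u''(v) + R_\pm, \qquad |R_\pm| \lesssim h^{2+\alpha}\,[u'']_{\alpha},\]
and subtracting the two expansions cancels the zeroth- and second-order terms, so that
\[|u'(v)| \leq \frac{|u(v+h)-u(v-h)|}{2h} + \frac{|R_+|+|R_-|}{2h} \lesssim \frac{|u|_{0}}{h} + h^{1+\alpha}\,[u'']_{\alpha},\]
which is the stated bound with $\varepsilon = h$, after adjusting for the anisotropic scaling.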
Finally, we define the non-scale-invariant H\"older seminorms that correspond to our regularity estimates:
\begin{definition}\label{d:prime}
For $Q \subseteq \mathbb R^{2d+1}$, $u:Q\to \mathbb R$, and $\alpha,\beta\in (0,1)$, define
\begin{align*}
[u]_{2+\alpha,\beta,Q}' :&= [D_v^2 u]_{\alpha,Q} + [u]_{(2+\alpha)/3,x,Q} + [u]_{\beta,t,Q},\\
[u]_{3+\alpha,Q}'' &:= [\partial_t u]_{\alpha,Q} + [\nabla_x u]_{\alpha,Q} + [D_v^3 u]_{\alpha,Q}.
\end{align*}
\end{definition}
\subsection{Constant coefficients}
Consider the equation
\begin{equation}\label{e:simple}
u_t + v\cdot \nabla_x u - \Delta_v u = g,
\end{equation}
in $(-1,0]\times \mathbb R^d\times \mathbb R^d $
with zero initial data at $t=-1$. The explicit fundamental solution for this equation is given by
\begin{equation}\label{e:Gamma}
\Gamma(z) := \begin{cases}\dfrac {C_d}{t^{2d}} \mbox{exp}\left(-\dfrac{|v|^2}{t} - \dfrac {3 v\cdot x}{t^2} - \dfrac {3|x|^2}{t^3}\right), &t>0,\\
0, &t\leq 0,\end{cases}
\end{equation}
where $C_d = (\sqrt{3} / (2\pi))^d$. More precisely, if $g$ is, say, continuous, bounded, and has support contained in $\{t > -1\}$, then \eqref{e:simple} is uniquely solved by
\begin{equation}\label{e:convolution}
u(z) = \int_{\mathbb R^{2d+1}} \Gamma\left(\mathcal S_{\zeta}^{-1} z\right) g(\zeta) \, \mathrm{d} \zeta,
\end{equation}
where $\zeta = (s,y,w)$ and $\mathcal S_{\zeta}^{-1} z = (t-s,x-y-(t-s)w,v-w)$. The fundamental solution $\Gamma$ is a special case of the solution constructed by H\"ormander \cite{hormander1967} for more general hypoelliptic equations. (See also \cite{lanconelli1994evolution, manfredini1997ultraparabolic}.)
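Before stating the next lemma, we note that one can sanity-check \eqref{e:Gamma} numerically; in dimension $d=1$, on a truncated grid, the total mass of $\Gamma(t,\cdot,\cdot)$ should equal $1$, and $\int_{\mathbb R}\partial_{vv}\Gamma\,\mathrm{d} v$ should vanish, a cancellation exploited repeatedly below. The grid and the time $t$ in the sketch are arbitrary choices.
\begin{verbatim}
import numpy as np

t = 0.5
C1 = np.sqrt(3.0) / (2.0 * np.pi)          # C_d for d = 1
x = np.linspace(-6.0, 6.0, 2001)
v = np.linspace(-6.0, 6.0, 2001)
dx, dv = x[1] - x[0], v[1] - v[0]
X, V = np.meshgrid(x, v, indexing="ij")

G = C1 / t**2 * np.exp(-V**2 / t - 3 * V * X / t**2 - 3 * X**2 / t**3)

print("mass:", G.sum() * dx * dv)          # ~1.0
dG = np.gradient(np.gradient(G, dv, axis=1), dv, axis=1)
print("cancellation:", np.abs(dG.sum(axis=1) * dv).max())   # ~0.0
\end{verbatim}
The following lemma provides a useful characterization of the homogeneity of the fundamental solution: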
\begin{lemma}\label{l:Gamma-estimates}
For any partial derivative $\partial_t^j \partial^\beta_x\partial^\eta_v \Gamma$ of $\Gamma$, with $\beta$ a multi-index of order $k\geq 0$ and $\eta$ a multi-index of order $\ell\geq 0$, there exists a constant $C = C(d,j,k,\ell)$ such that for all $t>0$ and $p,q\geq 0$
\[\int_{\mathbb R^d}\int_{\mathbb R^d}|\partial_t^j \partial^\beta_x\partial^\eta_v \Gamma(t+\xi_1,y+\xi_2,w+\xi_3)||y|^p |w|^q \, \mathrm{d} w \, \mathrm{d} y \leq C t^{-(\ell/2 + j + 3k/2)+3p/2 + q/2}. \]
Further, if $\xi \in [0,1]\times \mathbb R^d \times \mathbb R^d$ and $\|\xi\| \leq t^{1/2}/2$, then
\[
\int_{\mathbb R^d}\int_{\mathbb R^d}|\partial_t^j\partial^\beta_x\partial^\eta_v \Gamma(z + \xi)||y|^p |w|^q \, \mathrm{d} w \, \mathrm{d} y
\leq C t^{-(\ell/2 + j + 3k/2) + 3p/2 + q/2},
\]
where $z = (t,x,v)$.
\end{lemma}
\begin{proof}
It is straightforward to show by induction that every partial derivative of $\Gamma$ can be written
\[\partial_t^j \partial^\beta_x\partial^\eta_v \Gamma(t,y,w) = P_{j,\beta,\eta}\left(\frac 1 {t^{1/2}} , \frac {y_1} {t^2}, \ldots,\frac {y_d}{t^2},\frac {w_1} {t}, \ldots,\frac {w_d}{t} \right)\Gamma(t,y,w),\]
with $P_{j,\beta,\eta}$ a homogeneous polynomial where each term is of degree exactly $\ell + 2j + 3k$. Since $\mbox{exp}( -|w|^2/t - 3 w\cdot y / t^2 - 3 |y|^2 / t^3) \leq \mbox{exp}(- |w|^2/(16 t) - 3|y|^2/(5t^3))$, formula \eqref{e:Gamma} for $\Gamma$ implies
\begin{align*}
\int_{\mathbb R^d}\int_{\mathbb R^d} |\partial_t^j \partial^\beta_x\partial^\eta_v&\Gamma(t,y,w)||y|^p |w|^q \, \mathrm{d} w \, \mathrm{d} y\\
& = \frac {C_d} {t^{2d}} \int_{\mathbb R^d} \int_{\mathbb R^d} \left| P_{j,\beta,\eta}\left(\frac 1 {t^{1/2}} , \frac {y_1} {t^2}, \ldots,\frac {y_d}{t^2},\frac {w_1} {t}, \ldots,\frac {w_d}{t} \right)\Gamma(t,y,w)\right||y|^p |w|^q\, \mathrm{d} w \, \mathrm{d} y\\
&\leq C_d \int_{\mathbb R^d} \int_{\mathbb R^d} \left|P_{j,\beta,\eta}\left(\frac 1 {t^{1/2}}, \frac {y_1} {t^{1/2}}, \ldots,\frac {y_d}{t^{1/2}},\frac {w_1} {t^{1/2}}, \ldots,\frac {w_d}{t^{1/2}} \right)\right.\\
&\quad \qquad \qquad\qquad\left. \times \mbox{exp}\left( -\dfrac{|\overline w|^2}{16} - \dfrac {3|\overline y|^2}{5 }\right)\right| t^{3p/2+q/2}|\overline y|^p |\overline w|^q \, \mathrm{d} \overline w \, \mathrm{d} \overline y\\
&\lesssim \left(\frac 1 {t^{1/2}}\right)^{\ell + 2j + 3k} t^{3p/2+q/2},
\end{align*}
where $\overline w = w/t^{1/2}$ and $\overline y = y/t^{3/2}$.
The proof of the second claim is almost identical, using the fact that $t\lesssim t + \xi_t \lesssim t$, where $\xi := (\xi_t, \xi_x, \xi_v)$.
\end{proof}
We now prove our main regularity estimates in the constant-coefficient case:
\begin{lemma}\label{l:convolution}
Suppose that $g \in C^\alpha(Q_1)$ has compact support in $Q_1$, for some $\alpha\in(0,1)$. Then the solution $u$ of \eqref{e:simple} in $Q_1$ satisfies
\[\begin{split}
[D_v^2u]_{\alpha,Q_1} + [u]_{(2+\alpha)/3,x,Q_1}
&\lesssim [g]_{\alpha,Q_1},
\end{split}\]
where the implied constant depends only on $\alpha$ and the dimension $d$. We also have $[u]_{\beta,t,Q_1}\lesssim [g]_{\alpha,Q_1}$ for any $\beta\in (0,1)$, so that
\[[u]_{2+\alpha,\beta,Q_1}' \lesssim [g]_{\alpha,Q_1}, \]
with $[\cdot]_{2+\alpha,\beta,Q_1}'$ as in Definition \ref{d:prime}. In particular, $[u]_{1+\beta,Q_1} \lesssim [g]_{\alpha,Q_1}$
for any $\beta\in (0,1)$.
\end{lemma}
\begin{proof}
First, we estimate $[D_v^2 u]_{\alpha,Q_1}$. Since $g$ has compact support in $Q_1$, \eqref{e:convolution} implies that, for any $(t,x,v) \in Q_1$,
\begin{align*}
\partial_{v_iv_j} u(z) &= \int_{-1}^t \int_{\mathbb R^d}\int_{\mathbb R^d} \partial_{v_iv_j}\Gamma(t-s,x-y-(t-s)w,v-w) g(s,y,w)\, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s\\
&= \int_{0}^{1+t} \int_{\mathbb R^d}\int_{\mathbb R^d}\partial_{v_iv_j}\Gamma(s,y,w) g(t-s,x-y-s(v-w),v-w)\, \mathrm{d} w\, \mathrm{d} y \, \mathrm{d} s,
\end{align*}
for $1\leq i,j\leq d$. Let $z = (t,x,v)$ and $z' = (t',x',v')$ be fixed points in $Q_1$ with $t \leq t'$. Further, let $h =\rho(z,z')$ and fix any $i,j\in \{1,\dots, d\}$. We write
\begin{align*}
&\partial_{v_iv_j} u(z) - \partial_{v_iv_j}u(z')\\
&= \left(\int_0^{2h^2} + \int_{2h^2}^{1+t}\right)\int_{\mathbb R^{d}}\int_{\mathbb R^d}\partial_{v_iv_j}\Gamma(s,y,w) \delta g(s,y,w) \, \mathrm{d} w\, \mathrm{d} y \, \mathrm{d} s\\
&\quad - \int_{1+t}^{1+t'} \int_{\mathbb R^{d}}\int_{\mathbb R^d}\partial_{v_iv_j}\Gamma(s,y,w)g(t'-s,x'-y-s(v'-w),v'-w)\, \mathrm{d} w\, \mathrm{d} y \, \mathrm{d} s\\
&=: I_1 + I_2 + I_3,
\end{align*}
where
\[\delta g(s,y,w) := g(t-s,x-y - s(v-w),v-w) - g(t'-s,x'-y-s(v'-w),v'-w).\]
We make the convention that if $2h^2 \geq 1+t$, then $I_2=0$.
Since $\mbox{spt}(g) \subset Q_1$, we have $|\delta g(s,y,w)-\delta g(s,y,0)| \leq 2[g]_{\alpha,Q_1}((s|w|)^{\alpha/3} + |w|^{\alpha})$. Observe that for any $s>0$, $y\in \mathbb R^d$,
\[\int_{\mathbb R^d} \partial_{v_iv_j}\Gamma(s,y,w) \, \mathrm{d} w = 0.\]
This allows us to estimate $I_1$ as follows:
\begin{align*}
|I_1| &= \left|\int_0^{2h^2}\int_{\mathbb R^{d}}\int_{\mathbb R^d} \partial_{v_iv_j}\Gamma(s,y,w) [\delta g(s,y,w)-\delta g(s,y,0)]\, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s\right|\\
&\leq 2 [g]_{\alpha,Q_1} \int_0^{2h^2} \int_{\mathbb R^{d}}\int_{\mathbb R^d}|\partial_{v_iv_j}\Gamma(s,y,w)| ((s|w|)^{\alpha/3} + |w|^{\alpha})\, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s\\
&\lesssim [g]_{\alpha,Q_1}\int_0^{2h^2} s^{\alpha/2 - 1} \, \mathrm{d} s \lesssim [g]_{\alpha,Q_1}h^{\alpha},
\end{align*}
where the second-to-last inequality follows from Lemma \ref{l:Gamma-estimates}.
Changing variables in $I_2$ and adding and subtracting a term, we have
\begin{equation*}
\begin{split}
I_2
&= \int_{-1}^{t-2h^2} \int_{\mathbb R^{d}}\int_{\mathbb R^d} [\partial_{v_iv_j}\Gamma(t - s,x-y,v-w)g(s,y-(t-s)w,w)\\
&\qquad \qquad \qquad - \partial_{v_iv_j} \Gamma(t'-s,x'-y,v'-w)g(s,y-(t'-s)w,w)]\, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s\\
&= \int_{-1}^{t-2h^2} \int_{\mathbb R^{d}}\int_{\mathbb R^d} \partial_{v_iv_j}\Gamma(t - s,x-y,v-w) \\
&\qquad \qquad \qquad \times [g(s,y-(t-s)w,w) - g(s,y-(t'-s)w,w)]\, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s\\
&\quad + \int_{-1}^{t-2h^2} \int_{\mathbb R^{d}}\int_{\mathbb R^d}[\partial_{v_iv_j}\Gamma(t - s,x-y,v-w) - \partial_{v_iv_j}\Gamma(t' - s,x'-y,v'-w)]\\
&\qquad \qquad \qquad \times g(s,y-(t'-s)w,w) \, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s\\
&=: I_2' + I_2''.
\end{split}
\end{equation*}
Re-defining $\delta g(s,y,w) := g(s,y-(t-s)w,w) - g(s,y-(t'-s)w,w)$, we have
\[|\delta g(s,y,w) - \delta g(s,y,v)| \leq [g]_{\alpha,Q_1}\left((|t-s|^{\alpha/3}+|t'-s|^{\alpha/3})|v-w|^{\alpha/3} + 2|v-w|^\alpha\right),\]
which implies
\begin{align*}
|I_2'| &= \left|\int_{-1}^{t-2h^2} \int_{\mathbb R^d}\int_{\mathbb R^d} \partial_{v_iv_j}\Gamma(t-s,x-y,v-w)[\delta g(s,y,w) - \delta g(s,y,v)] \, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s\right|\\
&\lesssim [g]_{\alpha,Q_1}\int_{2h^2}^{1+t} \int_{\mathbb R^d}\int_{\mathbb R^d} |\partial_{v_iv_j}\Gamma(s,y,w)| \left((s^{\alpha/3} + |t'-t+s|^{\alpha/3})|w|^{\alpha/3} + |w|^\alpha\right)\, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s\\
&\lesssim [g]_{\alpha,Q_1} \int_{2h^2}^{1+t} s^{-1} \left(s^{\alpha/2} + h^{2\alpha/3} s^{\alpha/6}\right)\, \mathrm{d} s
\lesssim [g]_{\alpha,Q_1} h^{\alpha},
\end{align*}
by Lemma \ref{l:Gamma-estimates}. For $I_2''$, first note that
\[\begin{split}
I_2''
= \int_{-1}^{t-2h^2} \int_{\mathbb R^d}\int_{\mathbb R^d} &[\partial_{v_iv_j} \Gamma(t-s,x-y,v-w) - \partial_{v_iv_j}\Gamma(t'-s,x'-y,v'-w)]\\
& \times [g(s,y-(t'-s)w,w) - g(s,y-(t'-s)v,v)]\, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s.
\end{split}\]
We next note that, with $\zeta = (s,y,w)$,
\begin{align*}
|\partial_{v_iv_j}&\Gamma(t - s,x-y,v-w) - \partial_{v_iv_j}\Gamma(t' - s,x'-y,v'-w)|\\
&\leq \max_{\|\xi\|\leq h, \xi_1 \geq 0} \left(h^2|\partial_t\partial_{v_iv_j}\Gamma(z-\zeta+\xi)| + h^3|\nabla_x\partial_{v_iv_j}\Gamma(z-\zeta+\xi)|+h|\nabla_v\partial_{v_iv_j}\Gamma(z-\zeta+\xi)|\right),
\end{align*}
where we denote $\xi = (\xi_1, \xi_2, \xi_3)\in \mathbb R\times \mathbb R^d\times \mathbb R^d$.
Using these two facts along with the second half of Lemma~\ref{l:Gamma-estimates}, we have
\begin{align*}
|I_2''|
&\lesssim [g]_{\alpha,Q_1}\int^{1+t}_{2h^2} \int_{\mathbb R^d}\int_{\mathbb R^d} \max_{\|\xi\|\leq h, \xi_1\geq 0}\big[ h^2|\partial_t\partial_{v_iv_j} \Gamma(\zeta + \xi)| + h^3|\nabla_x\partial_{v_iv_j}\Gamma(\zeta + \xi)|\\
&\qquad +h|\nabla_v\partial_{v_iv_j}\Gamma(\zeta + \xi)|\big]
\left(|t'-t+s+\xi_1|^{\alpha/3}|w-\xi_3|^{\alpha/3} + |w-\xi_3|^\alpha\right)\, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s\\
&\lesssim [g]_{\alpha,Q_1}\int^{1+t}_{2h^2} \int_{\mathbb R^d}\int_{\mathbb R^d} \max_{\|\xi\|\leq h, \xi_1\geq 0}\big[ h^2|\partial_t\partial_{v_iv_j} \Gamma(\zeta + \xi)| + h^3|\nabla_x\partial_{v_iv_j}\Gamma(\zeta + \xi)|\\
&\qquad +h|\nabla_v\partial_{v_iv_j}\Gamma(\zeta + \xi)|\big]
\left((h^2 +s)^{\alpha/3}(|w|+ h)^{\alpha/3} + |w|^\alpha + h^\alpha\right)\, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s\\
&\lesssim [g]_{\alpha,Q_1} h^\alpha.
\end{align*}
Proceeding as in our estimate of $I_1$, with $g(t'-s,x'-y-s(v'-w),v'-w)$ playing the role of $\delta g(s,y,w)$, we obtain
\begin{align*}
|I_3|\lesssim [g]_{\alpha,Q_1} \left((1+t')^{\alpha/2} - (1+t)^{\alpha/2}\right) \lesssim [g]_{\alpha,Q_1} |t'-t|^{\alpha/2} \lesssim [g]_{\alpha,Q_1} h^\alpha,
\end{align*}
completing the estimate of $[D_v^2 u]_{\alpha,Q_1}$.
To estimate the $C^{(2+\alpha)/3}$ norm of $u$ in the $x$ variable, we define $h = |x'-x|$ and write
\begin{align*}
&u(t,x',v) - u(t,x,v) \\
&= \left(\int_0^{h^{2/3}} + \int_{h^{2/3}}^{1+t}\right) \int_{\mathbb R^d}\int_{\mathbb R^d} \Gamma(s,y,w)\\
&\qquad \times[g(t-s,x'-y-s(v-w),v-w) - g(t-s,x-y-s(v-w),v-w)]\, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s\\
&=: J_1 + J_2.
\end{align*}
Since $\int_{\mathbb R^d}\int_{\mathbb R^d} \Gamma(s,y,w)\, \mathrm{d} w \, \mathrm{d} y = 1$ for any $s>0$, we have
\begin{align*}
|J_1| &\leq [g]_{\alpha,Q_1} h^{\alpha/3} \int_0^{h^{2/3}} \int_{\mathbb R^d}\int_{\mathbb R^d} \Gamma(s,y,w) \, \mathrm{d} w \, \mathrm{d} y \, \mathrm{d} s\\
&\leq [g]_{\alpha,Q_1} h^{(2+\alpha)/3}.
\end{align*}
For $J_2$, we use a change of variables and then the fact that
\[
\int_{\mathbb R^d}\int_{\mathbb R^d} \Gamma(s, x' - y, w) \, \mathrm{d} y \, \mathrm{d} w
= \int_{\mathbb R^d}\int_{\mathbb R^d} \Gamma(s, x - y, w) \, \mathrm{d} y \, \mathrm{d} w
\]
to rewrite the convolution as follows:
\begin{align*}
|J_2| &= \left|\int_{h^{2/3}}^{1+t} \int_{\mathbb R^d}\int_{\mathbb R^d} [\Gamma(s,x'-y,w) - \Gamma(s,x-y,w)] g(t-s,y-s(v-w),v-w) \, \mathrm{d} y \, \mathrm{d} w \, \mathrm{d} s\right|\\
&= \left|\int_{h^{2/3}}^{1+t} \int_{\mathbb R^d}\int_{\mathbb R^d}[\Gamma(s,x'-y,w) - \Gamma(s,x-y,w)]\right.\\
&\qquad \qquad \times [g(t-s,y-s(v-w),v-w) - g(t-s,x-s(v-w),v-w)]\, \mathrm{d} y \, \mathrm{d} w \, \mathrm{d} s\Big|\\
&\leq [g]_{\alpha,Q_1} h\int_{h^{2/3}}^{1+t} \int_{\mathbb R^d}\int_{\mathbb R^d} \left( \max_{|\xi|\leq h}|\nabla_x \Gamma(s,x-y+\xi,w)| \right)|x-y|^{\alpha/3} \, \mathrm{d} y \, \mathrm{d} w \, \mathrm{d} s\\
&\lesssim [g]_{\alpha,Q_1} h \max_{|\xi|\leq h}\int_{h^{2/3}}^{1+t} \left(s^{-3/2 + \alpha/2} + s^{-3/2}|\xi|^{\alpha/3}\right)\, \mathrm{d} s
\lesssim [g]_{\alpha,Q_1} h^{(2+\alpha)/3},
\end{align*}
using Lemma \ref{l:Gamma-estimates}, that $|\xi|\leq h$, and that $h \leq s^{3/2}$ on the domain of integration.
The proof that $[u]_{\beta,t,Q_1} \lesssim [g]_{\alpha,Q_1}$ follows a similar outline, and is omitted.
\end{proof}
\begin{lemma}\label{l:dt-dx}
With $g$ and $u$ as in Lemma \textup{\ref{l:convolution}}, assume in addition that $g\in C^{1+\alpha}(Q_1)$ for some $\alpha \in (0,1)$. Then $u$ satisfies
\[[u]_{3+\alpha,Q_1}'' = [\partial_t u]_{\alpha,Q_1} + [\nabla_x u]_{\alpha,Q_1} + [D_v^3 u]_{\alpha,Q_1} \leq C[g]_{1+\alpha,Q_1},\]
where the constant depends on $\alpha$ and $d$.
\end{lemma}
\begin{proof}
First, we show the estimate $[\nabla_x u]_{\alpha,Q_1} \leq C[g]_{1+\alpha,Q_1}$. We proceed as in the previous lemma, taking advantage of the regularity of $g$ in $x$. We have
\begin{align*}
\partial_{x_i} u(z)
&= \int_{0}^{1+t} \int_{\mathbb R^d}\int_{\mathbb R^d}\partial_{x_i}\Gamma(s,y,w) g(t-s,x-y-s(v-w),v-w)\, \mathrm{d} w\, \mathrm{d} y \, \mathrm{d} s,
\end{align*}
for $1\leq i\leq d$. Let $z,z' \in Q_1$ with $t \leq t'$, and
let $h = \rho(z,z')$. We write
\begin{align*}
&\partial_{x_i} u(z) - \partial_{x_i}u(z')\\
&= \left(\int_0^{2h^2} + \int_{2h^2}^{1+t}\right)\int_{\mathbb R^{d}}\int_{\mathbb R^d}\partial_{x_i}\Gamma(s,y,w) \delta g(s,y,w) \, \mathrm{d} w\, \mathrm{d} y \, \mathrm{d} s\\
&\quad - \int_{1+t}^{1+t'} \int_{\mathbb R^{d}}\int_{\mathbb R^d}\partial_{x_i}\Gamma(s,y,w)g(t'-s,x'-y-s(v'-w),v'-w)\, \mathrm{d} w\, \mathrm{d} y \, \mathrm{d} s\\
&=: I_1 + I_2 + I_3,
\end{align*}
where
\[\delta g(s,y,w) := g(t-s,x-y - s(v-w),v-w) - g(t'-s,x'-y-s(v'-w),v'-w).\]
We make the convention that if $2h^2 \geq 1+t$, then $I_2=0$.
Since $\mbox{spt}(g) \subset Q_1$, we have $|\delta g(s,y,w)-\delta g(s,0,w)| \leq 2[g]_{1+\alpha,Q_1}|y|^{(1+\alpha)/3}$. Observe that for any $s>0$, $y\in \mathbb R^d$,
\[\int_{\mathbb R^d} \partial_{x_i}\Gamma(s,y,w) \, \mathrm{d} y = 0.\]
This allows us to estimate $I_1$ as follows:
\begin{align*}
|I_1| &= \left|\int_0^{2h^2}\int_{\mathbb R^{d}}\int_{\mathbb R^d} \partial_{x_i}\Gamma(s,y,w) [\delta g(s,y,w)-\delta g(s,0,w)]\, \mathrm{d} y \, \mathrm{d} w \, \mathrm{d} s\right|\\
&\leq 2 [g]_{1+\alpha,Q_1} \int_0^{2h^2} \int_{\mathbb R^{d}}\int_{\mathbb R^d}|\partial_{x_i}\Gamma(s,y,w)| |y|^{(1+\alpha)/3}\, \mathrm{d} y \, \mathrm{d} w \, \mathrm{d} s\\
&\lesssim [g]_{1+\alpha,Q_1}\int_0^{2h^2} s^{-3/2 + (1+\alpha)/2} \, \mathrm{d} s \lesssim [g]_{1+\alpha,Q_1}h^{\alpha},
\end{align*}
by Lemma \ref{l:Gamma-estimates}.
Changing variables in $I_2$, we have
\begin{equation*}
\begin{split}
I_2
&= \int_{-1}^{t-2h^2} \int_{\mathbb R^{d}}\int_{\mathbb R^d} [\partial_{x_i}\Gamma(t - s,x-y,v-w)g(s,y-(t-s)w,w)\\
&\qquad \qquad \qquad - \partial_{x_i} \Gamma(t'-s,x'-y,v'-w)g(s,y-(t'-s)w,w)]\, \mathrm{d} y \, \mathrm{d} w \, \mathrm{d} s\\
&= \int_{-1}^{t-2h^2} \int_{\mathbb R^{d}}\int_{\mathbb R^d} \partial_{x_i}\Gamma(t - s,x-y,v-w) \\
&\qquad \qquad \qquad \times [g(s,y-(t-s)w,w) - g(s,y-(t'-s)w,w)]\, \mathrm{d} y \, \mathrm{d} w \, \mathrm{d} s\\
&\quad + \int_{-1}^{t-2h^2} \int_{\mathbb R^{d}}\int_{\mathbb R^d}[\partial_{x_i}\Gamma(t - s,x-y,v-w) - \partial_{x_i}\Gamma(t' - s,x'-y,v'-w)]\\
&\qquad \qquad \qquad \times g(s,y-(t'-s)w,w) \, \mathrm{d} y \, \mathrm{d} w \, \mathrm{d} s\\
&=: I_2' + I_2''.
\end{split}
\end{equation*}
Re-defining $\delta g(s,y,w) := g(s,y-(t-s)w,w) - g(s,y-(t'-s)w,w)$, we have
\[|\delta g(s,y,w) - \delta g(s,x,w)| \leq 2[g]_{1+\alpha,Q_1}|x-y|^{(1+\alpha)/3},\]
which implies
\begin{align*}
|I_2'| &= \left|\int_{-1}^{t-2h^2} \int_{\mathbb R^d}\int_{\mathbb R^d} \partial_{x_i}\Gamma(t-s,x-y,v-w)[\delta g(s,y,w) - \delta g(s,x,w)] \, \mathrm{d} y \, \mathrm{d} w \, \mathrm{d} s\right|\\
&\lesssim [g]_{1+\alpha,Q_1}\int_{2h^2}^{1+t} \int_{\mathbb R^d}\int_{\mathbb R^d} |\partial_{x_i}\Gamma(s,y,w)| |y|^{(1+\alpha)/3}\, \mathrm{d} y \, \mathrm{d} w \, \mathrm{d} s\\
&\lesssim [g]_{1+\alpha,Q_1} \int_{2h^2}^{1+t} s^{-3/2+(1+\alpha)/2} \, \mathrm{d} s \lesssim [g]_{1+\alpha,Q_1} h^{\alpha},
\end{align*}
by Lemma \ref{l:Gamma-estimates}. For $I_2''$, first note that with $\zeta = (s,y,w)$,
\begin{align*}
|\partial_{x_i}&\Gamma(t - s,x-y,v-w) - \partial_{x_i}\Gamma(t' - s,x'-y,v'-w)|\\
&\leq \max_{\|\xi\|\leq h} \left(h^2|\partial_t\partial_{x_i}\Gamma(z-\zeta+\xi)| + h^3|\nabla_x\partial_{x_i}\Gamma(z-\zeta+\xi)|+h|\nabla_v\partial_{x_i}\Gamma(z-\zeta+\xi)|\right).
\end{align*}
By applying Lemma \ref{l:Gamma-estimates} again and arguing as in the proof of Lemma \ref{l:convolution}, we have
\begin{align*}
|I_2''| &= \left|\int_{-1}^{t-2h^2} \int_{\mathbb R^{d}}\int_{\mathbb R^d}[\partial_{x_i}\Gamma(t - s,x-y,v-w) - \partial_{x_i}\Gamma(t' - s,x'-y,v'-w)]\right.\\
&\qquad \times [g(s,y-(t'-s)w,w) - g(s,x-(t'-s)w,w)] \, \mathrm{d} y \, \mathrm{d} w \, \mathrm{d} s\Big|\\
&\lesssim [g]_{1+\alpha,Q_1}\int_{2h^2}^{1+t} \int_{\mathbb R^d}\int_{\mathbb R^d} \max_{\|\xi\|\leq h}[ h^2|\partial_t\partial_{x_i} \Gamma(s,y,w)| + h^3|\nabla_x\partial_{x_i}\Gamma(s,y,w)|\\
&\qquad +h|\nabla_v\partial_{x_i}\Gamma(s,y,w)|]
|y-\xi_2|^{(1+\alpha)/3}\, \mathrm{d} y \, \mathrm{d} w \, \mathrm{d} s\\
&\lesssim [g]_{1+\alpha,Q_1} h^\alpha.
\end{align*}
Proceeding as in our estimate of $I_1$, with $g(t'-s,x'-y-s(v'-w),v'-w)$ playing the role of $\delta g(s,y,w)$, we obtain
\begin{align*}
|I_3|\leq C [g]_{1+\alpha,Q_1} \left((1+t')^{\alpha/2} - (1+t)^{\alpha/2}\right) \leq C [g]_{1+\alpha,Q_1} |t'-t|^{\alpha/2} \leq C[g]_{1+\alpha,Q_1} h^{\alpha},
\end{align*}
and the proof of the estimate on $[\nabla_x u]_{\alpha,Q_1}$ is complete.
Equation \eqref{e:simple} and Lemma \ref{l:convolution} imply the estimate on $[\partial_t u]_{\alpha,Q_1}$. We complete the proof by differentiating \eqref{e:simple} in $v$ and applying Lemma \ref{l:convolution} to estimate $[D_v^3 u]_{\alpha,Q_1}$, using our already-established estimate on $\nabla_x u$.
\end{proof}
Next, let $A_0$ be a (constant) symmetric, strictly positive definite, $d\times d$ matrix. Assume that $\sigma(A_0)\subset [\lambda,\Lambda]$ where $0<\lambda < \Lambda$.
\begin{lemma}\label{l:constant-coeffs}
If $g\in C^\alpha(Q_1)$ for some $\alpha\in (0,1)$, and $g$
has compact support in $Q_1$, then the solution $u$ of
\textup{\[\partial_t u + v\cdot \nabla_x u - \mbox{tr}(A_0 D_v^2 u) = g\]}
satisfies
\begin{align*}
[u]_{2+\alpha,\beta,Q_1}' &\leq C [g]_{\alpha,Q_1},
\end{align*}
for any $\beta\in (0,1)$. If, in addition, $g\in C^{1+\alpha}(Q_1)$, then
\begin{align*}
[u]_{3+\alpha,Q_1}'' &\leq C [g]_{1+\alpha,Q_1}.
\end{align*}
The constants $C$ depend on $d$, $\alpha$, $\beta$, $\lambda$, and $\Lambda$.
\end{lemma}
\begin{proof}
Let $P$ be the symmetric positive definite square root of $A_0$, so that $P^2 = A_0$, and define $u_P(t,x,v) := u(t,Px,Pv)$. Notice that $\sigma(P) \subset [\sqrt \lambda, \sqrt \Lambda]$. Then
\[\partial_t u_P + v\cdot \nabla_x u_P - \Delta_v u_P = (\partial_t u + v\cdot\nabla_x u-\Delta_v u)(t,Px,Pv) = g(t,Px,Pv) =: g_P(t,x,v),\]
and we can apply Lemma \ref{l:convolution} to $u_P = \displaystyle\int \Gamma (\mathcal S_{\zeta}^{-1} z) g_P(\zeta)\, \mathrm{d} \zeta$ to obtain
\begin{align*}
[u]_{2+\alpha,P(Q_{1})} &\leq C(P) [g]_{\alpha,P(Q_1)},
\end{align*}
where $P(Q_1) := (-1,0]\times P(B_1)\times P(B_1)$. To get an estimate on $Q_1$, we replace $u$ with $u(R^2t, R^3x,Rv)$, where $R>0$ depends only on $\lambda$ and $\Lambda$.
Similarly, if $D_v^3u, \nabla_x u,\partial_t u\in C^{\alpha}(Q_1)$, we apply Lemma \ref{l:dt-dx} to $u_P$.
\end{proof}
\subsection{Variable coefficients}
Let $L$ be an operator of the form
\[Lu = \mbox{tr}(A(z)D_v^2 u),\]
where $A\in C^{\alpha}(Q_1)$, and $0<\lambda I \leq A(z) \leq \Lambda I$ for all $z\in Q_1$. We now study equations of the form
\begin{equation}\label{e:variable}
\partial_t u + v\cdot \nabla_x u - Lu = g.
\end{equation}
As is standard, we extend Lemma \ref{l:constant-coeffs} to solutions of \eqref{e:variable} by freezing the coefficients at a point $z$ and taking advantage of the closeness of $L$ to $L(z)$ in a small cylinder around $z$, where $L(z)$ refers to the operator $\mbox{tr}(A(z) D_v^2 u)$ with $z$ ``frozen''. We also remove the assumption that $u$ has compact support, which requires tracking how interior estimates on $Q_r$ scale for $r\in (0,1]$. For this, we need the following technical lemma:
\begin{lemma}\label{l:omega}
Let $\omega(r)\geq 0$ be bounded in $[r_0,r_1]$ with $r_0\geq 0$. Suppose for $r_0\leq r<R\leq r_1$, we have
\[\omega(r) \leq \mu \omega(R) + \frac A {(R-r)^p} + B\]
for some $\mu \in [0,1)$ and $A,B,p \geq 0$. Then for any $r_0\leq r<R\leq r_1$, there holds
\[\omega(r) \lesssim\left(\frac A {(R-r)^p} + B\right),\]
where the implied constant depends only on $\mu$ and $p$.
\end{lemma}
\begin{proof}
See \cite[Lemma 4.3]{hanlin}.
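For completeness, here is a sketch of the standard iteration argument. Fix $\tau\in(0,1)$ with $\mu\tau^{-p}<1$ and set $r_k := r + (1-\tau^k)(R-r)$, so that $r_{k+1}-r_k = \tau^k(1-\tau)(R-r)$. Iterating the hypothesis along this sequence,
\[\omega(r) \leq \mu^k\omega(r_k) + \sum_{i=0}^{k-1}\mu^i\left(\frac{A}{(1-\tau)^p\tau^{ip}(R-r)^p} + B\right) \leq \mu^k\sup_{[r,R]}\omega + \frac{1}{1-\mu\tau^{-p}}\left(\frac{A}{(1-\tau)^p(R-r)^p} + B\right),\]
and letting $k\to\infty$, using that $\omega$ is bounded, gives the claim.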
\end{proof}
\begin{theorem}\label{t:Schauder}
Fix $\alpha \in (0,1)$. Suppose that $[u]_{2+\alpha,\beta,Q_1}' < \infty$ for all $\beta \in (0,1)$, and $A\in C^{\alpha}(Q_1)$. Then
\[ [u]_{2+\alpha,\beta,Q_{1/2}}' \lesssim \left([g]_{\alpha,Q_1} + |A|_{\alpha,Q_1}^{3+\alpha+2/\alpha} |u|_{0,Q_1}\right),\]
where \textup{$g := \partial_t u + v\cdot \nabla_x u - Lu$}. The implied constant depends only on $d,\alpha, \beta, \lambda$, and $\Lambda$.
\end{theorem}
\begin{proof}
For $r\in (0,1]$, recall that
\[ [u]_{2+\alpha,\beta,Q_r}' = [D_v^2 u]_{\alpha,Q_r} + [u]_{(2+\alpha)/3,x,Q_r} + [u]_{\beta,t,Q_r}.\]
Let $r\in [\frac 1 4, \frac 3 4]$ be arbitrary.
For $1\leq i,j\leq d$, pick $z,z'\in Q_{r}$ such that
\[\frac{|\partial_{v_iv_j} u(z) - \partial_{v_iv_j} u(z')|}{\rho(z,z')^\alpha} \geq \frac 1 2 [\partial_{v_iv_j} u]_{\alpha,Q_{r}}.\]
Let $\theta\in (0,1/8)$ be a constant, to be chosen later. If $\rho(z,z') \geq \theta$, then by the interpolation inequalities in Lemma~\ref{l:interp},
\begin{equation}\label{eq:chris3}
[\partial_{v_iv_j} u]_{\alpha,Q_{r}}
\leq 2\theta^{-\alpha} |D_v^2 u|_{0,Q_{r}} \leq \frac{1}{12d^2} [u]_{2+\alpha,\beta,Q_{r}}' + C\theta^{-2}|u|_{0,Q_{r}}.
\end{equation}
On the other hand, if $\rho(z,z') < \theta$, let $\chi$ be a smooth cutoff such that $\chi(\tilde z) = 1$ if $\rho(\tilde z,z') < \theta$ and $\chi(\tilde z) = 0$ if $\rho(\tilde z,z') \geq 2\theta$. We can choose $\chi$ such that
\[\begin{split}
&|\nabla_v \chi|_{0,Q_1} \lesssim \theta^{-1},
\quad [\nabla_v \chi]_{0,Q_1} \lesssim \theta^{-1-\alpha},
\quad |\partial_t \chi+v\cdot \nabla_x \chi|_{0,Q_1} + |D_v^2 \chi|_{0,Q_1} \lesssim \theta^{-2},\\
&\text{ and }
\quad [\partial_t \chi+v\cdot \nabla_x \chi]_{\alpha,Q_1} + [D_v^2 \chi]_{\alpha,Q_1} \lesssim \theta^{-2-\alpha}.
\end{split}\]
Using Lemma \ref{l:constant-coeffs}, we now have
\begin{align*}
[\partial_{v_iv_j} u]_{\alpha,Q_{r}} &\leq 2[\chi u]_{2+\alpha,\beta,Q_{r+2\theta}}'\\
&\lesssim [\partial_t (\chi u) + v\cdot \nabla_x (\chi u) - L(z')(\chi u)]_{\alpha,Q_{r+2\theta}}\\
&\lesssim [\partial_t(\chi u) + v\cdot \nabla_x (\chi u) - L(\chi u)]_{\alpha,Q_{r+2\theta}} + [(L - L(z'))(\chi u)]_{\alpha,Q_{r+2\theta}}.
\end{align*}
Let $R = r+2\theta$. To estimate the first term on the last line, note that
\[\partial_t(\chi u) + v\cdot \nabla_x (\chi u) - L(\chi u) = \chi g + u(\partial_t + v\cdot \nabla_x -L)\chi - 2(A(z)\nabla_v u)\cdot \nabla_v \chi.\]
By the interpolation inequalities in Lemma \ref{l:interp},
\begin{equation}\label{eq:chris1}
\begin{split}
[\partial_t(\chi u) +& v\cdot \nabla_x (\chi u) - L(\chi u)]_{\alpha,Q_{R}} \lesssim ([g]_{\alpha,Q_{R}} + (1+|A|_{0,Q_1})(\theta^{-2}[u]_{\alpha,Q_{R}} + \theta^{-1}[\nabla_v u]_{\alpha,Q_{R}}))\\
&\lesssim [g]_{\alpha,Q_{R}} + (1+|A|_{0,Q_1})\left(\theta^{\alpha} [u]_{2+\alpha,\beta,Q_{R}}' + C\theta^{-2-\alpha(2+\alpha)}|u|_{0,Q_{R}}\right).
\end{split}
\end{equation}
For the second term, note that $(L-L(z'))(\chi u) = \mbox{tr}((A(\tilde z)-A(z'))D_v^2(\chi u))$ for all $\tilde z \in Q_1$. Since $\mbox{spt}(\chi) \subset \{\tilde z : \rho(\tilde z,z')\leq 2\theta\}$, we have
\begin{equation}\label{eq:chris2}
\begin{split}
[(L-L(z'))(\chi u)]_{\alpha,Q_R} &\lesssim [A]_{\alpha,Q_1} \theta^\alpha \left([D^2_v u]_{\alpha,Q_R} + |D_v^2 u|_{0,Q_R}\right)\\
&\lesssim [A]_{\alpha,Q_1} \theta^\alpha \left( [u]_{2+\alpha,\beta,Q_R}' + \theta^{-2}|u|_{0,Q_R}\right),
\end{split}
\end{equation}
using the interpolation inequalities again. Combining~\eqref{eq:chris1} and~\eqref{eq:chris2}, we obtain, when $\rho(z,z') <\theta$,
\begin{equation}\label{eq:chris4}
[\partial_{v_iv_j} u]_{\alpha,Q_r}
\lesssim |A|_{\alpha,Q_1}\theta^\alpha \left([u]_{2+\alpha,\beta,Q_R}' + [g]_{\alpha,Q_R} + \theta^{-p}|A|_{\alpha,Q_1}|u|_{0,Q_1}\right),
\end{equation}
with $p = 2+\alpha(2+\alpha)$.
The combination of~\eqref{eq:chris3} and~\eqref{eq:chris4} implies that, for any fixed $\theta\in(0,1/8)$,
\[[\partial_{v_iv_j} u]_{\alpha,Q_r} \leq \left(C|A|_{\alpha,Q_1}\theta^\alpha+ \frac{1}{12d^2}\right) [u]_{2+\alpha,\beta,Q_R}' + C[g]_{\alpha,Q_R} + C\theta^{-p}|A|_{\alpha,Q_1}|u|_{0,Q_1}.\]
Summing over $i$ and $j$, and applying a similar argument to $[u]_{(2+\alpha)/3,x,Q_r}$ and $[u]_{\beta,t,Q_r}$, we obtain
\[[u]_{2+\alpha,\beta,Q_r}' \leq \left(C|A|_{\alpha,Q_1}\theta^\alpha+ \frac 1 4\right) [u]_{2+\alpha,\beta,Q_R}' + C[g]_{\alpha,Q_R} + C\theta^{-p}|A|_{\alpha,Q_1}|u|_{0,Q_1}.\]
Fix $\theta_0>0$ such that $C|A|_{\alpha,Q_1}\theta^\alpha < 1/4$ for all $\theta\in (0,\theta_0)$. Then, for each $R\in (r,r+2\theta_0)$, we have
\[[u]_{2+\alpha,\beta,Q_r}' \leq \frac 1 2 [u]_{2+\alpha,\beta,Q_R}' + C[g]_{\alpha,Q_R} + C(R-r)^{-p}|A|_{\alpha,Q_1}|u|_{0,Q_1}.\]
Recall that $r\in [\frac 1 4, \frac 3 4]$ was arbitrary. Lemma \ref{l:omega} with $\omega(s) = [u]_{2+\alpha,\beta,Q_s}'$, $r_0 = 1/2$, and $r_1 = 1/2+2\theta_0$ implies
\[[u]_{2+\alpha,\beta,Q_r}' \leq C([g]_{\alpha,Q_1} + (R-r)^{-p}|A|_{\alpha,Q_1}|u|_{0,Q_1}),\]
for each $\frac 1 2 \leq r < R \leq \frac 1 2 + 2\theta_0$. Choose $r=\frac 1 2$ and $R = \frac 1 2 + \theta_0$, and the proof is complete.
\end{proof}
Next, we extend the estimate of Lemma \ref{l:dt-dx} to the variable-coefficient case. Here, we need to assume $A(z)$ in the operator $L$ is in $C^{1+\alpha}(Q_1)$.
\begin{theorem}\label{t:higher}
Assume that $D_v^3 u, \nabla_x u, \partial_t u\in C^{\alpha}(Q_1)$. Then
\[ [u]_{3+\alpha,Q_{1/2}}''\leq C \left(|g|_{1+\alpha,Q_1} + |A|_{1+\alpha,Q_1}^{5+\alpha+6/\alpha}|u|_{0,Q_1}\right),\]
where \textup{$g := \partial_t u + v\cdot \nabla_x u - \mbox{tr}(AD_v^2u)$}. The constant $C$ depends on $d, \alpha, \lambda$, and $\Lambda$.
\end{theorem}
\begin{proof}
For $r\in (0,1]$, recall
\[[u]_{3+\alpha,Q_r}'' = [\partial_t u]_{\alpha,Q_r} + [\nabla_x u]_{\alpha,Q_r} + [D_v^3 u]_{\alpha,Q_r}.\]
With $r$, $\theta$, and $R$ as in the proof of Theorem \ref{t:Schauder}, we can follow the argument of that proof to show
\[ [u]_{3+\alpha,Q_{r}}'' \leq \left( C|A|_{1+\alpha,Q_1} \theta^\alpha + \frac 1 4\right)[u]_{3+\alpha,Q_R}'' + C|g|_{1+\alpha,Q_R} + C\theta^{-(\alpha(4+\alpha)+6)}|A|_{1+\alpha,Q_1}|u|_{0,Q_1}.\]
The conclusion of the proof is the same as Theorem \ref{t:Schauder}.
\end{proof}
In the last two theorems, we have worked with solutions whose pointwise derivatives exist \emph{a priori}. To pass to weak solutions in $H^1(Q_1)$, we need the following proposition:
\begin{proposition}\label{p:exist}
Given $g \in C^{\alpha}((-1,0]\times \mathbb R^d\times\mathbb R^d)$ with compact support in $Q_1$, there exists a unique weak solution $u$ in $H^1((-1,0]\times\mathbb R^d\times\mathbb R^d)$ of~\eqref{e:variable}. Furthermore, $[u]_{2+\alpha,\beta,Q_1}'<\infty$.
If $g\in C^{1+\alpha}(Q_1)$, the same conclusion holds with $[u]_{3+\alpha,Q_1}'' <\infty$, where $[\cdot]_{2+\alpha,\beta,Q_1}'$ and $[\cdot]_{3+\alpha,Q_1}''$ are as in Definition \textup{\ref{d:prime}}.
\end{proposition}
\begin{proof}
Fix any $\beta \in (0,1)$ and assume that the matrix $A$ is uniformly bounded and coercive on $\mathbb R\times\mathbb R^d \times \mathbb R^d$. Define the norm
\[
\|u\|_{\mathcal{B}} := \max \left\{ |u|_{\alpha, Q_1(z_0)}+[u]_{2+\alpha,\beta, Q_1(z_0)} + [\partial_tu + v\cdot \nabla_x u]_{\alpha, Q_1(z_0)} : z_0 = (0,x_0,v_0), x_0,v_0\in\mathbb R^d\right\},
\]
and the Banach space
\[
\mathcal{B} := \{u \in C^{\alpha}([-1,0]\times \mathbb R^d\times \mathbb R^d) : \|u\|_{\mathcal B} <\infty\},\]
endowed with $\|\cdot\|_{\mathcal B}$,
and
\[
\mathcal{V} := \{u \in C^\alpha([-1,0]\times \mathbb R^d \times \mathbb R^d) : u(-1,\cdot,\cdot) \equiv 0\},
\]
endowed with the analogous norm.
For any $\theta \in [0,1]$, define the operator $E_\theta: \mathcal{B} \to \mathcal{V}$ by
\[
E_\theta u := u_t + v\cdot \nabla_x u - (1-\theta) \Delta_v u - \theta Lu.
\]
From \Cref{t:Schauder}, we see that
\[
\|u\|_{\mathcal{B}} \lesssim \|E_\theta u\|_{\mathcal{V}}
\]
for all $u\in \mathcal{B}$. Also, from~\eqref{e:Gamma},
we see that $E_0$ is onto.
Applying the method of continuity as in~\cite[Theorem~5.2]{gilbargtrudinger}, we obtain that $E_1$ is onto as well. Hence, $E_1$ is invertible.
The uniqueness follows from the maximum principle for weak subsolutions of \eqref{e:variable} in $H^1(Q_1)$, which is well known; see \cite[Proposition A.1]{cameron2017landau} for a proof. This finishes the first claim. The same argument applies in the second case when $g$ has one more derivative, using Theorem \ref{t:higher}.
\end{proof}
Given a weak solution $u\in C^0(\overline{Q_1})\cap H^1(Q_1)$, Proposition \ref{p:exist} implies $u$ is smooth enough to apply the estimates of Theorem \ref{t:Schauder} if $g\in C^\alpha(Q_1)$ and Theorem \ref{t:higher} if $g\in C^{1+\alpha}(Q_1)$. We collect the results of this section in the following theorem:
\begin{theorem}\label{t:weak-schauder}
Let $u\in C^0(\overline Q_1)\cap H^1(Q_1)$ be a weak solution of
\[\partial_t u + v\cdot \nabla_x u - L u = g\]
in $Q_1$, with \textup{$L = \mbox{tr}(AD_v^2 u)$} and $\lambda I \leq A \leq \Lambda I$.
\begin{enumerate}
\item[\textup{(a)}] If $g, A\in C^{\alpha}(Q_1)$ for some $\alpha \in (0,1)$, we have the estimate
\[ [D_v^2u]_{\alpha,Q_{1/2}} + [u]_{(2+\alpha)/3,x,Q_{1/2}} + [u]_{\beta,t,Q_{1/2}}\lesssim ( [g]_{\alpha,Q_1} + |A|_{\alpha,Q_1}^p|u|_{0,Q_1}),\]
for any $\beta\in (0,1)$.
\item[\textup{(b)}] If $g, A\in C^{1+\alpha}(Q_1)$ for some $\alpha \in (0,1)$, then
\[ [\partial_t u]_{\alpha,Q_{1/2}}+ [\nabla_x u]_{\alpha,Q_{1/2}} + [D_v^3 u]_{\alpha,Q_{1/2}} \lesssim (|g|_{1+\alpha,Q_1} + |A|_{1+\alpha,Q_1}^q|u|_{0,Q_1}).\]
\end{enumerate}
The implied constants depend on $d$, $\alpha$, $\beta$, $\lambda$, and $\Lambda$. The exponents $p,q >0$ depend only on $\alpha$.
\end{theorem}
\section{Smoothing for weak solutions of the Landau equation}\label{s:landau}
In this section, we apply the estimates of Section \ref{s:schauder} to the Landau equation. The diffusion operator $\mbox{tr}(\overline a(z)D_v^2 f)$ (or in divergence form, $\nabla_v\cdot (\overline a(z)\nabla_v f)$) is uniformly elliptic in any bounded set, but the ellipticity constants degenerate as $|v|\to \infty$. (See Appendix \ref{s:A}.) To deal with this, we apply a change of variables in a small cylinder around a given point $z_0$, which yields an equation with ellipticity constants that are independent of $z_0$. In the sequel, we undo this transformation to explicitly see the dependence of the estimates on $|v|$.
The following lemma was first proven in \cite{cameron2017landau} in the case of moderately soft potentials:
\begin{lemma}\label{l:T}
Let $z_0 =(t_0,x_0,v_0)\in \mathbb R_+\times \mathbb R^{d}\times \mathbb R^d$ be such that $|v_0|\geq 2$, and let $T$ be the linear transformation such that
\begin{equation*}
T e = \begin{cases} |v_0|^{1+\gamma/2} e , & e \cdot v_0 = 0\\
|v_0|^{\gamma/2}e, & e \cdot v_0 = |v_0|.\end{cases}
\end{equation*}
Let $\tilde T(t,x,v) = (t,Tx,Tv)$, and define
\begin{align*}
\mathcal T_{z_0}(t,x,v) &:= \mathcal S_{z_0} \circ \tilde T (t,x,v)\\
& = (t_0+t,x_0+T x + t v_0 ,v_0 + T v).
\end{align*}
Then:
\begin{enumerate}
\item[\textup{(a)}] There exists a constant $C>0$ independent of $v_0\in\mathbb R^d\setminus B_2$ such that for all $v\in B_1$,
\[ C^{-1} |v_0| \leq |v_0 + Tv| \leq C |v_0|.\]
\item[\textup{(b)}] Let $f$ be a weak solution of the Landau equation \eqref{e:divergence} satisfying \eqref{e:M0}, \eqref{e:E0}, and \eqref{e:H0}, and if $\gamma < -2$, assume that $f$ satisfies \eqref{e:4thmoment}. Then there exists a radius
\[r_1 = c_1 \min\left(|v_0|, |v_0|^{-1-\gamma/2}\right)\min\left(1,\sqrt{t_0/2}\right),\] with $c_1$ universal, such that for any $r\in (0,r_1]$, the function $f_{z_0}(t,x,v) := f(\mathcal T_{z_0}(r^2t,r^3x,rv))$ satisfies
\begin{equation}\label{e:isotropic}
\partial_t f_{z_0} + v \cdot \nabla_x f_{z_0} = \nabla_v \cdot\left(\overline A(z)\nabla_v f_{z_0}\right) + \overline B(z)\cdot \nabla_v f_{z_0} + \overline C(z) f_{z_0},
\end{equation}
or equivalently,
\begin{equation}\label{e:isotropic-nondivergence}
\partial_t f_{z_0} + v \cdot \nabla_x f_{z_0} = \textup{\mbox{tr}}\left( \overline A(z)D_v^2 f_{z_0}\right) + \overline C(z) f_{z_0},
\end{equation}
in $Q_1$, and the coefficients
\[\begin{split}
&\overline A(z) = T^{-1}\overline a(\mathcal T_{z_0}(\delta_r(z))) T^{-1},
\quad \overline B(z) = rT^{-1}\overline b(\mathcal T_{z_0}(\delta_r(z))),~\text{ and}\\
&\overline C(z) = r^2\overline c(\mathcal T_{z_0}(\delta_r(z)))
\end{split}\]
satisfy
\begin{align*}
\lambda I &\leq \overline A(z) \leq \Lambda I,\\
|\overline B(z)| &\lesssim \begin{cases} 1, &-1\leq \gamma<0 ,\\[2ex]
\left(1+\|f(t,x,\cdot)\|_{L^\infty(B_\theta(v))}\right)^{-(\gamma+1)/d}, &-2\leq \gamma < -1,\\[2ex]
|v_0|^{\gamma/2+1}\left(1+\|f(t,x,\cdot)\|_{L^\infty(B_\theta(v))}\right)^{-(\gamma+1)/d}, &-d \leq \gamma <-2,
\end{cases}\\
|\overline C(z)| &\lesssim \begin{cases} |v_0|^{-1+\gamma/2}\left(1+\|f(t,x,\cdot)\|_{L^\infty(B_\theta(v))}\right)^{-\gamma/d}, &\dfrac{-2d}{d+2}\leq \gamma < 0,\\
|v_0|^{-3-\gamma/2-2\gamma/d}\left(1+\|f(t,x,\cdot)\|_{L^\infty(B_\theta(v))}\right)^{-\gamma/d}, &-d < \gamma < \dfrac{-2d}{d+2},\end{cases}
\end{align*}
with $\lambda$ and $\Lambda$ universal, and $\theta \lesssim 1+ |v_0|^{-2/d}$.
\end{enumerate}
\end{lemma}
\begin{proof}
For $\gamma\in (-2,0)$, this lemma is proven in \cite[Lemma 4.1]{cameron2017landau}. In fact, that proof does not use $\gamma>-2$ in an essential way. The necessary ingredients are the upper and lower bounds of Proposition \ref{p:a} and Lemma \ref{l:very-soft} from the Appendix, which hold under our assumptions on $f$. The bounds on $\overline B$ and $\overline C$ come from Proposition \ref{p:bc} and Lemma \ref{l:very-soft}.
\end{proof}
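For computations it can be convenient to have this change of variables written out explicitly. The sketch below builds $T$ and maps local coordinates $(t,x,v)\in Q_1$ to the original variables via $\mathcal T_{z_0}\circ \delta_{r_1}$; the constant \texttt{c1} standing in for $c_1$ is illustrative only, since the universal constant $c_1$ is not computed explicitly here.
\begin{verbatim}
import numpy as np

def T_matrix(v0, gamma):
    # Te = |v0|^(gamma/2) e along v0; |v0|^(1+gamma/2) e orthogonal to v0.
    s = np.linalg.norm(v0)
    P = np.outer(v0, v0) / s**2            # projector onto span(v0)
    return s**(gamma / 2) * P + s**(1 + gamma / 2) * (np.eye(len(v0)) - P)

def to_original(z, z0, gamma, c1=0.1):     # c1 is illustrative only
    (t, x, v), (t0, x0, v0) = z, z0
    v0 = np.asarray(v0, dtype=float)
    s = np.linalg.norm(v0)
    r1 = c1 * min(s, s**(-1 - gamma / 2)) * min(1.0, np.sqrt(t0 / 2))
    T = T_matrix(v0, gamma)
    tt, xx, vv = r1**2 * t, r1**3 * np.asarray(x), r1 * np.asarray(v)
    # T_{z0}(tt, xx, vv) = (t0 + tt, x0 + T xx + tt v0, v0 + T vv)
    return (t0 + tt, np.asarray(x0) + T @ xx + tt * v0, v0 + T @ vv)
\end{verbatim}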
The coefficients $\overline A$, $\overline B$, and $\overline C$ are dependent on $z_0$, which we refer to as the ``base point,'' and~$r$.
For any $z_0 = (t_0,x_0,v_0)$ with $|v_0|\leq 2$, we define $f_{z_0}(z) = f(\mathcal S_{z_0}\delta_{r_1} z)$, with $r_1$ as in Lemma \ref{l:T}(b). Note that in the notation of \cite{cameron2017landau}, our $f_{z_0}(t,x,v)$ is equal to $f_T(r_1^2t,r_1^3x,r_1v)$. The following proposition shows how the regularity of $f$ depends on the regularity of $f_{z_0}$.
\begin{proposition}\label{p:f0-f}
Let $f:[0,T_0]\times \mathbb R^d\times\mathbb R^d\to\mathbb R_+$ for some $T_0 > 0$. If $f_{z_0}$ is defined with base point $z_0\in (0,T_0]\times \mathbb R^d\times\mathbb R^d$, and some partial derivative $\partial_t^j \partial_x^\beta\partial_v^\eta f_{z_0}$ of order $M = 2j+3|\beta| +|\eta|$ exists in $C^\alpha(Q_1)$ for some $\alpha \in (0,1)$, then
\begin{align*}
|\partial_t^j \partial_x^\beta\partial_v^\eta f|_{\alpha,Q_{r_1}(z_0)} &\lesssim r_1^{-M-\alpha}(1+|v_0|)^{-\gamma \alpha/2} |\partial_t^j \partial_x^\beta\partial_v^\eta f_{z_0}|_{\alpha,Q_{1}}\\
&\lesssim \left(1+t_0^{-(M+\alpha)/2}\right) (1+|v_0|)^{M(1+\gamma/2)+\alpha}|\partial_t^j \partial_x^\beta\partial_v^\eta f_{z_0}|_{\alpha,Q_1},
\end{align*}
with $r_1$ as in Lemma \textup{\ref{l:T}}.
\end{proposition}
\begin{proof}
Let $\partial = \partial_t^j\partial_x^\beta\partial_v^\eta$. For $z,z'\in Q_{r_1}(z_0)$ with $|v_0|\geq 2$, we have
\begin{align*}
|\partial f(z) - \partial f(z')| &= r_1^{-M}|\partial f_{z_0}(\delta_{r_1}^{-1}\mathcal S_{z_0}^{-1} \tilde T^{-1} z) - \partial f_{z_0}(\delta_{r_1}^{-1}\mathcal S_{z_0}^{-1} \tilde T^{-1} z')|\\
&\leq [\partial f_{z_0}]_{\alpha,Q_1} r_1^{-M}\rho(\delta_{r_1}^{-1}\mathcal S_{z_0}^{-1} \tilde T^{-1}z,\delta_{r_1}^{-1}\mathcal S_{z_0}^{-1} \tilde T^{-1}z')^{\alpha}\\
&= [\partial f_{z_0}]_{\alpha,Q_1} r_1^{-M-\alpha}\rho(\mathcal S_{z_0}^{-1} \tilde T^{-1}z,\mathcal S_{z_0}^{-1} \tilde T^{-1}z')^{\alpha}\\
&\leq [\partial f_{z_0}]_{\alpha,Q_1} r_1^{-M-\alpha}\rho(\tilde T^{-1}z,\tilde T^{-1}z')^\alpha\\
&\leq [\partial f_{z_0}]_{\alpha,Q_1} r_1^{-M-\alpha} |v_0|^{-\gamma \alpha/2} \rho(z,z')^\alpha.
\end{align*}
In the case $|v_0|\leq 2$, we have $f(z) = f_{z_0}(\delta_{r_1}^{-1} \mathcal S_{z_0}^{-1} z)$, and a similar calculation applies.
\end{proof}
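For the reader's convenience, we also record the elementary exponent count behind the second inequality of Proposition \ref{p:f0-f} (a sketch, valid in the regime where the minimum in the definition of $r_1$ is attained by $|v_0|^{-1-\gamma/2}$): since then $r_1\gtrsim \min(1,\sqrt{t_0})\,(1+|v_0|)^{-(1+\gamma/2)}$, we have
\[
r_1^{-M-\alpha}(1+|v_0|)^{-\gamma\alpha/2} \lesssim \left(1+t_0^{-(M+\alpha)/2}\right)(1+|v_0|)^{(1+\gamma/2)(M+\alpha)-\gamma\alpha/2},
\]
and the exponent simplifies as $(1+\gamma/2)(M+\alpha)-\gamma\alpha/2 = M(1+\gamma/2)+\alpha$.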
Next, we show that if the regularity estimates of $f_{z_0}$ decay sufficiently quickly as $|v|\to\infty$, they imply regularity of the coefficients of \eqref{e:isotropic-nondivergence}. Although it is enough to show that partial derivatives of $\overline A$ and $\overline C$ grow at most polynomially, we derive explicit rates for the sake of concreteness.
\begin{lemma}\label{l:coefficients}
Let $f_{z_0}$ be as in Lemma \textup{\ref{l:T}}. Assume that some partial derivative $\partial_t^j\partial_x^\beta\partial_v^\eta f_{z_0}$ of order $M = j + |\beta| + |\eta|$ exists in $C^{\alpha}(Q_1)$ for every $z_0\in (0,T_0]\times \mathbb R^d\times\mathbb R^d$, and satisfies
\[[\partial_t^j \partial_x^\beta \partial_v^\eta f_{z_0}]_{\alpha,Q_1} \leq C_0 \left(1+t_0^{-p}\right) (1+|v_0|)^{-q}\]
for some $p\geq 0$ and $q > d+ 2 + \gamma(1-\alpha/2)$. Then $\overline A(t,x,v)$ and $\overline C(t,x,v)$ enjoy the same regularity as $f_{z_0}$, and for any $z_0\in (0,T_0]\times \mathbb R^d\times\mathbb R^d$, one has
\begin{align*}
\left[\partial_t^j\partial_x^\beta\partial_v^\eta \overline A\right]_{\alpha,Q_1} &\lesssim \left(1+t_0^{-M/2-p}\right)(1+|v_0|)^{(M+\alpha)(1+\gamma/2)+2}\\
\left[\partial_t^j\partial_x^\beta\partial_v^\eta\overline C\right]_{\alpha,Q_1} &\lesssim \left(1+t_0^{-M/2-p+1}\right)(1+|v_0|)^{(M+\alpha-2)(1+\gamma/2)},
\end{align*}
where $\overline A$ and $\overline C$ are defined with base point $z_0$, and $r_1$ is as in Lemma \textup{\ref{l:T}}. The implied constant depends on $d$, $\gamma$, $q$, and $C_0$.
\end{lemma}
\begin{proof}
Let $\partial= \partial_t^j \partial_x^\beta \partial_v^\eta$. For some base point $z_0$ with $|v_0|\geq 2$, fix $z,z'\in Q_1$ and let $\tilde z = (\tilde t,\tilde x,\tilde v) = \mathcal T_{z_0}(\delta_{r_1}z)$ and $\tilde z' = \mathcal T_{z_0}(\delta_{r_1}z')$, with $r_1$ as in Lemma \ref{l:T}. For $w\in \mathbb R^d$, Proposition \ref{p:f0-f} implies
\begin{align*}
|\partial f(\tilde t,\tilde x,\tilde v-w) - \partial f(\tilde t',\tilde x',\tilde v'-w)| &\leq [\partial f]_{\alpha,Q_{r_1}(t_0,x_0,v_0-w)}\rho(\tilde z,\tilde z')^\alpha\\
&\lesssim (1+t_0^{-p})r_1^{-M-\alpha}|v_0-w|^{-q-\gamma\alpha/2}\rho(\tilde z,\tilde z')^\alpha.
\end{align*}
Recall $\overline A(z) = T^{-1}\overline a(\mathcal T_{z_0}(\delta_{r_1}z)) T^{-1}$. With $R = |v_0|/2$, the formula \eqref{e:a} for $\overline a$ implies
\begin{align*}
|\partial \overline A(z) - \partial \overline A(z')| &\leq |v_0|^{-\gamma} \int_{\mathbb R^d} |w|^{\gamma+2}|\partial f(\tilde t,\tilde x,\tilde v-w) - \partial f(\tilde t',\tilde x',\tilde v'-w)|\, \mathrm{d} w \\
&\lesssim (1+t_0^{-p})|v_0|^{-\gamma} r_1^{-M-\alpha} \rho(\tilde z,\tilde z')^\alpha \left(|v_0|^{-q-\gamma\alpha/2}\int_{B_R} |w|^{\gamma+2} \, \mathrm{d} w\right.\\
&\qquad\qquad\qquad\qquad\qquad\qquad\left. + \int_{\mathbb R^d\setminus B_R(v_0)} |v_0-\overline w|^{\gamma+2} |\overline w|^{-q-\gamma\alpha/2}\, \mathrm{d} \overline w\right)\\
&\lesssim (1+t_0^{-p})r_1^{-M-\alpha} \rho(\tilde z,\tilde z')^\alpha |v_0|^{2} \\
&\lesssim (1+t_0^{-p})r_1^{-M}|v_0|^{2+\alpha(1+\gamma/2)}\rho(z,z')^\alpha,
\end{align*}
where $\overline w = v_0 - w$ and we have used $\rho(\tilde z,\tilde z') \lesssim |v_0|^{1+\gamma/2} r_1 \rho(z,z')$. A similar calculation applies to $\overline C(z) = r_1^2\overline c(\mathcal T_{z_0}\delta_{r_1}z)$. In the borderline case $\gamma = -d$, we have $\overline C(z) = c_{d,\gamma} r_1^2 f_{z_0}(z)$, and the conclusion of the lemma follows from the even stronger decay of $\partial f_{z_0}$.
\end{proof}
\begin{remark}
The decay in the estimates of Lemma \textup{\ref{l:coefficients}} can be improved when $|\eta|> 0$ by integrating by parts in $w$. However, this would still not grant us enough decay to conclude $f\in C^\infty$ without any decay assumption on the initial data.
\end{remark}
Next, we show that Gaussian bounds in the initial data are propagated. This result was established in the case $\gamma\in (-2,0)$ in \cite[Theorem 1.2]{cameron2017landau}, under the assumption that the hydrodynamic bounds \eqref{e:M0}, \eqref{e:E0}, and \eqref{e:H0} hold. To prove such a result when $\gamma \in [-d,-2]$, we also need \emph{a priori} bounds on $\|f\|_{L^\infty}$ and on sufficiently high moments of $f$.
\begin{theorem}\label{t:gaussian}
Let $\gamma\in [-d,-2]$, and let $f$ be a bounded weak solution of the Landau equation \eqref{e:nondivergence} satisfying the hydrodynamic bounds \eqref{e:M0}, \eqref{e:E0}, and \eqref{e:H0}. Assume, in addition, that
\[\int_{\mathbb R^d}|v|^p f(v)\, \mathrm{d} v \leq P_0,\]
where $p$ is the smallest integer such that $p>\dfrac{d|\gamma|}{2+\gamma+d}$. Then there exists $\mu_0>0$ such that if
\[f_{\rm in}(x,v) \leq C_0 e^{-\mu|v|^2},\]
for all $x\in \mathbb R^d$, $v\in\mathbb R^d$, and some $\mu>0$, then
\begin{equation}\label{e:Gaussian_decay}
f(t,x,v) \lesssim e^{-\min\{\mu_0,\mu\}|v|^2},
\end{equation}
where $\mu_0$ and the implied constant in~\eqref{e:Gaussian_decay} depend on $C_0$, $M_0$, $E_0$, and $\|f\|_{L^\infty([0,T_0]\times \mathbb R^d\times \mathbb R^d)}$. If $\gamma \leq -d/2-1$, then the implied constant in~\eqref{e:Gaussian_decay} also depends on the time of existence $T_0$.
\end{theorem}
\begin{proof}
First, assume that $\gamma \in (-d/2-1,-2]$. Fix $\mu_0>0$ to be determined and let $\overline \mu = \min\{\mu,\mu_0\}$. Proceeding as in the proof of \cite[Theorem 1.2]{cameron2017landau}, we claim that $\phi(t,x,v) = e^{-\overline\mu|v|^2}$ is a supersolution to the linear Landau equation
\begin{equation}\label{e:linear-landau}
\partial_t \phi + v\cdot \nabla_x \phi = \mbox{tr}(\overline a D_v^2 \phi) + \overline c \phi,
\end{equation}
for $|v|$ large, where $\overline a$ and $\overline c$ are defined in terms of $f$. Since $\phi$ is radial in $v$, we have
\begin{equation*}
\partial_{v_i}\partial_{v_j}\phi = \frac {\partial_{rr}\phi}{|v|^2} v_i v_j + \frac{\partial_r\phi}{|v|} \left( \delta_{ij} - \frac{v_iv_j}{|v|^2}\right) = \left[\frac {4\overline\mu^2|v|^2 - 2\overline\mu}{|v|^2} v_i v_j - 2\overline\mu \left( \delta_{ij} - \frac{v_iv_j}{|v|^2}\right)\right] e^{-\overline\mu |v|^2}.
\end{equation*}
Proposition \ref{p:a} and Lemma \ref{l:very-soft} from the appendix imply
\begin{align*}
\overline a_{ij}\partial_{v_i}\partial_{v_j}\phi &\leq \left[(4\overline\mu^2(1+|v|)^2 - 2\overline\mu)C_1 (1+|v|)^{\gamma} - 2\overline\mu C_2 (1+|v|)^{\gamma+2}\right] e^{-\overline\mu |v|^2}\\
&= \left((4\overline\mu^2 C_1 - 2\overline\mu C_2) (1+|v|)^{\gamma+2} - 2\overline\mu C_1 (1+|v|)^{\gamma}\right) e^{-\overline\mu|v|^2}\\
&\leq -C (1+|v|)^{\gamma+2} \phi(v),
\end{align*}
for $|v|$ sufficiently large, provided that we choose $\mu_0 < C_2 / (2C_1)$. (Here and in what follows, we use the convention that repeated indices are summed over.) With the bound on $\overline c$ from Lemma \ref{l:very-soft}, this implies
\[\overline a_{ij} \partial_{v_i}\partial_{v_j}\phi + \overline c \phi \leq \left[-C(1+|v|)^{\gamma+2} + C(1+|v|)^{\gamma+2-\varepsilon}\right]\phi(v).\]
The first term on the right-hand side dominates for large $|v|$, and we have
\begin{equation}\label{e:aij-phi}
\overline a_{ij} \partial_{v_i}\partial_{v_j}\phi + \overline c \phi \leq -C |v|^{\gamma+2} \phi
\end{equation}
for $|v|\geq R_0$ for some large $R_0$. Choose $C_f$ such that $C_f\phi(t,x,v) > \|f\|_{L^\infty}$ for all $|v|\leq R_0$ and such that $C_f\phi(0,x,v) > f(0,x,v)$ for all $(x,v) \in \mathbb R^d \times \mathbb R^d$; the second condition can be met because $\overline\mu \leq \mu$. Define the function
\[g(t,x,v) := [f(t,x,v) - C_f \phi(t,x,v)]_+.\]
If $|v|\leq R_0$, then $g(t,x,v) = 0$ by our choice of $C_f$. If $|v|> R_0$, then by \eqref{e:aij-phi}, $\phi$ is a supersolution to \eqref{e:linear-landau}. We conclude $g(t,x,v)$ is a subsolution of $\partial_t g + v\cdot \nabla_x g \leq \overline a_{ij}\partial_{v_iv_j} g + \overline c g$ in its entire domain; hence, by the maximum principle~\cite[Lemma A.2]{cameron2017landau}, we have $g\leq 0$ for all $t>0$, so $f(t,x,v) \leq C_f \phi(t,x,v)$ for all $t>0$ for which $f$ is defined.
If $\gamma \leq -d/2-1$, the above argument does not apply because we do not have enough \emph{a priori} decay in $\overline c$ to conclude \eqref{e:aij-phi}. For this case, we define $h(t,x,v) = f(t,x,v)e^{\mu|v|^2}$. From the equation \eqref{e:nondivergence} for $f$, we have
\begin{align*}
\partial_t h + v\cdot \nabla_x h &= e^{\mu|v|^2}\left( \mbox{tr}\left[\overline a D_v^2(e^{-\mu|v|^2} h)\right] + \overline c e^{-\mu|v|^2} h\right)\\
&= \mbox{tr}\left[\overline a D_v^2 h\right] - 4 \mu v\cdot (\overline a \nabla_v h) + \left(\overline c - 2\mu \, \mbox{tr} (\overline a) + 4 \mu^2 \overline a_{ij} v_i v_j\right) h.
\end{align*}
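Here the second equality follows from the product-rule expansion (recorded for the reader's convenience)
\[
\partial_{v_i}\partial_{v_j}\left(e^{-\mu|v|^2} h\right) = e^{-\mu|v|^2}\left(\partial_{v_i}\partial_{v_j} h - 2\mu v_i\partial_{v_j} h - 2\mu v_j \partial_{v_i} h - 2\mu \delta_{ij} h + 4\mu^2 v_iv_j h\right),
\]
after contracting with $\overline a_{ij}$ and multiplying by $e^{\mu|v|^2}$.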
Lemma \ref{l:very-soft} implies that $\|\overline c -2\mu \, \mbox{tr} (\overline a) + 4 \mu^2 \overline a_{ij} v_i v_j\|_{L^\infty([0,T_0]\times \mathbb R^{2d})} \leq C_0$ for some $C_0$, so that $\tilde h(t,x,v) = e^{-C_0t}h(t,x,v)$ is a subsolution of $\partial_t \tilde h + v\cdot \nabla_x \tilde h = \mbox{tr}(\overline a D_v^2 \tilde h) + \tilde b\cdot \nabla_v \tilde h$ with bounded drift $\tilde b_j = -4\mu v_i \overline a_{ij}$. The maximum principle for this class of equations (see for example \cite[Proposition A.1]{cameron2017landau}) implies $h(t,x,v) \leq e^{C_0t}\sup_{x,v}\left[f_{\rm in}(x,v)e^{\mu|v|^2}\right]$, which is uniformly bounded on any finite time interval. Note that, since $\|f\|_{L^\infty([0,T_0]\times \mathbb R^d\times\mathbb R^d)}$ is finite, this argument also applies in the case $\gamma = -d$.
\end{proof}
We are now in a position to prove our main result.
\begin{proof}[Proof of Theorem \textup{\ref{t:main}}]
Let $f$ be a weak solution of the Landau equation \eqref{e:divergence} such that $f_{\rm in}(x,v) = f(0,x,v) \lesssim e^{-\mu|v|^2}$ for some $\mu>0$. Without loss of generality, we may assume $\mu \leq \mu_0$, with $\mu_0$ as in the statement of the theorem. By applying \cite[Theorem~1.2]{cameron2017landau} if $\gamma \in (-2,0)$ or \Cref{t:gaussian} if $\gamma \in [-d,-2]$, we see that, for all $(t,x,v) \in [0,T_0]\times\mathbb R^d \times \mathbb R^d$,
\begin{equation}\label{e:decay}
f(t,x,v) \lesssim e^{-\mu|v|^2},
\end{equation}
where the implied constant is independent of $T_0$ if $\gamma>-d/2 -1$. The dependence of the implied constant in~\eqref{e:decay} on $T_0$ in the case $\gamma \leq -d/2-1$ propagates to the rest of our estimates. Throughout this proof, as we absorb algebraic-in-$v$ factors into factors with Gaussian decay in $v$, $\mu'$ denotes a changing, positive constant, with $\mu'<\mu \leq \mu_0$. The constant $\mu'$ changes only finitely many times, by an arbitrarily small amount, so the final conclusion is valid for any $\mu'<\mu$.
Let $f_{z_0}$ be as in Lemma \ref{l:T} with base point $z_0\in [0,T_0]\times \mathbb R^d\times \mathbb R^d$. Since \Cref{l:T} locally controls the coefficients in the equation for $f_{z_0}$~\eqref{e:isotropic}, we may apply \cite[Theorem~2]{golse2016} to obtain:
\begin{align*}
|f_{z_0}|_{\alpha,Q_{1/2}} \lesssim \|f_{z_0}\|_{L^2(Q_1)}+|\overline C f_{z_0}|_{0,Q_1},
\end{align*}
for some $\alpha\in (0,1)$. Using the Gaussian decay of $f$~\eqref{e:decay}, this implies $|f_{z_0}|_{\alpha,Q_{1/2}} \lesssim e^{-\mu'|v_0|^2}$. By rescaling, we have $|f_{z_0}|_{\alpha,Q_{1}} \lesssim e^{-\mu'|v_0|^2}$. Next, Lemma~\ref{l:coefficients} with $M=p=0$, along with the local upper bounds on $\overline A$ and $\overline C$ in Lemma \ref{l:T}, implies that the coefficients $\overline A$ and $\overline C$ in \eqref{e:isotropic-nondivergence} satisfy
\[ \left|\overline A\right|_{\alpha,Q_{1}} + \left|\overline C\right|_{\alpha,Q_{1}} \lesssim (1+|v_0|)^{k_0},\]
for some $k_0\in \mathbb R$, with $\alpha$ as above.
We apply the Schauder estimate, Theorem \ref{t:weak-schauder}(a), to $f_{z_0}$ in $Q_1$ to obtain
\[ [f_{z_0}]_{1+\alpha,Q_{1/2}} \leq C([\overline C f_{z_0}]_{\alpha,Q_1} + |\overline A|_{\alpha,Q_1}^{p}|f_{z_0}|_{0,Q_1}) \lesssim e^{-\mu'|v_0|^2},\]
for any $z_0\in (0,T_0]\times \mathbb R^d\times \mathbb R^d$, where $p>0$ depends on $\alpha$. By Lemma \ref{l:coefficients} again, this implies $\overline A, \overline C\in C^{1+\alpha}(Q_{1/2})$, with
\begin{align*}
\left|\overline A\right|_{1+\alpha,Q_{1/2}} &\lesssim r_1^{-2} (1+|v_0|)^{k_1} \lesssim (1+t_0^{-1})(1+|v_0|)^{k_1},\\
\left|\overline C\right|_{1+\alpha,Q_{1/2}} &\lesssim (1+|v_0|)^{\ell_1},
\end{align*}
for $k_1, \ell_1 \in \mathbb R$. We can now apply Theorem \ref{t:weak-schauder}(b) to obtain
\begin{align*}
[\partial_t f_{z_0}]_{\alpha,Q_{1/4}}& + [\nabla_x f_{z_0}]_{\alpha,Q_{1/4}} + [D_v^3 f_{z_0}]_{\alpha,Q_{1/4}}\\
&\lesssim (|\overline C f_{z_0}|_{1+\alpha,Q_{1/2}} + |\overline A|_{1+\alpha,Q_{1/2}}^q|f_{z_0}|_{0,Q_{1/2}})\\
&\lesssim (1+t_0^{-q}) e^{-\mu'|v_0|^2},
\end{align*}
where $q>0$ depends on $\alpha$. Again, by taking a larger constant we have
\[[D_v^3 f_{z_0}]_{\alpha,Q_{1}} + [\partial_t f_{z_0}]_{\alpha,Q_{1}} + [\nabla_x f_{z_0}]_{\alpha,Q_{1}} \lesssim (1+t_0^{-q}) e^{-\mu'|v_0|^2}. \]
From here, we can inductively apply Theorem \ref{t:weak-schauder}(a) and (b) to conclude $f_{z_0}\in C^{\infty}(Q_1)$. In more detail, assume that all partial derivatives $\partial_t^j \partial_x^\beta\partial_v^\eta f_{z_0}$ with
\begin{equation}\label{e:M}
2j + 3|\beta| + |\eta| \leq M
\end{equation}
exist in $C^{\alpha}(Q_1)$, and that for every such partial derivative $\partial f_{z_0}$ and $z_0\in (0,T_0]\times \mathbb R^d\times \mathbb R^d$, we have
\begin{equation}\label{e:partial}
[\partial f_{z_0}]_{\alpha,Q_{1}} \leq C\left(1+t_0^{-q}\right) e^{-\mu'|v_0|^2},
\end{equation}
for some $q>0$. Then Lemma \ref{l:coefficients} implies that $\overline A$ and $\overline C$ in \eqref{e:isotropic-nondivergence} satisfy
\begin{equation}\label{e:AC}
\begin{split}
\left[\partial\overline A\right]_{1+\alpha,Q_{1/2}} &\lesssim r_1^{-M} (1+t_0^{-q})(1+|v_0|)^{k} \lesssim (1+t_0^{-q'})(1+|v_0|)^k\\
\left[\partial\overline C\right]_{1+\alpha,Q_{1/2}} &\lesssim r_1^{-M+2} (1+t_0^{-q}) (1+|v_0|)^k \lesssim (1+t_0^{-q'+1}) (1+|v_0|)^\ell,
\end{split}
\end{equation}
for some $q'>0$ and $k,\ell\in \mathbb R$. Letting $\partial = \partial_t^j\partial_x^\beta\partial_v^\eta$ be a partial derivative satisfying \eqref{e:M}, we can therefore differentiate equation \eqref{e:isotropic-nondivergence} to obtain an equation for $\partial f_{z_0}$ of the form
\begin{align*}
\partial_t (\partial f_{z_0}) + v\cdot \nabla_x (\partial f_{z_0}) &= \mbox{tr}\left(\overline A(z) D_v^2 \partial f_{z_0}\right) + \overline C(z)\partial f_{z_0} + \mathcal F(f_{z_0}(z),\overline A(z),\overline C(z)),
\end{align*}
for some differential operator $\mathcal F$ of order at most $M$ (counted with the scaling of \eqref{e:M}). Applying Theorem \ref{t:weak-schauder}(a) and our inductive hypothesis \eqref{e:partial}, we have
\begin{align*}
[\partial f_{z_0}]_{1+\alpha,Q_{1/2}} &\lesssim \left([\overline C(z) \partial f_{z_0}+ \mathcal F(f_{z_0}(z),\overline A(z),\overline C(z))]_{\alpha,Q_1} + |\overline A|_{\alpha,Q_1}^p|f_{z_0}|_{0,Q_1}\right)\\
&\lesssim \left(1+t_0^{-q''}\right)e^{-\mu'|v_0|^2},
\end{align*}
with $q''>0$. By \eqref{e:AC}, we have enough regularity of $\overline C(z)$ and $\mathcal F(f_{z_0}(z),\overline A(z),\overline C(z))$ to apply Theorem \ref{t:weak-schauder}(b):
\[[D_v^3 \partial f_{z_0}]_{\alpha,Q_{1/4}} + [\partial_t \partial f_{z_0}]_{\alpha,Q_{1/4}} + [\nabla_x \partial f_{z_0}]_{\alpha,Q_{1/4}} \lesssim \left(1+t_0^{-q'''}\right) e^{-\mu'|v_0|^2}. \]
As above, we may replace $Q_{1/4}$ with $Q_1$ by taking a larger implied constant. Such an estimate holds for each partial derivative $\partial f_{z_0}$ satisfying \eqref{e:M}, so we have shown \eqref{e:partial} holds with some $q>0$ for $\partial_t^j\partial_x^\beta\partial_v^\eta f_{z_0}$ whenever
\[ 2j + 3|\beta| + |\eta| \leq M + 3.\]
We conclude $f_{z_0}\in C^\infty(Q_1)$ for any $z_0\in (0,T_0]\times \mathbb R^d\times \mathbb R^d$. By Proposition \ref{p:f0-f}, we have that $f\in C^\infty((0,T_0]\times \mathbb R^d\times \mathbb R^d)$ with the pointwise estimates \eqref{e:pointwise}.
\end{proof}
\section{Introduction}
A mobile connection is our window to the world.
The current social, economic, and political drive to reach global wireless coverage and digital inclusion acknowledges connectivity as vital for accessing fair education, medical care, and business opportunities in a post-pandemic society.
Sadly, nearly half of the population on Earth remains unconnected.
Indeed, rolling out optical fibers and radio transmitters to every location on the planet is not economically viable,
and reaching the billions who live in rural or less privileged areas has remained a chimera for decades.
The long-overdue democratization of wireless communications requires a wholly new design paradigm to realize ubiquitous and sustained connectivity in an affordable manner.
Meanwhile, in more urbanized and populated areas, even 5G may eventually fall short of satiating our appetite for mobile internet and new user experiences.
Life in the 2030s and beyond will look quite different from today’s:
hordes of network-connected UAVs (uncrewed aerial vehicles) will navigate 3D aerial highways---be it for public safety or to deliver groceries to our doorstep---,
and flying taxis will re-shape how we commute and, in turn, where we live and work.
The bold ambition of reaching for the sky will take the data transfer capacity, latency, and reliability needs for the underpinning network to an extreme,
requiring dedicated radio resources and infrastructure for aerial services \cite{geraci2021will,wu20205g}.
In a quest for anything, anytime, anywhere connectivity---
even up in the air---,
next-generation mobile networks may need to break the boundary of the current ground-focused paradigm and fully embrace aerial and spaceborne communications \cite{RinMaaTor2020,giordani2020non}.
To this end,
the wireless community has already rolled up its sleeves in (re)search for technology enhancements towards a fully integrated terrestrial plus non-terrestrial network (NTN) able to satisfy both ground and aerial requirements.
At first glance,
terrestrial networks (TNs) could be:
(i) re-engineered and optimized to support aerial users \cite{MozLinHay2021,chowdhury2021ensuring},
or (ii) complemented by NTN infrastructure such as low Earth orbit (LEO) satellite constellations or aerial base stations (BSs) to further enhance performance \cite{kodheli2020satellite,KarKhoAlf2021}. Cost-related factors may advocate for a progressive roadmap.
In the present paper, we discuss the opportunities and challenges lying behind a 3D integrated TN-NTN.
We begin by providing examples of key use-cases, overviewing the building blocks of an integrated TN-NTN architecture,
and summarizing the most relevant 3GPP standardization activities.
We then introduce the case study of a conventional terrestrial operator pursuing aerial connectivity through two plausible choices:
(i) deploying dedicated uptilted cells---or partnering with a specialized aerial operator doing so---reusing the same spectrum;
(ii) leasing infrastructure or solutions from a LEO satellite operator.
We conclude by reviewing the main hurdles that stand in the way to an integrated TN-NTN and pointing out key open problems worthy of further research.
\section{Use-cases, Architecture, and Standardization}
In this section, we describe the main use-cases and components of a plausible integrated TN-NTN, and we summarize the major NTN and UAV standardization advancements.
\subsection{Use-cases}
The opportunities unlocked by integrating TN and NTN capabilities could lead to a vast number of new applications and services.
In what follows,
we provide a representative down-selection of the key use-cases.
\emph{Critical communications:}
Connectivity from space or air can empower ultra-reliable critical communications in the absence of cellular coverage or during an emergency or natural disaster. In this case, when the ground network becomes dysfunctional and the importance of providing rapid and resilient connectivity cannot be overstated,
NTNs can ensure replacement coverage through direct access from space/air or even via satellite- or cellular- backhauled UAV radio access nodes.
\emph{Massive IoT and immersive communications:}
NTNs can cover large areas of land or sea populated with both static and nomadic sensory nodes, all collecting real-time data. Aggregating and displaying the latter through AR/VR applications will provide users with spatial and contextual awareness, enabling immersive human-machine interaction, likely one of the 6G killer apps. Depending on the latency requirements and sensory node capabilities, data aggregation could be handled by a LEO constellation in the field of view of a ground gateway or aerial BSs. NTN broadcast/multicast could then pursue content scalability and uninterrupted delivery to users in cars, trains, and vessels.
\emph{Aerial communications:}
Beyond standalone TNs,
primarily designed for 2D usage,
an integrated TN-NTN could support reliable data and control links to multiple UAVs, electric vertical take-off and landing vehicles (eVTOLs), and aircraft.
These services would be guaranteed in specific 3D areas---aerial corridors or waypoint trajectories---where end-devices will be allowed to fly at different heights. The potential of UAVs may only truly be unleashed once the network capabilities and regulations allow for autonomous operation beyond visual line-of-sight (LoS) \cite{ZenGuvZha2020,SaaBenMoz2020}.
\subsection{Architecture}
A simplified integrated TN-NTN architecture is illustrated in Fig.~\ref{fig:architecture}, with service links connecting a user terminal---either handheld/IoT or VSAT---to TN/NTN BSs, feeder links connecting the NTN segment to the ground core network, and (optionally) inter-satellite and/or inter-high-altitude platform stations (HAPS) links.
\begin{figure}
\centering
\includegraphics[width=\figwidth]{
Figures/architecture_v03.png}
\caption{Exemplified integrated TN-NTN.
NTN BS functionalities can be placed onboard satellites or at the NTN gateway, respectively entailing a regenerative or transparent satellite payload \cite{kodheli2020satellite}.}
\label{fig:architecture}
\end{figure}
\subsubsection*{Network platforms}
The 3D TN-NTN will avail of a multi-layered multi-band infrastructure, arranged hierarchically, with the following nodes operating at different altitudes and offering user-centric coverage and service:
\begin{itemize}
\item
TN BSs of various size, power, height, and orientation,
operating in sub-6 GHz, mmWave, and eventually THz bands,
and deployed with different densities.
Along with conventional downtilted BSs,
mobile operators may choose to deploy dedicated infrastructure,
e.g., uptilted cells, to serve aerial users.
\item
Geostationary orbit (GSO) satellites,
orbiting the equatorial plane at an altitude of about 35786~km,
and creating fixed beams with a footprint radius of up to 3000 km.
\item
Non-GSO satellites,
such as LEO,
deployed at altitudes between 300–1500~km,
creating footprints of up to 1000 km radius per beam.
Unlike their GSO counterpart,
LEO satellites move fast with respect to a given point on the Earth,
with an orbital period of just a few hours,
and thus require large constellations for coverage continuity.
\item
Aerial BSs such as HAPSs,
placed in the stratosphere at around 20 km, and
creating multiple cells sized about 10 km each,
or UAV radio access nodes,
flying at heights somewhere between 0-1~km.
\item
Ground gateways connecting aerial and spaceborne platforms to the core network through so-called feeder links.
\end{itemize}
\subsubsection*{Terminals}
The end-devices of a 3D TN-NTN can be classified as follows:
\begin{itemize}
\item
Stationary and vehicular ground users (GUEs),
in areas ranging from dense urban to suburban, rural, and remote.
\item
UAVs, eVTOLs, and aircraft, demanding in-flight connectivity at altitudes of a few hundred meters, 1--3~km, and 10--12~km, respectively \cite{MozLinHay2021}.
\end{itemize}
Satellite-connected devices can either be handheld/IoT or equipped with a very-small-aperture terminal (VSAT), depending on the use-case and the carrier frequency of the service link. Indeed, the more benign link budget in the S-band (sub-6 GHz) enables direct access to omni- or semi-directional handheld/IoT terminals.
Operating in the Ka-band (mmWave spectrum) incurs a higher attenuation, which must be compensated with a larger antenna gain by employing a VSAT. The latter can be either fixed or mounted on a moving platform (buses, trains, vessels, or aircraft), thus giving options for either mobile or fixed broadband access.
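To make the band comparison concrete, the following minimal sketch evaluates the free-space path loss of a LEO service link in both bands (the 600~km altitude and zenith geometry are illustrative assumptions only, not parameters used later in this article):

\begin{verbatim}
import math

def fspl_db(distance_m, freq_hz):
    # Free-space path loss (Friis formula), in dB
    c = 299792458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

d = 600e3  # assumed LEO altitude, zenith pass
for band, f in [("S-band, 2 GHz", 2e9), ("Ka-band, 20 GHz", 20e9)]:
    print(band, "->", round(fspl_db(d, f), 1), "dB")
\end{verbatim}

The resulting 20~dB gap between the two carriers is, to first order, what the larger VSAT antenna gain must recover.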
\subsection{Standardization}
Standardization work on non-terrestrial communications in 3GPP dates back to 2017 \cite{LinRomEul2021}.
This effort can be classified nowadays into two main areas, namely NTN enhancements and TN support for UAVs.
The former aims at defining a global standard for future spaceborne communications, fostering an explosive growth in the satellite industry.
Activities within the latter serve the twofold purpose of ensuring that mobile standards meet the connectivity needs for safe UAV operations, and that other users of the network do not experience a loss of service due to their proximity to UAVs.
The objectives and outputs of the 3GPP work carried out from Rel-15 up to Rel-17,
along with the topics currently under study for Rel-18 are outlined as follows and summarized in Table~\ref{tab:3GPP}.
\input{0Y_Table_3GPP}
\subsubsection*{NTN enhancements}
In 3GPP parlance,
the term NTN refers to utilizing satellites or HAPS to offer connectivity services and complement terrestrial networks, especially in remote areas where cellular coverage is unavailable. In Rel-17, 3GPP introduced a set of basic features to enable 5G NR operation over NTNs in FR1, i.e., up to 7.125 GHz.
3GPP Rel-18 will enhance 5G NR NTN operation by
improving coverage for handheld terminals,
studying deployments above 10 GHz,
addressing mobility and service continuity between TN-NTN as well as across different NTNs,
and investigating regulatory requirements for network-verified user location \cite{Lin2022}.
\subsubsection*{Support for UAVs}
3GPP introduced 4G LTE support for UAVs back in Rel-15,
including signaling for subscription-based aerial user identification,
reporting of UAV height, location, speed, and flight path,
and new measurement reports to address aerial interference up to a certain density of low-altitude UAVs.
In subsequent releases, 3GPP addressed application layer support and security for connected UAVs, also defining the service interactions between UAVs and the UAV traffic management (UTM) system.
As 5G use-cases evolve, Rel-18 will introduce 5G NR support for devices onboard aerial vehicles,
studying additional triggers for conditional handover, BS uptilting, and signaling to indicate UAV beamforming capabilities, among other enhancements \cite{Lin2022}.
\section{Opportunities}
In this section, we consider two multi-operator case studies to illustrate how terrestrial networks could be
(i) efficiently re-engineered to support non-terrestrial end-devices such as UAVs,
or (ii) opportunistically complemented by non-terrestrial infrastructure to augment their current capabilities.
The main system-level assumptions for these two setups are summarized in Table~\ref{table:parameters}.
\input{0X_Table_Parameters.tex}
\subsection{Example I: Re-designing TNs for NTN Terminals}
As the penetration of aerial users increases, a terrestrial mobile network operator (MNO) may choose to cater for UAV connectivity or partner
with another MNO intending to do so \cite{MozLinHay2021,EAN}. The latter gives rise to the following hypothetical setup with two operators sharing the same spectrum, namely:
\begin{itemize}
\item
A terrestrial operator, ${\mathsf{MNO}_{\sf T}}$, running a standard network comprised of downtilted cells to serve legacy GUEs.
\item
An aerial operator, ${\mathsf{MNO}_{\sf A}}$, running a dedicated network of uptilted BSs reserved exclusively for connected UAVs.
\end{itemize}
The deployment sites of both operators lie on a hexagonal layout and consist of three co-located BSs,
each covering one sector (i.e., a cell) spanning an angular interval of $120^{\circ}$.
Let ${\mathsf{ISD}_{\sf T}}$ and ${\mathsf{ISD}_{\sf A}}$ denote the respective inter-site distances,
whereby we fix the former to 500~m, and vary the latter to study its effect.
We assume 15 GUEs for each ${\mathsf{MNO}_{\sf T}}$ cell,
and for all values of ${\mathsf{ISD}_{\sf A}}$,
we keep the UAV density constant and according to 3GPP Case~3 in TR~36.777,
yielding \{1,~4,~9\} UAVs/cell under ${\mathsf{ISD}_{\sf A}} = \{500,~1000,~1500\}$~m, respectively.
GUEs are located both outdoor at $1.5$~m and indoor in buildings consisting of several floors.
UAVs fly outdoor at a height of $150$~m.
We assume all GUEs and UAVs to have a single omnidirectional antenna,
and to connect to the strongest cell of their respective serving operators.
Both UAVs and GUEs employ the open-loop power control policy specified in 3GPP TS 36.213.
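For concreteness, the essence of this fractional open-loop rule can be sketched as follows (a minimal illustration with hypothetical parameter values, not those of Table~\ref{table:parameters}):

\begin{verbatim}
import math

def ul_tx_power_dbm(p0_dbm, alpha, pathloss_db, n_prb, p_max_dbm=23.0):
    # Simplified open-loop fractional power control:
    # P = min(Pmax, P0 + 10*log10(M) + alpha * PL)
    return min(p_max_dbm,
               p0_dbm + 10 * math.log10(n_prb) + alpha * pathloss_db)

# A UAV in strong LoS sees a lower pathloss than a cell-edge GUE
print(ul_tx_power_dbm(-85, 0.8, 95, 50))   # LoS-like link: 8.0 dBm
print(ul_tx_power_dbm(-85, 0.8, 120, 50))  # cell-edge link: capped at 23 dBm
\end{verbatim}

Under such a rule, densifying ${\mathsf{MNO}_{\sf A}}$ lowers the compensated pathloss and hence the UAV transmit power, which is the mechanism behind the interference trends discussed next.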
The models reported in 3GPP TR 38.901 and TR 36.777 are invoked to characterize the propagation features of all links.
We assume the BSs of ${\mathsf{MNO}_{\sf T}}$ and ${\mathsf{MNO}_{\sf A}}$ to be respectively downtilted by $-12^\circ$ and uptilted by $45^\circ$,
the former being commonplace for ${\mathsf{ISD}_{\sf T}} = 500$~m,
and the latter yielding the best UAV performance in most cases.
Each cell is equipped with an $8 \times 8$ massive MIMO array of cross-polarized semi-directive elements,
each connected to a separate RF chain, resulting in a total of 128 RF chains.
For both operators,
we assume perfect channel state information, and consider two different multi-user precoding paradigms:
\begin{itemize}
\item
\emph{Zero-forcing (ZF) precoding},
where each BS spatially multiplexes a subset of its users (a minimal numerical sketch follows this list).
On one hand,
this paradigm requires low-to-no coordination for radio resource allocation
since all scheduling, beamforming, and networking decisions are performed individually by each BS.
On the other hand, such a simplification comes at the cost of inter-MNO co-channel interference.
\item
\emph{Eigendirection-aware (EDA) precoding},
where BSs dedicate a certain number of spatial degrees of freedom to place radiation nulls, thereby canceling interference on the dominant eigendirections of the inter-cell channel subspace \cite{GarGerLop2019}.
This approach requires
coordination between ${\mathsf{MNO}_{\sf T}}$ and ${\mathsf{MNO}_{\sf A}}$ for channel state information acquisition,
possibly requiring them to belong to the same network provider.
\end{itemize}
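As an illustration of the ZF paradigm, the sketch below computes a single-cell ZF precoder (the choice of $4$ spatially multiplexed users and an i.i.d. Rayleigh channel are assumptions made purely for illustration; only the 128 RF chains match our setup):

\begin{verbatim}
import numpy as np

def zf_precoder(H):
    # H: (users x antennas); W = H^H (H H^H)^{-1},
    # with columns normalized to unit power per user
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    return W / np.linalg.norm(W, axis=0, keepdims=True)

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 128))
     + 1j * rng.standard_normal((4, 128))) / np.sqrt(2)
W = zf_precoder(H)
print(np.round(np.abs(H @ W), 6))  # diagonal: intra-cell streams are orthogonal
\end{verbatim}

By construction $HW$ is diagonal, i.e., intra-cell interference is nulled, while the co-channel interference between ${\mathsf{MNO}_{\sf T}}$ and ${\mathsf{MNO}_{\sf A}}$ is left untouched; suppressing the latter is precisely the role of the EDA nulls.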
We focus our analysis on the uplink, the more data-hungry direction for UAVs, whose generated transmissions may pose a threat to legacy GUEs \cite{geraci2021will}. Fig.~\ref{fig:SINR_MNOA_UL} shows the SINR attained by UAVs and GUEs for various values of ${\mathsf{ISD}_{\sf A}}$ and the two precoding schemes.
These results show the following:
\begin{itemize}
\item
Offloading UAVs from ${\mathsf{MNO}_{\sf T}}$ sees their SINR reduced, unless the deployment of ${\mathsf{MNO}_{\sf A}}$ is sufficiently dense. Importantly, UAVs remain in coverage even under ${\mathsf{ISD}_{\sf A}}=1500$~m. Offloading, however, provides UAVs with higher data rates, as shown later.
\item
As ${\mathsf{ISD}_{\sf A}}$ is reduced,
UAVs are no longer forced to connect to far-off dedicated BSs, and can afford to reduce their transmission power and, in turn, the interference they generate. This results in an increasing SINR for UAVs and GUEs alike.
\item
Upgrading from ZF to EDA precoding allows both operators to neutralize the increased intercell interference arising from spectrum sharing. For ${\mathsf{MNO}_{\sf T}}$, this countermeasure is key to preserve the legacy GUEs performance.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=\figwidth]{
Figures/Figure_UL_SINR_combined.png}
\caption{Uplink SINR for UAVs (top) and GUEs (bottom) with ${\mathsf{MNO}_{\sf T}}$ and ${\mathsf{MNO}_{\sf A}}$ sharing the same spectrum, for ${\mathsf{ISD}_{\sf T}}=500$~m and a variable ${\mathsf{ISD}_{\sf A}}$, and employing ZF (blue) or EDA (orange) precoding. ${\mathsf{ISD}_{\sf A}}=\infty$ denotes all GUEs and UAVs served by standalone ${\mathsf{MNO}_{\sf T}}$. Solid and transparent bars denote 95\%-tile and median, respectively.}
\label{fig:SINR_MNOA_UL}
\end{figure}
While not shown for space constraints, similar observations can be made for the downlink, with two caveats:
\begin{itemize}
\item
UAVs turn from originators to main victims of inter-MNO interference.
Reducing ${\mathsf{ISD}_{\sf A}}$ allows ${\mathsf{MNO}_{\sf A}}$ a corresponding power reduction, which can be used to trade off UAVs and GUEs performance.
\item
The benefits of EDA nullsteering are mostly confined to---and needed by---${\mathsf{MNO}_{\sf A}}$.
Indeed, the dominant channel eigendirections for both operators correspond to users most vulnerable to downlink interference.
Intuitively, in the presence of UAVs,
their strong LoS channels dominate said subspace and most nulls target receiving UAVs.
\end{itemize}
Under the right deployment and interference mitigation choices, the dual-MNO paradigm can offer comparable SINRs to a setup where GUEs and UAVs are all served by ${\mathsf{MNO}_{\sf T}}$. However, the spatial and spectrum reuse gains provided by ${\mathsf{MNO}_{\sf A}}$ reflect in the UAV data rates, reported in Fig.~\ref{fig:rate_MNOA_UL} for the uplink. These largely benefit from increasing the deployment density of ${\mathsf{MNO}_{\sf A}}$ and employing EDA precoding. Focusing on the 95\%-tile, standalone ${\mathsf{MNO}_{\sf T}}$ with ZF provides 36~Mbps as opposed to the 134~Mbps achievable with ${\mathsf{MNO}_{\sf T}}$-plus-${\mathsf{MNO}_{\sf A}}$ and ${\mathsf{ISD}_{\sf A}}=500$~m.
The former may be sufficient for remote UAV control through HD video, whereas the latter may also empower 8K real-time live video broadcast (for future VR applications) and $4\!\times\!4$K AI surveillance (for control and anti-collision in building-intensive areas where positioning accuracy is lacking) \cite{geraci2021will}.
\begin{figure}
\centering
\includegraphics[width=\figwidth]{
Figures/Figure_UL_rate_UAV.png}
\caption{Uplink UAV rates with ${\mathsf{MNO}_{\sf T}}$ and ${\mathsf{MNO}_{\sf A}}$ sharing the same spectrum, for ${\mathsf{ISD}_{\sf T}}=500$~m and a variable ${\mathsf{ISD}_{\sf A}}$, and employing ZF (blue) or EDA (orange) precoding. ${\mathsf{ISD}_{\sf A}}=\infty$ denotes all GUEs and UAVs served by standalone ${\mathsf{MNO}_{\sf T}}$. Solid and transparent bars denote 95\%-tile and median, respectively.}
\label{fig:rate_MNOA_UL}
\end{figure}
\subsection{Example II: Complementing TNs with NTN Infrastructure}
While primarily targeting underserved areas,
NTNs may also be leveraged to augment urban connectivity,
e.g., with ${\mathsf{MNO}_{\sf T}}$ opportunistically leasing spectrum and infrastructure from a satellite service provider.
In this example, we study the benefits of such an arrangement when offering service to passengers onboard eVTOLs, flying at 1500~m over an urban area \cite{MozLinHay2021}. Let us define:
\begin{itemize}
\item
The same operator ${\mathsf{MNO}_{\sf T}}$ as in Example I.
\item
A satellite operator ${\mathsf{MNO}_{\sf S}}$,
availing of a LEO constellation
and operating in an orthogonal S-band (sub-6~GHz).
\end{itemize}
Each LEO BS of ${\mathsf{MNO}_{\sf S}}$ generates multiple Earth-moving beams pointing to the ground in a hexagonal fashion,
each creating one corresponding NTN cell \cite{SedFelLin2020}.
Due to its orbital movement,
the LEO satellite may be seen by the users under a variable elevation angle,
defined as the angle between the line pointing towards the satellite and the local horizontal plane,
whereby angles closer to $90^{\circ}$ yield shorter LEO-to-user distances, and are more likely to be in LoS.
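The purely geometric part of this dependence can be sketched as follows (spherical-Earth slant-range formula in the spirit of 3GPP TR 38.811; the 600~km altitude is an illustrative assumption):

\begin{verbatim}
import math

R_E, h = 6371e3, 600e3  # Earth radius and assumed LEO altitude [m]

def slant_range_m(elev_deg):
    # User-to-satellite distance vs. elevation angle (spherical Earth)
    s = math.sin(math.radians(elev_deg))
    return math.sqrt((R_E * s)**2 + h**2 + 2 * R_E * h) - R_E * s

for e in (90, 87, 60, 30):
    print(e, "deg ->", round(slant_range_m(e) / 1e3, 1), "km")
\end{verbatim}

Lower elevation angles map into longer slant ranges, and hence weaker links, on top of the beam gain roll-off away from boresight.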
Besides the elevation angle, the NTN performance is affected by the beam frequency reuse factor (FRF).
With ${\mathsf{FRF}} = 1$,
all frequency resources are fully reused across all beams,
whereas with ${\mathsf{FRF}} = 3$,
they are partitioned into three sets,
each reused every three beams.
The assumptions reported in 3GPP TR 38.811 and 38.821 are used to characterize the main NTN propagation features.
This time we focus on the downlink, likely the predominant direction for eVTOL occupants. For the latter, Fig.~\ref{fig:SINR_DL_MNOS} shows the CDF of the downlink SINR experienced when all are served by ${\mathsf{MNO}_{\sf T}}$ and when their traffic is offloaded to ${\mathsf{MNO}_{\sf S}}$. For ${\mathsf{MNO}_{\sf S}}$, various LEO elevation angles are considered. The following remarks can be made:
\begin{itemize}
\item
A standalone ${\mathsf{MNO}_{\sf T}}$ employing ZF struggles to guarantee coverage to eVTOLs as they proliferate. Indeed, increasing their number from 0.1 to 1 per cell incurs a progressively larger outage, i.e., SINR~$<-5$~dB, reaching up to $18\%$ of the cases (solid black). This is due to the insufficient angular separation between users, caused by their density and sheer height, which also renders nullsteering (not shown) unhelpful.
\item
Offloading traffic from ${\mathsf{MNO}_{\sf T}}$ to ${\mathsf{MNO}_{\sf S}}$ yields universal coverage with SINRs ranging between $-3$~dB and $17$~dB for the elevation angles and beam FRFs considered.
\item
Moving from ${\mathsf{FRF}}=3$ to ${\mathsf{FRF}} = 1$ entails full reuse---and thus inter-beam interference---, degrading the median downlink SINR by approximately $8$~dB and $14$~dB for elevation angles of $90^{\circ}$ and $87^{\circ}$, respectively.
\item
The SINR experiences a prominent degradation when the LEO satellite moves from $90^{\circ}$ to $87^{\circ}$,
owing to a larger propagation distance and a lower antenna gain, with the median loss in excess of 8~dB for ${\mathsf{FRF}}=1$.
Nonetheless, all offloaded users still remain in coverage, even in the presence of inter-beam interference (${\mathsf{FRF}} = 1$).
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=\figwidth]{
Figures/Figure_DL_SINR_vsNumUAVs_v02.png}
\caption{Downlink SINR for eVTOL passengers when connected to ${\mathsf{MNO}_{\sf T}}$ and when offloaded to ${\mathsf{MNO}_{\sf S}}$. For the latter, various LEO satellite elevation angles and beam FRFs are considered.}
\label{fig:SINR_DL_MNOS}
\end{figure}
As for the achievable rates, assuming one eVTOL passenger per cell over an area of $\SI{10.8}{km^2}$---the size of Sant Mart\'{i}, Barcelona's business district---yields a total of 150 users, out of which those in outage ($18\%$, i.e., 27 users) could be offloaded to ${\mathsf{MNO}_{\sf S}}$. Under an ideal elevation angle of $90^{\circ}$ and ${\mathsf{FRF}}=3$, they would experience median rates of 3~Mbps. Reducing the density of eVTOLs rapidly increases their experienced rates as both their absolute number shrinks and so does the outage percentage from ${\mathsf{MNO}_{\sf T}}$. Specifically, 0.5 and 0.2 eVTOLs per cell respectively yield 75 and 30 eVTOLs in total. Out of these, 8.8\% and 2.6\% experience SINRs below $-5$~dB, for a total of 7 and 1 eVTOLs incurring outage, respectively. When offloaded to ${\mathsf{MNO}_{\sf S}}$, their median rates would be of around 11~Mbps and 80~Mbps, respectively.
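The user counts above follow from elementary bookkeeping, which we record for transparency:

\begin{verbatim}
cells = 150  # one eVTOL passenger per MNO_T cell over 10.8 km^2
for per_cell, outage in [(1.0, 0.18), (0.5, 0.088), (0.2, 0.026)]:
    users = round(cells * per_cell)
    print(users, "eVTOLs ->", round(users * outage), "offloaded to MNO_S")
\end{verbatim}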
While our findings are encouraging, they also suggest that in a future with hordes of high-altitude vehicles, broadband aerial communications may require higher NTN spatial reuse through narrow beams and possibly operating in the Ka-band \cite{giordani2020non,SedFelLin2020}. This option may be viable for relayed access through a more directive receiver mounted onboard the eVTOL \cite{EAN}.
\section{Challenges and Research Directions}
The availability of TN plus NTN segments is a prerequisite for realizing a 3D wireless network.
Jointly and optimally designing and operating all platforms and nodes requires further disruptive and interdisciplinary research.
In this section,
we identify the key obstacles that stand in the way along with the most needed technological enablers.
\subsection{The Challenge of Extreme Heterogeneity}
One chief challenge in realizing an integrated TN-NTN arises from its extreme heterogeneity,
reflected at different levels as outlined below.
\subsubsection*{Radio propagation features}
NTNs comprise systems and end-devices at different altitude layers, each with its own service features.
For instance, GSO satellites provide stable and continuous links to ground devices with a considerable propagation delay,
whereas LEO satellites are characterized by lower-delay interfaces, but may suffer from service discontinuity depending on the constellation density. The type of service provided by each layer must be mapped to the user demand, factoring in the interplay of different layers, through dynamic TN-NTN quality of experience management and scheduling.
\subsubsection*{Node and device capabilities}
By design, GSO satellites differ from LEO satellites in terms of redundancy mechanisms, antenna designs, transceivers, operational frequency, and/or internal resources (e.g., storage, processing, and power availability). The variance in capabilities is yet more apparent with aerial vehicles, which are conceived for largely different purposes and environments, and with terminals, whose antennas range from small and isotropic to active ones capable of tracking. The above further exacerbates the need for network management,
to guarantee a near-optimal use of radio resources while leveraging this heterogeneity.
\subsubsection*{Ownership and operations}
Mega-constellations are emerging to expand Internet coverage through hundreds or thousands of satellites, bringing about frequency coordination and collision avoidance issues, among others.
While current systems lack interoperability, with each operator featuring a vertically integrated stack, 3GPP standardization will be crucial for interconnection,
giving way to more heterogeneous scenarios. With multiple systems designed and operated in an ad-hoc fashion, their decentralized management and optimization may be a cornerstone to realizing a practical integrated TN-NTN.
\subsection{Research Directions}
Its extreme heterogeneity makes realizing a 3D network a remarkable endeavor. In the sequel, we propose much-needed research towards an integrated TN-NTN \cite{geraci2021will,kodheli2020satellite}.
\subsubsection*{3D radio access}
Next-generation networks will have to connect flying end-devices at all heights, including their occupants. Our preliminary results vouch for exploiting dedicated uptilted cells and NTN platforms to support aerial services. Nonetheless, operators will have to seek optimal performance-cost tradeoffs, ensuring coexistence between aerial and legacy ground users, and between different co-channel technologies. This goal calls for sophisticated interference management schemes leveraging time, frequency, power, and spatial degrees of freedom, and designed atop realistic air-to-ground channel models.
\subsubsection*{3D mobility management and multi-connectivity}
An integrated TN-NTN will face unprecedented mobility challenges brought about both by flying end-devices and by a mobile infrastructure, dynamically dealing with user cell selection, re-selection, and configuration. Beyond current power-triggered procedures,
novel use-case-specific and asymmetric approaches will be required, also accounting for the handover direction, e.g., within a vertical layer (within a LEO constellation or inter-HAPS) or across technologies (ground-to-air/space or vice versa). Optimal mobility management policies will need to trade off reliability, spectral- and energy-efficient load balancing, and signaling overhead caused by conditional handover preparations and radio link failures.
\subsubsection*{3D network management and orchestration}
Meeting the heterogeneous and ever more stringent traffic needs across a 3D wireless network will require optimal load distribution, defining the slices of radio resources to be assigned to each service class, accounting for the features of the available TN/NTN radio links, and following their rapidly varying topology. A service orchestrator should dynamically allocate resources at NTN nodes according to their availability, mobility patterns, architecture hierarchy, and incoming traffic, ensuring seamless service continuity to the end-user in spite of intermittent service link availability and feeder link disruptions. Besides communications, computation and caching resources scattered across TN and NTN nodes will also need to be optimally allocated and leveraged.
\section{Conclusion}
In this paper, we connected the dots between ground, aerial, and spaceborne communications, and reviewed the key opportunities and challenges brought about by integrating terrestrial and non-terrestrial networks.
We studied augmenting a ground deployment with uptilted cells, and also complementing it with a LEO constellation. We found both to be promising avenues for supporting aerial communications, under the right design choices: the former entails advanced interference mitigation capabilities, the latter hinges on a sufficiently dense constellation---to guarantee near-zenith coverage---and a carefully designed beam reuse.
\section*{Biographies}
\small
\noindent
\textbf{Giovanni Geraci} is an Assistant Professor at Univ. Pompeu Fabra in Barcelona. He is an IEEE ComSoc Distinguished Lecturer, co-edited the book ``UAV Communications for 5G and Beyond'', and received the IEEE ComSoc EMEA Outstanding Young Researcher Award.
\vspace{0.2cm}
\noindent
\textbf{David L\'{o}pez-P\'{e}rez} is an Expert and Technical Leader at Huawei Research in Paris. He was a Bell Labs Distinguished Member of Technical Staff and has co-authored 150+ research articles, 50+ filed patents, and two books on small cells and ultra-dense networks.
\vspace{0.2cm}
\noindent
\textbf{Mohamed Benzaghta} is a Ph.D. candidate at Univ. Pompeu Fabra in Barcelona. He received B.Sc. and M.Sc. degrees from Atilim Univ. in Ankara and his research interests include the integration of terrestrial and non-terrestrial wireless communications.
\vspace{0.2cm}
\noindent
\textbf{Symeon Chatzinotas} is Full Professor and Head of the SIGCOM Research Group at SnT, University of Luxembourg, where he is acting as a PI for more than 20 projects.
He was the co-recipient of the 2014 IEEE Distinguished Contributions to Satellite Communications Award and has co-authored more than 450 technical papers.
\section{Introduction}
An instanton on a Taub-NUT space is a connection, given by a $u(n)$-valued one-form $i A,$ on an $n$-dimensional Hermitian bundle ${\cal E}$ over the Taub-NUT space with the curvature two-form $F=dA-i A\wedge A$ satisfying the self-duality condition
\begin{equation}
F=*F.
\end{equation}
Here $*$ denotes the Hodge star operator taking a two-form to its dual. We require the connection $A$ to have finite action $S=\int {\rm tr} F\wedge *F.$
Everywhere outside one point $0$ the Taub-NUT space itself can be thought of as being fibered by a circle $S^1$ over a base ${\mathbb R}^3\backslash 0$. Choosing $\tau\sim\tau+4\pi$ to be the periodic coordinate on the $S^1$ fiber and $\vec{x}=(x_1,x_2,x_3),$ with $x_1, x_2,$ and $x_3$ coordinates on ${\mathbb R}^3,$ the Taub-NUT metric\footnote{The factor of $\frac{1}{4}$ in the metric is chosen for future convenience and the apparent singularity at the origin of ${\mathbb R}^3$ is merely a coordinate singularity.} is
\begin{equation}
ds^2=\frac{1}{4}\left(\left( l+\frac{1}{|\vec{x}|}\right) d\vec{x}^2+\frac{1}{\left( l+\frac{1}{|\vec{x}|}\right)}(d\tau+\vec{\omega}\cdot d\vec{x})^2\right),
\end{equation}
where $l$ is some fixed parameter determining the asymptotic size of the $S^1$ and $\frac{\partial}{\partial x_i}\frac{1}{|\vec{x}|}=\epsilon_{ijk}\frac{\partial}{\partial x_j} \omega_k.$ This metric degenerates to a flat metric on ${\mathbb R}^4$ as $l\rightarrow 0.$ Its noncompact cycle ${\cal C}: \{(\tau, \vec{x}) | x_1=x_2=0, x_3\geq 0\}$ becomes a plane in this limit.
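For concreteness, one convenient gauge choice satisfying the last relation is the standard Dirac potential: in spherical coordinates $(r,\theta,\phi)$ on the base one may take
\begin{equation}
\omega=(\cos\theta-1)\, d\phi,
\end{equation}
for which $*_3\, d\omega=d\frac{1}{|\vec{x}|}$, with $*_3$ the Hodge star of the flat metric $d\vec{x}^2$; any other choice differs by a gauge transformation. Note that in this gauge $\omega$ vanishes along the positive $x_3$-axis.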
The Taub-NUT space is equipped with a natural line bundle with a connection $a=\frac{1}{2V}(d\tau+\omega),$ where $V=l+\frac{1}{|\vec{x}|}.$ This connection has a self-dual curvature $da.$ As a matter of fact, the Taub-NUT space carries a one-parameter family of such line bundles with the following Abelian connections
\begin{equation}
a_s=s a=\frac{s}{2}\frac{d\tau+\omega}{V},
\end{equation}
parameterized by $s\in[-l/2,l/2].$ These connections are Abelian instantons, as their curvature is self-dual in the orientation $(\tau, x_1, x_2, x_3)$ and has a finite action. Note that the relation between the left and right ends of this interval is given by tensoring with a line bundle ${\cal L}_l,$ which is trivial since
\begin{equation}
\int_C d(a_{l/2}-a_{-l/2})=2\pi.
\end{equation}
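Indeed, in a gauge where $\omega$ vanishes along ${\cal C}$, Stokes' theorem reduces the integral to the boundary circle at infinity: as $|\vec{x}|\rightarrow\infty$ we have $V\rightarrow l$, so that
\begin{equation}
\int_{\cal C} d(a_{l/2}-a_{-l/2})=\lim_{x_3\rightarrow\infty}\oint \frac{l}{2}\,\frac{d\tau+\omega}{V} = \frac{1}{2}\int_0^{4\pi} d\tau = 2\pi,
\end{equation}
with no contribution from the end of ${\cal C}$ at the origin, where $V\rightarrow\infty$ and the $\tau$-circle degenerates.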
\subsection{Background}
There has been a lot of work exploring instantons in various backgrounds. The original ADHM construction \cite{Atiyah:1978ri} provides all instantons on ${\mathbb R}^4$. Nahm modified this construction in \cite{Nahm:1979yw,NahmCalorons} to provide calorons, i.e. instantons on ${\mathbb R}^3\times S^1.$ Orbifolding the ADHM construction, Kronheimer and Nakajima \cite{KN} obtained instantons on ALE spaces. In \cite{Nekrasov:1998ss} Nekrasov and Schwarz modified the ADHM construction to construct instantons on noncommutative ${\mathbb R}^4.$ All of these constructions have string theory interpretations \cite{DM,Johnson:1996py,Diaconescu:1996rk} and emerge from the sigma model analysis of appropriate D-brane configurations.
Based on these general constructions some explicit solutions at a general position were obtained in \cite{Kraan:1998xe, Lee:1998bb} for a caloron and in \cite{Bianchi:1995xd} for instantons on certain ALE spaces.
We would like to point out that in all these cases the underlying space is flat, or it has a useful flat limit.
Here, we aim to find a general construction for generic\footnote{Some special instanton solutions on the Taub-NUT space were obtained in
\cite{Pope:1978kj,BoutalebJoutei:1979iz,Kim:2000mg,Etesi:2003ei}.} instantons on an essentially curved space.
In particular, building on the bow formalism introduced in \cite{Cherkis:2008ip} to study the moduli spaces of instantons on the Taub-NUT space, we find expressions for the instanton connection. As an illustration of our construction we find the explicit general solution for a single instanton on a Taub-NUT space.
\subsection{Instanton Number and Monopole Charges}
A generic self-dual $U(n)$ configuration on the Taub-NUT space possesses two types of topological charges: an instanton number $k_0$ and $n$ monopole charges $m_1, m_2, \ldots , m_n.$ The instanton number as well as the monopole charges are given by integers. A detailed discussion of various charges of instantons on multi-Taub-NUT spaces and their relation with the corresponding brane configurations appeared recently in \cite{Witten:2009xu}. Here we define charges in a somewhat different fashion.
For any given $\vec{x}\in{\mathbb R}^3$ consider the monodromy $W(\vec{x},\tau)\in U(n)$ satisfying $(\partial_\tau-iA_\tau) W(\vec{x},\tau)=0$ and $W(\vec{x},0)=1,$ so that the monodromy around the circle $S^1_{\vec{x}}$ is $W(\vec{x}, 4\pi).$ The finite action condition implies that the conjugacy class of $\lim_{|\vec{x}|\rightarrow\infty} W(\vec{x}, 4\pi)$ is well defined and does not depend on the direction in which we approach infinity. We write the eigenvalues of $\lim_{|\vec{x}|\rightarrow\infty} W(\vec{x}, 4\pi)$ as $$\exp\left(\frac{2\pi i \lambda_1}{l} \right), \exp\left(\frac{2\pi i \lambda_2}{l} \right),\ldots, \exp\left(\frac{2\pi i \lambda_n}{l}\right).$$ Here we restrict our attention to the so-called `maximal symmetry breaking' case, presuming all $\lambda_j$ are distinct and ordered: $-\frac{l}{2}<\lambda_1<\lambda_2<\ldots<\lambda_n<\frac{l}{2}.$
Consider a sphere $S_R^2=\{\vec{x} | |\vec{x}|=R\}\subset{\mathbb R}^3$ of large radius $R.$ Any point on this sphere determines a $\tau$-circle in the Taub-NUT space, so that the union of all these circles is a squashed three sphere $S^3_R.$ Thus, $S^3_R$ is fibered by circles over $S^2_R$ and, for a Taub-NUT space, this fibration is the Hopf fibration $S^1\rightarrow S^3_R\rightarrow S^2_R.$ Since the total action is finite there is a gauge transformation on $S^3_R$ such that for large radius $R$ the connection $A$ restricted to $S^3_R$ approaches one with $\tau$-independent components. Let us write this connection with $\tau$-independent components in the form $A=\hat{A}-\hat{\Phi} \frac{d\tau+\omega}{V}.$ Then, the self-duality condition for $A$ is equivalent \cite{Kronheimer:1985} to the Bogomolny equation
\begin{equation}
\hat{F}=*_3 D_{\hat{A}}\hat{\Phi},
\end{equation}
for $(\hat{A}, \hat{\Phi}).$ Here $*_3$ is the three-dimensional Hodge star operator for the flat metric $dx_1^2+dx_2^2+dx_3^2$ and $\hat{F}$ is the curvature form of $\hat{A}.$ The asymptotic eigenvalues of $\hat{\Phi}$ are determined by the eigenvalues of the monodromy operator $W(\vec{x}, 4\pi).$ Moreover, since the asymptotic behavior of $\hat{\Phi}$ eigenvalues is the same as for a BPS monopole, the eigenvalues of $\hat{\Phi}$ are
\begin{align}
&\lambda_1+\frac{j_1}{x}+O(x^{-2}), &&\lambda_2+\frac{j_2}{x}+O(x^{-2}), &\ldots\ \ &, &&\lambda_n+\frac{j_n}{x}+O(x^{-2}),
\end{align}
with $j_1, j_2, \ldots, j_n$ integers.
Let us describe this construction in different terms, making clear that $j$'s are indeed integers. Considering the eigen-spaces of the monodromy operator $W(\vec{x}, \tau+4\pi) W^{-1}(\vec{x}, \tau)$ we split the bundle ${\cal E}|_{S^3_R}$ into $n$ eigen-line bundles ${\cal E}|_{S^3_R}=\L_{\lambda_1}\oplus\L_{\lambda_2}\oplus\ldots\oplus\L_{\lambda_n}.$ Since each eigenvalue $\lambda$ is independent of the base, each of these line bundles can be trivialized on all $S^1$ Hopf fibers simultaneously. Thus we have well-defined pushdown line bundles over the base of the Hopf fibration $S^2_R.$ The Chern classes of these are $j_1, j_2, \ldots, j_n.$ We now use these integers to define the monopole charges of the configuration.
Let $M=\min(j_1, j_1+j_2,\ldots, j_1+j_2+\ldots+j_n).$ The monopole charges of an instanton on a Taub-NUT are defined as $$(m_1, m_2, \ldots, m_n)=(j_1-M, j_1+j_2-M,\ldots, j_1+j_2+\ldots+j_n-M).$$ Note that, from the way they are defined, one of these charges, say $m_p,$ must vanish. Nevertheless, we keep it among the charges, and its position $p$ is significant, as will be clear momentarily.
Intuitively, since the total action is finite, the asymptotic connection can be put into a form independent of the $\tau$ coordinate. Then, asymptotically, it can be reduced to a monopole on the base ${\mathbb R}^3$ \cite{Kronheimer:1985}. It is the charges of this monopole that we defined above.
The instanton number is less straightforward to define. One can write an explicit expression given by the Chern number minus the contributions of the monopole charges. To make clear that it is an integer, we define it here as an index of the Weyl operator for the connection $A+\frac{1}{2}(\lambda_p+\lambda_{p+1})a$:
\begin{equation}
k_0={\rm Ind}\ {\backslash\!\!\!\!D}_{A+\frac{1}{2}(\lambda_p+\lambda_{p+1})a}.
\end{equation}
Thus a general $U(n)$ instanton on a Taub-NUT has an instanton number $k_0$ and monopole charges $(m_1, m_2,\ldots, m_n).$
Kronheimer \cite{Kronheimer:1985} demonstrated equivalence of the `pure monopole' case, i.e. the case with $k_0=0,$ to singular monopoles studied in \cite{Cherkis:1997aa, Cherkis:1998hi}. In particular, explicit solutions for $k_0=0$ and $m=1$ (that is $(1,0)$ monopole charges) are equivalent to singular monopole solutions presented in \cite{Cherkis:2007qa,Cherkis:2007jm}. In this paper we focus our attention on the pure instanton case of vanishing monopole charges, and obtain explicit solutions with $k_0=1,$ i.e. a single $SU(2)$ instanton on the Taub-NUT space with no monopole charge. The explicit metric on the moduli space of such solutions was found in \cite{Cherkis:2008ip}.
\section{Ingredients}
The data specifying an instanton on a Taub-NUT space will be encoded in terms of a {\em bow diagram}.
There are two basic ingredients in our construction: arrows and strings.
\begin{figure}[htbp]
\begin{center}
\subfigure[Linear maps (arrows and limbs).]
{
\includegraphics[width=0.4\textwidth]{QuiverLeg2Dots.eps}
}
\hspace{1.5cm}
\subfigure[Nahm Data (string).]
{
\includegraphics[width=0.4\textwidth]{NahmSquiggle.eps}
}
\caption{Components of bow diagrams.}
\label{ingredients}
\end{center}
\end{figure}
\subsection{Arrows and Limbs}
Figure 1a represents a pair of complex vector spaces $V={\mathbb C}^v $ and $W={\mathbb C}^w$ with maps $J: V\rightarrow W$ and $I: W\rightarrow V.$ The linear space formed by the pair of maps $(I,J)$ has a natural hyperk\"ahler structure, which is respected by the action of $U(v)$ and $U(w).$ The hyperk\"ahler moment map of the $U(v)$ action $g_v:(I,J)\mapsto(g_v^{-1} I, J g_v)$ is given by
\begin{equation}
\mu_V^{\mathbb C}=\mu_V^1+i\mu_V^2=I J,\ \ \ \mu_V^{\mathbb R}=\mu_V^3=\frac{1}{2}(J^\dagger J-I I^\dagger),
\end{equation}
while for the $U(w)$ action $g_w: (I,J)\mapsto ( I g_w, g_w^{-1} J )$ the moment map is
\begin{equation}
\mu_W^{\mathbb C}=\mu_W^1+i\mu_W^2=-J I,\ \ \ \mu_W^{\mathbb R}=\mu_W^3=\frac{1}{2}(I^\dagger I-J J^\dagger).
\end{equation}
It is convenient to assemble the pair $(I,J)$ into
\begin{equation}
Q=\left(\begin{array}{c} J^\dagger\\ I\end{array}\right)\ \mathrm{and}\
\stackrel{\rotatebox{180}{Q}}{ }=\left(\begin{array}{c} I^\dagger\\ -J\end{array}\right),
\end{equation}
(pronounced ``kyu'' and ``yuk'') so that $Q: W\rightarrow S\otimes V$ and $\stackrel{\rotatebox{180}{Q}}{ }: V\rightarrow S\otimes W$
with the three complex structures $e_j=-i\sigma_j$ acting on $Q$'s. $S\approx {\mathbb C}^2$ is a two-dimensional space of spinors providing the representation of quaternions, with the quaternionic units
$e_j=-i\sigma_j,$ i.e.
$$
e_1=-i
\Bigl(
\begin{array}{cc}
\scriptstyle 0 & \scriptstyle 1 \\
\scriptstyle 1 & \scriptstyle 0
\end{array}
\Bigr),\
e_2=-i
\Bigl(
\begin{array}{cc}
\scriptstyle 0 & \scriptstyle -i \\
\scriptstyle i & \scriptstyle 0
\end{array}
\Bigr),\
e_3=-i
\Bigl(
\begin{array}{cc}
\scriptstyle 1 & \scriptstyle 0 \\
\scriptstyle 0 & \scriptstyle -1
\end{array}
\Bigr).
$$
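One can check directly that these satisfy the quaternion relations: since $\sigma_i\sigma_j=\delta_{ij}+i\epsilon_{ijk}\sigma_k,$ we have $e_i e_j=-\delta_{ij}+\epsilon_{ijk}e_k,$ so in particular $e_j^2=-1$ and $e_1 e_2=e_3.$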
The natural metric on the linear space of all pairs of maps is
\begin{equation}
ds^2={\rm tr}_W dQ^\dagger dQ={\rm tr}_W(dJ dJ^\dagger+dI^\dagger dI).
\end{equation}
The three symplectic forms
\begin{equation}
\omega_j\equiv g(\cdot, e_j\cdot)=\frac{1}{2}{\rm tr}_W(dQ^\dagger\wedge e_j dQ),
\end{equation}
can be combined into ${\backslash\!\!\!\omega}\equiv\omega_j\sigma_j= i {\rm Vec}\,{\rm tr}_V\, dQ\wedge dQ^\dagger.$ Here we introduce a `vector operation' ${\rm Vec}$ defined by
\begin{equation}
{\rm Vec}(1_{2\times 2}\otimes M^0 +\sigma_j\otimes M^j)=\sigma_j \otimes M^j.
\end{equation}
Since $-i\sigma_j$ represent the quaternionic imaginary units, this operation basically amounts to taking the imaginary part of a quaternion.
With this notation the moment maps are
\begin{equation}
{\backslash\!\!\!\mu}_V=\mu_V^i \sigma_i={\rm Vec}(Q Q^\dagger)\ \ \mathrm{and}\ \
{\backslash\!\!\!\mu}_W=\mu_W^i \sigma_i={\rm Vec}(\stackrel{\rotatebox{180}{Q}}{ } \stackrel{\rotatebox{180}{Q}}{ }^\dagger).
\end{equation}
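Indeed, $Q Q^\dagger=\Bigl(\begin{smallmatrix} J^\dagger J & J^\dagger I^\dagger\\ I J & I I^\dagger\end{smallmatrix}\Bigr),$ whose $\sigma_3$ component is $\frac{1}{2}(J^\dagger J-I I^\dagger)=\mu_V^3,$ while its $\sigma_1$ and $\sigma_2$ components recombine into $\mu_V^1+i\mu_V^2=I J,$ reproducing the component formulas above.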
\subsection{The String}
Figure 1b represents an interval $\cal I$ parameterised by $s$ with a bundle $E\rightarrow\cal I$ endowed with a Hermitian structure, a connection $D_s=d/ds+iT_0,$ and a triplet $\vec{T}=(T_1,T_2,T_3)$ of endomorphisms of $E.$ In other words, for a given trivialization of $E$ we have a quadruplet of Hermitian matrix-valued functions $(T_0(s), T_1(s), T_2(s), T_3(s)).$ These also form a linear space with a natural flat metric
$ds^2=\int{\rm tr}_E\left(dT_0^2+dT_1^2+dT_2^2+dT_3^2\right) ds$
and a hyperk\"ahler structure all invariant with respect to the following gauge group action
\begin{equation}
g(s): \left(\begin{array}{c}T_0(s)\\ T_1(s)\\ T_2(s)\\ T_3(s)\end{array}\right)\mapsto
\left(\begin{array}{c}g^{-1} T_0 g-i g^{-1}\frac{d}{ds}g\\ g^{-1}T_1 g\\ g^{-1}T_2 g\\ g^{-1}T_3 g\end{array}\right).
\end{equation}
The corresponding moment maps are
\begin{eqnarray}
\mu^1&=&\frac{d}{ds}T_1+i[T_0,T_1]+i[T_2,T_3],\\
\mu^2&=&\frac{d}{ds}T_2+i[T_0,T_2]+i[T_3,T_1],\\
\mu^3&=&\frac{d}{ds}T_3+i[T_0,T_3]+i[T_1,T_2].
\end{eqnarray}
It is convenient to introduce ${\backslash\!\!\!\!\,T}=\sigma_1\otimes T_1+\sigma_2\otimes T_2+\sigma_3\otimes T_3$ so that the moment map ${\backslash\!\!\!\mu}=[\frac{d}{ds}+i T_0, {\backslash\!\!\!\!\,T}]+{\rm Vec}\,{\backslash\!\!\!\!\,T} {\backslash\!\!\!\!\,T}.$
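(Expanding ${\backslash\!\!\!\!\,T}\,{\backslash\!\!\!\!\,T}=\vec{T}\cdot\vec{T}+\frac{i}{2}\epsilon_{ijk}\sigma_k[T_i,T_j]$ and collecting the coefficients of $\sigma_1, \sigma_2, \sigma_3$ one recovers the three component moment maps above.)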
Assembling the Nahm data into a quaternion $T=T_0+T_j\otimes e_j=T_0-i{\backslash\!\!\!\!\,T}$ we write the above metric on the linear space of all the Nahm data in the form
\begin{equation}
ds^2=\frac{1}{2}\int {\rm tr}_S\, {\rm tr}_E\, \delta T^\dagger\delta T ds.
\end{equation}
The three symplectic forms $\omega_j=g(\cdot, e_j\cdot)$ are encoded in
\begin{equation}
{\backslash\!\!\!\omega}=\frac{i}{2}\int {\rm tr}_E \delta T\wedge\delta T^\dagger ds.
\end{equation}
Note that the moment maps can be written in terms of the Weyl operator ${\backslash\!\!\!\!D}=-D_s+{\backslash\!\!\!\!\,T}$ and its conjugate ${\backslash\!\!\!\!D}^\dagger=D_s+{\backslash\!\!\!\!\,T}$ as
\begin{equation}
{\backslash\!\!\!\mu}={\rm Vec}(D_s+{\backslash\!\!\!\!\,T})(-D_s+{\backslash\!\!\!\!\,T}).
\end{equation}
\section{The Taub-NUT as a Hyperk\"ahler Quotient}
This section contains a description of the Taub-NUT space using the ingredients we have defined in the previous section. This description will naturally lead us to a family of self-dual harmonic forms\footnote{A description of these in terms of the hyperk\"ahler reduction recently appeared in \cite{Witten:2009xu}.} which are essential for the instanton construction that follows. Our exposition in this section is close to that of Gibbons and Rychenkova \cite{Gibbons:1996nt}.
Just as the construction \cite{KN} of instantons on ALE spaces relied on the realization of the underlying ALE space as a hyperk\"ahler quotient of linear spaces \cite{Kronheimer}, the realization of the Taub-NUT space as a hyperk\"ahler quotient given in this section sets the groundwork for the construction of instantons on it.
\subsection{Taub-NUT Bow Data}
The bow diagram in Figure \ref{fig:TN} represents Nahm data of rank $1$ associated with a Hermitian line bundle $e\rightarrow\cal I$ on an interval $[-l/2,l/2]$ of length $l,$ as well as maps $b_{10}$ and $b_{01}$ between the one-dimensional complex vector spaces $e_0=e|_{s=-l/2}$ and $e_1=e|_{s=l/2}$ at the ends of the interval.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{TNone.eps}
\caption{A Taub-NUT Bow Diagram.}
\label{fig:TN}
\end{center}
\end{figure}
A gauge transformation $h(s)$ acts on these data as follows:
\begin{equation}
\left(\begin{array}{c} t_0 \\ t_j \\ b_{01} \\ b_{10} \end{array}\right)\mapsto
\left(\begin{array}{c} h^{-1}t_0 h + i h^{-1}\frac{d}{ds} h \\ h^{-1}t_j h \\ h^{-1}(-{\scriptstyle \frac{l}{2}}) b_{01} h({\scriptstyle \frac{l}{2}}) \\ h^{-1}({\scriptstyle \frac{l}{2}}) b_{10} h(-{\scriptstyle \frac{l}{2}}) \end{array}\right)
\end{equation}
Introducing ${\bf t}=t_1+i t_2 $ and ${\bf\scriptstyle D}=d/ds-i t_0-t_3$ the vanishing of moment maps can be written in complex notation as
\begin{eqnarray}\label{eq:TNmom}
&&[{\bf\scriptstyle D}, {\bf t}]-\delta{\scriptstyle (s+\frac{l}{2})} b_{01} b_{10}+\delta{\scriptstyle (s-\frac{l}{2})}b_{10} b_{01}=0,\\
&&[{\bf\scriptstyle D}^\dagger, {\bf\scriptstyle D}]+[{\bf t}^\dagger, {\bf t}]+\delta{\scriptstyle (s+\frac{l}{2})}(b_{10}^\dagger b_{10}-b_{01} b_{01}^\dagger)+\delta{\scriptstyle (s- \frac{l}{2})}(b_{01}^\dagger b_{01}-b_{10} b_{10}^\dagger)=0.\nonumber
\end{eqnarray}
Let us distinguish some point $s_0$ on the Nahm interval. Say, this point divides the interval into two subintervals of lengths $l_L$ and $l_R,$ i.e. $l_L+l_R=l$ and at this distinguished point $s=s_0=l_L-l/2=l/2-l_R.$ Let us assume $s_0>0.$ We shall perform the hyperk\"ahler quotient step-by-step, so that the last step is the quotient with respect to the $U(1)$ at the distinguished point\footnote{To be exact, this $U(1)$ is the quotient of the group of all gauge transformations on the interval by the subgroup formed by the gauge transformations that equal the identity at $s_0.$} $s_0.$ This will allow us to associate with any point $s_0$ a natural line bundle over the Taub-NUT space and its natural connection corresponding to this $U(1).$
\subsection{The Family of Connections}
First we perform hyperk\"ahler reduction on each open interval separately. The intervals are of lengths $l_L$ and $l_R.$ Since the computations are identical, we focus on the interval of length $l_R$ to the right of $s_0.$ As the Nahm data is Abelian, the vanishing of the moment maps implies $d t_j/ds=0,$ thus the vector $\vec{t}=(t_1, t_2, t_3)$ is constant. The connection $t_0$ can be made constant using gauge transformations that are trivial at the ends of the interval. There is a large gauge transformation $g=\exp(2\pi i (s-s_0)/l_R)$ satisfying $g(s_0)=g(l/2)=1.$ This gauge transformation takes $t_{0R}$ to $t_{0R}+2\pi/l_R.$ Thus the result of this hyperk\"ahler reduction is ${\mathbb R}^3\times S^1$ with coordinates $t_{1R},t_{2R},t_{3R}$ and $t_{0R}\sim t_{0R}+2\pi/l_R$ and the metric
\begin{equation}
ds^2=\int_{s_0}^{l/2} \left( dt_{0R}^2+d{\vec{t}_R}^{\ 2}\right) ds=l_R\left(dt_{0R}^2+d\vec{t}_R^{\ 2}\right).
\end{equation}
The resulting metric on the Nahm data on the left interval is given by the same expression with $l_R$ replaced by $l_L.$
Now we perform the hyperk\"ahler reduction with respect to the $U(1)$ at $s=l/2,$ which can be realized by $h=\exp(i\phi\frac{s-s_0}{l_R}).$ Exploiting the fact that all the data is Abelian, we assemble the linear data $(b_{01}, b_{10})$ into a quaternion
\begin{equation}
q=q^0+q^i e_i=\left( b_-, b_+\right)=\left(\begin{array}{cc}
\bar{b}_{01} & \bar{b}_{10} \\-b_{10}& b_{01}
\end{array}\right).
\end{equation}
Here $b_-$ and $b_+$ play the roles of $\stackrel{\rotatebox{180}{Q}}{ }$ and $Q.$ The natural metric is
\begin{equation}
ds^2=\frac{1}{2}{\rm tr}_S dq^\dagger dq=db^\dagger_- db_-=d b^\dagger_+ db_+,
\end{equation}
and the resulting symplectic forms are given by ${\backslash\!\!\!\omega}=i{\rm Vec}\, dq\wedge dq^\dagger.$
A gauge transformation $h(s)$ with $h(-l/2)=\exp(i\phi_L)$ and $h(l/2)=\exp(i\phi_R)$ sends $q$ to $q\exp\big(e_3(\phi_R-\phi_L)\big).$ The resulting moment maps are ${\backslash\!\!\!\mu}_L=-\frac{1}{2}q\sigma_3 q^\dagger$ and ${\backslash\!\!\!\mu}_R=\frac{1}{2}q\sigma_3 q^\dagger.$
The rightmost $U(1)$ acts as
\begin{equation}
\exp({i\phi\frac{s-s_0}{l_R}}): (q,t_{0R}, \vec{t}_R)\mapsto(q e^{e_3 \phi}, t_{0R}-\phi/l_R, \vec{t}_R),
\end{equation}
with the moment map $\mu_1 e_1+\mu_2 e_2+\mu_3 e_3=\frac{1}{2}q e_3 \bar{q}=t_1 e_1+t_2 e_2+t_3 e_3.$ Let $q=a e^{e_3 \psi/2}$ where $a$ is a pure imaginary quaternion, and let $\vec{x}=(x_1, x_2, x_3)$ be such that $x_1 e_1+x_2 e_2+x_3 e_3=q e_3 \bar{q}.$ The periodic coordinate $\psi\sim\psi+4\pi$ and the components of $\vec{x}$ provide new coordinates on ${\mathbb R}^4$. Then the flat metric on the set of octuplets $(t_{0R}, \vec{t}_{R}, b_{01}, b_{10})$ is
\begin{eqnarray}
ds^2&=&\frac{1}{2}{\rm tr}_S dq^\dagger dq+l_R\bigl(
{dt}_{0R}^2+{d\vec{t}_R}^2\bigr)\\ &=&
\frac{1}{4}\left(\frac{1}{|\vec{x}|}d\vec{x}^2+|\vec{x}| (d\psi+\omega)^2\right)+l_R\left({dt}_{0R}^2+{d\vec{t}_R}^2\right),
\end{eqnarray}
where
\begin{equation}\label{Eq:b-relations}
i |\vec{x}| (\omega+d\psi)=\frac{i}{2}{\rm tr}\big(q e_3 dq^\dagger-dq e_3 q^\dagger\big)=db_-^\dagger b_- -b_-^\dagger db_-= -db_+^\dagger b_+ + b_+^\dagger db_+.
\end{equation}
One can easily verify that $\omega=\omega_j dx_j$ satisfies $\epsilon_{ijk}\partial_j\omega_k=\partial_i \frac{1}{|\vec{x}|}.$
The $U(1)$ acts by $e^{i\phi}: (\psi, t_{0R})\mapsto(\psi+2\phi, t_{0R}-\phi/l_R).$ The invariant of this action is $\sigma=\psi+2 l_R t_{0R}$ and the vanishing of the moment maps implies $\vec{t}_R=-\frac{1}{2}\vec{x}.$ One can readily verify that the above metric becomes
\begin{equation}
ds^2=\frac{1}{4}\left(\bigg(l_R+\frac{1}{|\vec{x}|}\bigg) d\vec{x}^2+\frac{(d\sigma+\omega)^2}{l_R+1/|\vec{x}|}\right)+l_R |\vec{x}|\bigg(l_R+\frac{1}{|\vec{x}|}\bigg)\left(dt_{0R}+\frac{1}{2} \frac{d\sigma+\omega}{l_R+1/|\vec{x}|}\right)^2.
\end{equation}
After factoring out the $e^{i\phi}$ action the result is
\begin{equation}
ds^2=\frac{1}{4}\left(\bigg(l_R+\frac{1}{|\vec{x}|}\bigg) d\vec{x}^2+\frac{(d\sigma+\omega)^2}{l_R+1/|\vec{x}|}\right).
\end{equation}
The last step in the hyperk\"ahler reduction procedure is the hyperk\"ahler quotient with respect to the $U(1)$ at the distinguished point $s=s_0.$ In order to represent this action we use the gauge transformation
\begin{equation}
h(s)=\begin{cases}\exp\big(i\frac{s}{s_0}\varepsilon\big)& {\rm for}\ s\leq s_0\\
\exp\Big(i\frac{l/2-s}{l/2-s_0}\varepsilon\Big)& {\rm for}\ s>s_0\end{cases},
\end{equation}
that is continuous and equals identity at $s=0$ and at $s=l/2.$
At $s=s_0$ this gauge transformation is $h(s_0)=e^{i\varepsilon}.$ It has the following action
\begin{equation}
h(s):\left(\begin{array}{c} t_{0L}\\ \vec{t}_L\\ t_{0R}\\ \vec{t}_R \\q\end{array}\right)
\mapsto
\left(\begin{array}{c} t_{0L}-\varepsilon/s_0\\ \vec{t}_L\\ t_{0R}+\varepsilon/l_R\\ \vec{t}_R \\q\exp\Big(e_3\frac{l}{2 s_0}\varepsilon\Big)\end{array}\right).
\end{equation}
The corresponding moment map is ${\backslash\!\!\!\mu}=\frac{l_L}{s_0}{\backslash\!\!\!t}_L-{\backslash\!\!\!t}_R+\frac{l}{2s_0}\frac{1}{2}{\backslash\!\!\!x}.$ Since the vanishing of the moment maps of the first stage of the reduction implies ${\backslash\!\!\!t}_R=-\frac{1}{2}{\backslash\!\!\!x},$ it follows that ${\backslash\!\!\!\mu}=\frac{l_L}{s_0}\big({\backslash\!\!\!t}_L+\frac{1}{2}{\backslash\!\!\!x}\big).$ Putting ${\backslash\!\!\!\mu}$ equal to zero we have ${\backslash\!\!\!t}_L=-\frac{1}{2}{\backslash\!\!\!x}$ as well, so $\vec{t}$ is constant on ${\cal I}.$
So far, including the data on the left interval, we have the metric
\begin{equation}
ds^2=\frac{1}{4}\left(\bigg(l_R+\frac{1}{|\vec{x}|}\bigg) d\vec{x}^2+\frac{(d\sigma+\omega)^2}{l_R+1/|\vec{x}|}\right)+l_L\left(dt_{0L}^2+d\vec{t}_L^2\right).
\end{equation}
Under the above gauge transformation the angle $\sigma=\psi+2 l_R t_{0R}\mapsto \sigma+2\frac{l_L}{s_0}\varepsilon.$ The invariant coordinate is $\tau=\sigma-2l_L t_{0L}=\psi+2 l_R t_{0R}-2 l_L t_{0L},$ and we choose $\varepsilon\sim\varepsilon+2\pi$ instead of $\sigma$ as a coordinate along the circle of the gauge transformation.
In these coordinates the above metric can be rewritten as
\begin{multline}\label{eq:hkq}
ds^2=\frac{1}{4}\left[\bigg(l+\frac{1}{|\vec{x}|}\bigg)d\vec{x}^2+\frac{1}{l+1/|\vec{x}|}(d\tau+\omega)^2\right]\\
+\frac{l_L \Big(l+\frac{1}{|\vec{x}|}\Big)}{s_0^2\big(l_R+1/|\vec{x}|\big)}\left(d\varepsilon+\frac{s_0}{2}\frac{d\tau+\omega}{\big(l+1/|\vec{x}|\big)}\right)^2.
\end{multline}
The first part of the expression (\ref{eq:hkq}) is the resulting hyperk\"ahler metric of the Taub-NUT space
\begin{equation}
4\, ds_{TN}^2=\bigg(l+\frac{1}{|\vec{x}|}\bigg)d\vec{x}^2+\frac{1}{l+1/|\vec{x}|}(d\tau+\omega)^2,
\end{equation}
here the one-form $\omega$ satisfies $d\omega=*_3 d \frac{1}{|\vec{x}|}.$
The second part of the expression in Eq.~(\ref{eq:hkq}) provides the natural connection $D=d+i s_0 a$ with the one form $s_0 a,$ where
\begin{equation}
a=\frac{1}{2} \frac{d\tau+\omega}{l+\frac{1}{|\vec{x}|}}.
\end{equation}
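Let us note in passing that this metric has the expected ALF behavior: as $|\vec{x}|\rightarrow\infty$ the fiber part approaches $\frac{1}{4l}d\tau^2$ with $\tau\sim\tau+4\pi,$ so the Taub-NUT circle has the finite asymptotic circumference $2\pi/\sqrt{l},$ while near $\vec{x}=0$ the harmonic term $1/|\vec{x}|$ dominates and the metric is the flat metric on ${\mathbb R}^4$ written in Gibbons-Hawking form.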
\subsection{A Basis of Self-dual Two-forms}
Let $V=l+1/|\vec{x}|,$ so that $a=\frac{d\tau+\omega}{2V}.$ Here we observe the following relation
\begin{equation}\label{Eq:SDForms}
\left(\frac{1}{2}d{\backslash\!\!\!x}-i a\right)^\dagger\wedge\left(\frac{1}{2}d{\backslash\!\!\!x}-i a\right)=\frac{i}{2}\sigma_k\left(\frac{d\tau+\omega}{V}\wedge dx^k+\frac{1}{2}\epsilon_{ijk}\, dx^i\wedge dx^j\right).
\end{equation}
The components of the right-hand side are self-dual two-forms in the orientation $(\tau, x^1, x^2, x^3),$ providing a basis of self-dual two-forms on the Taub-NUT. Let us note for future use that since $\frac{1}{2} d{\backslash\!\!\!x}-ia=-(d{\backslash\!\!\!t}+ia),$ in terms of the $\tau$ and $\vec{t}$ coordinates the combination $(d{\backslash\!\!\!t}+ia)^\dagger\wedge(d{\backslash\!\!\!t}+ia)$ is self-dual.
\section{Instanton Data}
Instanton data for an $SU(2)$ instanton with no monopole charges is represented by the bow diagram in Figure \ref{fig:Instanton}.
It consists of
\begin{itemize}
\item a rank $k_0$ vector bundle $E\rightarrow[-l/2,l/2]$ with the Nahm data $(T_0, \vec{T})$ on the intervals $[-l/2,-\lambda], [-\lambda,\lambda],$ and $[\lambda, l/2]$ (we do not presume two-sided continuity at $s=\pm\lambda$ across different intervals),
\item linear maps $B_{10}: E_{-l/2}\rightarrow E_{l/2}$ and $B_{01}: E_{l/2}\rightarrow E_{-l/2},$
\item linear maps
$I_L: W_L\rightarrow E_{-\lambda},\
J_L: E_{-\lambda}\rightarrow W_L,\
I_R: W_R\rightarrow E_\lambda,$ and
$J_R:E_\lambda\rightarrow W_R.$
\end{itemize}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.6\textwidth]{InstOnly.eps}
\caption{The bow diagram for an $SU(2)$ Instanton on the Taub-NUT.}
\label{fig:Instanton}
\end{center}
\end{figure}
The group of gauge transformations acts on these data as follows
\begin{equation}
g: \left(\begin{array}{c} T_0 \\ T_j \\ B_{01} \\ B_{10} \\ I_{\alpha} \\ J_{\alpha} \end{array}\right)\mapsto
\left(\begin{array}{c} g^{-1}{\scriptstyle (s)}T_0 g{\scriptstyle (s)} -i g^{-1}{\scriptstyle (s)}\frac{d}{ds} g{\scriptstyle (s)} \\ g^{-1}{\scriptstyle (s)}T_j g{\scriptstyle (s)} \\g^{-1}{(-\scriptstyle \frac{l}{2})} B_{01} g({\scriptstyle \frac{l}{2}}) \\ g^{-1}({\scriptstyle \frac{l}{2}}) B_{10} g(-{\scriptstyle \frac{l}{2}}) \\ g^{-1}{\scriptstyle( \lambda}_{\alpha}{\scriptstyle)} I_{\alpha} \\ J_{\alpha} g{\scriptstyle (\lambda}_\alpha{\scriptstyle)}
\end{array}\right),
\end{equation}
where the index $\alpha$ takes values $L$ and $R$ and we introduced $\lambda_L=-\lambda$ and $\lambda_R=\lambda.$
Introducing the complex notation $D=\frac{d}{ds}+iT_0-T_3$ and $T=T_1+i T_2,$ the moment maps are written as
\begin{align}\label{Eq:InstMom}
&&[D, T] + \delta{\scriptstyle (s+\frac{l}{2})} B_{01} B_{10} - \delta{\scriptstyle (s-\frac{l}{2})} B_{10} B_{01} + \sum_{\alpha\in\{L,R\}} \delta{\scriptstyle(s-\lambda}_\alpha{\scriptstyle)} I_\alpha J_\alpha=0,\\
&&[D^\dagger, D]+[T^\dagger, T] +\delta{\scriptstyle (s+\frac{l}{2})}(B_{10}^\dagger B_{10}- B_{01} B_{01}^\dagger)+\delta{\scriptstyle (s-\frac{l}{2})}(B_{01}^\dagger B_{01}-B_{10} B_{10}^\dagger)+\nonumber\\
&&+\sum_{\alpha\in\{L,R\}} \delta{\scriptstyle(s-\lambda}_\alpha{\scriptstyle)} (J_\alpha^\dagger J_\alpha-I_\alpha I^\dagger_\alpha)=0.\nonumber
\end{align}
These conditions can be written compactly if we introduce
\begin{equation}
B_-=\left(\begin{array}{c}
B_{10}^\dagger \\ -B_{01}\end{array}\right),\
B_+=\left(\begin{array}{c}
B_{01}^\dagger \\ B_{10}\end{array}\right),
\end{equation}
and ${\backslash\!\!\!\!D}=-\frac{d}{ds}-i T_0+{\backslash\!\!\!\!\,T}=
\bigl(\begin{smallmatrix}
-D&T^\dagger\\ T&D^\dagger
\end{smallmatrix}\bigr).$ Then the moment maps are given by
\begin{equation}
{\backslash\!\!\!\mu}={\rm Vec}\left({\backslash\!\!\!\!D}^\dagger {\backslash\!\!\!\!D}+\sum_\alpha\delta(s-\lambda_\alpha)Q_\alpha Q^\dagger_\alpha+\delta(s+\frac{l}{2}) B_- B_-^\dagger+\delta(s-\frac{l}{2}) B_+ B_+^\dagger\right).
\end{equation}
\section{The Nahm Transform}
\subsection{The Weyl Operator}
A central role in the ADHM-Nahm transform \cite{Atiyah:1978ri,Nahm:1979yw} is played by a certain linear operator. In the case at hand it is a modification of the Weyl operator. The details of a similar construction can be found in \cite{NahmCalorons,Hurtubise:1989qy} for the case of calorons.
Let ${\cal H}$ be the space of $L^2$ sections of $S\otimes E$ that are continuous on ${\cal I}$ and have $L^2$ derivatives on ${\cal I}\backslash\{\lambda_L, \lambda_R\}.$ Let $\tilde{\cal H}$ be the direct sum of the space of $L^2$ sections of $S\otimes E$ with the spaces $W_L, W_R, E_{-l/2},$ and $E_{l/2}.$ Given the instanton data of the bow diagram in Figure \ref{fig:Instanton} we introduce the operator $\mathfrak{D}: {\cal H}\to\tilde{\cal H}$ acting by
\begin{equation}
\mathfrak{D}: f\mapsto \left(
\begin{array}{c}
\bigl(-\frac{d}{ds}-i T_0+{\backslash\!\!\!\!\,T}\bigr)f\\
(J_L, I^\dagger_L) f(-\lambda)\\
(J_R, I^\dagger_R) f(\lambda)\\
\bigr(B_{01}, B^\dagger_{10}\bigr) f(l/2)\\
\bigl(-B_{10}, B^\dagger_{01}\bigr) f(-l/2)
\end{array}
\right).
\end{equation}
Let us denote by $\psi$ an $L^2$ section of the restriction of $S\otimes E$ to ${\cal I}\backslash\{\lambda_L, \lambda_R\},$ $\chi_\alpha\in W_{\alpha},$ $v_{-}\in E_{-l/2}$ and $v_{+}\in E_{l/2}.$
Integrating by parts we find that the cokernel of $\mathfrak{D}$ is given by $(\psi(s), \chi_L, \chi_R, v_{-}, v_{+})\in\tilde{\cal H}$ satisfying
\begin{align}
&\left(\frac{d}{ds}+iT_0+{\backslash\!\!\!\!\,T}\right)\psi=0,\ \mathrm{on}\ {\cal I}\backslash\{\lambda_L, \lambda_R\},\\
&\psi(\lambda_\alpha +)-\psi(\lambda_\alpha -)=-Q_\alpha \chi_\alpha,\\
&\psi(l/2)\phantom{-}=\left(\begin{array}{c} B^\dagger_{01}\\ B_{10}\end{array}\right) v_{-},\\
&\psi(-l/2)=-\left(\begin{array}{c} -B^\dagger_{10}\\ B_{01}\end{array}\right)v_{+}.
\end{align}
In other words the dual operator takes the form
\begin{equation}
\begin{split}
\mathfrak{D}^\dagger&=\left(\begin{array}{cc}
-D^\dagger & T^\dagger \\
T & D
\end{array}\right)
\oplus\Biggl(\mathop{\oplus}_{\scriptscriptstyle\alpha\in\{L,R\}} \delta{\scriptstyle(s-\lambda}_\alpha\scriptstyle{)}\left(\begin{array}{c} J_\alpha^\dagger \\ I_\alpha\end{array}\right)\Biggr)\\
&\oplus
\left(\delta{\scriptstyle (s+\frac{l}{2})}
\left(\begin{array}{c}
B_{10}^\dagger \\ -B_{01}\end{array}\right)
,\ \delta{\scriptstyle (s-\frac{l}{2})}
\left(\begin{array}{c}
B_{01}^\dagger \\ B_{10}\end{array}\right)
\right),\\
&={\backslash\!\!\!\!D}^\dagger\oplus\delta(s-\lambda_\alpha)Q_\alpha\oplus\Big(\delta({\scriptstyle s+\frac{l}{2}}) B_- , \delta({\scriptstyle s-\frac{l}{2}}) B_+\Big).
\end{split}
\end{equation}
In terms of $\mathfrak{D}$ and $\mathfrak{D}^\dagger$ the moment map conditions of Eqs.~(\ref{Eq:InstMom}) can be written as
\begin{equation}
{\rm Vec} (\mathfrak{D}^\dagger \mathfrak{D})=0.
\end{equation}
For a given point of the Taub-NUT space of Figure \ref{fig:TN}, corresponding to $(t_0,\vec{t}, b_{10}, b_{01})$ satisfying Eqs.(\ref{eq:TNmom}), we can twist the above operator as follows
\begin{equation}\label{Twist}
\begin{split}
\mathfrak{D}_t^\dagger&=
\left(\begin{array}{cc}
-D^\dagger-t_3 & T^\dagger-t^\dagger \\
T-t & D+t_3
\end{array}\right)
\oplus\Biggl(\mathop{\oplus}_{\scriptscriptstyle\alpha\in\{L,R\}} \delta(s-\lambda_\alpha)\left(\begin{array}{c}J_\alpha^\dagger \\ I_\alpha\end{array}\right)\Biggr)\\
&\quad \oplus
\left(\delta{\scriptstyle (s+\frac{l}{2})}\left(\begin{array}{cc}B_{10}^\dagger & -b^\dagger_{10}\\ -B_{01} & -b_{01}\end{array}\right)
+\delta{\scriptstyle (s-\frac{l}{2})}\left(\begin{array}{cc} -b^\dagger_{01} & B_{01}^\dagger \\ b_{10}&B_{10}\end{array}\right)\right).
\end{split}
\end{equation}
To be exact, whenever adding two operators with one of them belonging to the instanton bow data and the other to the Taub-NUT bow data, we understand both operators to be tensored with the identity, so that they act on the tensor product of the corresponding spaces. For example, $T-t$ stands as a shorthand for $T\otimes 1-1\otimes t.$ Unfortunately, in this case using the rigorous notation would make the formula above much harder to read. We also allow this shorthand since for the case of a single instanton the vector spaces are one-dimensional and the bow data is Abelian, so, conveniently, the expression in Eq.~(\ref{Twist}) makes perfect sense as it is written.
\subsection{The Connection}
From now on we understand $\psi$ to be a section of ${\mathbb C}^2\otimes E\otimes e \rightarrow {\cal I}\backslash\{-\lambda,\lambda\},$ $v_{-}\in E_{-l/2}\otimes e_{l/2}$ and $v_{+}\in E_{l/2}\otimes e_{-l/2}.$ We combine $v_+$ and $v_-$ into a spinor $v=\bigl(\begin{smallmatrix}
v_+\\ v_-
\end{smallmatrix}\bigr)$ and denote the data $(\psi(s),\chi_L,\chi_R,v)$ by $\mbox{\boldmath$\psi$}.$ The twisted operator $\mathfrak{D}_t^\dagger$ acts on the linear Hermitian space formed by such data.
For $$\mbox{\boldmath$\psi$}_1=(\psi_1(s),\chi_{L1},\chi_{R1},v_1)\ \text{and}\ \mbox{\boldmath$\psi$}_2=(\psi_2(s),\chi_{L2},\chi_{R2},v_2)$$ the natural Hermitian product is given by $(\mbox{\boldmath$\psi$}_1,\mbox{\boldmath$\psi$}_2)=v_1^\dagger v_2+(\chi_{L1})^\dagger \chi_{L2}+(\chi_{R1})^\dagger \chi_{R2}+\int_{-l/2}^{l/2} \psi_1^\dagger(s) \psi_2(s) ds.$ We also define the operator ${\bf s}$ acting on $\mbox{\boldmath$\psi$}$ as follows
\begin{equation}
{\bf s}:\left(\psi(s),\chi_L,\chi_R,v\right)\mapsto \left(s\psi(s),-\lambda\chi_L,\lambda\chi_R,
{
\Bigl(\begin{array}{cc}\scriptstyle -l/2& \scriptstyle 0\\ \scriptstyle 0& \scriptstyle l/2\end{array}\Bigr)
}
v\right).
\end{equation}
Once we find the orthonormal basis of solutions of $\mathfrak{D}^\dagger_t\mbox{\boldmath$\psi$}=0$ we arrange them as the columns of the matrix $\Psi,$ so that the orthonormality condition reads $(\Psi,\Psi)={\mathbb I}.$ The instanton connection $\nabla_\mu=\partial_\mu-i A_\mu$ is induced on the kernel of $\mathfrak{D}^\dagger_t$ by the connection $D_\mu=\partial_\mu+i{\bf s}a_\mu,$ thus $A_\mu=i(\Psi, D_\mu \Psi)$ and the associated $su(2)$-valued one-form $A=A_0d\tau+A_j dx^j$ is given by
\begin{equation}\label{eq:connection}
A=\left(\Psi,\Bigl(i\frac{\partial}{\partial\tau}-\frac{\bf s}{V}\Bigr)\Psi\right) d\tau+\left(\Psi, \Bigl(i\frac{\partial}{\partial x_j}-\omega_j\frac{\bf s}{V}\Bigr)\Psi\right) dx_j
\end{equation}
\section{The ADHM Limit}
To compare with the ADHM construction we solve $\mathfrak{D}_t^\dagger\Psi=0$ at the ends of the interval $s=\pm\frac{l}{2}$ to find
\begin{align}\label{eq:ends}
\psi{\scriptstyle (-\frac{l}{2})}&=-\left(\begin{array}{cc}
B_{10}^\dagger & -b_{10}^\dagger \\
-B_{01} & -b_{01}\end{array}\right) \left(\begin{array}{c} v_{+} \\ v_{-}\end{array}\right),
&
\psi{\scriptstyle(\frac{l}{2})}&=
\left(\begin{array}{cc}
-b_{01}^\dagger & B_{01}^\dagger \\
b_{10} & B_{10}
\end{array}\right)
\left(\begin{array}{c} v_{+} \\ v_{-}\end{array}\right).
\end{align}
The Nahm equations imply that $\vec{t}$ is constant on $[-l/2,l/2].$
It is instructive to consider first the case of a single instanton. In this case $\vec{T}$ is constant on each of the three intervals of ${\cal I}\backslash\{-\lambda,\lambda\}.$
Moreover, the values on the left and on the right intervals are equal, thus for some constant vectors $\vec{T}_1$ and $\vec{T}_2$
\begin{equation}\label{eq:piece}
\vec{T}(s)=\begin{cases}
\vec{T}_1 &\text{for $-l/2<s<-\lambda$ or $\lambda<s<l/2$},\\
\vec{T}_2 &\text{for $-\lambda<s<\lambda$}.
\end{cases}
\end{equation}
Let $\vec{z}_1=\vec{t}-\vec{T}_1$ and $\vec{z}_2=\vec{t}-\vec{T}_2,$ then the Weyl equation $\mathfrak{D}_t^\dagger\Psi=0$ becomes equivalent to
\begin{multline}
\left[ e^{-{\backslash\!\!\!z}_1(\frac{l}{2}-\lambda_R)}
\Bigl(\begin{array}{cc}
\scriptstyle -b_{01}^\dagger & \scriptstyle B_{01}^\dagger \\
\scriptstyle b_{10} & \scriptstyle B_{10}
\end{array}\Bigr)
+ e^{{\backslash\!\!\!z}_2(\lambda_R-\lambda_L)} e^{{\backslash\!\!\!z}_1(\lambda_L+\frac{l}{2})}
\Bigl(\begin{array}{cc}
\scriptstyle B_{10}^\dagger & \scriptstyle -b_{10}^\dagger \\
\scriptstyle-B_{01} & \scriptstyle -b_{01}
\end{array}\Bigr)
\right]
\left(\begin{array}{c}
v_{+} \\
v_{-}
\end{array}\right)+\\
\label{Eq:ADHM}
+e^{{\backslash\!\!\!z}_2(\lambda_R-\lambda_L)}
\Bigl(\begin{array}{c} \scriptstyle J^\dagger_L \\ \scriptstyle I_L\end{array}\Bigr)\chi_L+
\Bigl(\begin{array}{c} \scriptstyle J^\dagger_R \\ \scriptstyle I_R\end{array}\Bigr)\chi_R=0.
\end{multline}
Clearly in the limit of $l\rightarrow 0$ (and since $\lambda<l/2,$ we have $\lambda\rightarrow 0$) the above expression reduces to the ADHM linear equation.
For the case of instantons of general charge the exponentials in the above equation become path-ordered exponentials involving the corresponding nonabelian data $T$. Each of these represents parallel transport along an interval. In the $l\rightarrow 0$ limit, however, all of the intervals in the bow diagram contract to a point and the exponential factors all become identities. Therefore, in the general case, the equation \eqref{Eq:ADHM} for the kernel of $\mathfrak{D}_t^\dagger$ reduces to the ADHM linear equation.
\section{Proof of Self-duality}
The core of this proof is close to the original argument of Nahm \cite{Nahm:1979yw}, but requires some adjustments. We shall need the following relations
\footnote{These follow from Eq.(\ref{Eq:b-relations}) and $b_\pm b_\pm^\dagger=t\pm{\backslash\!\!\!t}.$ Namely, (\ref{Eq:b-relations}) implies $4itVa b_-=2b_-(db_-^\dagger) b_- -b_-d(b_-^\dagger b_-)=2(d(b_-b_-^\dagger))b_--2(db_-)(b_-^\dagger b_-)-b_-d(b_-^\dagger b_-)=2(dt-d{\backslash\!\!\!t})b_--4tdb_--2dt b_-=-4tdb_--2d{\backslash\!\!\!t}\, b_-,$ thus $2t(db_-+ila b_-)=-(d{\backslash\!\!\!t}+ia)b_-.$}
\begin{align}
(d+i l a)b_-&=-\frac{1}{2t}\big(d{\backslash\!\!\!t}+ia\big) b_-& \text{and}&& (d-il a)b_+&=\frac{1}{2t}\big(d{\backslash\!\!\!t}+ia\big)b_+.
\end{align}
We also use the fact that $\mathfrak{D}^\dagger_t\mathfrak{D}_t=1\otimes\Delta,$ with $\Delta$ positive definite (except for some $(\tau, \vec{t})$ corresponding to a finite number of isolated points on the Taub-NUT). Thus, it has a well defined inverse $G=\big(\mathfrak{D}^\dagger_t\mathfrak{D}_t\big)^{-1},$ given by the Green's function of $\Delta,$ that commutes with the quaternions and the $\sigma$-matrices.
As expressed by Eq.~(\ref{eq:connection}), the connection induced by $D_\mu$ on the kernel of $\mathfrak{D}^\dagger$ is the instanton connection $A_\mu;$ therefore the covariant differential is $dt^\mu \nabla_\mu\equiv dt^\mu(\partial_\mu-i A_\mu)=Pdt^\mu D_\mu P=P(d+i {\bf s} a)P=P\big(d+i {\bf s}\frac{d\tau+\omega}{2V}\big)P.$
So the connection is $A_\mu=i(\Psi, D_\mu \Psi),$ and therefore
\begin{eqnarray}
\partial_{[\mu} A_{\nu]}&=&i \big(D_{[\mu}\Psi, D_{\nu]}\Psi \big)-(\Psi, {\bf s}\Psi)\partial_{[\mu} a_{\nu]}\\
{}[A_\mu, A_\nu] &=& \big(D_{[\mu}\Psi, \Psi \big) \big(\Psi, D_{\nu]}\Psi \big).
\end{eqnarray}
It follows that the curvature is
\begin{equation}
\label{curvature}
F_{\mu\nu}=i[\nabla_\mu, \nabla_\nu]= \big(D_{[\mu}\Psi, (1-P) D_{\nu]}\Psi)-(\Psi,{\bf s}\Psi \big)\partial_{[\mu}a_{\nu]},
\end{equation}
where $1-P\equiv 1-\Psi\Psi^\dagger=\mathfrak{D}_t G \mathfrak{D}_t^\dagger.$ The second term in the curvature expression is self-dual since $da$ is, while for the first term we have
\begin{equation}\label{relation}
(D_{\mu}\Psi, (1-P) D_{\nu}\Psi)=\big([\mathfrak{D}_t^\dagger, D_\mu]\Psi, G [\mathfrak{D}_t^\dagger, D_\nu]\Psi\big).
\end{equation}
Recall that
\begin{equation}
\begin{split}
\mathfrak{D}_t^\dagger&=\bigg(\frac{d}{ds}+i T_0+{\backslash\!\!\!\!\,T}-{\backslash\!\!\!t}\bigg)\oplus\delta(s-\lambda_\alpha) Q_\alpha\\
&\quad \oplus\Bigl(\delta(s+l/2)(B_-, -b_+)+\delta(s-l/2)(-b_-, B_+) \Bigr).
\end{split}
\end{equation}
The commutator $dt^\nu [\mathfrak{D}_t^\dagger, D_\nu]$ is given by
\begin{equation}\label{Eq:comm}
\begin{split}
[\mathfrak{D}_t^\dagger, d+i \mathbf{s} a]&=\big(d{\backslash\!\!\!t}+ia\big)\oplus 0\oplus\Big(\delta(s-l/2)(d+i l a)b_-,\, \delta(s+l/2)(d-i l a)b_+\Big)\\
&=\big(d{\backslash\!\!\!t}+ia\big)\bigg(1\oplus 0\oplus\Big(-\frac{1}{2t}\delta(s-l/2) b_-, \frac{1}{2t}\delta(s+l/2) b_+\Big)\bigg).
\end{split}
\end{equation}
As the Green's function $G=(\mathfrak{D}^\dagger \mathfrak{D})^{-1}$ is scalar, that is, it commutes with the $\sigma$-matrices, and $\big(d{\backslash\!\!\!t}-ia\big)\wedge\big(d{\backslash\!\!\!t}+ia\big)= \big(\frac{1}{2}d{\backslash\!\!\!x}+ia \big)\wedge \big(\frac{1}{2}d{\backslash\!\!\!x}-ia \big)$ is self-dual according to Eq.(\ref{Eq:SDForms}), the curvature two-form $F=F_{\mu\nu}dx^\mu dx^\nu$ is self-dual as well due to Eqs.(\ref{curvature},\ref{relation},\ref{Eq:comm}).
\section{Instantons for the $U(n)$ Gauge Group}
Generalizing our construction to instantons with the gauge group $U(n)$ is fairly straightforward.
The corresponding bow diagram is given in Figure \ref{fig:UnInst}. The positions of the marked points $\lambda_1,\ldots, \lambda_n$ partitioning the interval $[-l/2, l/2]$ are given by the asymptotics of the eigenvalues of the monodromy of the instanton connection around the Taub-NUT circle. All of our previous discussion, including the proof of self-duality and the ADHM limit, remains valid.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.5\textwidth]{UnTNbow.eps}
\caption{Bow diagram for $U(n)$ Instanton on Taub-NUT.}
\label{fig:UnInst}
\end{center}
\end{figure}
\section{The Geometric Meaning of the Nahm Transform for Curved Manifolds}
The conventional Nahm transform \cite{DK} of some self-dual configuration (or of a dimensional reduction of a self-dual configuration) on a flat manifold $M={\mathbb R}^4/\Lambda$ results in some data on a dual space $N$ of flat connections on $M$. The kernel of the Nahm transform is the Poincar\'e bundle ${\mathfrak P}\rightarrow M\times N.$ Let us denote the two projections of the product $M\times N$ on $M$ and $N$ by $p_M$ and $p_N$ respectively, so that we have the following diagram
\begin{equation}
\xymatrix{
&{\mathfrak P}\ar[d]& \\
&\ar[dl]_{p_M}M\times N\ar[dr]^{p_N}& \\
M& &N.
}
\end{equation}
Then for an instanton bundle ${\cal E}\rightarrow M$ its Nahm transform is $p_{N*}\big({\mathfrak P}\otimes p^*_M {\cal E}\big).$
Thus the Poincar\'e bundle ${\mathfrak P}$ plays the role of the kernel of this transform.
For example, for the case of a caloron, flat connections on ${\mathbb R}^3\times S^1$ have the form $\eta=s dt_0,$ where $t_0$ is the coordinate parameterizing the $S^1$ factor. The space of such connections forms the dual circle $\hat{S}^1$ parameterized by $s.$ The Poincar\'e bundle over the product $({\mathbb R}^3\times S^1)\times\hat{S}^1$ has a natural connection $\eta$ with curvature ${\cal F}=ds\wedge dt_0,$ and it can be trivialized on either one of the two base components, making both pushforward operations $p_{N*}$ and $p_{M*}$ simple and well defined.
\begin{equation}\label{NahmMonDiag}
\xymatrix{
&{\mathfrak P}\ar[d]& &{\cal F}=d\eta=ds\wedge dt_0 \\
&\ar[dl]_{p_M}\big({\mathbb R}^3\times S^1\big)\times \hat{S}^1\ar[dr]^{p_N}& &\\
\stackrel{\vec{t},\ t_0}{{\mathbb R}^3\times S^1}& \eta=s dt_0 &\stackrel{s}{\hat{S}^1}.&
}
\end{equation}
For a curved manifold $M$ without flat connections, such as the Taub-NUT space, a generalization of the Nahm transform is less straightforward. In order to have a version of the Nahm transform in the diagram \eqref{NahmDiagram} below, one has to answer two questions: 1) What is the correct choice of the `dual' manifold $N$? and 2) What is the kernel ${\mathfrak{M}}$ generalizing the Poincar\'e bundle?
\begin{equation}\label{NahmDiagram}
\xymatrix{
&{\mathfrak{M}}\ar[d]& \\
&\ar[dl]_{p_M}(\mbox{\rm Taub-NUT})\times {\cal I} \ar[dr]^{p_N}& \\
\mbox{\rm Taub-NUT}& \eta= s \frac{d\tau+\omega}{2V} &{\cal I}.
}
\end{equation}
We propose that for the Taub-NUT space the appropriate choice of $N$ is the space ${\cal I}$ of self-dual Abelian connections on the Taub-NUT (or rather, in order to have a hyperk\"ahler space, the direct product of ${\cal I}$ and ${\mathbb R}^3$). In order to answer the second question, we digress to discuss a generalization of instantons in four dimensions to instantons on higher-dimensional spaces.
Instantons on higher dimensional hyperk\"ahler manifolds were defined in \cite{MCS} in the following manner. Consider the operator $\aleph=I\otimes I+J\otimes J+K\otimes K$ acting on two-forms. Due to the defining quaternionic identities it satisfies
\begin{equation}
\aleph^2=2\aleph+3,
\end{equation}
and so can only have eigenvalues $3$ or $-1,$ the roots of $x^2-2x-3=(x-3)(x+1).$
On a four-dimensional hyperk\"ahler manifold this operator is related to the Hodge star operation by $*=\frac{1}{2}(\aleph-1),$
thus on a general hyperk\"ahler manifold the equations
\begin{align}
\aleph {\cal F}&=3 {\cal F} &&\text{and} & \aleph {\cal F}&=-{\cal F}
\end{align}
respectively generalize the self-duality and anti-self-duality conditions to higher dimensions.
Before we proceed, let us observe that the complex structures $I, J,$ and $K$ act on the vierbein $e^{\hat{\mu}}$ of the Taub-NUT
\begin{align}
e^{\hat{0}}&=\frac{1}{2\sqrt{V}}(d\tau+\omega),& e^{\hat{j}}&=\frac{1}{2}\sqrt{V} dx^{\hat{j}},
\end{align}
by acting with the left multiplication on the quaternionic combination $e^{\hat{0}}+I e^{\hat{1}}+J e^{\hat{2}}+K e^{\hat{3}}.$
Just as the four-dimensional self-duality equations become Bogomolny equations under the reduction to three dimensions, we reduce an eight-dimensional self-duality condition $3 {\cal F}=\aleph {\cal F}$ to five dimensions producing the following system of equations on ${\cal I}\times$Taub-NUT:
\begin{align}
3 {\cal F}_{\hat{0}s}&=\hat{D}_1\Phi_1+\hat{D}_2\Phi_2+\hat{D}_3\Phi_3,\nonumber\\
3 {\cal F}_{\hat{1}s}&=-\hat{D}_0\Phi_1-\hat{D}_3\Phi_2+\hat{D}_2\Phi_3,\nonumber\\
\label{Eq:Monotone}
3 {\cal F}_{\hat{2}s}&=\hat{D}_3\Phi_1-\hat{D}_0\Phi_2-\hat{D}_1\Phi_3,\\
3 {\cal F}_{\hat{3}s}&=-\hat{D}_2\Phi_1+\hat{D}_1\Phi_2-\hat{D}_0\Phi_3,\nonumber\\
2{\cal F}_{\hat{\mu}\hat{\nu}}&=\epsilon_{\hat{\mu}\hat{\nu}\hat{\rho}\hat{\sigma}} {\cal F}_{\hat{\rho}\hat{\sigma}}.\nonumber
\end{align}
Here $\Phi_1, \Phi_2,\Phi_3$ are the components of the eight-dimensional connection in the reduced three directions of ${\mathbb R}^3.$ We used the curvature vierbein components ${\cal F}={\cal F}_{s\hat{\rho}} ds\wedge e^{\hat{\rho}}+{\cal F}_{\hat{\mu}\hat{\nu}} e^{\hat{\mu}}\wedge e^{\hat{\nu}}$ and $\hat{D}_0=2\sqrt{V}D_0$ and $\hat{D}_j=\frac{2}{\sqrt{V}}\big(D_j-\omega_j D_0\big),$ which appear in the covariant differential decomposition $D=d\tau D_0+dx^j D_j=e^{\hat{0}} \hat{D}_{\hat{0}}+e^{\hat{j}}\hat{D}_{\hat{j}}.$
Since Eqs.(\ref{Eq:Monotone}) emerge via a dimensional reduction of higher-dimensional self-duality equations, one might call an object satisfying these equations an {\em Instapole} or a {\em Monotone}\footnote{We hope someone will come up with a more poetic name for it.}.
In our case ${\cal F}=d\eta=d\left(s\frac{d\tau+\omega}{2V}\right)=ds\wedge a+s\, da=\frac{1}{\sqrt{V}}ds\wedge e^{\hat{0}}+s\, da,$ as dictated by the relation (\ref{NahmMonDiag}) between $\cal I$ and the Taub-NUT. We observe that
$\Phi_1=t_1=-\frac{1}{2} x_1, \Phi_2=t_2=-\frac{1}{2} x_2, \Phi_3=t_3=-\frac{1}{2} x_3$ augment ${\cal F}$ to produce a solution to the system of Eq.~(\ref{Eq:Monotone}).
This is exactly the solution defining the object generalizing the Poincar\'e bundle in the diagram (\ref{NahmDiagram}) that leads to the twisting that we used in Eq.(\ref{Twist}). It plays the role of the kernel in this generalization of the Nahm transform.
\section{Example of One Instanton}
Let us now focus on a single $SU(2)$ instanton on the Taub-NUT, i.e. a self-dual curvature configuration with $k_0=1$ and $m=0.$
For a single instanton the $T$'s in the Nahm data are Abelian and the Nahm equations are solved by
\begin{equation}
\vec{T}(s)=\begin{cases}
\vec{T}_1 &\text{for $-l/2<s<-\lambda$ or $\lambda<s<l/2$,}\\
\vec{T}_2 &\text{for $-\lambda<s<\lambda$.}
\end{cases}
\end{equation}
We interpret $\vec{x}=-2\vec{T}_1$ and $\vec{x}=-2\vec{T}_2$ as the locations of the instanton constituents. Let $\vec{z}_{1}=\vec{t}-\vec{T}_{1}$ and $\vec{z}_{2}=\vec{t}-\vec{T}_{2}$ denote the position relative to the two constituents and let $\vec{y}=\vec{T}_2-\vec{T}_1=\vec{z}_1-\vec{z}_2$ be the displacement between them. The $\tau$ coordinate of the instanton position is proportional to $T_0.$ Since the Taub-NUT metric is invariant with respect to shifts of $\tau,$ without loss of generality $T_0$ can be put to zero.
We also gauge away $t_0$ in favor of the phase of $b_\pm.$
Let the two-component spinors $Q_+$ and $Q_-$ be such that $Q_\pm Q_\pm^\dagger=y\pm{\backslash\!\!\!y}.$
Using the component expressions for the spinors we introduced earlier
\begin{equation}
b_-=
\left(
\begin{array}{r}
b^\dagger_{01}\\
-b_{10}
\end{array}
\right),\
b_+=
\left(
\begin{array}{r}
b^\dagger_{10}\\
b_{01}
\end{array}
\right),\
B_-=
\left(
\begin{array}{r}
B^\dagger_{10}\\
-B_{01}
\end{array}
\right),\
B_+=
\left(
\begin{array}{r}
B^\dagger_{01}\\
B_{10}
\end{array}
\right),
\end{equation}
and, since in this case all of the components are simply complex numbers, it is straightforward to verify that
\begin{equation}
b^\dagger_-B_-=B^\dagger_+b_+=e^{ i\tau/2}{\cal P},\
b^\dagger_+ B_+=B^\dagger_- b_-=e^{-i\tau/2}{\cal P},
\end{equation}
where
\begin{equation}\label{Eq:P}
{\cal P}=\sqrt{(T_1+t)^2-z_1^2}.
\end{equation}
The moment maps at $s=\pm l/2$ imply that
\begin{equation}
b_\pm b^\dagger_\pm=|\vec{t}\,|\pm{\backslash\!\!\!t}\ , \
B_\pm B^\dagger_\pm=|\vec{T}_1|\pm{\backslash\!\!\!\!\,T}_1,\
\end{equation}
and the vanishing of the moment maps at $s=\pm\lambda$ implies $Q_R=Q_+$ and $Q_L=Q_-.$
\subsection{Solving the Weyl Equation}\label{Sec:Solution}
On each interval $\mathfrak{D}_t=-\partial_s+{\backslash\!\!\!\!\,T}-{\backslash\!\!\!t}=-\partial_s-{\backslash\!\!\!z}$ and $\mathfrak{D}^\dagger_t=\partial_s+{\backslash\!\!\!\!\,T}-{\backslash\!\!\!t}=\partial_s-{\backslash\!\!\!z},$ with ${\backslash\!\!\!z}={\backslash\!\!\!z}_1$ or ${\backslash\!\!\!z}_2$ in accordance with Eq.~(\ref{eq:piece}).
It follows therefore that $\psi(s)$ has the form
\begin{equation}\label{eq:psi}
\psi(s)=\begin{cases}
e^{{\backslash\!\!\!z}_1(s+l/2)}\psi_L &\text{for $-l/2<s<-\lambda,$}\\
e^{{\backslash\!\!\!z}_2 s} \Pi &\text{for $-\lambda<s<\lambda,$}\\
e^{{\backslash\!\!\!z}_1(s-l/2)}\psi_R &\text{for $\lambda<s<l/2,$}
\end{cases}
\end{equation}
for some constant $\psi_L, \psi_R,$ and $\Pi.$ As we shall soon verify, the kernel of $\mathfrak{D}_t^\dagger$ is two-dimensional, so from now on we shall understand $\psi(s), \chi_{L}, \chi_R$ and $v$ to be two-column matrices, so that their first columns deliver one of the solutions and their second columns deliver the second, linearly independent, solution of $\mathfrak{D}_t^\dagger\mbox{\boldmath$\psi$}=0.$
Let $A_L=(B_-, -b_+)$ and $A_R=(-b_-, B_+)$
then the $\mathfrak{D}_t^\dagger\Psi=0$ conditions read
\begin{align}
\label{Lend}
\psi_R-A_R v&=0&&\text{at $s=\frac{l}{2}$}\\
\label{MidL}
e^{-{\backslash\!\!\!z}_1 (l/2-\lambda)}\psi_R-e^{{\backslash\!\!\!z}_2\lambda}\Pi+Q_R\chi_R&=0&&\text{at $s=\lambda,$}\\
\label{MidR}
e^{-{\backslash\!\!\!z}_2\lambda}\Pi-e^{{\backslash\!\!\!z}_1 (l/2-\lambda)}\psi_L+Q_L\chi_L&=0&&\text{at $s=-\lambda,$}\\
\label{Rend}
\psi_L+A_L v&=0&&\text{at $s=-\frac{l}{2}.$}
\end{align}
It is useful to note the following relations
\begin{alignat}{3}
\nonumber
A_L A^\dagger_L&=T_1+t+{\backslash\!\!\!z}_1, & A^\dagger_R A_L&=A_L A^\dagger_R=-e^{i\tau/2}{\cal P}, & A_L A_R^{-1}&=-\frac{e^{i\tau/2}}{{\cal P}}(T_1+t+{\backslash\!\!\!z}_1),\\
\nonumber
A_R A^\dagger_R&=T_1+t-{\backslash\!\!\!z}_1, & A^\dagger_L A_R&=A_R A^\dagger_L=-e^{-i\tau/2}{\cal P}, & A_R A_L^{-1}&=-\frac{e^{-i\tau/2}}{{\cal P}}(T_1+t-{\backslash\!\!\!z}_1),
\end{alignat}
and define $\mu_+$ and $\mu_-$ to be such that $\mu^2_+=A_L A_L^\dagger$ and $\mu^2_-=A_R A_R^\dagger,$ namely
\begin{equation}\label{eq:mu}
\mu_\pm=\sqrt{\frac{T_1+t+{\cal P}}{2}}\pm\sqrt{\frac{T_1+t-{\cal P}}{2}}\frac{{\backslash\!\!\!z}_1}{z_1},
\end{equation}
then $\mu_+\mu_-={\cal P}.$
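This last relation is immediate: since $\big({\backslash\!\!\!z}_1/z_1\big)^2=1$ and the scalar square roots commute with ${\backslash\!\!\!z}_1,$ the cross terms cancel and $\mu_+\mu_-=\frac{T_1+t+{\cal P}}{2}-\frac{T_1+t-{\cal P}}{2}={\cal P}.$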
We choose
\begin{equation}
\label{eq:v}
v=-e^{i\tau/4} A_L^\dagger\frac{\mu_-}{{\cal P}}=e^{-i\tau/4}A_R^\dagger\frac{\mu_+}{{\cal P}},
\end{equation}
so that now $\psi_L=e^{i\tau/4}\mu_+,\ \psi_R=e^{-i\tau/4}\mu_-.$ From the matching conditions Eqs.(\ref{MidL}) and (\ref{MidR}) at $s=\pm\lambda$ it follows that
\begin{equation}\label{eq:Pi}
\Pi=\frac{1}{2g}\left(e^{-i\tau/4} e^{\lambda{\backslash\!\!\!z}_2}(y-{\backslash\!\!\!y}) e^{-(l/2-\lambda){\backslash\!\!\!z}_1}\mu_-+
e^{i\tau/4} e^{-\lambda{\backslash\!\!\!z}_2}(y+{\backslash\!\!\!y}) e^{(l/2-\lambda){\backslash\!\!\!z}_1}\mu_+\right),
\end{equation}
where the function $g$ is given by
\begin{equation}\label{Eq:g}
g=y \cosh 2z_2\lambda-\frac{\vec{z}_2\cdot\vec{y}}{z_2}\sinh 2z_2\lambda=\frac{1}{2}\left(e^{2{\backslash\!\!\!z}_2\lambda}Q_-Q^\dagger_-+Q_+Q^\dagger_+e^{-2{\backslash\!\!\!z}_2\lambda}\right),
\end{equation}
and that
\begin{equation}\label{eq:chi}
\left(\begin{array}{c} \chi_R\\ \chi_L\end{array}\right)=\left(\begin{array}{c} Q_+^\dagger e^{-\lambda{\backslash\!\!\!z}_2}\\ Q_-^\dagger e^{\lambda{\backslash\!\!\!z}_2}\end{array}\right) \Upsilon,
\end{equation}
with
\begin{equation}\label{Eq:Upsilon}
\Upsilon=\frac{e^{i\tau/4}e^{\lambda{\backslash\!\!\!z}_2}e^{(\frac{l}{2}-\lambda){\backslash\!\!\!z}_1}\mu_+-e^{-i\tau/4}e^{-\lambda{\backslash\!\!\!z}_2}e^{-(\frac{l}{2}-\lambda){\backslash\!\!\!z}_1}\mu_{-}}{2g}.
\end{equation}
\subsection{Normalization}
Let us now check the orthogonality and the normalization of the solution $\Psi$ delivered by Eqs.(\ref{eq:psi}, \ref{eq:v}, \ref{eq:chi}). To simplify our notation let us introduce $\alpha=\frac{1}{4z_1}\ln\frac{T_1+t+z_1}{T_1+t-z_1},$ so that $\mu_-^2=T_1+t-{\backslash\!\!\!z}_1={\cal P}e^{-2\alpha{\backslash\!\!\!z}_1},$ and in particular that $\sinh2\alpha z_1=z_1/{\cal P}$ and $\cosh2\alpha z_1=(T_1+t)/{\cal P}.$
Introduce $\Delta=\frac{l}{2}-\lambda+\alpha$ and let
\begin{eqnarray}
\label{Eq:Cosh}
c_1&=&\cosh 2 \Delta z_1=\frac{(T_1+t)\cosh(l-2\lambda)z_1+z_1\sinh(l-2\lambda)z_1}{{\cal P}}, \\
\label{Eq:Sinh}
s_1&=&\sinh 2 \Delta z_1=\frac{z_1\cosh(l-2\lambda)z_1+(T_1+t)\sinh(l-2\lambda)z_1}{{\cal P}}, \\
\label{Eq:CoSinh}
c_2&=&\cosh 2\lambda z_2,\ \ s_2=\sinh 2\lambda z_2.
\end{eqnarray}
In these terms $g=y c_2+\frac{y^2+z_2^2-z_1^2}{2z_2} s_2.$ Then we find that $(\Psi,\Psi)=N^2 {\mathbb I}_{2\times 2}$ with the normalization factor
\begin{equation}
\label{Eq:Norm}
N^2=(\Psi, \Psi)=\frac{{\cal P}}{g}\Big( c_1 c_2+\frac{y}{z_1} s_1 c_2+\frac{y}{z_2} c_1 s_2+\frac{z_1^2+z_2^2+y^2}{2z_1 z_2} s_1 s_2-\cos \frac{\tau}{2}\Big).
\end{equation}
Let us also observe that both $\Pi$ and $\Upsilon$ appearing in the solution are scalar multiples of unitary matrices, since
\begin{align}
\Pi^\dagger\Pi&=\frac{{\cal P}}{g}\left(y c_1+\frac{\vec{y}\cdot\vec{z}_1}{z_1}s_1\right),&
\Upsilon^\dagger\Upsilon&=\frac{{\cal P}}{2 g^2}\left(c_1 c_2+s_1 s_2\frac{\vec{z}_2\cdot\vec{z}_1}{z_1 z_2}-\cos\frac{\tau}{2}\right).
\end{align}
\subsection{Connection}
We rewrite Eq.(\ref{eq:connection}) as
$A=A^{(0)} d\tau+A^{(3)}-\Phi\frac{1}{2V}(d\tau+\vec{\omega}\cdot d\vec{x}),$ where
\begin{align}
\Phi&=(\Psi_N,{\bf s}\Psi_N),& A^{(0)}&=i\left(\Psi_N,\frac{\partial}{\partial\tau}\Psi_N\right),& A^{(3)}&=i\left(\Psi_N,\frac{\partial}{\partial x_j}\Psi_N\right) d x^j,
\end{align}
for an orthonormalized solution $\Psi_N$ satisfying $(\Psi_N, \Psi_N)={\mathbb I}_{2\times 2}.$ Here we observe that for any $\Psi=N \Psi_N,$ with $N$ any nowhere vanishing scalar function, we have
\begin{multline}
(\Psi_N,\frac{\partial}{\partial x^\mu}\Psi_N)=\frac{1}{2}\left((\Psi_N,\frac{\partial}{\partial x^\mu}\Psi_N)-(\frac{\partial}{\partial x^\mu}\Psi_N,\Psi_N)\right)\\
=\frac{1}{2 N^2}\left((\Psi,\frac{\partial}{\partial x^\mu}\Psi)-(\frac{\partial}{\partial x^\mu}\Psi,\Psi)\right).
\end{multline}
Thus for the solution of Section \ref{Sec:Solution} which satisfies $(\Psi, \Psi)=N^2 {\mathbb I}_{2\times 2}$ we have
\begin{align}
\Phi&=\frac{1}{N^2}(\Psi,{\bf s}\Psi),\\
A^{(0)}&=\frac{i}{N^2}(\Psi, \frac{\partial}{\partial\tau}\Psi)=\frac{i}{2N^2}\left((\Psi, \frac{\partial}{\partial\tau}\Psi)-( \frac{\partial}{\partial\tau}\Psi, \Psi)\right),\\
A^{(3)}&=\frac{i}{N^2}(\Psi, d \Psi)=\frac{i}{2N^2}\Big(\Psi, (d-\overleftarrow{d}) \Psi\Big)\equiv\frac{i}{2N^2}\big((\Psi, d \Psi)-( d \Psi, \Psi)\big),
\end{align}
where we introduced the three-dimensional differential $d=dx^j\frac{\partial}{\partial x^j}=dt^j\frac{\partial}{\partial t^j}.$
Given our solution for $\Psi$ of Eqs.(\ref{eq:psi}, \ref{eq:v}, \ref{eq:chi}) one can apply the above formulas, performing some elementary integrals over $s$.
A straightforward if tedious calculation gives
\begin{multline}
N^2 A^{(0)}=
\frac{1}{4}\left(2t-c_1{\cal P}\right)\frac{{\backslash\!\!\!z}_1}{z_1^2}+\frac{1}{2{\cal P}}\left({\backslash\!\!\!\!\,T}_1-\frac{\vec{T}_1\cdot\vec{z}_1}{z_1}\frac{{\backslash\!\!\!z}_1}{z_1}\right)\\
\label{Eq:A0}
\quad +\frac{i}{2}\frac{s_2}{z_2}\Pi^\dagger(\partial_\tau-\overleftarrow{\partial}_\tau)\Pi+ig\Upsilon^\dagger(\partial_\tau-\overleftarrow{\partial}_\tau)\Upsilon,
\end{multline}
\begin{multline}
N^2\Phi=\left(1+2 l t-\Big(2\lambda c_1+\frac{s_1}{z_1}\Big){\cal P}\right)\frac{{\backslash\!\!\!z}_1}{2z_1^2}
+\frac{l}{{\cal P}}\left({\backslash\!\!\!\!\,T}_1-\frac{\vec{T}_1\cdot\vec{z}_1}{z_1}\frac{{\backslash\!\!\!z}_1}{z_1}\right)\\
+\left(2\lambda c_2-\frac{s_2}{z_2}\right)\Pi^\dagger\frac{{\backslash\!\!\!z}_2}{2z_2^2}\Pi+2\lambda\Upsilon^\dagger\left(\big\{(c_2-1)\vec{y}\cdot\vec{z}_2-s_2 y z_2\big\}\frac{{\backslash\!\!\!z}_2}{z_2^2}+{\backslash\!\!\!y}\right)\Upsilon,
\label{Eq:Phi}
\end{multline}
\begin{eqnarray}
N^2 A^{(3)}&=&\frac{i}{2}\left\{
\frac{z_1}{{\cal P}^2}\left[{\backslash\!\!\!\!\,T}_1, d\frac{{\backslash\!\!\!z}_1}{z_1}\right]+\frac{1}{{\cal P}^3}\left(\frac{T_1+t}{z_1}\frac{\vec{z}_1\cdot d\vec{t}}{z_1}-\frac{\vec{t}\cdot d\vec{t}}{t}\right)[{\backslash\!\!\!\!\,T}_1, {\backslash\!\!\!z}_1]\right.\nonumber\\
&&\qquad -\left(1+{\cal P}\left(l-2 \lambda-\frac{s_1}{z_1}\right)-
2 \frac{T_1(T_1+t-{\cal P})}{{\cal P}^2}
\right)\frac{[{\backslash\!\!\!z}_1, d{\backslash\!\!\!t}]}{2 z_1^2}\nonumber\\
&&\qquad +\Pi^\dagger\left(\frac{s_2}{z_2}d-\overleftarrow{d}\frac{s_2}{z_2}-\Big(2\lambda-\frac{s_2}{z_2}\Big)\frac{[{\backslash\!\!\!z}_2, d{\backslash\!\!\!t}]}{2 z_2^2}\right)\Pi\nonumber\\
&&\qquad +\Upsilon^\dagger\left(2g d-\overleftarrow{d} 2g-2\lambda\frac{\vec{z}_2\cdot d\vec{t}}{z_2^2}[{\backslash\!\!\!y}, {\backslash\!\!\!z}_2]\right.\nonumber\\
\label{Eq:A3}
&&\qquad \phantom{+\Upsilon^\dagger\left(\right.}\ \left.\left.
-s_2\left[{\backslash\!\!\!y}, d\frac{{\backslash\!\!\!z}_2}{z_2}\right]+(c_2-1)\frac{y}{z_2^2}[{\backslash\!\!\!z}_2, d{\backslash\!\!\!t}]\right)\Upsilon
\right\},
\end{eqnarray}
where the functions ${\cal P}$ and $g$ are defined in Eqs.~\eqref{Eq:P} and \eqref{Eq:g}, the hyperbolic functions $c_1, s_1, c_2$ and $s_2$ are in Eqs.~(\ref{Eq:Cosh}, \ref{Eq:Sinh}) and \eqref{Eq:CoSinh}, and $\Pi$ and $\Upsilon$ are given in Eqs.~\eqref{eq:Pi} and \eqref{Eq:Upsilon}. The normalization factor $N^2$ is read from Eq.~\eqref{Eq:Norm}.
\section{Conclusions}
We discussed topological charges of an instanton configuration on the Taub-NUT space with the maximal symmetry breaking by the monodromy at infinity. These are given by integer monopole charges and an integer instanton number. Solutions with vanishing instanton number correspond to singular monopoles \cite{Kronheimer:1985}. In their three-dimensional interpretation these have infinite energy, while as configurations on the Taub-NUT space they are smooth and have finite action. Thus one can regard the Taub-NUT background as a regularization. The simplest solution with zero instanton number was constructed in \cite{Cherkis:2007qa} and its physical properties were explored in \cite{Cherkis:2007jm}.
In this manuscript we focussed on the case with vanishing monopole charges. We presented the ADHM-Nahm data for this case. These data are conveniently encoded in a bow diagram, such as in Figure \ref{fig:Instanton} or Figure \ref{fig:UnInst}. We used the bow diagram description earlier in \cite{Cherkis:2008ip} to study the moduli spaces of instantons on the Taub-NUT. Here we give the details of the Nahm transform leading to the explicit instanton connection.
As an example illustrating this construction we find a single $SU(2)$ instanton on the Taub-NUT space in Eqs.(\ref{Eq:A0}, \ref{Eq:Phi}, \ref{Eq:A3}).
The bow diagram formalism we presented is not limited to the case of the Taub-NUT background. Rather, we chose to limit the scope of this paper to this case to simplify our presentation. In the forthcoming paper \cite{Cherkis:2010bn} we will give the bow-diagrammatic description of instantons with arbitrary charges on general ALF spaces of either $A_k$- or $D_k$-type.
\section*{Acknowledgments}
It is our pleasure to thank Tamas Hausel and Juan Maldacena for a number of useful conversations. We are grateful to Christopher Blair for careful reading of the manuscript and for identifying a number of misprints in its original version. This work is supported by Science Foundation Ireland Grant No. 06/RFP/MAT050 and by the European Commission FP6 program MRTN-CT-2004-005104.
\pagebreak
\section*{Appendix}
\subsection*{Metric Conventions and Moment Maps}
The Nahm data on an interval of length $l$ can be organized into a quaternion
$t=t_0 e_0+\vec{t}\cdot\vec{e}$ with the metric and the symplectic forms
\begin{equation}\nonumber
ds^2=g(\cdot,\cdot)=\frac{l}{2}{\rm tr}\, dt\, dt^\dagger,\
\omega_j(A,B):=g(A,e_j B)=-\frac{l}{4}{\rm tr} e_j(A B^\dagger-B A^\dagger),
\end{equation}
\begin{equation}
\omega_j=g(\cdot,e_j\cdot)=-\frac{l}{4}{\rm tr}\big(dt\wedge dt^\dagger e_j\big).
\end{equation}
With respect to $t\mapsto t+\epsilon$ the moment maps are
$\mu_j=-\frac{l}{4}{\rm tr} (t^\dagger-t)e_j=l t_j.$
For $q=(b_-,b_+)$ the metric is $ds^2=\frac{1}{2}{\rm tr} dq dq^\dagger=db_-^\dagger db_-=db_+^\dagger db_+.$ The moment map with respect to $q\mapsto q\, e^{\epsilon e_3}$ is $\mu_j=-\frac{1}{4}{\rm tr} q e_3 q^\dagger e_j.$
Coordinates on Taub-NUT are either $b_+, b_-$ or $\vec{t}, 2 l t_0=\tau\sim\tau+4\pi.$
The instanton moduli are $\vec{T}_1, \theta_1=(2l-4\lambda)T^L_0$ (or $B_+, B_-$) and $\vec{T}_2, \theta_2=4\lambda T^M_0.$
The relative coordinates are $\vec{z}_1=\vec{t}-\vec{T}_1, \vec{z}_2=\vec{t}-\vec{T}_2,$ and the relative position is $\vec{y}=\vec{T}_2-\vec{T}_1=\vec{z}_1-\vec{z}_2.$
We also collect the bifundamental data as
\begin{equation}
b_-=\left(\begin{array}{c}b_{01}^\dagger\\-b_{10}\end{array}\right),\
b_+=\left(\begin{array}{c}b_{10}^\dagger\\ b_{01}\end{array}\right),\
B_-=\left(\begin{array}{c}B_{10}^\dagger\\-B_{01}\end{array}\right),\
B_+=\left(\begin{array}{c}B_{01}^\dagger\\ B_{10}\end{array}\right),
\end{equation}
and the fundamental data as
\begin{equation}
Q_-=\left(\begin{array}{c}J_L^\dagger\\ I_L\end{array}\right),\
Q_+=\left(\begin{array}{c}J_R^\dagger\\ I_R\end{array}\right).
\end{equation}
\subsection*{Vanishing Moment Map Conditions}
For the Taub-NUT
\begin{equation}
\frac{d}{ds}{\backslash\!\!\!t}+{\rm Vec}\Bigg\{\delta\bigg(s+\frac{l}{2}\bigg)b_-b_-^\dagger+\delta\bigg(s-\frac{l}{2}\bigg)b_+b_+^\dagger\Bigg\}=0,
\end{equation}
and for the instanton Bow Data
\begin{multline}
\Big[\frac{d}{ds} + i T_0,{\backslash\!\!\!\!\,T}\Big]+{\rm Vec}\Bigg\{{\backslash\!\!\!\!\,T}\,{\backslash\!\!\!\!\,T}+\delta\bigg(s+\frac{l}{2}\bigg)B_-B_-^\dagger+\delta\bigg(s-\frac{l}{2}\bigg)B_+B_+^\dagger \\
+\delta(s+\lambda)Q_-Q_-^\dagger+\delta(s-\lambda)Q_+Q_+^\dagger\Bigg\}=0.
\end{multline}
These imply that $\vec{T}$ is constant on each interval and equals $\vec{T}_1$ for $|s|>\lambda$ and $\vec{T}_2$ for $|s|<\lambda.$ The conditions at $s=l/2, -l/2, \lambda, -\lambda$ are respectively
\begin{equation}
T_1+{\backslash\!\!\!\!\,T}_1=B_+B_+^\dagger,\ T_1-{\backslash\!\!\!\!\,T}_1=B_-B_-^\dagger,\
y+{\backslash\!\!\!y}=Q_+Q_+^\dagger,\ y-{\backslash\!\!\!y}=Q_-Q_-^\dagger.
\end{equation}
\subsection*{The Weyl Equation}
\begin{equation}
\Big(\frac{d}{ds}-{\backslash\!\!\!z}_{1,2}\Big)\psi(s)=0,
\end{equation}
\begin{align}
\psi(\lambda+)-\psi(\lambda-)&=-Q_+\chi_R,&
\psi(l/2)=(-b_-, B_+) v,\\
\psi(-\lambda+)-\psi(-\lambda-)&=-Q_-\chi_L,&
\psi(-l/2)=-(B_-, -b_+) v.
\end{align}
\subsection*{Solution of the Weyl Equation}
\begin{align}
v&=\frac{1}{{\cal P}}\left(\begin{array}{r}
-e^{i\frac{\tau}{4}} B_-^\dagger\mu_- \\
e^{-i\frac{\tau}{4}} B_+^\dagger\mu_+\end{array}\right), &
\left(\begin{array}{c}
\chi_R\\ \chi_L
\end{array}\right)&=\left(\begin{array}{l}
Q_+^\dagger e^{-\lambda{\backslash\!\!\!z}_2}\\
Q_-^\dagger\, e^{\lambda{\backslash\!\!\!z}_2}
\end{array}\right) \Upsilon,
\end{align}
\begin{equation}
\psi(s)=\begin{cases}
e^{-i\frac{\tau}{4}}e^{(s-\frac{l}{2}){\backslash\!\!\!z}_1}\mu_- &\text{for $\lambda<s<l/2,$}\\
e^{s{\backslash\!\!\!z}_2}\Pi &\text{for $-\lambda<s<\lambda$,}\\
e^{i\frac{\tau}{4}}e^{(s+\frac{l}{2}){\backslash\!\!\!z}_1}\mu_+ &\text{for $-l/2<s<-\lambda.$}
\end{cases}
\end{equation}
Here
\begin{align}
2g&=2\big(y\cosh2\lambda z_2-\frac{\vec{y}\cdot\vec{z}_2}{z_2}\sinh2\lambda z_2\big),& {\cal P}&=\sqrt{(T_1+t)^2-z_1^2},
\end{align}
\begin{equation}
\mu_\pm=\sqrt{\frac{T_1+t+{\cal P}}{2}}\pm\sqrt{\frac{T_1+t-{\cal P}}{2}}\frac{{\backslash\!\!\!z}_1}{z_1},
\end{equation}
and
\begin{align}
\Upsilon&=\frac{1}{2g}\left\{
e^{i\frac{\tau}{4}}e^{\lambda{\backslash\!\!\!z}_2}e^{{\backslash\!\!\!z}_1d}\mu_+
-e^{-i\frac{\tau}{4}}e^{-\lambda{\backslash\!\!\!z}_2}e^{-{\backslash\!\!\!z}_1d}\mu_-\right\},\\
\Pi&=\frac{1}{2g}\left(e^{i\frac{\tau}{4}}e^{-\lambda{\backslash\!\!\!z}_2}(y+{\backslash\!\!\!y})e^{{\backslash\!\!\!z}_1 d}\mu_++e^{-i\frac{\tau}{4}} e^{\lambda{\backslash\!\!\!z}_2}(y-{\backslash\!\!\!y})e^{-{\backslash\!\!\!z}_1 d} \mu_- \right).
\end{align}
\newpage
\bibliographystyle{unsrt}
\section{Bi-shortest path conjecture}
\label{s1}
Let $G = (V,E)$ be a finite directed graph (digraph)
with two distinct vertices $s, t \in V$.
We assume that
\begin{itemize}
\item[(j)] every vertex $v \in V \setminus \{t\}$ has an outgoing edge,
while $t$ has not;
\item[(jj)] $G$ contains a directed path from $s$ to $t$;
\item[(jjj)] every edge $e \in E$ belongs to such a path.
\end{itemize}
If (j) fails for $v$ we merge $v$ and $t$;
if (jjj) fails for $e$ we delete $e$ from $E$.
\medskip
Given a partition $V \setminus \{t\} = V_1 \cup V_2$
with non-empty $V_1$ and $V_2$,
assign an ordered pair of positive real numbers $(r_1(e), r_2(e))$ to every $e \in E$.
Fix $ i \in \{ 1, 2\}$ and a mapping $s_i$ that assigns
to each $v \in V_i$ an edge $e \in E$ going from $v$.
Delete all other edges going from $v$.
In the obtained digraph find a directed shortest path
(SP) from $s$ to $t$, assuming that
$r_{3-i}(e)$ are the lengths of the edges $e \in E$.
(One can use, for example, Dijkstra's SP algorithm.)
Doing so for $i = 1, 2$ and for every $s_i$ we obtain two sets of
directed $(s,t)$-paths.
We conjecture that these two sets intersect and
call this statement the {\em Bi-SP conjecture}.
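For small digraphs the conjecture can be tested by brute force. The sketch below is only an illustration resting on stated assumptions (the Python \texttt{networkx} library; all function and variable names are ours): it enumerates the mappings $s_i$, keeps one outgoing edge per $v \in V_i$, runs Dijkstra's SP algorithm with the lengths $r_{3-i}$, and checks whether the two resulting sets of paths intersect. The enumeration is exponential in $|V_i|$.
\begin{verbatim}
# Brute-force Bi-SP check on a small digraph (illustrative sketch).
# G: networkx.DiGraph; V1, V2: lists partitioning V \ {t};
# r1, r2: dicts mapping each edge to a positive length.
import itertools
import networkx as nx

def sp_set(G, V_i, r_other, s, t):
    # Shortest (s,t)-paths over all strategies s_i of the player
    # controlling V_i, with edge lengths r_{3-i}.
    V_i = list(V_i)
    paths = set()
    for pick in itertools.product(*[list(G.successors(v)) for v in V_i]):
        H = nx.DiGraph()
        for (u, w) in G.edges():
            if u not in V_i:
                H.add_edge(u, w, w=r_other[(u, w)])
        for v, w in zip(V_i, pick):       # keep one outgoing edge per v
            H.add_edge(v, w, w=r_other[(v, w)])
        try:
            p = nx.shortest_path(H, s, t, weight="w")   # Dijkstra
            paths.add(tuple(p))
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            pass                          # no (s,t)-path: choose nothing
    return paths

def bi_sp_holds(G, V1, V2, r1, r2, s, t):
    return bool(sp_set(G, V1, r2, s, t) & sp_set(G, V2, r1, s, t))
\end{verbatim}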
\medskip
Without loss of generality
(WLOG) we can assume that all $(s,t)$-paths
have pairwise different lengths.
\medskip
It may happen that some mappings $s_i$ leave no $(s,t)$-path;
in this case nothing is chosen.
Let us slightly modify the procedure
by choosing in this case some symbolic path $c$.
Then we obtain a weak version of the Bi-SP conjecture.
Indeed, if the obtained two sets of $(s,t)$-paths have only $c$ in common,
then the Bi-SP conjecture fails, but the weak Bi-SP one holds.
\medskip
WLOG, we can restrict ourselves to bipartite graphs
with parts $(V_1,V_2)$.
Indeed, if $E$ contains an edge $e = (u,w)$ such that both
$u,w \in V_i$, we subdivide $e$ by a vertex $v \in V_{3-i}$
into two edges $e' = (u,v)$ and $e'' = (v,w)$ choosing some lengths
$r_i(e') > 0$ and $r_i(e'') > 0$ such that
$r_i(e) = r_i(e') + r_i(e'')$ for $i = 1,2$.
\section{Finite $n$-person shortest path games}
\label{s2}
\subsection*{Players, positions, moves, and local costs}
Given a finite digraph $G =(V, E)$
satisfying assumption (j, jj, jjj) of Section~\ref{s1},
let us generalize case $n=2$ and consider an arbitrary integer $n \geq 2$.
Partition vertices into $n$ non-empty subsets
$V \setminus t = V_1 \cup \ldots \cup V_n$, assign an ordered $n$-tuple
of positive real numbers $r(e) = (r_1(e), \ldots, r_n(e))$ to each $e \in E$,
and consider the following interpretation:
$I = \{1, \ldots, n\}$ is a set of {\em players},
$V_i$ the set of {\em positions} controlled by player $i \in I$;
furthermore, $s = v_0$ and $t = v_t$ are respectively
the {\em initial} and {\em terminal} positions;
$e \in E$ the set of {\em legal moves}, and finally,
$r_i(e)$ is the cost of move $e \in E$ for player $i \in I$,
called the {\em local cost}.
\subsection*{Strategies, plays, and effective costs}
\label{ss2a}
A mapping $s_i$ that assigns a move $(v, v')$
to each position $v \in V_i$ is a strategy of player $i \in I$.
(We restrict ourselves and all players
to their pure stationary strategies;
no mixed or history-dependent ones are considered in this paper.)
Each {\em strategy profile} $s = (s_1, \ldots, s_n)$
uniquely defines a play $p(s)$, that is, a walk in $G$
that begins in the initial position $s = v_0$ and
goes in accordance with $s$ in every position that appears.
Obviously, $p(s)$ either terminates in $t = v_t$ or cycles;
respectively, it is called a terminal or a cyclic play.
Indeed, after $p(s)$ revisits a position, it will
repeat its previous moves, thus making a ``lasso''.
The effective cost of $p(s)$ for a player $i \in I$ is additive, that is,
$$r_i(p(s)) = \sum_{e \in p(s)} r_i(e) \;\;\; \text{if $p(s)$ is a terminal play;}$$
$$r_i(p(s))= +\infty \;\;\; \text{if $p(s)$ is a cyclic play.}$$
In other words, each player $i \in I$ pays the local cost $r_i(e)$
for every move $e \in p(s)$.
Since a cyclic play $p(s)$ never finishes and all local costs are positive,
each player pays $+ \infty$.
All players are minimizers.
Thus, a finite $n$-{\em person SP game} is defined.
We study Nash-solvability (NS) of these games.
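As a small computational illustration of these definitions (the data layout below is our assumption, not part of the model), the effective costs of a profile can be evaluated by tracing the play until it either terminates or reveals a lasso.
\begin{verbatim}
# Play tracing and effective costs (illustrative sketch).
# profile: dict mapping each non-terminal position to the chosen
# successor; r: dict with r[(u, w)][i] = r_i((u, w)).
import math

def effective_costs(profile, r, v0, t, n):
    costs = [0.0] * n
    v, seen = v0, set()
    while v != t:
        if v in seen:                 # position revisited: cyclic play
            return [math.inf] * n
        seen.add(v)
        w = profile[v]
        for i in range(n):
            costs[i] += r[(v, w)][i]
        v = w
    return costs                      # terminal play
\end{verbatim}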
\section{Nash equilibrium and Nash-solvability}
\label{s3}
Recall that a {\em strategy profile} $s = (s_1, \dots, s_n)$ is called
a {\em Nash equilibrium} (NE) if
$r_i(p(s')) \geq r_i(p(s))$ whenever $s'$ differs from $s$
only by the strategy of player $i$, that is,
$s_j = s'_j$ for all $j \neq i$.
In other words, no player $i \in I$ can make a profit
by changing his/her strategy provided all other players
keep their strategies unchanged.
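For small games the NE condition can be verified exhaustively. The sketch below reuses \texttt{effective\_costs} from the previous sketch and tests every unilateral deviation of every player; again, all names are illustrative.
\begin{verbatim}
# Brute-force NE test (illustrative sketch).
# parts[i]: list of the positions of player i; profile as above.
import itertools

def is_nash_equilibrium(G, parts, profile, r, v0, t):
    n = len(parts)
    base = effective_costs(profile, r, v0, t, n)
    for i, V_i in enumerate(parts):
        moves = [list(G.successors(v)) for v in V_i]
        for pick in itertools.product(*moves):
            dev = dict(profile)
            dev.update(zip(V_i, pick))
            if effective_costs(dev, r, v0, t, n)[i] < base[i]:
                return False          # profitable unilateral deviation
    return True
\end{verbatim}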
\medskip
The Bi-SP conjecture means exactly that all finite two-person SP games
(with positive local costs) are NS.
Indeed, a pair of strategies $s = (s_1, s_2)$
realizes a bi-shortest path in $G$ if and only if $s$
is a NE in the corresponding two-person SP game.
\medskip
However, a three-person SP game, even with positive local costs, may fail to be NS; see
\cite[Tables 2,3 and Figure 2]{GO14}.
\bigskip
Digraph $G = (V,E)$ is called {\em bidirected} if
each non-terminal move in it is reversible, that is,
$(u,w) \in E$ if and only if $(w,u) \in E$
unless $u = t$ or $w = t$.
We conjecture that every $n$-person SP game on a finite bidirected digraph is NS.
\section{Essential properties of cost functions}
\label{s4}
\subsection*{$k$-total costs and rewards}
SP games can be viewed as a very special class within the so-called
finite deterministic stochastic games
with perfect information with $k$-total effective reward \cite{BEGM17}.
(Negated costs are called payoffs or rewards.)
The limit mean payoff \cite{Gil57,LL69}, most common in the literature, and
the total reward \cite{TV98,BEGM18},
correspond to $k=0$ and $k=1$, respectively \cite{BEGM17}.
The family of $k$-total effective rewards is nested with respect to $k$, that is,
$k$-total rewards can be properly embedded
into $(k+1)$-total rewards \cite{BEGM17}.
Mostly, the two-person zero-sum case is studied in the literature.
Yet, all main concepts and definitions
can be naturally extended to the $n$-person case;
in particular, to the two-person but not necessarily zero-sum case.
The obtained games may have no NE already for $n=2$ and $k=0$; see \cite{Gur88}.
Since these reward families are nested in $k$, NS may fail for any $n \geq 2$ and $k \geq 0$.
Yet, NS becomes an open problem for $n = 2$ and $k = 1$,
provided we require that all local rewards are negative,
or in other words, that all local costs are positive \cite[Section 8]{BEGM17}.
This is an alternative view of the Bi-SP conjecture.
\subsection*{Positive costs and Gallai's Potential Transformation}
The latter requirement:
\begin{itemize}
\item[(i)] $\;\;\; r_i(e) > 0$ for each player $i \in I$ and directed edge $e \in E$
\end{itemize}
\noindent
can be replaced by a seemingly weaker
(but in fact, equivalent) one:
\begin{itemize}
\item[(ii)] $\;\;\; \sum_{e \in C} r_i(e) > 0$ for each player $i \in I$ and
directed cycle $C$ in $G$.
\end{itemize}
Implication (i) $\Rightarrow$ (ii) is obvious.
Conversely, if (ii) holds, one can enforce (i)
applying the following potential transformation \cite{Gal58}.
Choose an arbitrary mapping $x : V \rightarrow \RR$
and replace $r_i(e)$ by $r'_i(e) = r_i(e) + x(v) - x(v')$
for every $i \in I$ and $e = (v,v') \in E$.
Obviously, this transformation does not change the game, since
$r'(P) - r(P) = x(s) - x(t) = const$
for every directed $(s,t)$-path $P$.
Furthermore, $r'(C) = r(C)$ for every directed cycle $C$ in $G$, and
for each $r$ satisfying (ii)
there exists a potential $x$ such that (i) holds for $r'$ \cite{Gal58}.
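Computationally, such a potential can be obtained Johnson-style from shortest-path distances. The sketch below (assuming \texttt{networkx}) computes $x$ by Bellman--Ford from an auxiliary source; condition (ii) guarantees that the distances exist, and the resulting reduced costs are nonnegative. Applying the same procedure to $r_i - \varepsilon$ for a small $\varepsilon > 0$ that keeps all cycles positive yields strictly positive costs, as required by (i).
\begin{verbatim}
# Gallai potential transformation via Bellman-Ford (illustrative sketch).
import networkx as nx

def potential_transform(G, r):
    H = nx.DiGraph()
    for (u, w) in G.edges():
        H.add_edge(u, w, w=r[(u, w)])
    for v in G.nodes():
        H.add_edge("_src", v, w=0.0)      # auxiliary super-source
    x = nx.single_source_bellman_ford_path_length(H, "_src", weight="w")
    return {(u, w): r[(u, w)] + x[u] - x[w] for (u, w) in G.edges()}
\end{verbatim}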
\section{Subgame perfect NE-free shortest path games}
\label{s5}
A NE $s = (s_1, \ldots, s_n)$ in a finite $n$-person SP game
is called {\em uniform} if it is a NE with respect to
every initial position $s = v_0 \in V \setminus t$.
In the literature uniform NE (UNE) are frequently
referred to as {\em subgame perfect NE}.
By definition, any UNE is a NE, but not vice versa.
A large family of $n$-person UNE-free games
can be found in \cite[Section 3.3]{GN21A} for $n > 2$,
and even for $n=2$ in \cite[the last examples in Figures 1 and 3]{BEGM12}.
All these games have terminal payoffs, which are a special case of the additive ones.
Hence, these games can be viewed as a special case of the SP games.
Every NE-free game contains a UNE-free subgame \cite[Remark 3]{BGMOV18}.
Indeed, consider an arbitrary finite $n$-person NE-free SP game $\Gamma$
and eliminate the initial position $s = v_0$ from its graph $G$.
The obtained subgame $\Gamma'$ is UNE-free.
Indeed, assume for contradiction that $\Gamma'$ has a UNE $s = (s_1, \ldots, s_n)$.
Then, $\Gamma$ would also have a NE, which can be obtained by backward induction.
The player moving in the initial position $s = v_0$ chooses a move that minimizes his/her cost,
assuming that the UNE $s = (s_1, \ldots, s_n)$ is played in $\Gamma'$ by all players.
Clearly, $s$ extended by this move forms a NE in $\Gamma$, which is a contradiction.
Thus, searching for NE-free SP games,
one should begin with a UNE-free SP game and then
try to extend it with an acyclic prefix.
This was successfully realized in \cite{GO14,BGMOV18} for $n=3$.
However, for $n = 2$ all such attempts have failed.
\subsection*{Acknowledgement}
The authors were partially supported by RSF grant 20-11-20203.
\section{Introduction}
The $\gamma$-ray binaries are high-mass stellar systems whose spectral energy distribution
contains a significant and persistent non-thermal component, at energies above 1 MeV and up to the TeV domain. Only a handful of these
objects are currently known (Dubus 2013; Paredes et al. 2013). Among this scarce group, one finds a dominant presence
of luminous, emission line optical stars with Oe or Be spectral type. Their unseen compact companion can be
either a neutron star or a black hole. Here, we will broadly refer to these systems as Be/$\gamma$-ray binaries.
They can also be considered as a sub-class of the more numerous normal
Be/X-ray binaries, which contain more than 90 confirmed and suspected objects
(Reig 2011), but detected only up to keV energies.
At present there are three confirmed Be/$\gamma$-ray binaries -- LS~2883/PSR~B1259$-$63 (Aharonian et al. 2005),
LSI+61$^{\circ}$303 / 2CG~135+01 (Albert et al. 2009), and MWC~148/HESS~J0632+057 (Aharonian et al. 2007).
Some other binaries may be related to this group, such as
the system MWC~656/ AGL~J2241+4454 with GeV transient $\gamma$-ray emission
(Lucarelli et al. 2010; Casares et al. 2012).
The nature of the compact object is certainly known in PSR~B1259$-$63
(e.g. Chernyakova et al. 2015 and references therein).
It is a neutron star acting as a radio pulsar with a period of 47.76 ms (Shannon et al. 2014).
The neutron star has a mass $\sim 1.4$~$M_\odot$ and
is orbiting a O9.5Ve star with mass $M_1 \approx 30$~$M_\odot$
(Negueruela et al. 2011).
For MWC~656, Casares et al. (2014) have analyzed the radial velocity variability and
found convincing evidence for a black hole of 3.8 to 6.9 $M_\odot$
orbiting an B1.5-B2~IIIe primary with $M_1 \approx 10-16$~$M_\odot$.
More information on the compact object masses is needed to better
discriminate among different possible theoretical scenarios for high energy emission.
These mainly include the magnetospheric pulsar model and the microquasar jet
model where a black hole could be expected (see e.g. Paredes et al. 2013).
Other alternative scenarios could also play a role here, such as the
propeller regime in neutron stars proposed more
than three decades ago as a possible mechanism for TeV $\gamma$-ray emission (Wang \& Robertson 1985).
In this work, we estimate the range of masses for the compact objects in two Be/$\gamma$-ray binaries --
the systems LSI+61$^{\circ}$303\ and MWC~148.
\section{Mass of the compact object}
\label{calc}
In the following, we will assume that the inclination of the orbit $i_{orb}$ is approximately
equal to the inclination of the Be star equatorial plane with respect to the line of sight within a few degrees.
The plausibility of this assumption is briefly addressed in Sect.~\ref{D.1}.
In particular, we apply Kepler's third law:
\begin{equation}
P_{orb}^2 =\frac{4 \pi ^2 (a_1+a_2)^3}{G(M_1 + M_2)} \label{kepler}
\end{equation}
where $G$ is the gravitational constant,
$P_{orb}$ is the orbital period of the binary,
$a_1$ is the semi-major axis of the orbit of the primary,
$a_2$ is the semi-major axis of the orbit of the secondary,
$M_1$ is the mass of the primary,
and $M_2$ is the mass of the compact object.
For the primary star mass, we use the best information available.
In the LSI+61$^{\circ}$303\ case, an appropriate range of values was derived from
its spectral type and the latest calibrations
by Hohle et al. (2010) based on revised Hipparcos data.
In the MWC~148 case, the most accurate primary mass values come from
the comparison of its average optical spectrum with several grids of stellar models (Aragona et al. 2010).
Given a pair $(M_1, M_2)$ and the system orbital period, we compute the relative semi-major axis $a = a_1 + a_2$ using Eq. \ref{kepler}.
From published Doppler radial velocity observations, we also have an estimate of the projected semi-major axis $a_1 \sin{i_{orb}}$
of the optically visible primary star. This parameter can be de-projected using the assumed value of the orbital inclination.
Then, we can obtain the secondary semi-major axis as $a_2 = a - a_1$, and get an
estimate of the secondary star mass as
$M_2 = M_1 (a_1/a_2)$. While keeping $M_1$ fixed,
the procedure is iterated until the secondary mass converges within a dozen iterations.
The calculation is consequently repeated across the whole range of allowed primary mass values.
A consistency check of the procedure is that the resulting $M_2$ estimate has to be above the strict lower limit to the compact object
mass provided by the well-known concept of the mass function.
This is given observationally by the following combination of projected semi-major axis and orbital inclination:
\begin{equation}
f(M_2) = \frac{4 \pi^2 (a_1 \sin{i_{{orb}}})^3}{G P_{orb}^2}
\end{equation}
\subsection{LSI+61$^{\circ}$303}
\label{est.1}
From a radio survey of the galactic plane,
LSI+61$^{\circ}$303\ (V615~Cas) was first proposed by Gregory \& Taylor (1978)
as a $\gamma$-ray source in the $COS \; B$ satellite
catalogue (Swanenburg et al. 1981). It became a confirmed TeV source many years later (Albert et al. 2006).
A Bayesian analysis of radio observations gives the orbital period of the binary as $P_{orb}=26.4960 \pm 0.0028$~d (Gregory 2002).
The orbital eccentricity is $e \simeq 0.537$, obtained on the basis of the radial velocity measurements
of the primary (Casares et al. 2005; Aragona et al. 2009).
The inclination of the primary star Be disc in LSI+61$^{\circ}$303\ to the line of sight is probably $ i_{Be} \sim 70^{\circ}$ according to Zamanov et al. (2013).
Aragona et al. (2009) give $a_1 \sin i_{orb} = 8.64 \pm 0.52$~$R_\odot$.
For the primary, Grundstrom et al. (2007) suggested a B0V star.
A B0V star is expected to have on average $M_1 \approx 15.0 \pm 2.83 $~$M_\odot$\ (Hohle et al. 2010).
We calculated $M_2$ as described above for a few sets of parameters $a_1 \sin i_{orb} = 8.12$, 8.64, 9.16~$R_\odot$\ and
$i_{orb} =65^{\circ}$, $70^{\circ}$, $75^{\circ}$.
The specified lines are plotted in Fig.~\ref{f1.lsi}.
The red (dotted) lines are for $a_1 \sin i_{orb} = 8.12$~$R_\odot$, $i=65^{\circ}$
and $a_1 \sin i_{orb} = 9.16$~$R_\odot$, $i=65^{\circ}$.
The blue (dashed) lines are for $i=75^{\circ}$.
The black (solid) line represents $a_1 \sin i_{orb} = 8.64$~$R_\odot$, $i_{orb} =70^{\circ}$, corresponding to the average values
of separation and inclination.
Assuming a B0V star with mass in the range $12.17~M_\odot < M_1 < 17.83~M_\odot$, we estimate the mass of the compact object
in the range $1.27 < M_2 < 1.98$~$M_\odot$, with most likely value $M_2 \approx 1.6$~$M_\odot$.
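A minimal numerical sketch of the iteration of Sect.~\ref{calc} (SI constants; the code names are ours) reproduces this estimate:
\begin{verbatim}
# Fixed-point iteration for the companion mass (illustrative sketch).
# Inputs: solar masses, days, solar radii, degrees.
import math

G, MSUN, RSUN, DAY = 6.674e-11, 1.989e30, 6.957e8, 86400.0

def m2_estimate(M1, P_orb, a1_sini, i_orb, M2=1.0, tol=1e-8):
    a1 = a1_sini * RSUN / math.sin(math.radians(i_orb))
    P = P_orb * DAY
    for _ in range(100):
        a = (G * (M1 + M2) * MSUN * P**2 / (4 * math.pi**2)) ** (1 / 3)
        M2_new = M1 * a1 / (a - a1)   # from M1 a1 = M2 a2
        if abs(M2_new - M2) < tol:
            break
        M2 = M2_new
    return M2

# Average parameters of LSI +61 303:
print(m2_estimate(15.0, 26.496, 8.64, 70.0))   # ~1.6 solar masses
\end{verbatim}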
\begin{figure}
\vspace{8.7cm}
\special{psfile=LSI.1.eps hoffset=-15 voffset=-70 hscale=45 vscale=45 angle=0}
\caption[]{Mass of the compact object versus the mass of the primary for the $\gamma$-ray
binary LSI+61$^{\circ}$303.}
\label{f1.lsi}
\vspace{8.7cm}
\special{psfile=MWC148.1.eps hoffset=-15 voffset=-80 hscale=45 vscale=45 angle=0}
\caption[]{Mass of the compact object versus the mass of the primary for the $\gamma$-ray
binary MWC~148.}
\label{f1.MWC148}
\end{figure}
\subsection{MWC~148}
\label{est.2}
MWC~148 (HD 259440) was identified as the counterpart of the variable TeV source HESS J0632+057 (Aharonian et al. 2007;
Maier, for the VERITAS Collaboration, 2015).
We adopt $P_{orb} = 315 ^{+6}_{-4}$~d derived from the X-ray data (Aliu et al. 2014), which is consistent with the
previous result of $321 \pm 5$ days (Bongiorno et al. 2011).
For this object, Aragona et al. (2010) derived $M_1 = 13.2 - 19.0$~$M_\odot$\ from their spectral model fits.
From radial velocity measurements,
Casares et al. (2012) also estimated $a_1 \sin i_{orb} = 77.6 \pm 25.9$~$R_\odot$\ with an eccentricity of $e=0.83$.
The optical emission lines of MWC~148 are very similar to those of the bright well-known Be star $\gamma$~Cas (Zamanov et al. 2016).
All detected lines in the optical spectral range 4100 - 7500 \AA\ (Balmer lines, HeI lines and FeII lines)
have similar intensities, profiles, equivalent widths, and even a remarkable ``wine-bottle'' structure is apparent
in the H$\alpha$ line profile.
The emission lines are most sensitive to the footpoint density and inclination angle (Hummel 1994).
Based on such a strong resemblance, we consider that the Be star geometry in MWC~148
should be similar to that of $\gamma$~Cas,
for which the inclination is $43^{\circ} \pm 3^{\circ}$ (Poeckert \& Marlborough 1978; Clarke 1990).
Therefore, we will proceed with our estimates of the compact object mass
in MWC~148 by adopting different inclinations in the vicinity of this value.
We calculate $M_2$ for different sets of parameters
$a_1 \sin i_{orb} = 51.7$, 77.6, 103.5~$R_\odot$\ and $i_{orb} =40^{\circ}$, $45^{\circ}$, $50^{\circ}$.
The lines corresponding to these values are plotted in
Fig.~\ref{f1.MWC148}.
The red (dotted) lines are for $i=40^{\circ}$.
The blue (dashed) lines are for $i=50^{\circ}$.
The black solid line represents $a_1 \sin i_{orb} = 77.6$~$R_\odot$,
$i_{orb} =45^{\circ}$, corresponding to the average values of separation and inclination.
Assuming a primary star mass in the range $13.2 \le M_1 \le 19.0$~$M_\odot$, we estimate the mass of the compact object
in the range $2.1~M_\odot < M_2 < 7.3~M_\odot$, with most likely value $M_2 \approx 4.0$~$M_\odot$.
\section{Discussion}
\label{D.1}
The two $\gamma$-ray binaries discussed here have non-zero eccentricities, and a misalignment
between the spin axis of the primary component and the spin axis of the binary orbit is theoretically possible
(Brandt \& Podsiadlowski 1995;
Okazaki \& Hayasaki 2007;
Martin et al. 2014).
However, if a significant misalignment existed, then we would expect to see considerable variability in the
H$\alpha$ emission line at the time when the compact object crosses the circumstellar disc --
twice in each orbital period.
No such variability is detected in the observations of H$\alpha$ emission,
which means that any misalignment is less than the opening half-angle of the circumstellar disc.
The opening half-angles of Be star circumstellar discs are $\approx \! 10^{\circ}$ (Tycner et al. 2006; Cyr et al. 2015),
and in Sect.~\ref{calc}, we have
supposed that the orbital plane coincides with the equatorial plane of the Be star within a few degrees. Therefore, our main assumption
in this work appears to be justified at least for the two systems being considered.
Our derived mass ranges are also dependent on additional
assumptions on the physical properties of non-degenerate stars, especially the mass, according to the most recent data available.
Strict lower limits to the mass of the compact objects are set from the mass functions of the
different spectroscopic orbital solutions.
For LSI+61$^{\circ}$303\, the mass function is $f(M_2) = 0.0124 \pm 0.0022$~$M_\odot$ (Aragona et al. 2009),
for MWC~148 it is $f(M_2) = 0.06 ^{+0.15} _{-0.05}$ $M_\odot$ (Casares et al. 2012).
The resulting mass ranges for both objects are, of course, safely above these values and therefore
consistent with what is known from radial velocity observations.
The masses of neutron stars ($M_{NS}$) measured in binary stars are in the range 0.9~$M_\odot$\ $< M_{NS} <2.7$~$M_\odot$\
({\"O}zel et al. 2012). The compact stars with a mass between 1.4~$M_\odot$\ (Chandrasekhar limit) and 2.8~$M_\odot$\
should be neutron stars (e.g. Chamel et al. 2013).
The mass ranges calculated in Sect.\ref{calc}
point to the compact object in LSI+61$^{\circ}$303\ being most probably a neutron star with a mass $\approx 1.6$~$M_\odot$.
The spin period of the neutron star is expected to be $P_{spin} \approx 0.05 - 0.15$~s (Maraschi \& Treves 1981;
Zamanov 1995), although observational searches for pulsations
have not confirmed it yet
(Coe et al. 1982; Peracaula et al. 1997; McSwain et al. 2011; Ca{\~n}ellas et al. 2012).
There is a maximum mass a neutron star may have (e.g. Bombaci 1996).
Antoniadis et al. (2016), considering the mass function of neutron stars and
mass measurements in binary millisecond pulsars, establish that this maximum mass is about 2.15~$M_\odot$.
Compact stars with a mass above the Tolman-Oppenheimer-Volkoff limit should be black holes.
The measured masses of Galactic black holes are in the range 2.5-15~$M_\odot$\ ({\"O}zel et al. 2010).
Our estimate of the mass of the compact object in MWC~148 (Sect.~\ref{est.2}) indicates that
it is likely to be a black hole with mass $\sim 4.0$~$M_\odot$.
The calculated mass range for LSI+61$^{\circ}$303\ is narrower than that of MWC~148, mainly because the projected semi-major axis
$a_1 \sin i_{orb}$ is known with a considerably better accuracy,
6\% and 33\% for LSI+61$^{\circ}$303\ and MWC~148, respectively.
In most systems with an early-type optical companion, $\gamma-$rays are usually believed to arise from
the interaction between the stellar wind of the primary
and a pulsar magnetosphere instead of a black hole. However, an active debate is still open with both pulsar
and microquasar models in dispute (see e.g. Dubus 2013; Massi \& Torricelli-Giamponi 2016).
Both interpretations compete to explain not only the $\gamma$-ray emission, but also the changing milli-arcsecond radio structures observed
with interferometric techniques. If future multiwavelength
observations confirm the black hole nature of the companion star in MWC~148 proposed here, this
would have important consequences for our general understanding of $\gamma$-ray binaries.
The $\gamma$-ray binary class could then be a multifaceted phenomenon with very different physical scenarios
coexisting in different systems.
\section{Conclusions}
From the above considerations, it appears that:
(i) the compact object in LSI+61$^{\circ}$303\ is most probably a neutron star with mass $\sim 1.6$~$M_\odot$,
(ii) the compact object in MWC~148 is likely to be a black hole with a mass $\sim 4.0$~$M_\odot$.
The proposed different natures of the compact objects in these two systems suggest that different
physical scenarios, accounting for very high energy emission in binary systems, can actually
take place in real systems.
\acknowledgements
This work was partly supported by grant AYA2013-47447-C3-3-P from the Spanish Ministerio
de Econom\'{\i}a y Competitividad (MINECO),
and by the Consejer\'{\i}a de Econom\'{\i}a, Innovaci\'on,
Ciencia y Empleo of Junta de Andaluc\'{\i}a under research group FQM-322,
as well as FEDER funds.
\section{Introduction}
The concept of passivity is a foundation of circuit theory \cite{anderson2006}. It led to the generalized
concept of dissipativity \cite{willems1972}, \cite{willems1972b}, which has become a foundation of
nonlinear system theory \cite{hill1980,schaft2010}.
Yet the applications of nonlinear system theory have been dominated by mechanical
and electro-mechanical systems \cite{brogliato2007}, \cite{Desoer2009}, \cite{ortega1998}, \cite{sepulchre1997},
with significantly less attention to nonlinear circuits \cite{brayton1964,Camlibel2002}.
Starting with the seminal work of Chua \cite{chua1980} and the textbook of Chua and Desoer \cite{chua1987}, the
research on nonlinear circuits has somewhat diverged from the research on nonlinear dissipative
systems. The emphasis in nonlinear circuit theory has been on non-equilibrium behaviors
whereas the focus of dissipativity theory is an interconnection framework for systems that converge
to equilibrium.
Negative resistance devices are the essence of non-equilibrium behaviors such as
switches \cite{chen2009}, \cite{goto1960}, \cite{kennedy1991}, nonlinear
oscillations \cite{hu1986}, \cite{li2000}, or chaotic behavior \cite{kennedy1993}, \cite{saito1995}.
In contrast, dissipativity theory is a stability theory for physical systems that only dissipate energy and that relax to equilibrium
when disconnected from an external source of energy.
The present paper is a step towards generalizing passivity theory to the analysis of negative resistance circuits. In the spirit of passivity theory, we seek to analyze nonlinear circuits through
dissipation inequalities that are preserved by interconnection.
The two basic elements of dissipativity theory are the storage function and the supply function.
A dissipative system obeys a dissipation inequality, which expresses that the rate of change
of the storage does not exceed the supply. The physical interpretation is that the storage is
a measure of the internal energy, whereas the integral of the supply is a measure of the supplied energy.
For stability analysis purposes, the storage becomes a Lyapunov function.
The approach in this paper is based on two modifications of the basic theory. First, the analysis is in terms
of {\it incremental} variables, that is, differences of voltages and currents rather than voltages and currents.
Incremental analysis is classical in nonlinear circuit theory. Starting with the seminal work of \cite{lohmiller1998}, incremental
analysis has also been increasingly used in nonlinear stability theory \cite{angeli2002},
\cite{forni2014b},
and in nonlinear dissipativity theory \cite{forni2013}, \cite{proskurnikov2015}, \cite{stan2007}, \cite{schaft2013}.
Second, we allow for
dissipation inequalities that combine {\it signed} storage functions and {\it signed} supply rates.
Signed storage functions have the interpretation of a difference of energy stored in different storage
elements whereas signed supply rates account for ports that can deliver rather than absorb energy.
For analysis purposes, the interconnection theory developed in the present paper makes contact with the
dominance theory recently proposed in \cite{forni2017}, \cite{Forni2017b}. Signed Lyapunov functions
with a restricted number of negative terms are used to prove convergence to low-dimensional
dynamics that dominate the asymptotic behavior. A one-dimensional dominant behavior is sufficient
to model bistable switches whereas a two-dimensional dominant behavior is sufficient to model
nonlinear oscillators. Combined with the interconnection theory of this paper, dominance theory
opens the way to analysis of nonlinear switches and nonlinear oscillators in large nonlinear circuits.
We deliberately restrict the scope of the present paper to nonlinear circuits with negative resistance
to facilitate a concrete interpretation of the results. Not surprisingly,
the concepts are not restricted to electrical circuits and have a more general interpretation
in the general framework of dissipativity theory. For concreteness,
the entire paper is restricted to the passivity supply, an inner product between currents and voltages,
with the convenient interpretation of electrical power.
The paper is organized as follows. Section \ref{section:motivation} deals with the dissipation properties
of negative resistance devices and Section \ref{section:differential} extends dominance theory
in an incremental framework that is suitable for the analysis of circuits with piecewise linear characteristics.
In Section \ref{section:circuits:lossless:lure} we analyze basic electrical
switches and oscillators with one or two storage elements, whereas Section \ref{section:circuits:connection} covers the
design of coupling networks that allows us to interconnect circuits with different signatures in the supply rates.
{\small \textbf{Preamble.}
The circuits studied in this paper are built from interconnections of \emph{linear passive}
elements, such as capacitors and inductors, and \emph{nonlinear active} resistors. In concrete,
the time evolution of the family of circuits studied here is described by the state-space model
\begin{equation}
\Sigma: \begin{cases}
\dot{x} = f(x) + B u \quad x(0) = x_{0}
\\
y = C x + D u
\end{cases}
\label{eq:circuit:ss}
\end{equation}
where $x \in \RE^{n}$ is the state of the system and $u, y \in \RE^{m}$
are the so-called manifest variables.
For electrical circuits, the manifest variables are conjugated in terms of voltages
$v$, and currents $i$,
that is, the inner product $u^{\top} y$ has units of power.
The map $f: \RE^{n} \to \RE^{n}$ is Lipschitz continuous and models interactions between linear storage elements
and nonlinear resistors.
Moreover, the matrices $B$, $C$, and $D$ are of the appropriate dimensions and such that
the system is well-posed.
Henceforth, every circuit in this paper is assumed to be of the form \eqref{eq:circuit:ss}.
In what follows we will adopt a \emph{differential} (or incremental) approach, that is,
we will study circuit properties by looking at the difference between trajectories. For simplicity, we
denote the difference between any two generic signals $w_{1}, w_{2}$ as
$\Delta {w} := w_{1} - w_{2}$. In this way,
the mismatches between any two states/currents/voltages are denoted as
$\Delta {x}$, $\Delta {i}$ and $\Delta {v}$ respectively.
Finally, we will use symmetric matrices $P \in \RE^{n \times n}$ constrained
to have inertia $(p, 0, n-p)$, that is, with $p$ negative eigenvalues and $n-p$ positive
eigenvalues.}
\section{Signed supply rates for nonlinear resistors}
\label{section:motivation}
The nonlinear element shown in Figure \ref{fig:tunnelDiode} is a fundamental element
of nonlinear circuits. The voltage range where the nonlinear characteristic has a negative slope models
an element that can deliver energy rather than dissipating energy. Such an element is called
{\it active} in contrast to {\it passive} elements that can only absorb energy.
We follow the common terminology of {\it negative resistance} device \cite{chua1983}, \cite{kaplan1968}, with the usual caveat
that {\it negative} refers to the {\it increment }$\Delta {v}$ rather than to the value of the voltage $v$.
A more precise (but also heavier) terminology would be {\it negative incremental (or differential)}
resistance. The analysis in this paper will be exclusively in terms of {\it incremental} quantities,
which is common practice in nonlinear circuit theory.
\begin{figure}[htpb]
\centering
\includegraphics{tunnelDiode.eps} \quad
\raisebox{-8.0ex}{\begin{tikzpicture}
\begin{axis}[
xtick = {-1.78, -0.7, 0.7},
xticklabels = {0, $\overline{v}$, $\underline{v}$},
ytick = {-1.7, -0.7, 0.7},
yticklabels = {0, $\underline{i}$, $\overline{i}$},
x grid style={},
xlabel={$v \, [V]$},
xmajorgrids,
xmin=-2.5, xmax=2.5,
ylabel={$i \, [A]$},
ymajorgrids,
ymin=-2.5, ymax=2.5,
scale=0.5
]
\addplot [thick, black, forget plot] table [x=Y1, y=Y2, col sep=comma]{vi_nr_smooth.csv};
\addplot [black, no markers] coordinates {
(-1.44, -0.7)
(-1.44, -1.25)
(-1.66, -1.25)
};
\addplot [black, no markers] coordinates {
(-0.17, 0.26)
(0.17, 0.26)
(0.17, -0.28)
};
\node[] at (axis cs: -1.1, -1.2, 0.16) {$G^{d}$};
\node[] at (axis cs: 0.4, 0.4, 0.16) {$-G^{g}$};
\end{axis}
\end{tikzpicture}
}
\caption{Slope-bounded voltage-current characteristic of a tunnel diode. Tunnel diodes are
(incrementally) negative resistance devices. The region of
negative slope is called the \emph{active} region.}
\label{fig:tunnelDiode}
\end{figure}
We are motivated by the property that this nonlinear element satisfies the two inequalities
\begin{subequations}
\begin{align}
0 & \leq \Delta {i} \Delta {v} + G^{g} (\Delta {v})^{2}
\label{eq:tunnel:supply:1}
\\
0 & \leq -\Delta {i} \Delta {v} + G^{d} (\Delta {v})^{2}
\label{eq:tunnel:supply:2}
\end{align}
\label{eq:tunnel:supply}
\end{subequations}
where $G^{d} > 0$ and $-G^{g} < 0$ represent, respectively, the maximum positive slope and negative slope of the
voltage-current characteristic of Figure \ref{fig:tunnelDiode}. Both inequalities have an obvious energetic interpretation: the first inequality
expresses the shortage of passivity of the element: the element becomes passive when connected in parallel with a resistor
of resistance less than $1/G^g$. The second inequality expresses the shortage of anti-passivity of the element: the element
becomes purely a source of energy when connected to a negative resistance larger than $-1/G^d$.
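As a quick numerical illustration, the two inequalities \eqref{eq:tunnel:supply} can be checked pointwise; the cubic characteristic and the bounds below are assumptions chosen for the example, not tunnel-diode data.
\begin{verbatim}
# Pointwise check of the sector inequalities (illustrative sketch):
# g(v) = v^3 - v on [-1, 1] has slopes in [-1, 2], i.e. G^g = 1, G^d = 2.
import numpy as np

g = lambda v: v**3 - v
v = np.linspace(-1.0, 1.0, 401)
dv = v[:, None] - v[None, :]          # all pairwise increments
di = g(v)[:, None] - g(v)[None, :]
print(np.all(di * dv + 1.0 * dv**2 >= -1e-12))   # shortage of passivity
print(np.all(-di * dv + 2.0 * dv**2 >= -1e-12))  # shortage of anti-passivity
\end{verbatim}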
In the language of dissipativity theory \cite{willems1972}, both inequalities are dissipation inequalities of the form $ \sigma(\Delta {i}, \Delta {v}) \ge 0$
for the family of quadratic supply rates
\begin{equation}
\label{eq:supply:1}
\sigma(\Delta {i}, \Delta {v}) =
\begin{bmatrix}
\Delta {i} \\ \Delta {v}
\end{bmatrix}^{\top}
\begin{bmatrix}
\mathcal{Q} & \mathcal{I}
\\
\mathcal{I} & \mathcal{R}
\end{bmatrix}
\begin{bmatrix}
\Delta {i} \\ \Delta {v}
\end{bmatrix}
\end{equation}
where the signature matrix $\mathcal{I} \in \RE^{m \times m}$ is a diagonal matrix
with $\pm 1$ in the main diagonal
$\mathcal{I} = \diag [ \pm 1, \pm 1, \dots, \pm 1 ]$,
and $\mathcal{Q} \in \RE^{m \times m}$,
$\mathcal{R} \in \RE^{m \times m}$ are symmetric matrices. In the special case $\mathcal{I}=I$, this family
of supply rates characterizes incrementally passive elements with an excess or a shortage
of passivity in the external variables \cite{sepulchre1997}. When $\mathcal{Q} = 0$,
the dissipativity property $\sigma(\Delta {i}, \Delta {v}) \ge 0$ is also equivalent
to the monotonicity of the voltage-current characteristic $i = g(v)$ \cite{bauschke2011}.
The map $g$ is called
strongly monotone for $\mathcal{R} >0$, hypomonotone for $\mathcal{R} < 0$ and monotone
for $\mathcal{R} = 0$.
We call (\ref{eq:supply:1}) a {\it signed} passivity supply
rate to stress that the only difference with respect to the conventional passivity supply is the signature
matrix $\mathcal{I}$ generalizing the conventional identity matrix $I$.
The element in Figure \ref{fig:tunnelDiode} is called a voltage-controlled resistor,
Figure \ref{fig:nr:vc:cc} (left). Namely, the current flowing through a voltage-controlled resistor
is a singled-valued function of the voltage across its terminals: $i = g(v)$. The nonlinear resistor is passive
when the function $g: \RE \to \RE$ is monotone increasing, otherwise it is active.
It follows from \eqref{eq:tunnel:supply} that whenever $G^{d} \neq G^{g}$, a voltage-controlled resistor fulfills
\begin{equation}
\label{eq:supply:nr}
0 \leq
\begin{bmatrix}
\Delta {i} \\ \Delta {v}
\end{bmatrix}^{\top}
\begin{bmatrix}
\mathcal{Q} & \mathcal{I}
\\
\mathcal{I} & \mathcal{R}
\end{bmatrix}
\begin{bmatrix}
\Delta {i} \\ \Delta {v}
\end{bmatrix}
\end{equation}
where $\mathcal{I} = \sign(G^{d} - G^{g})$, $\mathcal{Q} = -\frac{2}{\vert G^{d} - G^{g} \vert}$
and $\mathcal{R} = \frac{2 G^{g} G^{d}}{\vert G^{d} - G^{g} \vert}$.
The dual element is the current-controlled resistor defined by
a singled-valued function of its flowing current: $v = r(i)$.
An active current-controlled resistor satisfies the sector condition
\begin{equation}
\label{eq:sector:ccnr}
-R^{g} (\Delta {i})^{2} \leq \Delta {i} \Delta {v} \leq R^{d} (\Delta {i})^{2}
\end{equation}
Equivalently, a current-controlled resistor satisfies \eqref{eq:supply:nr} with
$\mathcal{I} = \sign(R^{d} - R^{g})$, $\mathcal{Q} = \frac{2 R^{g} R^{d}}{ \vert R^{d} - R^{g} \vert}$
and $\mathcal{R} = - \frac{2}{ \vert R^{d} - R^{g} \vert}$.
Both types of controlled resistors appear naturally in devices such as tunnel diodes, DIAC's or neon lamps.
Additionally, they can be built from off-the-shelf components like transistors and operational amplifiers
\cite{chua1983}, \cite{kaplan1968}.
\begin{figure}[htpb]
\centering
\includegraphics{nr_vc_cc.eps}
\caption{Voltage-controlled resistor (left) and current-controlled resistor (right). The functions
$g$ and $r$ are assumed single-valued and Lipschitz continuous. If $g$ or $r$ are monotone increasing
then the resistor is passive, otherwise it is active.}
\label{fig:nr:vc:cc}
\end{figure}
Describing negative resistors in terms of dissipation inequalities opens the way to the
use of dissipativity theory to characterize circuit interconnections.
As an illustration, consider the parallel interconnection
of a voltage-controlled negative resistance element with a capacitor (Figure \ref{fig:basic:sw:vc}, left).
Let $i^{c}, v^{c}$ and $i^{r}, v^{r}$ be the currents and voltages associated to the capacitor and the
controlled resistor, respectively.
The capacitor is a classical lossless element that satisfies the power-preserving equality
\begin{equation}
\label{eq:supply:capacitor}
\frac{d}{dt} {C \frac {(\Delta v^{c})^2}{2}} = \Delta {v^{c}} \Delta {i^{c}}
\end{equation}
In the language of dissipativity theory, the quantity on the left-hand side is the time-derivative of the
{\it storage} $C \frac{(\Delta v^{c})^2}{2}$.
The negative resistance element satisfies $-\Delta {v^{r}} \Delta {i^{r}} + G^{d} (\Delta v^{r})^{2} \ge 0$.
The parallel interconnection defined by $v^{cc}=v^{c}=v^{r}$ and $i^{cc}=i^{c}+i^{r}$~\footnote{The superindices in the variables
$i^{cc}$ and $v^{cc}$ indicate that the port under consideration is current-driven. In a similar way, $i^{vc}$ and
$v^{vc}$ will denote the variables associated to a voltage-driven port.} satisfies the dissipation (in)equality
\begin{equation}
-\frac{d}{dt} {C \frac {(\Delta {v}^{cc})^{2}}{2}} \le -\Delta v^{cc} \Delta i^{cc} + G^{d} (\Delta {v}^{cc})^{2}
\label{eq:sw1:ineq:dissipation}
\end{equation}
The quantity that appears on the left-hand side is the time-derivative of a {\it negative} storage.
More generally, the storage functions in this paper will be quadratic forms defined by a symmetric
matrix $P=P^T$ with $p$ negative eigenvalues (and $n-p$ positive eigenvalues). Such {\it signed} storage
functions generalize the conventional {\it positive definite} storages of passivity theory. Positive definite
storages are natural candidates for the stability analysis of closed equilibrium systems.
In its incremental form, stability analysis appears in the literature under different names,
including {\it contraction} theory \cite{lohmiller1998}, {\it incremental} stability analysis \cite{angeli2002},
or differential Lyapunov analysis \cite{forni2014b}.
{\it Signed} storages generalize this stability analysis for non-equilibrium behaviors characterized by a low-dimensional asymptotic behavior.
This generalization is the topic of dominance analysis, reviewed in the next section.
\begin{figure}[htpb]
\centering
\includegraphics[trim={0.3cm, 0.35cm, 0.35cm, 0.35cm}, clip]{basic_switch_vc_cc.eps}
\caption{Basic prototype circuits of a current-driven (left) and a voltage-driven (right)
$1$-passive circuit. The resistors $R_{vc}$ and $R_{cc}$ are voltage-controlled and current-controlled
resistors respectively.}
\label{fig:basic:sw:vc}
\end{figure}
\section{Differential dissipativity}
\label{section:differential}
\subsection{Dominant systems}
\label{section:dominance}
Dominance theory extends stability analysis to non-equilibrium behaviors. The approach
is based on the intuitive idea that the long run behavior of the system
is dictated by low-dimensional dynamics, identified through the study of the system
linearization \cite{forni2014b}, \cite{forni2017}, \cite{Forni2017b}.
In what follows we adapt the differential approach of \cite{Forni2017b} to
an incremental setting.
\begin{defn}
Let $f: \RE^{n} \setto \RE^{n}$ be a Lipschitz continuous map. A system of the form
%
\begin{equation}
\dot{x} \in f(x), \quad x \in \RE^{n},
\label{eq:inclusion}
\end{equation}
%
is $p$-dominant with rate $\lambda \geq 0$ if there exists a matrix
$P = P^{\top} \in \RE^{n \times n}$ with inertia $(p, 0, n-p)$ such that
%
\begin{equation}
\begin{bmatrix}
\Delta {\dot{x}} \\ \Delta {x}
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & P
\\
P & 2 \lambda P + \varepsilon I
\end{bmatrix}
\begin{bmatrix}
\Delta {\dot{x}} \\ \Delta {x}
\end{bmatrix}
\leq 0.
\label{eq:dominance}
\end{equation}
The property is strict if $\varepsilon > 0$.
\label{def:dominance}
\end{defn}
When $P$ is positive definite, \eqref{eq:dominance}
becomes the incremental analogue of the classical Lyapunov inequality, meaning that any two trajectories
converge to each other with decay rate at least $\lambda \geq 0$, \cite{Boyd1994}.
When $f$ is a differentiable map, \eqref{eq:dominance}
reduces to the simple matrix inequality
\begin{equation}
\frac{\partial f(x)}{\partial x}^{\top} P + P \frac{\partial f(x)}{\partial x}
+ 2 \lambda P \leq -\varepsilon I,
\label{eq:dominance:smooth}
\end{equation}
which provides a basic test for dominance, \cite{forni2017}, \cite{Forni2017b}.
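Numerically, for a candidate pair $(P, \lambda)$ the inequality \eqref{eq:dominance:smooth} can be checked on a grid of states by an eigenvalue test; the sketch below (illustrative names) assumes the Jacobian $\partial f / \partial x$ is available.
\begin{verbatim}
# Eigenvalue test of the dominance inequality (illustrative sketch).
import numpy as np

def check_dominance(jac, P, lam, samples, eps=1e-9):
    for x in samples:
        J = jac(x)
        M = J.T @ P + P @ J + 2.0 * lam * P
        if np.max(np.linalg.eigvalsh(M)) > -eps:
            return False              # inequality fails at this state
    return True
\end{verbatim}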
\begin{thm}
Let $f: \RE^{n} \to \RE^{n}$ be a differentiable map. The closed system
\eqref{eq:inclusion} is $p$-dominant if and only if there exists a matrix $P =
P^{\top}$
with inertia $(p, 0, n-p)$ such that \eqref{eq:dominance:smooth} holds.
\label{thm:dominance:smooth}
\end{thm}
\begin{pf}
First assume that \eqref{eq:inclusion} is $p$-dominant. Expanding the left-hand side of
\eqref{eq:dominance} and dividing by $\Vert \Delta {x} \Vert^{2} \neq 0$ yields,
%
\begin{displaymath}
\frac{\Delta {f}^{\top} P \Delta {x} +
\Delta {x}^{\top} P \Delta {f}
+ 2 \lambda \Delta {x}^{\top} P \Delta {x} +
\varepsilon \Delta {x}^{\top} \Delta {x}}{\Vert \Delta {x} \Vert^{2}} \leq 0.
\end{displaymath}
%
By letting $\delta_{x} = \lim_{\Delta {x} \to 0} \frac{\Delta {x}}{\Vert \Delta {x} \Vert}$ we
arrive at \eqref{eq:dominance:smooth}.
For the converse statement, let $x (\alpha) = \alpha x_{1} + (1 - \alpha) x_{2}$ and
let $\phi:\RE \to \RE$ be such that
%
\begin{multline*}
\phi(\alpha) = 2 \left( f(x(\alpha)) - f(x_{2}) + \lambda (
x(\alpha) - x_{2}) \right)^{\top} P \Delta {x}
\\
+ \varepsilon (x(\alpha) - x_{2})^{\top}
\Delta {x}
\end{multline*}
%
where $\Delta x = x_{1} - x_{2}$. Hence,
%
\begin{multline*}
\frac{d \phi(\alpha)}{d \alpha} = \Delta {x}^{\top} \left( \frac{\partial f(x)}{\partial x}^{\top}
P + P \frac{\partial f(x)}{\partial x} \right.
\\
\left. \phantom{\frac{1}{2}} + 2 \lambda
P + \varepsilon I \right) \Delta {x}
\leq 0.
\end{multline*}
The above inequality implies that $\phi$ is a non-increasing function. Therefore,
$\phi(1) \leq \phi(0) = 0$ and \eqref{eq:dominance} follows. This concludes the proof. $\hfill\qed$
\end{pf}
The property of dominance strongly constrains the asymptotic behavior
of the system, as described in the following theorem.
\begin{thm}[{\cite[Theorem 2]{Forni2017b}}]
\label{theorem:dominance:constrain}
Let \eqref{eq:inclusion}
be strictly $p$-dominant with rate $\lambda \geq 0$.
For any given $x \in \mathbb{R}^n$, let $\Omega(x)$ be the $\omega$-limit set of $x$.
Then the flow of \eqref{eq:inclusion} on $\Omega(x)$ is topologically equivalent to the flow of
a $p$-dimensional system.
\label{eq:dominance:constrain}
\end{thm}
Additionally, the following corollary becomes useful in characterizing
the asymptotic behavior of a dominant system.
\begin{cor}
Under the assumptions of Theorem \ref{theorem:dominance:constrain},
every bounded trajectory of \eqref{eq:inclusion} converges to
%
\begin{itemize}
\item A unique equilibrium point if $p = 0$.
\item An equilibrium point if $p = 1$.
\item A simple attractor if $p = 2$.
\end{itemize}
\label{corollary:behavior}
\end{cor}
Summing up, closed dynamical systems with smaller degrees of dominance show simpler behaviors
than systems with higher degrees. The following subsection extends the property of
dominance to open systems within the framework of dissipative systems.
\subsection{Signed dissipation inequalities}
Dissipativity theory \cite{willems1972}, \cite{willems1972b} is grounded in dissipation inequalities,
which generalize the physical characterization of a passive circuit as a system that can only absorb energy:
the variation of energy {\it stored} in the elements of the circuit (capacitors and inductors) is upper bounded
by the electrical power {\it supplied} to the circuit. For a linear circuit, the storage is a quadratic function of the state,
and the dissipation inequality takes the standard form
$$ \frac{d}{dt} x^{\top}Px \le - \lambda x^{\top} P x + v^{\top} i + i^{\top} v $$
The scalar $\lambda \ge 0$ determines a dissipation rate. Each pair of voltage $v_k$ and current $i_k$ appearing in
the voltage vector $v$ and the current vector $i$ determines a port of the circuit.
In matrix form, the quadratic dissipation inequality characterizing passivity reads
\begin{equation}
\begin{bmatrix}
\dot{x} \\
x
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & P
\\
P & 2 \lambda P
\end{bmatrix}
\begin{bmatrix}
\dot{x}
\\
x
\end{bmatrix}
\leq
\begin{bmatrix}
v \\ i
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & I
\\
I & 0
\end{bmatrix}
\begin{bmatrix}
v \\ i
\end{bmatrix}
\label{eq:passivity:inequality}
\end{equation}
An incremental dissipation inequality is in terms of increments rather than of absolute variables:
\begin{equation}
\begin{bmatrix}
\Delta \dot{x} \\
\Delta x
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & P
\\
P & 2 \lambda P
\end{bmatrix}
\begin{bmatrix}
\Delta \dot{x}
\\
\Delta x
\end{bmatrix}
\leq
\begin{bmatrix}
\Delta v \\ \Delta i
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & I
\\
I & 0
\end{bmatrix}
\begin{bmatrix}
\Delta v \\ \Delta i
\end{bmatrix}
\label{eq:inc-passivity:inequality}
\end{equation}
Motivated by the signed supply rates and signed storages introduced in Section \ref{section:motivation}, we generalize the incremental passivity dissipation inequality (\ref{eq:inc-passivity:inequality}) to {\it signed} dissipation inequalities of the form
%
\begin{equation}
\begin{bmatrix}
\Delta {\dot{x}} \\
\Delta {x}
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & P
\\
P & 2 \lambda P + \varepsilon I
\end{bmatrix}
\begin{bmatrix}
\Delta {\dot{x}}
\\
\Delta {x}
\end{bmatrix}
\leq
\begin{bmatrix}
\Delta {v} \\ \Delta {i}
\end{bmatrix}^{\top}
\begin{bmatrix}
\mathcal{Q} & \mathcal{I}
\\
\mathcal{I} & \mathcal{R}
\end{bmatrix}
\begin{bmatrix}
\Delta {v} \\ \Delta {i}
\end{bmatrix}
\label{eq:dissipation:inequality}
\end{equation}
for an arbitrary circuit with state $x \in \RE^n$ and $m$ ports defining the current $i \in \RE^m$ and voltage $v \in \RE^m$.
We only consider circuits composed of linear capacitors, linear inductors, and nonlinear resistors. The {\it signed} quadratic storage is
determined by the symmetric matrix $P$ with $p$ negative eigenvalues and $n-p$ positive eigenvalues. The {\it signed} supply
is determined by the signature matrix $\mathcal{I}$. The scalar $\lambda \geq 0$ is the dissipation rate.
The matrices $\mathcal{Q}, \mathcal{R}$ are symmetric as in \eqref{eq:supply:1}.
\begin{defn}
A nonlinear circuit is called {\it signed} passive if the inequality (\ref{eq:dissipation:inequality}) holds along any pair of trajectories.
The property is strict if $\varepsilon > 0$.
\label{def:signed:passive}
\end{defn}
Definition \ref{def:signed:passive} is very close to the classical definition of incremental passivity. The only
difference is that (i) we consider {\it signed} storages, i.e. {\it differences}
of positive storages and (ii) {\it signed} supply rates, i.e. {\it differences} of the classical {\it passivity} supply rates.
As illustrated in Section \ref{section:motivation}, such storages and supply rates appear naturally when considering
circuits with both passive and active elements and ports that can both absorb and deliver energy.
\subsection{Dissipative interconnections}
The central property of passivity theory is that passivity is preserved by interconnection. More precisely, port interconnections
of passive circuits are passive. In order to generalize this property to signed-passivity, we introduce the following definition.
\begin{defn}
\label{def:dissipative:connection}
Let $\Sigma_{a}$ and $\Sigma_{b}$ be signed-passive with a common rate $\lambda \ge 0$.
Their interconnection is called \emph{dissipative} if
\begin{equation}
\label{eq:dissipative:connection}
\Delta {i^{a}}^{\top} \mathcal{I}_{a} \Delta {v^{a}} + \Delta {i^{b}}^{\top} \mathcal{I}_{b} \Delta {v^{b}}
\leq \Delta {i}^{\top} \mathcal{I} \Delta {v}
\end{equation}
If equality holds in (\ref{eq:dissipative:connection}), then the interconnection is called {\it neutral}.
\end{defn}
The conventional passivity supply assumes $\mathcal{I}=I$. In this case, an interconnection is {\it neutral}
if
$$ \Delta {i^{a}}^{\top} \Delta {v^{a}} + \Delta {i^{b}}^{\top} \Delta {v^{b}}
= \Delta {i}^{\top} \Delta {v}
$$
Hence, port interconnections of passive circuits are neutral. More generally,
let us consider the port interconnection of two signed-passive systems as
\begin{align}
\nonumber
i^{a} & = - i^{b} + i^{cc} & i^{b} & = - i^{vc}
\\
v^{a} & = v^{b} + v^{vc} & v^{a} & = v^{cc}
\label{eq:simple:pattern}
\end{align}
where we have set $i = [i^{cc \top}, i^{vc \top} ]^{\top}$ and
$v = [v^{cc \top}, v^{vc \top}]^{\top}$. Here the pairs $(i^{cc}, v^{cc})$ and $(i^{vc}, v^{vc})$
are associated to current-controlled and voltage-controlled ports, respectively,
see Figures \ref{fig:basic:sw:vc} and \ref{fig:basic:osc}.
Substitution of \eqref{eq:simple:pattern} on the left-hand side of \eqref{eq:dissipative:connection}
shows that port interconnections of signed-passive systems with supplies
sharing the same signature (i.e., $\mathcal{I}_{a} = \mathcal{I}_{b}$)
are neutral.
Note that a circuit is closed or terminated whenever $i^{cc} = 0$ and $v^{vc} = 0$.
The question of how to realize a neutral or dissipative interconnection when interconnecting signed-passive circuits
is deferred to Section \ref{section:circuits:connection}. But the definition allows for the following generalization
of the passivity theorem.
\begin{thm}
\label{thm:interconnection}
The dissipative interconnection of two signed-passive systems with a common dissipation rate
is signed-passive with the same rate. The storage of the interconnected system is the sum of the storages.
\end{thm}
\begin{pf}
Let us consider the aggregated state $x = [x_{a}^{\top}, x_{b}^{\top}]^{\top}$, and the block-diagonal
matrix $P = \diag [P_{a}, P_{b} ]$. The sum of storages satisfies,
%
\begin{multline}
\begin{bmatrix}
\Delta \dot{x} \\ \Delta x
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & P
\\
P & 2 \lambda P + \varepsilon I
\end{bmatrix}
\begin{bmatrix}
\Delta \dot{x} \\ \Delta x
\end{bmatrix} \leq
\\
\sum_{k \in {a, b}}
\begin{bmatrix}
\Delta i^{k} \\ \Delta v^{k}
\end{bmatrix}^{\top}
\begin{bmatrix}
\mathcal{Q}_{k} & \mathcal{I}_{k}
\\
\mathcal{I}_{k} & \mathcal{R}_{k}
\end{bmatrix}
\begin{bmatrix}
\Delta i^{k} \\ \Delta v^{k}
\end{bmatrix}
\label{eq:sum:supplies}
\end{multline}
Simple, yet cumbersome, computations show that the substitution of the interconnection
pattern \eqref{eq:simple:pattern} into \eqref{eq:sum:supplies} together with
the dissipativity of the interconnection yields,
%
\begin{multline}
\begin{bmatrix}
\Delta \dot{x} \\ \Delta x
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & P
\\
P & 2 \lambda P + \varepsilon I
\end{bmatrix}
\begin{bmatrix}
\Delta \dot{x} \\ \Delta x
\end{bmatrix} \leq
\\
\begin{bmatrix}
\Delta i^{cc} \\ \Delta i^{vc} \\ \Delta v^{cc} \\ \Delta v^{vc}
\end{bmatrix}^{\top}
\begin{bmatrix}
\hat{\mathcal{Q}} & \hat{\mathcal{I}}
\\
\hat{\mathcal{I}} & \hat{\mathcal{R}}
\end{bmatrix}
\begin{bmatrix}
\Delta i^{cc} \\ \Delta i^{vc} \\ \Delta v^{cc} \\ \Delta v^{vc}
\end{bmatrix}
\label{eq:supply:connection}
\end{multline}
%
where $\hat{\mathcal{I}} = \diag [\mathcal{I}_{a}, \mathcal{I}_{b}]$ and
\begin{align*}
\hat{\mathcal{Q}} & =
\begin{bmatrix}
\mathcal{Q}_{a} & - \mathcal{Q}_{a}
\\
- \mathcal{Q}_{a} & \mathcal{Q}_{a} + \mathcal{Q}_{b}
\end{bmatrix}
&
\hat{\mathcal{R}} & =
\begin{bmatrix}
\mathcal{R}_{a} + \mathcal{R}_{b} & - \mathcal{R}_{b}
\\
- \mathcal{R}_{b} & \mathcal{R}_{b}
\end{bmatrix}
\end{align*}
and the result follows. $\hfill\qed$
\end{pf}
A key consequence of the passivity theorem is the property that when a passive system is terminated, it leads
to a stable equilibrium system. The storage becomes a Lyapunov function for the closed system.
The generalization of that result is as follows.
\begin{thm}
\label{thm:dominance:closedLoop}
Let $\Sigma_{a}$ be a strictly signed-passive circuit with rate $\lambda > 0$ and dominance degree $p$.
The terminated circuit built from the dissipative interconnection of $\Sigma_{a}$ with a resistor ($\Sigma_{b}$) defines a
$p$-dominant system with the same rate $\lambda > 0$ provided that
$\mathcal{Q}_{a} + \mathcal{Q}_{b} \leq 0$ and $\mathcal{R}_{a} + \mathcal{R}_{b} \leq 0$.
\end{thm}
\begin{pf}
Recall that a resistor (linear or nonlinear) satisfies \eqref{eq:supply:nr}. Thus, from Theorem \ref{thm:interconnection},
the interconnection satisfies \eqref{eq:supply:connection}. In addition, the termination of the ports,
i.e., $i^{cc} = 0$ and $v^{vc} = 0$, transforms \eqref{eq:supply:connection} into
%
\begin{multline*}
\begin{bmatrix}
\Delta \dot{x} \\ \Delta x
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & P
\\
P & 2 \lambda P + \varepsilon I
\end{bmatrix}
\begin{bmatrix}
\Delta \dot{x} \\ \Delta x
\end{bmatrix} \leq
\\
\begin{bmatrix}
\Delta i^{vc} \\ \Delta v^{cc}
\end{bmatrix}^{\top}
\begin{bmatrix}
\mathcal{Q}_{a} + \mathcal{Q}_{b} & 0
\\
0 & \mathcal{R}_{a} + \mathcal{R}_{b}
\end{bmatrix}
\begin{bmatrix}
\Delta i^{vc} \\ \Delta v^{cc}
\end{bmatrix} \leq 0
\end{multline*}
%
and the conclusion follows directly
from Definition \ref{def:dominance}. $\hfill\qed$
\end{pf}
\section{Elementary switching and oscillating circuits}
\label{section:circuits:lossless:lure}
In this section we review classical elementary circuits and illustrate their signed passivity properties.
\subsection{Switching circuits}
We start with the parallel nonlinear $RC$ circuit and the
series nonlinear $RL$ circuit shown in Figure \ref{fig:basic:sw:vc}.
For the nonlinear $RC$ circuit, we rewrite the dissipation inequality \eqref{eq:sw1:ineq:dissipation} in the matrix form with state $x = v^{c}$
\begin{multline}
\label{eq:cc_sw:ineq}
\begin{bmatrix}
\Delta {\dot x} \\ \Delta {x}
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & -\frac{C}{2}
\\
-\frac{C}{2} & - \lambda C
\end{bmatrix}
\begin{bmatrix}
\Delta {\dot x} \\ \Delta {x}
\end{bmatrix}
\leq
\\
\frac{1}{2}
\begin{bmatrix}
\Delta {i}^{cc} \\ \Delta {v}^{cc}
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & -1
\\
-1 & 2(G^{d} - \lambda C)
\end{bmatrix}
\begin{bmatrix}
\Delta {i}^{cc} \\ \Delta {v}^{cc}
\end{bmatrix}
\end{multline}
The dissipation inequality
involves the standard storage of a capacitor and the standard supply of a one-port circuit,
but both with a negative signature.
The circuit is the port interconnection of a capacitor with a negative resistor. The interconnection
is neutral as a port interconnection of elements with negative signature ${\mathcal I}=-1$.
Terminating the circuit, that is, setting $i^{cc} = 0$, results in a $1$-dominant system when $G^{d} - \lambda C < 0$.
This closed circuit has one or three equilibria. With three equilibria, one of which unstable, the circuit is an elementary
example of bistable switch.
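This behavior is easy to check numerically. The following Python sketch (our illustration, with a normalized cubic characteristic $g(v) = v^{3} - v$ and unit capacitance; these values are chosen for exposition, not taken from a physical design) integrates the terminated dynamics $C \dot{v} = -g(v)$ and shows convergence to one of the two stable equilibria depending on the initial condition.
\begin{verbatim}
# Euler simulation of the terminated nonlinear RC circuit,
# C*dv/dt = -g(v), with the N-shaped characteristic g(v) = v^3 - v.
# The equilibrium v = 0 is unstable; v = +1 and v = -1 are stable.
C, dt, steps = 1.0, 1e-3, 20000

def g(v):
    return v**3 - v   # negative slope (negative resistance) near v = 0

for v0 in (0.1, -0.1):
    v = v0
    for _ in range(steps):
        v += dt * (-g(v) / C)
    print(f"v(0) = {v0:+.1f}  ->  v(T) = {v:+.3f}")   # ~ +1 and -1
\end{verbatim}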
The dissipativity analysis of the series $RL$ circuit in Figure \ref{fig:basic:sw:vc} is similar.
Taking as state variable $\xi$, the circuit satisfies the dissipation inequality
\begin{multline}
\label{eq:vc_sw:ineq}
\begin{bmatrix}
\Delta {\dot \xi} \\ \Delta {\xi}
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & -\frac{L}{2}
\\
-\frac{L}{2} & - \lambda L
\end{bmatrix}
\begin{bmatrix}
\Delta {\dot \xi} \\ \Delta {\xi}
\end{bmatrix}
\leq
\\
\frac{1}{2}
\begin{bmatrix}
\Delta {i}^{vc} \\ \Delta {v}^{vc}
\end{bmatrix}^{\top}
\begin{bmatrix}
2(R^{d} - \lambda L) & -1
\\
-1 & 0
\end{bmatrix}
\begin{bmatrix}
\Delta {i}^{vc} \\ \Delta {v}^{vc}
\end{bmatrix}
\end{multline}
The circuit is a bistable switch when
$R^{d} - \lambda L < 0$. Both circuits can be seen as abstract realizations of the classical Schmitt trigger circuit, in which the negative
resistor is usually realized with an operational amplifier in positive feedback \cite{miranda2018a}.
\subsection{Oscillating circuits}
We proceed with the analysis of the nonlinear RLC circuits
shown in Figure \ref{fig:basic:osc}.
\begin{figure}[hptb]
\begin{center}
\includegraphics[trim={0.5cm, 0.3cm, 0.45cm, 0.3cm}, clip]{basic_oscillator_vc_cc.eps}
\end{center} \vspace{-3mm}
\caption{Basic prototypes of a current-controlled (left) and a voltage-controlled (right)
signed-passive circuit with dominance degree $2$. }
\label{fig:basic:osc}
\end{figure}
The parallel nonlinear $RLC$ circuit is the port interconnection of the nonlinear $RC$
circuit in the previous section with a lossless inductor. The port interconnection is neutral
as an interconnection of two circuits with supply signature ${\mathcal I} = -1$. The total storage
is the sum of two negative storages
$$ - \frac{C}{2} (\Delta x)^2 - \frac{L}{2} (\Delta \xi)^2. $$
Defining the state $\Delta z = [ \Delta x \; \Delta \xi]^T$ and
$$ P = \left [ \begin{array}{cc} -\frac{C}{2} & 0 \\ 0 & -\frac{L}{2} \end{array} \right ] ,
$$
the interconnection satisfies the dissipation inequality
\begin{multline}
\begin{bmatrix}
\Delta {\dot{z}} \\
\Delta {z}
\end{bmatrix}^{\top}
\begin{bmatrix}
0 & P
\\
P & 2 \lambda P
\end{bmatrix}
\begin{bmatrix}
\Delta {\dot{z}}
\\
\Delta {z}
\end{bmatrix}
\leq
\\
\frac{1}{2}
\begin{bmatrix}
\Delta {i}^{cc} \\ \Delta {v}^{cc}
\end{bmatrix}^{\top}
\begin{bmatrix}
-2 \lambda L & -1
\\
-1 & 2(G^{d} - \lambda C)
\end{bmatrix}
\begin{bmatrix}
\Delta {i}^{cc} \\ \Delta {v}^{cc}
\end{bmatrix}
\end{multline}
The storage has a dominance degree 2 and the supply has a negative signature ${\mathcal I} =-1$.
When terminated, that is, when $i^{cc} = 0$, the circuit is 2-dominant for $G^{d} < \lambda C $.
It is a prototype of a negative-resistance nonlinear oscillator, such as the circuits studied by
Van der Pol \cite{vanDerPol1926} and Nagumo \cite{nagumo1962}.
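Again, a short numerical experiment illustrates the claim. The sketch below (ours, with normalized parameters) integrates the terminated dynamics $C \dot{v} = -g(v) - i$, $L \dot{i} = v$ for the same cubic characteristic; this is a Li\'enard system of the Van der Pol type, and trajectories starting near the unstable equilibrium settle onto a limit cycle.
\begin{verbatim}
# Euler simulation of the terminated parallel RLC circuit with the
# cubic resistor g(v) = v^3 - v (a Van der Pol / Lienard system).
C, L, dt, steps = 1.0, 1.0, 1e-3, 60000

def g(v):
    return v**3 - v

v, i, vmax = 0.01, 0.0, 0.0     # start near the unstable equilibrium
for k in range(steps):
    v, i = v + dt * (-g(v) - i) / C, i + dt * v / L
    if k > steps // 2:          # record amplitude after the transient
        vmax = max(vmax, abs(v))
print(f"steady-state amplitude |v| ~ {vmax:.2f}")   # about 1.15 here
\end{verbatim}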
The series interconnection in Figure \ref{fig:basic:osc} can be studied in a similar way,
as a neutral interconnection between the nonlinear $RL$ circuit in the previous section and a
lossless capacitor. The circuit is signed dissipative with the same storage and with the supply
$$
\sigma(\Delta_{i}, \Delta v) = \frac{1}{2}
\begin{bmatrix}
\Delta i^{vc} \\ \Delta v^{vc}
\end{bmatrix}^{\top}
\begin{bmatrix}
2(R^{d} -\lambda L) & -1
\\
-1 & -2 \lambda C
\end{bmatrix}
\begin{bmatrix}
\Delta {i}^{vc} \\ \Delta {v}^{vc}
\end{bmatrix}
$$
\section{Dissipative interconnections}
\label{section:circuits:connection}
We return to the question of realizing dissipative interconnections satisfying \eqref{eq:dissipative:connection}. We illustrate the construction with
the \emph{static} coupling network shown in Figure \ref{fig:dissipative:interconnection}.
\begin{figure}[htpb]
\centering
\includegraphics[trim={0.4cm, 0.25cm, 0.3cm, 0.25cm}, clip]{dissipativeInterconnection2.eps}
\caption{Dissipative interconnection of circuits $\Sigma_{a}$ and $\Sigma_{b}$ through the coupling network $\Sigma_{c}$.}
\label{fig:dissipative:interconnection}
\end{figure}
The interconnection equations are
\begin{align}
\nonumber
i^{k} & = - \tilde{i}^{k} + i^{k, cc}, & \tilde{i}^{k} & = - i^{k, vc}
\\
v^{k} & = \tilde{v}^{k} + v^{k, vc}, & v^{k} & = v^{k, cc}
\label{eq:connection:pattern}
\end{align}
where the variables $i^{k, cc}$, $v^{k, cc}$, $i^{k, vc}$ and $v^{k, vc}$, $k \in \{a, b \}$, represent
the range of possible ports available after interconnection.
With this notation, a port is closed or terminated when $i^{k, cc} = 0$ and $v^{k, vc} = 0$, $k \in \{a, b \}$, which
is the case shown in Figure \ref{fig:dissipative:interconnection}.
The following theorem provides conditions on the coupling network $\Sigma_{c}$ guaranteeing a dissipative interconnection.
\begin{thm}
\label{thm:dissipative:coupling}
The interconnection between $\Sigma_{a}$ and $\Sigma_{b}$ is dissipative if and only if the coupling network $\Sigma_{c}$ is
signed-passive without any shortage of signed-passivity, i.e., if and only if $\Sigma_{c}$ satisfies,
%
\begin{equation}
0 \leq
\begin{bmatrix}
\Delta \tilde{i}^{a} \\ \Delta \tilde{i}^{b} \\ \Delta \tilde{v}^{a} \\ \Delta \tilde{v}^{b}
\end{bmatrix}^{\top}
\begin{bmatrix}
\tilde{\mathcal{Q}}_{a} & 0 & \mathcal{I}_{a} & 0
\\
0 & \tilde{\mathcal{Q}}_{b} & 0 & \mathcal{I}_{b}
\\
\mathcal{I}_{a} & 0 & \tilde{\mathcal{R}}_{a} & 0
\\
0 & \mathcal{I}_{b} & 0 & \tilde{\mathcal{R}}_{b}
\end{bmatrix}
\begin{bmatrix}
\Delta \tilde{i}^{a} \\ \Delta \tilde{i}^{b} \\ \Delta \tilde{v}^{a} \\ \Delta \tilde{v}^{b}
\end{bmatrix}
\label{eq:coupling:passive}
\end{equation}
with $\tilde{\mathcal{Q}}_{k} \leq 0$, $\tilde{\mathcal{R}}_{k} \leq 0$ for all $k \in \{a, b\}$. In addition, the interconnection is
neutral if and only if,
%
\begin{equation}
0 = \Delta \tilde{i}^{a} \mathcal{I}_{a} \Delta \tilde{v}^{a} + \Delta \tilde{i}^{b} \mathcal{I}_{b} \Delta \tilde{v}^{b}
\label{eq:coupling:neutral}
\end{equation}
\end{thm}
\begin{pf}
Computation of the left-hand side of \eqref{eq:dissipative:connection} under the
interconnection pattern \eqref{eq:connection:pattern} leads to
\begin{align*}
& \Delta i^{a} \mathcal{I}_{a} \Delta v^{a} + \Delta i^{b} \mathcal{I}_{b} \Delta v^{b}
\\
& = \sum_{k \in \{a, b \}} \left( -\Delta \tilde{i}^{k} + \Delta i^{k, cc} \right) \mathcal{I}_{k} \Delta v^{k}
\\
& = \sum_{k \in \{a, b \}} - \Delta \tilde{i}^{k} \mathcal{I}_{k} \left( \Delta \tilde{v}^{k} + \Delta v^{k, vc} \right) +
\Delta i^{k, cc} \mathcal{I}_{k} \Delta v^{k, cc}
\\
& = \sum_{k \in \{a, b \}} - \Delta \tilde{i}^{k} \mathcal{I}_{k} \Delta \tilde{v}^{k}
\\
& \qquad + \sum_{k \in \{a, b\}} \Delta i^{k, cc} \mathcal{I}_{k} \Delta v^{k, cc} + \Delta i^{k, vc} \mathcal{I}_{k} \Delta v^{k, vc}
\\
& \leq \sum_{k \in \{a, b\}} \Delta i^{k, cc} \mathcal{I}_{k} \Delta v^{k, cc} + \Delta i^{k, vc} \mathcal{I}_{k} \Delta v^{k, vc}
\end{align*}
where we have made use of \eqref{eq:coupling:passive} in the last step.
Hence, the conclusion follows by taking
\begin{align}
\nonumber
i & = [i^{a, cc}, i^{b, cc}, i^{a, vc}, i^{b, vc}]^{\top}
\\
v &= [v^{a, cc}, v^{b, cc}, v^{a, vc}, v^{b, vc}]^{\top}
\label{eq:vi:vector}
\end{align}
and $\mathcal{I} = \diag [\mathcal{I}_{a}, \mathcal{I}_{b}, \mathcal{I}_{a}, \mathcal{I}_{b} ]$. $\hfill\qed$
\end{pf}
The addition of the network $\Sigma_{c}$ adds \emph{signed} dissipation to both systems, allowing the following
generalization of Theorem \ref{thm:dominance:closedLoop}.
\begin{cor}
Let $\Sigma_{a}$ be a strictly signed-passive circuit with rate $\lambda > 0$ and dominance degree $p$.
The terminated circuit built from the dissipative interconnection of $\Sigma_{a}$ with a resistor ($\Sigma_{b}$) through
a coupling $\Sigma_{c}$ defines a
$p$-dominant system with the same rate $\lambda > 0$ provided that
\begin{equation}
\sum_{k \in \{a, b \}}
\begin{bmatrix}
\Delta i^{k} \\ \Delta v^{k}
\end{bmatrix}^{\top}
\begin{bmatrix}
\mathcal{Q}_{k} + \tilde{\mathcal{Q}}_{k} & 0
\\
0 & \mathcal{R}_{k} + \tilde{\mathcal{R}}_{k}
\end{bmatrix}
\begin{bmatrix}
\Delta i^{k} \\ \Delta v^{k}
\end{bmatrix} \leq 0
\label{eq:dominance:condition:gral}
\end{equation}
\end{cor}
\begin{pf}
The proof is the same as in Theorem \ref{thm:dominance:closedLoop} but considering Theorem \ref{thm:dissipative:coupling} and
the interconnection pattern \eqref{eq:connection:pattern} instead. $\hfill\qed$
\end{pf}
Figures \ref{fig:dissipative:T}-\ref{fig:dissipative:Pi} illustrate practical realizations of dissipative interconnections
where resistive elements model power losses.
\begin{figure}[htpb]
\centering
\includegraphics[trim={0.4cm, 0cm, 0.3cm, 0.2cm}, clip]{activeInterconnection_T.eps}
\caption{``T'' interconnection of systems $\Sigma_{a}$ and $\Sigma_{b}$ using a current-controlled current source
for the cases when $\mathcal{I}_{a} = -\mathcal{I}_{b}$.}
\label{fig:dissipative:T}
\end{figure}
The ``T'' connection in Figure \ref{fig:dissipative:T} imposes the constraints
\begin{align*}
i^{a} & = -\tilde{i}^{a}, \quad i^{b} = -\tilde{i}^{b}
\\
v^{a} & = \tilde{v}^{a} = R_{a} \tilde{i}^{a} - \frac{R_{c}}{\alpha - 1}(\tilde{i}^{a} + \tilde{i}^{b})
\\
v^{b} & = \tilde{v}^{b} = R_{b} \tilde{i}^{b} - \frac{R_{c}}{\alpha - 1}(\tilde{i}^{a} + \tilde{i}^{b})
\end{align*}
where $\alpha > 1$. Without loss of generality we assume that $\mathcal{I}_{a} = -1$ and $\mathcal{I}_{b} = 1$.
It follows from direct computations that the ``T'' bridge satisfies \eqref{eq:coupling:passive} with
\begin{align*}
\tilde{\mathcal{Q}}_{a} & = R_{a} - \frac{R_{c}}{\alpha - 1}, & \tilde{\mathcal{R}}_{a} & = 0
\\
\tilde{\mathcal{Q}}_{b} & = \frac{R_{c}}{\alpha - 1} - R_{b}, & \tilde{\mathcal{R}}_{b} & = 0
\end{align*}
Hence, according to Theorem \ref{thm:dissipative:coupling}, the interconnection of $\Sigma_{a}$ and $\Sigma_{b}$ via the ``T'' bridge
is dissipative for the case $\mathcal{I}_{a} = -1$ and $\mathcal{I}_{b} = 1$ whenever $R_{a} \leq \frac{R_{c}}{\alpha -1} \leq R_{b}$.
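This condition is easy to verify numerically. The sketch below (ours; the resistor values are hypothetical and chosen so that $R_{a} \leq \frac{R_{c}}{\alpha - 1} \leq R_{b}$) samples random port currents of the ``T'' bridge and checks that the quadratic form in \eqref{eq:coupling:passive} is nonnegative for $\mathcal{I}_{a} = -1$, $\mathcal{I}_{b} = 1$.
\begin{verbatim}
# Random-sample check of (eq:coupling:passive) for the "T" bridge.
import random

Ra, Rb, Rc, alpha = 1.0, 5.0, 6.0, 3.0   # rho = 3 lies in [Ra, Rb]
rho = Rc / (alpha - 1.0)
Qa, Qb = Ra - rho, rho - Rb              # both <= 0 by construction

for _ in range(1000):
    ia, ib = random.uniform(-1, 1), random.uniform(-1, 1)
    va = Ra * ia - rho * (ia + ib)       # "T" bridge relations
    vb = Rb * ib - rho * (ia + ib)
    form = Qa * ia**2 + Qb * ib**2 + 2 * (-ia * va + ib * vb)
    assert form >= -1e-12                # nonnegative quadratic form
print("dissipativity condition verified on 1000 samples")
\end{verbatim}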
The dual version of the ``T'' connection in Figure \ref{fig:dissipative:T} is the ``$\Pi$'' connection
as shown in Figure \ref{fig:dissipative:Pi}.
\begin{figure}[htpb]
\centering
\includegraphics[trim={0.4cm, 0cm, 0.3cm, 0.2cm}, clip]{activeInterconnection_Pi.eps}
\caption{``$\Pi$'' interconnection of systems $\Sigma_{a}$ and $\Sigma_{b}$ using a current-controlled current source
for the cases when $\mathcal{I}_{a} = -\mathcal{I}_{b}$.}
\label{fig:dissipative:Pi}
\end{figure}
In this case the connection imposes the relations
\begin{align*}
v^{a} & = \tilde{v}^{a}, \quad v^{b} = \tilde{v}^{b}
\\
-i^{a} & = \tilde{i}^{a} = \frac{1}{R_{a}} \tilde{v}^{a} - \frac{\alpha - 1}{R_{c}} \left( \tilde{v}^{a} - \tilde{v}^{b} \right)
\\
-i^{b} & = \tilde{i}^{b} = \frac{1}{R_{b}} \tilde{v}^{b} + \frac{\alpha - 1}{R_{c}} \left( \tilde{v}^{a} - \tilde{v}^{b} \right)
\end{align*}
where $\alpha > 1$. Hence direct computations show that the ``$\Pi$'' bridge
also satisfies \eqref{eq:coupling:passive} with
\begin{align*}
\tilde{\mathcal{Q}}_{a} & = 0, & \tilde{\mathcal{R}}_{a} & = \frac{1}{R_{a}} - \frac{\alpha - 1}{R_{c}}
\\
\tilde{\mathcal{Q}}_{b} & = 0, & \tilde{\mathcal{R}}_{b} & = \frac{\alpha - 1}{R_{c}} - \frac{1}{R_{b}}
\end{align*}
Following again Theorem \ref{thm:dissipative:coupling}, the
``$\Pi$'' bridge provides an interconnection that is dissipative whenever
$\frac{1}{R_{a}} \leq \frac{\alpha - 1}{R_{c}} \leq \frac{1}{R_{b}}$.
Both dissipative interconnections above can be implemented by using negative resistance devices as shown
in Figure \ref{fig:connections:T:Pi:implementation}. One should stress that the implementations in Figure \ref{fig:connections:T:Pi:implementation}
only consider the active range of the controlled resistors $R_{vc}$ and $R_{cc}$.
\begin{figure}[htpb]
\centering
\includegraphics[trim={0.4cm, 0cm, 0.3cm, 0.2cm}, clip]{activeInterconnection_wControlledResistor.eps}
\includegraphics[trim={0.4cm, 0cm, 0.3cm, 0.2cm}, clip]{activeInterconnection_wControlledResistor2.eps}
\caption{Implementation of dissipative ``T'' and ``$\Pi$'' interconnections via controlled resistors. Both interconnection networks are
dissipative for systems with opposite supply signature $\mathcal{I}_{a} = -\mathcal{I}_{b}$ in the active range of the controlled resistors.}
\label{fig:connections:T:Pi:implementation}
\end{figure}
\section{An example}
\begin{figure*}[htpb]
\centering
\includegraphics[trim={0.4cm, 0cm, 0.3cm, 0.2cm}, clip]{nagumo_passive3.eps}
\caption{Negative resistance oscillator connected to a passive load through a ``$\Pi$'' dissipative interconnection.}
\label{fig:fn}
\end{figure*}
We conclude this paper with an analysis of the circuit shown in Figure \ref{fig:fn}. The circuits $\Sigma_{a_{1}}$ and $\Sigma_{a_{2}}$ are the
negative resistance switches analyzed in Section \ref{section:circuits:lossless:lure}.
From \eqref{eq:cc_sw:ineq}-\eqref{eq:vc_sw:ineq} it becomes clear that their interconnection (denoted as $\Sigma_{a}$) is
neutral. In addition, Theorem \ref{thm:interconnection} reveals that the resulting circuit is signed-passive with a negative storage
(of dominance degree 2) and a passivity supply with negative signature $-1$, for all
$\lambda >\max\{ \frac{G^{d}}{C_0}, \frac{R^{d}}{L_{0}} \}$, where $G^{d}$ and $R^{d}$ are the positive slopes of the voltage-current characteristics
of $R_{cc}^{a}$ and $R_{vc}^{a}$ respectively.
The circuit $\Sigma_{b}$ is a classical linear $RC$ passive load. It has a positive definite storage and is passive, that is, signed-passive with a
positive-signature supply $+1$, for $\lambda < \min_{k \in \{1, 2, 3 \} } \left\{ \frac{1}{R_{k} C_{k}} \right\}$.
The two circuits are interconnected through the ``$\Pi$'' bridge discussed in the previous section. This element makes the interconnection
of $\Sigma_{a}$ and $\Sigma_{b}$ dissipative. As a consequence, the interconnected circuit is signed-passive. Its storage is the difference
of two positive definite storages. It has a dominance degree 2. The supply of the interconnected system is a passivity supply with
positive signature $+1$. The terminated circuit is 2-dominant for any rate $\lambda$ satisfying
$$\max \left\{ \frac{G^{d}}{C_0}, \frac{R^{d}}{L_{0}} \right \} < \lambda < \min_{k \in \{1, 2, 3 \} } \left\{ \frac{1}{R_{k} C_{k}} \right\}.$$
The simulation in Figure \ref{fig:fn:trajectories:dissipative} is for the set of parameters $L_{0} = 50\,\mathrm{mH}$,
$C_{0} = 10\,\mu\mathrm{F}$, $C_{1} = C_{2} = C_{3} = 0.1\,\mu\mathrm{F}$, $R_{1} = R_{2} = R_{3} = R_{12} = R_{23} = 1\,\Omega$,
$R_{a} = 20\,\Omega$, and $R_{b} = 10\,\Omega$.
The active resistors $R_{vc}^{a}$, $R_{cc}^{a}$ and $R_{vc}^{c}$ have voltage-current characteristics given by
\begin{align*}
g_{1}(x_{1}) &=
\begin{cases}
0.1 x_{1} & x_{1} < 2\,\mathrm{V}
\\
-0.1 x_{1} + 0.4 & 2\,\mathrm{V} \leq x_{1} \leq 3\,\mathrm{V}
\\
0.1 x_{1} - 0.2 & 3\,\mathrm{V} < x_{1}
\end{cases}
\\
r_{2}(x_{2}) &=
\begin{cases}
10 x_{2} + 5 & x_{2} < -0.2\,\mathrm{A}
\\
-10 x_{2} + 1 & -0.2\,\mathrm{A} \leq x_{2} \leq -0.1\,\mathrm{A}
\\
10 x_{2} + 3 & -0.1\,\mathrm{A} < x_{2}
\end{cases}
\\
g_{2}(v) & =
\begin{cases}
0.1375 v + 0.9625 & v < -5\,\mathrm{V}
\\
-0.055 v & -5\,\mathrm{V} \leq v \leq 5\,\mathrm{V}
\\
0.1375 v - 0.9625 & 5\,\mathrm{V} < v
\end{cases}
\end{align*}
Note that the active resistor $R_{vc}^{c}$ has an active region with a negative slope of $-0.055$ and satisfies
$\frac{1}{R_{a}} \leq 0.055 \leq \frac{1}{R_{b}}$, thus providing a dissipative coupling locally.
Also, with this set of parameters the circuit has a unique, unstable equilibrium. The simulated behavior is bounded
and lies entirely in the active range of the controlled resistors.
By 2-dominance of the circuit, the trajectory must converge to a limit cycle.
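The rate window and the local coupling condition follow by direct arithmetic from these parameters; the short sketch below (ours; the slopes $G^{d} = 0.1$ and $R^{d} = 10$ are read off the outer branches of $g_{1}$ and $r_{2}$ above) reproduces the computation.
\begin{verbatim}
# Admissible dominance rate window for the example circuit.
L0, C0 = 50e-3, 10e-6
C1 = C2 = C3 = 0.1e-6
R1 = R2 = R3 = 1.0
Ra, Rb = 20.0, 10.0
Gd, Rd = 0.1, 10.0        # positive outer slopes of g1 and r2
slope_c = 0.055           # |negative slope| of g2 (active region)

lam_lo = max(Gd / C0, Rd / L0)                     # 1.0e4 (1/s)
lam_hi = min(1 / (R * C) for R, C in
             [(R1, C1), (R2, C2), (R3, C3)])       # 1.0e7 (1/s)
print(f"2-dominant for {lam_lo:.0e} < lambda < {lam_hi:.0e}")
print("Pi coupling dissipative:", 1/Ra <= slope_c <= 1/Rb)  # True
\end{verbatim}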
\begin{figure}[htpb]
\centering
\begin{tikzpicture}
\begin{axis}[
xtick = {0.0, 0.1, 0.2},
xticklabels = {0.0, 0.1, 0.2},
ytick = {-30, -15, 0, 10},
yticklabels = {-30, -15, 0, 10},
x grid style={white!80.0!black},
xlabel={$t [s]$},
xmajorgrids,
xmin=-0.01, xmax=0.21,
y grid style={white!80.0!black},
ylabel={$x_{3}^{b} [mV]$},
ymajorgrids,
ymin=-30, ymax=10,
height=3.5cm, width=0.45\textwidth,
]
\addplot [thick, black, forget plot] table [x=t, y=Y2, col sep=comma]{nagumo-passive2.csv};
\end{axis}
\end{tikzpicture}
\caption{Time trajectory of the voltage across the capacitor $C_{3}$ of the circuit in Figure \ref{fig:fn}.}
\label{fig:fn:trajectories:dissipative}
\end{figure}
\camera{With increasing use of mobile devices, photo sharing
services are experiencing greater popularity.}
Aside from providing storage, photo sharing services enable
bandwidth-efficient downloads to mobile devices by performing
server-side image transformations (resizing, cropping).
On the flip side, photo sharing services have raised privacy
concerns such as leakage of photos to unauthorized viewers and the
use of \camera{algorithmic} recognition technologies by providers.
To address these concerns, we propose a privacy-preserving photo
encoding algorithm that extracts and encrypts a small, but
significant, component of the photo, while preserving the remainder
in a public, standards-compatible, part.
These two components can be separately stored.
This technique significantly reduces \iftechrep the signal-to-noise
ratio and \fi the accuracy of automated detection and recognition on
the public part, while preserving the ability of the provider to
perform server-side transformations to conserve download bandwidth
usage.
Our prototype privacy-preserving photo sharing system, {P3}\xspace, works
with Facebook, and can be extended to other services as well.
{P3}\xspace requires no changes to existing services or mobile application
software, and adds minimal photo storage overhead.
\section{The Examined Images}
\label{appx:rawimages}
In this section, we provide a set of images used for the evaluation.
Each raw image is presented with its Canny edge detection result.
\begin{figure}
\centering
\includegraphics[viewport=0 200 800 550, scale=0.32, clip=true]{figures/eval_edge_orig_boat.pdf}
\caption{A boat image from USC-SIPI}
\label{fig:appx_boat}
\vspace{-1ex}
\end{figure}
\begin{figure}
\centering
\includegraphics[viewport=0 200 800 550, scale=0.32, clip=true]{figures/eval_edge_orig_tree.pdf}
\caption{A tree from USC-SIPI}
\label{fig:appx_tree}
\vspace{-1ex}
\end{figure}
\begin{figure}
\centering
\includegraphics[viewport=0 200 800 550, scale=0.32, clip=true]{figures/eval_edge_orig_vegi.pdf}
\caption{A vegetable image from USC-SIPI}
\label{fig:appx_vegetable}
\vspace{-1ex}
\end{figure}
\begin{figure}
\centering
\includegraphics[viewport=0 200 800 550, scale=0.32, clip=true]{figures/eval_edge_orig_baboon.pdf}
\caption{A baboon image from USC-SIPI}
\label{fig:appx_baboon}
\vspace{-1ex}
\end{figure}
\section{{P3}\xspace: The Algorithm}
\label{sec:approach}
In this section, we describe the {P3}\xspace algorithm for ensuring privacy
of photos uploaded to PSPs.
In the next section, we describe the design and implementation of a
complete system for privacy-preserving photo sharing.
\begin{figure}[t]
\centering
\includegraphics[viewport=0 100 770 550, scale=0.30, clip=true]{figures/algodesc.pdf}
\caption{Privacy-Preserving Image Encoding Algorithm}
\label{fig:algodesc}
\vspace{-2ex}
\end{figure}
\subsection{Overview}
One possibility for preserving the privacy of photos is end-to-end
encryption.
\emph{Senders}\footnote{We use ``sender''
to denote the user of a PSP who uploads images to the PSP.} may
encrypt photos before uploading, and \emph{recipients} use a shared
secret key to decrypt photos on their devices.
This approach cannot provide image scalability, since the photo
representation is not JPEG-compliant and is opaque to the PSP, so the PSP
cannot perform transformations like resizing and cropping.
Indeed, PSPs like Facebook reject attempts to upload fully-encrypted
images.
A second approach is to leverage the JPEG image compression pipeline.
Current image compression standards use a well-known \emph{DCT
dictionary} when computing the DCT coefficients.
A \emph{private} dictionary~\cite{K-SVD}, known only to the sender and
the authorized recipients, can be used to preserve privacy.
Using the coefficients of this dictionary, it may be possible for PSPs
to perform image scaling transformations.
However, as currently defined, these coefficients result in a non-JPEG
compliant bit-stream, so PSP-side code changes would be required in
order to make this approach work.
A third strawman approach might selectively hide faces by performing
face detection on an image before uploading.
This would leave a JPEG-compliant image in the clear, with the hidden
faces stored in a separate encrypted part.
At the recipient, the image can be reconstructed by combining the two
parts.
However, this approach does not address our privacy goals completely:
if an image is leaked from the PSP, attackers can still obtain
significant information from the non-obscured parts (e.g., torsos,
other objects in the background etc.).
Our approach to privacy-preserving photo sharing also uses
\emph{selective encryption}, but with a different design.
In this approach, called {P3}\xspace, a photo is divided into two
parts, a \emph{public} part and a \emph{secret} part.
The public part is exposed to the PSP, while the secret part is
encrypted and shared between the sender and the recipients (in a
manner discussed later).
Given the constraints discussed in Section~\ref{sec:motiv}, the public
and secret parts must satisfy the following requirements:
\begin{itemize}[topsep=-0.5ex,itemsep=-0.8ex, leftmargin=0.1cm]
\item It must be possible to represent the public part as a
JPEG-compliant image. This will allow PSPs to perform image
scaling.
\item However, intuitively, most of the ``important'' \emph{information} in the
photo must be in the secret part. This would prevent attackers from
making sense of the public part of the photos even if they were able
to access these photos. It would also prevent PSPs from successfully
applying recognition algorithms.
\item Most of the \emph{volume} (in bytes) of the image must reside in
the public part. This would permit PSP server-side image scaling to
have the bandwidth and latency benefits discussed above.
\item The combined size of the public and secret parts of the image
must not significantly exceed the size of the original image, as
discussed above.
\end{itemize}
Our {P3}\xspace algorithm, which satisfies these requirements, has two
components: a sender side encryption algorithm, and a recipient-side
decryption algorithm.
\subsection{Sender-Side Encryption}
JPEG compression relies on the \emph{sparsity} in the DCT domain of
typical natural images: a few (large magnitude) coefficients provide
most of the information needed to reconstruct the pixels.
Moreover, as the quality of cameras on mobile devices increases,
images uploaded to PSPs are typically encoded at high quality.
{P3}\xspace leverages both the sparsity and the high quality of these images.
First, because of sparsity, most information is contained in a few
coefficients, so it is sufficient to degrade a few such coefficients,
in order to achieve significant reductions in quality of the public
image.
Second, because the quality is high, quantization of each coefficient
is very fine and the least significant bits of each coefficient
represent very small incremental gains in reconstruction quality.
{P3}\xspace's encryption algorithm encodes the most significant bits of (the
few) significant coefficients in the secret part, leaving everything
else (less important coefficients, and least significant bits of more
important coefficients) in the public part.
We concretize this intuition in the following design for {P3}\xspace sender
side encryption.
The selective encryption algorithm is, conceptually, inserted into the
JPEG compression pipeline after the quantization step.
At this point, the image has been converted into frequency-domain
quantized DCT coefficients.
While there are many possible approaches to extracting the most
significant information, {P3}\xspace uses a relatively simple approach.
First, it extracts the DC coefficients from the image into the secret
part, replacing them with zero values in the public part.
The DC coefficients represent the average value of each 8x8 pixel
block of the image; these coefficients usually contain enough
information to represent thumbnail versions of the original image with
enough visual clarity.
Second, {P3}\xspace uses a threshold-based splitting algorithm in which each
AC coefficient $y(i)$ is compared against a threshold $T$ and
processed as follows:
\begin{itemize}[topsep=-0.5ex,itemsep=-0.8ex, leftmargin=0.1cm]
\item If $\lvert y(i) \rvert \leq T$, then the coefficient is represented in the public
part as is, and in the secret part with a zero.
\item If $\lvert y(i) \rvert > T$, the coefficient is replaced in the public
part with $T$, and the secret part contains the magnitude of the
difference as well as the sign.
\end{itemize}
Intuitively, this approach clips off the significant coefficients at
$T$.
$T$ is a tunable parameter that represents the trade-off between
storage/bandwidth overhead and privacy; a smaller $T$ extracts more
signal content into the secret part, but can potentially incur greater
storage overhead.
We explore this trade-off empirically in Section~\ref{sec:eval}.
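To make the splitting rule concrete, the following Python sketch (our illustration using NumPy; the actual implementation operates inside the JPEG codec, as described in Section~\ref{sec:sysarch}) applies it to one block of 64 quantized coefficients in zigzag order, with index 0 holding the DC coefficient.
\begin{verbatim}
import numpy as np

def p3_split(y, T):
    """Split one 64-coefficient block into public and secret parts."""
    y = np.asarray(y, dtype=int)
    pub, sec = y.copy(), np.zeros_like(y)
    pub[0], sec[0] = 0, y[0]          # DC goes wholly to the secret part
    big = np.abs(y) > T               # above-threshold AC coefficients
    big[0] = False
    pub[big] = T                      # clip magnitude (and sign) at T
    sec[big] = np.sign(y[big]) * (np.abs(y[big]) - T)
    return pub, sec
\end{verbatim}
For instance, with $T = 10$, a coefficient $y(i) = -37$ is stored as $x_p(i) = 10$ in the public part and $x_s(i) = -27$ in the secret part.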
Notice that both the public and secret parts are JPEG-compliant
images, and, after they have been generated, can be subjected to
entropy coding.
Once the public and secret parts are prepared, the secret part is
encrypted and, conceptually, both parts can be uploaded to the PSP (in
practice, our system is designed differently, for reasons discussed in
Section~\ref{sec:sysarch}).
We also defer a discussion of the encryption scheme
to Section~\ref{sec:sysarch}.
\begin{figure}
\centering
\includegraphics[viewport=5 390 770 550, scale=0.32, clip=true]{figures/processing_desc.pdf}
\vspace{-4ex}
\caption{{P3}\xspace Overall Processing Chain}
\label{fig:processing_desc}
\vspace{-2ex}
\end{figure}
\subsection{Recipient-side Decryption and Reconstruction}
While the sender-side encryption algorithm is conceptually simple, the
operations on the recipient-side are somewhat trickier.
At the recipient, {P3}\xspace must decrypt the secret part and reconstruct
the original image by combining the public and secret parts.
{P3}\xspace's selective encryption is \emph{reversible}, in the sense that,
the public and secret parts can be recombined to reconstruct the original image.
This is straightforward when the public image is stored unchanged, but
requires a more detailed analysis in the case when the PSP performs some
processing on the public image (e.g., resizing, cropping, etc) in
order to reduce storage, latency or bandwidth usage.
In order to derive how to reconstruct an image when the public image
has been processed, we start by expressing the reconstruction
for the unprocessed case as a series of linear operations.
Let the threshold for our splitting algorithm be denoted $T$.
Let ${\bf y}$ be a block of DCT coefficients corresponding to an $8
\times 8$ pixel block in the
original image. Denote by
${\bf x_p}$ and ${\bf x_s}$ the corresponding DCT coefficient values
assigned to the public and secret images,
respectively,
for \camera{the} same block\footnote{\camera{For ease of exposition, we represent these
blocks as $64 \times 1$ vectors}}.
For example, if one of those coefficients is such that $\lvert y(i) \rvert > T$, we will have
that $x_{p}(i) = T$ and $x_s(i) = \mathrm{sign}(y(i)) (\lvert y(i) \rvert - T)$. Since
in our algorithm the sign information
is encoded either in the public or in the secret
part, depending on the coefficient magnitude, it is useful to
explicitly consider sign information here.
To do so we write ${\bf x_p} = {\bf S_p} \cdot {\bf a_p}$, and ${\bf
x_s} = {\bf S_s} \cdot {\bf a_s }$, where
\camera{
${\bf a_p}$ and ${\bf a_s}$ are the elementwise absolute values of ${\bf x_p}$ and ${\bf x_s}$,
}
${\bf S_p}$ and ${\bf S_s}$ are diagonal matrices with sign information, i.e.,
${\bf S_p} = \mathrm{diag}(\mathrm{sign}({\bf x_p}))$, ${\bf S_s} = \mathrm{diag}(\mathrm{sign}({\bf x_s}))$.
Now let ${\bf w}[i] = T$ if ${\bf S_s}[i] \neq 0$,
where $i$ is a \camera{coefficient} index, so
${\bf w}$ marks the positions of the above-threshold coefficients.
The key observation is that ${\bf x_p}$ and ${\bf x_s}$ \emph{cannot
be directly added} to recover ${\bf y}$ because the sign of a
coefficient above threshold is encoded correctly {\em only} in the
secret image.
Thus, even though the public image conveys sign information for that
coefficient, it might not be correct.
As an example, let $y(i) < -T$; then $x_{p}(i) = T$
and $x_s(i) = -(\lvert y(i) \rvert - T)$, thus $x_s(i) + x_p(i) \neq y(i)$.
For coefficients below threshold\camera{,} $y(i)$ can be recovered
trivially since $x_s(i) = 0$ and $x_p(i) = y(i)$.
Note that incorrect sign in the public image occurs only for
coefficients $y(i)$ above threshold, and by definition, for all those
coefficients the public value is $x_p(i)=T$.
Note also that removing these signs increases significantly the
distortion in the public images and makes it more challenging for an
attacker to approximate the original image based on only the public
one.
In summary, the reconstruction can be written as a series of linear
operations:
\begin{eqnarray}
\label{eq:reconst_noproc}
{\bf y} &=& {\bf S_p}\cdot {\bf a_p} +
{\bf S_s}\cdot {\bf a_s} + \left( {\bf S_s} - {\bf S_s}^2
\right) \cdot {\bf w}
\end{eqnarray} where the first two terms correspond to directly adding
the corresponding blocks from the public and secret images, while the
third term is a correction factor to account for the incorrect sign of
some coefficients in the public image.
This correction factor is based on the sign of the coefficients in the
secret image and distinguishes three cases.
If $x_s(i) = 0$ or $x_s(i)> 0$ then $y(i) = x_s(i) + x_p(i) $ (no
correction), while if $x_s(i)< 0$ we have
\[
y(i) = x_s(i) + x_p(i) - 2T = x_s(i) +T -2T = x_s(i) -T .
\] Note that the operations can be very easily represented and
implemented with if/then/else conditions, but the algebraic
representation of (\ref{eq:reconst_noproc}) will be needed to
determine how to operate when the public image has been subject to
server-side processing.
In particular, from (\ref{eq:reconst_noproc}), and given that the DCT
is a linear operator, it becomes apparent that it would be possible to
reconstruct the images in the pixel domain.
That is, we could convert ${\bf S_p}\cdot {\bf a_p} $, ${\bf S_s}\cdot
{\bf a_s} $ and $\left( {\bf S_s} - {\bf S_s}^2 \right) \cdot {\bf w}$
into the pixel domain and simply add these three images pixel by
pixel.
Further note that the third image, the correction factor, does not
depend on the public image and can be completely derived from the
secret image.
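The following sketch (ours, matching the splitting sketch above) implements this per-block reconstruction rule; it round-trips exactly on integer coefficient blocks.
\begin{verbatim}
import numpy as np

def p3_combine(pub, sec, T):
    """Reconstruct y per (eq:reconst_noproc): add both parts, then
    subtract 2T wherever a secret AC coefficient is negative (there
    the public part stored +T with the wrong sign)."""
    pub, sec = np.asarray(pub, dtype=int), np.asarray(sec, dtype=int)
    neg_ac = sec < 0
    neg_ac[0] = False     # DC was moved wholesale; no correction
    return pub + sec - 2 * T * neg_ac

# Round trip: (p3_combine(*p3_split(y, T), T) == y).all() holds for
# any integer coefficient block y and threshold T.
\end{verbatim}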
We now consider the case
where the PSP applies a linear operator ${\bf A}$ to the public
part.
Many interesting image transformations such as filtering,
cropping\footnote{\camera{Cropping at 8x8 pixel boundaries is a linear
operator; cropping at arbitrary boundaries can be approximated by
cropping at the nearest 8x8 boundary.}
},
scaling (resizing), and overlapping can be expressed by linear
operators.
Thus, when the public part is requested from the PSP, ${\bf A}\cdot
{\bf S_p} \cdot {\bf a_p}$ will be received.
Then the goal is for the recipient to reconstruct ${\bf A}\cdot {\bf
y}$ given the processed public image ${\bf A} \cdot {\bf S_p}\cdot
{\bf a_p}$ and the unprocessed secret information.
Based on the reconstruction formula of (\ref{eq:reconst_noproc}), and
the linearity of ${\bf A}$, it is clear
that the desired reconstruction can be obtained as follows
\begin{equation}
\label{eq:reconst}
{\bf A} \cdot {\bf y}
=
{\bf A} \cdot {\bf S_p} \cdot {\bf a_p} + {\bf A }
\cdot {\bf S_s}\cdot {\bf a_s} + {\bf A} \cdot \left( {\bf S_s} - {\bf S_s}^2
\right) \cdot {\bf w}
\end{equation}
Moreover, since the DCT transform is also linear, these operations can
be applied directly in the pixel domain, without needing to find a
transform domain representation. As an example, if cropping is
involved, it would be enough to crop the private image and the image
obtained by applying an inverse DCT to $\left( {\bf S_s} - {\bf S_s}^2
\right) \cdot {\bf w}$.
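This commutation property is easily checked numerically; in the sketch below (ours), a random matrix ${\bf A}$ stands in for an arbitrary linear PSP transformation, and the assertion verifies (\ref{eq:reconst}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T = 15
y = rng.integers(-60, 60, size=64)       # one block of coefficients

pub, sec = y.copy(), np.zeros_like(y)    # split, as described above
pub[0], sec[0] = 0, y[0]
big = np.abs(y) > T; big[0] = False
pub[big] = T
sec[big] = np.sign(y[big]) * (np.abs(y[big]) - T)
corr = -2 * T * ((sec < 0) & (np.arange(64) > 0))  # sign correction

A = rng.standard_normal((32, 64))        # arbitrary linear transform
assert np.allclose(A @ y, A @ pub + A @ (sec + corr))
\end{verbatim}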
\camera{We have left an exploration of nonlinear operators to future work.
It may be possible to support certain types of non-linear
operations, such as pixel-wise color remapping, as found in popular
apps (e.g., Instagram).
If such an operation can be represented as a one-to-one mapping for all
legitimate values\footnote{\camera{Often, this is the case for most
color remapping operations.}}, e.g.,
0--255 RGB values, we can achieve the same level of reconstruction
quality as the linear operators: at the recipient, we can reverse the
mapping on the public part, combine this with the unprocessed
secret part, and re-apply the color mapping on the resulting image.
However, this approach can result in some loss and we have left a
quantitative exploration of this loss to future work.
}
\subsection{\camera{Algorithmic} Properties of {P3}\xspace}
\mypar{Privacy Properties.}
By encrypting significant signal information, {P3}\xspace can preserve the
privacy of images by distorting them and by foiling
detection and recognition algorithms (Section~\ref{sec:eval}).
Given only the public part, the attacker can guess the threshold $T$
by assuming it to be the most frequent non-zero value.
If this guess is correct, the attacker knows the positions of the
significant coefficients, but not the range of values of these
coefficients.
Crucially, the sign of the coefficient is also not known.
Sign information tends to be ``random'' in that positive and negative
coefficients are almost equally likely and there is very limited
correlation between signs of different coefficients, both within a
block and across blocks.
It can be shown that if the sign is unknown, and no prior information
exists that would bias our guess, it is actually best, in terms of
mean-square error (MSE), to replace the coefficient with unknown sign
in the public image by 0.\footnote{\camera{If an adversary sees $T$ in the
public part, replacing it with 0 incurs an MSE of $T^2$.
However, if we use any non-zero value as a guess, the MSE will be at least
$0.5 \times (2T)^2 = 2T^2$, because the guessed sign is wrong
with probability 0.5 and the magnitude is at least $T$.
}}
Finally, we observe that \emph{proving} the privacy properties of our
approach is challenging.
If the public part is leaked from the PSP, proving that no human can
extract visual information from the public part would require having
an accurate understanding of visual perception.
Instead, we rely on metrics
commonly used in the signal processing community
\camera{in our evaluation (Section~\ref{sec:eval})}.
We note that the prevailing methodology in the signal processing
community for evaluating the efficacy of image and video privacy is
empirical subjective evaluation using user studies, or objective
evaluation using metrics~\camera{\cite{richardson2011h}}.
In Section~\ref{sec:eval}, we resort to an objective metrics-based
evaluation, showing the performance of {P3}\xspace on several image corpora.
\mypar{Other Properties.}
{P3}\xspace satisfies the other requirements we have discussed above.
It leaves, in the clear, a JPEG-compliant image (the public part), on
which the PSP can perform transformations to save storage and
bandwidth.
The threshold $T$ permits trading off increased storage for increased
privacy; for images whose signal content is in the DC component and a
few highly-valued coefficients, the secret part can encode most of
this content, while the public part contains a significant fraction of
the volume of the image in bytes.
As we show in our evaluation later, most images are sparse and satisfy
this property.
Finally, our approach of encoding the large coefficients decreases the
entropy both in the public and secret parts, resulting in better
compressibility and only slightly increased overhead overall relative
to the unencrypted compressed image.
However, the {P3}\xspace algorithm has an interesting consequence: since the
secret part cannot be scaled (because, in general, the transformations
that a PSP performs cannot be known a priori) and must be downloaded
in its entirety, the bandwidth savings from {P3}\xspace will always be lower
than those obtained by downloading only a resized original image.
The size of the secret part is determined by $T$: higher values of $T$
result in smaller secret parts, but provide less privacy, a trade-off
we quantify in Section~\ref{sec:eval}.
\section{Conclusions}
{P3}\xspace is a privacy preserving photo sharing scheme that leverages the
sparsity and quality of images to store most of the information in an
image in a secret part, leaving most of the volume of the image in a
JPEG-compliant public part, which is uploaded to PSPs.
{P3}\xspace's public parts have very low PSNRs and are robust to edge
detection, face detection, or sift feature extraction attacks.
These benefits come at minimal costs to reconstruction accuracy,
bandwidth usage and processing overhead.
\camera{ \mypar{Acknowledgements.} We would like to thank our shepherd
Bryan Ford and the anonymous referees for their insightful
comments. This research was sponsored in part under the
U.S. National Science Foundation grant CNS-1048824. Portions of the
research in this paper use the FERET database of facial images
collected under the FERET program, sponsored by the DOD Counterdrug
Technology Development Program Office~\cite{Phillips98feret,
Phillips00feret}. }
\section{Evaluation}
\label{sec:eval}
In this section, we report on an evaluation of {P3}\xspace.
Our evaluation uses objective metrics to characterize the privacy
preservation capability of {P3}\xspace, and it also reports, using a
full-fledged implementation, on the processing overhead induced by
sender and receiver side encryption.
\subsection{Methodology}
\mypar{Metrics.}
Our first metric for {P3}\xspace performance is the \emph{storage overhead}
imposed by selective encryption.
Photo storage space is an important consideration for PSPs, and a
practical scheme for privacy preserving photo storage must not incur
large storage overheads.
\iftechrep
We then measure the efficacy of privacy preservation using PSNR (peak
signal-to-noise ratio), a metric commonly used in signal
processing. While the shortcomings of this metric in terms of quantifying
perceptual quality are well known, it does provide a simple objective
way of quantifying degradation. Note also that public images
\camera{with such highly degraded PSNR values} will be commonly
agreed to represent very poor quality.
To complement PSNR, we also present the visual representation of the
public part of an image, to let the reader judge the efficacy of
{P3}\xspace; lack of space prevents us from a more detailed exposition.
\fi
We then evaluate the efficacy of privacy preservation by measuring the
performance of state-of-the-art edge and face detection algorithms,
\camera{
the SIFT feature extraction algorithm, and a face recognition algorithm
}
on {P3}\xspace.
We conclude the evaluation of privacy by discussing the efficacy of
guessing attacks.
\iftechrep
\else
We have also used PSNR to quantify privacy~\cite{P3TR}, but have
omitted these results for brevity.
\fi
Finally, we quantify the reconstruction performance, bandwidth savings
and the processing overhead of {P3}\xspace.
\mypar{Datasets.}
We evaluate {P3}\xspace using \camera{four} image datasets.
First, as a baseline, we use the ``miscellaneous'' volume in the
USC-SIPI image dataset~\cite{USCdataset}.
This volume has 44 color and black-and-white images and contains
various objects, people, scenery, and so forth, and contains many
canonical images (including Lena) commonly used in the image
processing community.
Our second data set is from INRIA~\cite{INRIAdataset}, and contains 1491
full color images from vacation scenes including a mountain, a river,
a small town, other interesting topographies, etc.
This dataset has greater diversity than the USC-SIPI dataset
in terms of both resolutions and textures; its images vary in size up
to 5 MB, while the USC-SIPI dataset's images are all under
\camera{1 MB.}
We also use the Caltech face dataset~\cite{Caltechdataset} for our
face detection experiment.
This has 450 frontal color face images of about 27 unique faces
depicted under different circumstances (illumination, background,
facial expressions, etc.).
All images contain at least one large dominant face, and zero or more
additional faces.
\camera{Finally, the Color FERET Database~\cite{FERET} is used for
our face recognition experiment.
This dataset is specifically designed for developing, testing, and
evaluating face recognition algorithms, and contains 11,338 facial
images, using 994 subjects at various angles.}
\mypar{Implementation.}
We also report results from an implementation for
Facebook~\cite{Facebook}.
We chose the Android 4.x mobile operating system as our client
platform, since the bandwidth limitations together with the
availability of camera sensors on mobile devices motivate our work.
The \emph{mitmproxy} software tool~\cite{MITMPROXY} is used as a
trusted man-in-the-middle proxy entity in the system.
To execute a mitmproxy tool on Android, we used the
\textit{kivy/python-for-android} software~\cite{KivyP4A}.
Our algorithm described in Section~\ref{sec:approach} is implemented
based on the code maintained by the Independent JPEG Group, version
8d~\cite{IJG}.
We report on experiments conducted by running this prototype on
Samsung Galaxy S3 smartphones.
Figure~\ref{fig:screen} shows two screenshots of a Facebook page, with
two photos posted.
The one on the left is the view seen by a mobile device which has our
recipient-side decryption and reconstruction algorithm enabled.
On the right is the same page, without that algorithm (so only the
public parts of the images are visible).
\begin{figure}[t]
\centering
\includegraphics[viewport=0 150 720 540, scale=0.24, clip=true]{figures/eval_screenshot.pdf}
\caption{Screenshot (Facebook) with/without decryption}
\label{fig:screen}
\end{figure}
\subsection{Evaluation Results}
In this section, we first report on the trade-off between the
threshold parameter and storage size in {P3}\xspace.
We then evaluate various privacy metrics, and conclude with an
evaluation of reconstruction performance, bandwidth, and processing
overhead.
\subsubsection{The Threshold vs. Storage Tradeoff}
\begin{figure}[t]
\centering
\subfigure[USC-SIPI]{
\includegraphics[viewport=0 0 275 270, width=1.5in, clip=true]{figures/eval_filesize_tradeoff_USC.pdf}
\label{fig:algo_size_tradeoff_usc}
}
\hspace{-2ex}
\subfigure[INRIA]{
\includegraphics[viewport=0 0 275 270, width=1.5in, clip=true]{figures/eval_filesize_tradeoff_INRIA.pdf}
\label{fig:algo_size_tradeoff_inria}
}
\vspace{-2ex}
\caption{Threshold vs. Size \camera{(error bars=stdev)}}
\label{fig:algo_size_tradeoff}
\end{figure}
\iftechrep
\begin{figure}[t]
\centering
\subfigure[USC-SIPI]{
\includegraphics[viewport=0 0 275 205, width=1.55in, clip=true]{figures/eval_psnr_tradeoff_USC.pdf}
\label{fig:algo_psnr_usc}
}
\hspace{-3ex}
\subfigure[INRIA]{
\includegraphics[viewport=0 0 275 205, width=1.55in, clip=true]{figures/eval_psnr_tradeoff_INRIA.pdf}
\label{fig:algo_psnr_inria}
}
\vspace{-3ex}
\caption{PSNR results}
\label{fig:algo_psnr}
\end{figure}
\fi
\begin{figure}[t]
\centering
\subfigure[Public Part]{
\includegraphics[width=3.1in, clip=true]{figures/algobase_public.pdf}
\label{fig:algo_base_public_boat}
}
\subfigure[Secret Part]{
\includegraphics[width=3.1in, clip=true]{figures/algobase_secret.pdf}
\label{fig:algo_base_secret_boat}
}
\vspace{-3ex}
\caption{Baseline - Encryption Result (T: 1,5,10,15,20)}
\label{fig:algo_base}
\vspace{-3ex}
\end{figure}
In {P3}\xspace, the threshold $T$ is a tunable parameter that trades off
storage space for privacy: at higher thresholds, fewer coefficients
are in the secret part but more information is exposed in the public
part.
Figure~\ref{fig:algo_size_tradeoff} reports on the size of the public
part (a JPEG image), the secret part (an encrypted JPEG image), and
the combined size of the two parts, as a fraction of the size of the
original image, for different threshold values $T$.
One interesting feature of this figure is that, despite the
differences in size and composition of the two data sets, their size
\emph{distribution as a function of thresholds is qualitatively
similar}.
At low thresholds (near 1), the combined image sizes exceed the
original image size by about 20\%, with the public and secret parts
being each about 50\% of the total size.
While this setting provides excellent privacy, the large size of the
secret part can impact bandwidth savings; recall that, in {P3}\xspace, the
secret part has to be downloaded in its entirety even when the public
part has been resized significantly.
Thus, it is important to select a better operating point where the
size of the secret part is smaller.
Fortunately, the shape of the curve of
Figure~\ref{fig:algo_size_tradeoff} for \emph{both datasets} suggests
operating at the knee \camera{of the ``secret'' line} (at a threshold
in the range of 15--20), where the secret part is about 20\% of the original
image, and the \emph{total storage overhead is about 5--10\%}.
Figure~\ref{fig:algo_base}, which depicts the public and secret parts
(recall that the secret part is also a JPEG image) of a canonical
image from the USC-SIPI dataset, shows that for thresholds in this
range \camera{minimal} visual information is present in the public part,
with all of it being stored in the secret part.
We include these images to give readers a visual sense of the efficacy
of {P3}\xspace; we conduct more detailed privacy evaluations below.
This suggests that a threshold between 10 and 20 might provide a good
balance between privacy and storage.
We solidify this finding below.
\subsubsection{Privacy}
\iftechrep
\para{PSNR.}
One of the earliest objective metrics used for evaluating the quality of
image reconstruction is the peak signal-to-noise ratio (PSNR).
In Figure~\ref{fig:algo_psnr}, we present average PSNRs
\camera{and standard deviations of the public and secret
parts of the USC-SIPI and INRIA datasets,}
as a function of different thresholds,
when compared to the original image.
\camera{The secret parts show high PSNRs, especially
considering that 35--40\,dB is commonly regarded as perceptually
lossless in the image processing community.
Nonetheless, note that our encryption algorithm uses a single threshold
across all blocks of an image and does not consider per-block energy
distributions.
As a result, even though we obtain about 40\,dB in the secret part,
we can identify non-trivial blocking artifacts when we observe the image
closely (Figure~\ref{fig:algo_base}).
}
It is encouraging that the PSNR values
\camera{of the public part} are all around
\camera{10-15 dB,}
and that they increase only slightly with threshold.
The extraction of the DC component into the secret part plays a major
part in leading to such low PSNR values.
For the range of (low) PSNRs that we observe here (e.g., around
\camera{15} dB) it is widely accepted that quality is so degraded that
these images are practically useless.
However, this alone is not an indication that {P3}\xspace preserves privacy;
an examination of the public part of threshold 100 (not shown) reveals
some of the features in the original image.
At lower thresholds these features are no longer visible
(Figure~\ref{fig:algo_base}), but the difference in PSNR between a
threshold of 10 and 100 is negligible.
For this reason, we consider using several other metrics to quantify
the privacy obtained with {P3}\xspace.
These metrics quantify the efficacy of automated algorithms on the
public part; \emph{each automated algorithm can be considered to be
mounting a privacy attack on the public part.}
\else
In this
section, we use several metrics to quantify the privacy obtained with
{P3}\xspace.
These metrics quantify the efficacy of automated algorithms on the
public part; \emph{each automated algorithm can be considered to be
mounting a privacy attack on the public part.}
\fi
\begin{figure*}[t]
\subfigure[Edge Detection]{
\centering
\includegraphics[width=1.65in, clip=true]{figures/eval_canny_matchdist_0_100.pdf}
\label{fig:algo_canny_hamming}
}
\hspace{-3ex}
\subfigure[Face Detection]{
\centering
\includegraphics[width=1.65in, clip=true]{figures/eval_facedetect_tradeoff_CaltechFace.pdf}
\label{fig:algo_facedetect_tradeoff_caltechface}
}
\hspace{-3ex}
\subfigure[SIFT Feature]{
\centering
\includegraphics[width=1.65in, clip=true]{figures/eval_SIFT_tradeoff_USC.pdf}
\label{fig:algo_sift_tradeoff_usc}
}
\hspace{-3ex}
\subfigure[Face Recognition]{
\centering
\includegraphics[width=1.65in, clip=true]{figures/eval_facerec_FAFB_MahCosine.pdf}
\label{fig:algo_facerec_feret_fafb}
}
\caption{Privacy on Detection and Recognition Algorithms}
\end{figure*}
\para{Edge Detection.}
Edge detection is an elemental processing step in many signal
processing and machine vision applications, and attempts to discover
discontinuities in various image characteristics.
We apply the well-known Canny edge detector~\cite{CANNY} and its
implementation~\cite{CANNYimpl} to the public part of images in the
USC-SIPI dataset, and present images with the recognized edges in
Figure~\ref{fig:algo_edge_public_all}.
For space reasons, we only show edges detected on the public part of 4
canonical images, for thresholds of \camera{1 and} 20.
\camera{The images with a threshold of 20} do reveal several ``features'', and signal processing
researchers, when told that these are canonical images from a widely
used data set, can probably recognize these images.
However, a layperson who has not seen the image before very likely
will not be able to recognize any of the objects in the images (the
interested reader can browse the USC-SIPI dataset online to find the
originals).
We include these images to point out that visual privacy is a highly
subjective notion, and depends upon the beholder's prior experiences.
If true privacy is desired, end-to-end encryption must be used.
{P3}\xspace provides ``pretty good'' privacy together with the convenience
and performance offered by photo sharing services.
It is also possible to quantify the privacy offered by {P3}\xspace for edge
detection attacks.
Figure~\ref{fig:algo_canny_hamming} plots the fraction of matching
pixels in the image obtained by running edge detection on the public
part, and that obtained by running edge detection on the original
image (the result of edge detection is an image with binary pixel
values).
At threshold values below 20, \emph{barely 20\% of the pixels match}; at very
low thresholds, running edge detection on the public part results in a
picture resembling white noise, so we believe the higher matching rate
\camera{shown at low thresholds}
simply results from spurious matches.
We conclude that, for the range of parameters we consider, {P3}\xspace is
very robust to edge detection.
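For reference, this matching metric can be approximated in a few lines of OpenCV; the Canny thresholds and the exact matching rule (the fraction of original edge pixels also detected in the public part) are our assumptions, and the two images are assumed to have identical dimensions.
\begin{verbatim}
import cv2

def edge_match_fraction(orig_path, public_path, lo=100, hi=200):
    e1 = cv2.Canny(cv2.imread(orig_path, cv2.IMREAD_GRAYSCALE), lo, hi)
    e2 = cv2.Canny(cv2.imread(public_path, cv2.IMREAD_GRAYSCALE), lo, hi)
    hits = ((e1 > 0) & (e2 > 0)).sum()   # edges found in both images
    return hits / max((e1 > 0).sum(), 1)
\end{verbatim}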
\mypar{Face Detection.}
Face detection algorithms detect human faces in photos, and were
available as part of Facebook's face recognition
API, until Facebook shut down the API~\cite{FacebookFace}.
To quantify the performance of face detection on {P3}\xspace, we use the
Haar face detector from the OpenCV library~\cite{OpenCV}, and apply it
to the public part of images from Caltech's face
dataset~\cite{Caltechdataset}.
The efficacy of face detection, as a function of different thresholds,
is shown in Figure~\ref{fig:algo_facedetect_tradeoff_caltechface}.
The y-axis represents the average number of faces detected; it is
higher than 1 for the original images, because some images have more
than one face.
{P3}\xspace \emph{completely foils face detection} for thresholds below 20;
at thresholds higher than about 35, faces are occasionally detected in
some images.
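A minimal version of this attack, using OpenCV's bundled frontal-face Haar cascade with default-style detector parameters (our choices, not tuned to our experiments), is sketched below.
\begin{verbatim}
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_faces(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5)
    return len(faces)   # 0 on P3 public parts for thresholds below 20
\end{verbatim}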
\begin{figure}[t]
\centering
\subfigure[T=1]{
\includegraphics[width=3.1in, clip=true]{figures/eval_edge_public_canonical1.pdf}
}
\subfigure[T=20]{
\includegraphics[width=3.1in, clip=true]{figures/eval_edge_public_canonical4.pdf}
}
\caption{Canny Edge Detection on Public Part}
\label{fig:algo_edge_public_all}
\end{figure}
\begin{figure}
\centering
\includegraphics[viewport=15 0 400 170, width=3.2in, clip=true]{figures/eval_bandwidth_overhead_INRIA.pdf}
\vspace{-5ex}
\caption{Bandwidth Usage Cost (INRIA)}
\label{fig:bandwidth_cost}
\end{figure}
\para{SIFT feature extraction.}
SIFT~\cite{SIFT} (or Scale-invariant Feature Transform) is a general
method to detect features in images.
It is used as a pre-processing step in many image detection and
recognition applications from machine vision.
The output of these algorithms is a set of feature vectors, each of
which describes some statistically interesting aspect of the image.
We evaluate the efficacy of attacking {P3}\xspace by performing SIFT feature
extraction on the public part.
For this, we use the implementation~\cite{SIFTimpl} from the designer
of SIFT together with the default parameters for feature extraction
and feature comparison.
Figure~\ref{fig:algo_sift_tradeoff_usc} reports the results of
running feature extraction on the USC-SIPI dataset.\footnote{
The SIFT algorithm is computationally expensive, and the INRIA data
set is large, so we do not have results for the INRIA dataset.
(Recall that we need to compute results for a large number of threshold
values.) We expect the results to be qualitatively similar.
}
This figure shows two lines, one of which measures the total number of
features detected on the public part as a function of threshold.
This shows that as the threshold increases, predictably, the number of
detected features increases to match the number of features detected
in the original figure.
More interesting is the fact that, below the threshold of 10, \emph{no
SIFT features are detected}, and below a threshold of 20, only about
25\% of the features are detected.
However, this latter number is a little misleading, because we found
that, in general, SIFT detects \emph{different} feature vectors in the
public part and the original image.
If we count the number of features detected in the public part that
are less than a distance $d$ (in feature space) from the nearest
feature in the original image (indicating that, plausibly, SIFT may
have found, in the public part, a feature of the original image), we
find that this number is far smaller; up to a threshold of 35, a very
small fraction of original features are discovered, and even at the
threshold of 100, only about \camera{4\%} of the original features
have been discovered.
We use the default parameter for the distance $d$ in the SIFT
implementation; changing the parameter does not change our
conclusions.\footnote{Our results use a distance parameter of 0.6
from~\cite{SIFTimpl}; we used 0.8, the highest distance parameter
that seems to be meaningful (~\cite{SIFT}, Figure 11) and the
results are similar.
}
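The measurement can be approximated with OpenCV's SIFT implementation (\texttt{cv2.SIFT\_create}, available in recent builds) in place of the original SIFT binary; the sketch below (ours) counts public-part descriptors passing Lowe's ratio test against the original image.
\begin{verbatim}
import cv2

def matched_feature_count(orig_path, public_path, ratio=0.6):
    sift = cv2.SIFT_create()
    orig = cv2.imread(orig_path, cv2.IMREAD_GRAYSCALE)
    pub = cv2.imread(public_path, cv2.IMREAD_GRAYSCALE)
    d1 = sift.detectAndCompute(orig, None)[1]
    d2 = sift.detectAndCompute(pub, None)[1]
    if d1 is None or d2 is None:
        return 0                      # no features at low thresholds
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d2, d1, k=2)
    return sum(1 for p in knn
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)
\end{verbatim}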
\camera{
\para{Face Recognition.}
Face recognition algorithms take an aligned and normalized face image
as input and match it against a database of faces.
They return the best possible answer, e.g., the closest match or an
ordered list of matches, from the database.
We use the Eigenface~\cite{EigenFaces} algorithm and a well-known face
recognition evaluation system~\cite{CSUFaceRec} with the Color FERET
database.
On EigenFace, we apply two distance metrics, the Euclidean and the
Mahalanobis Cosine~\cite{CSUuserguide03}, for our evaluation.
We examine two settings: \textit{Normal-Public} setting considers the
case in which training is performed on normal training images in the
database and testing is executed on public parts.
The \textit{Public-Public} setting trains the database using public
parts of the training images; this setting is a stronger attack on {P3}\xspace
than \emph{Normal-Public}.
Figure~\ref{fig:algo_facerec_feret_fafb} shows a subset of our
results, based on the Mahalanobis Cosine distance metric and using the
FAFB probing set in the FERET database.
To quantify the recognition performance, we follow the methodology
proposed by \cite{Phillips00feret, Phillips98feret}.
In this graph, a data point at $(x,y)$ means that $y$\% of the time,
the correct answer is contained in the top $x$ answers returned by the
EigenFace algorithm.
In the absence of {P3}\xspace (represented by the \emph{Normal-Normal}
line), the recognition accuracy is over 80\%.
If we consider the proposed range of operating thresholds (T=1-20),
the recognition rate is below 20\% at rank 1.
Put another way, for these thresholds, more than 80\% of the time, the
face recognition algorithm provides the wrong answer (a false positive).
Moreover, our maximum threshold (T=20) shows about a 45\% rate at rank
50, meaning that less than half the time the correct answer lies in
the top 50 matches returned by the algorithm.
We also examined other settings, e.g., Euclidean distance and other
probing sets, and the results were qualitatively similar.
These recognition rates are so low that a face recognition attack on
{P3}\xspace is unlikely to succeed; even if an attacker were to apply face
recognition on {P3}\xspace, and even if the algorithm happens to be correct
20\% of the time, the attacker may not be able to distinguish between
a true positive and a false positive since the public image contains
little visual information.
}
\subsection{What is Lost?}
{P3}\xspace achieves privacy but at some cost to
reconstruction accuracy, as well as bandwidth and processing overhead.
\mypar{Reconstruction Accuracy.}
As discussed in Section~\ref{sec:approach}, the reconstruction of an
image to which a linear transformation has been applied should, in
theory, be perfect.
In practice, however, quantization effects in JPEG compression can
introduce very small errors in reconstruction.
Most images in the USC-SIPI dataset can be reconstructed, when the
transformations are known a priori, with an average PSNR of 49.2dB.
In the signal processing community, this would be considered
practically lossless.
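For calibration, recall the standard definition for 8-bit images,
\[
\mathrm{PSNR}=10\log_{10}\left(\frac{255^2}{\mathrm{MSE}}\right),
\]
where MSE is the mean squared error between original and reconstructed
pixels; a PSNR of 49.2dB thus corresponds to an average squared error
well below one gray level.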
More interesting is the efficacy of our reconstruction of Facebook and
Flickr's transformations.
In Section~\ref{sec:sysarch}, we described an exhaustive parameter
search space methodology to \emph{approximately} reverse engineer
Facebook and Flickr's transformations.
Our methodology is fairly successful, resulting in images with PSNR of
34.4dB for Facebook and 39.8dB for Flickr.
To an untrained eye, images with such PSNR values are generally
blemish-free.
Thus, using {P3}\xspace does not significantly degrade
the accuracy of the reconstructed images.
\mypar{Bandwidth usage cost.}
In {P3}\xspace, suppose a recipient downloads, from a PSP, a resized version
of an uploaded image\footnote{\camera{In our experiments, we mimic PSP
resizing using ImageMagick's convert program~\cite{ImageMagick}}}.
The total bandwidth usage for this download is the size of the resized
public part, together with the complete secret part.
Without {P3}\xspace, the recipient only downloads the resized version of the
original image.
In general, the former is larger than the latter and the difference
between the two represents the bandwidth usage cost, an important
consideration for usage-metered mobile data plans.
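Concretely (in notation we introduce here only for this discussion), if
$|\mathrm{pub}_T(r)|$ and $|\mathrm{orig}(r)|$ denote the compressed
sizes of the public part and of the original image at resolution $r$,
and $|\mathrm{sec}_T|$ the size of the full-resolution secret part, the
cost at threshold $T$ is
\[
C_T(r)=|\mathrm{pub}_T(r)|+|\mathrm{sec}_T|-|\mathrm{orig}(r)|.
\]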
This cost, as a function of the {P3}\xspace threshold, is shown in
Figure~\ref{fig:bandwidth_cost} for the INRIA dataset (the USC dataset
results are similar).
For thresholds in the 10-20 range, this cost is modest: 20KB or less
across different resolutions (these resolutions are the ones Facebook
statically resizes an uploaded image to).
As an aside, the variability in bandwidth usage cost represents an
opportunity: users who are more privacy conscious can choose lower
thresholds at the expense of slightly higher bandwidth usage.
Finally, \camera{we} observe that this additional bandwidth usage can be
reduced by trading off storage: a sender can upload multiple encrypted
secret parts, one for each known static transformation that a PSP
performs.
We have not implemented this optimization.
\mypar{Processing Costs.}
On a Galaxy S3 smartphone, for a 720x720 image (the largest resolution
served by Facebook), it takes on average $152$ ms to extract the
public and secret parts, about $55$ ms to encrypt/decrypt the secret
part, and $191$ ms to reconstruct the image.
These costs are modest, and unlikely to impact user
experience.
\section{Introduction}
\label{sec:intro}
With the advent of mobile devices with high-resolution on-board
cameras,
photo sharing has become \camera{highly} popular.
Users can share photos either through photo sharing services like
Flickr or Picasa, or popular social networking services like Facebook
or Google+.
These \emph{photo sharing service providers} (PSPs) now have a large
user base, to the point where PSP photo storage subsystems have
motivated interesting systems research~\cite{Haystack}.
However, this development has generated privacy concerns
(Section~\ref{sec:motiv}).
Private photos have been leaked from a prominent photo sharing
site~\cite{PhotobucketFusking}.
Furthermore, widespread concerns have been raised about the
application of face recognition technologies in
Facebook~\cite{FacebookFace}.
Despite these privacy threats, it is not clear that the usage of photo
sharing services will diminish in the near future.
This is because photo sharing services provide several useful
functions that, together, make for a seamless photo browsing
experience.
In addition to providing photo storage, PSPs also perform several
server-side image transformations (like cropping, resizing and color
space conversions) designed to improve user perceived latency of photo
downloads and, incidentally, bandwidth usage (an important
consideration when browsing photos on a mobile device).
In this paper, we explore the design of a privacy-preserving photo
sharing algorithm (and an associated system) that \emph{ensures photo
privacy without sacrificing the latency, storage, and bandwidth
benefits provided by PSPs}.
This paper makes two novel contributions that, to our knowledge, have
not been reported in the literature (Section~\ref{sec:related}).
\camera{First, the}
design of the {P3}\xspace algorithm (Section~\ref{sec:approach}),
which prevents leaked photos from leaking \emph{information}, and
reduces the efficacy of automated processing (e.g., face detection,
feature extraction) on photos, while still permitting a PSP to apply
image transformations. It does this by splitting a photo into a
public part, which contains most of the \emph{volume} (in bytes) of
the original, and a secret part which contains most of the
original's \emph{information}.
\camera{Second, the}
design of the {P3}\xspace system (Section~\ref{sec:sysarch}),
which requires no modification to the PSP infrastructure or
software, and no modification to existing browsers or
applications. {P3}\xspace uses interposition to transparently
encrypt images when they are uploaded from clients, and
transparently decrypt and reconstruct images on the recipient side.
Evaluations (Section~\ref{sec:eval}) on \camera{four} commonly used image data
sets, as well as micro-benchmarks on an implementation of {P3}\xspace,
reveal several interesting results.
Across these data sets, there exists a ``sweet spot'' in the
parameter space that provides good privacy while at the same time
preserving the storage, latency, and bandwidth benefits offered by
PSPs.
At this sweet spot,
\iftechrep
the public part of the image has low PSNR and
\fi
algorithms like edge detection, face detection, \camera{face recognition},
and SIFT feature extraction are completely ineffective;
\emph{no} faces can be detected \camera{and correctly recognized}
from the public part, \emph{no} correct features can be extracted, and
a very small fraction of pixels defining edges are correctly
estimated.
{P3}\xspace image encryption and decryption are fast, and it is able to
reconstruct images accurately even when the PSP's image
transformations are not publicly known.
{P3}\xspace is proof-of-concept of, \camera{and a step towards}, easily
deployable privacy preserving photo storage.
Adoption of this technology will be dictated by economic incentives:
for example, PSPs can offer privacy preserving photo storage as a
premium service offered to privacy-conscious customers.
\section{Background and Motivation}
\label{sec:motiv}
The focus of this paper is on PSPs like Facebook, Picasa, Flickr, and
Imgur, who offer either direct \emph{photo sharing} (e.g., Flickr,
Picasa) between users or have integrated photo sharing into a social
network platform (e.g., Facebook).
In this section, we describe some background before motivating
privacy-preserving photo sharing.
\subsection{Image Standards, Compression and Scalability}
Over the last two decades, several standard image formats have been
developed that enable interoperability between producers and consumers
of images.
Perhaps not surprisingly, most of the existing PSPs like Facebook,
Flickr, Picasa Web, and many websites~\cite{JPEG-Usage1,
JPEG-Usage2, JPEG-Usage3} primarily use the most prevalent
of these standards, the JPEG (Joint Photographic Experts Group) standard.
In this paper, we focus on methods to preserve the privacy of JPEG
images; support for other standards such as GIF and PNG (usually used
to represent computer-generated images like logos etc.) is left to
future work.
Beyond standardizing an image file format, JPEG performs lossy
compression of images.
A JPEG encoder consists of the following sequence of steps:
\mypar{Color Space Conversion and Downsampling.} In this step, the raw
RGB or color filter array (CFA) RGB image captured by
a digital camera is mapped to a YUV color space.
Typically, the two chrominance channels (U and V) are
represented at lower resolution than the luminance (brightness) channel (Y)
reducing the amount of pixel data to be encoded without significant
impact on perceptual quality.
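For concreteness, in the JFIF variant commonly used with JPEG, the
luminance and chrominance channels (with $C_b$, $C_r$ playing the role
of U, V) are obtained from the RGB values by the affine map
\begin{eqnarray*}
Y &=& 0.299R+0.587G+0.114B,\\
C_b &=& 128-0.1687R-0.3313G+0.5B,\\
C_r &=& 128+0.5R-0.4187G-0.0813B.
\end{eqnarray*}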
\mypar{DCT Transformation.} In the next step, the image is divided into
an array of blocks, each with $8\times 8$ pixels, and the
Discrete Cosine Transform (DCT) is applied to each block,
\camera{resulting in several \emph{DCT coefficients}. The mean value
of the pixels is called the DC coefficient. The
remaining are called AC coefficients.}
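Explicitly, for a block of pixels $p_{xy}$, $0\le x,y\le 7$, the
coefficients are
\[
X_{uv}=\frac{1}{4}C_u C_v\sum_{x=0}^{7}\sum_{y=0}^{7}
p_{xy}\cos\frac{(2x+1)u\pi}{16}\cos\frac{(2y+1)v\pi}{16},
\]
with $C_0=1/\sqrt{2}$ and $C_u=1$ for $u>0$; $X_{00}$ is the DC
coefficient.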
\mypar{Quantization.} In this step, these coefficients are quantized; this
is the only step in the processing chain where information is lost.
For typical natural images, information tends to be concentrated in
the lower frequency coefficients (which on average have larger
magnitude than higher frequency ones). For this reason, JPEG applies
different quantization steps to different frequencies. The degree
of quantization is user-controlled and can be varied in order to
achieve the desired trade-off between quality of the reconstructed
image and compression rate. We note that in practice, images shared
through PSPs tend to be uploaded with high quality (and high rate)
settings.
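In the notation of the previous step, each coefficient is divided by
the corresponding entry of a quantization table $Q$ and rounded,
\[
\hat{X}_{uv}=\mathrm{round}\left(X_{uv}/Q_{uv}\right),
\]
with larger steps $Q_{uv}$ at higher frequencies; the user-selected
quality setting scales the table.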
\mypar{Entropy Coding.} In the final step, redundancy in the quantized
coefficients is removed using variable length encoding of non-zero
quantized coefficients and of runs of zeros in between non-zero
coefficients.
\vspace{1ex}
Beyond storing JPEG images, PSPs perform several kinds of
transformations on images for various reasons.
First, when a photo is uploaded, PSPs statically resize the image to
several fixed resolutions.
For example, Facebook transforms an uploaded photo into a thumbnail, a
``small'' image (\camera{130x130}) and a ``big'' image (720x720).
These transformations have multiple uses: they can reduce
storage\footnote{We do not know if Facebook preserves the original
image, but high-end mobile devices can generate photos with
4000x4000 resolution and resizing these images to a few small fixed
resolutions can save space\camera{.}}, improve photo access latency for the
common case when users download either the big or the small image, and
also reduce bandwidth usage (an important consideration for mobile
clients).
In addition, PSPs perform \emph{dynamic} (i.e., when the image is
accessed) server-side transformations; they may resize the image to
fit screen resolution, and may also \emph{crop} the image to match the
view selected by the user.
(We have verified, by analyzing the Facebook protocol, that it
supports both of these dynamic operations).
These dynamic server-side transformations enable low latency access to
photos and reduce bandwidth usage.
Finally, in order to reduce user-perceived latency further, Facebook
also employs a special mode in the JPEG standard, called
\emph{progressive} mode.
For photos stored in this mode, the server delivers the coefficients
in increasing order (hence ``progressive'') so that the clients can
start rendering the photo on the screen as soon as the first few
coefficients are received, without having to receive all coefficients.
In general, these transformations \emph{scale} images in one fashion
or another, and are collectively called image scalability
transformations.
Image scalability is crucial for PSPs, since it helps them optimize
several aspects of their operation: it reduces photo storage, which
can be a significant issue for a popular social network
platform~\cite{Haystack}; it can reduce user-perceived latency, and
reduce bandwidth usage, hence improving user satisfaction.
\subsection{Threat Model, Goals and Assumptions}
In this paper, we focus on two specific threats to privacy that result
from uploading user images to PSPs.
The first threat is unauthorized access to photos.
A concrete instance of this threat is the practice of \emph{fusking},
which attempts to reverse-engineer PSP photo URLs in order to access
stored photos, bypassing PSP access controls.
Fusking has been applied to at least one PSP (Photobucket), resulting
in significant privacy leakage~\cite{PhotobucketFusking}.
The second threat is posed by \camera{automatic recognition} technologies,
by which PSPs may be able to infer social \camera{contexts} not explicitly
specified by users.
Facebook's deployment of face recognition technology has raised
significant privacy concerns in many countries (e.g.,~\cite{FacebookFace}).
The goal of this paper is \emph{to design and implement a system that
enables users to ensure the privacy of their photos (with respect to
the two threats listed above), while still benefiting from the image
scalability optimizations provided by the PSP.}
Implicit in this statement are several constraints, which make the
problem significantly challenging.
The resulting system must not require any software changes at the PSP,
since this is a significant barrier to deployment; an important
implication of this constraint is that the image stored on the PSP
must be JPEG-compliant.
For a similar reason, the resulting system must also be transparent to
the client.
Finally, our solution must not significantly increase storage
requirements at the PSP since, for large PSPs, photo storage is a
concern.
We make the following assumptions about trust in the various
components of the system.
We assume that all local software/hardware components on clients
(mobile devices, laptops etc.) are completely trustworthy, including
the operating system, applications and sensors.
We assume that PSPs are completely untrusted and may either by
commission or omission, breach privacy in the two ways described
above.
Furthermore, we assume eavesdroppers may attempt to snoop on the
communication between PSP and a client.
\section{Related Work}
\label{sec:related}
We do not know of prior work that has attempted to address photo
privacy for photo-sharing services.
Our work is most closely related to work in the signal processing
community on image and video privacy.
Early efforts at image privacy introduced techniques like
region-of-interest masking, blurring, or
pixellation~\cite{dufaux2010framework}.
In these approaches, typically a face or a person in an image is
represented by a blurred or pixelated version; as~\cite{dufaux2010framework}
shows, these approaches are not particularly effective against
algorithmic attacks like face recognition.
A subsequent generation of approaches attempted to ensure privacy for
surveillance by scrambling coefficients in a manner qualitatively
similar to {P3}\xspace's algorithm~\cite{dufaux2010framework,EbrahimiTalk},
\camera{e.g., some of them randomly flip the sign information.}
However, this line of work has not explored designs under the
constraints imposed by our problem, namely the need for JPEG-compliant
images at PSPs to ensure storage and bandwidth benefits, and the
associated requirement for relatively small secret parts.
This strand is part of a larger body of work on selective encryption
in the image processing community.
This research, much of it conducted in the 90s and early 2000s, was
motivated by ensuring image secrecy while reducing the computation
cost of encryption~\cite{SE-Survey-Massoudi,SE-Survey-Liu}.
This line of work has explored some of the techniques we use such as
extracting the DC components~\cite{Tang96a} and encrypting the sign of
the coefficient~\cite{Shi98a,Wu00fastencryption}, as well as
techniques we have not, such as randomly permuting the
coefficients~\cite{Tang96a,Qiao97a}.
Relative to this body of work, {P3}\xspace is novel in being a selective
encryption scheme tailored towards a novel set of requirements,
motivated by photo sharing services.
In particular, to our knowledge, prior work has not explored selective
encryption schemes which permit image reconstruction when the
unencrypted part of the image has been subjected to transformations
like resizing or cropping.
Finally, a pending patent application by one of the
co-authors~\cite{ortega2002method} of this paper, includes the idea of
separating an image into two parts, but does not propose the {P3}\xspace
algorithm, nor does it consider the reconstruction challenges
described in Section~\ref{sec:approach}.
\iftechrep
\camera{
Some recent papers have examined complementary image security and
privacy problems.
Johnson \emph{et al.}\xspace discuss homomorphic encryption based methods for
verifying image signatures when images have been subject to
transformations like cropping, scaling, and JPEG-like
compression~\cite{Johnson2012homomorphic}.
End-to-end image encryption has been explored for the JPEG 2000 image
format~\cite{Engel2009survey}, and has resulted in a standard for
JPEG 2000 imaging (JPSEC)~\cite{JPSEC}.
}
\fi
Tangentially related is a body of work in the computer systems
community on ensuring other forms of privacy:
\camera{secure distributed storage systems}~\cite{SPORC,Depot,Depsky},
and privacy and anonymity for
mobile systems~\cite{TaintDroid,VirtualTripLine,Anonysense}.
None of these techniques directly apply to our setting.
\section{{P3}\xspace: System Design}
\label{sec:sysarch}
\begin{figure}
\centering
\includegraphics[viewport=20 180 700 540, scale=0.32, clip=true]{figures/p3_sysarch.pdf}
\caption{{P3}\xspace System Architecture}
\label{fig:sysarch}
\vspace{-2ex}
\end{figure}
In this section, we describe the design of a system for
privacy-preserving photo sharing.
This system, also called {P3}\xspace, has two desirable properties described
earlier.
First, it requires no software modifications at the PSP.
Second, it requires no modifications to client-side browsers or image
management applications, and only requires a small footprint software
installation on clients.
These properties permit \camera{fairly easy} deployment of privacy-preserving
photo sharing.
\subsection{{P3}\xspace Architecture and Operation}
Before designing our system, we explored the protocols used by PSPs
for uploading and downloading photos.
Most PSPs use HTTP or HTTPS to upload photos; we have verified this
for Facebook, Picasa Web, Flickr, PhotoBucket, Smugmug, and
Imageshack.
This suggests a relatively simple interposition architecture, depicted
in Figure~\ref{fig:sysarch}.
In this architecture, browsers and applications are configured to use
a local HTTP/HTTPS \emph{proxy} and all accesses to PSPs go through
the proxy.
The proxy manipulates the data stream to achieve privacy preserving
photo storage, in a manner that is transparent both to the PSP and the
client.
In the following paragraphs, we describe the actions performed by the
proxy at the sender side and at one or more recipients.
\mypar{Sender-side Operation.}
When a sender transmits a photo taken by the built-in camera, the local
proxy acts as a middlebox and splits the uploaded image into a public
and a secret part (as discussed in Section~\ref{sec:approach}).
Since the proxy resides on the client device (and hence is within the
trust boundary per our assumptions, Section~\ref{sec:motiv}), it is
reasonable to assume that the proxy can decrypt and encrypt HTTPS
sessions in order to encrypt the photo.
We have not yet discussed how photos are encrypted; in our current
implementation, we assume the existence of a symmetric shared key
between a sender and one or more recipients.
This symmetric key is assumed to be distributed
out of band.
Ideally, it would have been preferable to store both the public and
the secret parts on the PSP.
Since the public part is a JPEG-compliant image, we explored methods
to embed the secret part within the public part.
The JPEG standard allows users to embed arbitrary application-specific
\emph{markers} with application-specific data in images; the standard
defines 16 such markers.
We attempted to use an application-specific marker to embed the secret
part; unfortunately, at least 2 PSPs (Facebook and Flickr) strip all
application-specific markers.
Our current design therefore stores the secret part on a cloud storage
provider (in our case, Dropbox).
Note that because the secret part is encrypted, we do not assume that
the storage provider is trusted.
Finally, we discuss how photos are named.
When a user uploads a photo to a PSP, that PSP may transform the photo
in ways discussed below.
Despite this, most photo-sharing services (Facebook, Picasa Web,
Flickr, Smugmug, and Imageshack\footnote{PhotoBucket does not, which
explains its vulnerability to fusking, as discussed earlier.}) assign
a unique ID for all variants of the photo.
This ID is returned to the client, as part of the
API~\cite{FacebookAPI, FlickrAPI}, when the photo is uploaded.
{P3}\xspace's sender side proxy performs the following operations on the
public and secret parts.
First, it uploads the public part to the PSP either using HTTP or
HTTPS (e.g., Facebook works only with HTTPS, but Flickr supports
HTTP).
This returns an ID, which is then used to name a file containing the
secret part.
This file is then uploaded to the storage provider.
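The sender-side flow can be summarized by the following sketch, which
is purely illustrative; \texttt{split}, \texttt{encrypt},
\texttt{psp\_upload} and \texttt{storage\_upload} are placeholders for
the mechanisms described above:
\begin{verbatim}
def send_photo(image, key):
    # split into JPEG-compliant public part and small secret part
    public, secret = split(image)
    # the PSP assigns a unique ID to (all variants of) the photo
    photo_id = psp_upload(public)
    # AES encryption with the out-of-band shared key
    blob = encrypt(secret, key)
    # name the secret part after the PSP-assigned ID (e.g., on Dropbox)
    storage_upload(name=str(photo_id), data=blob)
    return photo_id
\end{verbatim}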
\mypar{Recipient-side Operation.}
Recipients are also configured to run a local web proxy.
A client device downloads a photo from a PSP using an HTTP get
request.
The URL for the HTTP request contains the ID of the photo
being downloaded.
When the proxy sees this HTTP request, it passes \camera{the} request on to the
PSP, but also initiates a concurrent download of the secret part from
the storage provider using the ID embedded in the URL.
When both the public and secret parts have been received, the proxy
performs the decryption and reconstruction procedure discussed in
Section~\ref{sec:approach} and passes the resulting image to the
application as the response to the HTTP get request.
However, note that a secret part may be reused multiple times: for
example, a user may first view a thumbnail image and then download a
larger image.
In these scenarios, it suffices to download the secret part once so
the proxy can maintain a cache of downloaded secret parts in order to
reduce bandwidth and improve latency.
There is an interesting subtlety in the photo reconstruction process.
As discussed in Section~\ref{sec:approach}, when the server-side
transformations are known, nearly exact reconstruction is
possible\footnote{The only errors that can arise are due to storing
the correction term in Section~\ref{sec:approach} in a lossy JPEG
format that has to be decoded for processing in the pixel
domain. Even if quantization is very fine, errors may be introduced
because the DCT transform is real valued and pixel values are
integers, so the inverse transform of $\left( {\bf S_s} - {\bf S_s}^2
\right) {\bf w}$ will have to be rounded to the nearest integer
pixel value.}.
In our case, the precise transformations are not known, in general, to
the proxy, so the problem becomes more challenging.
By uploading photos, and inspecting the results, we are able to tell,
generally speaking, what kinds of transformations PSPs perform.
For instance, Facebook transforms a baseline JPEG image to a
progressive format and at the same time wipes out all irrelevant markers.
Both Facebook and Flickr statically resize the uploaded image with
different sizes; for example, Facebook generates at least three files
with different resolutions, while Flickr generates a series of
fixed-resolution images whose number depends on the size of the
uploaded image.
We cannot tell if these PSPs actually store the original images or
not, and we conjecture that the resizing serves to limit storage and
is also perhaps optimized for common case devices.
For example, the largest resolution at which Facebook stores photos is
720x720, regardless of the original resolution of the image.
In addition, Facebook can dynamically resize and crop an image; the
cropping geometry and the size specified for resizing are both
encoded in the HTTP get URL, so the proxy is able to determine those
parameters.
Furthermore, by inspecting the JPEG header, we can tell some kinds of
transformations that may have been performed: e.g., whether a baseline
image was converted to progressive or vice versa, and what sampling
factors, cropping and scaling etc. were applied.
However, some other critical image processing parameters are not
visible to the outside world.
For example, the process of resizing an image using downsampling is
often accompanied by a filtering step for antialiasing and may be
followed by a sharpening step, together with a color adjustment step
on the downsampled image.
Not knowing which of these steps have been performed, and not knowing
the parameters used in these operations, the reconstruction procedure
can result in lower quality images.
To understand what transformations have been performed, we are reduced
to searching the space of possible transformations for an outcome that
matches the output of transformations performed by the PSP\footnote{
This approach is clearly fragile, since the PSP can change the kinds
of transformations they perform on photos. Please see the discussion
below on this issue.
}.
Note that this reverse engineering need only be done when a PSP
re-jiggers its image transformation pipeline, so it should not be too
onerous.
Fortunately, for Facebook and Flickr, we were able to get reasonable
reconstruction results on both systems (Section~\ref{sec:eval}).
These reconstruction results were obtained by exhaustively searching
the parameter space with salient options based on commonly-used
resizing techniques~\cite{ResizeTechnique}.
More precisely, we select several candidate settings for colorspace
conversion, filtering, sharpening, enhancing, and gamma corrections,
and then compare the output of these with that produced by the PSP.
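In outline, the search can be organized as in the following sketch (a
simplification for illustration; \texttt{apply\_pipeline} and
\texttt{psnr} stand for the candidate transformation chain and the
quality metric):
\begin{verbatim}
import itertools

def reverse_engineer(psp_output, original, candidates):
    # candidates: dict mapping each stage ('colorspace', 'filter',
    # 'sharpen', 'gamma', ...) to a list of plausible settings
    best, best_psnr = None, float('-inf')
    for combo in itertools.product(*candidates.values()):
        params = dict(zip(candidates.keys(), combo))
        guess = apply_pipeline(original, params)
        score = psnr(guess, psp_output)
        if score > best_psnr:
            best, best_psnr = params, score
    return best, best_psnr
\end{verbatim}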
Our reconstruction results are presented in Section~\ref{sec:eval}.
\subsection{Discussion}
\label{sec:sysarch-discuss}
\mypar{Privacy Properties.}
Beyond the privacy properties of the {P3}\xspace algorithm, the {P3}\xspace system
achieves the privacy goals outlined in Section~\ref{sec:motiv}.
Since the proxy runs on the client for both sender and receiver, the
trusted computing base for {P3}\xspace includes the software and hardware
device on the client.
It may be possible to reduce the footprint of the trusted computing
base even further using a trusted platform module~\cite{TPM} and
trusted sensors~\cite{TrustedSensor}, but we have deferred that to future
work.
{P3}\xspace's privacy depends upon the strength of the symmetric key used
to encrypt in the secret part.
We assume the use of AES-based symmetric keys, distributed out of band.
Furthermore, as discussed above, in {P3}\xspace the storage provider cannot
leak photo privacy because the secret part is encrypted.
The storage provider, or for that matter the PSP, can tamper with
images and hinder reconstruction; protecting against such tampering is
beyond the scope of the paper.
For the same reason, eavesdroppers can similarly potentially tamper
with the public or the secret part, but cannot leak photo privacy.
\para{PSP Co-operation.}
The {P3}\xspace design we have described assumes no co-operation from the
PSP.
As a result, this implementation is fragile and a PSP can prevent
users from using their infrastructure to store {P3}\xspace's public parts.
For instance, they can introduce complex nonlinear transformations on
images in order to foil reconstruction.
They may also run simple algorithms to detect images where
coefficients might have been thresholded, and refuse to store such
images.
Our design is merely a proof of concept that the technology exists to
transparently protect the privacy of photos, without requiring
infrastructure changes or significant client-side modification.
Ultimately, PSPs will need to cooperate in order for photo privacy to
be possible, and this cooperation depends upon the implications of
photo sharing on their respective business models.
At one extreme, if only a relatively small fraction of a PSP's user base
uses {P3}\xspace, a PSP may choose to benevolently ignore this use (because
preventing it would require commitment of resources to reprogram their
infrastructure).
At the other end, if PSPs see a potential loss in revenue from not
being able to recognize objects/faces in photos, they may choose to
react in one of two ways: shut down {P3}\xspace, or offer photo privacy for
a fee to users.
However, the latter scenario presumes that a significant number of users
see value in photo privacy, so we believe that PSPs will be incentivized to
offer privacy-preserving storage for a fee.
In a competitive marketplace, even if one PSP were to offer
privacy-preserving storage as a service, others will likely follow
suit.
For example, Flickr already has a ``freemium'' business model and can
simply offer privacy preserving storage to its premium subscribers.
If a PSP were to offer privacy-preserving photo storage as a service,
we believe it will have incentives to use a {P3}\xspace like approach (which
permits image scaling and transformations), rather than end to end
encryption.
With {P3}\xspace, a PSP can assure its users that it is only able to see the
public part (reconstruction would still happen at the client), yet
provide (as a service) the image transformations that can reduce
user-perceived latency (which is an important consideration
for retaining users of online services~\camera{\cite{Haystack}}).
Finally, with PSP co-operation, two aspects of our {P3}\xspace design
become simpler.
First, the PSP image transformation parameters would be known, so
higher quality images would result.
Second, the secret part of the image could be embedded within the
public part, obviating the need for a separate online storage
provider.
\mypar{Extensions.}
Extending this idea to video is feasible, but left for future work.
As an initial step, it is possible to introduce the privacy preserving
techniques only to the I-frames, which are coded independently using
tools similar to those used in JPEG.
Because other frames in a ``group of pictures'' are coded using an
I-frame as a predictor, quality reductions in an I-frame propagate
through the remaining frames.
In future work, we plan to study video-specific aspects, such as how
to process motion vectors or how to enable reconstruction from a
processed version of a public video.
\section*{Introduction \label{sec:1}}
It is a long-standing problem whether and why it is sufficient to use in physics Lagrangians containing
only first order time derivatives. This is all the more intriguing since adding higher derivatives may improve
our models in some respects, like ultraviolet behaviour \cite{1b,2b} (in particular, making modified
gravity renormalizable \cite{3b} or even asymptotically free \cite{4b}); also, higher-derivative Lagrangians appear to be a useful tool to describe some interesting models like relativistic particles with rigidity, curvature and torsion \cite{19b}. Moreover, almost any effective theory
obtained by integrating out some degrees of freedom (usually, but not always, those related to high energy
excitations) of the underlying ``microscopic'' theory contains higher derivatives. One can argue that the
effective theory, being an approximation to a perfectly consistent quantum theory, need not be considered
and quantized separately. However, we are never sure if our theory is the basic or the effective one; therefore,
it is important to know whether it is at all possible to quantize the effective theory in a way which would
correctly reproduce some aspects of the microscopic one.
\par
The first step toward the quantum theory is to put its classical counterpart in Hamiltonian form. The standard
framework for dealing with higher-derivative theories on the Hamiltonian level is provided by the Ostrogradski
formalism \cite{5b}-\cite{9b}. The main disadvantage of the latter is that the Hamiltonian, being a linear function of some
momenta, is necessarily unbounded from below. In general, this cannot be cured by trying to devise an
alternative canonical formalism. In fact, any Hamiltonian is an integral of motion, while it is far from
obvious that a generic system described by a higher-derivative Lagrangian possesses globally defined
integrals of motion, except the one related to time translation invariance. Moreover, the instability of
the Ostrogradski Hamiltonian is not related to finite domains in phase space, which implies that it survives
the standard quantization procedure (i.e., it cannot be cured by the uncertainty principle).
\par
The Ostrogradski approach also has some other disadvantages. There is no straightforward transition from the
Lagrangian to the Hamiltonian formalism. In fact, the Ostrogradski approach is based on the idea that the
consecutive time derivatives of the initial coordinate(s) form new coordinates $q_i\sim q^{(i-1)}$. It appears
then that the Lagrangian cannot be viewed as a function on the tangent bundle to the coordinate manifold,
because this would lead to incorrect equations of motion. Also, the Legendre transformation to the cotangent
bundle (phase space) cannot be performed. One deals with this problem by adding Lagrange
multipliers enforcing the proper relation between the new coordinates and the time derivatives of the original
ones. This results in a further enlargement of the coordinate manifold; moreover, the theory becomes constrained
(in spite of the fact that the initial theory may be nonsingular in the Ostrogradski sense, cf. eq. (\ref{e:0}) below) and
the Hamiltonian formalism is obtained by applying the Dirac constraint theory, i.e., by reduction of the
cotangent bundle to a submanifold endowed with the symplectic structure defined by Dirac brackets.
\par
In the present paper an alternative approach is proposed. It leads directly to a Lagrangian which, being
a function on the tangent manifold, gives the correct equations of motion; no new coordinate variables
need to be added. Furthermore, for Lagrangians nonsingular in the Ostrogradski sense, the
Legendre transformation takes the standard form. Our approach is also applicable to the most interesting case of singular
Lagrangians (for example, those defining $f(R)$ gravities \cite{10b}).
\par
The paper is organized as follows. In Section \ref{sec:2} we consider nonsingular Lagrangians containing second
and third order time derivatives. Constrained theories are discussed in Section \ref{sec:3}. The general
formalism is applied to mini-superspace formulation of $f(R)$ gravity \cite{11b} in Section \ref{sec:4}. In Section \ref{sec:5},
the modifications necessary to cover the field-theoretic case
are given. In the Appendix we describe (for one degree of freedom) the generalization of our formalism
to Lagrangians containing arbitrarily high derivatives.
\section{Nonsingular Lagrangians of second and third order \label{sec:2}}
In this section we consider Lagrangians containing second and third time derivatives which are
nonsingular in the Ostrogradski sense. The Ostrogradski approach is based on the idea that the consecutive
time derivatives of the initial coordinate form new coordinates, $q_i\sim q^{(i-1)}$. However, it
has been suggested \cite{12b}-\cite{15b}, \cite{18b} that one can use every second derivative as a new variable, $q_i\sim q^{(2i-2)}$.
We generalize this idea by introducing new coordinates as some functions of the initial ones and their time derivatives.
Our paper is inspired by the results obtained in Ref. \cite{13b}.
\subsection{The case of second derivatives\label{subsec:1}}
Let us start with Lagrangians containing time derivatives up to the second order,
\begin{equation}
\label{l35}
L=L(q,\dot q,\overset{..}{q});
\end{equation}
here $q=(q^\mu)$, $\mu=1,\ldots,N$ denotes the set of generalized coordinates. The nonsingularity
condition of Ostrogradski reads
\begin{equation}
\label{e:0}
\det\left( \frac{\partial^2 L}{\partial \overset{..}{q^{\mu}}\partial \overset{..}{q^{\nu}}}\right)\neq 0.
\end{equation}
In order to put our theory in the first-order form we define new coordinates $q_1^\mu,q_2^\nu$:
\begin{equation}
q^{\mu}=q_1^{\mu},\quad \dot q^{\mu}=\dot q_1^{\mu},\quad \overset{..}{q}^{\mu}=\chi^{\mu}(q_1,\dot q_1,q_2),
\end{equation}
where $\chi^\mu$ are the functions specified below.
\par
We select an arbitrary function
\begin{equation}
F=F(q_1,\dot q_1,q_2),
\end{equation}
subjected to the single condition
\begin{equation}
\label{l36}
\det\left(\frac{\partial^2 F}{\partial\dot q_1^{\mu}\partial q_2^{\nu}}\right)\neq 0.
\end{equation}
Now, $\chi^{\mu}$ are defined as a unique (at least locally due to (\ref{e:0})) solution to the following
set of equations
\begin{equation}
\label{l40}
\frac{\partial L(q_1,\dot q_1,\chi)}{\partial \chi^{\mu}}+\frac{\partial F(q_1,\dot q_1,q_2)}{\partial \dot q_1^{\mu}}=0.
\end{equation}
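As a simple illustration, consider one degree of freedom with $L=\frac{\beta}{2}{\overset{..}{q}}^2$
and $F=\alpha\dot q_1 q_2$; then eq. (\ref{l40}) reduces to $\beta\chi+\alpha q_2=0$, i.e.
\[
\chi=-\frac{\alpha}{\beta}q_2,
\]
so the new coordinate $q_2$ simply parametrizes the acceleration.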
The new Lagrangian, which is now a standard Lagrangian of first order, is given by
\begin{eqnarray}
\label{l40b}
\mL(q_1,\dot q_1,q_2,\dot q_2)&=&L(q_1,\dot q_1,\chi (q_1,\dot q_1,q_2))+\frac{\partial F(q_1,\dot q_1,q_2)}{\partial q_1^{\mu}}\dot q_1^{\mu}\nonumber\\
& +&\frac{\partial F(q_1,\dot q_1,q_2)}{\partial q_2^{\mu}}\dot q_2^{\mu}+\frac{\partial F(q_1,\dot q_1,q_2)}{\partial\dot q_1^{\mu}}\chi^{\mu}(q_1,\dot q_1,q_2).
\end{eqnarray}
It differs from the initial one by an expression which becomes ``on-shell'' a total time derivative.
\par
The equations of motion for $q_2^\mu$ yield
\begin{equation}
\frac{\partial^2 F}{\partial\dot q_1^{\nu}\partial q_2^{\mu}}(\chi^{\nu}-{\overset{..}{q}}_1^{\nu})=0,
\end{equation}
which, by virtue of (\ref{l36}), implies
\begin{equation}
\label{l37}
\overset{..}{q}^{\mu}=\chi^{\mu}(q_1,\dot q_1,q_2).
\end{equation}
For the remaining variables $q_1^{\mu}$ one obtains
\begin{equation}
\frac{\partial L}{\partial q_1^{\mu}}-\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q_1^{\mu}}\right)+\frac{d^2}{dt^2}\left(\frac{\partial L}{\partial \chi^{\mu}}\right)=0,
\end{equation}
and taking into account (\ref{l37}) one gets the initial Euler-Lagrange equations.
\par
It is worth noticing that, contrary to the original Ostrogradski approach, the formalism presented above
leads directly to the standard picture of the Lagrangian as a function defined on the tangent bundle to
coordinate space (with no need to enlarge the latter by adding appropriate Lagrange multipliers).
\par
Our Lagrangian (\ref{l40b}) is nonsingular in the usual sense so one can directly pass to the Hamiltonian
picture by performing Legendre transformation leading to canonical dynamics on cotangent bundle.
\par
To this end we define the canonical momenta
\begin{equation}
\label{l39}
p_{1\mu}\equiv \frac{\partial \mL}{\partial\dot q_1^{\mu}}=\frac{\partial L}{\partial \dot q_1^{\mu}}+
\frac{\partial^2 F}{\partial q_1^{\nu}\partial \dot q_1^{\mu}}\dot q_1^{\nu}
+\frac{\partial^2 F}{\partial\dot q_1^{\mu}\partial \dot q_1^{\nu}} \chi^{\nu}+\frac{\partial^2 F}{\partial\dot q_1^{\mu}\partial q_2^{\nu}}\dot q_2^{\nu}+
\frac{\partial F}{\partial q_1^{\mu}},
\end{equation}
\begin{equation}
\label{l38}
p_{2\mu}\equiv\frac{\partial \mL}{\partial\dot q_2^{\mu}}=\frac{\partial F(q_1,\dot q_1,q_2)}{\partial q_2^{\mu}}.
\end{equation}
By virtue of (\ref{l36}) the second set of equations can be uniquely solved (at least locally) for $\dot q_1^{\mu}$
\begin{equation}
\dot q_1^{\mu}=\dot q_1^{\mu}(q_1,q_2,p_2).
\end{equation}
As for the first set (\ref{l39}), we note that $\dot q_2^{\mu}$ appears (linearly) only in the fourth term
on the RHS. Again, the same condition (\ref{l36}) allows us to solve (\ref{l39}) for $\dot q_2^{\mu}$,
\begin{equation}
\dot q_2^\mu=\dot q_2^\mu(q_1,q_2,p_1,p_2).
\end{equation}
\par
The Hamiltonian $H$ is computed in standard way and the final result reads
\begin{equation}
\label{l41}
H=p_{1\mu}\dot q_1^{\mu}-L-\frac{\partial F}{\partial q_1^{\mu}}\dot q_1^{\mu}-\frac{\partial F}{\partial \dot q_1^{\mu}}\chi^{\mu},
\end{equation}
where everything is expressed in terms of $q_1,q_2,p_1$ and $p_2$. We have checked, by direct calculation,
that the canonical equations following from $H$ are equivalent to the initial Lagrangian ones.
\par
There exists a canonical transformation which relates our Hamiltonian to the Ostrogradski one.
It reads
\begin{equation}\label{eq116}
\begin{array}{l}
\tilde q_1^{\mu} =q_1^{\mu},\\
\tilde q_2^{\mu}=f^{\mu}(q_1,q_2,p_2),\\
\tilde p_{1\mu}=p_{1\mu}-\frac{\partial F}{\partial q_1^{\mu}}(q_1,f,q_2),\\
\tilde p_{2\mu}=-\frac{\partial F}{\partial f^{\mu}}(q_1,f,q_2),
\end{array}
\end{equation}
where tilde refers to Ostrogradski variables and $f^{\mu}(q_1,q_2,p_2)$ solve eqs. (\ref{l38}), i.e.
$f^{\mu}=\dot q_1^{\mu}(q_1,q_2,p_2)$. The corresponding generating function $\Phi $ has the form
\begin{equation}
\Phi(q_1,\tilde p_1,q_2,\tilde q_2)=q_1^\mu\tilde p_{1\mu}+F(q_1,\tilde q_2,q_2).\label{eq117}
\end{equation}
However, it should be stressed that the Ostrogradski Hamiltonian is singular in the sense that the inverse
Legendre transformation cannot be performed (contrary to our case). This means that the structure of the
symplectic manifold (phase space) as a cotangent bundle to the coordinate manifold is not transparent if
Ostrogradski variables are used.
\par
Let us conclude this part with a very simple example. The Lagrangian
\begin{equation}
\label{l47b}
L=\lambda \epsilon _{\mu\nu}\dot q^{\mu}{\overset{..}{q}}^{\nu}+\frac{\beta}{2}({\overset{..}{q}}^{\mu})^2,\quad\beta\neq 0,
\quad \mu,\nu=1,2
\end{equation}
is nonsingular in the Ostrogradski sense provided $\beta\neq 0$. We take
\begin{equation}
F=\alpha \dot q_1^{\mu} q_2^{\mu},\quad \alpha\neq 0.
\end{equation}
Then
\begin{equation}
\chi^{\mu}=\frac{\lambda}{\beta}\epsilon_{\mu\nu}\dot q_1^{\nu}-\frac{\alpha}{\beta}q_2^{\mu},
\end{equation}
and
\begin{equation}
\mL=-\frac{\alpha^2}{2\beta}(q_2^{\mu})^2-\frac{\lambda^2}{2\beta}(\dot q_1^{\mu})^2-
\frac{\alpha\lambda}{\beta}\epsilon_{\mu\nu}\dot q_1^{\mu}q_2^{\nu}+
\alpha\dot q_1^{\mu}\dot q_2^{\mu}.
\end{equation}
Finally, the Hamiltonian reads
\begin{equation}
\label{l48b}
H=\frac{1}{\alpha}p_{1\mu}p_{2\mu}+\frac{\lambda^2}{2\alpha^2\beta}(p_{2\mu})^2+
\frac{\lambda}{\beta}\epsilon_{\mu\nu}p_{2\mu}q_2^{\nu}+\frac{\alpha^2}{2\beta}(q_2^{\mu})^2.
\end{equation}
It depends on an arbitrary parameter $\alpha$. One can pose the question whether any relevant physical quantity may depend on $\alpha$.
The answer is no: all physical quantities are $\alpha$-independent. Formally this can be shown using eqs~\eqref{eq116} and \eqref{eq117}.
Indeed, the function generating the canonical transformation to Ostrogradski variables reads
\begin{equation}\label{eq123}
\Phi(q_1^\mu,\tilde{p}_{1\mu},q_2^\mu,\tilde{q}_2^\mu)=
q_1^\mu\tilde{p}_{1\mu}+\alpha\tilde{q}_2^\mu q_2^\mu.
\end{equation}
The corresponding canonical transformation takes the form
\begin{equation}
\begin{array}{l}
p_{1\mu}=\tilde{p}_{1\mu},\\
q_1^\mu=\tilde{q}_1^\mu,\\
q_2^\mu=-\frac{1}{\alpha}\tilde{p}_{2\mu},\\
p_{2\mu}=\alpha\tilde{q}_2^\mu;
\end{array}
\end{equation}
when inserted into the Hamiltonian~\eqref{l48b} it yields the standard Ostrogradski Hamiltonian
\begin{equation}
\label{eq125}
H=\tilde{p}_{1\mu}\tilde{q}_2^\mu+\frac{1}{2\beta}(\tilde{p}_{2\mu})^2-\frac{\lambda}{\beta}\epsilon_{\mu\nu}\tilde{q}_2^\mu\tilde{p}_{2\nu}
+\frac{\lambda^2}{2\beta}(\tilde{q}_2^\mu)^2.
\end{equation}
It does not depend on $\alpha$. Therefore, the energy (the energy spectrum in quantum theory) does not depend on $\alpha$. The role of our
$\alpha$-dependent modification is to provide a formalism which yields standard Lagrangian dynamics and a regular Legendre transformation.
The above explanation is slightly formal; we shall now look at the problem of $\alpha$-dependence from a different point of view.
Let us note that the classical state of our system is uniquely determined once the values of $q(t)$, $\overset{.}{q}(t)$, $\overset{..}{q}(t)$,
$\overset{...}{q}(t)$ at some moment $t$ are given. Moreover, most physically relevant quantities are constructed via the Noether procedure
(they are either conserved or partially conserved, i.e., their time derivatives are defined by transformation properties of symmetry breaking
terms in the action). As such they are expressible in terms of $q$, $\overset{.}{q}$, $\overset{..}{q}$ and $\overset{...}{q}$.
Therefore the latter are the basic variables. One can find their quantum counterparts provided we compute the relevant Poisson brackets.
To this end we write out the canonical equations of motion following from eq.~\eqref{l48b}:
\begin{equation}
\label{eq126}
\begin{array}{l}
\overset{.}{q}_1^\mu=\frac{1}{\alpha}p_{2\mu},\\
\overset{.}{q}_2^\mu=\frac{1}{\alpha}p_{1\mu}+\frac{\lambda^2}{\alpha^2\beta}p_{2\mu}+\frac{\lambda}{\beta}\epsilon_{\mu\nu}q_2^\nu,\\
\overset{.}{p}_{1\mu}=0,\\
\overset{.}{p}_{2\mu}=\frac{\lambda}{\beta}\epsilon_{\mu\nu}p_{2\nu}-\frac{\alpha^2}{\beta}q_2^\mu.
\end{array}
\end{equation}
They lead to the following relations
\begin{equation}
\label{eq127}
\begin{array}{l}
q^\mu=q_1^\mu,\\
\overset{.}{q}^\mu=\frac{1}{\alpha}p_{2\mu},\\
\overset{..}{q}^\mu=\frac{\lambda}{\alpha\beta}\epsilon_{\mu\nu}p_{2\nu}-\frac{\alpha}{\beta}q_2^\mu,\\
\overset{...}{q}^\mu=-\frac{2\lambda^2}{\alpha\beta^2}p_{2\mu}-\frac{2\lambda\alpha}{\beta^2}\epsilon_{\mu\nu}q_2^\nu-\frac{1}{\beta}p_{1\mu}.
\end{array}
\end{equation}
One can now find the Poisson brackets among $q$, $\overset{.}{q}$, $\overset{..}{q}$ and $\overset{...}{q}$. The nonvanishing ones read
\begin{equation}
\label{eq128}
\begin{array}{lll}
\{q^\mu,\overset{...}{q}^\nu\}=-\frac{1}{\beta}\delta_{\mu\nu},&\quad &
\{\overset{.}{q}^\mu,\overset{..}{q}^\nu\}=\frac{1}{\beta}\delta_{\mu\nu},\\
\{\overset{.}{q}^\mu,\overset{...}{q}^\nu\}=-\frac{2\lambda}{\beta^2}\epsilon_{\mu\nu},&\quad &
\{\overset{..}{q}^\mu,\overset{..}{q}^\nu\}=\frac{2\lambda}{\beta^2}\epsilon_{\mu\nu},\\
\{\overset{..}{q}^\mu,\overset{...}{q}^\nu\}=\frac{4\lambda^2}{\beta^3}\delta_{\mu\nu}, &\quad &
\{\overset{...}{q}^\mu,\overset{...}{q}^\nu\}=\frac{8\lambda^3}{\beta^4}\epsilon_{\mu\nu}.
\end{array}
\end{equation}
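For instance, the second of these follows directly from (\ref{eq127}) and the canonical brackets:
\[
\{\overset{.}{q}^\mu,\overset{..}{q}^\nu\}=
\left\{\frac{1}{\alpha}p_{2\mu},\frac{\lambda}{\alpha\beta}\epsilon_{\nu\rho}p_{2\rho}-\frac{\alpha}{\beta}q_2^\nu\right\}
=\frac{1}{\beta}\delta_{\mu\nu},
\]
the bracket of the $p_2$-terms vanishing.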
Note that they are $\alpha$-independent. Upon quantizing we get four observables obeying an $\alpha$-independent algebra.
Any other observable, including the energy, can be constructed out of them, so its spectrum and other properties do not depend on $\alpha$.
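The equivalence of the canonical equations (\ref{eq126}) with the original fourth-order Euler-Lagrange
equation, $\beta q^{(4)\mu}=2\lambda\epsilon_{\mu\nu}\overset{...}{q}^\nu$ (with $q^{(4)}$ the fourth time
derivative), can also be checked symbolically; the following minimal sympy sketch (ours, purely
illustrative) applies the canonical equations repeatedly as substitution rules:
\begin{verbatim}
import sympy as sp

t, al, be, lam = sp.symbols('t alpha beta lambda', positive=True)
eps = {(0, 1): 1, (1, 0): -1, (0, 0): 0, (1, 1): 0}

q1 = [sp.Function('q1_%d' % m)(t) for m in range(2)]
q2 = [sp.Function('q2_%d' % m)(t) for m in range(2)]
p1 = [sp.Function('p1_%d' % m)(t) for m in range(2)]
p2 = [sp.Function('p2_%d' % m)(t) for m in range(2)]

# the Hamiltonian H of the example above
H = (sum(p1[m]*p2[m] for m in range(2))/al
     + lam**2*sum(x**2 for x in p2)/(2*al**2*be)
     + lam*sum(eps[m, n]*p2[m]*q2[n]
               for m in range(2) for n in range(2))/be
     + al**2*sum(x**2 for x in q2)/(2*be))

# canonical equations, used as on-shell substitution rules
rules = {}
for m in range(2):
    rules[sp.Derivative(q1[m], t)] = sp.diff(H, p1[m])
    rules[sp.Derivative(q2[m], t)] = sp.diff(H, p2[m])
    rules[sp.Derivative(p1[m], t)] = -sp.diff(H, q1[m])
    rules[sp.Derivative(p2[m], t)] = -sp.diff(H, q2[m])

def dot(e):
    # on-shell time derivative
    return sp.diff(e, t).subs(rules)

q3 = [dot(dot(dot(q1[m]))) for m in range(2)]   # third derivatives
q4 = [dot(x) for x in q3]                       # fourth derivatives
for m in range(2):
    el = be*q4[m] - 2*lam*sum(eps[m, n]*q3[n] for n in range(2))
    print(sp.simplify(el))                      # prints 0 twice
\end{verbatim}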
\subsection{The case of third derivatives\label{subsec:2}}
Let us consider a nonsingular Lagrangian of the form
\begin{equation}
\label{l48}
L=L(q,\dot q,\overset{..}{q},\overset{...}{q}).
\end{equation}
It is slightly surprising that this case (and, in general, the case when the highest time derivatives are of odd
order - see Appendix) is simpler. We define the new variables
\begin{equation}
q^{\mu}=q_1^{\mu},\quad \dot q^{\mu}=\dot q_1^{\mu},\quad \overset{..}{q}^{\mu}=q_2^{\mu},\quad {\overset{...}{q}}^{\mu}=\dot q_2^{\mu}.
\end{equation}
Next, the function $F(q_1,\dot q_1,q_2,q_3)$ is selected which obeys
\begin{equation}
\label{l49}
\det\left(\frac{\partial^2 F}{\partial\dot q_1^{\mu}\partial q_3^{\nu}}\right)\neq 0;
\end{equation}
here $q_3^\mu$ are additional variables. The modified Lagrangian reads
\begin{eqnarray}
\label{l50}
\mL(q_1,\dot q_1,q_2,\dot q_2,q_3,\dot q_3)&=&L(q_1,\dot q_1,q_2,\dot q_2)+\frac{\partial F(q_1,\dot q_1,q_2,q_3)}{\partial q_1^{\mu}}\dot q_1^{\mu}+
\frac{\partial F(q_1,\dot q_1,q_2,q_3)}{\partial q_2^{\mu}}\dot q_2^{\mu}
\nonumber\\ &+& \frac{\partial F(q_1,\dot q_1,q_2,q_3)}{\partial q_3^{\mu}}\dot q_3^{\mu}+\frac{\partial F(q_1,\dot q_1,q_2,q_3)}{\partial\dot q_1^{\mu}}q_2^{\mu}.
\end{eqnarray}
It can be easily shown that the Euler-Lagrange equations for $\mL$ yield the initial equations for the original
variable $q^\mu\equiv q_1^\mu$.
Again, as in the second-order case, the Legendre transformation can be directly performed due to the
condition (\ref{l49}). The momenta read
\begin{eqnarray}
\label{l53}
&&p_{1\mu}=\frac{\partial L}{\partial \dot q_1^{\mu}}+\frac{\partial^2 F}{\partial q_1^{\nu}\partial \dot q_1^{\mu}}\dot q_1^{\nu}
+\frac{\partial^2 F}{\partial\dot q_1^{\mu}\partial \dot q_1^{\nu}} q_2^{\nu}+\frac{\partial^2 F}{\partial\dot q_1^{\mu}\partial q_2^{\nu}}\dot q_2^{\nu}
+\frac{\partial^2 F}{\partial\dot q_1^{\mu}\partial q_3^{\nu}}\dot q_3^{\nu}+\frac{\partial F}{\partial q_1^{\mu}},
\\
&& \label{l52}
p_{2\mu}=\frac{\partial L}{\partial \dot q_2^{\mu}}+\frac{\partial F}{\partial q_2^{\mu}},
\\
&& \label{l51}
p_{3\mu}=\frac{\partial F(q_1,\dot q_1,q_2,q_3)}{\partial q_3^{\mu}}.
\end{eqnarray}
By virtue of (\ref{l49}) one can solve (\ref{l51}) for $\dot q_1^\mu$,
\begin{equation}
\dot q_1^{\mu}=\dot q_1^{\mu}(q_1,q_2,q_3,p_3).
\end{equation}
Inserting this solution into eq. (\ref{l52}) one computes
\begin{equation}
\dot q_2^{\mu}=\dot q_2^{\mu}(q_1,q_2,q_3,p_2,p_3);
\end{equation}
the solution is (at least locally) unique because $L$ is, by assumption, nonsingular in the Ostrogradski sense.
Similarly, (\ref{l53}) can be solved in terms of $\dot q_3^\mu$:
\begin{equation}
\dot q_3^{\mu}=\dot q_3^{\mu}(q_1,q_2,q_3,p_1,p_2,p_3).
\end{equation}
Finally, Hamiltonian is of the form
\begin{equation}
\label{l54}
H=p_{1\mu}\dot q_1^{\mu}+p_{2\mu}\dot q_2^{\mu}-L-\frac{\partial F}{\partial q_1^{\mu}}\dot q_1^{\mu}-\frac{\partial F}{\partial \dot q_1^{\mu}}q_2^{\mu}
-\frac{\partial F}{\partial q_2^{\mu}}\dot q_2^{\mu},
\end{equation}
where everything is expressed in terms of $q$'s and $p$'s (the terms containing $\dot q_3^\mu$ cancel).
As above, we have checked that the canonical equations of motion yield the initial equation.
The canonical transformation which relates our formalism to the Ostrogradski one reads
\begin{equation}
\label{l54b}
\begin{array}{l}
\tilde q_1^{\mu}=q_1^{\mu},\\
\tilde q_2^{\mu}=f^{\mu}(q_1,q_2,q_3,p_3),\\
\tilde q_3^{\mu}=q_2^{\mu},\\
\tilde p_{1\mu}=p_{1\mu}-\frac{\partial F}{\partial q_1^{\mu}}(q_1,f(q_1,q_2,q_3,p_3),q_2,q_3),\\
\tilde p_{2\mu}=-\frac{\partial F}{\partial f^{\mu}}(q_1,f(q_1,q_2,q_3,p_3),q_2,q_3),\\
\tilde p_{3\mu}=p_{2\mu}-\frac{\partial F}{\partial q_2^{\mu}}(q_1,f(q_1,q_2,q_3,p_3),q_2,q_3),\\
\end{array}
\end{equation}
where $f^{\mu}$ is the solution of eq. (\ref{l51}), i.e., $f^{\mu}=\dot q_1^{\mu}$.
The relevant generating function reads
\begin{equation}
\Phi(q_1,\tilde p_1,q_2,\tilde q_2,q_3,\tilde p_3)=q_1^\mu\tilde p_{1\mu}+q_2^\mu\tilde p_{3\mu}
+F(q_1,\tilde q_2,q_2,q_3).
\end{equation}
Again, the advantage of our Hamiltonian over the Ostrogradski one is that the former is nonsingular
in the sense that the inverse Legendre transformation can be performed directly.
\subsection{The second order Lagrangian once more}
By comparing Sections \ref{subsec:1} and \ref{subsec:2} we see that the modified Hamiltonian formalism
is somewhat simpler in the case of third order Lagrangians (actually, as is shown in the Appendix,
this is the case for all Lagrangians of odd order). Namely, in the latter case no counterpart of the condition
(\ref{l40}) is necessary. This will appear to play a crucial role in the case of Lagrangians singular in the
Ostrogradski sense (see Section \ref{sec:3} below). Therefore, as a preliminary step, we consider here the second
order Lagrangians as a special, singular case of third order ones. The resulting Hamiltonian formalism
is then constrained. However, with an additional assumption that the function $F$ does not depend on
$q_2^\mu$, one can perform complete reduction of phase space obtaining the structure described
in Section \ref{subsec:1}.
\par
Let
\begin{equation}
L=L(q,\dot q,\overset{..}{q}),
\end{equation}
and $F=F(q_1,\dot q_1,q_3)$ obeys (\ref{l49}). We define
\begin{eqnarray}
\mL(q_1,\dot q_1,q_2,q_3,\dot q_3)&=&L(q_1,\dot q_1,q_2)+\frac{\partial F(q_1,\dot q_1,q_3)}{\partial q_1^{\mu}}\dot q_1^{\mu}
\nonumber\\&+& \frac{\partial F(q_1,\dot q_1,q_3)}{\partial q_3^{\mu}}\dot q_3^{\mu}+\frac{\partial F(q_1,\dot q_1,q_3)}{\partial\dot q_1^{\mu}}q_2^{\mu}.
\end{eqnarray}
The relevant momenta read
\begin{eqnarray}
&&\label{l59}
p_{1\mu}=\frac{\partial L}{\partial \dot q_1^{\mu}}+\frac{\partial^2 F}{\partial q_1^{\nu}\partial \dot q_1^{\mu}}\dot q_1^{\nu}
+\frac{\partial^2 F}{\partial\dot q_1^{\mu}\partial \dot q_1^{\nu}} q_2^{\nu}
+\frac{\partial^2 F}{\partial\dot q_1^{\mu}\partial q_3^{\nu}}\dot q_3^{\nu}+\frac{\partial F}{\partial q_1^{\mu}},
\\
&& \label{l58}
p_{2\mu}=0,
\\
&&
\label{l57}
p_{3\mu}=\frac{\partial F}{\partial q_3^{\mu}}.
\end{eqnarray}
There is one set of primary constraints (\ref{l58}). On the other hand, due to the condition (\ref{l49})
$\dot q_1^\mu$ and $\dot q_3^\mu$ can be expressed in terms of $q_1,q_2,q_3,p_1,p_3$.
The Dirac Hamiltonian takes the form
\begin{equation}
\label{l60}
H=p_{1\mu}\dot q_1^{\mu}-L-\frac{\partial F}{\partial q_1^{\mu}}\dot q_1^{\mu}
-\frac{\partial F}{\partial\dot q_1^{\mu}}q_2^{\mu}+c^\mu p_{2\mu},
\end{equation}
where $c^\mu$ are Lagrange multipliers enforcing the constraints $\Phi_{1\mu}\equiv p_{2\mu}\approx 0$.
\par
The stability of primary constraints implies
\begin{equation}
\label{l67}
0\approx \dot\Phi_{1\mu}\equiv \Phi_{2\mu}=\frac{\partial L(q_1,\dot q_1(q_1,q_3,p_3),q_2)}{\partial q_2^{\mu}}+
\frac{\partial F(q_1,\dot q_1(q_1,q_3,p_3),q_3)}{\partial \dot q_1^{\mu}}.
\end{equation}
In order to check the stability of the secondary constraints $\Phi_{2\mu}$ we note that, as can be verified by direct computation,
\begin{equation}
\label{l67b}
\{\dot q_1^\mu,\dot q_1^\nu\}=0.
\end{equation}
Using (\ref{l67b}) together with
\begin{equation}
0\approx \dot\Phi_{2\mu}=\{\Phi_{2\mu},H\},
\end{equation}
we arrive at the following condition
\begin{equation}
\label{l68}
\frac{\partial^2 L}{\partial q_2^{\mu}\partial q_2^{\nu}}c^{\nu}+\frac{\partial^2 L}{\partial q_1^{\mu}\partial q_1^{\nu}}\dot q_1^{\nu}+
\frac{\partial^2 L}{\partial q_2^{\mu}\partial\dot q_1^{\nu}}q_2^{\nu}+p_{1\mu}-\frac{\partial L}{\partial \dot q_1^{\mu}}-\frac{\partial F}{\partial q_1^{\mu}}=0.
\end{equation}
The initial Lagrangian is nonsingular and eq. (\ref{l68}) can be used to determine the Lagrange multipliers
$c^\nu$ uniquely. Therefore, there are no further constraints.
\par
In order to convert our constraints into strong equations we define Dirac brackets. To this end we compute
\begin{equation}
\{\phi_{1\mu},\phi_{1\nu}\} =0,
\end{equation}
\begin{equation}
\label{l68c}
\{\phi_{1\mu},\phi_{2\nu}\} =-\frac{\partial ^2 L}{\partial q_2^{\mu}\partial q_2^{\nu}}\equiv -W_{\mu\nu}.
\end{equation}
Moreover,
\begin{equation}
\left\{ \frac{\partial L}{\partial q_{2}^{\mu}},\frac{\partial L}{\partial q_{2}^{\nu}}\right\}=0,\quad
\left\{ \frac{\partial F}{\partial \dot q_{1}^{\mu}},\frac{\partial F}{\partial \dot q_{1}^{\nu}}\right\}=0,\quad
\left\{ \frac{\partial F}{\partial \dot q_{1}^{\mu}},\frac{\partial L}{\partial q_{2}^{\nu}}\right\}=\frac{\partial^2 L}{\partial q_2^{\nu}{\partial\dot q_1^{\mu}}},
\end{equation}
which implies
\begin{equation}
\label{l72}
\{\phi_{2\mu},\phi_{2\nu}\}=\frac{\partial^2 L}{\partial \dot q_1^{\mu}\partial q_2^{\nu}}-\frac{\partial^2 L}{\partial \dot q_1^{\nu}\partial q_2^{\mu}}\equiv V_{\mu\nu}.
\end{equation}
By assumption, $W$ is a nonsingular matrix. Consequently,
\begin{equation}
C=\left(
\begin{array}{cc}
\{\phi_{1\mu},\phi_{1\nu}\} &\{\phi_{1\mu},\phi_{2\nu}\} \\
\{\phi_{2\mu},\phi_{1\nu}\} &\{\phi_{2\mu},\phi_{2\nu}\}
\end{array}
\right)=\left(
\begin{array}{cc}
0&-W\\
W&V
\end{array}\right),
\end{equation}
is also nonsingular and
\begin{equation}
C^{-1}=\left(
\begin{array}{cc}
W^{-1}VW^{-1}&W^{-1}\\
-W^{-1}&0
\end{array}\right).
\end{equation}
Dirac bracket takes the following form
\begin{eqnarray}
\label{l64b}
\{\cdot ,\cdot\}_D&=&\{\cdot,\cdot\}-\{\cdot,\phi_{1\mu}\}(W^{-1}VW^{-1})_{\mu\nu}\{\phi_{1\nu},\cdot\}\nonumber\\
& &-\{\cdot,\phi_{1\mu}\}(W^{-1})_{\mu\nu}\{\phi_{2\nu},\cdot\}+\{\cdot,\phi_{2\mu}\}(W^{-1})_{\mu\nu}\{\phi_{1\nu},\cdot\}.
\end{eqnarray}
The constraints $\Phi_{1\mu}$ depend on $p_{2\mu}$ only. We conclude from (\ref{l64b})
that the Dirac brackets for $q_1^\mu,q_3^\mu,p_{1\mu},p_{3\mu}$ take the canonical form.
Moreover, $p_{2\mu}=0$ while $q_2^\mu$ can be determined from (\ref{l67}). Note that the
solution for $q_2^\mu$, by virtue of eq. (\ref{l40}), reads
\begin{equation}
q_2^\mu=\chi^\mu(q_1,\dot q_1(q_1,q_3,p_3),q_3).
\end{equation}
So, up to the renumbering $q_2\leftrightarrow q_3$, we arrive at the same scheme as in Section \ref{subsec:1}.
\par
In order to illustrate the above approach, we use the same example as before:
\begin{equation}
L=\lambda \epsilon _{\mu\nu}\dot q^{\mu}{\overset{..}{q}}^{\nu}+\frac{\beta}{2}({\overset{..}{q}}^{\mu})^2,\quad \beta\neq 0,
\end{equation}
and
\begin{equation}
F=\alpha \dot q_1^{\mu} q_3^{\mu},\quad \alpha\neq 0.
\end{equation}
Then $H$ takes the form
\begin{equation}
\label{l65}
H=\frac{1}{\alpha}p_{1\mu}p_{3\mu}-\frac{\lambda}{\alpha}\epsilon_{\mu\nu}p_{3\mu}q_2^{\nu}-
\frac{\beta}{2}(q_2^{\mu})^2-\alpha q_2^{\mu}q_3^{\mu}+\frac{2\lambda}{\beta}
\epsilon_{\mu\nu}p_{2\mu}q_2^{\nu}-\frac{1}{\beta}p_{1\mu}p_{2\mu},
\end{equation}
while the constraints are
\begin{equation}
\phi_{1\mu}=p_{2\mu},\quad \phi_{2\mu}=-\frac{\lambda}{\alpha}\epsilon_{\mu\nu}p_{3\nu}+\beta q_2^{\mu}+\alpha q_3^{\mu},
\end{equation}
and serve to eliminate $p_{2\mu}$ and $q_2^\mu$,
\begin{equation}
p_{2\mu}=0,\quad q_2^{\mu}=\frac{\lambda}{\alpha\beta}\epsilon_{\mu\nu}p_{3\nu}-\frac{\alpha}{\beta} q_3^{\mu}.
\end{equation}
Inserting this back into the Hamiltonian we arrive at the following expression
\begin{equation}
H=\frac{1}{\alpha}p_{1\mu}p_{3\mu}+\frac{\lambda^2}{2\alpha^2\beta}(p_{3\mu})^2+
\frac{\lambda}{\beta}\epsilon_{\mu\nu}p_{3\mu}q_3^{\nu}+\frac{\alpha^2}{2\beta}(q_3^{\mu})^2
\end{equation}
which coincides with the one given by eq. (\ref{l48b}) provided the replacement $q_2\leftrightarrow q_3$, $p_2\leftrightarrow p_3$ has been made.
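This elimination can also be verified symbolically; the following short \texttt{sympy} sketch (an illustrative check, with our own variable names) substitutes $p_{2\mu}=0$ and the above solution for $q_2^\mu$ into (\ref{l65}) and confirms that the Hamiltonian above is recovered:
\begin{verbatim}
# sympy check: eliminating p_2 = 0 and q_2 from the Hamiltonian (l65)
# reproduces the reduced Hamiltonian given above (indices mu = 1,2)
import sympy as sp

lam, alpha, beta = sp.symbols('lambda alpha beta', nonzero=True)
p1 = sp.Matrix(sp.symbols('p11 p12'))
p3 = sp.Matrix(sp.symbols('p31 p32'))
q3 = sp.Matrix(sp.symbols('q31 q32'))
E = sp.Matrix([[0, 1], [-1, 0]])   # eps_{mu nu} with eps_{12} = +1

# constraint surface: p_{2 mu} = 0, q_2^mu as solved above
q2 = (lam/(alpha*beta))*E*p3 - (alpha/beta)*q3
p2 = sp.zeros(2, 1)

H = ((1/alpha)*(p1.T*p3)[0]
     - (lam/alpha)*(p3.T*E*q2)[0]  # -(lam/alpha) eps_{mu nu} p_{3mu} q_2^nu
     - (beta/2)*(q2.T*q2)[0]
     - alpha*(q2.T*q3)[0]
     + (2*lam/beta)*(p2.T*E*q2)[0]
     - (1/beta)*(p1.T*p2)[0])

H_red = ((1/alpha)*(p1.T*p3)[0]
         + lam**2/(2*alpha**2*beta)*(p3.T*p3)[0]
         + (lam/beta)*(p3.T*E*q3)[0]
         + alpha**2/(2*beta)*(q3.T*q3)[0])

assert sp.simplify(H - H_red) == 0   # the two expressions agree
\end{verbatim}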
\section{Singular Lagrangians of the second order \label{sec:3}}
In this section we consider the second order Lagrangians
\begin{equation}
L=L(q,\dot q,\overset{..}{q}),
\end{equation}
which are singular in the Ostrogradski sense, i.e.
\begin{equation}
\label{l68b}
\det(W_{\mu\nu})\equiv\det\left(\frac{\partial^2L}{\partial{\overset{..}{q}}^\mu\partial{\overset{..}{q}}^\nu}\right)=0.
\end{equation}
For the standard Ostrogradski approach to such singular Lagrangians see, for example, Refs. \cite{16b,17b}.
\par The formalism of Section \ref{subsec:1} is not directly applicable because, due to eq. (\ref{l68b}),
eqs. (\ref{l40}) cannot be solved to determine the functions $\chi^\mu$. Moreover, in this case eqs. (\ref{l40}) put
further restrictions on the form of $F$.
\par
In order to get rid of these problems we will follow the method of Section \ref{subsec:2} and consider $L$ as a third order singular Lagrangian. From this point of
view its singularity comes both from eq. (\ref{l68b}) and the fact that the third order time derivatives
are absent. Given a singular Lagrangian $L$ we select a function $F=F(q_1,\dot q_1,q_3)$ obeying
(\ref{l49}) and define
\begin{eqnarray}
\mL(q_1,\dot q_1,q_2,q_3,\dot q_3)&=&L(q_1,\dot q_1,q_2)+\frac{\partial F(q_1,\dot q_1,q_3)}{\partial q_1^{\mu}}\dot q_1^{\mu}
\nonumber\\&+& \frac{\partial F(q_1,\dot q_1,q_3)}{\partial q_3^{\mu}}\dot q_3^{\mu}+
\frac{\partial F(q_1,\dot q_1,q_3)}{\partial\dot q_1^{\mu}}q_2^{\mu}.
\end{eqnarray}
As before, the canonical momenta given by (\ref{l58}) provide the primary constraints while
(\ref{l59}) and (\ref{l57}) allow us to compute $\dot q_1^\mu$ and $\dot q_3^\mu$.
The Hamiltonian is given by eq. (\ref{l60}). The secondary constraints read again
\begin{equation}
\label{l70}
0\approx \Phi_{2\mu}=\frac{\partial L(q_1,\dot q_1(q_1,q_3,p_3),q_2)}{\partial q_2^{\mu}}+
\frac{\partial F(q_1,\dot q_1(q_1,q_3,p_3),q_3)}{\partial \dot q_1^{\mu}}.
\end{equation}
Now we have to investigate the stability of $\Phi_{2\mu}$. To this end we assume that $W$ has rank $K$, $K<N$;
this implies the existence of $J=N-K$ linearly independent null eigenvectors
$\gamma_a^\mu(q_1,\dot q_1,q_2)$, $a=1,2,\ldots,J$,
\begin{equation}
\label{l70b}
W_{\mu\nu}\gamma_a^\nu=0.
\end{equation}
Equations (\ref{l68}) do not determine uniquely the Lagrange multipliers $c^\mu$; on the contrary,
we get new constraints of the form
\begin{equation}
\label{l71}
0\approx \Phi_{3a}=\gamma_a^{\mu}\left(\frac{\partial^2 L}{\partial q_1^{\mu}\partial q_1^{\nu}}\dot q_1^{\nu}+
\frac{\partial^2 L}{\partial q_2^{\mu}\partial\dot q_1^{\nu}}q_2^{\nu}+
p_{1\mu}-\frac{\partial L}{\partial \dot q_1^{\mu}}-\frac{\partial F}{\partial q_1^{\mu}}\right);
\end{equation}
here, as previously, $\dot q_1^\mu=\dot q_1^\mu(q_1,q_3,p_3)$, so the above constraints contain
$q_1,q_2,q_3,p_1$ and $p_3$.
\par
We started with the third order formalism; therefore, our phase space is $6N$-dimensional.
As in the nonsingular case (Section \ref{sec:2}) we would like to eliminate the $q_2$'s and $p_2$'s.
The latter are equal to zero by the primary constraints $\Phi_{1\mu}$.
As far as the $q_2$'s are concerned, the situation is more involved.
\par
First, by virtue of the assumption (\ref{l68b}) about $W$ we can determine from eqs. (\ref{l70})
$K$ variables $q_2^\mu$ in terms of $q_1,p_1,q_3,p_3$ and the remaining $q_2$'s. By substituting the resulting
expressions back into eqs. (\ref{l70}) we arrive at $J$ constraints on $q_1,p_1,q_3$ and $p_3$. We denote these new
constraints by $\psi_a(q_1,q_3,p_1,p_3)$.
Let us now concentrate on the constraints (\ref{l71}). In general, they contain the $q_2^\mu$
variables and imply the constraints on $q_1,q_3,p_1,p_3$ only provided $q_2$'s enter in the combinations
which can be determined from eqs. (\ref{l70}). In order to decide whether this happens, consider the variations
$\delta q_2^\mu$ which do not change the RHS of (\ref{l70}). From the definition of $W_{\mu\nu}$
we conclude that such $\delta q_2^\mu$ are linear combinations of $\gamma_a^\mu$ (see (\ref{l70b})).
If the RHS of (\ref{l71}) are stationary under such variations $\delta q_2^\mu$, eqs. (\ref{l70}) and
(\ref{l71}) can be combined to yield the constraints which do not depend on $q_2$'s. The relevant condition
reads
\begin{equation}
\frac{\partial \Phi_{3a}}{\partial q_2^{\mu}}\gamma_{b}^{\mu}=0,\quad b=1,\ldots,J;
\end{equation}
where $a$ takes $M$ values, which without loss of generality can be chosen as $a=1,\ldots, M$.
In this way we obtain $M$ new constraints on $q_1,p_1,q_3,p_3$.
\par
One can check that
\begin{equation}
\frac{\partial \Phi_{3a}}{\partial q_2^{\mu}} \gamma^{\mu}_b=\gamma_b^{\mu}\gamma_{a}^{\nu}\left(\frac{\partial^2 L}{\partial \dot q_1^{\mu}\partial q_2^{\nu}}-
\frac{\partial^2 L}{\partial \dot q_1^{\nu}\partial q_2^{\mu}} \right).
\end{equation}
By virtue of (\ref{l72}) we find
\begin{equation}
\{\psi_{a},\psi_{b}\}\approx \gamma_a^{\mu}\gamma_b^{\nu}\{\phi_{2\mu},\phi_{2\nu}\}, \quad a=1,\ldots,M,\, b=1,\ldots,J.
\end{equation}
Let us summarize. For the nonsingular second order Lagrangian viewed as a singular third order one,
$(q_1,p_1,q_3,p_3)$ forms the reduced phase space; no further constraints exist. On the contrary, in the
singular case $q_1,p_1,q_3,p_3$ are still constrained. First, there exist $J$ constraints
$\psi_a(q_1,p_1,q_3,p_3)$; moreover, if some (say, $M$) $\psi$'s are in involution (on the constraint
surface) with all $\psi$'s there exist additional $M$ constraints following from eqs. (\ref{l70}) and (\ref{l71}).
This agrees with the conclusions of Ref. \cite{16b}.
\par
In general, for a singular Lagrangian it is not possible to determine uniquely all Lagrange multipliers $c^\mu$.
However, we are in fact interested only in the dynamical equations for $q_{1},q_3,p_{1}$ and $p_3$.
Therefore, we can use the following Hamiltonian
\begin{equation}
\label{l75}
H=p_{1\mu}\dot q_1^{\mu}-L-\frac{\partial F}{\partial q_1^{\mu}}\dot q_1^{\mu}
-\frac{\partial F}{\partial\dot q_1^{\mu}}q_2^{\mu}.
\end{equation}
On the constraint surface it does not depend on $q_2$'s,
\begin{equation}
\frac{\partial H}{\partial q_2^{\mu}}=-\frac{\partial L}{\partial q_2^{\mu}}-\frac{\partial F}{\partial \dot q_1^{\mu}}\approx 0.
\end{equation}
The existence of further secondary constraints depend on the particular form of the Lagrangian.
\par
Finally, let us note that the canonical transformation (\ref{l54b}) leads to the form of dynamics presented
in Ref. \cite{16b}. However, within our procedure the Legendre transformation from the tangent bundle
of the configuration manifold to the phase manifold is again straightforward (if one takes into account standard
modifications due to the existence of constraints).
Singular higher derivative Lagrangians were also considered in~\cite{18r}. The authors considered the physically important case of
reparametrization invariant theories (higher-derivative reparametrization invariant Lagrangians appear, for example, in the description
of radiation reaction~\cite{19r}). In their geometrical approach the image of the Legendre transformation forms a submanifold of some cotangent bundle.
This suggests that in the case of higher-derivative singular theories it is advantageous to start with enlarged phase space; this agrees
with our conclusions.
To conclude this section with a simple example, consider the following Lagrangian
\begin{equation}
\label{l75b}
L=\lambda\epsilon_{\mu\nu}\dot q^{\mu}{\overset{..}{q}}^{\nu}+\frac{\beta}{2}({\overset{..}{q}^1})^2, \quad \mu,\nu=1,2.
\end{equation}
It is singular and the matrix $W$ (eq. (\ref{l68c})) is of rank $1$ for $\beta\neq 0$ and $0$ for $\beta=0$.
We take $F$ as
\begin{equation}
F=\alpha\dot q_1^{\mu}q_3^{\mu}.
\end{equation}
Assume first $\beta\neq 0$. Then
\begin{equation}
\mL=\lambda \epsilon_{\mu\nu}\dot q_1^{\mu}q_2^{\nu}+\frac{\beta}{2}(q_2^1)^2+\alpha q_3^{\mu}q_2^{\mu}+\alpha\dot q_1^{\mu}\dot q_3^{\mu},
\end{equation}
and
\begin{equation}
\begin{array}{l}
p_{1\mu}=\lambda\epsilon_{\mu\nu}q_2^{\nu}+\alpha\dot q_3^{\mu},\\
p_{2\mu}=0,\\
p_{3\mu}=\alpha\dot q_1^{\mu}.
\end{array}
\end{equation}
The primary constraints are
\begin{equation}
\Phi_{1\mu}=p_{2\mu}\approx 0;
\end{equation}
while the Hamiltonian reads
\begin{equation}
H=\frac{1}{\alpha}p_{1\mu}p_{3\mu}-\frac{\lambda}{\alpha}\epsilon_{\mu\nu}p_{3\mu}q_{2}^{\nu}-\frac{\beta}{2}(q_2^1)^2-\alpha q_2^{\mu}q_3^{\mu}
+c^\mu p_{2\mu}.
\end{equation}
One easily derives the secondary constraints
\begin{equation}
\begin{array}{l}
0\approx\Phi_{21}=\frac{\lambda}{\alpha}p_{32}-\beta q_2^1-\alpha q_3^1,\\
0\approx\Phi_{22}=\frac{\lambda}{\alpha}p_{31}+\alpha q_3^2.
\end{array}
\end{equation}
The stability for $\Phi_{2\mu}$ yields
\begin{equation}
\label{l75c}
0 \approx \{\Phi_{21},H\}={2\lambda}q_2^2-\beta c^1-p_{11},
\end{equation}
\begin{equation}
\label{l75d}
0 \approx \{\Phi_{22},H\}=p_{12}+2\lambda q_2^1=\Phi_3.
\end{equation}
Equation (\ref{l75c}) allows us to compute $c^1$,
\begin{equation}
\label{l75e}
c^1=\frac{1}{\beta}(2\lambda q_2^2-p_{11}),
\end{equation}
while (\ref{l75d}) provides a new constraint. Its stability enforces $c^1=0$, which together with (\ref{l75e})
yields a further constraint
\begin{equation}
0\approx \Phi_4=\frac{1}{\beta}(2\lambda q_2^2-p_{11}).
\end{equation}
Finally, differentiating the above equation with respect to time we get $c^2=0$.
The resulting Hamiltonian is
\begin{equation}
H=\frac{1}{\alpha}p_{1\mu}p_{3\mu}-\frac{\lambda}{\alpha}\epsilon_{\mu\nu}p_{3\mu}q_{2}^{\nu}-
\frac{\beta}{2}(q_2^1)^2-\alpha q_2^{\mu}q_3^{\mu}.
\end{equation}
Still we have to take into account the constraints $\Phi_{2\mu},\Phi_3$ and $\Phi_4$. The latter two
can be rewritten as
\begin{equation}
\Phi_{3\mu}=p_{1\mu}-2\lambda\epsilon_{\mu\nu}q_2^{\nu}.
\end{equation}
$\Phi_{2\mu}$ and $\Phi_{3\mu}$ are now used in order to eliminate all variables except $q_1^\mu,p_{1\mu}$
and $q_3^2,p_{32}$.
The only nonstandard Dirac bracket reads
\begin{equation}
\{q_3^2,p_{32}\}_D=\frac{1}{2}.
\end{equation}
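This value can be cross-checked by brute force; the following \texttt{sympy} sketch (an illustrative check) builds the $6\times 6$ matrix of brackets of the second-class constraints $\Phi_{1\mu},\Phi_{2\mu},\Phi_{3\mu}$ listed above and evaluates the Dirac bracket directly:
\begin{verbatim}
# sympy check of the Dirac bracket {q_3^2, p_32}_D = 1/2
import sympy as sp

lam, alpha, beta = sp.symbols('lambda alpha beta', nonzero=True)
qs = sp.symbols('q11 q12 q21 q22 q31 q32')
ps = sp.symbols('p11 p12 p21 p22 p31 p32')
q11, q12, q21, q22, q31, q32 = qs
p11, p12, p21, p22, p31, p32 = ps

def pb(A, B):
    # canonical Poisson bracket on the 12-dimensional phase space
    return sum(sp.diff(A, q)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, q)
               for q, p in zip(qs, ps))

# second-class constraints Phi_{1mu}, Phi_{2mu}, Phi_{3mu} as in the text
chi = [p21, p22,
       lam/alpha*p32 - beta*q21 - alpha*q31,   # Phi_{21}
       lam/alpha*p31 + alpha*q32,              # Phi_{22}
       p11 - 2*lam*q22,                        # Phi_{31}
       p12 + 2*lam*q21]                        # Phi_{32}

C = sp.Matrix(6, 6, lambda a, b: pb(chi[a], chi[b]))
Cinv = C.inv()

def dirac(A, B):
    return sp.simplify(pb(A, B)
        - sum(pb(A, chi[a])*Cinv[a, b]*pb(chi[b], B)
              for a in range(6) for b in range(6)))

assert dirac(q32, p32) == sp.Rational(1, 2)  # the nonstandard bracket
assert dirac(q11, p11) == 1                  # (q_1, p_1) stays canonical
\end{verbatim}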
The Hamiltonian, when expressed in terms of unconstrained variables, takes the form
\begin{equation}
H=\frac{1}{\alpha}p_{12}p_{32}-\frac{\alpha}{\lambda}p_{11}q_3^2+\frac{\beta}{8\lambda^2}(p_{12})^2.
\end{equation}
Let us note that the limit $\beta\rightarrow 0$ is smooth. Of course, we could put $\beta=0$ from the very
beginning and arrive at the same conclusion.
\section{An example: mini-superspace formulation of $f(R)$ gravity \label{sec:4}}
As a more elaborate, but still toy, example we consider the mini-superspace Hamiltonian formulation of $f(R)$ gravity \cite{11b}.
We consider the following (FLRW-type) metric
\begin{equation}
ds^2=-N^2dt^2+a^2d{\vec x}^2.
\end{equation}
Under such reduction the Lagrangian of $f(R)$ gravity takes the form
\begin{equation}
L(a,N)=\frac{1}{2}Na^3f(R),
\end{equation}
where the curvature is given by
\begin{equation}
R=6{\left(\frac{\dot a}{Na}\right)}^.+12{\left(\frac{\dot a}{Na}\right)}^2.
\end{equation}
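Writing out the total time derivative explicitly,
\begin{equation}
R=\frac{6\overset{..}{a}}{Na}-\frac{6\dot a\dot N}{N^2a}-\frac{6{\dot a}^2}{Na^2}+\frac{12{\dot a}^2}{N^2a^2},
\end{equation}
one sees that $R$ is linear in $\overset{..}{a}$.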
We see that $L$ depends on second time derivatives. We proceed along the lines described in Section \ref{sec:2}.
The basic dynamical variables are chosen as follows
\begin{equation}
a_1=a,\;\dot a_1=\dot a,\; N_1=N,\; \dot N_1=\dot N,\; a_2=R,
\end{equation}
while
\begin{equation}
\overset{..}{a}=\chi(a_1,\dot a_1,N_1,\dot N_1,a_2),
\end{equation}
is determined by eq. (\ref{l40}) once appropriate $F$ is selected. We take
\begin{equation}
\label{l76}
F=-3a^2_1f'(a_2)\dot a_1;
\end{equation}
under the assumption $f''\neq 0$, eqs. (\ref{l76}) and (\ref{l40}) yield
\begin{equation}
\label{l77}
a_2=R.
\end{equation}
Solving (\ref{l77}) with respect to $\overset{..}{a}$ we find
\begin{equation}
\overset{..}{a}=\frac{a_1N_1}{6}\left(R-\frac{6}{N_1^2a_1^2}((2-N_1){\dot a}_1^2-a_1{\dot a}_1{\dot N}_1)\right).
\end{equation}
The modified Lagrangian reads
\begin{eqnarray}
\mL&=&\frac{1}{2}a_1^3N_1f(a_2)+f'(a_2)\left(-9a_1{\dot a}_1^2+\frac{6 a_1{\dot a_1}^2}{N_1}-
\frac{1}{2}a_1^3N_1a_2-\frac{3a_1^2{\dot a}_1{\dot N}_1}{N_1}\right)\nonumber\\
& -&3f''(a_2)a_1^2{\dot a}_1{\dot a}_2.
\end{eqnarray}
It is straightforward to check that $\mL$ leads to the correct equations of motion. In order to simplify our
considerations we introduce a new variable
\begin{equation}
n_1=N_1f'(a_2).
\end{equation}
In terms of the new variable, $\mL$ reads
\begin{eqnarray}
\mL&=&\frac{1}{2}a_1^3n_1\frac{f(a_2)}{f'(a_2)}-9a_1{\dot a}_1^2f'(a_2)+\frac{6 a_1{\dot a_1}^2{f'}^2(a_2)}{n_1}
\nonumber\\
& -& \frac{1}{2}a_1^3n_1a_2-\frac{3a_1^2{\dot a}_1{\dot n}_1f'(a_2)}{n_1}.
\end{eqnarray}
Now, we compute the canonical momenta:
\begin{eqnarray}
&&p_1\equiv \frac{\partial \mL}{\partial{\dot n}_1}=-\frac{3a_1^2}{n_1}f'(a_2)\dot a_1 \label{l78}, \\
&&\pi_1\equiv \frac{\partial \mL}{\partial{\dot a}_1}=-18a_1{\dot a}_1f'(a_2)+\frac{12a_1{\dot a}_1{f'}^2(a_2)}{n_1}
-3\frac{a_1^2}{n_1}f'(a_2){\dot n}_1 \label{l79},\\
&&\pi_2\equiv \frac{\partial \mL}{\partial{\dot a}_2}=0.
\end{eqnarray}
One can solve (\ref{l78}) and (\ref{l79}) for $\dot a_1$ and $\dot n_1$. We form the Hamiltonian
\begin{eqnarray}
H&=&-\frac{n_1p_1\pi_1}{3a_1^2f'(a_2)}-\frac{n_1a_1^3f(a_2)}{2f'(a_2)}+\frac{n_1^2p_1^2}{a_1^3f'(a_2)}-
\frac{2n_1p_1^2}{3a_1^3}\nonumber\\
&+&\frac{1}{2}a_1^3a_2n_1+\mu\pi_2\equiv \tilde H+\mu \pi_2.
\end{eqnarray}
Now, we investigate the stability of the constraint $\Phi_1\equiv \pi_2$
\begin{equation}
0\approx \dot \Phi_1=\{\Phi_1,H\}=\frac{f''(a_2)}{f'(a_2)}\left(\tilde H+\frac{2n_1p_1^2}{3a_1^3}-
\frac{a_1^3a_2n_1}{2}\right)\equiv\Phi_2.
\end{equation}
The stability condition for $\Phi_2$ determines $\mu$; an explicit expression for $\mu$
is irrelevant for what follows. In fact, $(\Phi_1,\Phi_2)$ are second class constraints
\begin{equation}
\{\Phi_1,\Phi_2\}\approx \frac{f''(a_2)a_1^3N_1}{2f'(a_2)}.
\end{equation}
Thus, the constraints can be solved provided we use Dirac brackets. In particular, the Hamiltonian takes a
simple form
\begin{equation}
\label{l80}
H=\tilde H=\frac{1}{2}a_1^3a_2n_1-\frac{2}{3}\frac{n_1p_1^2}{a_1^3},
\end{equation}
where
\begin{equation}
\label{l81}
a_2=f^{-1}\left(-\frac{2p_1\pi_1}{3a_1^5}+\frac{2n_1p_1^2}{a_1^6}\right).
\end{equation}
Moreover, the Dirac brackets for the variables $a_1,n_1,\pi_1,p_1$ remain canonical. Therefore, eqs. (\ref{l80})
and (\ref{l81}) give the complete Hamiltonian description. We have checked explicitly that it leads to the correct
equations of motion. In the case under consideration our formalism, when compared with the Ostrogradski version,
seems to be more complicated. However, it has the advantage that the curvature $R$ is one of the basic variables.
\section{Field theory \label{sec:5}}
Our formalism has a straightforward generalization to the field theory case.
For simplicity, we consider only the Lagrangian densities depending on first and second derivatives. Such a density can be written in the form
\begin{equation}
\mL=\mL(\Phi,\partial_k\Phi,\partial_k\partial_l\Phi,\dot \Phi,\partial_k\dot\Phi,\overset{..}{\Phi}).
\end{equation}
Again, we put $\Phi=\Phi_1$ and select a function $F=F(\Phi_1,\dot \Phi_1,\Phi_2)$ obeying
\begin{equation}
\label{fe:1}
\frac{\partial^2F}{\partial\dot\Phi_1\partial\Phi_2}\neq 0;
\end{equation}
in the case of a multicomponent field the relevant matrix should be nonsingular.
We define, as previously, the function
\begin{equation}
\chi=\chi(\Phi_1,\partial_k\Phi_1,\partial_k\partial_l\Phi_1,\dot\Phi_1,\partial_k\dot\Phi_1,\Phi_2),
\end{equation}
as the (locally unique by virtue of (\ref{fe:1})) solution to the equation
\begin{equation}
\frac{\partial\mL(\Phi_1,\partial_k\Phi_1,\partial_k\partial_l\Phi_1,\dot\Phi_1,\partial_k\dot\Phi_1,\chi)}
{\partial\chi}+\frac{\partial F(\Phi_1,\dot \Phi_1,\Phi_2)}{\partial\dot \Phi_1}=0.
\end{equation}
Finally, the new Lagrangian density reads
\begin{align}
\tilde\mL&=\mL(\Phi_1,\partial_k\Phi_1,\partial_k\partial_l\Phi_1,\dot\Phi_1,\partial_k\dot\Phi_1,\chi(\ldots))+
\frac{\partial F(\Phi_1,\dot\Phi_1,\Phi_2)}{\partial \Phi_1}\dot\Phi_1\nonumber\\
&+\frac{\partial F(\Phi_1,\dot\Phi_1,\Phi_2)}{\partial \Phi_2}\dot\Phi_2+
\frac{\partial F(\Phi_1,\dot\Phi_1,\Phi_2)}{\partial\dot \Phi_1}\chi(\ldots).
\end{align}
It is now straightforward to check that the Lagrange equations
\begin{equation}
\frac{\partial \tilde L}{\partial \Phi_i}-\partial_k\frac{\partial\tilde \mL}{\partial(\partial_k\Phi_i)}+
\partial_k\partial_l\frac{\partial\tilde\mL}{\partial(\partial_k\partial_l\Phi_i)}-\frac{d}{dt}\left(\frac{\partial\tilde\mL}{\partial\dot\Phi_i}
-\partial_k\frac{\partial\tilde\mL}{\partial(\partial_k\dot\Phi_i)}\right)=0,
\end{equation}
yield the initial equation for the original variable $\Phi\equiv\Phi_1$; as in Section \ref{subsec:1}, $\overset{..}{\Phi}=\chi(\ldots)$.
One can now perform the Legendre transformation. The canonical momenta read
\begin{equation}
\label{fe:2}
\Pi_i(x)=\frac{\delta \tilde L}{\delta\dot \Phi_i(x)},\quad \tilde L\equiv\int d^3 x\,\tilde\mL.
\end{equation}
Equations (\ref{fe:2}) can be solved (due to (\ref{fe:1})) with respect to $\dot\Phi_i$:
\begin{align}
&\dot\Phi_1=\dot\Phi_1(\Phi_1,\Phi_2,\Pi_2)\\
&\dot\Phi_2=\dot\Phi_2(\Phi_1,\partial_k\Phi_1,\partial_k\partial_l\Phi_1,\partial_k\partial_l\partial_m\Phi_1,\Phi_2,\partial_k\Phi_2,\partial_k\partial_l\Phi_2,
\Pi_1,\Pi_2,\partial_k\Pi_2,\partial_k\partial_l\Pi_2).
\end{align}
$H$ is defined in a standard way
\begin{equation}
H=\int d^3x(\Pi_1(x)\dot\Phi_1(x)+\Pi_2(x)\dot\Phi_2(x))-\tilde L,
\end{equation}
and leads to the correct canonical equations of motion.
\section{Introduction}
The gauge/gravity duality \cite{ads/cft,gkp,w}, one of the most fruitful ideas stemming from string theory, has proved to be a powerful tool for studying strongly coupled systems in field theory. By using a dual classical gravity description, we can effectively calculate correlation functions in a strongly interacting field theory.
Recently, a superconducting phase was established with the help of black hole physics in higher dimensional spacetime\cite{gub1,gub2,3h,horowitz}.
Relying on numerical calculations, the critical temperature was computed with and without the backreaction under various conditions \cite{horowitz2,gre,wang1,wang2,wang3,wang4,siani,wang5,cai,ling,kuang,hartman,barc,kanno,jing,chen,amm1}.
The behavior of holographic superconductors in the presence of an external magnetic field has been widely studied in the probe limit\cite{aj,wk,amm,john,mon,ns,silva,gw,wu,queen,mann}.
Analytical calculations are useful for gaining insight into the strongly interacting system: even an approximate analytical solution usually provides valuable insight.
As an analytical approach for deriving the upper critical magnetic field, an expression was found in the probe limit by extending the matching method first proposed in \cite{gre} to the magnetic case \cite{gw},
which is shown to be consistent with the Ginzburg-Landau theory.
Most previous studies on holographic superconductors focus on the probe limit, neglecting the backreaction of the matter fields on the spacetime.
The probe limit corresponds to the case where the electric charge $q\rightarrow \infty$ or the Newton constant approaches zero.
Away from the probe limit, the backreaction on the spacetime becomes important. At a lower temperature, the black hole becomes hairy and the phase diagram might be modified. Recently, an analytical calculation of the critical temperature of Gauss-Bonnet holographic superconductors with backreaction was presented in \cite{kanno} and confirmed the numerical results that
backreaction makes condensation harder \cite{hartman,barc,wang4,siani}.
Considering the above facts, it would be of great interest to explore the behavior of the upper critical magnetic field for holographic superconductors in the presence of the backreaction.
In this work, we will first consider the effect of the spacetime backreaction on $s$-wave holographic superconductors without the magnetic field. Different from the probe limit case, the backreaction of the spacetime actually leads to a charged black hole solution in AdS space at the leading order. We will compute the critical temperature analytically by using this charged black hole metric through the matching method. Second, we will study the properties of holographic superconductors in the presence of an external magnetic field. When we turn on the external magnetic field, the resulting background geometry becomes the dyonic black hole solution in AdS space at the zeroth order. The analytical investigation of the effect of the spacetime backreaction on the upper critical magnetic field has not been carried out so far. Therefore, the contents of this paper differ greatly from the probe limit case, as we consider the spacetime backreaction. Note that in both cases, the small backreaction approximation shall be used to obtain an analytical result. In order to compare the analytic study with numerical results, we will also carry out numerical computations.
The organization of the paper is as follows: We first consider the effects of backreaction on $2+1$-dimensional $s$-wave holographic superconductors in section 2. Without the magnetic field, the critical temperature with backreaction will be derived first. Then we continue the calculation to the strong external magnetic field case and find an analytical expression for the backreaction on the upper critical magnetic field.
{From the Einstein equation, we know that the presence of charge and magnetism in $4$-dimensional spacetime yields a dyonic black hole solution. The critical temperature may be influenced by the backreaction of the magnetic field. We will compare the analytic and numerical results.
The conclusions will be presented in the last section.}
\section{$(2+1)$-dimensional $s$-wave holographic superconductors}
{In this section, we first investigate the backreaction of the electric field on superconductivity and derive the phase transition temperature $T_c$ in this case. After that, we turn to the backreaction of the external magnetic field and calculate the critical magnetic field.
}
\subsection{Critical temperature with backreaction in the absence of magnetic fields}
We begin with a charged complex scalar field coupled to the 4-dimensional
Einstein-Maxwell action with a negative cosmological constant
\begin{equation}\label{action}
S=\frac{1}{16\pi G_{4}}\int d^4
x\sqrt{-g}\bigg\{R-2\Lambda-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
-|\partial_{\mu}\psi-iqA_{\mu}\psi|^2-m^2|\psi|^2\bigg\},
\end{equation}
where $G_{4}$ is the 4-dimensional Newton constant, the cosmological
constant $\Lambda=-3/l^2$ and
$F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$.
The hairy black hole solution is assumed to take the following metric ansatz
\begin{equation}
ds^2=-f(r)e^{-\chi(r)}dt^2+\frac{dr^2}{f(r)}+\frac{r^2}{l^2}(dx^2+dy^2),
\end{equation}
together with
\begin{equation}
A_{\mu}=(\phi(r),0,0,0),~~~\psi=\psi(r).
\end{equation}
The Hawking temperature, which will be interpreted as the temperature of the holographic superconductors, is given by
\begin{equation}
T=\frac{1}{4\pi}f'(r)e^{-\chi(r)/2}\bigg|_{r=r_{+}},
\end{equation}
where a prime denotes a derivative with respect to $r$ and $r_{+}$ is the black hole horizon defined by $f(r_{+})=0$. $\psi(r)$ can be taken to be real by using a $U(1)$ transformation. The gauge and scalar equations become
\begin{eqnarray}
&&\phi''+(\frac{\chi'}{2}+\frac{2}{r})\phi'-\frac{2q^2\psi^2}{f}\phi=0,\\
&&\psi''+(\frac{f'}{f}-\frac{\chi'}{2}+\frac{2}{r})\psi'+\bigg(\frac{q^2\phi^2 e^{\chi}}{f^2}-\frac{m^2}{f}\bigg)\psi=0.
\end{eqnarray}
The $tt$ and $rr$ components of the background Einstein equations yield
\begin{eqnarray}
&&f'+\frac{f}{r}-\frac{3r}{l^2}+\kappa^2 r\bigg[\frac{e^{\chi}}{2}\phi'^2+m^2\psi^2+f\bigg(\psi'^2+\frac{q^2\phi^2\psi^2e^{\chi}}{f^2}\bigg)\bigg]=0,\\
&&\chi'+2\kappa^2r\bigg(\psi'^2+\frac{q^2\phi^2\psi^2e^{\chi}}{f^2}\bigg)=0.
\end{eqnarray}
When the Hawking temperature is above the critical temperature, $T>T_c$, the solution is the well-known AdS-Reissner-Nordstr\"{o}m (RNAdS) black hole
\begin{equation}
f=\frac{r^2}{l^2}-\frac{1}{r}\bigg(\frac{r^3_{+}}{l^2}+\frac{\kappa^2\rho^2}{2 r_{+}}\bigg)+\frac{\kappa^2\rho^2}{2r^2}, ~~~\chi=\psi=0,~~~\phi=\rho\bigg(\frac{1}{r_{+}}-\frac{1}{r}\bigg),
\end{equation}
where $\kappa^2=8\pi G_4$. {At the critical temperature $T=T_c$, the coupling of the scalar to the gauge field induces an effective negative
mass term for the scalar field; the RNAdS solution thus becomes unstable against perturbations of the scalar field.}
At the asymptotic AdS boundary ($r\rightarrow \infty$), the scalar and the Maxwell fields behave as
\begin{equation}
\psi=\frac{<\mathcal{O}_{\Delta_{-}}>}{r^{\Delta_{-}}}+\frac{<\mathcal{O}_{\Delta_{+}}>}{r^{\Delta_{+}}}, ~~~~~\phi=\mu-\frac{\rho}{r}+...
\end{equation}
where $\mu$ and $\rho$ are interpreted as the chemical potential and charge density of the dual field theory on the boundary. According to the gauge/gravity duality,
$<\mathcal{O}_{\Delta_{\pm}}>$ represents the expectation value of the operator $\mathcal{O}_{\Delta_{\pm}}$ dual to the charged scalar field $\psi$.
The exponent $\Delta_{\pm}$ is determined by the mass as $\Delta_{\pm}=\frac{3}{2}\pm \frac{1}{2}\sqrt{9+4m^2}$. Note that for $\psi$ both of the falloffs are normalizable and we choose the boundary condition that
either $<\mathcal{O}_{\Delta_{-}}>$ or $<\mathcal{O}_{\Delta_{+}}>$ is vanishing. We will impose that $\rho$ is fixed and take $<\mathcal{O}_{\Delta_{-}}>=0$ as in \cite{horowitz}. Moreover, we will consider the values of $m^2$ which must satisfy the Breitenlohner-Freedman (BF) bound $m_{BF}^2 \leq m^2 < m_{BF}^2 + 1$ with $m_{BF}^2 = - (d-1) / 4$ \cite{BF} for the dimensionality of the spacetime $d=4$ in the following analysis.
After introducing the new coordinate $z=\frac{r_{+}}{r}$, the equations of motion become
\begin{eqnarray}
&&-f'+\frac{f}{z}-\frac{3r^2_{+}}{l^2z^3}+\kappa^2\frac{r^2_{+}}{z^3}\bigg[\frac{z^4e^{\chi}}{2r^2_{+}}\phi'^2+m^2\psi^2
+f(\frac{z^4}{r^2_{+}}\psi'^2+\frac{q^2\phi^2\psi^2e^{\chi}}{f^2})\bigg]=0,\label{m1}\\
&&-\chi'+2\kappa^2\frac{r^2_{+}}{z^3}\bigg[\frac{z^4}{r^2_{+}}\psi'^2+\frac{q^2\phi^2\psi^2e^{\chi}}{f^2}\bigg]=0,\label{m2}\\
&&\phi''+\frac{1}{2}\chi'\phi'-\frac{2q^2\psi^2r^2_{+}}{z^4f}\phi=0,\label{at}\\
&&\psi''-\bigg(\chi'-\frac{f'}{f}\bigg)\psi'+\frac{r^2_{+}}{z^4}\bigg(\frac{q^2\phi^2 e^{\chi}}{f^2}-\frac{m^2}{f}\bigg)\psi=0, \label{p1}
\end{eqnarray}
where the prime $'$ denotes a derivative with respect to $z$. One may find that the transformation $\phi\rightarrow \phi/q$ and $\psi\rightarrow \psi/q$ does not change the form of the Maxwell and the scalar equations, but the gravitational coupling of the Einstein equation changes as $\kappa^2\rightarrow \kappa^2/q^2$. The probe limit studied in \cite{horowitz} corresponds to the limit $q\rightarrow \infty$, in which the matter sources drop out of the Einstein equations. The hairy black hole solution requires going beyond the probe limit. In \cite{horowitz}, it was suggested to take finite $q$ by setting $2\kappa^2=1$. Recently, the author of \cite{kanno} proposed to keep
$2\kappa^2$ finite while setting $q=1$ instead. We will take the latter choice.
{In the neighborhood of the critical temperature $T_c$, we can choose the order parameter as an expansion parameter because it is small there}
\begin{equation}
\epsilon\equiv <\mathcal{O}_{\Delta_{+}}>.
\end{equation}
{Given the structure of our equations of motion, only even orders of $\epsilon$ appear in the gauge and gravitational fields,
and only odd orders of $\epsilon$ appear in the scalar field. That is to say, we can expand the scalar field $\psi$ and the gauge field as series in $\epsilon$:}
\begin{eqnarray}
&&\phi=\phi_0+\epsilon^2 \phi_2+\epsilon^4 \phi_4+...\\
&&\psi=\epsilon \psi_1+\epsilon^3 \psi_3+\epsilon^5 \psi_5+...
\end{eqnarray}
{Let us expand the background geometry line elements $f(z)$ and $\chi(z)$ around the AdS-Reissner-Nordstr\"{o}m solution}
\begin{eqnarray}
&&f=f_0+\epsilon^2 f_2+\epsilon^4 f_4+...\\
&&\chi=\epsilon^2 \chi_2+\epsilon^4 \chi_4+...
\end{eqnarray}
{The chemical potential $\mu$ should also be expanded as}
\begin{equation}
\mu=\mu_0+\epsilon^2 \delta\mu_2,
\end{equation}
where $\delta\mu_2$ is positive. {Therefore, near the phase transition, the order parameter, as a function of the chemical potential, has the form}
\begin{equation}
\epsilon=\bigg(\frac{\mu-\mu_0}{\delta\mu_2}\bigg)^{1/2}.
\end{equation}
{It is clear that when $\mu$ approaches $\mu_0$, the order parameter $\epsilon$ approaches zero. The phase transition occurs at the critical value $\mu_c=\mu_0$. Note that the critical exponent $1/2$ is the universal result from the Ginzburg-Landau mean field theory. The equation of motion for $\phi$ is solved
at zeroth order by $\phi_0=\mu_0(1-z)$ and this gives a relation $\rho=\mu_0r_{+}$. So, to zeroth order the equation for $f$ is solved as}
\begin{equation}
f_0(z)=\frac{r^2_{+}}{z^2l^2}\bigg(1-z\bigg)\bigg(1+z+z^2-\frac{\kappa^2l^2\mu^2_0}{2r^2_{+}}z^3\bigg).
\end{equation}
Now the horizon locates at $z=1$. We will see that the critical temperature with spacetime backreaction can be determined by solving the equation of motion for $\psi$ to the first order.
At first order, we need to solve the equation for $\psi_1$ by the matching method. The boundary conditions and regularity at the horizon require
\begin{equation}
\psi'_1(1)=\frac{r^2_{+}m^2}{f'_0(1)}\psi_1(1).\label{psi1}
\end{equation}
In the asymptotic AdS region, $\psi_1$ behaves like
\begin{equation}
\psi_1=C_{+}z^{\Delta_{+}}.\label{as}
\end{equation}
Now let us expand $\psi_1$ in a Taylor series near the horizon
\begin{equation}
\psi_1=\psi_1(1)-\psi'_1(1)(1-z)+\frac{1}{2}\psi''_1(1)(1-z)^2+...\label{pp}
\end{equation}
From (\ref{p1}), we obtain the second derivative of $\psi_1$ at the horizon
\begin{equation}
\psi''_1(1)=-\frac{1}{2}\bigg(4+\frac{f''_0(1)}{f'_0(1)}-\frac{m^2r^2_{+}}{f'_0(1)}\bigg)\psi'_{1}(1)-\frac{r^2_{+}\phi'_0(1)^2}{2 f'^2_0(1)}\psi_1(1).\label{psi2}
\end{equation}
Using (\ref{psi1}) and (\ref{psi2}), we find the approximate solution near the horizon
\begin{equation}
\psi_1(z)=\psi_1(1)-\frac{r^2_{+}m^2}{f'_0(1)}\psi_1(1)(1-z)+\bigg[-\frac{r^2_{+}m^2}{4f'_0(1)}\bigg(4+\frac{f''_0(1)}{f'_0(1)}-\frac{r^2_{+}m^2}{f'_0(1)}\bigg)
-\frac{r^2_{+}}{4}\frac{\phi'_0(1)^2}{f'_0(1)^2}\bigg]\psi_1(1)(1-z)^2+...
\end{equation}
In order to determine $\psi_1(1)$ and $C_{+}$, we match the solutions (\ref{as}) and (\ref{pp}) smoothly at $z_m$. We find that
\begin{eqnarray}
z^{\Delta_{+}}_m C_{+}&=&\psi_1(1)-\frac{r^2_{+}m^2}{f'_0(1)}\psi_1(1)(1-z_m)+\bigg[-\frac{r^2_{+}m^2}{4f'_0(1)}\bigg(4+\frac{f''_0(1)}{f'_0(1)}-\frac{r^2_{+}m^2}{f'_0(1)}\bigg)
\nonumber\\&&-\frac{r^2_{+}}{4}\frac{\phi'_0(1)^2}{f'_0(1)^2}\bigg]\psi_1(1)(1-z_m)^2,\\
\Delta_{+}z^{\Delta_{+}-1}_mC_{+}&=&\frac{r^2_{+}m^2}{f'_0(1)}\psi_1(1)-2\bigg[-\frac{r^2_{+}m^2}{4f'_0(1)}\bigg(4+\frac{f''_0(1)}{f'_0(1)}-\frac{r^2_{+}m^2}{f'_0(1)}\bigg)
\nonumber\\&&-\frac{r^2_{+}}{4}\frac{\phi'_0(1)^2}{f'_0(1)^2}\bigg]\psi_1(1)(1-z_m).\label{d}
\end{eqnarray}
Solving the above equation, we obtain the expression for $C_{+}$ in terms of $\psi_1(1)$
\begin{equation}
C_{+}=\frac{2z_m}{2z_m+(1-z_m)\Delta_{+}}z^{-\Delta_{+}}_m \bigg(1-\frac{1-z_m}{2}\frac{r^2_{+}m^2}{f'_0(1)}\bigg)\psi_1(1).
\end{equation}
Substituting the above equation back into (\ref{d}), we get a non-trivial relation provided $\psi_1(1)\neq 0$,
\begin{eqnarray}
&&\frac{2\Delta_{+}}{2z_m+(1-z_m)\Delta_{+}}-
\bigg[\frac{(1-z_m)\Delta_{+}}{2z_m+(1-z_m)\Delta_{+}}+(3-2z_m)\bigg]\frac{r^2_{+}m^2}{f'_0(1)}\nonumber\\&-&\frac{(1-z_m)r^2_{+}m^2}{2}\frac{f''_0(1)}{f'_0(1)^2}
+\frac{1-z_m}{2}\frac{r^4_{+}m^4}{f'_0(1)^2}-\frac{(1-z_m)r^2_{+}}{2}\frac{\phi'_0(1)^2}{f'_0(1)^2}=0.\label{re1}
\end{eqnarray}
Note that $f'_0(1)=-\frac{r^2_{+}}{l^2}\bigg(3-\frac{\kappa^2l^2\mu^2_0}{2r^2_{+}}\bigg)$, $f''_0(1)=\frac{r^2_{+}}{l^2}\bigg(6+\frac{\kappa^2l^2\mu^2_0}{r^2_{+}}\bigg)$ and $\phi'_0(1)=-\mu_0$. Plugging these relations back into
(\ref{re1}), we obtain an equation for $\mu_0$
\begin{eqnarray}\label{main}
&&\frac{\kappa^4l^4}{2r^4_{+}}\frac{\Delta_{+}}{2z_m+(1-z_m)\Delta_{+}}\mu^4_0-\frac{l^4(1-z_m)}{2r^2_{+}}\bigg\{1+2\kappa^2\frac{r^2_{+}}{l^4(1-z_m)}
\bigg[\frac{m^2l^4(1-z_m)\Delta_{+}}{2r^2_{+}(2z_m+(1-z_m)\Delta_{+})}\nonumber\\&&+\frac{6\Delta_{+}l^2}{r^2_{+}(2z_m+(1-z_m)\Delta_{+})}
-\frac{m^2l^4}{r^2_{+}}\bigg(\frac{3}{2}z_m-2\bigg)\bigg]\bigg\}\mu^2_0\nonumber\\&&+3m^2l^2(2-z_m)+\frac{m^4l^4}{2}(1-z_m)-\frac{18+3m^2l^2(1-z_m)}{z_m(\Delta_{+}-2)-\Delta_{+}}\Delta_{+}=0.
\end{eqnarray}
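The horizon data entering this equation can be cross-checked symbolically; the short \texttt{sympy} sketch below (an illustrative check) verifies the quoted values of $f'_0(1)$, $f''_0(1)$ and $\phi'_0(1)$:
\begin{verbatim}
# sympy cross-check of f_0'(1), f_0''(1) and phi_0'(1)
import sympy as sp

z, rp, l, kappa, mu0 = sp.symbols('z r_p l kappa mu_0', positive=True)
f0 = rp**2/(z**2*l**2)*(1 - z)*(1 + z + z**2
                                - kappa**2*l**2*mu0**2/(2*rp**2)*z**3)
phi0 = mu0*(1 - z)

assert sp.simplify(sp.diff(f0, z).subs(z, 1)
                   + rp**2/l**2*(3 - kappa**2*l**2*mu0**2/(2*rp**2))) == 0
assert sp.simplify(sp.diff(f0, z, 2).subs(z, 1)
                   - rp**2/l**2*(6 + kappa**2*l**2*mu0**2/rp**2)) == 0
assert sp.diff(phi0, z).subs(z, 1) == -mu0
\end{verbatim}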
The main idea of \cite{kanno} is to work in the small backreaction approximation $\kappa^2\ll 1$ together with the matching method, so that all the functions can be expanded in $\kappa^2$ and the $\kappa^4$ term in the above equation
can be neglected. In this sense, $\mu_0$ is solved as
\begin{eqnarray}\label{mu}
&&\mu_0=\sqrt{\frac{2}{1-z_m}}\frac{r_{+}}{l^2}\bigg[3m^2l^2(2-z_m)+\frac{m^4l^4}{2}(1-z_m)\nonumber\\&&-\frac{18+3m^2l^2(1-z_m)}{z_m(\Delta_{+}-2)-\Delta_{+}}\Delta_{+}\bigg]^{1/2}
\bigg\{1-\frac{2\kappa^2}{l^2(1-z_m)}\bigg[\frac{m^2l^2 (1-z_m)\Delta_{+}}{4(2z_m+(1-z_m)\Delta_{+})}\nonumber\\&&+\frac{3\Delta_{+}}{2z_m+(1-z_m)\Delta_{+}}-\frac{3m^2l^2z_m}{4}+m^2l^2\bigg]\bigg\}.
\end{eqnarray}
Without the $\kappa^2$ term, the expression for $\mu_0$ reduces to the result of the probe limit case. The $\kappa^2$ term in the above equation is positive, which means that $\mu_0$ increases.
By further using the relation $\mu_0=\frac{\rho}{r_{+}}$, we find an expression for $r_{+}$:
\begin{eqnarray}
&&r_{+}=\rho^{1/2}l(\frac{1-z_m}{2})^{1/4}\bigg[3m^2l^2(2-z_m)+\frac{m^4l^4}{2}(1-z_m)\nonumber\\&&-\frac{18+3m^2l^2(1-z_m)}{z_m(\Delta_{+}-2)-\Delta_{+}}\Delta_{+}\bigg]^{-1/4}
\bigg\{1+\frac{2\kappa^2}{l^2(1-z_m)}\bigg[\frac{m^2l^2}{8(2z_m+(1-z_m)\Delta_{+})}\nonumber\\&&+\frac{3\Delta_{+}}{2(2z_m+(1-z_m)\Delta_{+})}-\bigg(\frac{3z_m}{8}-\frac{1}{2}\bigg)m^2l^2\bigg]\bigg\}.\label{rp}
\end{eqnarray}
The Hawking temperature is given by
\begin{equation}
T=\frac{r_{+}}{4\pi l^2}\bigg(3-\frac{\kappa^2l^2\mu^2_0}{2r^2_{+}}\bigg).\label{t}
\end{equation}
When $\mu_0=\mu_c$, the above Hawking temperature reaches the critical point $T_c$ where the order parameter approaches zero. From (\ref{mu}) and (\ref{t}), we obtain the critical temperature
\begin{equation}
T_c=\frac{r_{+}}{4\pi l^2}\bigg\{3-\frac{\kappa^2}{l^2(1-z_m)}\bigg[3m^2l^2(2-z_m)+\frac{m^4l^4}{2}(1-z_m)-\frac{18+3m^2l^2(1-z_m)}{z_m(\Delta_{+}-2)-\Delta_{+}}\Delta_{+}\bigg]\bigg\}.
\end{equation}
Together with (\ref{rp}), we can write the critical temperature in the form
\begin{equation}
T_c=T_1(1-\frac{2\kappa^2}{l^2}T_2),\label{Tc}
\end{equation}
where
\begin{eqnarray}
&& T_1=\frac{3\rho^{1/2}}{4\pi l}\bigg(\frac{1-z_m}{2}\bigg)^{\frac{1}{4}}\bigg[3m^2l^2(2-z_m)+\frac{m^4l^4}{2}(1-z_m)
\nonumber\\&&-\frac{18+3m^2l^2(1-z_m)}{z_m(\Delta_{+}-2)-\Delta_{+}}\Delta_{+}\bigg]^{-\frac{1}{4}},\\
&& T_2=\frac{36\Delta_{+}+m^2l^2}{8(1-z_m)[2z_m+(1-z_m)\Delta_{+}]}+\frac{m^2l^2\Delta_{+}}{2[2z_m+(1-z_m)\Delta_{+}]}\nonumber\\&&+\frac{(12-7z_m)m^2l^2}{8(1-z_m)}+\frac{m^4l^4}{12}.
\end{eqnarray}
It is easy to check that when $m^2l^2=-2$, $z_m=1/2$ and thus $\Delta_{+}=2$, we have $T_1=\frac{3 \sqrt{\rho }}{4 \pi l\sqrt{2\sqrt{7}} }$, the exact result obtained in \cite{gre} for $(2+1)$-dimensional superconductors and $T_2=\frac{5}{6}$. This result is also in good agreement with the numerical result by choosing a proper matching point $z_m$ \cite{horowitz}.
We also find that the correction due to the backreaction, $T_2$, is positive for arbitrary $z_m$ in the region $0<z_m<1$. Therefore, we confirm the result found in \cite{kanno,hartman,barc} that the backreaction makes condensation harder. The reason for the decrease of the critical temperature can be understood from the relation $T_c \propto 1/{\mu^{1/2}_0}$ \cite{gre}. The value of $\mu_0$ increases due to the gravitational backreaction and thus $T_c$ decreases.
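These special-case values admit a simple numerical spot-check (an illustrative sketch; we set $\rho=l=1$):
\begin{verbatim}
# numerical spot-check of the special case m^2 l^2 = -2, z_m = 1/2,
# Delta_+ = 2: the bracket in T_1 equals 7, so that
# T_1 = 3 sqrt(rho) / (4 pi l sqrt(2 sqrt(7)))
import math

m2l2, zm, Dp, rho, l = -2.0, 0.5, 2.0, 1.0, 1.0
bracket = (3*m2l2*(2 - zm) + m2l2**2/2*(1 - zm)
           - (18 + 3*m2l2*(1 - zm))/(zm*(Dp - 2) - Dp)*Dp)
assert abs(bracket - 7.0) < 1e-12

T1 = 3*math.sqrt(rho)/(4*math.pi*l)*((1 - zm)/2)**0.25*bracket**(-0.25)
T1_quoted = 3*math.sqrt(rho)/(4*math.pi*l*math.sqrt(2*math.sqrt(7)))
assert abs(T1 - T1_quoted) < 1e-12
\end{verbatim}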
\subsection{The upper critical magnetic field with backreaction}
In this section, we will explore the effects of the backreaction on the external critical magnetic field.
{In the neighborhood of the upper critical magnetic field $B_{c2}$, the scalar field
$\psi$ is small and can be regarded as a perturbation.
The scalar field $\psi$ becomes a function of the bulk coordinate
$z$ and the boundary coordinates $(x,y)$ simultaneously because of the presence of the magnetic field. According
to the AdS/CFT correspondence, if the scalar field $\psi\sim
X(x,y)R(z)$, the vacuum expectation values $<\mathcal {O}> \propto
X(x,y)R(z)$ at the asymptotic AdS boundary (i.e. $z\rightarrow
0$)\cite{aj,ns}. We can simply write $<\mathcal {O}> \propto R(z)$
by dropping the overall factor $X(x,y)$. So, to the leading order, it is
consistent to set the ansatz}
\begin{equation}\label{ft}
A_t=\phi_0(z),~~~A_x=0,~~~A_y=B_{c2}x.
\end{equation}
Note that the applied external magnetic field is constant and homogeneous. Considering the fact that an external magnetic field is included, one may wonder whether such a constant external magnetic field could backreact on the bulk gravity or not. We may need to consider
the effects of the spatial component of the gauge field in the superconducting phase and assume the gauge field behaves as
\begin{equation}
A=\phi_0(z)dt+b(z) dx.
\end{equation}
In other words, we need to solve the bulk gravity equations for $b(z)$, and the resulting metric is anisotropic. Following this line, we may obtain a kind of dyonic black hole solution, which carries both charge and magnetism.
However, we notice that several authors have already discussed such configurations in \cite{vector}. In these works, $b(z)$ is interpreted as the vector hair of the black hole. At the AdS boundary, $b(z)$ behaves as
\begin{equation}
b(z)=\sigma-\xi z+...
\end{equation}
According to the AdS/CFT correspondence, $\xi$ is the dual current density and $\sigma$ is the dual current source of the holographic superfluid. Of course, it is not proper to regard $\xi$ as a homogeneous applied magnetic field.
Actually, when we discuss the vortex structure of the holographic superconductors, in general we should consider
\begin{equation}
\psi_1=\psi_1(x,y,z),~~~A_t=\phi (x,y,z),~~~A_x=A_x(x,y,z),~~~A_y=A_y(x,y,z).
\end{equation}
or simply, in polar coordinates, $\psi_1=\psi_1(\varrho,z)$, $A_t=\phi_0(\varrho,z)$, $A_{\varphi}=A_{\varphi}(\varrho,z)$, together with the boundary conditions that $A_t(z=1)=0$ and $A_{\varphi}(z=1)$ is regular.
In the presence of an external magnetic field, not only the matter fields but also the spacetime metric should depend on the coordinates $(z,x,y)$. The background static metric may have the form
\begin{equation}
ds^2=g_{00}(z,x,y)dt^2+g_{zz}(z,x,y)dz^2+g_{xx}(z,x,y)dx^2+g_{yy}(z,x,y)dy^2.
\end{equation}
In this case, we need solve the Einstein equations
\begin{equation}\label{einstein}
R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R-\frac{3}{l^2}g_{\mu\nu}=\kappa^2 T_{\mu\nu},
\end{equation}
where \begin{equation}
T_{\mu\nu}=F_{\mu\lambda}F^{\lambda}_{\nu}-\frac{1}{4}g_{\mu\nu}F^{\lambda \rho}F_{\lambda \rho}
-g_{\mu\nu}(|D \psi|^2+m^2|\psi|^2)+\bigg[D_{\mu} \psi (D_{\nu} \psi)^{*}+D_{\nu} \psi (D_{\mu} \psi)^{*}\bigg],
\end{equation}
together with the Klein-Gordon equation
\begin{equation}\label{klein}
\frac{1 }{\sqrt{-g}}D_{\mu}\bigg(\sqrt{-g}g^{\mu\nu}D_{\nu}\psi\bigg)=m^2\psi,
\end{equation}
and the Maxwell equation
\begin{equation}\label{maxwell}
\frac{1 }{\sqrt{-g}}\partial_{\lambda}\bigg(\sqrt{-g}g^{\lambda \mu}g^{\rho\sigma}F_{\mu\sigma}\bigg)=g^{\rho\sigma}J_{\sigma},
\end{equation}
where we have defined $D_{\mu}\psi=\partial_{\mu}\psi-iA_{\mu}\psi$ and $J_{\sigma}=i[\psi^{*}D_{\sigma}\psi-\psi(D_{\sigma}\psi)^{*}]$.
In this case, we have three coupled nonlinear partial differential equations involving the metric components, the scalar field
$\psi$, the scalar potential $A_t$ and the vector potential ${\bf A}$, for which an analytic study becomes very difficult. Note that we can expand the background geometry in a series in $\epsilon$
\begin{eqnarray}
g_{\mu\nu}=g_{\mu\nu}^{(0)}+\epsilon^2 g_{\mu\nu}^{(2)}+\epsilon^4 g_{\mu\nu}^{(4)}+...
\end{eqnarray}
To solve these equations analytically we will follow the
logic shown in Table \ref{table}. In the absence of the external magnetic field, the backreaction of the electric field on the background geometry leads to the RNAdS black hole solution at the zeroth order. At the linear order, the metric receives no corrections from the matter fields and we need only solve the equation of motion for $\psi_1$ at this stage. As we have done from (\ref{p1}) to (\ref{Tc}), all the equations depend only on the radial coordinate $z$. We obtained the critical temperature with backreaction. When we turn on the external magnetic field, the background spacetime changes because of the presence of $B_{c2}$. We can still expand $\psi$, $A_t$ and $A_{\varphi}$ in series of $\epsilon$. At the leading order, the matter fields $\phi_0$ and $A^{(0)}_{\varphi}$ result in a dyonic black hole solution in AdS space. By solving $\psi_1(x_i,z)$ ($x_i=x,y$) at next-to-leading order, we obtain the expression for the upper critical magnetic field. The above arguments summarize the logic of the calculation in the rest of the paper.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$$&$\rm Vanishing~~~ magnetic~~~ field$&$\rm External~~~ magnetic~~~ field$\\
\hline
$\psi=$&$ \epsilon^1 \psi_1(z)+\epsilon^3 \psi_3+... $&$ \epsilon^1 \psi_1(z,x_i)+\epsilon^3 \psi_3(z,x_i)+...$\\
\hline
$A_t=$&$ \epsilon^0 \phi_0(z)+\epsilon^2 \phi_2(z)+... $&$\epsilon^0 \phi_0(z)+\epsilon^2 \phi_2(z,x_i)+... $\\
\hline
$A_{\varphi}=$&$ 0$&$ \epsilon^0 A^{(0)}_{\varphi}(z,x_i)+\epsilon^2 A^{(2)}_{\varphi}(z,x_i)+... $\\
\hline
$g_{\mu\nu}=$&$ \epsilon^0 g_{RNAdS}+\epsilon^2 g_{\mu\nu}^{(2)} +...$&$ \epsilon^0 g_{dyonic}+\epsilon^2 g_{\mu\nu}^{(2)} +... $\\
\hline
\end{tabular}
\caption{Logic of the analytic calculation. }\label{table}
\end{center}
\end{table}
After justifying the use of (\ref{ft}), we can then solve the equations of motion order by order. {The black hole carries both electric and magnetic charge, and the bulk Maxwell field reads}
\begin{equation}
A=B_{c2}x\, dy+\phi\, dt.
\end{equation}
At the zeroth order $\mathcal{O}(\epsilon^0)$, we solve the Einstein equation and the line elements of the dyonic black hole metric are given by \cite{dyonic}
\begin{eqnarray}
&&ds^2=-f_0dt^2+\frac{r^2_{+}}{z^2l^2}(dx^2+dy^2)+\frac{r^2_{+}}{z^4f_0}dz^2,\\
&&\phi_0=\mu_0-\frac{\rho}{r_{+}}z,~~~ A^{(0)}_y=B_{c2}x
\end{eqnarray}
where $f_0=\frac{r^2_{+}}{z^2l^2}\bigg(1-z\bigg)\bigg(1+z+z^2-\frac{\kappa^2l^2\mu^2_0}{2r^2_{+}}z^3-\frac{\kappa^2l^4B^2_{c2}}{2r^4_{+}}z^3\bigg)$. The Hawking temperature at the event horizon is evaluated as
\begin{equation}
T=\frac{r_{+}}{4\pi l^2}\bigg(3-\frac{\kappa^2l^2\mu^2_0}{2r^2_{+}}-\frac{\kappa^2l^4 B^2_{c2}}{2r^4_{+}}\bigg).
\end{equation}
To the linear order, the equation of motion for $\psi_1$ takes the new form
\begin{equation}\label{p2}
f_0\psi''_1+f'_0\psi'_{1}+\frac{r^2_{+}}{z^4}\bigg(\frac{\phi^2_0 }{f_0}-m^2\bigg)\psi_1=-\frac{l^2}{z^2}\bigg[\partial^2_{x}+(\partial_y-iB_{c2}x)^2\bigg]\psi_1,
\end{equation}
where the prime denotes derivative with respect to $z$.
We use separation of variables
\begin{equation}
\psi_1=e^{ik_y y}X_n(x)R_n(z),
\end{equation}
and obtain the equation of a two-dimensional harmonic oscillator and an equation for $R_n(z)$
\begin{eqnarray}
&&-X''_n(x)+(k_y-B_{c2}x)^2X_n(x)=\lambda_n B_{c2}X_n(x),\label{x}\\
&&f_0R''_n+f'_0R'_{n}+\frac{r^2_{+}}{z^4}\bigg(\frac{\phi^2_0 }{f_0}-m^2\bigg)R_n=\frac{\lambda_n B_{c2}l^2}{z^2}R_n,\label{R}
\end{eqnarray}
where $\lambda_n=2n+1$ is the eigenvalue of the harmonic oscillator equation, $n=0,1,2,...$ denotes the Landau energy level, and the primes in (\ref{x}) and (\ref{R}) denote derivatives with respect to $x$ and $z$, respectively. Equation (\ref{x}) is solved by the Hermite polynomials
\begin{equation}
X_n(x)=e^{-\frac{(B_{c2}x-k_y)^2}{2B_{c2}}}H_n(x).
\end{equation} {Let us choose the lowest mode $n=0$ in what follows, which is the first to condense and is the
most stable solution after condensation}. Actually, the Abrikosov vortex lattice is given by a superposition of the lowest energy solutions
\begin{equation}
\psi_1=R_0(z)\sum_{j} c_j e^{ik_j y}X_0(x),
\end{equation}
where $c_j$ are coefficients that determine the structure of the vortex lattice.
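One can verify symbolically that the lowest mode indeed solves (\ref{x}) with $\lambda_0=1$ (an illustrative check):
\begin{verbatim}
# sympy check: X_0 solves eq. (x) with eigenvalue lambda_0 = 1
import sympy as sp

x, ky = sp.symbols('x k_y', real=True)
B = sp.symbols('B', positive=True)
lam0 = 1
X0 = sp.exp(-(B*x - ky)**2/(2*B))

residual = -sp.diff(X0, x, 2) + (ky - B*x)**2*X0 - lam0*B*X0
assert sp.simplify(residual) == 0
\end{verbatim}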
Now we are going to solve (\ref{R}) by employing the matching method and find the correction to the upper critical magnetic field away from the probe limit. Again, regularity at the horizon requires
\begin{equation}
R'_0(1)=\frac{m^2r^2_{+}}{f'_0(1)}R_0(1)+\frac{B_{c2}l^2}{f'_0(1)}R_0(1).
\end{equation}
The behavior of $R_0$ at the asymptotic AdS boundary is given by
\begin{equation}
R_0(z)=C_{+}z^{\Delta_{+}}. \label{R0}
\end{equation}
The scalar potential $\phi_0$
satisfies the boundary condition at the asymptotic AdS region
$\phi_0(z)=\mu-\frac{\rho}{r_+}z$ and vanishes at the horizon
$\phi_0=0$, as $z\rightarrow 1$. In the strong field limit, the scalar
field $\psi$ is almost vanishing and we can drop the
$|\psi|^2$ term in equation (\ref{at}). One may find that $\phi_0(z)=\frac{\rho}{r_+} (1-z)$ is a solution
that satisfies (\ref{at}) and the corresponding boundary conditions
\cite{ns}.
In the presence of the external magnetic field, the Taylor expansion of $R_0$ near the horizon still goes as
\begin{equation}
R_0(z)=R_0(1)-R'_0(1)(1-z)+\frac{1}{2}R''_0(1)(1-z)^2+...\label{Rex}
\end{equation}
From (\ref{R}), we know that near $z=1$, $R''_0(1)$ is expressed as
\begin{equation}
R''_0(1)=-\frac{1}{2}\bigg(4+\frac{f''_0(1)}{f'_0(1)}-\frac{m^2r^2_{+}}{f'_0(1)}+\frac{B_{c2}l^2}{f'_0(1)}\bigg)R'_{0}(1)+\frac{B_{c2}l^2}{f'_0(1)}R_0(1)-\frac{r^2_{+}\phi'_0(1)^2}{2 f'^2_0(1)}R_0(1).
\end{equation}
Putting the expressions for $R'_0(1)$ and $R''_0(1)$ into (\ref{Rex}), we obtain
\begin{eqnarray}
R_0(z)&=&R_0(1)-\bigg(\frac{r^2_{+}m^2}{f'_0(1)}+\frac{B_{c2}l^2}{f'_0(1)}\bigg)R_0(1)(1-z)+\bigg[-\frac{r^2_{+}m^2+B_{c2}l^2}{4f'_0(1)}\bigg(4+\frac{f''_0(1)}{f'_0(1)}\nonumber\\&-&\frac{r^2_{+}m^2+B_{c2}l^2}{f'_0(1)}\bigg)
+\frac{B_{c2}l^2}{2f'_0(1)}-\frac{r^2_{+}}{4}\frac{\phi'_0(1)^2}{f'_0(1)^2}\bigg]R_0(1)(1-z)^2+...\label{R01}
\end{eqnarray}
We connect the two solutions (\ref{R0}) and (\ref{R01}) smoothly at an intermediate point $z_m$ and thus find that
\begin{eqnarray}
C_{+}z^{\Delta_{+}}_m&=&R_0(1)-\bigg(\frac{r^2_{+}m^2}{f'_0(1)}+\frac{B_{c2}l^2}{f'_0(1)}\bigg)R_0(1)(1-z_m)+\bigg[-\frac{r^2_{+}m^2+B_{c2}l^2}{4f'_0(1)}\bigg(4+\frac{f''_0(1)}{f'_0(1)}\nonumber\\&-&\frac{r^2_{+}m^2+B_{c2}l^2}{f'_0(1)}\bigg)
+\frac{B_{c2}l^2}{2f'_0(1)}-\frac{r^2_{+}}{4}\frac{\phi'_0(1)^2}{f'_0(1)^2}\bigg]R_0(1)(1-z_m)^2,\\
\Delta_{+}z^{\Delta_{+}-1}_mC_{+}&=&\frac{r^2_{+}m^2+B_{c2}l^2}{f'_0(1)}R_0(1)-2\bigg[-\frac{r^2_{+}m^2+B_{c2}l^2}{4f'_0(1)}\bigg(4+\frac{f''_0(1)}{f'_0(1)}-\frac{r^2_{+}m^2+B_{c2}l^2}{f'_0(1)}\bigg)
\nonumber\\&&+\frac{B_{c2}l^2}{2f'_0(1)}-\frac{r^2_{+}}{4}\frac{\phi'_0(1)^2}{f'_0(1)^2}\bigg]R_0(1)(1-z_m).\label{R2}
\end{eqnarray}
From the above equations, we find that
\begin{equation}
C_{+}=\frac{2z_m}{2z_m+(1-z_m)\Delta_{+}}z^{-\Delta_{+}}_m\bigg(1-\frac{1-z_m}{2}\frac{r^2_{+}m^2+B_{c2}l^2}{f'_0(1)}\bigg)R_0(1)
\end{equation}
Substituting the above relation back into (\ref{R2}), we get a non-trivial expression
\begin{eqnarray}\label{sf}
&&\frac{2\Delta_{+}}{2z_m+(1-z_m)\Delta_{+}}-
\bigg[\frac{(1-z_m)\Delta_{+}}{2z_m+(1-z_m)\Delta_{+}}+(3-2z_m)\bigg]\frac{r^2_{+}m^2+B_{c2}l^2}{f'_0(1)}\nonumber\\&-&\frac{(1-z_m)(r^2_{+}m^2+B_{c2}l^2)}{2}\frac{f''_0(1)}{f'_0(1)^2}
+\frac{1-z_m}{2}\frac{(r^2_{+}m^2+B_{c2}l^2)^2}{f'_0(1)^2}\nonumber\\&+&\frac{B_{c2}l^2}{f'_0(1)}(1-z_m)-\frac{(1-z_m)r^2_{+}}{2}\frac{\phi'_0(1)^2}{f'_0(1)^2}=0.\label{re}
\end{eqnarray}
When we turn off the magnetic field, $B_{c2}=0$, (\ref{re}) reduces to (\ref{re1}). In the presence of the magnetic field, both the charge and the magnetic field can backreact on the black hole. The critical temperature should receive further corrections from the magnetic field. The difference between (\ref{re}) and (\ref{main}) comes from the $B_{c2}$-related terms, which read
\begin{eqnarray}
&&\frac{2\Delta_{+}}{2z_m+(1-z_m)\Delta_{+}}3\kappa^2B^2_{c2}+\bigg(3-2z_m+\frac{(z_m-1)\Delta_{+}}{z_m(\Delta_{+}-2)-\Delta_{+}}\bigg)\bigg(\frac{m^2l^2B^2_{c2}}{2}\kappa^2+\frac{B^3_{c2}\kappa^2}{2r^2_{+}}\nonumber\\&-&3B_{c2}r^2_{+}\bigg)
+\frac{1-z_m}{2}\bigg(m^2l^2B^2_{c2}\kappa^2+\frac{B^3_{c2}\kappa^2}{r^2_{+}}+6B_{c2}r^2_{+}-B^2_{c2}+2m^2l^2r^2_{+}B_{c2}\bigg)\nonumber\\&-&B_{c2}(1-z_m)\bigg(\frac{\kappa^2B^2_{c2}}{2r^2_{+}}-3r^2_{+}\bigg)
=\bigg(3-2z_m+\frac{(z_m-1)\Delta_{+}}{z_m(\Delta_{+}-2)-\Delta_{+}}\bigg)\frac{B_{c2}\kappa^2}{2}\mu^2_0.
\end{eqnarray}
We obtain a relation between $B_{c2}$ and $r_{+}$ by using (\ref{mu}) and $m^2l^2=-2$, $z_m=1/2$, $q=1$, $\Delta_{+}=2$,
\begin{eqnarray}\label{bbc}
B_{c2}=\frac{58}{5}r^2_{+}-812 \kappa ^2r^2_{+}+\mathcal{O}(\kappa^4).
\end{eqnarray}
The critical temperature drops because of the magnetic field:
\begin{equation}\label{ntc}
T_c=T_1(1-\frac{1807}{75}\frac{\kappa^2}{l^2}),
\end{equation}
where $T_1=\frac{3 \sqrt{\rho }}{4 \pi l\sqrt{2\sqrt{7}} }$. This reflects the fact that condensation becomes even harder when one turns on the external magnetic field.
Note that (\ref{bbc}) is not enough to determine the relation among the upper critical magnetic field, the system temperature $T$ and the critical temperature $T_c$.
Considering the values of $f'_0(1)$, $f''_0(1)$ and $\phi'_0(1)$ and solving (\ref{sf}) to first order in $\kappa^2$, we get
\begin{eqnarray}
&&\mu_0=\frac{\mathcal{H}}{\sqrt{1-z_m}l^2}\bigg\{1+\frac{\kappa^2}{2l^2}\bigg(\frac{1}{1-z_m}-\frac{B^2_{c2}l^6}{\mathcal{H}r^4_{+}}\bigg)
\bigg(12\Delta_{+}+2B_{c2}l^4\bigg(z_m(3+z_m(\Delta_{+}-2)-3\Delta_{+})\nonumber\\&&+2\Delta_{+}+m^2l^2(z_m(8+3z_m(\Delta_{+}-2)-8\Delta_{+})
+5\Delta_{+})/r^2_{+}\bigg)[z_m(\Delta_{+}-2)-\Delta_{+}]^{-1}\bigg)\bigg\}\nonumber\\&&+\mathcal{O}(\kappa^4),
\end{eqnarray}
with
\begin{eqnarray}
\mathcal{H}&=&\bigg\{\frac{B^2_{c2}}{r^4_{+}}l^8(1-z_m)+\frac{36\Delta_{+}}{z_m(\Delta_{+}-2)-\Delta_{+}}+\frac{6m^2l^2\bigg(z_m(4+z_m(\Delta_{+}-2)-4\Delta_{+})+3\Delta_{+}\bigg)}{z_m(\Delta_{+}-2)-\Delta_{+}}
\nonumber\\&+&m^4l^4(z_m-1)+\frac{12B_{c2}(z_m+\Delta_{+}(1-z_m))}{r^2_{+}(z_m(\Delta_{+}-2)-\Delta_{+})}\bigg\}
[z_m(\Delta_{+}-2)-\Delta_{+}]^{-1}.
\end{eqnarray}
Combining the above equation with $\mu_0=\frac{\rho}{r_{+}}$, we can obtain a relation between $B_{c2}$ and $r_{+}$. We then substitute (\ref{mu}) and (\ref{bbc}) into the Hawking temperature $T=\frac{r_{+}}{4\pi l^2}\bigg(3-\frac{\kappa^2l^2\mu^2_0}{2r^2_{+}}-\frac{\kappa^2l^4 B^2_{c2}}{2r^4_{+}}\bigg)$; now the Hawking temperature plays the role of the critical temperature in the presence of magnetic fields. In order to have a clear picture,
by choosing $m^2l^2=-2$, $\Delta_{+}=2$ and $z_m=1/2$ and further using the relation between $r_{+}$ and $T$ from the new Hawking temperature,
we find that the upper critical magnetic field $B_{c2}$ yields
\begin{eqnarray}
B_{c2}&=&B_1+B_2 \kappa^2 =\frac{1}{9}\bigg(\sqrt{5376\pi^4T^4+\frac{81\rho^2}{l^4}}-112\pi^2T^2\bigg)\nonumber\\
&+& \kappa ^2\bigg[\frac{14565376 \pi ^4 T^6-46575 T^2 \rho ^2}{450 \sqrt{5376 \pi ^4 T^8+81 T^4 \rho ^2}}-\frac{455168}{675} \pi ^2 T^2
\nonumber\\&+&\frac{45 \rho ^2}{32 \pi ^2 T^2}\bigg]+\mathcal{O}(\kappa^4).
\end{eqnarray}
\begin{figure}[htbp]
\begin{minipage}{1\hsize}
\begin{center}
\includegraphics*[scale=0.6] {B1.eps}
\end{center}
\caption{(color online) The coefficient of the $\kappa^2$ term of the upper critical magnetic field as a function of the temperature $T/T_c$. We set $T_c=1$ here.} \label{B1}
\end{minipage}
\end{figure}
In this case, the charge density can be evaluated from (\ref{ntc}), that is $\rho=\frac{32 \sqrt{7} \pi ^2}{9}T^2_c \bigg(1-\frac{2\kappa^2}{l^2}T_2\bigg)^{-2}$.
The upper critical magnetic field $B_{c2}$, expanded in a series in $\kappa^2$, can be expressed as
\begin{eqnarray}\label{bc}
B_{c2}&=&\frac{16}{9}T^2_c \pi ^2 \bigg[\left(\sqrt{7}\sqrt{4+3 \frac{T^4}{T^4_c}}-7\frac{ T^2}{T^2_c}\right)+\bigg(\frac{\frac{4064}{25}\frac{ T^4}{T^4_c}+\frac{5503}{75}}{\sqrt{4+3 \frac{T^4}{T^4_c}}}\sqrt{7}+70\frac{T^2_{c}}{T^2}\nonumber\\&-&\frac{28448}{75} \frac{T^2}{T^2_{c}}\bigg) \kappa^2+\mathcal{O}(\kappa^4)\bigg].
\end{eqnarray}
\begin{figure}[htbp]
\begin{minipage}{1\hsize}
\begin{center}
\includegraphics*[scale=0.8] {kt.ps}
\end{center}
\caption{(color online) For fixed external magnetic field, the phase transition temperature decreases if $\kappa^2$ becomes larger. We set $T_c=1$ here.} \label{kt}
\end{minipage}
\end{figure}
It is worth noting that $T_c$ here denotes the critical temperature without magnetic fields and gravitational backreaction. The result (\ref{bc}) also implies that it is only applicable near the critical temperature $T_c$, because the $\kappa^2$ term diverges in the low temperature limit. One may find that when $\kappa^2=0$, the result exactly agrees with \cite{gw}, which is also consistent with the Ginzburg-Landau theory, where $B_{c2}\propto \big(1-T/T_c\big)$. We also find that the coefficient of the $\kappa^2$ term is positive for all system temperatures $T$ (see Fig. \ref{B1}).
This result indicates that the effects of the backreaction enhance the value of the upper critical magnetic field. The increase of the magnetic field $B_{c2}$ can be explained from the relation $B_{c2} \propto \mu_0$
for a fixed value of $r_{+}$ \cite{gw}. Therefore, if the value of $\mu_0$ becomes larger, then $B_{c2}$ increases. However, this does not mean that condensation becomes easier in the presence of the magnetic field. We can see from Fig. \ref{kt} that for a fixed magnetic field, the phase transition temperature goes down as $\kappa^2$ increases.
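The Ginzburg-Landau behavior can be confirmed directly from the $\kappa^0$ part of (\ref{bc}); in the sketch below (an illustrative check, with $t=T/T_c$) the bracket vanishes at $t=1$ and is linear in $(1-t)$ to leading order:
\begin{verbatim}
# sympy check of the Ginzburg-Landau behavior of the kappa^0 part of (bc)
import sympy as sp

t = sp.symbols('t', positive=True)            # t = T/T_c
B0 = sp.sqrt(7)*sp.sqrt(4 + 3*t**4) - 7*t**2  # kappa^0 bracket in (bc)

assert B0.subs(t, 1) == 0                     # B_c2 vanishes at T = T_c
expansion = sp.series(B0, t, 1, 2).removeO()  # expand around t = 1
assert sp.simplify(expansion - 8*(1 - t)) == 0  # B0 ~ 8 (1 - T/T_c)
\end{verbatim}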
\subsection{Numerical results}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
$\kappa^2$ & 0 & 0.025& 0.05 & 0.1 & 0.15 & 0.2 & 0.3 & 0.35 \\ \hline
${T}/{\sqrt{\rho}}$ & 0.118 & 0.111& 0.104 & 0.09& 0.07&0.06& 0.03&0.01 \\ \hline
\end{tabular}
\caption{The critical temperature $T$ drops as $\kappa^2$ increases in the absence of the magnetic field.}
\label{tableO}
\end{table}
\begin{figure}[htbp]
\begin{minipage}{1\hsize}
\begin{center}
\includegraphics*[scale=0.8] {bct1.eps}
\includegraphics*[scale=0.8] {BCTK.ps}
\end{center}
\caption{(color online) Left: The external magnetic field as a function of $T/T_c$ at $\kappa^2=0$ (Green) and $\kappa^2=0.01$ (Blue). Right: The phase transition temperature decreases if $\kappa^2$ increases in the case that $B_{c2}=2$. In both cases, we choose $T_c=1$. } \label{BCT}
\end{minipage}
\end{figure}
For completeness of our study, we carry out numerical computations in this subsection. We first solve the equations of motion (\ref{m1}) to (\ref{p1}) in the absence of the external magnetic field, from which we can obtain the critical temperature and the phase diagram. The properties of holographic superconductors without magnetic fields away from the probe limit were studied numerically in \cite{wang4} by setting $2\kappa^2=1$ and taking finite $q$. We work in the case $q=1$ but finite $\kappa^2$ instead, and set $r_{+}=1$ and $l=1$ in the numerical computation. The critical temperature $T$ as a function of the backreaction $\kappa^2$ is shown in Table \ref{tableO}. It is clear that the critical temperature $T$ drops as $\kappa^2$ increases, which is consistent with \cite{wang4,hartman}.
We then consider the behavior of the external magnetic field numerically at the linear level by solving equation (\ref{p2}). In Fig.\ref{BCT} (left), we find that the external magnetic field drops in different ways for the $\kappa^2=0$ case and the $\kappa^2=0.01$ case. This is consistent with the analytic calculation in the range $T\sim T_c$. That is to say, although the critical temperature is significantly suppressed by a non-zero $\kappa^2$, the upper bound of $B_{c2}$ becomes larger.
In Fig.\ref{BCT} (right), we also plot the phase diagram of the critical temperature against the gravitational backreaction. When we fix the magnetic field, the phase transition temperature is depressed as $\kappa^2$ increases, which is also comparable with the analytic results at a qualitative level, since the analytic method depends closely on the matching point.
Note that the numerical results presented here can be regarded as a side note, because we mainly deal with analytical calculations in this paper. A more general and thorough numerical computation in the presence of an external magnetic field with backreaction is called for in the future.
\section{Conclusion}
In this paper, we have investigated the effect of backreaction on the upper critical magnetic field of $(2+1)$-dimensional holographic superconductors in Einstein gravity by using the analytical method developed in \cite{kanno,gre}. As a consistency check, we have derived the critical temperature with backreaction in four-dimensional Einstein gravity and confirmed the numerical result given in \cite{wang4} that backreaction makes the condensation harder to form. We have obtained the spatially dependent condensate solutions in the presence of the magnetic field. The coefficient of spacetime backreaction on the upper critical magnetic field is positive for Einstein gravity, which indicates that the magnetic field becomes strong with respect to the backreaction, consistent with \cite{gtw}. We have also shown the corresponding numerical results for each case.
We can see that the spacetime backreaction presents us with an interesting property of holographic superconductors: while the backreaction causes the depression of the critical temperature, it can enhance the upper critical magnetic field. The upper critical field $B_{c2}$ is an important parameter because it determines the value of the coherence length and strongly affects the critical current density $J_c$. The improvement of $B_{c2}$
has been a main research topic for many experiments. In this paper, we work in the small backreaction limit (i.e. $\kappa^2\ll 1$). So if we regard the backreaction as a factor of ``doping'' in holographic superconductors, then we may find that the backreaction plays the same role as carbon doping in $\rm MgB_{2}$ reported in recent experiments \cite{mgb}: it results in a depression of $T_c$, while the $B_{c2}$ performance is improved. Conversely, we can treat the probe limit approximation as an ``effective doping'': compared with the superconducting properties with backreaction, the probe limit approximation improves the critical temperature but reduces the upper critical magnetic field.
In microscopic models of high temperature superconductors, the interaction between doping and electrons contributes a potential term in the Hamiltonian and the self energy of the superconducting quasi-particles will be changed. The self-energy can be
calculated by using the Green function and the correction to the critical temperature can be read off from the Green function\cite{poole}. For holographic superconductors, the effective mass term is changed with the variation of $\kappa^2$.
The extension of this work to the five-dimensional Gauss-Bonnet gravity case would be interesting \cite{g4,gsst,ge,bm,esc,ges}. But since the resulting metric is anisotropic, and the analytic calculation becomes very difficult and involved, we would like to leave it to a future publication using numerical calculation.
\section*{Acknowledgements}
XHG would like to thank Ying Jiang for helpful discussions on $\rm MgB_2$. The work was partly supported by NSFC, China
(No. 11005072), Shanghai Rising-Star Program and
Shanghai Leading
Academic Discipline Project (S30105).
\section{Introduction}
Let us consider the problem of denoising an $n$ dimensional
noisy signal $\bY$ using a family of candidates $\boldsymbol\theta_1,
\ldots,\boldsymbol\theta_m$. More precisely, we assume that
\begin{align}
\bY = \boldsymbol\theta^* + \bxi
\end{align}
where $\boldsymbol\theta^*\in\mathbb R^n$ is the $n$ dimensional
true signal and $\bxi$ is random noise. Only the noisy
vector $\bY$ is observed and the goal is to construct
an estimator $\widehat\boldsymbol\theta$ such that the expected error
$\mathbf E[\|\widehat\boldsymbol\theta -\boldsymbol\theta^*\|^2]$ is as small as
possible, where $\|\bv\|$ stands for the Euclidean
norm of $\bv\in\mathbb R^n$. We consider the framework
in which to achieve the aforementioned goal we are
given a set of vectors $\{\boldsymbol\theta_1,\ldots,\boldsymbol\theta_m\}$.
An estimator $\widehat\boldsymbol\theta$ is considered a good estimator,
if the regret
\begin{align}\label{eq:regret}
\mathbf E[\|\widehat\boldsymbol\theta -\boldsymbol\theta^*\|^2] - \min_{j=1,
\ldots,m}\|\boldsymbol\theta_j -\boldsymbol\theta^*\|^2
\end{align}
is as small as possible. This problem has been coined
model-selection aggregation in \citep{Tsyb03}, where it
was also proved that the optimal rate of the regret
\eqref{eq:regret} is $\log m$. The problem of aggregation
has been extensively studied in the literature, see
for instance \citep{BTW2007,Yang00,Yang1,Yang2,JRT,bellec2018,Rigollet11,TsybICM,Alquier3, Lecue,Golubev1}. In this note, we consider the
exponentially weighted aggregate (EWA) defined as follows.
Let $\pi_0(1),\ldots,\pi_0(m)$ be some nonnegative weights
summing to one. Each $\pi_0(j)$ represents our prior
confidence in the approximation of $\boldsymbol\theta^*$ by
$\boldsymbol\theta_j$. Based on these prior weights and the observed
vector $\bY$, we define
\begin{align}
\widehat\boldsymbol\theta = \sum_{j=1}^m \boldsymbol\theta_j \widehat\pi(j),
\qquad\text{with}\qquad \widehat\pi(j) = \frac{\exp\{ -
\|\bY-\boldsymbol\theta_j\|^2/\beta\}\pi_0(j)}{\sum_{\ell=1}^m
\exp\{ -\|\bY-\boldsymbol\theta_\ell\|^2/\beta\}\pi_0(\ell)}.
\end{align}
In this expression, $\beta>0$ is a tuning parameter of the
method. As established in the aforementioned references, in
different settings one can prove that EWA satisfies the
inequality
\begin{align}\label{EWA1}
\mathbf E[\|\widehat\boldsymbol\theta -\boldsymbol\theta^*\|^2] \leqslant \min_{j=1,
\ldots,m}\Big(\|\boldsymbol\theta_j -\boldsymbol\theta^*\|^2 + \beta
\log(1/\pi_0(j))\Big).
\end{align}
In particular, if $\pi_0$ is the uniform distribution over
$\{1,\ldots,m\}$, one obtains the rate-optimal remainder
term $\beta\log m$ for the difference in \eqref{eq:regret}.
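For illustration, a minimal Python sketch (ours, with simulated data) computes the weights $\widehat\pi(j)$ and the aggregate $\widehat\boldsymbol\theta$ for a finite family; subtracting the maximal log-weight before exponentiating avoids numerical underflow:
\begin{verbatim}
import numpy as np

def ewa(Y, thetas, beta, prior=None):
    # Y: (n,) observed vector; thetas: (m, n) candidates; beta > 0
    m = thetas.shape[0]
    prior = np.full(m, 1.0 / m) if prior is None else prior
    # log of unnormalized weights: -||Y - theta_j||^2 / beta + log pi_0(j)
    logw = -np.sum((Y - thetas) ** 2, axis=1) / beta + np.log(prior)
    logw -= logw.max()                # stabilization before exponentiating
    w = np.exp(logw)
    w /= w.sum()                      # posterior weights pi_hat(j)
    return w @ thetas                 # aggregate: sum_j pi_hat(j) theta_j

rng = np.random.default_rng(0)
theta_star = np.zeros(100)
thetas = rng.normal(size=(20, 100))          # candidate vectors
Y = theta_star + rng.normal(size=100)        # noisy observation, sigma = 1
theta_hat = ewa(Y, thetas, beta=4.0)         # beta = 4 sigma^2 (see below)
\end{verbatim}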
As pointed out in some papers \citep{DT07,DT08,Dal_IHP},
it is helpful to extend the above-described framework to
the case of aggregating a family of estimators which is
potentially infinite. This is equivalent to considering
a subset $S_0\subset \mathbb R^n$ and aiming at finding an
``optimal'' way of combining all its elements in order
to estimate $\boldsymbol\theta^*$. These types of considerations have
led to the following extension of the estimator defined above:
\begin{align}\label{EWA2}
\widehat\boldsymbol\theta = \int_{\mathbb R^n} \boldsymbol\theta\,\widehat\pi(d\boldsymbol\theta),
\qquad \text{with}\qquad \frac{d\widehat\pi}{d\pi_0}
(\boldsymbol\theta) = \frac{\exp\{-\|\bY-\boldsymbol\theta\|^2/\beta\}}{
\int_{\mathbb R^n} \exp\{-\|\bY-\boldsymbol u\|^2/\beta\}\pi_0(d\boldsymbol u)
} .
\end{align}
Notice that this estimator is the Bayesian posterior mean
in the case where $\bxi$ is drawn from the Gaussian
distribution with zero mean and covariance matrix
$(\beta/2)\mathbf I_n$. The goal of this note is to
provide an alternative and simple proof of the fact that
EWA $\widehat\boldsymbol\theta$ satisfies \eqref{EWA1} and its extension
to aggregating an infinite set, provided that the
distribution of the noise $\bxi$ satisfies some suitable
conditions. We also slightly extend the existing results by
including noise distributions that are not symmetric with
respect to the origin. This is particularly suitable for
estimating the parameters of Bernoulli or binomial
distributions.
\paragraph{Notation} We use boldface letters for
vectors, which are always seen as one-column matrices.
For any vector $\bv$, $\|\bv\|$ and $\|\bv\|_\infty$
are respectively the Euclidean norm and the sup-norm.
By convention, throughout this work, $0\cdot\infty = 0$.
For a probability distribution $\pi$ on $\mathbb R^n$, we
denote by $\text{Var}_\pi(\boldsymbol\theta)$ the variance with
respect to $\pi$ defined by $\int_{\mathbb R^n}
\|\boldsymbol\theta\|^2\,\pi(d\boldsymbol\theta) - \|\int_{\mathbb R^n}
\boldsymbol\theta\,\pi(d\boldsymbol\theta)\|^2$. For two probability
distributions $\mu$ and $\nu$ defined on the same
probability space and such that $\mu$ is absolutely
continuous with respect to $\nu$, the Kullback-Leibler
divergence is defined by $D_{\rm KL} (\mu||\nu) =
\int \frac{d\mu}{d\nu}(x)\log \frac{d\mu}{d\nu}(x)\,
\nu(dx)$.
\section{Main result}
This section is devoted to stating and briefly discussing
the main result, the proof being postponed to \Cref{sec:proof}
below. Prior to stating the result, we recall the Bernstein condition. For some $v>0$ and $b\geqslant 0$, we
say that a random variable $\eta$ satisfies the $(v,b)
$-Bernstein condition, if
\begin{align}
\mathbf E[e^{t\eta}] \leqslant \exp\Big\{\frac{vt^2}{2
(1-b|t|)}\Big\},\qquad \forall t\in(-1/b,1/b).
\end{align}
This condition clearly depends only on the distribution of the random variable.
One can check that if $\eta$ satisfies the $(v,b)$-Bernstein
condition, then it is sub-exponential with zero mean, and the
variance of $\eta$ is at most equal to $v$. Many common
distributions satisfy this assumption. For instance, any
sub-Gaussian distribution with variance proxy $\tau$ satisfies
the $(\tau,0)$-Bernstein condition. Any random variable supported
by $[-A,A]$ satisfies the Bernstein condition with $(v,b) =
(A^2,0)$ but also with $(v,b) = (\text{Var}(\eta) , A/3)$
\citep{vershynin_2018}. We will see that the latter is more
useful for our purposes than the former.
Similarly, if $\mathcal F$ is a sigma-algebra and $v$ and $b$
are two $\mathcal F$-measurable random variables, we say that
$\eta$ is $(v,b)$-Bernstein conditionally to $\mathcal F$ if,
almost surely, the inequality $\mathbf E[e^{t\eta}|\mathcal F]
\leqslant \exp\{vt^2/(2(1-b|t|))\}$ is satisfied for every
$t\in\mathbb R$ such that $|t|b<1$.
\begin{theorem}\label{thm}
Let $\pi_0$ be a probability distribution supported
by $S_0\subset\mathbb R^n$ with a diameter measured in
sup-norm bounded by $\mathcal D_0$. Assume that the
distribution of $\bxi$ satisfies the following assumption:
for some sigma algebra $\mathcal F$ and for some
$b:[0,1]\to [0,\infty)$ and continuously differentiable
function $v:[0,1]\to [0,\infty)$ vanishing at the
origin, for every $\alpha \in(0,1]$, there exists
an $n$-dimensional random vector $\bzeta$ such that
\begin{align}\label{cond:zeta}
\mathbf E[\bzeta|\mathcal F] = 0,\qquad \bxi + \bzeta
\stackrel{\mathscr D}{=} (1+\alpha) \bxi .
\end{align}
and, conditionally to $\mathcal F$, the entries $\zeta_i$
are independent and satisfy the $(v(\alpha),b(\alpha))$-
Bernstein condition. Then, for every $\beta \geqslant
2b(0) \mathcal D_0$, we have
\begin{align}
\mathbf E[\|\widehat\boldsymbol\theta-\boldsymbol\theta^*\|^2]
\leqslant \inf_{\pi}\bigg\{\int_{\mathbb R^n}
\|\boldsymbol\theta-\boldsymbol\theta^*\|^2 \,\pi(d\boldsymbol\theta) + \beta
D_{\rm KL}(\pi||\pi_0) \bigg\} +\bigg(\frac{2
v'(0)}{ \beta -2 b(0) \mathcal D_0}
- 1\bigg) \mathbf E[{\rm Var}_{\widehat\pi}(\boldsymbol\theta)],
\label{eq:main}
\end{align}
where the $\inf$ is over all the probability distributions.
As a consequence, for $\beta \geqslant 2v'(0)
+ 2b(0) \mathcal D_0$, we get
\begin{align}
\mathbf E[\|\widehat\boldsymbol\theta-\boldsymbol\theta^*\|^2]
\leqslant \inf_{\pi}\bigg\{\int_{\mathbb R^n}
\|\boldsymbol\theta-\boldsymbol\theta^*\|^2 \,\pi(d\boldsymbol\theta) + \beta
D_{\rm KL}(\pi||\pi_0) \bigg\}.
\label{eq:main1}
\end{align}
\end{theorem}
Let us briefly comment on this result. First, the link
between \eqref{eq:main1} and \eqref{EWA1} might not be
easy to see. It is obtained by considering a prior
distribution $\pi_0$ supported by the finite set $\{\boldsymbol\theta_1,
\ldots,\boldsymbol\theta_m\}$ and by upper bounding the infimum in
\eqref{eq:main1} by the minimum over all the Dirac measures
$\delta_{\boldsymbol\theta_j}$. One easily checks that $D_{\rm KL}(
\delta_{\boldsymbol\theta_j}||\pi_0) = \log(1/\pi_0(j))$, which allows
one to infer \eqref{EWA1} from \eqref{eq:main1}.
Second, one may wonder where the form of the upper bound in
\eqref{eq:main1} comes from. The presence of the KL-divergence
in this bound may seem surprising. The reason is that
there is a deep connection between the KL-divergence and the
exponential weights. Indeed, according to the Varadhan-Donsker variational formula, the ``posterior'' distribution $\widehat\pi$
defined in \eqref{EWA2} is solution to following problem:
\begin{align}\label{eq:DV}
\widehat\pi \in\mathop{\rm argmin}_\pi\limits \bigg\{\int_{
\mathbb R^n} \|\boldsymbol\theta-\bY\|^2 \,\pi(d\boldsymbol\theta) + \beta
D_{\rm KL}(\pi||\pi_0) \bigg\},
\end{align}
where the $\min$ is over all the probability distributions.
This result will be the starting point of the proof.
Finally, one can wonder how restrictive the assumptions of this
theorem are. We will show below that they are satisfied for a
broad class of noise distributions.
\section{Instantiation to some well-known noise
distributions}
The main theorem stated in the previous section requires
a general and a rather abstract condition to be satisfied
by the noise distribution. This section shows that many
distributions encountered in applications satisfy this
assumption with some parameters $v'(0)$ and $b(0)$
which are easy to determine.
\subsection{Centered Bernoulli noise}
Assume that each $\xi_i$
is a centered Bernoulli random variable: it takes the
value $1-\rho_i$ with probability $\rho_i$ and the value
$-\rho_i$ with probability $1-\rho_i$. Here, $\rho_i\in
(0,1)$. Then, one can set
\begin{align}
\mathbf P\big(\zeta_i = \alpha\xi_i\,|\, \xi_i \big)
= \frac{1+\alpha-\alpha|\xi_i|}{\alpha+1}
,\quad \mathbf P\big(\zeta_i = -\mathop{\rm sgn}(\xi_i
)(1+\alpha - \alpha|\xi_i|)\,|\, \xi_i\big) = \frac{
\alpha |\xi_i|}{\alpha+1}.
\end{align}
We see that, conditionally to $\xi_i$, the random variable
$\zeta_i$ is zero mean and takes its values in an interval
of length $\alpha(1-\rho_i) + (1 + \alpha\rho_i)
= 1+\alpha$. This implies that
$\zeta_i$ satisfies the $((1+\alpha)^2/4,0)$-Bernstein
condition, conditionally to $\xi_i$. In other terms,
$\zeta_i$ is sub-Gaussian with variance proxy $(1 +
\alpha)^2/4$. However, this does not help in applying
\Cref{thm}, since the function $v(\alpha) = (1+\alpha)^2/4$
does not vanish at the origin. On the positive side, since the
conditional variance of $\zeta_i$ given $\xi_i$ is smaller than $\alpha(1+\alpha)$ and the support is included in $[-(1+\alpha
), (1+\alpha)]$, the conditional distribution of $\zeta_i$
given $\xi_i$ satisfies the Bernstein condition with
$v(\alpha) = \alpha(1+\alpha)$ and $b(\alpha) = (1+\alpha)/3$,
see \citep[Exercise 2.8.5]{vershynin_2018}. This
yields the following result.
\begin{corollary}
Let $\pi_0$ be a probability distribution supported by
$S_0\subset \mathbb R^n$ such that $\mathcal D_0 =
\sup_{\boldsymbol\theta,\boldsymbol\theta'\in S_0}\|\boldsymbol\theta-\boldsymbol\theta'
\|_\infty <\infty$. Assume that $\bxi$ has
independent entries $\xi_i$ satisfying $\mathbf
P(\xi_i = 1-\rho_i) = 1 - \mathbf P(\xi_i = -\rho_i)
= \rho_i$ for some $\rho_i\in (0,1)$. Then, for every
$\beta \geqslant (2/3)\mathcal D_0$, we have
\begin{align}
\mathbf E[\|\widehat\boldsymbol\theta-\boldsymbol\theta^*\|^2]
\leqslant \inf_{\pi}\bigg\{\int_{\mathbb R^n}
\|\boldsymbol\theta-\boldsymbol\theta^*\|^2 \,\pi(d\boldsymbol\theta) + \beta
D_{\rm KL}(\pi||\pi_0) \bigg\} +\Big(\frac{6}{3
\beta - 2\mathcal D_0} - 1 \Big) \mathbf E[{\rm Var}_{
\widehat\pi} (\boldsymbol\theta)].
\label{eq:bernoulli}
\end{align}
In particular, if $\beta\geqslant 2 + (2/3)\mathcal D_0$,
the last term in \eqref{eq:bernoulli} is nonpositive
and, therefore, can be neglected.
\end{corollary}
This corollary can be used in cases where the observations
$Y_i$ are independent Bernoulli random variables with
mean $\theta_i^*$. In such a situation, it is natural to
choose a prior distribution $\pi_0$ that is concentrated
on the unit hypercube $[0,1]^n$, the diameter of which in
sup-norm is equal to $1$. The corollary implies that in
such a situation the inequality stated in \eqref{eq:main1}
is true provided that $\beta \geqslant 8/3$. We refer the
reader to \citep{Etienne1} for an application of this result
to graphon estimation.
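The distributional identity $\bxi + \bzeta \stackrel{\mathscr D}{=} (1+\alpha)\bxi$ behind this construction is easy to check by simulation; the following Python sketch (ours, with arbitrary values of $\rho$ and $\alpha$) draws $\zeta_i$ from the conditional two-point law displayed above and compares empirical moments:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
rho, alpha, N = 0.3, 0.2, 10**6

# centered Bernoulli noise: 1 - rho w.p. rho, -rho w.p. 1 - rho
xi = np.where(rng.random(N) < rho, 1.0 - rho, -rho)

# draw zeta conditionally on xi from the two-point law above
p_first = (1.0 + alpha - alpha * np.abs(xi)) / (1.0 + alpha)
take_first = rng.random(N) < p_first
zeta = np.where(take_first, alpha * xi,
                -np.sign(xi) * (1.0 + alpha - alpha * np.abs(xi)))

print(np.mean(zeta))                                   # ~ 0
print(np.mean(xi + zeta), np.mean((1 + alpha) * xi))   # means agree
print(np.var(xi + zeta), np.var((1 + alpha) * xi))     # variances agree
\end{verbatim}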
\subsection{Gaussian noise}
In the case of the Gaussian noise $\bxi$ with independent
entries having $0$ mean and variance equal to $\sigma_i^2$,
one can check that the conditions of \Cref{thm} are
satisfied with the random vector $\bzeta$ which is
independent of $\bxi$ and has independent entries
drawn from the Gaussian distribution $\mathcal N(0,
(2\alpha+\alpha^2)\sigma_i^2)$. This means that in the
Bernstein condition one can choose $\mathcal F = \sigma
(\bxi)$, $b = 0$ and $v(\alpha) = (2\alpha + \alpha^2)
\max_{1\leqslant i\leqslant n}\sigma_i^2$, which leads
to the following result.
\begin{corollary}
Let $\pi_0$ be a probability distribution on $\mathbb
R^n$. Assume that $\bxi$ has independent entries
$\xi_i\sim \mathcal N(0,\sigma_i^2)$, $i=1,\ldots,n$.
Then, for every $\beta >0$, we have
\begin{align}
\mathbf E[\|\widehat\boldsymbol\theta-\boldsymbol\theta^*\|^2]
\leqslant \inf_{\pi}\bigg\{\int_{\mathbb R^n}
\|\boldsymbol\theta-\boldsymbol\theta^*\|^2 \,\pi(d\boldsymbol\theta) + \beta
D_{\rm KL}(\pi||\pi_0) \bigg\} +\big(
4\sigma^2\beta^{-1} - 1\big) \mathbf E[{\rm Var}_{
\widehat\pi}(\boldsymbol\theta)],
\label{eq:Gaussian}
\end{align}
where $\sigma = \max_{1\leqslant i\leqslant n} \sigma_i$.
In particular, if $\beta\geqslant 4\sigma^2$, the last
term in \eqref{eq:Gaussian} is nonpositive and, therefore,
can be neglected.
\end{corollary}
Some preliminary versions of this result can be traced back
to \citep{George86,George86b}. In the form \eqref{eq:main1},
and with an
extension to aggregation of projection estimators, the result
appeared in \citep{LeungBarron}. Further generalisations to
various families of linear estimators have been explored in
\citep{DalSal}. The proof of the oracle inequality in all
these papers is very specific to the Gaussian distribution
since it is based on Stein's lemma (integration by parts for
the Gaussian measure). The alternative proof presented in this
work relies on techniques developed in
\citep{DT07,colt_DalalyanT09, Dal_IHP}.
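The Gaussian case is particularly transparent: $\bxi$ and $\bzeta$ being independent, the variances add, and $\sigma_i^2 + (2\alpha+\alpha^2)\sigma_i^2 = (1+\alpha)^2\sigma_i^2$, as required by \eqref{cond:zeta}. A quick numerical check (ours, illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
sigma, alpha, N = 2.0, 0.5, 10**6
xi = rng.normal(0.0, sigma, N)
zeta = rng.normal(0.0, np.sqrt(2 * alpha + alpha**2) * sigma, N)
print(np.var(xi + zeta), ((1 + alpha) * sigma) ** 2)  # ~ (1+a)^2 sigma^2
\end{verbatim}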
\subsection{Bounded noise}
For every $a,b>0$, let $\mathcal B(a,b)$ be the distribution
of a random variable that takes the values $a$ and $-b$ with
probabilities $b/(a+b)$ and $a/(a+b)$, respectively.
If the distribution of $\xi_i$ can be written as a mixture
of the distributions $\mathcal B(a,b)$ with a mixing distribution
with bounded support, then our main theorem can be applied.
More precisely, assume that the distribution of $\xi_i$ is
given by
\begin{align}
p_{\xi_i}(dx) = \int_{0}^A\int_0^B \frac{b\delta_a(dx) +
a\delta_{-b}(dx)}{a+b}\, \nu_i(da,db),
\end{align}
where $\nu_i$ is a probability distribution on $[0,A]
\times [0,B]$.
This means that $\xi_i = \eta_i^{\alpha_i,\beta_i}$
with random variables $(\alpha_i,\beta_i)$ drawn from
$\nu_i$ and $\eta_i^{a,b}$ drawn from the binary distribution
$\frac{b\delta_a(dx) + a\delta_{-b}(dx)}{a+b}$. Akin to the
first subsection of this section, one can choose $\zeta_i^{a,b}$
so that $(1+\alpha)\eta_i^{a,b}$ has the same distribution
as $\eta_i^{a,b} + \zeta_i^{a,b}$, for every pair $(a,b)$.
Then, clearly, $(1+\alpha)\xi_i$ has the same distribution as
$\xi_i+\zeta_i^{\alpha_i,\beta_i}$. Let $\mathcal F$ be the
sigma algebra generated by the random variables $\{(\alpha_i,
\beta_i):i\in[n]\}$ and $\{\eta_i^{a,b}:(a,b)\in[0,A]\times[0,B],i\in[n]\}$.
Conditionally to $\mathcal F$, $\zeta_i^{a,b}$ is a binary
random variable with zero mean that takes its values in an interval of length at most $(1+\alpha)(A+B)$;
hence, it satisfies the Bernstein condition with $b(\alpha) =
(A+B)(1+\alpha)/3$ and $v(\alpha) = (A+B)^2\alpha (1+\alpha)$. Therefore, we get the following result.
\begin{corollary}
Let $\pi_0$ be a probability distribution supported by
$S_0\subset \mathbb R^n$ such that $\mathcal D_0 =
\sup_{\boldsymbol\theta,\boldsymbol\theta'\in S_0}\|\boldsymbol\theta-\boldsymbol\theta'
\|_\infty <\infty$. Assume that $\bxi$ has
independent entries $\xi_i$, $i=1,\ldots,n$, taking
values in an interval $I_i$ of length at most $L$.
Then, for every $\beta \geqslant (2/3)L\mathcal D_0$,
we have
\begin{align}
\mathbf E[\|\widehat\boldsymbol\theta-\boldsymbol\theta^*\|^2]
\leqslant \inf_{\pi}\bigg\{\int_{\mathbb R^n}
\|\boldsymbol\theta-\boldsymbol\theta^*\|^2 \,\pi(d\boldsymbol\theta) + \beta
D_{\rm KL}(\pi||\pi_0) \bigg\} +\Big(\frac{
6L^2}{3\beta - 2L\mathcal D_0} - 1\Big)
\mathbf E[{\rm Var}_{\widehat\pi}(\boldsymbol\theta)].
\label{eq:bounded}
\end{align}
In particular, if $\beta\geqslant 2L^2 + (2/3)L
\mathcal D_0$, the last term in \eqref{eq:bounded}
is nonpositive and, therefore, can be neglected.
\end{corollary}
This result is well suited for the setting
where the components $Y_i$ of the observation $\bY$ are
bounded. For instance, if we know that $\mathbf P(Y_i\in
[0,L]) = 1$ for every $i\in\{1,\ldots,n\}$, then it is also
natural to choose a prior distribution satisfying $\mathcal
D_0 = L$. Inequality \eqref{eq:main1} is then satisfied for
every $\beta\geqslant (8/3)L^2$. Note that, to the best of
our knowledge, this is the first time that such a precise
bound is obtained for asymmetric noise distributions. The
similar result established in \citep[Theorem 2]{Dal_IHP}
deals with symmetric distributions only.
\subsection{Centered binomial noise}
Consider the case where $\xi_i$'s are independent and
drawn from a centered and scaled binomial distribution
$a\mathcal B(k,\rho_i) - a k \rho_i$, where $a>0$ is the
scaling factor. This distribution is a particular case
of distributions supported by a finite interval considered
in the previous subsection. One can therefore apply the
last corollary with $L = ak$. However, this leads to
a bound which is too crude. Indeed, one can use the fact
that $\xi_i $ is equal in distribution to $a(\eta_1+\ldots
+ \eta_k)$ where $\eta_j$'s are iid centered Bernoulli
variables. Defining $\bar\zeta_1,\ldots,\bar\zeta_k$
as independent random variables satisfying
\begin{align}
\mathbf P\big(\bar\zeta_j = \alpha\eta_j\,|\, \eta_j \big)
= \frac{1+\alpha-\alpha|\eta_j|}{\alpha+1}
,\quad \mathbf P\big(\bar\zeta_j = -\mathop{\rm sgn}(
\eta_j)(1+\alpha - \alpha|\eta_j|)\,|\,\eta_j\big)
= \frac{\alpha |\eta_j|}{\alpha+1},
\end{align}
one easily checks that $\eta_j + \bar\zeta_j$ has the same
distribution as $(1+\alpha)\eta_j$. Therefore,
$\xi_i + \zeta_i$, for $\zeta_i = a(\bar\zeta_1+\ldots +
\bar\zeta_k)$, has the same distribution as $(1+\alpha)
\xi_i$. Furthermore, conditionally to the sigma-algebra
generated by $\{\eta_1,\ldots,\eta_k\}$, $\zeta_i$
has zero mean and satisfies the Bernstein condition with
$b(\alpha)=a(1+\alpha)/3$ and $v(\alpha) = a^2k\alpha
(1+\alpha)$.
\begin{corollary}
Let $\pi_0$ be a probability distribution supported by
$S_0\subset \mathbb R^n$ such that $\mathcal D_0 =
\sup_{\boldsymbol\theta,\boldsymbol\theta'\in S_0}\|\boldsymbol\theta-\boldsymbol\theta'
\|_\infty <\infty$. Assume that $\bxi$ has
independent entries $\xi_i$, $i=1,\ldots,n$, drawn
from the scaled and centered binomial distribution
$a(\mathcal B(k,\rho_i) - k\rho_i)$. Then, for every
$\beta \geqslant (2/3)a\mathcal D_0$, we have
\begin{align}
\mathbf E[\|\widehat\boldsymbol\theta-\boldsymbol\theta^*\|^2]
\leqslant \inf_{\pi}\bigg\{\int_{\mathbb R^n}
\|\boldsymbol\theta-\boldsymbol\theta^*\|^2 \,\pi(d\boldsymbol\theta) + \beta
D_{\rm KL}(\pi||\pi_0) \bigg\} +\bigg(\frac{6a^2k
}{3\beta - 2a\mathcal D_0} - 1\bigg) \mathbf E[{\rm
Var}_{\widehat\pi}(\boldsymbol\theta)]. \label{eq:binomial}
\end{align}
In particular, if $\beta\geqslant 2a^2k + (2/3)a
\mathcal D_0$, the last term in \eqref{eq:binomial}
is nonpositive and, therefore, can be neglected.
\end{corollary}
A typical application of this result concerns the case
of observing the average of $k$ Bernoulli variables,
that is $Y_i\sim (1/k)\mathcal B(k,\theta_i^*)$. In
this case, all the $\theta_i^*$ belong to $[0,1]$ and,
therefore, it is reasonable to choose a prior
distribution $\pi_0$ supported by $[0,1]^n$. This
ensures that $\mathcal D_0 \leqslant 1$, and, therefore,
inequality \eqref{eq:main1} follows from the last
corollary provided that $\beta\geqslant 8/(3k)$ (this
is obtained by choosing $a = 1/k$).
\subsection{Double exponential noise}
All the previous examples considered in this section
are distributions with sub-Gaussian tails. Let
us check that \Cref{thm} can also be applied to
some distributions that have heavier,
sub-exponential, tails. Let $\xi_i$ be independent
drawn from the Laplace distribution\footnote{This
means that the density of $\xi_i$ is equal to
$(2\mu_i)^{-1}\exp(-|x|/\mu_i)$.} with parameters $\mu_i
>0$, $i=1,\ldots,n$. Then, one can choose $\mathcal
F = \sigma(\bxi)$ and $\zeta_1,\ldots, \zeta_n$ to be
independent, independent of $\bxi$, and drawn from
the distribution $\frac1{(1+\alpha )^{2}}\delta_0
+ \frac{2\alpha + \alpha^2}{(1 + \alpha)^2}
\textsf{Lap}((1+\alpha)\mu_i)$. The fact that
$\xi_i+\zeta_i$ has the same distribution as $(1 +
\alpha)\xi_i$ can be checked by computing the
characteristic functions of these variables and by
verifying that they are equal. As for the Bernstein
condition, for every $t$ such that $(1+\alpha)\mu_i
|t| < 1$ we have
\begin{align}
\mathbf E[e^{t\zeta_i}] &= \frac1{(1+\alpha)^2} +
\frac{2\alpha + \alpha^2}{(1+\alpha)^2} \times
\frac{1}{1-(1+\alpha)^2 t^2\mu_i^2}\qquad
\big(p:=1-(1+\alpha)^{-2}, z:= (1+\alpha)t
\mu_i\big)\\
& = 1-p + \frac{p}{1-z^2} = 1+ \frac{pz^2}{
1-z^2} \leqslant 1+ \frac{pz^2}{1-|z|} \\
&\leqslant \exp\Big\{\frac{pz^2}{1-|z|}\Big\}
= \exp\Big\{\frac{\alpha(2+\alpha)\mu_i^2
t^2}{1-(1+\alpha) \mu_i |t|}\Big\}
\end{align}
This means that the (conditional) Bernstein condition
is satisfied with $v(\alpha) = \alpha(2+\alpha)
\mu^2$ and $b(\alpha) = (1+\alpha)\mu$, where
$\mu$ is the largest value among $\mu_i$.
\begin{corollary}
Let $\pi_0$ be a probability distribution
supported by $S_0\subset \mathbb R^n$ such that
$\mathcal D_0 = \sup_{\boldsymbol\theta,\boldsymbol\theta'\in S_0}
\|\boldsymbol\theta-\boldsymbol\theta'\|_\infty <\infty$. Assume
that $\bxi$ has independent entries $\xi_i$,
$i=1,\ldots,n$, drawn from the Laplace
distribution $\textsf{Lap}(\mu_i)$. Set $
\mu = \max_{1\leqslant i \leqslant n}\mu_i$.
Then, for every $\beta \geqslant 2\mu
\mathcal D_0$, we have
\begin{align}
\mathbf E[\|\widehat\boldsymbol\theta-\boldsymbol\theta^*\|^2]
\leqslant \inf_{\pi}\bigg\{\int_{\mathbb
R^n} \|\boldsymbol\theta-\boldsymbol\theta^*\|^2 \,\pi
(d\boldsymbol\theta) + \beta D_{\rm KL}(\pi||\pi_0)
\bigg\} +\Big(\frac{4\mu^2}{\beta -
2\mu\mathcal D_0} - 1\Big) \mathbf E [
{\rm Var}_{\widehat\pi}(\boldsymbol\theta)].
\label{eq:Laplace}
\end{align}
In particular, if $\beta\geqslant 4\mu^2 +
2\mu \mathcal D_0$, the last term in
\eqref{eq:Laplace} is nonpositive and,
therefore, can be neglected.
\end{corollary}
The last claim improves on
\citep[Prop.\ 1]{DT08}, since the latter
requires the condition $\beta\geqslant
(16\mu^2) \vee (\sqrt{8}\,\mu \mathcal D_0)$.
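As in the Bernoulli case, the decomposition used above is easy to check by simulation; the sketch below (ours, illustrative) draws $\zeta_i$ from the zero-inflated Laplace mixture and compares sample quantiles of $\xi_i+\zeta_i$ and $(1+\alpha)\xi_i$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
mu, alpha, N = 1.0, 0.3, 10**6

xi = rng.laplace(scale=mu, size=N)

# zeta ~ (1+alpha)^{-2} delta_0 + (1 - (1+alpha)^{-2}) Lap((1+alpha) mu)
spike = rng.random(N) < 1.0 / (1.0 + alpha) ** 2
zeta = np.where(spike, 0.0,
                rng.laplace(scale=(1.0 + alpha) * mu, size=N))

q = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(xi + zeta, q))            # ~ quantiles of (1+alpha) xi
print(np.quantile((1.0 + alpha) * xi, q))
\end{verbatim}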
\begin{remark}
Let us finally remark that the construction
of $\zeta_i$'s used in this section can be
extended to the case where $\xi_i$'s are
scale-mixtures of Laplace distributions with
a mixing density supported by a compact set.
The only modification in the statement of the
final result should be the definition of $\mu$,
which should correspond to the smallest real
number such that the mixing density has no mass
in $(\mu,\infty)$. Similar extension can be
carried out in the case of scale-mixtures
of Gaussians.
\end{remark}
\section{Proof of \Cref{thm}}\label{sec:proof}
Since $\widehat\pi$ minimizes the criterion $\pi\mapsto
\int_{\mathbb R^n} \|\bY-\boldsymbol\theta\|^2\,\pi(d\boldsymbol\theta) +
\beta D_{\rm KL}(\pi||\pi_0)$, we have
\begin{align}
\int_{\mathbb R^n} \|\bY-\boldsymbol\theta\|^2\,\widehat\pi
(d\boldsymbol\theta) + \beta D_{\rm KL}(\widehat\pi||\pi_0)
\leqslant \int_{\mathbb R^n} \|\bY-\boldsymbol\theta\|^2
\,\pi(d\boldsymbol\theta) + \beta D_{\rm KL}(\pi||\pi_0)
\label{eq:2}
\end{align}
for all densities $\pi$ over $\mathbb R^n$. The
KL-divergence being always nonnegative, we infer from
the last display that
\begin{align}
\|\bY-\widehat\boldsymbol\theta\|^2 & = \int_{\mathbb R^n} \|\bY -
\boldsymbol\theta\|^2\,\widehat\pi(d\boldsymbol\theta) - \int_{\mathbb R^n}
\|\boldsymbol\theta - \widehat\boldsymbol\theta\|^2\,\widehat\pi(d\boldsymbol\theta)\\
&\leqslant \int_{\mathbb R^n} \|\bY-\boldsymbol\theta\|^2
\,\pi(d\boldsymbol\theta) + \beta D_{\rm KL}(\pi||\pi_0)
- \int_{\mathbb R^n} \|\boldsymbol\theta - \widehat\boldsymbol\theta\|^2
\, \widehat\pi(d\boldsymbol\theta).\label{eq:3}
\end{align}
Using the decompositions $\|\bY-\widehat\boldsymbol\theta\|^2 =
\|\widehat\boldsymbol\theta-\boldsymbol\theta^*\|^2 -2(\widehat\boldsymbol\theta-\boldsymbol\theta^*)^\top\bxi
+\|\bxi\|^2$ and $\|\bY-\boldsymbol\theta\|^2 = \|\boldsymbol\theta-\boldsymbol\theta^*\|^2
+2(\boldsymbol\theta^* - \boldsymbol\theta)^\top\bxi + \|\bxi\|^2$ and taking the
expectation of the two sides of \eqref{eq:3} (note that the assumptions of the theorem imply $\mathbf E[\bxi]=0$), we get
\begin{align}
\mathbf E[\|\widehat\boldsymbol\theta-\boldsymbol\theta^*\|^2] -
2\mathbf E[(\widehat\boldsymbol\theta-\boldsymbol\theta^*)^\top\bxi]
\leqslant \int_{\mathbb R^n} \|\boldsymbol\theta-\boldsymbol\theta^*\|^2
\,\pi(d\boldsymbol\theta) + \beta D_{\rm KL}(\pi||\pi_0) -
\mathbf E\bigg[\int_{\mathbb R^n} \|\boldsymbol\theta - \widehat\boldsymbol\theta\|^2
\, \widehat\pi(d\boldsymbol\theta)\bigg]
\end{align}
which can be equivalently written as
\begin{align}
\mathbf E[\|\widehat\boldsymbol\theta-\boldsymbol\theta^*\|^2]
\leqslant \int_{\mathbb R^n} \|\boldsymbol\theta-\boldsymbol\theta^*\|^2
\,\pi(d\boldsymbol\theta) + \beta D_{\rm KL}(\pi||\pi_0)
+ 2\mathbf E[\widehat\boldsymbol\theta{}^\top\bxi]
- \mathbf E\bigg[\int_{\mathbb R^n} \|\boldsymbol\theta - \widehat\boldsymbol\theta\|^2
\,\widehat\pi(d\boldsymbol\theta)\bigg].\label{eq:1}
\end{align}
In addition, we have
\begin{align}
2\mathbf E[\widehat\boldsymbol\theta{}^\top\bxi] = \frac\beta{\alpha}
\mathbf E\bigg[\int_{\mathbb R^n} \log e^{2(\alpha/\beta)
\boldsymbol\theta^\top \bxi}\widehat\pi(d\boldsymbol\theta)\bigg],
\end{align}
where $\alpha>0$ is an arbitrary number. Since the logarithm
is concave, the Jensen inequality yields
\begin{align}
2\mathbf E[\widehat\boldsymbol\theta{}^\top\bxi] &\leqslant \frac\beta{
\alpha} \mathbf E \bigg[ \log\bigg(\int_{\mathbb R^n}
e^{2(\alpha/\beta) \boldsymbol\theta^\top \bxi} \widehat\pi(d\boldsymbol\theta)
\bigg) \bigg] \\
&= \frac\beta{\alpha} \mathbf E \bigg[\log\bigg(\int_{
\mathbb R^n} e^{2(\alpha/\beta) \boldsymbol\theta^\top \bxi -
\|\boldsymbol\theta^*+\bxi - \boldsymbol\theta\|^2/\beta} \, \pi_0
(d\boldsymbol\theta)\bigg) - \log\bigg(\int_{\mathbb R^n}
e^{- \|\boldsymbol\theta^*+\bxi - \boldsymbol\theta\|^2/\beta} \,
\pi_0(d\boldsymbol\theta) \bigg) \bigg]\\
&= \frac\beta{\alpha} \mathbf E \bigg[ \log\bigg(\int_{
\mathbb R^n} e^{(2(1+\alpha)\boldsymbol\theta^\top \bxi
- \|\boldsymbol\theta^* - \boldsymbol\theta\|^2)/\beta} \,\pi_0
(d\boldsymbol\theta)\bigg) - \log\bigg(\int_{\mathbb R^n}
e^{(2 \boldsymbol\theta^\top \bxi- \|\boldsymbol\theta^* - \boldsymbol\theta\|^2)
/\beta} \, \pi_0(d\boldsymbol\theta)\bigg) \bigg]\label{eq:4}
\end{align}
Let $\bzeta = \bzeta_\alpha$ be the $n$ dimensional random
vector whose existence is required in the statement
of the theorem. Recall that it satisfies
\begin{align}
\mathbf E[\bzeta|\mathcal F] = 0,\qquad \bxi + \bzeta
\stackrel{\mathscr D}{=} (1+\alpha) \bxi.
\end{align}
These conditions imply that in the first expectation
in \eqref{eq:4}, one can replace $(1+\alpha)\bxi$
by $\bxi+\bzeta$, which yields
\begin{align}
2\mathbf E[\widehat\boldsymbol\theta{}^\top\bxi] &\leqslant
\frac\beta{\alpha} \mathbf E \bigg[ \log\bigg(\int_{
\mathbb R^n} e^{(2\boldsymbol\theta^\top\bxi + 2\boldsymbol\theta^\top
\bzeta - \|\boldsymbol\theta^* - \boldsymbol\theta\|^2)/\beta} \,
\pi_0(d\boldsymbol\theta)\bigg) \bigg] - \frac\beta{\alpha}
\mathbf E \bigg[ \log\bigg(\int_{\mathbb R^n} e^{(2
\boldsymbol\theta^\top \bxi- \|\boldsymbol\theta^* - \boldsymbol\theta\|^2)/\beta} \,\pi_0(d\boldsymbol\theta)\bigg) \bigg]\\
& = \frac\beta{\alpha} \mathbf E \bigg[ \log\bigg( \int_{
\mathbb R^n} e^{2\boldsymbol\theta^\top\bzeta/\beta} \,\widehat\pi
(d\boldsymbol\theta)\bigg) \bigg]
= \frac\beta{\alpha} \mathbf E \bigg[ \log\bigg( \int_{
\mathbb R^n} e^{2(\boldsymbol\theta-\widehat\boldsymbol\theta)^\top\bzeta/\beta}
\,\widehat\pi(d\boldsymbol\theta)\bigg) \bigg].
\label{eq:5}
\end{align}
Since, conditionally to $\mathcal F$, the $\zeta_i$'s are independent
and each $\zeta_i$ satisfies the $(v(\alpha), b(\alpha))
$-Bernstein condition, one can use the Jensen inequality
to upper bound the expectation in \eqref{eq:5} as follows
\begin{align}
2\mathbf E[\widehat\boldsymbol\theta{}^\top\bxi] & \leqslant
\frac\beta{\alpha} \mathbf E \bigg[ \log\bigg( \int_{
\mathbb R^n} \mathbf E[e^{2(\boldsymbol\theta-\widehat\boldsymbol\theta)^\top\bzeta
/\beta} |\mathcal F]\,\widehat\pi (d\boldsymbol\theta)\bigg) \bigg]\\
&\leqslant \frac\beta{\alpha} \mathbf E \bigg[
\log\bigg( \int_{ \mathbb R^n} \exp\Big\{\frac{2
\|\boldsymbol\theta - \widehat\boldsymbol\theta\|^2 v(\alpha)}{\beta(\beta
- 2 b(\alpha) \|\boldsymbol\theta-\widehat\boldsymbol\theta\|_\infty)}\Big\}
\,\widehat\pi (d\boldsymbol\theta) \bigg) \bigg]\label{eq:6}
\end{align}
for every $\beta$ satisfying $\beta\geqslant 2b(\alpha)
\|\boldsymbol\theta - \boldsymbol\theta'\|_\infty$ for all $\boldsymbol\theta,\boldsymbol\theta'
\in S_0 := \text{supp}(\pi_0)$. Note that for every
$\boldsymbol\theta\in S_0$, we have $\|\boldsymbol\theta-\widehat\boldsymbol\theta\|_\infty
\leqslant \mathcal D_0$. The inequality in
\eqref{eq:6} being true for any $\alpha>0$, one can
check that
\begin{align}
2\mathbf E[\widehat\boldsymbol\theta{}^\top\bxi]
&\leqslant \liminf_{\alpha\to 0}\frac\beta{\alpha}
\mathbf E \bigg[ \log\bigg( \int_{ \mathbb R^n}
\exp\Big\{\frac{2\|\boldsymbol\theta - \widehat\boldsymbol\theta\|^2
v(\alpha)}{\beta( \beta - 2 b(\alpha)\|\boldsymbol\theta -
\widehat\boldsymbol\theta\|_\infty)}\Big\}\,\widehat\pi (d\boldsymbol\theta)
\bigg) \bigg]\\
& = \mathbf E \bigg[ \int_{ \mathbb R^n} \frac{2
\|\boldsymbol\theta - \widehat\boldsymbol\theta\|^2 v'(0)}{\beta - 2b(0)
\|\boldsymbol\theta - \widehat\boldsymbol\theta\|_\infty}\,\widehat\pi
(d\boldsymbol\theta) \bigg]
\leqslant \frac{2 v'(0)}{\beta - 2 b(0)
\mathcal D_0}\mathbf E \bigg[\int_{\mathbb R^n}
\|\boldsymbol\theta - \widehat\boldsymbol\theta\|^2\,\widehat\pi (d\boldsymbol\theta)
\bigg]. \label{eq:7}
\end{align}
Combining \eqref{eq:1} and \eqref{eq:7}, we see that
\begin{align}
\mathbf E[\|\widehat\boldsymbol\theta-\boldsymbol\theta^*\|^2]
\leqslant \int_{\mathbb R^n} \|\boldsymbol\theta-\boldsymbol\theta^*\|^2
\,\pi(d\boldsymbol\theta) + \beta D_{\rm KL}(\pi||\pi_0)
+\bigg(\frac{2 v'(0)}{\beta - 2 b(0) \mathcal
D_0}- 1\bigg) \mathbf E[\text{Var}_{\widehat\pi}
(\boldsymbol\theta)].\label{eq:8}
\end{align}
This completes the proof.
\paragraph*{Acknowledgments} The work of the author
was supported by the grant Investissements d’Avenir
(ANR-11-IDEX-0003/Labex Ecodec/ANR-11-LABX-0047),
the FAST Advance grant and the center Hi! PARIS.
\section{Introduction}
\label{sec:introduction}
In applications like X-ray computed
tomography (CT)~\cite{ct} and
magnetic resonance imaging
(MRI)~\cite{5484183}, reconstructing
images from undersampled or corrupted
observations is of critical importance.
For example, this
is necessary to
reduce a patient's exposure
to radiation in CT or reduce time spent acquiring MRI data.
MRI scans involve sequential data acquisition resulting in
long acquisition times that are not only a burden for patients and hospitals, but also make MRI susceptible to motion artifacts.
Reconstructing images from limited measurements can speed up the MRI scan, but
usually entails
solving an ill-posed inverse problem.
Recent approaches to accelerating MRI acquisition such as compressed
sensing (CS)~\cite{compress} reduce scan time by
collecting fewer measurements while preserving image quality by exploiting image priors or regularizers.
Historically, regularization in CS-MRI
has been based on
sparsity of wavelet coefficients~\cite{wave} or using total
variation~\cite{totalv}.
While conventional CS assumes sparse or incoherent signals, approaches based on learned image models have been shown to be more effective for MRI reconstruction, starting with learned synthesis dictionaries~\cite{ravishankar2011dlmri,jacob2013blindCSMRI}.
The dictionary parameters could be learned from unpaired clean image patches from a dataset and used for reconstruction or learned simultaneously with image reconstruction~\cite{CTdict,super,ravishankar2011dlmri}.
Additionally, recent advances in sparsifying transform learning have resulted in efficient or inexpensive data-adaptive sparsity-based reconstruction frameworks for MRI~\cite{ravishankar2012learning,ravishankar2020review,wensailukebres19}. Other
contemporary techniques could allow learning explicit regularizers in a supervised manner~\cite{9747201} for improved image restoration.
Convolutional neural networks have gained popularity in recent
years as a result of the introduction of deep learning-based
approaches for computer vision applications. They are now used for
image denoising, classification, segmentation, and, more recently,
medical image reconstruction.
The U-net~\cite{Unet,jin:17:dcn} is a deep convolutional neural
network (or CNN) that is frequently used in imaging applications.
It has a contracting path for context acquisition and an expanding
symmetric path for precise localization.
There are numerous other network models
available, including the Transformer
architecture, which enables sharing
representations and transmitting features
across multiple tasks in
order to obtain high-quality,
super-resolved, and motion-artifact-free
images from highly undersampled and
degraded MRI
data~\cite{transfomer2021task}.
Due to the popularity and computational
efficiency of deep learning
methodologies, there has been an
increasing trend toward the use of deep
supervised methods in MRI applications.
MR image reconstruction techniques that are either supervised or
unsupervised include image-domain (denoising) methods,
sensor-domain methods, AUTOMAP, and hybrid-domain methods (cf.\ the
review in~\cite{ravishankar2020review}).
To improve stability and performance, hybrid-domain methods (e.g.,~\cite{modl}) enforce data consistency (i.e., the reconstruction is enforced to be consistent with the measurement model) throughout
training and reconstruction.
Networks with data consistency layers play an
important role in MR imaging to keep the reconstructed image consistent with
the original image in k-space~\cite{Zheng2019twodataconsist,casade2017deep}.
These include deep unrolling-based methods~\cite{sun2016deep,hammernik2018learning} (that unroll a traditional iterative algorithm and learn the regularization parameters therein),
regularization by denoising~\cite{romanoRED17}, plug-and-play methods~\cite{buzzard:18:pap}, etc.
For example, Yang
et al.~\cite{yang2017admmnet} proposed
ADMM-CSNet to learn the parameters of the
ADMM algorithm via neural networks.
ISTA-Net was similarly inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $\ell_1$ norm CS reconstruction
model~\cite{zhang2018istanet}.
Moreover,
transfer learning~\cite{transfer_learning_MRI}, a well-known technique, can also
enable the use of neural networks for MRI reconstruction without a need for extensive datasets.
While these CNN-based
reconstruction methods have outperformed traditional compressed sensing (CS) methods, there remain concerns about their stability and interpretability~\cite{AIconcern}.
Apart from algorithmic advances, another
driving force behind deep
learning-based reconstruction is the
rapid growth of publicly available
training datasets. The availability
of (paired or unpaired) training data sets made possible by
efforts like OCMR~\cite{chen2020ocmr} and
fastMRI~\cite{zbontar2019fastmri} has
enabled rapidly demonstrating the capacity of deep
learning-based algorithms for
improved image reconstruction
or denoising quality in MRI applications.
While many deep learning approaches are based on pixel-by-pixel supervised learning, they often require large, paired data sets and long training times to learn models with a large number of parameters.
Some works have focused on unsupervised learning such as generative adversarial network (GAN)-based methods~\cite{Gans, compresscyclic_loss,OT-cycleGAN-19}, but may still require large data sets.
A recent deep learning-based approach is the deep image prior~\cite{ulyanov2018deep}, which has been applied to MRI~\cite{DIP_MRI} and learns a neural network for reconstruction in an unsupervised fashion from a single image's measurements. A related approach dubbed self-supervised learning has also shown promise for MRI~\cite{akcakaya20} and uses a large unpaired data set.
\vspace{-0.1in}
\subsection{Contributions}
While deep learning approaches have gained popularity for MRI reconstruction due to their ability to model complex data sets, they often have difficulties generalizing to new data or distinct experimental situations at test time.
Deep CNNs usually require enormous datasets to ensure adequate performance trade-offs.
In this work, we propose to learn adaptive LOcal NeighborhooD-based Networks for MRI (LONDN-MRI) reconstruction. The approach efficiently learns reconstruction networks from small clusters in flexible training sets and directly at reconstruction time.
The models are trained using a small number of adaptively chosen neighbors that are in close proximity (or similar in a sense) to the underlying image to be reconstructed (cf.~\cite{lahiri:20:csa} for a slightly related approach in the context of patch-based dictionary learning).
The proposed algorithm for image reconstruction alternates between finding a small set of similar images to a current reconstruction, and training the network locally on such neighbors and updating the reconstruction. We show connections of this algorithm to a challenging bilevel optimization problem.
The proposed local learning approach can be combined with any existing deep learning MRI framework to improve its performance (e.g., unrolled networks, image-domain denoisers such as the U-Net or DIDN, etc.).
Through several experiments on the FastMRI data set, we demonstrate that the proposed learned model is extremely adaptable to individual scans as well as
changes in experimental conditions (e.g., sampling masks), training set modifications, and so on.
Our results show that the proposed local adaptation technique outperforms popular networks (e.g., the MoDL method~\cite{modl}) trained on larger (global) datasets at a variety of MRI k-space undersampling factors.
This work is a substantial extension of our recent conference work~\cite{LONDN}, with comprehensive experiments and evaluations as well as theoretical understanding.
To better understand the proposed model, we
perform a variety of studies including on the effect of different similarity/distance metrics (e.g., euclidean or $\ell_2$ distance, Manhattan or $\ell_1$ distance, normalized cross-correlation), or weight regularization during learning, etc.
\vspace{-0.1in}
\subsection{Organization}
The rest of this article is organized as follows. Section~\ref{section2} discusses some preliminaries on multi-coil MRI reconstruction and the approach for searching neighbors that will be used in our algorithm.
Section~\ref{section3} describes the proposed technique and its interpretations.
Section~\ref{section4} presents the experimental setup and results. Section~\ref{section5} provides a discussion of our findings, and in Section~\ref{section6}, we conclude.
\section{Preliminaries} \label{section2}
\subsection{Multi-coil MRI Reconstruction}
When an image $\xmath{\bm{x}} \in \mathbb{C}^q$ (vectorized) is sufficiently
sparse in some transform domain and the transform is sufficiently incoherent with the measurement operator, the
theory of compressed
sensing~\cite{compress,lustig:07:smt} enables
accurate image recovery from limited measurements.
The image
reconstruction problem in MRI is typically
formulated as an optimization of a data-fidelity penalty and a regularizer as follows:
\begin{equation}\label{eq:inv_pro}
\hat{\xmath{\bm{x}}}=\underset{\xmath{\bm{x}}}{\arg\min} ~\sum_{c=1}^{N_c}\|\mathbf{A}_c \xmath{\bm{x}} - \xmath{\bm{y}}_c \|^{2}_2 + \lambda \mathcal{R}(\xmath{\bm{x}}),
\end{equation}
where $\xmath{\bm{y}}_c \in \mathbb{C}^p, \ c=1, \ldots,
N_c,$ represent the acquired k-space measurements
from $N_c$ coils.
We write the imaging forward operator or measurement operator as $\mathbf{A}_c = \mathbf{M} \mathcal{F} \mathbf{S}_c$,
where $\mathbf{M} \in \{0,1\}^{p\times q}$ is a masking operator that captures the undersampling pattern in k-space,
$\mathcal{F}\in \mathbb{C}^{q\times q}$ is
the Fourier transform operator (corresponding to densely sampled measurements), and $\mathbf{S}_c \in
\mathbb{C}^{q\times q}$ is the $c$th
coil-sensitivity
matrix (a diagonal matrix).
Additionally, the regularizer above may include a slew
of terms capturing the assumed model of the underlying image.
It enables enforcing desirable properties such as spatial smoothness, image sparsity, or edge preservation in the reconstructed image.
Numerous iterative optimization techniques exist for~\eqref{eq:inv_pro}.
In MRI, the regularizer can involve $\ell_1$ penalty on wavelet coefficients~\cite{wave} or a total variation penalty~\cite{totalv} or patch-based sparsity in learned dictionaries~\cite{ravishankar2011dlmri} or sparsifying transforms~\cite{wensailukebres19}, or proximity to deep learning-based reconstructions, etc.
\revise{For example, sparsity w.r.t. a known transform matrix $\mathbf{W}$ is captured by $\mathcal{R}(\xmath{\bm{x}}) = \| \mathbf{W} \xmath{\bm{x}}\|_{1}$.}
\vspace{-0.1in}
\subsection{Neighbor Search}
Our approach relies on finding images in a data set that are in a sense similar to the one being reconstructed. The similarity may be defined using a metric such as euclidean distance or other metrics.
Assume we have a
data set $\left \{ \xmath{\bm{x}}_n, \xmath{\bm{y}}_n \right
\}_{n=1}^N$ with $N$ reference or
ground-truth images
$\xmath{\bm{x}}_n$ and their corresponding k-space measurements $\xmath{\bm{y}}_n$ (with multi-coil data), we use the distance metric $d$ to find the $k$
nearest neighbors to an (estimated/reconstructed) image $\xmath{\bm{x}}$ as follows:
\begin{equation}
\label{eq:bm_formulation}
\begin{aligned}
\hat{C}_{\xmath{\bm{x}}} =\underset{C \in \mathcal{C}, |C|= k}{\arg\min}
\sum_{n\in C}d(\xmath{\bm{x}},\xmath{\bm{x}}_n),
\end{aligned}
\end{equation}
where $C$ is a set of cardinality $k$ containing indices of feasible neighbors, and $\mathcal{C}$ denotes the set of all such sets with $k$ elements.
Different distance functions could produce a different set of
similar neighbors, which could then affect the outcome of the
reconstruction algorithm, as our network modeling is
dependent on the choice of the local data set.
As a result, we used different
metrics for evaluating our approach in this work.
The distances serve as a proxy
for data similarity, with nearby data considered similar
and distant data considered dissimilar. We used
the euclidean distance, Manhattan distance, and normalized cross-correlation as distance metrics as follows.
\begin{align*}
& d^{L1}(\xmath{\bm{x}},\xmath{\bm{x}}_n) = \| \xmath{\bm{x}} -\xmath{\bm{x}}_n\|_1 \\
& d^{L2}(\xmath{\bm{x}},\xmath{\bm{x}}_n) = \|\xmath{\bm{x}} -\xmath{\bm{x}}_n\|_2 \\
& d^{NCC}(\xmath{\bm{x}},\xmath{\bm{x}}_n) = \frac{\big| \xmath{\bm{x}}^{H}\xmath{\bm{x}}_n \big|}{\| \xmath{\bm{x}} \|_2 \, \| \xmath{\bm{x}}_n\|_2 }
\end{align*}
In all cases, we select
the top $k$ most similar neighbors from a set: for the $\ell_1$ and
$\ell_2$ metrics, these correspond to the $k$ smallest distances
in~\eqref{eq:bm_formulation}, whereas the normalized cross-correlation
is a similarity measure, so the $k$ largest values of $d^{NCC}$ are selected.
The indices of the chosen images are in the set $\hat{C}_{\xmath{\bm{x}}}$, i.e., they are the minimizer in~\eqref{eq:bm_formulation}.
These neighbors can be used to train
the local model.
These are expected to
capture structures most similar to the image being reconstructed, enabling a highly effective reconstruction model to be learned.
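A minimal NumPy sketch of this step is given below (ours, for illustration; the function name \texttt{find\_neighbors} and the in-memory data layout are assumptions):
\begin{verbatim}
import numpy as np

def find_neighbors(x, dataset, k, metric="l2"):
    # x: (q,) current image estimate; dataset: (N, q) training images
    if metric == "l1":
        d = np.sum(np.abs(dataset - x), axis=1)
    elif metric == "l2":
        d = np.linalg.norm(dataset - x, axis=1)
    elif metric == "ncc":   # similarity: negate so that smaller is better
        d = -np.abs(dataset.conj() @ x) / (
            np.linalg.norm(dataset, axis=1) * np.linalg.norm(x))
    else:
        raise ValueError(metric)
    return np.argsort(d)[:k]    # indices of the k most similar images
\end{verbatim}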
\vspace{-0.05in}
\section{Proposed LONDN-MRI Algorithm} \label{section3}
Our primary objective is to learn an adaptive neural network for MRI reconstruction,
in which the model's free parameters are
fitted using training data that are similar in a sense to the current scan.
We emphasize that the proposed
model is local in the sense that it changes in
response to the input.
The
advantage of the proposed method is that the model is fit for every scan, and can thus readily adapt to the scan, handle changes in sampling masks, etc.
The algorithm begins by obtaining an initial estimate of the underlying image, denoted $\xmath{\bm{x}}^0$, from undersampled measurements $\xmath{\bm{y}}$.
Our proposed strategy then
alternates between computing the closest neighbors
to the reconstruction
in the training set and performing
CNN-based supervised learning on the estimated local
dataset.
During supervised learning, the network weights could be randomly initialized or could be warm started with the weights of a pre-trained (e.g., state-of-the-art) network. In the latter case, the pre-trained network would adapt to the features of images similar to the one being reconstructed (akin to transfer learning~\cite{transfer_learning_MRI}).
In each iteration, the nearest ground truth images in the training
set are computed in relation to the reconstruction
(estimate) predicted by the locally learned network,
except in the first iteration, when the nearest neighbors
are computed in relation to the (typically highly aliased)
initial $\xmath{\bm{x}}^{0}$ (we used corresponding aliased images in the dataset for computing distances in the first iteration).
In
practice, pairwise distances to even a
large number of
images can be computed very efficiently (in parallel),
after which the local network can be
rapidly learned on a
small set of neighbors (typically a
shallow network or with early stopping).
The network weights for
deep reconstruction are constantly updated to map the
initial images for the local data set to the target
(ground truth) versions.
To demonstrate our approach, we used
the state-of-the-art deep CNN
reconstruction model~MoDL~\cite{modl},
which is trained locally in our scheme.
Additionally, we trained it globally, i.e., once on a
larger dataset,
in
order to compare it to our on-the-fly
neighborhood-based
learning scheme.
For completeness, we briefly recap the MoDL
scheme in the
following and discuss its local
training within our framework.
MoDL is similar to the plug-and-play approach, except that
instead of pre-trained denoiser networks, end-to-end
training is used to learn the shared network
weights across iterations in the architecture.
\vspace{-0.1in}
\subsection{Network Model and Training}
The proposed approach is compatible with any
network architecture. We use MoDL, which has shown promise for MR image reconstruction, and combines a
denoising network with a data consistency (DC) module in each iteration of an unrolled architecture.
MoDL unrolls
alternating minimization for the following problem:
\begin{equation}
\text{L}_a(\xmath{\bm{z}},\xmath{\bm{x}}) := \nu \sum_{c=1}^{N_c} \|\blmath{A}_c \xmath{\bm{x}} - \xmath{\bm{y}}_c\|_2^2 + \mathcal{R}(\xmath{\bm{z}})+\mu\|\xmath{\bm{x}} - \xmath{\bm{z}}\|_{2}^2.
\label{eq:altmin}
\end{equation}
We denote the initial image in the process as $\xmath{\bm{x}}^{0}$,
$\nu \geq 0$ weights the data-consistency term above, and $\mu \geq 0$ weights the proximity of $\xmath{\bm{x}}$ to $\xmath{\bm{z}}$.
By decomposing the optimization into two subproblems over $\xmath{\bm{z}}$ and $\xmath{\bm{x}}$, the explicit regularizer-based update for $\xmath{\bm{z}}$ can be solved by replacing it with a CNN-based denoiser ($D_{\theta} (\cdot)$), and the denoised estimate is then used to update $\xmath{\bm{x}}$.
The $\xmath{\bm{x}}$ update in the MoDL scheme involves the data-consistency term and is performed using Conjugate Gradient (CG) descent.
Thus, $\xmath{\bm{z}}$ is obtained as the output from a CNN-based denoiser ($D_{\theta}$) and $\xmath{\bm{x}}$ is updated by CG.
This alternating scheme is repeated $L$ times (unrolling), with the initial input image $\xmath{\bm{x}}^{0}$ being passed through $L$ blocks of denoising CNN + CG updates.
Now, if $\Sbs{l}{\theta}(.)$ is the function capturing the $l$th iteration of the algorithm,
then the MoDL output for the $l$th block
is given as
\begin{equation}
\begin{split}
\label{modleqn1}
&\xmath{\bm{x}}^{l+1} = \Sbs{l}{\theta}(\xmath{\bm{x}}^l) =
{\xmath{\bm{S}}}\big(\xmath{\bm{x}}^l,\theta,\nu_l, \{\blmath{A}_c,\xmath{\bm{y}}_c\}_{c=1}^{N_c} \big)
, \, \text{and}\\
&\xmath{\bm{S}}\big(\bar{\xmath{\bm{x}}},\theta,\nu,\{\blmath{A}_c,\xmath{\bm{y}}_c\}_{c=1}^{N_c}\big)
\triangleq
\\&
\argmin{\xmath{\bm{x}}}~ \nu \sum_{c=1}^{N_c} \|\blmath{A}_c \xmath{\bm{x}} - \xmath{\bm{y}}_c\|_2^2
+ \|\xmath{\bm{x}}-\xmath{\bm{D}}_\theta(\bar{\xmath{\bm{x}}})\|_2^2.
\end{split}
\end{equation}
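For concreteness, the minimization over $\xmath{\bm{x}}$ in \eqref{modleqn1} is equivalent to solving the normal equations $\big(\nu \sum_{c} \blmath{A}_c^H\blmath{A}_c + \mathbf{I}\big)\xmath{\bm{x}} = \nu\sum_{c} \blmath{A}_c^H\xmath{\bm{y}}_c + \xmath{\bm{D}}_\theta(\bar{\xmath{\bm{x}}})$, which is done via CG. The following Python sketch of such a CG solve is ours and purely illustrative (in particular, \texttt{AHA}, the normal operator $\sum_c \blmath{A}_c^H\blmath{A}_c$, is assumed to be supplied by the caller):
\begin{verbatim}
import numpy as np

def dc_update(z, AHy, AHA, nu, iters=10):
    # Solve (nu * A^H A + I) x = nu * A^H y + z by conjugate gradient.
    # z   : denoiser output D_theta(x_bar), flattened complex image
    # AHy : precomputed sum_c A_c^H y_c
    # AHA : callable x -> sum_c A_c^H A_c x (normal operator)
    b = nu * AHy + z
    x = z.copy()                       # warm start at the denoised image
    r = b - (nu * AHA(x) + x)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = nu * AHA(p) + p
        a = rs / np.vdot(p, Ap).real
        x = x + a * p
        r = r - a * Ap
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
\end{verbatim}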
After $L$ iterations, the final output is
\begin{gather}\label{eq:supervised_loss}
\xmath{\bm{x}}_{\text{supervised}} = \xmath{\bm{x}}^L = \paren{\bigcomp_{l=0}^{L-1}\Sbs{l}{\theta}}(\xmath{\bm{x}}^0)\triangleq \xmath{\bm{\mathscr{M}}}_{\theta}(\xmath{\bm{x}}^0),
\end{gather}
where $\bigcomp_{i=0}^{L-1} f^i$ represents
the composition of $L$ functions
$f^{L-1}\circ f^{L-2}\circ\ldots\circ f^0$,
and $\xmath{\bm{x}}^{0}$ is the initial image.
The weights of the denoiser $\xmath{\bm{D}}_\theta$
are shared across the $L$ blocks.
The network parameters $\theta$ are
learned in a supervised manner
so that $\xmath{\bm{x}}_\text{supervised}$ matches
known ground truths
(in mean squared error or other
metric) on a (large/global or local) training
set.
This involves the following optimization for training:
\revise{
\begin{equation}
\begin{split}
\hat{\theta} & =
\argmin{\theta}~ \sum_{n \in S}C_{\beta}(\xmath{\bm{\mathscr{M}}}_{\theta}(\xmath{\bm{x}}_n^{0}); \xmath{\bm{x}}_{\text{n}})
\nonumber\\
& = \argmin{\theta}
\sum_{n \in S} \big( \big \|\xmath{\bm{x}}_n - \xmath{\bm{\mathscr{M}}}_{\theta}(\xmath{\bm{x}}_n^{0}) \big\|_2^2
)
,
\end{split}
\label{eq:suptrn_cost}
\end{equation}}
where
$n$ indexes the samples from the data
set used for training, with
\revise{$\xmath{\bm{x}}_{\text{n}}$} denoting the $n$th
target (or ground truth) image
reconstructed from fully-sampled
k-space measurements and \revise{$\xmath{\bm{x}}_n^{0}$}
denotes the initial image estimate from
undersampled measurements.
The cost $C_{\beta}(\hat{\xmath{\bm{x}}}_n; \xmath{\bm{x}}_n)$
denotes the training loss.
The main difference between a globally
learned and locally learned network is the choice of
the set $S$ of training indices.
For the proposed local approach, we fit
the network based on the $k$ training
samples closest to the current test
image estimate, whereas the
conventional (or global) training would fit
networks to a large dataset.
The initial image estimate $\xmath{\bm{x}}_n^{0}$
is obtained from the undersampled
measurements $\xmath{\bm{y}}_n$ by e.g.,
using a simple analytical
reconstruction scheme such as
applying the adjoint of the forward
model to the measurements.
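For instance, a minimal sketch (ours; array shapes and names are illustrative) of the zero-filled adjoint reconstruction $\xmath{\bm{x}}^{0} = \sum_c \mathbf{S}_c^H \mathcal{F}^H \mathbf{M}^\top \xmath{\bm{y}}_c$ is:
\begin{verbatim}
import numpy as np

def initial_estimate(y, mask, smaps):
    # y: (Nc, p) undersampled k-space; mask: boolean sampled locations
    # smaps: (Nc, nx, ny) coil sensitivities; returns adjoint A^H y
    nc, nx, ny = smaps.shape
    x0 = np.zeros((nx, ny), dtype=complex)
    for c in range(nc):
        ksp = np.zeros((nx, ny), dtype=complex)
        ksp[mask] = y[c]                       # zero-fill unsampled entries
        img = np.fft.ifft2(ksp, norm="ortho")  # inverse Fourier transform
        x0 += np.conj(smaps[c]) * img          # coil-combine with S_c^H
    return x0
\end{verbatim}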
In each iteration of the proposed scheme, the network is
updated and the initial estimate of the
underlying unknown image is passed
through the updated network to obtain a new
estimate; Fig.~\ref{algorithmflowchart}
illustrates this iterative process of neighbor
search and local network updating.
Local learning may have the advantage
of accommodating changes in
experimental conditions (e.g.,
undersampling pattern) at test time,
provided that such modified
measurements and initial images for the small local
training set can be easily simulated
from the existing $\xmath{\bm{x}}_n$ or $\xmath{\bm{y}}_n$.
Our overall algorithm is also summarized in Algorithm~\ref{alg::ADMM_anisoTV}.
\vspace{-0.1 in}
\begin{algorithm}
\caption{LONDN-MRI Algorithm}
\label{alg::ADMM_anisoTV}
\begin{algorithmic}[1]
\Require Initial image $\xmath{\bm{x}}^{0}$, number of neighbors $k$, k-space undersampling
mask $\mathbf{M}$, regularization parameters
$\nu$ and $\mu$, number of training epochs $T$, number of iterations of alternating algorithm $S$.
\State Initialize reconstruction network parameters $\theta$ with pre-learned network weights $\hat{\theta}$ or randomly initialized weights. Set $\xmath{\bm{x}} = \xmath{\bm{x}}^{0}$.
\For{iteration $= 1, \dots, S$}
\State Compute the set of $k$ similar neighbors $\hat{C}_{\xmath{\bm{x}}}$ to the current reconstruction estimate $\xmath{\bm{x}}$ using metric $d$.
\For{epoch $= 1, \dots, T$}
\State For each batch of neighbor data, compute the gradient of the training loss with respect to network parameters $\theta$ and make a step of update of the parameters.
\EndFor
\State Update $\xmath{\bm{x}}$ $\leftarrow \xmath{\bm{\mathscr{M}}}_{\theta}(\xmath{\bm{x}}^{0})$
\EndFor
\State \textbf{return} reconstruction $\xmath{\bm{x}}$ and learned network parameters $\theta$.
\end{algorithmic}
\end{algorithm}
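A compact Python sketch of this alternating procedure is given below; the helpers \texttt{distance} and \texttt{finetune} are placeholders for the chosen metric and for the supervised training loop described above.
\begin{verbatim}
def londn_mri(x0, train_pairs, model, distance, k=30, n_iters=2, epochs=200):
    # train_pairs: list of (x_n0, x_n) initial/target image pairs (placeholder)
    x = model(x0)                       # initial reconstruction estimate
    for _ in range(n_iters):
        # neighbor search: k training targets closest to the current estimate
        nbrs = sorted(train_pairs, key=lambda p: distance(x, p[1]))[:k]
        # local update: fine-tune the unrolled network on the matched pairs
        finetune(model, nbrs, epochs=epochs)
        x = model(x0)                   # re-reconstruct with updated weights
    return x, model
\end{verbatim}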
\subsection{Regularization}
In order to avoid over-fitting when training networks on small sets, we also adopted regularization of weights during training as follows:
\revise{
\begin{equation}
\begin{split}
\hat{\theta} = \argmin{\theta}
\sum_{n \in S} \big \|\xmath{\bm{x}}_n - \xmath{\bm{\mathscr{M}}}_{\theta}(\xmath{\bm{x}}_n^{0}) \big\|_2^2 + \lambda \, \mathcal{R}(\theta),
\end{split}
\label{eq:suptrn_cost_regularization}
\end{equation}}
where $\mathcal{R}(\cdot)$ denotes the regularization term on network weights. We primarily used the $\ell_1$ norm regularizer to enforce sparsity of the network weights to learn simpler models.
We observed that regularizing the
local model enables it to converge more easily, and shrinks
weights for less important or noisy features to zero.
We provide more discussion in the experiments section.
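In PyTorch, this penalty can be added directly to the supervised loss; a minimal sketch, with \texttt{lam} playing the role of $\lambda$:
\begin{verbatim}
import torch

def regularized_loss(model, recon, target, lam=1e-9):
    mse = torch.mean(torch.abs(recon - target) ** 2)     # data-fit term
    l1 = sum(p.abs().sum() for p in model.parameters())  # ell_1 weight penalty
    return mse + lam * l1
\end{verbatim}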
\begin{figure}
\vspace{-0.05in}
\centering
\setlength{\tabcolsep}{0.4cm}
\begin{tabular}{cc}
\includegraphics[width=1.0\linewidth]{Image_for_chart/Flowchart_last.PNG}
\end{tabular}
\caption{Flowchart of the proposed LONDN-MRI scheme with a specific unrolled reconstruction network. The denoising network could be, for example, a UNet or the recent DIDN.}
\label{algorithmflowchart}
\vspace{-0.17in}
\end{figure}
\subsection{Connections to Bilevel Optimization}
The alternating algorithm for training involving a neighbor search step and a local network update step could be viewed as a heuristic algorithm for the following bilevel optimization problem:
\revise{
\begin{align}
\nonumber & \min_{C \in \mathcal{C}, \, |C|=k} \sum_{i\in C} || f_{{\theta}(C)}(\xmath{\bm{y}}) - \xmath{\bm{x}}_{i}||_2^{2},\\
\text{s.t.} & \;\;\; \theta(C) = \arg \min_{\theta} \sum_{i\in C} || \xmath{\bm{x}}_{i} - f_{\theta}(\xmath{\bm{y}}_i)||_2^{2}.
\label{eq:bilevel_problem}
\end{align}}
Here, $ f_{{\theta}(C)}$ denotes a deep neural network learned on a subset $C$ of a data set that maps the current k-space measurements $\xmath{\bm{y}}$ to a reconstruction. The network is akin to $\xmath{\bm{\mathscr{M}}}_{\theta}(\xmath{\bm{x}}^{0})$ shown earlier, but with $\xmath{\bm{x}}^{0}$ assumed to be generated from $\xmath{\bm{y}}$ (e.g., via the well-known sum of squares of coil-wise inverse Fourier transforms, or via SENSE reconstruction, etc.).
Problem~\eqref{eq:bilevel_problem} aims to find the best neighborhood or cluster among the training data to which the reconstructed image belongs (with closest distances to neighbors -- we assumed Euclidean distance here), with the network weights for reconstruction estimated on the data in that cluster.
It is a bilevel optimization problem, with the cluster optimization forming the upper-level cost and the network optimization forming the lower-level cost. Bilevel problems are known to be quite challenging~\cite{crockett2021bilevel,9747201}.
It is also combinatorial, because one would have to sweep through all possible clusters of $k$ training samples, training a reconstruction network on each, to determine the best cluster choice.
The proposed algorithm is akin to optimizing the bilevel problem by alternating between optimizing the network weights $\theta$ with the clustering $C$ fixed (the lower-level problem) and optimizing the clustering $C$ (upper-level minimization) with the network weights fixed. This is a heuristic because the optimized variables in the two steps are coupled; however, such an approach has been used in prior work~\cite{super} and empirically shown to be approximately convergent for the bilevel cost.
In this work, we performed an empirical evaluation of convergence in the experiments section, where the alternating algorithm is shown to reduce the upper-level cost in~\eqref{eq:bilevel_problem}.
\section{Experiments} \label{section4}
\subsection{Experimental Framework}
We evaluate the proposed LONDN-MRI reconstruction method
on the multi-coil FastMRI knee dataset~\cite{zbontar2019fastmri,knoll2020fastmri}.
We chose a random subset of 3000 images from the dataset as a training set and used 15 randomly selected unrelated images for testing. In some experiments, we evaluated the effect of training set size, where we worked with fewer images in the training set.
Coil sensitivity maps for model-based reconstruction were generated for each scan from fully-sampled k-space data using the BART toolbox~\cite{martin_uecker_2018_1215477}.
Since the proposed LONDN-MRI framework is quite general and can be combined with any supervised deep learning based reconstruction approach, we chose the recent popular model-based deep learning (MoDL) reconstruction network and compared globally (over the large set of training samples) and locally (over a very small matched set of samples) learned versions of the model for different choices of deep denoisers in the network.\footnote{See \url{https://github.com/sjames40/Multi_coil_local_model} for our code in PyTorch.}
We performed reconstructions at fourfold or 4x
acceleration (25.0\% sampling) as well as at eightfold or 8x acceleration (12.5\% sampling) of the k-space acquisition. In all cases, variable density
1D random Cartesian (phase-encode) undersampling of k-space was performed.
The initial image estimates for MoDL were obtained by applying the adjoint of the measurement operator to the subsampled k-space data,
and were then used to train both local and global versions of MoDL networks.
In our local versions (LONDN-MRI), we used 30 images for training (selected from, e.g., 3000 images), while the global versions used the full subset of training images.
\subsection{Sampling Masks}
We used binary masks for fourfold and eightfold Cartesian undersampling of k-space. Fig.~\ref{fig:usml_msk} shows the sampling masks primarily used in our experiments, which include a fully-sampled central region ($31$ central lines at 4x acceleration and $15$ central lines at 8x acceleration); the remaining phase-encode lines were sampled uniformly at random.
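Such a mask can be generated, for instance, as follows; this is an illustrative sketch, and the exact masks in our experiments may differ in details such as the random seed.
\begin{verbatim}
import numpy as np

def cartesian_mask(H, W, accel=4, center_lines=31, seed=0):
    rng = np.random.default_rng(seed)
    mask = np.zeros((H, W), dtype=np.float32)
    c0 = W // 2 - center_lines // 2
    mask[:, c0:c0 + center_lines] = 1.0        # fully-sampled center
    n_keep = W // accel - center_lines         # remaining budget of lines
    outside = np.setdiff1d(np.arange(W), np.arange(c0, c0 + center_lines))
    mask[:, rng.choice(outside, size=max(n_keep, 0), replace=False)] = 1.0
    return mask  # 1D phase-encode undersampling, constant along readout
\end{verbatim}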
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=1.0in]{Image_for_chart/4x_mask.png}&
\includegraphics[height=1.0in]{Image_for_chart/8x_mask.png}\\
(a) & (b) \\
\end{tabular}
\caption{Undersampling masks used in our experiments:
(a) fourfold undersampled 1D Cartesian phase-encoded; and
(b) eightfold undersampled 1D Cartesian phase-encoded. The masks were zero-padded for slightly larger images.
}
\label{fig:usml_msk}
\end{center}
\vspace{-0.3in}
\end{figure}
\subsection{Performance Metrics}
We used three common metrics to quantify the reconstruction quality of different methods.
These were the peak signal-to-noise ratio (PSNR) in decibels (dB), structural similarity index (SSIM)~\cite{wang2004image}, and the high frequency error norm (HFEN)~\cite{ravishankar2011dlmri}, which were computed between the reconstruction and the ground truth obtained from fully-sampled k-space data.
The HFEN was computed as the $\ell_2$ norm of the difference between the Laplacian of Gaussian (LoG) filtered reconstructed and ground truth images, normalized by the $\ell_2$ norm of the LoG-filtered ground truth.
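For reference, a minimal HFEN implementation with a LoG filter is shown below; we assume a typical filter width here, and the exact kernel parameters can vary across implementations.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_laplace

def hfen(recon, gt, sigma=1.5):
    # LoG-filter both images, then take the normalized l2 error
    r = gaussian_laplace(np.abs(recon), sigma=sigma)
    g = gaussian_laplace(np.abs(gt), sigma=sigma)
    return np.linalg.norm(r - g) / np.linalg.norm(g)
\end{verbatim}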
\begin{figure*}
\vspace{-0.1in}
\centering
\setlength{\tabcolsep}{0.5cm}
\includegraphics[width=0.24\linewidth]{PSNR_SSIM_HFEN/SSIM_unet_4x.PNG}
\includegraphics[width=0.24\linewidth]{PSNR_SSIM_HFEN/SSIM_unet_8x.PNG}
\includegraphics[width=0.24\linewidth]{PSNR_SSIM_HFEN/HFEN_unet_4x.PNG}
\includegraphics[width=0.24\linewidth]{PSNR_SSIM_HFEN/HFEN_unet_8x.PNG}
\vspace{-0.1 in}
\caption{Comparison of MoDL with UNet denoiser trained globally vs.\ using the proposed LONDN-MRI scheme (1 iteration). Reconstruction metrics are shown across training set sizes at 4x and 8x undersampling.}
\label{PSNR_SSIM_HFEN}
\vspace{-0.02in}
\end{figure*}
\subsection{Network Architectures and Training}
We trained two types of MoDL models at 4x and 8x k-space undersampling, respectively.
One used the well-known UNet denoiser, with a two-channel input and two-channel output, where the real and imaginary parts of an image are separated into two channels. The network weights during training were initialized randomly (normally distributed). The
ADAM optimizer was
utilized for training the network weights. For LONDN-MRI, we used an initial learning rate of $6
\times 10^{-5}$ with a multi-step learning rate scheduler, which decreases the learning rate at $100$ and $150$ epochs with learning rate decay $0.65$.
\revise{For training globally, we used an initial learning rate of $1
\times 10^{-4}$ with $150$ epochs of training and a multi-step learning rate scheduler that decreased the learning rate at $50$ and $100$ epochs with learning rate decay $0.6$.}
For LONDN-MRI, we used MoDL with $5$ iterations and a shallow UNet with $2$ layers each in the encoder and decoder.
We used a shallow network with dropout for the local model to avoid over-fitting to the very small training set.
For the globally trained MoDL network (on the large dataset) used for comparison, we utilized a UNet with $4$ layers in the encoder and decoder, and $6$ MoDL blocks.
We used a batch size of $2$ during training for both the global and local cases. Furthermore, for the data-consistency term, we used a tolerance of $10^{-5}$ in CG and a $\mu/\nu$ ratio of 0.1. \revise{Also, we chose the regularization weight $\lambda$ as $10^{-9}$ for LONDN-MRI, unless specified otherwise.}
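These hyperparameters map directly onto standard PyTorch components; a sketch of the local training configuration is below, where \texttt{model}, \texttt{loader}, \texttt{train\_one\_epoch}, and \texttt{num\_epochs} are placeholders.
\begin{verbatim}
import torch

# model, loader, train_one_epoch, num_epochs are placeholders
optimizer = torch.optim.Adam(model.parameters(), lr=6e-5)  # LONDN-MRI setting
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[100, 150], gamma=0.65)          # decay at epochs 100, 150
for epoch in range(num_epochs):
    train_one_epoch(model, loader, optimizer)              # one pass over neighbors
    scheduler.step()
\end{verbatim}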
For the second MoDL architecture, we used the recent state-of-the-art denoising network DIDN~\cite{DIDN_ori,blipstmi2021}.
Due to the high complexity of the DIDN
network, we first pre-trained it on the larger (global) dataset \revise{(learning rate, etc., similar to the UNet case)} before adapting the weights within LONDN-MRI for each scan.
This is an alternative to constructing shallower versions of a network for local adaptation.
The
ADAM optimizer was utilized for training, with a
learning rate of $5\times
10^{-5}$ in LONDN-MRI. We used $6$ iterations of MoDL with the
DIDN denoiser for which we used $3$ down-up blocks (DUBs). The number of epochs for training was $30$ in LONDN-MRI.
The remaining training
parameters were chosen similarly as in the previous UNet-based case.
Using a pre-trained state-of-the-art denoiser allows the local adaptation to converge faster.
\subsection{Results for the UNet-based Reconstructor}
Table~\ref{table:PSNR_comparison_unet}
compares the average PSNR values for reconstruction
over the testing set at both 4x and 8x undersampling.
We varied the number of images in the training set for a more comprehensive study.
For LONDN-MRI, we used NCC to measure distances
for neighbor search. We compare learning networks over a small set of similar images to learning networks over the larger datasets (global), as well as to an oracle LONDN scheme, where the neighbors in the training set were computed based on each ground truth test image.
The oracle scheme would ideally provide an upper bound on the performance of the iterative LONDN-MRI scheme.
When varying the size of the training set, the global approach was trained on the full set each time, whereas the local approach performed training on small subsets of $30$ training pairs selected from the larger datasets.
The iterations of the LONDN-MRI scheme quickly improve reconstruction performance, and even with only 2 alternations, the PSNR values begin approaching the oracle setting.
The LONDN schemes (oracle or iterative) consistently outperform the globally trained networks across the different training set sizes considered.
Figure~\ref{PSNR_SSIM_HFEN} compares the SSIM and HFEN reconstruction metrics using bar graphs, where a similar trend is observed as with PSNR.
Figs.~\ref{fig:denoised_imgs_zoomed1} and~\ref{fig:denoised_imgs_zoomed2} show images reconstructed by different methods at 8x and 4x undersampling, respectively. The LONDN-MRI reconstructions (either iterative or oracle) show fewer artifacts, sharper features, and lower errors than the global MoDL and initial aliased reconstructions.
The iterative LONDN-MRI results are also quite close to the oracle result.
\begin{table}[htp!]
\centering
\addtolength{\tabcolsep}{-2.1pt}
\begin{tabular}{@{}llllll@{}}
\toprule
\multicolumn{1}{c}{Acceleration}&
\multicolumn{1}{c}{Data Size}&
\multicolumn{1}{c}{Global}&
\multicolumn{1}{c}{LONDN-MRI}&
\multicolumn{1}{c}{LONDN-MRI} &
\multicolumn{1}{c}{Oracle} \\
\multicolumn{1}{c}{}&
\multicolumn{1}{c}{}&
\multicolumn{1}{c}{}&
\multicolumn{1}{c}{(1 iteration)}&
\multicolumn{1}{c}{(2 iterations)} &
\multicolumn{1}{c}{} \\
\hline
\multirow{3}{*}{$\mathrm{4x}$} &1000& 32.63 &32.78 & \textbf{32.87} & 32.99 \\
&2000& 33.00 &33.28 & \textbf{33.31} & 33.35\\
&3000& 33.17 & 33.46 & \textbf{33.51} &33.54 \\
\hline
\multirow{3}{*}{$\mathrm{8x}$} &1000 & 29.78 & 30.15 & \textbf{30.26} &30.34 \\
&2000 & 30.21 &30.53 &\textbf{30.58} & 30.64 \\
&3000& 30.47 & 30.76 &\textbf{30.80} & 30.85 \\
\bottomrule \vspace{0.01in}
\end{tabular}
\caption{Average reconstruction PSNRs (in dB) for 15 images at 4x and 8x k-space undersampling. The proposed LONDN-MRI (with 1 or 2 alternations) is compared to training a global reconstructor for different training set sizes. We also compare to an oracle local reconstructor, where neighbors are found with respect to known ground truth test images. }
\label{table:PSNR_comparison_unet}
\vspace{-0.05in}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Acceleration& Reconstruction Metric & L1 & L2 & NCC \\\hline
\multirow{3}{*}{4x}& SSIM &0.85 & 0.849 & 0.852\\\cline{2-5}
&PSNR (dB) & 33.49 &33.44 &33.54 \\\cline{2-5}
& HFEN &0.552 & 0.56 &0.542\\\hline
\multirow{3}{*}{8x}& SSIM & 0.803 & 0.802 & 0.804\\\cline{2-5}
& PSNR (dB) & 30.79 &30.71 & 30.85\\\cline{2-5}
& HFEN & 0.664& 0.674 & 0.658\\
\hline
\end{tabular}
\caption{Average PSNR, SSIM, and HFEN values over 15 testing images for LONDN-MRI with neighbor search performed using L1 distance, L2 distance, and normalized cross-correlation (NCC).}
\label{tab:distancemetrics}
\vspace{-0.2in}
\end{table}
\subsection{Performance with Different Distance Metrics}
Here, we study the effect of different distance metrics for selecting the matching dataset for training in LONDN-MRI (oracle scheme). We tested the performance of MoDL with UNet denoiser using L1, L2, and normalized cross-correlation (NCC) for finding the matched training set from 3000 images.
From the results in Table~\ref{tab:distancemetrics}, we see that the different distance functions offer slight differences in reconstruction performance, with NCC offering the best results with respect to all reconstruction metrics.
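For completeness, a sketch of the NCC computation used as a (negated) distance for neighbor search:
\begin{verbatim}
import torch

def ncc(a, b):
    # normalized cross-correlation between (magnitude) images
    a = torch.abs(a).flatten()
    b = torch.abs(b).flatten()
    a = a - a.mean()
    b = b - b.mean()
    return torch.dot(a, b) / (a.norm() * b.norm() + 1e-12)

ncc_distance = lambda a, b: 1.0 - ncc(a, b)  # smaller means more similar
\end{verbatim}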
\subsection{Results for the DIDN-based Reconstructor}
Table~\ref{table:PSNR_comparison_DIDN} compares reconstruction performance on the test set with the DIDN denoiser-based MoDL architecture. Average PSNR values with LONDN-MRI are compared to those with networks trained globally at different training set sizes. We ran only 1 iteration of LONDN-MRI, where the reconstruction with a pre-trained (global) network was used to find neighbors for local adaptation (or local transfer).
PSNR values for the oracle LONDN-MRI reconstructor are also shown.
The overall performances with the DIDN-based architectures are better than with the UNet-based unrolled networks.
The PSNRs for LONDN-MRI are consistently better (by similar margins) than for the globally trained network across the different training set sizes considered, indicating potential for LONDN-MRI in improving more state-of-the-art models via local adaptation.
Fig.~\ref{fig:denoised_imgs_zoomed3} visually compares reconstructions and reconstruction errors (in a zoomed-in region) for different methods. The LONDN reconstructors capture the original image features more sharply than the globally learned reconstructor.
\begin{figure}
\vspace{-0.2in}
\captionsetup[subfigure]{labelformat=empty}
\centering
\begin{tabular}{cc}
\includegraphics[width=.4\linewidth,valign=t]{Fig3/groundturth_modl1081_TMI.png} &
\includegraphics[width=.4\linewidth,valign=t]{Fig3/input1081_TMI.png}\\
(a) Original & (b) Initial
\\
\includegraphics[width=.4\linewidth]{Fig3/global_modl1081_TMI.png} &
\includegraphics[width=.4\linewidth]{Fig3/local_modl_noise1081_TMI.png}\\
(c) Global & (d) LONDN-MRI (1 iteration)
\\
\includegraphics[width=.4\linewidth]{Fig3/modl_oracle1081_TMI.png} &
\includegraphics[width=.4\linewidth]{Fig3/LONDN_second_1081_TMI.png}\\
(e) Oracle & (f) LONDN-MRI (2 iterations)
\end{tabular}%
\vspace{-0.1in}
\caption{ Comparison of image reconstructions at 8x
undersampling using MoDL architecture with UNet denoiser with 1000 training images: (a) ground truth, (b) initial image input to the networks (PSNR = 16.43 dB), (c) global
reconstruction (PSNR = 29.26 dB), (d) LONDN-MRI (1 iteration)
reconstruction (PSNR = 29.50 dB), (e) oracle LONDN-MRI
reconstruction (PSNR = 29.72 dB), where the nearest neighbors
in the training set are found using known ground truth test
image, and (f) LONDN-MRI (2 iterations) reconstruction (PSNR =
29.68 dB).
The inset panel
on the top left in each image corresponds to a section of interest
in the image (shown by the red bounding box), while
the inset panel on the top right corresponds to \revise{the error map with respect to the ground truth.}}
\label{fig:denoised_imgs_zoomed1} \vspace{-0.1in}
\end{figure}
\begin{figure}
\vspace{-0.2in}
\captionsetup[subfigure]{labelformat=empty}
\centering
\subfloat[]{\label{fig.a}
\begin{tabular}{cc}
\includegraphics[width=.4\linewidth,valign=t]{Fig4/groundturth_modl1080_TMI.png} &
\includegraphics[width=.4\linewidth,valign=t]{Fig4/input1080_TMI.png}\\
(a) Original & (b) Initial\\
\includegraphics[width=.4\linewidth]{Fig4/global_modl1080_TMI2.png}&
\includegraphics[width=.4\linewidth]{Fig4/local_modl_noise1080_TMI2.png}\\
(c) Global & (d) LONDN-MRI (1 iteration)\\
\includegraphics[width=.4\linewidth]{Fig4/modl_oracle1080_TMI2.png}&
\includegraphics[width=.4\linewidth]{Fig4/LONDN_second_1080_TMI2.png}\\
(e) Oracle & (f) LONDN-MRI (2 iterations)
\end{tabular}}
\vspace{-0.2in}
\caption{ Comparison of image reconstructions at 4x
undersampling using MoDL architecture with UNet denoiser with 3000 training images: (a) ground truth and (b) initial image input to
the networks (PSNR = 21.23 dB), (c) global reconstruction
(PSNR = 32.78 dB), (d) LONDN-MRI (1 iteration) reconstruction
(PSNR = 33.16 dB), (e) oracle LONDN-MRI reconstruction (PSNR =
33.30 dB), where the nearest neighbors in the training set are
found using known ground truth test image, and (f) LONDN-MRI
(2 iterations) reconstruction (PSNR = 33.25 dB). The inset panel
on the top left in each image corresponds to a section of interest
in the image (shown by the red bounding box in the image), \revise{while
the inset panel on the top right corresponds to the error map with respect to the ground truth}.}
\label{fig:denoised_imgs_zoomed2}
\vspace{-0.2in}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}[b]{cc}
\includegraphics[width=.4\linewidth,valign=t]{Fig5/groundturth_modl1076_TMI.png}&
\includegraphics[width=.4\linewidth,valign=t]{Fig5/global_modl1076_TMI.png}\\
(a) Original & (b) Global
\\
\includegraphics[width=.4\linewidth]{Fig5/modl_oracle1076_TMI.png} &
\includegraphics[width=.4\linewidth]{Fig5/local_modl_noise1076_TMI.png}\\
(c) Oracle & (d) LONDN-MRI (1 iteration)
\end{tabular}
\caption{Comparison of image reconstruction at 4x
undersampling for the MoDL network with DIDN denoiser and 3000 training images: (a) ground truth,
(b) global MoDL reconstruction
(PSNR = 34.15 dB), (c) oracle LONDN-MRI reconstruction (PSNR = 34.54 dB), where the nearest neighbors in the training set are
found using the ground truth test image, and (d) LONDN-MRI (1 iteration) reconstruction
(PSNR = 34.45 dB).
The inset panel
on the top left in each image corresponds to a region of interest (shown by red bounding box in the image), and
the inset panel on the top right corresponds to the error map with respect to the ground truth.}
\label{fig:denoised_imgs_zoomed3}
\end{figure}
\subsection{Effect of Varying Scan Settings at Test Time}
Since the reconstruction network in LONDN-MRI is trained for each scan, we would like to understand better the benefits this provides in terms of letting the network adapt to distinct scan settings.
We therefore chose the MoDL reconstructor with UNet denoiser (with the same training hyperparameters as before) and trained it on the 3000-image set in two ways: with a fixed sampling mask across the images (the mask was padded with zeros to account for slight variations in matrix sizes), and with a different random sampling mask for each image. The \revise{first} setting was used in the previous subsections. For LONDN-MRI, here, we used a different random sampling mask for each test scan, but the network was adapted locally with the same mask used across each (small) local training set.
Table~\ref{table:PSNR_comparison_mask} shows the average PSNR values on the test set with these different strategies as well as with the oracle LONDN-MRI scheme.
It is clear that the globally learned model with a fixed sampling mask struggles to generalize to the different scan settings at test time. But training the global model with random sampling masks leads to improved reconstruction PSNRs.
Importantly, the LONDN-MRI schemes that adapt the reconstruction model to the settings as well as the data for each scan provide marked improvements over both globally learned network settings.
\begin{table}
\centering
\addtolength{\tabcolsep}{-2.1pt}
\begin{tabular}{@{}lllll@{}}
\toprule
\multicolumn{1}{c}{Acceleration}&\multicolumn{1}{c}{Data Size}& \multicolumn{1}{c}{Global}& \multicolumn{1}{c}{LONDN-MRI}&
\multicolumn{1}{c}{Oracle} \\
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}&
\multicolumn{1}{c}{}&\multicolumn{1}{c}{(1 iteration)}&
\multicolumn{1}{c}{} \\
\hline
\multirow{3}{*}{$\mathrm{4x}$} &1000& 33.66 &33.92 & 33.96 \\
&2000 & 34.01 & 34.23 &34.31 \\
& 3000&34.15 &34.39 & 34.42 \\
\hline
\multirow{3}{*}{$\mathrm{8x}$} & 1000& 31.02 & 31.33 &31.37 \\
& 2000& 31.34 & 31.64 &31.68 \\
& 3000& 31.79 & 32.08 &32.12 \\
\bottomrule \vspace{0.01in}
\end{tabular}
\vspace{-0.1in}
\caption{Average reconstruction PSNR values (in dB) on the testing set at 4x and 8x undersampling for various training set sizes. MoDL reconstructor with DIDN denoiser is used. }
\label{table:PSNR_comparison_DIDN}
\end{table}
\begin{table}[htp!]
\centering
\addtolength{\tabcolsep}{-2.1pt}
\begin{tabular}{ccccc}
\toprule
\multicolumn{1}{c}{Acceleration}& \multicolumn{1}{c}{Global Model}& \multicolumn{1}{c}{Global Model}&
\multicolumn{1}{c}{LONDN-MRI} &
\multicolumn{1}{c}{Oracle} \\
\multicolumn{1}{c}{}& \multicolumn{1}{p{1.5cm}}{trained with a fixed mask} & \multicolumn{1}{p{1.5cm}}{trained with rand. masks} &
\multicolumn{1}{p{1.5cm}}{(2 iterations)} &
\multicolumn{1}{c}{LONDN} \\
\hline
$\mathrm{4x}$ & 33.03 & 33.19 & 33.56 &33.64\\
$\mathrm{8x}$ & 30.62 & 30.84 &31.14 & 31.22 \\
\bottomrule \vspace{0.01in}
\end{tabular}
\vspace{-0.05 in}
\caption{Average reconstruction PSNR values (in dB) on the test set at 4x and 8x undersampling. The LONDN-MRI results are compared to training a global model with a fixed sampling mask or with random masks.}
\label{table:PSNR_comparison_mask}
\end{table}
\subsection{Effect of Weight Regularization in LONDN-MRI}
Here, we vary the strength of the regularization penalty weight in~\eqref{eq:suptrn_cost_regularization} and run LONDN-MRI over the test set at 4x k-space undersampling. Fig.~\ref{regularization} plots the average PSNR as a function of the penalty weight for the MoDL network with UNet denoiser.
The normalized cross-correlation distance was used during neighbor search, with other parameters as before.
The result shows slight benefits for choosing the regularization weight carefully.
\begin{figure}[hbt!]
\vspace{-0.01in}
\centering
\setlength{\tabcolsep}{0.5cm}
\includegraphics[width=1.0\linewidth]{Regu_factor/regulation_correct.png}
\caption{Average reconstruction PSNR on the test set at 4x undersampling for different regularization penalty parameters. We used $\ell_1$ norm regularization of network weights for an MoDL network with UNet denoiser.}
\label{regularization}
\vspace{-0.05 in}
\end{figure}
\begin{figure}
\vspace{-0.01in}
\centering
\setlength{\tabcolsep}{0.5cm}
\begin{tabular}{cc}
\includegraphics[width=0.7\linewidth]{image_accuracy/Accuracy2_1.pdf}
\end{tabular}
\vspace{-0.05 in}
\caption{Average accuracy (over test set) of neighbor search in LONDN-MRI (MoDL with UNet denoiser) at 4x undersampling in (a) the first iteration (neighbors found with respect to the initial input images $\xmath{\bm{x}}^0$) and after the (b) first and (c) second iteration. (d)-(f) are corresponding results at 8x undersampling.}
\label{accuracy2}
\end{figure}
\subsection{Evaluating the Accuracy of Neighbor Search}
Here, we study how the neighbor search proceeds across the iterations or alternations of LONDN-MRI.
We are interested in whether our locally learned reconstructor can improve the neighbor-finding process over iterations.
We used all images from the test set.
First, we find the $k$ closest neighbors
(in terms of Euclidean distance)
for each ground truth test image amongst the ground truth training images. The set $C_{r}^*$ contains the indices of these \emph{oracle} neighbors for a test image indexed $r$. The set $\hat{C}_{r}$ contains the indices of the closest neighbors from a certain iteration of LONDN-MRI. The neighbor matching accuracy (NMA) metric below computes the average (over the test set indices $\mathcal{T}$) percentage match between the two sets:
\begin{equation}
\text{NMA} := \frac{100}{|\mathcal{T}|}\sum_{r \in \mathcal{T}} \frac{|\hat{C}_{r} \cap C_{r}^*|} {k}.
\end{equation}
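NMA reduces to a simple set-intersection average; a sketch:
\begin{verbatim}
def nma(est_nbrs, oracle_nbrs, k):
    # est_nbrs, oracle_nbrs: dicts mapping test index r -> set of k indices
    T = list(est_nbrs)
    return 100.0 / len(T) * sum(
        len(est_nbrs[r] & oracle_nbrs[r]) / k for r in T)
\end{verbatim}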
The accuracy of the neighbor search at both 4x and 8x undersampling is shown in Fig.~\ref{accuracy2}.
The accuracy of the initial search (based on $\xmath{\bm{x}}^0$) and after $1$ or $2$ iterations of LONDN-MRI are shown.
We find nearest neighbors for the initial highly aliased $\xmath{\bm{x}}^0$ with respect to the corresponding aliased images in the training set (based on the same k-space undersampling mask as at testing time), rather than based on the ground truth training images, because the latter resulted in lower neighbor search accuracy for $\xmath{\bm{x}}^0$.
It is clear from Fig.~\ref{accuracy2} that
the accuracy improves quickly and tapers off within a few iterations.
\subsection{Convergence of Loss in Bilevel Optimization}
Finally, we study the behavior of the alternating LONDN-MRI algorithm as a heuristic for the bilevel optimization formulation in~\eqref{eq:bilevel_problem}.
Here, we used an MoDL network with the UNet denoiser and $k=30$ training pairs were chosen (from 3000 cases) in the local dataset in each iteration of LONDN-MRI.
The UNet weights were randomly initialized to begin with, and the neighbor search in the first iteration of LONDN-MRI was performed using $\xmath{\bm{x}}^0$ and correspondingly generated aliased training images.
Fig.~\ref{bilevel} plots the upper level loss in~\eqref{eq:bilevel_problem} (in a root mean squared error form) after each iteration of LONDN-MRI for a test image.
Here, we ran many iterations to verify convergence. We observe that the loss changes very little after a few iterations and stabilizes. This matches the behavior of the neighbor search accuracy bar plots. The result indicates that the proposed alternating scheme could be a reasonable heuristic for reducing the loss in the challenging problem~\eqref{eq:bilevel_problem}.
Finally, we compare the loss values in Fig.~\ref{bilevel} with an oracle loss, where the upper level loss in~\eqref{eq:bilevel_problem} is computed using the ground truth test image and its $k$ nearest neighbors. It is clear that the loss values in LONDN-MRI converge very close to the oracle loss, indicating potential for our scheme.
\begin{figure}[hbt!]
\vspace{-0.1in}
\centering
\setlength{\tabcolsep}{0.6cm}
\includegraphics[width=1.0\linewidth]{Bilevel-optimization/Bilevel_cost.png}
\caption{Upper level loss in the bilevel optimization formulation~\eqref{eq:bilevel_problem} plotted over the iterations (after network update step) of the LONDN-MRI scheme at 4x undersampling. We used MoDL with a UNet denoiser and $k=30$ \revise{for} neighbor search. In addition, the red line shows an oracle upper level loss computed using the ground truth test image and its $k$ nearest neighbors.}
\label{bilevel}
\end{figure}
\section{Discussion} \label{section5}
We proposed a novel LONDN-MRI reconstruction technique that efficiently matches test reconstructions to a cluster of the training dataset, with networks adaptively estimated on the images most related to the current scan.
Our results on the multi-coil FastMRI dataset showed promise for our scan or patient adaptive network estimation scheme. The approach does not involve pre-training and can thus readily handle changes in the training set.
The networks in LONDN-MRI can be randomly initialized and trained adaptively on very small datasets, and such networks outperformed models trained globally on much larger datasets (with lengthy training times).
For example, LONDN-MRI with 2 alternations involving MoDL with a randomly initialized UNet denoiser took 30 minutes to complete on an NVIDIA GeForce RTX 3080 GPU (with a batch size of $2$ and $200$ epochs each time to update networks locally),
whereas the global MoDL training on 3000 images with $150$ epochs of the ADAM optimizer took about $69$ hours.
LONDN-MRI requires only a few images (e.g., 30) to train networks, often with 200-250 epochs for locally updating randomly initialized networks such as the UNet. Fewer epochs of update (often 10 suffice) were needed with pre-trained networks such as the pre-trained DIDN, resulting in runtimes of only 3 minutes for 1 iteration of LONDN-MRI.
Although our approach is iterative, we found that even a couple of iterations are sufficient to provide image quality close to the oracle case. This rapid convergence was also seen in the neighbor search accuracy and in the loss of the underlying bilevel optimization.
The use of only few iterations of LONDN-MRI and a small local dataset (as well as pre-training) reduces the computational \revise{cost} of our scheme, given that models need to be updated for each scan.
When compared to the supervised global model, the proposed method offers consistently improved reconstruction quality in terms of PSNR, SSIM, and HFEN metrics.
Additionally, we demonstrated that the local model is easily adaptable to test time scan changes, such as changes to the sampling mask, compared to a globally learned model, which is fixed once learned.
Our approach produced comparable improvements over the global scheme when using a small dataset (1000 cases) or a relatively larger set (3000 cases).
Additionally, our study with different distance metrics revealed that they have some effect on reconstruction performance. The NCC metric provided the best reconstruction quality, which suggests that a learned metric~\cite{sym11091066} could further enhance the performance of LONDN-MRI.
\section{Conclusions} \label{section6}
This paper examined supervised learning of deep unrolled networks at reconstruction time for MRI by exploiting training sets along with local modeling and clustering.
We showed advantages for this approach at different k-space undersampling factors over networks learned in a global manner on larger data sets. The training may be connected to a bilevel optimization problem.
We also compared different distance metrics for finding neighbors in our approach, and studied regularization to reduce local overfitting.
We intend to expand our approaches in the future by incorporating non-Cartesian undersampling patterns, such as radial and spiral patterns, as well as by deploying the method to other imaging modalities.
Additionally, the method's generalizability will be examined, with a particular emphasis on heterogeneous datasets.
We showed benefits for both randomly seeded training of simple models and for transfer learning of sophisticated pre-trained models, and we believe our methodology could be applied effectively to a variety of deep learning-based tasks to improve overall performance.
Finally, metric learning~\cite{sym11091066} to improve local clustering and subsequent network adaptation will be an important future direction.
{\small
\bibliographystyle{IEEEbib}
\section{Introduction}
The hierarchy problem is one of the most puzzling aspects of the Standard Model, and it still lacks a satisfactory solution. Composite Higgs models \cite{Kaplan:1983fs, Kaplan:1983sm, Agashe:2004rs, Azatov:2011qy} offer a fascinating explanation of the origin of the electroweak scale -- the Higgs is a composite pseudo-Nambu Goldstone boson (pNGB), which arises when a new sector becomes strongly interacting and confines. This new sector is endowed with a global symmetry, and it is the breaking of this global symmetry by non-perturbative vacuum condensates which leads to the appearance of the Higgs as a pNGB.
The low-energy behaviour of Composite Higgs (CH) models can be studied in an Effective Field Theory (EFT) framework, in which the heavy resonances of the strong sector are integrated out. This picture is useful, since we do not need to know the details of the UV-completion in order to understand the spectrum of the theory at energies below the confinement scale. The only features of the strong sector that we need to specify are its global symmetry $\mathcal G$ and the manner in which this symmetry breaks: $\mathcal G \rightarrow \mathcal H$. The pNGBs will come in a non-linear representation of the broken symmetry coset $\mathcal G/\mathcal H$, and the top-partners -- the light, fermionic resonances that are present in all realistic realisations -- will come in full representations of $\mathcal G$. A sigma-model approach then allows for a derivation of the pNGB potential (albeit in terms of unknown form-factors). In this way the main phenomenological differences between different CH models can be readily inferred from the symmetry structure of the theory.
Of course, merely plucking a symmetry out of the air is not equivalent to claiming it is \emph{realisable} in a QFT framework. Some work has been done towards constructing UV-completions of Composite Higgs models \cite{Ferretti:2013kya, Ferretti14, Ferretti:2016upr, Barnard13, Golterman:2017vdj}. Not all symmetry cosets, it turns out, were created equal. The cosets $SU(4)/Sp(4)$, $SU(5)/SO(5)$, and $SU(4)\times SU(4)/ SU(4)$ have been identified as the minimal cosets that have a UV-completion in the form of a fermion-gauge theory. The Minimal Composite Higgs Model (MCHM) $SO(5)/SO(4)$ is notably not so easy to complete. From one perspective, it might be argued that one should restrict one's attention to Composite Higgs models based on UV-completable cosets, and to take seriously the phenomenology they predict.
However in this work we describe a mechanism whereby a Composite Higgs model with the coset $\mathcal G/\mathcal H$ might, at energies currently accessible to us, be \emph{disguised} as a model with a different symmetry coset $\mathcal G'/\mathcal H'$, with $\mathcal G' \subset \mathcal G$ and $\mathcal H' \subset \mathcal H$. This can happen in such a way that at or below the confinement scale $f$, only the resonances predicted by the $\mathcal G'/\mathcal H'$ model are seen, while the remaining resonances acquire masses $\gg f$ and could remain hidden -- thus the model is disguised.
This paper is organised as follows. In Section~\ref{m}, we present a general description of the mechanism, assuming that the field responsible for deforming the strong sector is a new fermion $\psi$ which is a singlet under the SM gauge group. In Section~\ref{te}, we walk through two examples in which the original symmetry coset is $SU(4)/Sp(4)$ and $SU(5)/SO(5)$, in both cases showing that they can be disguised as the MCHM coset $SO(5)/SO(4)$. Then in Section~\ref{dwtR} we argue that the field responsible for the deformation could in fact be the right-handed top quark, if we take $t_R$ to be `mostly' composite. Finally in Section~\ref{c} we conclude our discussion.
\section{Mechanism}
\label{m}
In Composite Higgs models we assume that the new, strongly interacting sector is endowed with a global symmetry $\mathcal G$. The Higgs will be part of a set of pseudo-Nambu Goldstone bosons (pNGBs) that arise when $\mathcal G$ is spontaneously broken to a subgroup $\mathcal H$. The $n$ pNGBs live in the coset $\mathcal G/\mathcal H$, and there will be one for each broken generator, i.e. $n = \dim \mathcal G - \dim \mathcal H$. The Higgs and other pNGBs can only acquire a potential if the global symmetry $\mathcal G$ is explicitly broken by couplings to an external sector. This is normally accomplished by allowing the SM to couple to the strong sector -- these couplings then explicitly break $\mathcal G$ and induce a loop-level potential for the pNGBs.
We are going to consider a modified scenario, in which some new fields couple to the strong sector and provide an extra source of explicit breaking. We are particularly interested in the case where these new couplings are \emph{strong}. We will say that the new couplings deform, or rather, \emph{disguise} the strong sector's symmetry properties -- due to the explicit breaking, its apparent global symmetry is now a subgroup of the original symmetry, and the pattern of spontaneous breaking has been modified.
Depending on the nature of these new fields, there are different ways they could couple to the strong sector. We are going to focus on the case where the new fields are fermionic, and couple to the strong sector via the partial compositeness mechanism \cite{Kaplan91, Contino06}. This mechanism is normally employed to allow the SM quarks (or at the very least, the top), to interact with the composite sector. Ordinarily we consider terms such as
\begin{equation}
\label{partial_compositeness}
\mathcal L \supset y_L f \overline q_L \mathcal O_L + y_R f \overline t_R \mathcal O_R + h.c.,
\end{equation}
where $q_L = (t_L, b_L)$. The $\mathcal O_{L,R}$ are composite fermionic operators with the same SM quantum numbers as $q_L, t_R$. Thus the elementary top quark mixes with the `top-partners', allowing the physical, partially composite eigenstate to couple to the Higgs.
Now, the couplings in \eqref{partial_compositeness} will explicitly break the global symmetry $\mathcal G$. If we were to write the couplings in full we would have, for instance:
\begin{equation}
\label{with_spurion}
\mathcal L \supset y_L f (\overline q_L)_\alpha (\Delta_L)^\alpha_i \mathcal O_L^i + y_R f (\overline t_R)_\alpha (\Delta_R)^\alpha_i \mathcal O_R^i,
\end{equation}
where $i$ is an index belonging to $\mathcal G$ and $\alpha$ belongs to the SM gauge group. The tensor $\Delta$ carries indices under both the SM gauge group and $\mathcal G$, parametrising precisely how the symmetry $\mathcal G$ is broken \cite{Matsedonskyi12}. One can think of $(\overline q_L)_\alpha (\Delta_L)^\alpha_i$ as an embedding of the SM quark doublet into a `spurionic' representation of $\mathcal G$. The representation into which the top is embedded should match the representation in which $\mathcal O_L$ transforms, and this ensures that the explicit breaking is treated in a way formally consistent with the symmetries of the strong sector.
As an example, let us consider the MHCM, which has the pNGB coset $SO(5)/SO(4)$. The $SO(4)$ in this model becomes the custodial $SO(4) \simeq SU(2)_L \times SU(2)_R$. We can take $\mathcal O_L$ and $\mathcal O_R$ to both be in the ${\bf 5}$ of $SO(5)$, which decomposes under the custodial group as $({\bf 2}, {\bf 2}) \oplus ({\bf 1}, {\bf 1})$. The $q_L$ then couples to the bidoublet, while the $t_R$ couples to the singlet. This translates into the following expressions \cite{Contino:2006qr} for $\Delta_{L,R}$ in \eqref{with_spurion}:
\begin{align}
\begin{split}
\Delta_L &= \frac{1}{\sqrt{2}} \begin{pmatrix}
0 & 0 & 1 & -i & 0 \\
1 & i & 0 & 0 & 0
\end{pmatrix} \\
\Delta_R &= -i\begin{pmatrix}
0 & 0 & 0 & 0 & 1
\end{pmatrix}.
\end{split}
\end{align}
Proceeding along similar lines, let us introduce a new fermion $\psi$, which mixes with a composite operator $\mathcal O_\psi$. For simplicity, let us take $\psi$ to be a singlet under the SM gauge group. The mixing terms look like:
\begin{equation}
\label{l_slashed}
\mathcal L_\slashed{\mathcal G} = y_\psi f \overline \psi \Delta_i \mathcal O_\psi^i + h.c.
\end{equation}
Note that the $\alpha$ index has been omitted, since $\psi$ is a singlet under the Standard Model.
Now we are going to assume that the mixing parameter $y_\psi$ is large -- so that $\mathcal G$ is no longer a good symmetry. Let us define $\mathcal G' \subset \mathcal G$ such that $\mathcal G'$ is the residual symmetry after the interactions with $\psi$ are included. Suppose that the global symmetry of the original theory \emph{spontaneously} breaks to $\mathcal H$, and define $\mathcal H' = \mathcal H \cap \mathcal G'$. Then, with the inclusion of $\mathcal L_\slashed{\mathcal G}$, the new theory appears to have the \emph{new} symmetry breaking pattern $\mathcal G'/\mathcal H'$. One composite Higgs model has been disguised as another.
What do we mean when we say that $y_\psi$ is large? In the language of \cite{Giudice07, Liu:2016idz}, we can broadly parametrise the strong sector via its typical mass scale $m_\rho$ and coupling $g_\rho$, which scales in large-$N$ theories \cite{Witten:1979kh} as
\begin{equation}
g_\rho = \frac{4\pi}{\sqrt{N}}.
\end{equation}
They are related to the symmetry-breaking scale via $m_\rho = g_\rho f$. The limit $g_\rho = 4\pi$ represents the limit of validity of the effective theory; for stronger couplings a loop expansion in $(g_\rho/4\pi)^2$ is no longer valid.
For $y_\psi \approx g_\rho$, the mixing angle between the elementary $\psi$ and $\mathcal O_\psi$ is large, and the physical eigenstates will have a large degree of compositeness. Operators induced by the coupling of $\psi$ to the strong sector (which violate $\mathcal G$) will be proportional to some power of $(y_\psi / g_\rho)$, and in the limit where $y_\psi \approx g_\rho$, these operators are no longer suppressed. We are justified in saying that the apparent global symmetry of the strong sector has been disguised, since operators which break the symmetry appear at the same order as operators which respect it.
In order to have a large value of $y_\psi$, we require the scaling dimension of $\mathcal O_\psi$ to be close to $5/2$. This can happen if the dynamics above the compositeness scale are approximately conformal, and the operator $\mathcal O_\psi$ has a large anomalous dimension \cite{Agashe:2004rs}. A similar requirement holds for the mixings of the top quark to the top-partners -- in order to generate a sizeable $\mathcal O(1)$ top Yukawa, the $\mathcal O_{L,R}$ must have large anomalous dimensions so that the mixing terms become effectively relevant operators.
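To make this expectation explicit, we recall the standard estimate: if the dynamics are nearly conformal between the UV scale $\Lambda_{UV}$ at which \eqref{l_slashed} is generated and the confinement scale, the mixing runs as
\begin{equation}
y_\psi(\mu) \sim y_\psi(\Lambda_{UV}) \left( \frac{\mu}{\Lambda_{UV}} \right)^{\Delta_{\mathcal O_\psi} - 5/2},
\end{equation}
so for $\Delta_{\mathcal O_\psi} < 5/2$ the mixing grows towards the IR, and $y_\psi \approx g_\rho$ can be reached at the confinement scale without a tuned UV value.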
\section{Two examples}
\label{te}
It is often remarked that the Minimal Composite Higgs model (MCHM) \cite{Agashe:2004rs} has no UV-completion in the form of a renormalisable gauge-fermion theory. As discussed in \cite{Ferretti:2013kya, Ferretti14}, a theory whose UV-completion consists of $n_i$ fermions in each representation $R_i$ of the new strongly interacting gauge group (assuming it is simple) has the following global symmetry:
\begin{equation}
\mathcal G = SU(n_1) \times \dots \times SU(n_p) \times U(1)^{p-1},
\end{equation}
where $p$ is the number of different irreducible representations in the model. From this we see that there is no simple gauge-fermion theory that gives rise to an $SO(5)/SO(4)$ pNGB coset.
In this section we will take two models which \emph{do} have gauge-fermion UV-completions, and show that using the above procedure they can be disguised at low energy as the $SO(5)/SO(4)$ model.
\subsection{$SU(4)/Sp(4)$}
\label{SU4Sp4}
\begin{figure}[t]
\centering
\begin{tikzpicture}
\definecolor{mycolor}{RGB}{200,200,200}
\begin{scope}
\draw[fill=white] (0,0) circle [radius=3.5] ;
\draw[fill=mycolor] (-1,0) circle [radius=2] ;
\draw[dashed] (1,0) circle [radius=2] ;
\node at (0, 3) {$SU(4)$} ;
\node at (-1.2, 1.6) {$Sp(4)$} ;
\node at (1.2, 1.6) {$Sp(4)'$} ;
\node at (0, 0) {$SO(4)$} ;
\node at (0, -2.7) {$\eta$} ;
\node at (1.8, 0) {$H$} ;
\def(8,0) circle (2.01){(8,0) circle (2.01)}
\def(7,0) circle (3.2){(7,0) circle (3.2)}
\definecolor{mycolor}{RGB}{200,200,200}
\draw[fill=mycolor] (6,0) circle [radius=2] ;
\draw[white,fill=white, even odd rule] (7,0) circle (3.2){(8,0) circle (2.01)};
\end{scope}
\draw[dashed] (8,0) circle [radius=2] ;
\node at (8.2, 1.6) {$Sp(4)'$} ;
\node at (7, 0) {$SO(4)$} ;
\node at (8.8, 0) {$H$} ;
\draw[-{Latex[scale=2.0]}] (4,0) -- (5.5,0) ;
\end{tikzpicture}
\caption{Symmetry breaking patterns in the disguised $SU(4)/Sp(4)$ model. The solid circles represent the spontaneous breaking in the original model. The dashed circle represents the $Sp(4)'$ subgroup preserved by the explicit breaking, so that the `disguised' model becomes $Sp(4)'/SO(4)$.}
\label{breakings}
\end{figure}
In this section we will look at the next-to-minimal Composite Higgs model \cite{Gripaios09, Galloway:2010bp}, in which the pNGB coset is $SU(4)/Sp(4)$\footnote{A UV-completion of this coset was studied on the lattice with an $SU(2)$ confining gauge force \cite{Hietanen:2014xca} -- the results point to a large value of $g_\rho \sim \mathcal O(10)$, in line with the large-$N$ expectation.}. This coset features one extra singlet pNGB, which we denote by $\eta$. The spontaneous breaking is achieved by a VEV in the antisymmetric ${\bf 6}$ of $SU(4)$, which we will take to be proportional to
\begin{equation}
\langle {\bf 6} \rangle \propto \begin{pmatrix}
i\sigma^2 & 0\\
0 & i \sigma^2\\
\end{pmatrix}.
\end{equation}
Then the pNGBs are parametrised as fluctuations around the vacuum:
\begin{equation}
\Sigma(\phi^i) = U\langle{\bf 6}\rangle U^T, \;\;\;\;\; U = \exp(i\phi^i X^i/f),
\end{equation}
where $\phi = \{H, \eta\}$ and $X^i$ are the broken generators\footnote{The calculations in this and the next section use a specific basis for the generators of $SU(4)$ and $SU(5)$. We use the bases given in \cite{Gripaios09, Ferretti14}, to which the interested reader can refer.}.
As outlined in the previous section, we will introduce a new fermionic field $\psi$, singlet under the SM. In order to disguise this model as $SO(5)/SO(4)$, we must look for a $\mathcal L_\slashed{\mathcal G}$ that explicitly breaks $\mathcal G$ to $\mathcal G' = SO(5)$. This can be done, for instance, with $\mathcal O_\psi$ in the ${\bf 6}$ of $SU(4)$. In this case \eqref{l_slashed} looks like
\begin{equation}
\mathcal L_\slashed{\mathcal G} = y_\psi f \overline\psi \Tr [\Delta \mathcal O_\psi] + h.c.
\end{equation}
The ${\bf 6}$ decomposes under $SU(2)_L \times SU(2)_R$ as:
\begin{equation}
{\bf 6} = ({\bf 2}, {\bf 2}) \oplus ({\bf 1}, {\bf 1}) \oplus ({\bf 1}, {\bf 1}).
\end{equation}
The new field $\psi$ must couple to a linear combination of the two singlets in this decomposition. The two singlets correspond to
\begin{equation}
\Delta_\pm = \begin{pmatrix}
\pm i\sigma_2 & 0 \\ 0 & i\sigma_2
\end{pmatrix},
\end{equation}
and one can verify that if we take
\begin{equation}
\label{spurion}
\Delta = \cos\theta \;\Delta_- + \sin\theta \;\Delta_+,
\end{equation}
the unbroken symmetry is indeed an $Sp(4)' \simeq SO(5)$ subgroup of the original $SU(4)$.
Notice that, using this notation, $\langle {\bf 6} \rangle \propto \Delta_+$. So long as $\theta \neq \pi/2$, the explicit and spontaneous breakings preserve \emph{different} $Sp(4)$ subgroups. That is, in our earlier notation:
\begin{align}
\begin{split}
\mathcal G' &= Sp(4)'\\
\mathcal H &= Sp(4)\\
\mathcal H' = \mathcal H \cap \mathcal G' &= Sp(4) \cap Sp(4)'.
\end{split}
\end{align}
If the spontaneous and explicit breakings preserved the \emph{same} $Sp(4)$ subgroup, then in the disguised model there would be no spontaneous symmetry breaking at all, since the spontaneously broken symmetry would never have been a good symmetry in the first place. In Fig.~\ref{breakings}, this would correspond to the $Sp(4)$ and $Sp(4)'$ circles coinciding. In such a model there would be no Goldstone bosons -- the explicit breaking leads $H$ and $\eta$ to acquire masses comparable to the other resonances of the strong sector.
Since we are trying to disguise $SU(4)/Sp(4)$ as $SO(5)/SO(4)$, we want the Higgs (but not $\eta$) to remain an exact Goldstone boson. One can verify that in the limit where $\theta \rightarrow 0$, the generators corresponding to the four degrees of freedom of the Higgs are preserved by the explicit breaking. This is the case shown in Fig.~\ref{breakings}: the Higgs lives in the part of $Sp(4)'$ which is spontaneously broken, while the $\eta$ lives in the part of $SU(4)$ which is broken by the explicit breaking, and thus acquires a large mass and is hidden. Thus we have disguised the $SU(4)/Sp(4)$ coset as $SO(5)/SO(4)$.
Note that the angle $\theta$ is parametrising some of our ignorance about the UV physics. Without having a specific UV model in mind we cannot predict the misalignment between the explicit breaking and the spontaneous breaking. With an explicit model one might be able to use lattice calculations, and/or an NJL-type analysis (see, for instance, \cite{Barnard13}), in order to obtain a better understanding of the true vacuum of the theory. For now, however, we are working at a more general level, and will treat $\theta$ as a free parameter.
Another way of seeing this mechanism at work is to look at the Coleman-Weinberg potential for the pNGBs. Including only the corrections from loops of the new fermion field $\psi$, the potential must be constructed out of invariants of $\Sigma$ and $\Delta$, i.e. it should be a function of $\Tr[\Delta^T \Sigma]$. Taking $\Delta$ as defined in \eqref{spurion}, the lowest order contribution to the CW potential is
\begin{equation}
\label{pNGB_masses}
V \propto -\Tr[\Delta^T \Sigma] \Tr[\Delta \Sigma^\dagger]
= \cos^2\theta \; \eta^2 + \sin^2\theta\;(1 - h^2 - \eta^2).
\end{equation}
We can see that in the limit $\theta\rightarrow 0$, $h$ remains an exact Goldstone boson, living in the coset $SO(5)/SO(4)$.
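Explicitly, regrouping the terms in \eqref{pNGB_masses} and dropping the field-independent piece,
\begin{equation}
V \propto \cos 2\theta \; \eta^2 - \sin^2\theta \; h^2 + \mathrm{const},
\end{equation}
so the $\eta$ mass term survives as $\theta \to 0$, while the coefficient of $h^2$ vanishes in the same limit.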
One should note that, in arriving at the above expression, we performed the following field redefinitions of the pNGB fields (following \cite{Gripaios09}):
\begin{align}
\begin{split}
\frac{h}{\sqrt{h^2+\eta^2}} \sin\left(\frac{\sqrt{h^2 + \eta^2}}{f}\right) \rightarrow h, \\
\frac{\eta}{\sqrt{h^2+\eta^2}} \sin\left(\frac{\sqrt{h^2 + \eta^2}}{f}\right) \rightarrow \eta.
\end{split}
\end{align}
Field redefinitions of the form $\phi \rightarrow \phi\; f(\phi)$, (with $f(0) = 1$), are valid in the context of the sigma-model \cite{Callan69}; the above redefinition is especially useful since it makes clear the fact that $h$ is an \emph{exact} pNGB in the $\theta\rightarrow 0$ limit\footnote{Furthermore, in this basis it is precisely the VEV of $h$ which sets the scale of EWSB, i.e. $m_W \propto \langle h\rangle$.}.
In order for the disguising mechanism to work, we need a small value of $\sin\theta$ -- only then will there be a hierarchy between the masses of $\eta$ and $H$. Having large values of both $\sin\theta$ and $y_\psi$ will spoil the role of the Higgs as a Goldstone boson, giving it a mass closer to that of the other strong sector resonances.
\subsection{$SU(5)/SO(5)$}
Another coset with a realistic UV-completion is $SU(5)/SO(5)$ \cite{ArkaniHamed:2002qy, Vecchi:2013bja, Ferretti14}. In this section we show that, in complete analogy with the previous section, this model can also be disguised as the MCHM via a suitable choice of $\mathcal L_\slashed{\mathcal G}$.\footnote{See \cite{vonGersdorff:2015fta} for a microscopic realisation.}
The spontaneous breaking $SU(5) \rightarrow SO(5)$ can be achieved with a VEV in the symmetric {\bf 15} of SU(5), which we take to be proportional to
\begin{equation}
\langle {\bf 15} \rangle \propto \begin{pmatrix}
{\mathbb 1}_4 & 0\\
0 & 1\\
\end{pmatrix}.
\end{equation}
This coset features 14 pNGBs, the Higgs, a charged $SU(2)_L$ triplet $\Phi_\pm$, a neutral triplet $\Phi_0$, and a singlet $\eta$. These are parametrised by
\begin{equation}
\Sigma = U \langle {\bf 15} \rangle U^T, \;\;\;\;\; U = \exp(i\phi^a X^a/f),
\end{equation}
but since in this case $\langle {\bf 15} \rangle$ is proportional to the identity, we can just write $\Sigma = UU^T$.
Let us assume that the new source of explicit breaking comes from a SM singlet fermion $\psi$. Then, just as before, $\mathcal L_\slashed{\mathcal G}$ is given by:
\begin{equation}
\mathcal L_\slashed{\mathcal G} = y_\psi f \overline\psi \Tr [\Delta \mathcal O_\psi] + h.c.
\end{equation}
where now we take $\mathcal O_\psi$ to be in the ${\bf 15}$ of $SU(5)$. Notice that in both this and the previous example, $\mathcal O_\psi$ was taken to be in the same representation as the operator whose VEV breaks the symmetry spontaneously.
Now the ${\bf 15}$ of $SU(5)$ decomposes under $SU(2)_L \times SU(2)_R$ as:
\begin{equation}
{\bf 15} = ({\bf 3},{\bf 3}) \oplus ({\bf 2},{\bf 2}) \oplus ({\bf 1},{\bf 1}) \oplus ({\bf 1},{\bf 1}).
\end{equation}
If we take the new source of breaking to be a SM singlet, then, just as in the $SU(4)/Sp(4)$ case, we have two singlets in the decomposition of the ${\bf 15}$ to which $\psi$ may couple. These two singlets correspond to:
\begin{equation}
\Delta_\pm = \begin{pmatrix}
{\mathbb 1}_4 & 0\\
0 & \pm 1\\
\end{pmatrix}.
\end{equation}
For a linear combination of the two singlets, $\Delta = \cos\theta \;\Delta_- + \sin\theta\;\Delta_+$, $SU(5)$ is explicitly broken to $SO(5)'$. Precisely as before, only in the limit $\theta \rightarrow 0$ is the Higgs untouched by the explicit breaking. Furthermore, the explicit breaking gives masses to $\Phi_\pm$, $\Phi_0$ and $\eta$. In the case where $y_\psi$ is large, the pNGB coset is disguised as $SO(5)/SO(4)$.
\section{Deforming with $t_R$}
\label{dwtR}
It has been noted \cite{Giudice07, Pomarol:2008bh, Batell:2007ez} that it is phenomenologically possible, and perhaps desirable, for the $t_R$ quark to be `mostly' composite, in the sense that $y_R$ in \eqref{partial_compositeness} is of order $g_\rho$. If this were the case, then the couplings of $t_R$ to the strong sector can indeed be thought of as changing the symmetry properties of the strong sector, and disguising the coset space as another.
Let us go back to the $SU(4)/Sp(4)$ example. Of course, unlike our hypothetical field $\psi$, $t_R$ is not a Standard Model singlet -- it is charged under $U(1)_Y$ and $SU(3)_c$. This does not change the discussion of Section~\ref{SU4Sp4}, however; we just replace $\mathcal O_\psi$ with $\mathcal O_R$, which has the same SM quantum numbers as $t_R$. In the original paper studying this coset \cite{Gripaios09}, the authors conclude that, in order to preserve the custodial symmetry that protects the $Z b \overline b$ coupling, the left- and right-handed quarks ought to be embedded into the $\bf 6$ of $SU(4)$ -- precisely as we did for $\psi$ in Sec.~\ref{SU4Sp4}.
It is clear that, if we want $t_R$ to couple to the Higgs and to participate in Yukawa interactions, then we must have $\theta \neq 0$. As stated earlier, we can always take $\theta$ to be small, such that a large hierarchy is generated between $\eta$ and $h$. First however, we should check that small values of $\theta$ are still consistent with a large enough top Yukawa coupling. We must embed $q_L$ into the $({\bf 2}, {\bf 2})$ of the $\bf 6$, which fixes
\begin{equation}
\Delta_L = \begin{pmatrix}
0 & Q \\
-Q^T & 0
\end{pmatrix},
\end{equation}
with $Q = (0, q_L)$. Let us assume that the couplings of $t_R$ are proportional to $\Delta_R$ in analogy to \eqref{spurion}:
\begin{equation}
\Delta_R = \cos\theta \; \Delta_- + \sin\theta \; \Delta_+.
\end{equation}
Then the Yukawa coupling of the top is obtained from the effective operator:
\begin{equation}
M_t \;\overline t_L t_R \Tr[\Delta_L^T \Sigma] \Tr[\Delta_R \Sigma^\dagger],
\end{equation}
where $M_t$ is a momentum-dependent form factor which encodes the integrated-out dynamics of the strong sector. Expanding this operator informs us that the coupling $\overline t_L t_R h$ will be proportional to $\sin\theta$.
We expect the Yukawa coupling also to be proportional to $y_L y_R$, and dimensional reasoning (discussed in detail in \cite{Matsedonskyi12}) suggests it should also be proportional to $f/m_T$, where $m_T$ is the mass of the lightest top-partner. Thus we conclude that the top Yukawa scales, up to some numeric prefactor, as
\begin{equation}
y_t \approx y_L y_R \sin\theta \frac{f}{m_T}.
\end{equation}
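As a rough numerical illustration (our own estimate, not a specific model point): with $y_L \sim 1$, $f/m_T \sim 1$, and a fairly composite right-handed top with $y_R \sim 3$, a modest mixing of $\sin\theta \sim 1/3$ already reproduces $y_t \sim 1$, so small values of $\theta$ remain compatible with the observed top Yukawa.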
Furthermore, all contributions to the CW potential of the Higgs involving the right-handed top must be proportional to powers of $\Tr[\Delta_R \Sigma^\dagger]$ -- therefore the contributions to the potential must always depend on powers of $y_R \sin\theta$. In fact, the usual analyses of the size of the top Yukawa, the mass of the Higgs and the top-partners, and the required tuning for successful EWSB, proceed along all the usual lines, with the replacement $y_R \rightarrow y_R
\sin\theta$.
The disguising mechanism relies on small values of $\sin\theta$, but of course we can make $\sin\theta$ small as long as $y_R$ is sufficiently large. The mass of $\eta$ will be proportional to $\cos 2\theta$ (from equation \eqref{pNGB_masses}), and for small $\theta$ the hierarchy between the `true' pNGB $h$ and the disguised pNGB $\eta$ is assured. Thus the couplings of the top quark alone can fulfil the requirements of the disguising mechanism.
What is the phenomenology of such a scenario? We have a set of pNGBs which couple very strongly to the top -- in this example just the $\eta$, but in the $SU(5)/SO(5)$ case we would have $\Phi_\pm, \Phi_0$ and $\eta$. In ordinary composite Higgs models we expect these extra scalars to be heavier than the Higgs by roughly a factor $1/\sqrt{\xi}$, where $\xi = v^2/f^2$. In models with around 10\% tuning, this corresponds to a mass of around $400$-$500$ GeV. In our scenario, they would be significantly heavier (how much heavier is of course dependent on the value of $\theta$, or how \emph{disguised} the model is), but their Yukawa couplings to the top would be increased by the same factor.
At sufficiently high center of mass energies, these resonances would eventually appear, along with other fermionic and vector resonances. Evidence for the disguising mechanism would be the presence of \emph{split} multiplets. For instance, in the $SU(4)/Sp(4)$ model we have top-partners in the $\bf 6$ of $SU(4)$. In the disguised model, this would be split into ${\bf 5} \oplus {\bf 1}$ of the unbroken $SO(5)$, with the singlet coupling most strongly to $t_R$. We would expect the large breaking of the $SU(4)$ symmetry to lead to a mass splitting between the five-plet and the singlet.
\section{Conclusion}
\label{c}
We have presented a mechanism whereby the symmetry breaking pattern of the strong sector can be disguised, via couplings of an elementary field to a strong sector operator. This field could be a BSM field, or, as we argued in Section~\ref{dwtR}, it could be the right-handed top quark, avoiding the need for any new fields.
This is a useful observation, especially if one has reason to believe that some pNGB cosets might be more plausible than others -- perhaps because one is concerned about UV-completions of the model. We have shown that two UV-completable cosets, $SU(4)/Sp(4)$ and $SU(5)/SO(5)$, can be deformed in such a way that at low energies the pNGB spectrum is as we would expect in an $SO(5)/SO(4)$ model.
This is certainly not equivalent to claiming that a UV-completion for the $SO(5)/SO(4)$ coset has been found. After all, the mixing $\overline\psi \mathcal O_\psi + h.c.$ will arise from a non-renormalisable operator, presumably a four-fermion operator involving $\psi$ and three techni-fermions. Nonetheless, attempts at finding a `UV-completion' of composite Higgs models so far do not speculate on the origin of these four-fermion interactions\footnote{This discussion might call into question the usage of the term `UV-completion' -- there are always problems whose solutions can be delayed to a higher scale.} (their scale can be significantly higher than the compositeness scale). Therefore it is fair to say that we have found a UV-completion of the $SO(5)/SO(4)$ coset which is \emph{just as complete} as any other composite Higgs UV-completion.
In the case where the $t_R$ is responsible for the disguise, we have a model with a set of heavy scalar resonances with very strong couplings to the top -- very strong in this case meaning close to the non-perturbative limit. We leave a detailed phenomenological analysis for future work. It would be interesting to study whether the large couplings of the scalars to the top can lead to sizable contributions to effective operators, and whether these can have any impact on Higgs or gauge boson production cross-sections.
\bibliographystyle{utphys}
Our protocol differs slightly from the original scheme proposed in Ref.\ \cite{Guerin2016}. The unitary transformation $X(\bm{x})$ was originally proposed in terms of modulo-two addition, as $X(\bm{x})=\sum_{\bm{z}\in\{0,1\}^{n}}|\bm{z}\oplus\bm{x}\rangle\langle\bm{z}|$. Our definition of $X(x)=\sum_{z}|(z+x)~ \mathrm{mod}~ 2^{n+1} \rangle\langle z|$ retains the key property $[X(x),X(y)]=0$ for all $x$ and $y$, while being experimentally much more readily realizable. Note that having a target system equivalent to $n+1$ qubits, as opposed to the $n$ qubits of Ref.\ \cite{Guerin2016}, is not a fundamental consequence of the time-bin implementation. One could also have the target system correspond to $n$ qubits, and use the transformation $X(x)=\sum_{z=0}^{2^{n}-1}|(z+x) ~\mathrm{mod}~ 2^n \rangle\langle z|$. Such a permutation would require fast switches in addition to the delays, as shown in Fig.\ \ref{fig:permutation}.
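For concreteness, the following minimal sketch (our own illustration, not part of the experimental apparatus) applies the modular shift $X(x)$ to a vector of time-bin amplitudes indexed by $z$, and checks the commutativity used above:
\begin{verbatim}
import numpy as np

def apply_X(amplitudes, x):
    # X(x)|z> = |(z + x) mod d>, with d = 2**(n+1) time bins:
    # a cyclic shift of the amplitude vector by x positions.
    return np.roll(amplitudes, x)

# Example: n = 2, so d = 8 time bins; photon initialized in bin 0.
psi = np.zeros(8)
psi[0] = 1.0
shifted = apply_X(apply_X(psi, 3), 2)  # X(2)X(3) = X(3)X(2)
print(np.argmax(shifted))              # photon now in bin 5
\end{verbatim}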
\section{Practical considerations}\label{Practical}
Our encoding of the whole multi-qubit system in a single photon entails both benefits and drawbacks. It is beneficial given the presence of probabilistic photon loss, an unavoidable experimental imperfection. Efficiency is a crucial factor in demonstrating the advantage provided by the quantum switch, and presents one of the main technical challenges of the experiment. Loss is detrimental because it effectively requires the repetition of the protocol, thereby increasing the communication, while the performance is benchmarked against optimal lossless causally ordered protocols. As a result of our implementation, our quantum system is only subjected to the probabilistic loss associated with one photon. However, the encoding of the target qubits in the photon arrival time also leads to an exponential growth of the number of required time bins as a function of the number of target qubits. This practically limits the ability to scale the protocol to an arbitrarily high value of $n$. Nevertheless, our experiment proves that it is possible to access the regime where our causally indefinite quantum protocol outperforms any causally definite protocol.
In contrast to previous communication complexity experiments on quantum fingerprinting that were based on coherent states \cite{Xu2015,Guan2016}, we use single photons in this experiment. The use of coherent states would arguably violate a rule of the exchange evaluation game, namely that Alice and Bob are each only allowed to receive a system from the outside environment once. As a test of one-way communication, Ref.~\cite{Guerin2016} proposed having a (hypothetical) counter in each laboratory that is incremented by one whenever a system enters the laboratory. Our single photon constitutes an indivisible quantum system, so the meaning of ``a system entering'' is unambiguous, and the counters would read one at the end of the protocol, in agreement with the rules of the game. However, for the case of a coherent state divided on a beam splitter, each component could be considered a quantum system on its own. In that case, the counters would read two, in violation of the rules of the game. It has recently also been experimentally shown that the use of single photons can enable secure communication with a hidden communication direction \cite{2018Del,2018massa}.
The coherence length of our source is a few centimeters, which does not cover the whole switch. For our setup, a coherence length covering the whole interferometer is not an option. First, the path in the interferometer is extremely long, and realizing a correspondingly narrow-bandwidth single-photon source is extremely challenging. Second, we use time-bin encoding, so the photon must be contained within a single time bin for modulation; the coherence length must therefore be shorter than the separation between bins.
It is also worth noting that an indefinite causal order has been formally witnessed elsewhere without the use of an especially narrowband single photon~\cite{Rubino2017}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.6\linewidth]{permutation2.pdf}
\caption{Linear optic circuit for performing an alternative version of the transformation $X(x)$, such that the number of qubits in the target system is reduced to $n$. (a) The $2^{n}$ incoming modes pass through an active switch S which sends each mode either straight through or sends it to a delay line. The delay is equivalent to $2^{n}$ time bins. The operation of the switch depends on $x$ and it is such that only the first $\left(2^{n}-x\right)$ modes are sent through the delay. Here we have used $x=2^{n}-3$ as an example. A second switch then ensures that all modes exit in the same line. (b) After the previous step, the modes have been permuted as desired and a further delay equal to $x$ time bins must be added to ensure that the leading mode reaches the other party at the same time, regardless of the value of $x$.}
\label{fig:permutation}
\end{figure}
\section{Experimental details}\label{C}
\subsection{Heralded single-photon source}
The photon pairs are produced via type-II spontaneous parametric down-conversion. The pump light is emitted by a 780-nm continuous-wave laser diode (LD). After passing through a 945-nm short-pass filter (945SP) used to remove the residual long wavelengths from outside fluorescence, the pump light is focused into a 10 mm, periodically poled potassium titanyl phosphate (PPKTP) crystal to convert pump photons into pairs of orthogonally polarized photons at a wavelength of 1560 nm. The down-converted photons are separated by a polarizing beam splitter (PBS), and the residual pump light is removed by two filters. The down-converted photons are then coupled into single-mode fibers with a 76.9\% coupling efficiency. One of them passes through a circulator before entering the Sagnac loop. To herald the presence of the correlated photon, the other one is directly detected by a high-quality superconducting nanowire single-photon detector (SNSPD1) with 80.6\% detection efficiency. Overall, the heralding efficiency is approximately $(62.0\pm 1.0)\%$, including all losses in the photon-pair source setup (but excluding the elements used to implement the communication complexity protocol).
\subsection{Sagnac loop}
The Sagnac loop is depicted in Fig.\ 2 of the main text. A 50:50 beam splitter (BS) is used to create the superposition of communication direction. Alice and Bob each possess a variable delay and a phase modulator to implement their unitary operators based on their inputs when the photon arrives at their stations. The unitaries consist of phase modulations followed by delays. However, because $f(0) = g(0) = 0$ and the photon is initialized in the first time bin, no phase modulation needs to be implemented at the first station, only the delays. The photon then continues along the Sagnac loop, through a 7 km fiber spool in which the pulse trains are stored, and to the second station. There, Alice and Bob each implement their phase modulation, followed by their delay. Finally, the two paths interfere at the BS and the photon is detected by two high-quality SNSPDs (SNSPD2 with a detection efficiency of 76.5\% and dark count rate of 9.3 Hz, SNSPD3 with a detection efficiency of 72.0\% and a dark count rate of 8.3 Hz). The two-photon coincidences are registered by a time-to-digital converter (TDC). All components in the loop have ultra-low insertion loss to minimize the communication complexity.
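As a consistency check (our own arithmetic, assuming a typical fiber propagation delay of $\sim 5~\mu$s per km): the longest pulse train spans $2^{n+1}=2^{13}$ time bins of 2 ns for $n=12$, i.e., about $16.4~\mu$s, while the 7 km spool provides roughly $7~\mathrm{km}\times 5~\mu\mathrm{s/km}\approx 35~\mu$s of storage, comfortably longer than the pulse train.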
\subsection{Dark counts}
The coincidence time window in the causally indefinite protocol is very large. In contrast to typical two-photon coincidence experiments where the coincidence window tends to be a few ns, in the causally indefinite protocol, the heralding photon arrives in a given time bin while the heralded photon can be in one of $2^{n+1}$ time bins. This means that the coincidence time window is $2^{n+2}$ ns given the 2 ns time bin, $\sim (n+1)/3$ orders of magnitude larger than that in normal two-photon coincidence detection. If there is a dark count during this time (or a subset of this interval when applying gating), it will be erroneously registered as an event. To suppress dark counts due to thermal background radiation, we use two high-quality SNSPDs equipped with low-loss bandpass filters.
The SNSPDs can detect photons with an ultra-low dark-count rate of $< 10$ Hz and a high detection efficiency of $>72\%$.
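As a rough illustration of the scale involved (our own estimate, not a quoted specification): for the largest system size $n=12$, the coincidence window is $2^{n+2}=2^{14}$ ns $\approx 16.4~\mu$s, so a 10 Hz dark-count rate contributes a spurious-event probability of only about $10~\mathrm{Hz}\times 16.4~\mu\mathrm{s}\approx 1.6\times 10^{-4}$ per heralded photon.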
\subsection{Losses}
To minimize the experimental communication in the causally indefinite protocol, the experimental setup should have as small a loss as possible. Here, three strategies are applied to minimize the total loss of the setup. First, for the heralded single-photon source, by setting the pump beam waist to $340~\mu\mathrm{m}$, the detection beam waist at the center of the PPKTP crystal to $170~\mu\mathrm{m}$, and by heralding the daughter photons using a high-quality SNSPD (detection efficiency of 80.6\%, dark count rate of 11.0 Hz), we achieve a high heralding efficiency of $(62.0\pm 1.0)\%$, corresponding to 2.07 dB loss in the source. Second, in the Sagnac loop, all components are custom-made to achieve ultra-low loss. As discussed in Section E of this Supplemental Material, instead of using highly lossy optical switches, we implement the delay by joining the different fiber segments via mating sleeves. There, we use different connector types, SC/PC to FC/PC mating sleeves (Thorlabs), which provide a lower loss ($ <0.11$ dB), compared with a typical loss of FC/PC to FC/PC mating sleeves ($\sim0.3$ dB). As a result, for each system size $n$, we implement the delays with extremely low loss. Lastly, for the detection we use two high-quality SNSPDs that have a high detection efficiency of 72.0\% and 76.5\% together with an ultra-low dark count rate of 8.3 Hz and 9.3 Hz, respectively. The overall system loss, excluding the loss of the variable delays in Alice and Bob's stations, is 11.62 dB.
We list the losses of the fiber segments of different lengths (Tab. \ref{tabledelay}), as well as the losses of Alice's and Bob's sections for each system size (Tab. \ref{tableresults}). The balancing loss in the beamsplitter is 0.27~dB.
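For reference (a simple unit conversion, not an additional measurement), a system loss of 11.62 dB corresponds to a transmission of $10^{-11.62/10}\approx 6.9\%$, i.e., on average roughly one in fifteen heralded photons survives the full loop and is detected.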
\subsection{Measurement of the second-order correlation function}\label{Measurement second}
We need to demonstrate that pure single photons are used, as this is an underlying assumption~\cite{Guerin2016}. This can be shown by measuring the second-order correlation function at zero delay of our heralded single-photon source, $g^{(2)}(0)$. For an ideal heralded single-photon source, the value of $g^{(2)}(0)$ is 0, which implies that only a single photon is sent to the Sagnac loop each time. To measure $g^{(2)}(0)$, in the same way as the scheme in Refs.~\cite{2005Ren,2015JIN201547}, we simply steer one of the outputs of the beam splitter (in Fig. 2 of the main text) to SNSPD2, and the other one to SNSPD3. Since the pump light is emitted by a continuous-wave LD, we register all detection events by the TDC and post-select the coincidence events. $g^{(2)}(0)$ is evaluated with the following equation:
\begin{eqnarray}
g^{(2)}(0) = \frac{2\, C_{123}\, C_{1}}{\left( C_{12} + C_{13}\right) ^{2}}
\end{eqnarray}
where $C_1$ is the singles count rate of SNSPD$_1$; $C_{12}$ and $C_{13}$ are the two-fold coincidence rates between SNSPD$_1$ and SNSPD$_2$, and between SNSPD$_1$ and SNSPD$_3$, respectively; $C_{123}$ denotes the three-fold coincidence rate of SNSPD$_1$, SNSPD$_2$ and SNSPD$_3$.
At our pump power of 3~mW, we average the count rates over 5 minutes, obtaining $g^{(2)}(0)=(3.27\pm23.5)\times 10^{-5}$, where the error bars represent 1 standard deviation, following Poisson statistics. The value is consistent with 0, which means that an almost perfect single-photon source is used in our experiment. Hence, similarly to Refs. \cite{Procopio2015,Guerin2016}, using the hypothetical photon-counter scheme where we place counters at the output ports of Alice's and Bob's stations to read the number of uses of the channel, we would see that the counters of Alice and Bob each read 1, which means that the channel is used only once.
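As a minimal computational aid (the count rates below are hypothetical placeholders, not our measured values), the estimator above can be coded directly in Python:
\begin{verbatim}
def g2_zero(C1, C12, C13, C123):
    # Heralded second-order correlation at zero delay:
    # g2(0) = 2 * C123 * C1 / (C12 + C13)**2
    return 2.0 * C123 * C1 / (C12 + C13) ** 2

# Hypothetical count rates (per second), for illustration only:
print(g2_zero(C1=2.0e6, C12=5.0e4, C13=5.0e4, C123=0.02))
\end{verbatim}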
\subsection{Implementation of the unitary transformations}
For our protocol, Alice (Bob) needs to realize one of the $2^n$ possible delays, depending on her (his) input bit string, and guarantee that her (his) phase modulator (PM) correctly modulates the photon after the delay from Bob (Alice). Therefore, after the delay, the photon pulse must be fully contained in the appropriate time bin. Instead of using highly lossy switches, we implement the variable delays with $n$ segments of fiber, where for the $k$-th segment, there is a choice of two fibers that have a length difference equivalent to a delay of $2^{k-1}$ time bins, $k\in \{{1,2,\ldots,n}\}$, as illustrated in Fig. \ref{delay}. We use a 25 GHz arbitrary waveform generator (Tektronix 70002), which is triggered by the heralding signal, to generate a modulation pulse sequence with a pulse width of 2 ns. The rising edge and the falling edge of the pulse are $<100$ ps. The delay between the heralding signal and the modulation pulse is accurately controlled such that the heralded photon is positioned in the middle of the first time bin.
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.8\linewidth]{delayline.pdf}
\caption{(Color online) Schematic of the implementation of $X(x)$. For each of the $n$ segments, either the short option (black fiber) or the long option (yellow fiber) is chosen, depending on whether the corresponding digit of $x$ in binary representation is $0$ or $1$. For the $k$-th segment, the length of the long option and that of the short option are $t\times(2^{k-1}+1)$ and $t$, respectively, where $t=2$ ns is the pulse width of phase modulation.
}\label{delay}
\end{figure}
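The segment-selection logic of Fig.~\ref{delay} can be summarized by the following sketch (our own illustration, assuming the $t=2$ ns time-bin width stated above): the binary digits of $x$ select, for each segment, the long or the short fiber, and the resulting extra delay relative to the all-short configuration is exactly $x$ time bins.
\begin{verbatim}
def choose_segments(x, n, t=2.0):
    # Segment k = 1..n: long option t*(2**(k-1) + 1) ns if bit
    # (k-1) of x is 1, short option t ns otherwise.
    choices, total = [], 0.0
    for k in range(1, n + 1):
        bit = (x >> (k - 1)) & 1
        length = t * (2 ** (k - 1) + 1) if bit else t
        choices.append(("long" if bit else "short", length))
        total += length
    return choices, total

# Example: n = 3, x = 5 (binary 101).
extra = choose_segments(5, 3)[1] - choose_segments(0, 3)[1]
print(extra)  # 10.0 ns = 5 time bins of 2 ns each
\end{verbatim}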
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.6\linewidth]{measuredelay.pdf}
\caption{(Color online) Setup of the delay measurement. We use a continuous-wave laser diode (LD) at 1560 nm, an intensity modulator (IM), and an attenuator (Att) to generate pulse signals with a temporal width of $\sim100$ ps. Then we acquire the two arrival-time distributions of the photons after travelling through the fiber segments of interest, using an SNSPD and a time interval analyzer (TIA, Picoquant HydraHarp400). The delay of the fiber spool is calculated as the difference of the mean values of the two distributions. Since the IM and the start of the TIA are triggered by the same electronics, the same detector is used for both distributions, and the arrival times are recorded by the same time-tag unit, the difference is only affected by the resolution of the TIA, which reaches 1 ps.
}\label{measuredelay}
\end{figure}
The time delay of commercial off-the-shelf fiber spools is usually measured using a conventional optical time-domain reflectometer (OTDR), which by itself has low accuracy and does not meet the sensitivity demands of our experiment. For example, the distance uncertainty of a conventional state-of-the-art OTDR (Exfo FTB7600E) is $ \pm (0.75 + 0.001\times L)$ meters, corresponding to a time-delay uncertainty of $\pm (3.75+0.001\times L/v)$ ns, where $L$ is the length of the fiber spool and $v$ is the group velocity in fiber. In our experiment, we made all the fiber segments in-house with a time-delay uncertainty of $\sim 2.5$ ps using the following method:
\begin{enumerate}
\item To make one of the segments whose time delay is $2\times(2^{k-1}+1)$ ns, we cut a fiber with a time delay of $2\times(2^{k-1}+1)+5$ ns, with an FC/PC connector at one end and an SC bare fiber terminator (Thorlabs) at the other end, using a commercial OTDR to approximate the length. For the segments whose time delay falls within the event dead zone of the OTDR, we make them directly by using a ruler with a resolution of 0.5 mm (= 2.5 ps).
\item We measure the time delay of the segment using our home-built photon-counting OTDR variant shown in Fig.\ \ref{measuredelay}, which achieves a resolution of 1 ps. Then, we cleave off the redundant fiber, whose length is measured by a ruler with a resolution of 0.5 mm (= 2.5 ps).
\item We pull the fiber out of the bare fiber terminator. The fiber is then connectorised and slightly polished with an SC/FC connector, achieving a low insertion loss.
\end{enumerate}
The resultant length of each of the fiber segments for Alice and Bob is shown in Table \ref{tabledelay}. For different combinations of fiber segments, the largest deviation of the delay, compared to the target value, is 101 ps for Alice and 46 ps for Bob---both far below 900 ps, the maximum tolerable deviation for applying the correct phase to the photon pulse in the desired time bin.
\begin{table*}[ht!]
\scriptsize
\centering
\caption{ (a) Length and loss of each of Alice's fiber segments. (b) Length and loss of each of Bob's fiber segments. For the $k$-th segment, the length of the long option and that of the short option are $t\times (2^{k-1}+1)$ and $t$, respectively, where $t=2$ ns is the pulse width of phase modulation. Errors shown represent 1 standard deviation. }
\subtable[]{
\begin{tabular}{|ccccccc|}
\hline \hline
\multicolumn{1}{|c|}{\diagbox{Option}{$k$}}&1&2&3&4&5&6\\
\hline
\multirow{1}{*}{Long length~(ns)}&$4.006\pm0.001$ &$ 6.007\pm0.001 $&$ 10.004\pm0.000 $&$18.012\pm0.001 $&$33.999\pm0.002 $ & $66.002
\pm0.001 $ \\
Loss~(dB)&$0.101\pm0.001$ &$ 0.090\pm0.002 $&$ 0.139\pm0.002 $&$0.115\pm0.002 $&$0.125\pm0.003 $ & $0.130
\pm0.001 $ \\
\hline
\multirow{1}{*}{Short length~(ns)}&$ 2.008 \pm0.001 $ &$ 2.010 \pm0.001 $ & $ 2.008 \pm0.001 $ &$ 2.008 \pm0.001 $ & $ 2.010
\pm0.001 $ &$ 2.003 \pm0.001 $\\
Loss~(dB)&$0.104\pm0.002$ &$ 0.078\pm0.001 $&$ 0.068\pm0.003 $&$0.062\pm0.002 $&$0.046\pm0.003 $ & $0.031
\pm0.003 $ \\
\hline \hline
\multicolumn{1}{|c|}{\diagbox{Option}{$k$}}&7&8&9&10&11&12\\
\hline
\multirow{1}{*}{Long length~(ns)}&$ 130.008 \pm0.001 $&$257.948
\pm0.001 $&$513.995\pm0.001$ &$1025.985\pm0.001 $&$ 2049.936\pm0.002 $&$4098.000\pm0.005 $\\
Loss~(dB)&$0.089\pm0.002$ &$ 0.110\pm0.002 $&$ 0.154\pm0.001 $&$0.145\pm0.001 $&$0.198\pm0.003 $ & $0.198
\pm0.001 $ \\
\hline
\multirow{1}{*}{Short length~(ns)}&$ 2.003 \pm0.001 $ &$2.009 \pm0.001 $ & $ 2.006 \pm0.001 $ &$ 2.009 \pm0.001 $&$ 2.009 \pm0.001 $&$ 2.008 \pm0.001 $\\
Loss~(dB)&$0.053\pm0.002$ &$ 0.072\pm0.001 $&$ 0.094\pm0.002 $&$0.110\pm0.001 $&$0.082\pm0.003 $ & $0.076
\pm0.002 $ \\
\hline \hline
\end{tabular}\label{Tabalice}
}
\subtable[]{
\begin{tabular}{|ccccccc|}
\hline \hline
\multicolumn{1}{|c|}{\diagbox{Option}{$k$}}&1&2&3&4&5&6\\
\hline
\multirow{1}{*}{Long length~(ns)}&$4.013\pm0.001$ &$6.010\pm0.001 $&$ 10.003\pm0.001$&$18.004\pm0.001$ &$34.009 \pm0.000 $ & $66.056 \pm0.001 $ \\
Loss~(dB)&$0.0149\pm0.001$ &$0.0176\pm0.001 $&$ 0.250\pm0.001$&$0.098\pm0.002$ &$0.218 \pm0.001 $ & $0.170 \pm0.001 $ \\
\hline
\multirow{1}{*}{Short length~(ns)}&$ 2.008 \pm0.001 $ & $ 1.999 \pm0.001 $ & $ 2.008 \pm0.001 $ &$ 2.009 \pm0.001 $ & $ 2.007 \pm0.001 $ &$2.010 \pm0.001 $\\
Loss~(dB)&$0.073\pm0.002$ &$0.106\pm0.002 $&$ 0.096\pm0.001$&$0.076\pm0.001$ &$0.081 \pm0.001 $ & $0.098 \pm0.001 $ \\
\hline \hline
\multicolumn{1}{|c|}{\diagbox{Option}{$k$}}&7&8&9&10&11&12\\
\hline
\multirow{1}{*}{Long length~(ns)}&$ 130.002 \pm0.001 $&$257.992 \pm0.001 $&$514.004 \pm0.001$ &$ 1025.999\pm0.002 $&$ 2049.961\pm0.004 $& $ 4097.993
\pm0.007$\\
Loss~(dB)&$0.100\pm0.002$ &$0.070\pm0.003 $&$ 0.110\pm0.002$&$0.040\pm0.003$ &$0.400 \pm0.001 $ & $0.280 \pm0.001 $ \\
\hline
\multirow{1}{*}{Short length~(ns)}&$ 2.005\pm0.001 $ & $ 2.006 \pm0.001 $ & $ 2.005 \pm0.001 $ &$ 2.004\pm0.001 $ & $ 2.007 \pm0.001 $ &$ 2.002\pm0.001 $\\
Loss~(dB)&$0.089\pm0.001$ &$0.043\pm0.002 $&$ 0.073\pm0.002$&$0.033\pm0.003$ &$0.128 \pm0.001 $ & $0.113 \pm0.001 $ \\
\hline\hline
\end{tabular} \label{Tabbob}
}
\label{tabledelay}
\end{table*}
\section{Detailed Experimental results}\label{detailed experimental result}
Table \ref{tableresults} shows the complete experimental results.
Figure~\ref{Figerror} shows the experimental error probabilities for system sizes ranging from $n=9$ to $11$. For each system size, the largest error probability is obtained in the worst case (red bar). The error bars represent 1 standard deviation. A possible reason why the error rises with growing system size is that the thermal expansion of the fiber segments, which differs between sections of different lengths, cannot be fully compensated and results in a growing length mismatch as $n$ increases.
\begin{table*}[ht!]
\caption{ Detailed experimental results. The system loss and the error probability are estimated for the worst-case inputs for Alice and Bob (consisting of $x=y=2^n-1$ and $f(z)=g(z)=0$ for $z$ even and $f(z)=g(z)=1$ for $z$ odd). The error bars indicate 1 standard deviation.}
\begin{tabular}{ccccc}
\hline \hline
System size ($n$)&9&10&11&12\\
Loss of Alice's section~(dB)&1.34$\pm$0.04&1.39$\pm$0.04&1.78$\pm$0.04&2.06$\pm$0.04\\
Loss of Bob's section~(dB)&0.9$\pm$1.06&1.26$\pm$0.04&1.26$\pm$0.04&1.46$\pm$0.04\\
System loss (dB)&$13.86\pm0.04$&$14.06\pm0.04$&$14.66\pm0.04$&$15.14\pm0.04$\\
Error probability ($\epsilon$)&$0.0405\pm0.0023$&$0.0498\pm0.0059$&$0.0543\pm0.0028$&$0.0638\pm0.0025$\\
Q&$271.26\pm2.50$&$308.45\pm2.84$&$383.66\pm3.53$&$461.45\pm4.25$\\
$\gamma$ (classical)&$1.636\pm0.015$&$0.930\pm0.008$&$0.578\pm0.005$&$0.348\pm0.003$\\
$\gamma$ (quantum definite)&$3.272\pm0.030$&$1.860\pm0.017$&$1.157\pm0.010$&$0.696\pm0.006$\\
\hline \hline
\end{tabular}
\label{tableresults}
\end{table*}
\begin{figure}[!htbp]
\subfigure[]{\label{error11bits}
\centering
\includegraphics[width=0.63\linewidth]{error11bits.pdf}
}
\subfigure[]{\label{error10bits}
\centering
\includegraphics[width=0.63\linewidth]{error10bits.pdf}
}
\subfigure[]{\label{error9bits}
\centering
\includegraphics[width=0.63\linewidth]{error9bits.pdf}
}
\caption{ (a) Error probability for the system size $n=11$. (b) Error probability for the system size $n=10$. (c) Error probability for the system size $n=9$. For each system size $n$, the red bar denotes the worst case of inputs $x=y=2^{n}-1$. The other values of $k\in\{0,1,\ldots,n-1\}$ correspond to Alice's $(k+1)$-th bit being flipped from 1 to 0. The red dashed line illustrates the worst error probability accounting for the uncertainty. The error bars indicate 1 standard deviation.}\label{Figerror}
\end{figure}
What are the most perfect fluids in Nature with the smallest shear viscosity
($\eta $) per entropy density ($s$)? Kovtun, Son, and Starinets\ (KSS) \cite%
{KOVT1} suspected that they are a class of strongly interacting conformal
field theories (CFTs) whose $\eta /s=1/(4\pi )$. They even conjectured that $%
1/(4\pi )$ is the minimum bound for $\eta /s$\ for all physical systems.
Ever since the KSS bound was proposed, much progress has been made in
testing this bound and trying to identify the most perfect fluid (see \cite%
{Kapusta:2008vb,Schafer:2009dj} for recent reviews). It is found that $\eta
/s$ can be as small as possible (but still positive) in a carefully
engineered meson system \cite{Cohen:2007qr,Cherman:2007fj}, although the
system is metastable. Also, in strongly interacting CFTs, the universal
value $\eta /s=1/(4\pi )$ is obtained only in the limit of infinite $N$,
with $N$ the size of the gauge group, and infinite t'Hooft coupling limit
\cite{Son:2007vk}. $1/N$ corrections can be negative, however, \cite%
{Kats:2007mq,Brigante:2007nu}\ and can modify the $\eta /s$ bound slightly
\cite{Brigante:2008gz,Buchel:2008vz}.
In the real world, the smallest $\eta /s$ known so far belongs to the hot and dense matter produced at RHIC \cite{RHIC}, thought to be quark gluon plasma just above the phase transition temperature, with $\eta /s=0.1\pm 0.1(\mathrm{theory})\pm 0.08(\mathrm{experiment})$ \cite{Luzum:2008cw}.
extracted by another group \cite{Song:2008hj} and a lattice computation of
gluon plasma yields $\eta /s=0.134(33)$ \cite{etas-gluon-lat}. Progress has
been made in cold unitary fermi gases as well. An analysis of the damping of
collective oscillations gives $\eta /s\gtrsim 0.5$ \cite{Schafer,Turlapov}.
Even smaller values of $\eta /s$ are indicated by recent data on the
expansion of rotating clouds \cite{Clancy,Thomas} but more careful analyses
are needed \cite{Schaefer2}.
Previous studies have given some clues about where to find the most perfect
fluid in nature. The first one is to study strongly interacting systems
because strong interaction generally implies small $\eta /s$. The second
clue can be found in a large class of systems where $\eta /s$ goes to a
local minimum near the phase transition temperature ($T_{c}$) \cite%
{Csernai:2006zz,Chen:2006iga,Chen:2007jq}. In particular, $\eta /s$ develops
a cusp(jump) at $T_{c}$ for a second(first) order phase transition and a
smooth local minimum for a cross over. This behavior is seen in QCD with
zero baryon chemical potential \cite{Csernai:2006zz,Chen:2006iga} and near
the nuclear liquid-gas phase transition \cite{Chen:2007xe}. It is also seen
in cold unitary fermi gases \cite{etas-supfluid}, in H$_{2}$O, N, and He and
in all the matters with data available in the NIST database \cite%
{webbook,Csernai:2006zz,Chen:2007xe}. Theoretically, these behaviors can be
reproduced in controlled calculations of weakly interacting real scalar
field theories \cite{Chen:2007jq}. Thus, it was speculated that this feature
is universal. If this is indeed the case, then $\eta /s$ can be used to
probe some parts of the systems which are hard to explore otherwise. For
example, one can try to locate the critical point of QCD by measuring $\eta
/s$ \cite{Lacey:2006bc,Chen:2007xe}.
In this paper, however, we present a counterexample of the $\eta /s$
behavior speculated above. In this model, $\eta /s$ does not go to a local
minimum at the second order phase transition temperature. Our model is a
mixture of two weakly self-interacting real scalar fields with one
condensing at low temperatures while the other remains in the symmetric
phase. There is no interaction between the two fields. The advantage of this
model is that its $\eta /s$ can be computed reliably as in \cite{Chen:2007jq}
because of the small couplings \cite{Jeon}. Other counterexamples have been
asserted previously in literature. One of them is a $\sigma $ model
calculation with a local minimum below $T_{c}$ \cite{Dobado:2009ek}. In this
model, large couplings are used to mimic the case of QCD. Thus, it is not
clear this is due to the failure of the Boltzmann equation at large
couplings \cite{Jeon}, or if the effect is generic. Also, holographic models
have constant $\eta /s$ ($=1/4\pi $) in the limit of infinite $N$ and the
infinite 't Hooft coupling limit. If $1/N$ corrections are added, $\eta /s$
becomes monotonically increasing below $T_{c}$ and a constant above $T_{c}$
\cite{Buchel:2010wf}. Our model is a field theory model that one can compute
directly and reliably. Our final result shows that $\eta /s$ does not have
to develop a local minimum at $T_{c}$.
\section{The model}
We will study real scalar theories in cases I-III:%
\begin{eqnarray}
\mathcal{L}_{I} &=&\mathcal{L}_{1}, \notag \\
\mathcal{L}_{II} &=&\mathcal{L}_{2}, \notag \\
\mathcal{L}_{III} &=&\mathcal{L}_{1}+\mathcal{L}_{2},
\end{eqnarray}%
where%
\begin{equation}
\mathcal{L}_{i}=\frac{1}{2}(\partial _{\mu }\phi _{i})^{2}-\frac{1}{2}\mu
_{i}^{2}\phi _{i}^{2}-\frac{1}{4}\lambda \phi _{i}^{4}.
\end{equation}%
The $\eta /s$ of cases I and II are well studied in \cite{Chen:2007jq} and
we follow the treatment there. $\lambda $ and $\mu _{i}^{2}$ are
renormalized quantities and the counterterm Lagrangian is not shown. The
renormalization condition is that the counterterms do not change the
particle mass and the four-point coupling at threshold. We will set $%
$0<\lambda \ll 1$ such that the systems are bounded from below and we can compute $\eta $ and $s$ to leading order in the $\lambda $ expansion. In case I, $\mu _{1}^{2}>0$ and $\phi _{1}$ stays in the
symmetric phase. The resulting $\eta /s$ is monotonically decreasing in
temperature ($T$) \cite{Chen:2007jq}. In case II, $\mu _{2}^{2}<0$. The $%
\phi _{2}\rightarrow -\phi _{2}$ symmetry is spontaneously broken below the
phase transition temperature $T_{c}$. The resulting $\eta /s$ is
monotonically decreasing when $T<T_{c}$ and becomes\ monotonically
increasing when $T>T_{c}$. Also, $\eta /s$ forms a cusp at $T_{c}$ under the
mean field approximation \cite{Chen:2007jq}.\
Because there is no interaction between $\phi _{1}$ and $\phi _{2}$ in case
III, the entropy density is just the sum of the $\phi _{1}$ and $\phi _{2}$
entropy
\begin{equation}
s_{III}=s_{I}+s_{II}. \label{s}
\end{equation}%
Analogously, in a linear response theory, the Kubo formula relates $\eta $
to an ensemble average of a correlator
\begin{equation}
\eta =-\frac{1}{5}\int_{-\infty }^{0}\mathrm{d}t^{\prime }\int_{-\infty
}^{t^{\prime }}\mathrm{d}t\int \mathrm{d}x^{3}\langle \left[
T^{ij}(0),T^{ij}(\mathbf{x},t)\right] \rangle \ , \label{Kubo}
\end{equation}%
where $T^{ij}$ is the spacial part of the off-diagonal energy momentum
tensor. $T_{III}^{ij}=T_{I}^{ij}+T_{II}^{ij}$ and $\langle \left[
T_{I}^{ij}(0),T_{II}^{ij}(\mathbf{x},t)\right] \rangle =\left[ \left\langle
T_{I}^{ij}(0)\right\rangle ,\left\langle T_{II}^{ij}(\mathbf{x}%
,t)\right\rangle \right] =0$, such that
\begin{equation}
\eta _{III}=\eta _{I}+\eta _{II}. \label{eta_III}
\end{equation}
\begin{figure}[tbp]
\scalebox{0.5}{\includegraphics{mixed_Fig1.eps}}
\caption{$m_{qp}$ vs. $T$ for cases I (solid curve, without symmetry breaking) and II (dashed curve, with symmetry breaking). Parameters can be
in arbitrary units.}
\end{figure}
The high $T$ behavior of $\eta /s$ can be analyzed using the $1/T$ expansion
as in Ref. \cite{Chen:2007xe}. Neglecting the slow running of the coupling constant, the dimensionful quantities $\mu _{i}^{2}$ and $T$ can only
contribute to the dimensionless ratio $\eta /s$ through the $\mu
_{i}^{2}/T^{2}$ combination (note that it is $\mu _{i}^{2}$, not $\mu _{i}$
that appears in the Lagrangian). As $T\rightarrow \infty $, $\eta /s$ has
the following $1/T$ expansion
\begin{eqnarray}
\frac{\eta _{I}}{s_{I}} &\rightarrow &\frac{c_{1}}{\lambda ^{2}}\left(
1+c_{2}\frac{\mu _{1}^{2}}{T^{2}}+\mathcal{O}(T^{-3})\right) , \\
\frac{\eta _{II}}{s_{II}} &\rightarrow &\frac{c_{1}}{\lambda ^{2}}\left(
1+c_{2}\frac{\mu _{2}^{2}}{T^{2}}+\mathcal{O}(T^{-3})\right) ,
\end{eqnarray}%
where $c_{1}>0$ and $c_{2}>0$. There is no $1/T$ term because as mentioned
above, the result does not depend on the sign of $\mu _{i}$. The $1/T$
expansion of $\eta /s$ in case III has the similar structure
\begin{equation}
\frac{\eta _{III}}{s_{III}}\rightarrow \frac{c_{1}^{\prime }}{\lambda ^{2}}%
\left( 1+c_{2}^{\prime }\frac{\mu _{1}^{2}+\mu _{2}^{2}}{2T^{2}}+\mathcal{O}%
(T^{-3})\right) .
\end{equation}%
Furthermore, in the limit of $\mu _{2}^{2}=\mu _{1}^{2}$, we have $\eta
_{I}=\eta _{II}$, $s_{I}=s_{II}$, and $\eta _{III}/s_{III}=\eta _{I}/s_{I}$
by Eqs.(\ref{s}) and (\ref{eta_III}). This implies $c_{1}^{\prime }=c_{1}$
and $c_{2}^{\prime }=c_{2}>0$. Therefore, if $\mu _{1}^{2}+\mu _{2}^{2}>0$, $%
\eta _{III}/s_{III}$ is monotonically decreasing in $T$ as $T\rightarrow
\infty $. The question is, whether this behavior persists from large $T$
down to $T_{c}$. Before answering this question numerically, we will try to
understand the behaviors of $\eta $ and $s$ separately.
In Fig. 1, we show the typical $m_{qp,i}$, the effective quasiparticle mass
of case $i$, as a function of $T$. When $T=0$, $m_{qp,I}^{2}=\mu _{1}^{2}$
and $m_{qp,II}^{2}=2\left\vert \mu _{2}^{2}\right\vert $. Then $m_{qp,I}^{2}$
increases for increasing $T$ due to the positive thermal mass effect, while $%
m_{qp,II}^{2}$ decreases to zero at $T_{c}$, and then increases again at
higher $T$. As $T\rightarrow \infty $, $m_{qp,I}^{2}-$ $m_{qp,II}^{2}=\mu
_{1}^{2}-\mu _{2}^{2}>0$. We have chosen the parameters such that $%
m_{qp,I}^{2}>$ $m_{qp,II}^{2}$ at all $T$, which gives $s_{I}<s_{II}$ in
Fig. 2. The cusp in $s_{II}$ is barely visible.
\begin{figure}[tbp]
\scalebox{0.5}{\includegraphics{mixed_Fig2.eps}}
\caption{$s/T^{3}$ and $\protect\eta /T^{3}$ for cases I (solid curve, without symmetry breaking), II (dashed curve, with symmetry breaking),
and III (dotted curve, the mixture of I and II). Parameters can be in
arbitrary units.}
\end{figure}
In Fig. 2, $\eta /T^{3}$ is also shown. Its behavior is very similar to that
of $m_{qp}$. To further explore this relation, we use the kinetic theory
approximation
\begin{equation}
\eta \sim \rho vl,
\end{equation}%
where $\rho $, $v$ and $l$ are the quasiparticle density, velocity and mean
free path, respectively. Then using $l\sim 1/nv\sigma $, where $n$ is the
number density and $\sigma $ is the cross section between quasiparticles, we
have
\begin{equation}
\eta \sim \frac{\rho }{n\sigma }=\frac{\epsilon }{\sigma },
\end{equation}%
where $\epsilon $ is the averaged quasiparticle energy. In a weakly coupled
system, $\epsilon $ can be approximated as a gas of free particles with mass
$m_{qp}$%
\begin{equation}
\epsilon =\frac{\int d^{3}p\ \epsilon _{qp}f\left( \epsilon _{qp}\right) }{%
\int d^{3}p\ f\left( \epsilon _{qp}\right) }\left[ 1+\mathcal{O}(\lambda )%
\right] ,
\end{equation}%
where $\epsilon _{qp}=\sqrt{p^{2}+m_{qp}^{2}}$ and the Bose-Einstein
distribution $f\left( \epsilon _{qp}\right) =1/\left( e^{\epsilon
_{qp}/T}-1\right) $. In two body collisions, $\sigma $ can be approximated
as
\begin{equation}
\sigma \sim \frac{\lambda _{eff}^{2}}{\epsilon ^{2}}.
\end{equation}%
Thus,
\begin{equation}
\eta \sim \frac{\epsilon ^{3}}{\lambda _{eff}^{2}}. \label{eta}
\end{equation}
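As a quick numerical sketch of this estimate (our own illustration; the temperature, mass, and units are arbitrary assumptions), the average quasiparticle energy entering Eq. (\ref{eta}) can be evaluated by direct integration over the Bose-Einstein distribution:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def avg_energy(T, m_qp):
    # eps = int d^3p eps_qp f(eps_qp) / int d^3p f(eps_qp),
    # with eps_qp = sqrt(p^2 + m_qp^2) and f the Bose factor.
    eps = lambda p: np.sqrt(p ** 2 + m_qp ** 2)
    f = lambda p: 1.0 / np.expm1(eps(p) / T)
    num, _ = quad(lambda p: p ** 2 * eps(p) * f(p), 0.0, 50.0 * T)
    den, _ = quad(lambda p: p ** 2 * f(p), 0.0, 50.0 * T)
    return num / den

# eta ~ eps^3 / lambda^2 up to prefactors; e.g. with m_qp = T = 1:
print(avg_energy(1.0, 1.0) ** 3)
\end{verbatim}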
\begin{figure}[tbp]
\scalebox{0.5}{\includegraphics{mixed_Fig3.eps}}
\caption{$\protect\epsilon ^{3}/(\protect\lambda ^{2}T^{3})$ vs. $T$ for
cases I (solid curve, without symmetry breaking), II (dashed curve,
with symmetry breaking), and III (dotted curve, the mixture of I and II).
Parameters can be in arbitrary units.}
\end{figure}
The effective coupling $\lambda _{eff}$ is $T$ dependent. The explicit
expression for the scattering amplitude is \cite{Chen:2007xe}
\begin{equation}
i\mathcal{T}\sim 6\lambda +\left( 6\lambda \left\langle \phi \right\rangle
\right) ^{2}\left[ \frac{1}{s-m_{qp}^{2}}+\frac{1}{t-m_{qp}^{2}}+\frac{1}{%
u-m_{qp}^{2}}\right] .
\end{equation}%
When $T\sim 0$, $s\sim 4m_{qp}^{2}$ and $t\sim u\sim 0$. However, $t\sim
u\sim 0$ causes no momentum redistribution and hence the $t$- and $u$%
-channels have no contribution to $\eta $. Thus, we can approximate $\lambda
_{eff}$ as
\begin{equation}
\lambda _{eff}\sim \lambda +\frac{6\lambda ^{2}\left\langle \phi
\right\rangle ^{2}}{s-m_{qp}^{2}}.
\end{equation}%
Under this approximation, $\lambda _{eff}$ decreases smoothly from $2\lambda
$ at $T=0$, to $\lambda $ at $T_{c}$\ and stays constant above $T_{c}$.
Since $\lambda _{eff}$ only varies by a factor $2$, we can further
approximate it by a constant $\lambda $ such that $\eta \sim \epsilon
^{3}/\lambda ^{2}$. As shown in Fig. 3, the $T$ dependence of $\epsilon
^{3}/\lambda ^{2}$ is indeed qualitatively similar to that of $\eta $ in
Fig. 2.
\begin{figure}[tbp]
\scalebox{0.5}{\includegraphics{mixed_Fig4.eps}}
\caption{$\protect\eta /s$ vs. $T$ for cases I (solid curve, without symmetry breaking), II (dashed curve, with symmetry breaking), and III
(dotted curve, the mixture of I and II). Parameters can be in arbitrary
units.}
\end{figure}
Finally, we present the $\eta /s$ results in Fig. 4. They are qualitatively
similar to $\eta /T^{3}$. As speculated above, $\eta _{III}/s_{III}$ is
indeed monotonically decreasing both below and above $T_{c}$. This is a
counterexample to the previous speculation that $\eta /s$ goes to a local
minimum at $T_{c}$ of a second order phase transition.
There are some approximations that we have made in this calculation but none
of them should change our conclusion qualitatively. The first one is the
\textquotedblleft Hartree approximation\textquotedblright\ that is used to
neglect all the sunset diagrams below $T_{c}$. This approximation\ is good
when $T\gg \lambda ^{1/2}T_{c}$. At lower $T$, the Hartree approximation is
not reliable. However, as $T\rightarrow 0$, $s$ approaches zero
exponentially (the excitations are massive) while $\eta $ approaches zero
via power laws. As a result, $\eta /s$ is decreasing in cases I-III at low $%
T $. This feature is not affected despite the Hartree approximation used.
The second approximation used is the mean field approximation. Unaccounted
quantum fluctuations can make the result unreliable in the region $\left\vert
T-T_{c}\right\vert /T_{c}\lesssim O(\lambda )$. However, this region can be
made arbitrarily small by reducing $\lambda $.
Finally, the end point of a first order phase transition (called a critical
point (CP)) is also a second order phase transition. This is a special kind
of second order phase transition which can be modeled by an effective field
theory whose $\phi _{i}^{2}$ and $\phi _{i}^{4}$ couplings vanish at $%
T_{c} $, so that the leading coupling is $\phi _{i}^{6}$. Hence the CP case
is different from the cases we consider here. In some systems $\eta $
diverges weakly near a CP.
\section{Conclusion}
The ratio $\eta /s$, the shear viscosity ($\eta )$ to entropy density ($s)$,
reaches its local minimum at the (second order) phase transition temperature
in a wide class of systems. It was suspected that this behavior might be
universal. However, we have presented a counterexample made of a system of
two weakly self-interacting real scalar fields with one of them condensing
at low temperatures while the other remains in the symmetric phase. There is
no interaction between the two fields. The resulting $\eta /s$ is
monotonically decreasing in temperature despite the phase transition.
We thank Brian Smigielski for careful reading of the manuscript. This work
is supported by the NSC and NCTS of Taiwan.
\noindent Because of patient heterogeneity in response to various
aspects of treatment, the paradigm of biomedical and health policy
research is shifting from the ``one-size-fits-all'' treatment approach
to precision medicine \citep{hamburg2010path}. Toward that end, an
important step is to understand how treatment effect varies across
patient characteristics, known as heterogeneity of treatment effect
(HTE) \citep{rothwell2005subgroup,rothwell2005subgroups}. Randomized
trials (RTs) are the gold standard method for treatment effect evaluation,
because randomization of treatment ensures that treatment groups are
comparable, and biases are minimized to the extent possible. However,
due to eligibility criteria for recruiting patients, the trial sample
is often limited in patient diversity, which renders the trial under-powered
to estimate the HTE. On the other hand, big real-world (RW) data are
increasingly available for research purposes, such as electronic health
records, claims databases, and disease registries, that have much
broader demographic and diversity compared to RT cohorts. Recently,
several national organizations \citep{norris2010selecting} and regulatory
agencies \citep{sherman2016real} have advocated using RW data to
have a faster and less costly drug discovery process. Indeed, big
data provide unprecedented opportunities for new scientific discovery;
however, they also present challenges such as confounding due to lack
of randomization.
The motivating application is to evaluate adjuvant chemotherapy for
resected non-small cell lung cancer (NSCLC) at early-stage disease.
Adjuvant chemotherapy for resected NSCLC has been shown to be effective
in late-stage II and IIIA disease on the basis of RTs \citep{le2003results}.
However, the benefit of adjuvant chemotherapy in stage IB NSCLC disease
is unclear. Cancer and Leukemia Group B (CALGB) 9633 is the only RT
designed specifically for stage IB NSCLC \citep{strauss2008adjuvant};
however, it comprises about $300$ patients which was undersized to
detect clinically meaningful improvements for adjuvant chemotherapy.
\textit{``Who can benefit from adjuvant chemotherapy with stage IB
NSCLC?''} remains an important clinical question. An exploratory
analysis of CALGB 9633 showed that patients with tumor size $\geq4.0$
cm may benefit from adjuvant chemotherapy \citep{strauss2008adjuvant}.
This benefit was also found in an analysis of a dataset from the National
Cancer Database (NCDB) \citep{speicher2015adjuvant}, while an analysis
based on the Surveillance, Epidemiology, and End Results (SEER) database
found adjuvant chemotherapy extended survivals as compared to observation
in all tumor size groups \citep{morgensztern2016adjuvant}. Although
such population-based disease registries provide rich information
citing the real-world usage of adjuvant chemotherapy, the concern
is the confounding bias associated with RW data. Our goal is to integrate
the CALGB 9633 trial with data from the NCDB to improve the RT findings
regarding the HTE of adjuvant chemotherapy with both age and tumor
size.
Many authors have proposed methods for generalizing treatment effects
from RTs to the target population, whose covariate distribution can
be characterized by the RW data \citep{buchanan2018generalizing,zhao2019robustify}.
When both RT and RW data provide covariate, treatment and outcome
information, there are two main approaches for integrative analysis:
meta analysis \citep[e.g.,][]{verde2015combining} and pooled patient
data analysis. The major drawback of meta analysis is that it uses
only aggregated information and does not distinguish the roles of
the RT and RW data, both having unique strengths and weaknesses. The
second approach includes all patients, but pooling the data from two
sources breaks the randomization of treatments and therefore relies
on causal inference methods to adjust for confounding bias \citep[e.g.,][]{prentice2005combined}.
Importantly, we cannot rule out possible unmeasured confounding in
the RW data. Moreover, most existing integrative methods focused on
average treatment effects (ATEs) but not on HTEs, which lies at the
heart of precision medicine.
To leverage the respective advantages of the RT and RW data, we propose an elastic algorithm for combining the RT and RW data for accurate
and robust estimation of the HTE. When the desirable assumptions required
for an integrative analysis of the RT and RW data are met, we use
the semiparametric efficiency theory \citep{bickel1993efficient,robins1994correcting}
to derive the semiparametric efficient estimator of the HTE. The main
identification assumptions underpinning our method are (i) the transportability
of the HTE from the RT and RW data to the target population and (ii)
no unmeasured confounding in the RW data. We further consider the
case when the RW data may violate the desirable assumption (ii). Utilizing
the design advantage of RTs, we derive a test statistic to gauge the
reliability of the RW data and decide whether or not to use the RW
data in an integrative analysis. Therefore, our test-based elastic
integrative estimator uses the efficient combining strategy for estimation
if the violation test is not significant and retains only the RT data
if the violation test is significant.
The proposed estimator belongs to pre-test estimation by construction
\citep{giles1993pre} and is non-regular. Exact inference for pre-test
estimation is difficult because the estimator depends on the randomness
of the test procedure. This issue cannot be solved by sample splitting
that divides the sample into two parts for testing and estimation,
separately \citep{toyoda1979pre}. This is because sample splitting
cannot bypass the issue of the additional randomness due to pre-testing
and therefore the impact of pre-testing still remains. Our test statistic
and estimator are constructed based on the whole sample data. To take
into account the impact of pre-testing, we decompose the test-based
elastic integrative estimator into orthogonal components, one is affected
by the pre-testing and the other is not. This step reveals the asymptotic
distributions of the proposed estimator to be mixture distributions
involving a truncated normal component with ellipsoid truncation and
a normal component. To demonstrate the non-regularity issue, we also
consider local alternatives \citep{staiger1994instrumental,cheng2008robust,laber2011adaptive}.
Importantly, the local alternative formulates the situation when the
assumption required for the RW data is weakly violated. This strategy
provides a better approximation of the finite-sample behavior of the
proposed estimator. Under this framework, we provide a data-adaptive
procedure to select the threshold of the test statistic that promises
the smallest mean square error (MSE) of the proposed estimator. Lastly,
we propose an elastic procedure to construct confidence intervals
(CIs), which is adaptive to the local alternative and the fixed alternative
and has good finite-sample coverage properties.
This article is organized as follows. Section \ref{sec:Basic-setup}
introduces the basic setup, the HTE, the identification assumptions,
and semiparametric efficient estimation. Section \ref{subsec: robust and elastic integ}
establishes a test statistic for gauging the reliability of the RW
data, a test-based elastic integrative estimator, the asymptotic properties,
and an elastic inference procedure. Section \ref{sec:Simulation}
presents a simulation study to evaluate the performance of the proposed
estimator in terms of robustness and efficiency. Section \ref{sec:Application}
applies the proposed method to combined RT and RW data to characterize
the HTE of adjuvant chemotherapy in patients with stage IB non-small
cell lung cancer. We relegate technical details and all proofs to
the supplementary material.
\section{Basic setup\label{sec:Basic-setup}}
\subsection{Notation, the HTE, and two data sources}
Let $A\in\{0,1\}$ be the binary treatment, $Z$ a vector of pre-treatment
covariates of interest with the first component being $1$, $X$ a
vector of auxiliary variables including $Z$, and $Y$ the outcome
of interest. To fix ideas, we consider $Y$ to be continuous or binary,
although our framework can be extended to general-type outcomes including
the survival outcome. We follow the potential outcomes framework \citep{splawa1990application,rubin1974estimating}
to define causal effects. Under the stable unit treatment value
assumption, let $Y(a)$ be the potential outcome had the subject been
given treatment $a$, for $a=0,1$. By the causal consistency assumption,
the observed outcome is $Y=Y(A)=AY(1)+(1-A)Y(0)$.
Based on the potential outcomes, the individual treatment effect is
$Y(1)-Y(0)$, and the HTE can be characterized through $\tau(Z)=\mathbb{E}\{Y(1)-Y(0)\mid Z\}$.
For a binary outcome, $\tau(Z)$ is also called the causal risk difference.
In clinical settings, the parametric family of HTEs is desirable and
has wide applications in precision medicine for the discovery of optimal
treatment regimes that are tailored to individual's characteristics
\citep{chakraborty2013statistical}. We assume the HTE function to
be
\begin{equation}
\tau(Z)=\tau_{\psi_{0}}(Z)=\mathbb{E}\{Y(1)-Y(0)\mid Z;\psi_{0}\},\label{eq:SNMM}
\end{equation}
where $\psi_{0}\in\mathbb{R}^{p}$ is a vector of unknown parameters and $p$
is fixed.
We illustrate the HTE function in the following examples.
\begin{example}\label{eg. continuous outcome}(\citealp{lu2014asimplemethod};
\citealp{shi2016RobustLearning}) For a continuous outcome, a linear
HTE function is $\tau_{\psi_{0}}(Z)=Z^{\mathrm{\scriptscriptstyle T}}\psi_{0}$, where each component
of $\psi_{0}$ quantifies the treatment effect of each $Z$.
\end{example}
\begin{example}\label{eg. binary outcome}(\citealp{lu2014asimplemethod};
\citealp{richardson2017modeling}) For a binary outcome, an HTE function
for the causal risk difference is $\tau_{\psi_{0}}(Z)=\{\exp(Z^{\mathrm{\scriptscriptstyle T}}\psi_{0})-1\}/\{\exp(Z^{\mathrm{\scriptscriptstyle T}}\psi_{0})+1\}$,
ranging from $-1$ to $1$.
\end{example}
To evaluate the effect of adjuvant chemotherapy, let $Y$ be the indication
of cancer recurrence within $1$ year of surgery. Consider the HTE
function in Example \ref{eg. binary outcome} with $Z=(1,{\rm age},{\rm tumor}\ {\rm size})^{\mathrm{\scriptscriptstyle T}}$
and $\psi_{0}=(\psi_{0,0},\psi_{0,1},\psi_{0,2})^{\mathrm{\scriptscriptstyle T}}$. This model
entails that, on average, the treatment would increase or decrease
the risk of cancer recurrence had the patient received adjuvant chemotherapy
by $|\tau_{\psi_{0}}(Z)|$, and the magnitude of increase depends
on age and tumor size. If $Z^{\mathrm{\scriptscriptstyle T}}\psi_{0}<0$, it indicates that the
treatment is beneficial for this patient. Moreover, if $\psi_{0,1}<0$
and $\psi_{0,2}<0$, then older patients with larger tumor sizes would
have greater benefit from adjuvant chemotherapy.
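For illustration (the parameter values below are hypothetical, not estimates from the data), the causal risk difference in Example \ref{eg. binary outcome} can be coded directly, using the identity $\{\exp(u)-1\}/\{\exp(u)+1\}=\tanh(u/2)$:
\begin{verbatim}
import numpy as np

def tau(Z, psi):
    # HTE for a binary outcome: tanh(Z'psi / 2), in (-1, 1).
    return np.tanh(np.dot(Z, psi) / 2.0)

# Hypothetical psi: negative age and tumor-size coefficients
# would mean older patients with larger tumors benefit more.
psi = np.array([0.5, -0.02, -0.10])
Z = np.array([1.0, 65.0, 4.0])  # (1, age, tumor size in cm)
print(tau(Z, psi))  # negative: treatment lowers recurrence risk
\end{verbatim}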
We consider two independent data sources: one from the RT study and
the other from the RW study. Let $\delta=1$ denote RT participation,
and let $\delta=0$ denote RW study participation. Let $V$ summarize
the full record of observed variables $(A,X,\delta,Y)$. The RT data
consist of $\{V_{i}:\delta_{i}=1,i\in\mathcal{A}\}$ with sample size
$m$, and the RW data consist of $\{V_{i}:\delta_{i}=0,i\in\mathcal{B}\}$
with sample size $n$, where $\mathcal{A}$ and $\mathcal{B}$ are
sample index sets for the two data sources. Our setup requires the
RT sample and the RW sample to contain the information on $Z$ but
may have different sets of auxiliary information in $X$. The total
sample size is $N=m+n$. Generally, $n$ is larger than $m$. In our
asymptotic framework, we assume both $m$ and $n$ go to infinity,
and $m/n\rightarrow\rho$, where $0<\rho<1$.
For simplicity of exposition, we use the following notation throughout
the paper: $\mathbb{P}_{N}$ denotes the empirical measure over the
combined RT and RW data, $M^{\otimes2}$$=MM^{\mathrm{\scriptscriptstyle T}}$ for a vector or
matrix $M$, $\mathbb{E}_{a}(\cdot)$ and $\mathbb{V}_{a}(\cdot)$ are the asymptotic
expectation and variance of a random variable, $A_{n}\indep B_{n}$
denotes $A_{n}$ is independent of $B_{n}$, $A_{n}\sim B_{n}$ denotes
that $A_{n}$ follows the same distribution as $B_{n}$, and $A_{n}\stackrel{\cdot}{\sim}B_{n}$
denotes that $A_{n}$ and $B_{n}$ have the same asymptotic distribution
as $n\rightarrow\infty$.
\subsection{Identification of the HTE from the RT and RW data}
The fundamental problem of causal inference is that $Y(0)$ and $Y(1)$
are not jointly observable. Therefore, the HTE is not identifiable
without additional assumptions. Let $e_{\delta}(X)=\pr(A=1\mid X,\delta)$
be the propensity score. We first consider an idealistic situation
satisfying the following assumptions.
\begin{assumption}\label{Asump:transp} $\mathbb{E}\{Y(1)-Y(0)\mid X,\delta\}=\tau(Z)$.
\end{assumption}
\begin{assumption}\label{Asump:rand-rct}$Y(a)\indep A\mid(X,\delta=1)$
for $a\in\{0,1\}$, and $0<e_{1}(X)<1$ for all $(X,\delta=1)$.
\end{assumption}
\begin{assumption}\label{Asump:rand-rwd}$Y(a)\indep A\mid(X,\delta=0)$
for $a\in\{0,1\}$, and $0<e_{0}(X)<1$ for all $(X,\delta=0)$.
\end{assumption}
Assumption \ref{Asump:transp} states that the HTE function is transportable
from the RT and RW sample to the target population. It holds if $Z$
captures all important treatment effect modifiers. This is a common
assumption in the data integration literature. Stronger
versions of Assumption \ref{Asump:transp} have also been considered
in the literature, including the ignorability of study participation,
i.e., $\{Y(0),Y(1)\}\indep\delta\mid X$ \citep{stuart2011use,buchanan2018generalizing},
or the mean exchangeability, i.e., $\mathbb{E}\{Y(a)\mid X,\delta\}=\mathbb{E}\{Y(a)\mid X\}$
for $a=0,1$ \citep{dahabreh2018generalizing}.
Assumptions \ref{Asump:rand-rct} and \ref{Asump:rand-rwd} entail
that treatment assignment in the RT and the RW study follows some
randomization mechanisms based on the pre-treatment variables $X$,
and all subjects have positive probabilities of receiving each treatment.
Assumption \ref{Asump:rand-rct} holds by the design of complete randomization
of treatment, where the treatment is independent of the potential
outcomes and covariates, i.e., $Y(a)\indep(A,X)\mid\delta=1$. It
also holds by the design of stratified block randomization of treatment
based on discrete $X$, where the treatment is independent of the
potential outcomes within each stratum of $X$. Moreover, the propensity
score $e_{1}(X)$ is known by design. Assumption \ref{Asump:rand-rwd}
holds if the observed covariates $X$ capture all the confounding
variables that are related to the treatment and outcome in observational
studies. Moreover, the propensity score $e_{0}(X)$ is usually unknown.
By trial design, we assume Assumption \ref{Asump:rand-rct} for the
RT data holds throughout the paper; however, we regard Assumption
\ref{Asump:rand-rwd} for the RW data as an idealistic assumption,
which may be violated. If Assumption \ref{Asump:rand-rwd} holds,
we will use a semiparametric efficient strategy to combine both data
sources for optimal estimation. However, if Assumption \ref{Asump:rand-rwd}
is violated, our proposed method will automatically detect the violation
and retain only the RT data for estimation.
Under Assumptions \ref{Asump:transp}--\ref{Asump:rand-rwd}, the
following identification formula holds for the HTE:
\[
\mathbb{E}\left\{ \left.\frac{AY}{e_{\delta}(X)}-\frac{(1-A)Y}{1-e_{\delta}(X)}\right\vert Z,\delta\right\} =\tau(Z).
\]
The identification formula motivates regression analysis based on
the modified outcome $A\{e_{\delta}(X)\}^{-1}Y-(1-A)\{1-e_{\delta}(X)\}^{-1}Y$
to estimate the HTE. This approach involves the inverse of the treatment
probability, and thus the resulting estimator may be unstable if some
estimated treatment probabilities are close to zero or one. It calls
for a principled way to construct improved estimators of the HTE.
Toward this end, we derive the semiparametric efficiency score (SES)
of the HTE under Assumptions \ref{Asump:transp}--\ref{Asump:rand-rwd}
that motivates improved estimators.
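
To make the modified-outcome idea concrete, the following Python sketch
(a toy illustration with hypothetical variable names, not part of the
formal development) regresses the modified outcome on $Z$ under a known
propensity score of $0.5$, as in a completely randomized trial:
\begin{verbatim}
# Toy sketch of the modified-outcome regression; all names are
# hypothetical.  With e(X) known (here 0.5), OLS of the modified
# outcome on Z consistently estimates psi, though the estimator
# can be quite variable.
import numpy as np

rng = np.random.default_rng(0)
n = 10000
Z = np.column_stack([np.ones(n), rng.normal(size=n)])  # Z = (1, Z1)
psi_true = np.array([1.0, 0.5])
A = rng.binomial(1, 0.5, size=n)             # randomized treatment
Y = rng.normal(size=n) + A * (Z @ psi_true)  # Y = Y(0) + A * tau(Z)

e = 0.5                                      # known propensity score
Y_mod = (A / e - (1 - A) / (1 - e)) * Y      # modified outcome
psi_hat, *_ = np.linalg.lstsq(Z, Y_mod, rcond=None)
print(psi_hat)                               # close to (1.0, 0.5)
\end{verbatim}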
\subsection{Semiparametric efficiency score\label{subsec:The-efficient-score} }
The semiparametric model consists of model (\ref{eq:SNMM}) with the
parameter of interest $\psi_{0}$ and the unspecified distribution.
Although Assumptions \ref{Asump:transp}--\ref{Asump:rand-rwd} do
not have testable implications (except for the positivity of the propensity
score), they impose restrictions on $\psi_{0}$. To see this, define
\begin{equation}
H_{\psi}=Y-\tau_{\psi}(Z)A.\label{eq:def of H(k)}
\end{equation}
Intuitively, $H_{\psi_{0}}$ subtracts from the subject's observed
outcome $Y$ the treatment effect of the subject's observed treatment
$\tau_{\psi_{0}}(Z)A$, so it mimics the potential outcome $Y(0)$.
Formally, following \citet{robins1994correcting}, we can show that
$\mathbb{E}(H_{\psi_{0}}\mid A,X,\delta)=\mathbb{E}\{Y(0)\mid A,X,\delta\}$. Therefore,
by Assumption \ref{Asump:transp}, $\psi_{0}$ must satisfy the restriction:
\begin{equation}
\mathbb{E}(H_{\psi_{0}}\mid A,X,\delta)=\mathbb{E}(H_{\psi_{0}}\mid X,\delta).\label{eq:model-part1}
\end{equation}
For simplicity of exposition, denote
\[
\mathbb{E}(H_{\psi_{0}}\mid X,\delta)=\mu_{\delta}(X),\quad\mathbb{V}(H_{\psi_{0}}\mid X,\delta)=\sigma_{\delta}^{2}(X),
\]
where $\mu_{\delta}(X)$ is the outcome mean function and $\sigma_{\delta}^{2}(X)$
is the outcome variance function. By viewing $(X,\delta)$ jointly
as the set of confounders, we invoke the SES of the structural nested
mean model in \citet{robins1994correcting}. We further make a simplifying
assumption that
\begin{equation}
\mathbb{E}(H_{\psi_{0}}^{2}\mid A,X,\delta)=\mathbb{E}(H_{\psi_{0}}^{2}\mid X,\delta),\label{eq:simplyassumption}
\end{equation}
which is a natural extension of (\ref{eq:model-part1}). This assumption
allows us to derive the SES of $\psi_{0}$ as
\begin{equation}
S_{\psi_{0}}(V)=q^{*}(X,\delta)\{H_{\psi_{0}}-\mu_{\delta}(X)\}\{A-e_{\delta}(X)\},\ \ q^{*}(X,\delta)=\left\{ \partial\tau_{\psi_{0}}(Z)/\partial\psi\right\} \left\{ \sigma_{\delta}^{2}(X)\right\} ^{-1},\label{eq:eff psi}
\end{equation}
which separates the term with the outcome, i.e., $H_{\psi_{0}}-\mu_{\delta}(X)$,
and the term with the treatment, i.e., $A-e_{\delta}(X)$. This feature
relaxes model assumptions of the nuisance functions while retaining
root-$n$ consistency in the estimation of $\psi_{0}$; see Section
\ref{subsec:From-SES}. Even without the simplifying assumption in
(\ref{eq:simplyassumption}), by the mean independence property in
(\ref{eq:model-part1}), we can verify that
\[
\mathbb{E}\{S_{\psi_{0}}(V)\}=\mathbb{E}[q^{*}(X,\delta)\mathbb{E}\{H_{\psi_{0}}-\mu_{\delta}(X)\mid X,\delta\}\times\mathbb{E}\{A-e_{\delta}(X)\mid X,\delta\}]=0.
\]
Therefore, if (\ref{eq:simplyassumption}) holds, $S_{\psi_{0}}(V)$
is the SES of $\psi_{0}$; if (\ref{eq:simplyassumption}) does not
hold, $S_{\psi_{0}}(V)$ is unbiased and permits robust estimation.
Before delving into robust estimation in the next subsection, we provide
examples to elucidate the SES below.
\begin{example}\label{example(A3.2)}
For a continuous outcome and the HTE function given in Example \ref{eg. continuous outcome},
the SES of $\psi_{0}$ is
\[
S_{\psi_{0}}(V)=Z\left\{ \sigma_{\delta}^{2}(X)\right\} ^{-1}\{H_{\psi_{0}}-\mu_{\delta}(X)\}\{A-e_{\delta}(X)\}.
\]
For a binary outcome and the HTE function given in Example \ref{eg. binary outcome},
the SES of $\psi_{0}$ is
\[
S_{\psi_{0}}(V)=Z\frac{2\exp(Z^{\mathrm{\scriptscriptstyle T}}\psi_{0})}{\{\exp(Z^{\mathrm{\scriptscriptstyle T}}\psi_{0})+1\}^{2}}[\mu_{\delta}(X)\{1-\mu_{\delta}(X)\}]^{-1}\{H_{\psi_{0}}-\mu_{\delta}(X)\}\{A-e_{\delta}(X)\}.
\]
\end{example}
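
For computation, the SES above is a simple plug-in formula once the
nuisance values are available. Below is a minimal Python sketch for the
continuous-outcome case (array names are hypothetical; the nuisance
inputs would come from the estimation algorithm in Section
\ref{subsec:From-SES}):
\begin{verbatim}
import numpy as np

def ses_continuous(psi, Y, A, Z, mu, e, sigma2):
    """Evaluate the SES for a continuous outcome with tau(Z) = Z'psi.
    Y, A: (n,) outcome and treatment; Z: (n, p) effect modifiers;
    mu, e, sigma2: (n,) values of mu_delta(X), e_delta(X), and
    sigma_delta^2(X).  Returns the (n, p) matrix of scores."""
    H = Y - (Z @ psi) * A              # H_psi = Y - tau(Z) A
    return Z * ((H - mu) * (A - e) / sigma2)[:, None]
\end{verbatim}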
\subsection{From SES to robust estimation\label{subsec:From-SES}}
In principle, an efficient estimator for $\psi_{0}$ can be obtained
by solving $\mathbb{P}_{N}S_{\psi}(V)=0$. However, $S_{\psi}$
depends on the unknown distribution through $e_{0}(X)$, $\mu_{\delta}(X)$,
and $\sigma_{\delta}^{2}(X)$, and thus solving $\mathbb{P}_{N}S_{\psi}(V)=0$
is infeasible. The state-of-the-art causal inference literature suggests
that estimators constructed from the SES are robust to the approximation
errors of machine learning methods, the so-called rate double robustness;
see, e.g., \citet{chernozhukov2018double,rotnitzky2019characterization}.
In order to obtain a robust estimator with good efficiency properties,
we consider approximating the unknown functions using non-parametric
or machine learning methods. Our algorithm for the estimation of $\psi_{0}$
proceeds as follows.
\begin{description}
\item [{Step$\ 1.$}] Obtain an estimator of $e_{0}(X)$ using non-parametric
or machine learning methods, denoted by $\widehat{e}_{0}(X),$ based
on $\{(A_{i},X_{i},\delta_{i}=0):i\in\mathcal{B}\}$.
\item [{Step$\ 2.$}] Obtain a preliminary estimator $\widehat{\psi}_{\mathrm{p}}$
by solving $\sum_{i\in\mathcal{A}}[q^{*}(X_{i},\delta_{i})\{A_{i}-e_{1}(X_{i})\}H_{\psi,i}]=0$,
based on $\{(A_{i},X_{i},Y_{i},\delta_{i}=1):i\in\mathcal{A}\}$.
\item [{Step$\ 3.$}] Obtain the estimators of $\mu_{1}(X)$ and $\mu_{0}(X)$
using non-parametric or machine learning methods, denoted by $\widehat{\mu}_{1}(X)$
and $\widehat{\mu}_{0}(X)$, based on $\{(A_{i},X_{i},H_{\widehat{\psi}_{\mathrm{p}},i},$
$\delta_{i}=1):i\in\mathcal{A}\}$ and $\{(A_{i},X_{i},H_{\widehat{\psi}_{\mathrm{p}},i},\delta_{i}=0):i\in\mathcal{B}\}$,
respectively.
\item [{Step$\ 4.$}] Let $\widehat{S}_{\psi}(V)$ be $S_{\psi}(V)$
with the unknown nuisance functions replaced by the estimates obtained
in Steps 1 and 3. Obtain the efficient integrative estimator $\widehat{\psi}_{\mathrm{eff}}$
by solving
\begin{equation}
\mathbb{P}_{N}\widehat{S}_{\psi}(V)=0.\label{eq:optimal ee}
\end{equation}
\end{description}
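
The following Python sketch outlines Steps 1--4 for a continuous
outcome with a linear HTE. It is a sketch under simplifying
assumptions: off-the-shelf learners stand in for the non-parametric
estimators, the working variance $\sigma_{\delta}^{2}(X)$ is taken to
be constant, and all data arrays are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve
from sklearn.linear_model import LinearRegression, LogisticRegression

def estimate_psi_eff(Y, A, X, Z, delta, e1=0.5):
    rt, rw = delta == 1, delta == 0
    # Step 1: estimate e_0(X) on the RW sample.
    e0_fit = LogisticRegression().fit(X[rw], A[rw])
    e_hat = np.where(rt, e1, e0_fit.predict_proba(X)[:, 1])
    # Step 2: preliminary psi from the RT data alone (e_1 known).
    def ee_rt(psi):
        H = Y - (Z @ psi) * A
        return (Z[rt] * ((A[rt] - e1) * H[rt])[:, None]).sum(axis=0)
    psi_p = fsolve(ee_rt, np.zeros(Z.shape[1]))
    # Step 3: regress H at psi_p on X separately in each sample.
    H_p = Y - (Z @ psi_p) * A
    mu_hat = np.empty(len(Y))
    for idx in (rt, rw):
        fit = LinearRegression().fit(X[idx], H_p[idx])
        mu_hat[idx] = fit.predict(X[idx])
    # Step 4: solve the estimated efficient estimating equation.
    def ee_eff(psi):
        H = Y - (Z @ psi) * A
        return (Z * ((H - mu_hat) * (A - e_hat))[:, None]).sum(axis=0)
    return fsolve(ee_eff, psi_p)
\end{verbatim}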
The estimator $\widehat{\psi}_{\mathrm{eff}}$ depends on the approximation
of nuisance functions. To establish the asymptotic properties of $\widehat{\psi}_{\mathrm{eff}}$,
we impose the following regularity conditions.
\begin{assumption}\label{assump:o1} (i) $\Vert\widehat{e}_{0}(X)-e_{0}(X)\Vert=o_{\mathbb{P}}(1)$
and $\Vert\widehat{\mu}_{\delta}(X)-\mu_{\delta}(X)\Vert=o_{\mathbb{P}}(1)$;
(ii) $\Vert\widehat{e}_{0}(X)-e_{0}(X)\Vert\times\Vert\widehat{\mu}_{\delta}(X)-\mu_{\delta}(X)\Vert=o_{\mathbb{P}}(n^{-1/2})$;
and (iii) additional regularity conditions in Assumption S1.\end{assumption}
Assumption \ref{assump:o1} collects typical regularity conditions for Z-estimation
or M-estimation \citep{Vaart2000}. Assumption \ref{assump:o1} (i)
requires the posited models to be consistent for
the two nuisance functions. Assumption \ref{assump:o1} (ii) states
that the \textit{combined rate} of convergence of the posited models
is $o_{\mathbb{P}}(n^{-1/2})$. Assumption S1 regularizes the complexity
of the functional space. Importantly, these conditions ensure that $\widehat{\psi}_{\mathrm{eff}}$
retains parametric-rate consistency while allowing flexible data-adaptive
models, without restricting to stringent parametric models.
\begin{theorem}\label{Thm:Consistency-DML}
Suppose Assumptions \ref{Asump:transp}--\ref{assump:o1} hold. Then,
$\widehat{\psi}_{\mathrm{eff}}$ is root-$n$ consistent for $\psi_{0}$ and
asymptotically normal.
\end{theorem}
Theorem \ref{Thm:Consistency-DML} implies that asymptotically, $\widehat{\psi}_{\mathrm{eff}}$
can be viewed as the solution to $\mathbb{P}_{N}S_{\psi}(V)=0$ when
the nuisance functions are known. Therefore, for consistent variance
estimation of $\widehat{\psi}_{\mathrm{eff}}$, we can use the standard sandwich
formula or the non-parametric bootstrap, treating the nuisance functions
as known.
\section{Test-based elastic integrative analysis\label{subsec: robust and elastic integ}}
A major concern for integrating the RT and RW data lies in the possibly
poor quality of the RW data. For example, if the RW data did not capture
all confounding variables, Assumption \ref{Asump:rand-rwd} is violated.
Then, combining the RT and RW data into an integrative analysis would
lead to a biased HTE estimator. In this section, we address the key
challenge of preventing any biases present in the RW data from leaking
into the proposed estimator.
\subsection{Detection of violation of the assumption required for the RW data}
We now suppose that all assumptions in Theorem \ref{Thm:Consistency-DML}
hold except that Assumption \ref{Asump:rand-rwd} may be violated.
We derive a test that detects the violation of this key assumption
for using the RW data. For simplicity, we denote the SES based solely
on the RT or RW data as
\[
S_{\mathrm{rt},\psi}(V)=\delta S_{\psi}(V),\ S_{\mathrm{rw},\psi}(V)=(1-\delta)S_{\psi}(V),
\]
respectively. Moreover, let $\widehat{S}_{\mathrm{rt},\psi}(V)$ and $\widehat{S}_{\mathrm{rw},\psi}(V)$
be $S_{\mathrm{rt},\psi}(V)$ and $S_{\mathrm{rw},\psi}(V)$ with the nuisance functions
replaced by their estimates, and let ${\mathcal{I}}_{\mathrm{rt}}=\mathbb{E}\{S_{\mathrm{rt},\psi_{0}}(V)^{\otimes2}\mid\delta=1\}$
and ${\mathcal{I}}_{\mathrm{rw}}=\mathbb{E}\{S_{\mathrm{rw},\psi_{0}}(V)^{\otimes2}\mid\delta=0\}$
be Fisher information matrices.
We now formulate the null hypothesis ${\rm H}_{0}$ for the case when
Assumption \ref{Asump:rand-rwd} holds and fixed and local alternatives
${\rm H}_{a}$ and ${\rm H}_{a,n}$ for the case when Assumption \ref{Asump:rand-rwd}
is violated:
\begin{description}
\item [{${\rm H}_{0}$}] (Null) $\mathbb{E}\{S_{\mathrm{rw},\psi_{0}}(V)\}=0$.
\item [{${\rm H}_{a}$}] (Fixed alternative) $\mathbb{E}\{S_{\mathrm{rw},\psi_{0}}(V)\}=\eta_{{\rm fix}}$
, where $\eta_{{\rm fix}}$ is a $p$-vector of constants with at
least one non-zero component.
\item [{${\rm H}_{a,n}$}] (Local alternative) $\mathbb{E}\{S_{\mathrm{rw},\psi_{0}}(V)\}=n^{-1/2}\eta$
, where $\eta$ is a $p$-vector of constants with at least one non-zero
component.
\end{description}
Considering the fixed alternative is common when establishing asymptotic
properties of standard estimators and tests; however, the local alternative
is useful to study finite-sample properties and regularity of non-standard
estimators and tests. In finite samples, the violation of Assumption
\ref{Asump:rand-rwd} may be weak; e.g., there exists a hidden confounder
in the RW data, but the association between the hidden confounder
and the outcome or the treatment is small. In these cases, under the
local alternative ${\rm H}_{a,n}$, the bias of $S_{\mathrm{rw},\psi_{0}}(V)$
may be small, quantified by $n^{-1/2}\eta$. The values of $\eta$
represent different tracks along which the bias of $S_{\mathrm{rw},\psi_{0}}(V)$
converges to zero. The local alternative encompasses the
null and fixed alternative as special cases by considering different
values of $\eta$. In particular, ${\rm H}_{0}$ corresponds to ${\rm H}_{a,n}$
with $\eta=0$. Also, ${\rm H}_{a}$ corresponds to ${\rm H}_{a,n}$
with $\eta=\pm\infty$; hence, considering ${\rm H}_{a}$ alone is
not informative about the finite-sample behaviors of the proposed
test and estimator.
Our detection of biases in the RW data is based on two key insights.
First, we obtain an initial estimator $\widehat{\psi}_{\mathrm{rt}}$ by solving
the estimating equation based solely on the RT data, $\sum_{i\in\mathcal{A}}\widehat{S}_{\mathrm{rt},\psi}(V_{i})=0$.
It is important to emphasize that the propensity score in the RT,
$e_{1}(X)$, is known by design, and therefore, $\widehat{\psi}_{\mathrm{rt}}$
is always consistent. Second, if Assumption \ref{Asump:rand-rwd}
holds for the RW data, $S_{\mathrm{rw},\psi_{0}}(V)$ is unbiased, but if
it is violated, $S_{\mathrm{rw},\psi_{0}}(V)$ is no longer unbiased. Therefore,
large values of $n^{-1/2}\sum_{i\in\mathcal{B}}\widehat{S}_{\mathrm{rw},\widehat{\psi}_{\mathrm{rt}}}(V_{i})$
provide evidence about the violation of Assumption \ref{Asump:rand-rwd}.
To detect the violation of Assumption \ref{Asump:rand-rwd} for using
the RW data, we construct the test statistic
\begin{equation}
T=\left\{ n^{-1/2}\sum_{i\in\mathcal{B}}\widehat{S}_{\mathrm{rw},\widehat{\psi}_{\mathrm{rt}}}(V_{i})\right\} ^{\mathrm{\scriptscriptstyle T}}\widehat{\Sigma}_{SS}^{-1}\left\{ n^{-1/2}\sum_{i\in\mathcal{B}}\widehat{S}_{\mathrm{rw},\widehat{\psi}_{\mathrm{rt}}}(V_{i})\right\} ,\label{eq:T}
\end{equation}
where $\Sigma_{SS}=\Gamma^{\mathrm{\scriptscriptstyle T}}{\mathcal{I}}_{\mathrm{rt}}\Gamma+{\mathcal{I}}_{\mathrm{rw}}$ is the
asymptotic variance of $n^{-1/2}\sum_{i\in\mathcal{B}}\widehat{S}_{\mathrm{rw},\widehat{\psi}_{\mathrm{rt}}}(V_{i})$,
$\Gamma={\mathcal{I}}_{\mathrm{rt}}^{-1}{\mathcal{I}}_{\mathrm{rw}}\rho^{-1/2}$, and $\widehat{\Sigma}_{SS}$
is a consistent estimator for $\Sigma_{SS}$. The test statistic $T$
measures the distance between $n^{-1/2}\sum_{i\in\mathcal{B}}\widehat{S}_{\mathrm{rw},\widehat{\psi}_{\mathrm{rt}}}(V_{i})$
and zero. If the idealistic assumption holds, we expect $T$ to be
small. By the standard asymptotic theory, we show in the supplementary
material that under ${\rm H}_{0},$ $T\stackrel{\cdot}{\sim}\chi_{p}^{2},$
a Chi-square distribution with $p$ degrees of freedom, as $n\rightarrow\infty$.
This result serves as the basis for detecting the violation of the
assumption required for the RW data.
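
In code, the test amounts to a few lines once the estimated RW scores
are available. The sketch below uses the simple variance estimate
$n^{-1}\sum_{i\in\mathcal{B}}\widehat{S}_{\mathrm{rw},\widehat{\psi}_{\mathrm{rt}}}(V_{i})^{\otimes2}$
in place of $\widehat{\Sigma}_{SS}$, which additionally accounts for
the variability of $\widehat{\psi}_{\mathrm{rt}}$; the inputs are
hypothetical.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def detect_rw_bias(S_rw, gamma=0.05):
    """S_rw: (n, p) estimated RW scores evaluated at psi_rt.
    Returns T and whether H0 is rejected at level gamma."""
    n, p = S_rw.shape
    s_bar = S_rw.sum(axis=0) / np.sqrt(n)  # n^{-1/2} sum of scores
    Sigma_hat = S_rw.T @ S_rw / n          # simplified Sigma_SS
    T = s_bar @ np.linalg.solve(Sigma_hat, s_bar)
    return T, T >= chi2.ppf(1 - gamma, df=p)
\end{verbatim}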
\subsection{Elastic integration\label{subsec:Elastic} }
Let $c_{\gamma}=\chi_{p,\gamma}^{2}$ be the $(1-\gamma)$ percentile
of $\chi_{p}^{2}$. For a small $\gamma,$ if $T\geq c_{\gamma}$,
there is strong evidence to reject ${\rm H}_{0}$ for the RW data;
i.e., there is a detectable bias for the RW data estimator. In this
case, we would only use the RT data for estimation. On the other hand,
if $T<c_{\gamma}$, there is no strong evidence for the bias of the
RW data estimator, and therefore, we would combine both the RT and
RW data for optimal estimation. Our strategy leads to the elastic
integrative estimator $\widehat{\psi}_{\mathrm{elas}}$ solving
\begin{equation}
\sum_{i\in\mathcal{A}\cup\mathcal{B}}\left\{ \delta_{i}\widehat{S}_{\psi}(V_{i})+\mathbf{1}(T<c_{\gamma})(1-\delta_{i})\widehat{S}_{\psi}(V_{i})\right\} =0.\label{eq:EE}
\end{equation}
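
Operationally, (\ref{eq:EE}) is a switch between the two estimating
equations already constructed. A schematic Python sketch (the
estimating-equation callables are hypothetical and would be built as
in Section \ref{subsec:From-SES}):
\begin{verbatim}
from scipy.optimize import fsolve

def psi_elastic(ee_rt, ee_all, T, c_gamma, psi_init):
    """ee_rt / ee_all: estimating equations using the RT data only
    or the combined RT and RW data; T: the test statistic; c_gamma:
    the (1 - gamma) Chi-square quantile."""
    return fsolve(ee_all if T < c_gamma else ee_rt, psi_init)
\end{verbatim}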
The choice of $\gamma$ involves the bias-variance tradeoff. On the
one hand, under ${\rm H}_{0}$, the acceptance probability of integrating
the RW data is $\pr(T<c_{\gamma})=1-\gamma$. Therefore, for relatively
large sample sizes, we will accept good-quality RW data with probability
$1-\gamma$ and reject good-quality RW data with type I error $\gamma$.
On the other hand, under ${\rm H}_{a,n}$, if $\eta$ is small, the
bias due to accepting the RW data is small compared to the increased
variance due to rejecting the RW data, and hence small $\gamma$ is
desirable; while if $\eta$ is large, the reverse is true and hence
large $\gamma$ is desirable.
To formally investigate the tradeoff, we characterize the asymptotic
distributions of the elastic integrative estimator $\widehat{\psi}_{\mathrm{elas}}$
under the null, fixed, and local alternatives. We do not discuss the
trivial cases when $\gamma=0$ and $1$, corresponding to $\widehat{\psi}_{\mathrm{elas}}=\widehat{\psi}_{\mathrm{eff}}$
and $\widehat{\psi}_{\mathrm{rt}}$, respectively. With $\gamma\in(0,1),$ $\widehat{\psi}_{\mathrm{elas}}$
mixes two distributions, namely, $\widehat{\psi}_{\mathrm{rt}}\mid(T\geq c_{\gamma})$
and $\widehat{\psi}_{\mathrm{eff}}\mid(T<c_{\gamma})$. Each distribution
is non-standard because the estimators and test are constructed based
on the same data and therefore may be asymptotically dependent.
To characterize those non-standard distributions, we decompose this
task into three steps. First, by the standard asymptotic theory, it
follows that $T\stackrel{\cdot}{\sim}\cZ_{1}^{\mathrm{\scriptscriptstyle T}}\cZ_{1}$, where
$\cZ_{1}$ is a standard $p$-variate normal random vector, $n^{1/2}(\widehat{\psi}_{\mathrm{rt}}-\psi_{0})\stackrel{\cdot}{\sim}\mathcal{N}_{\mathrm{rt}},$
and $n^{1/2}(\widehat{\psi}_{\mathrm{eff}}-\psi_{0})\stackrel{\cdot}{\sim}\mathcal{N}_{\mathrm{eff}},$
where $\mathcal{N}_{\mathrm{rt}}$ and $\mathcal{N}_{\mathrm{eff}}$ are
some $p$-variate normal random vectors with variances $V_{\mathrm{rt}}=(\rho{\mathcal{I}}_{\mathrm{rt}})^{-1}$
and $V_{\mathrm{eff}}=(\rho{\mathcal{I}}_{\mathrm{rt}}+{\mathcal{I}}_{\mathrm{rw}})^{-1}$, respectively.
Second, we find another standard $p$-variate normal random vector
$\cZ_{2}$ that is independent of $\cZ_{1}$, and decompose the normal
distributions $\mathcal{N}_{\mathrm{rt}}$ and $\mathcal{N}_{\mathrm{eff}}$
into two orthogonal components: i) one corresponds to $\cZ_{1}$ and
ii) the other one corresponds to $\cZ_{2}$. Importantly, component
i) would be affected by the test constraints induced by $\cZ_{1}^{\mathrm{\scriptscriptstyle T}}\cZ_{1}$,
but component ii) would not be affected. For $\mathcal{N}_{\mathrm{eff}},$
we show that it is fully represented by $\cZ_{2}$ as $\mathcal{N}_{\mathrm{eff}}=-V_{\mathrm{eff}}^{1/2}\cZ_{2}.$
Therefore, its distribution is not affected by $\cZ_{1}^{\mathrm{\scriptscriptstyle T}}\cZ_{1}<c_{\gamma}$;
that is,
\[
\mathcal{N}_{\mathrm{eff}}\mid(\cZ_{1}^{\mathrm{\scriptscriptstyle T}}\cZ_{1}<c_{\gamma})\sim-V_{\mathrm{eff}}^{1/2}\cZ_{2}.
\]
For $\mathcal{N}_{\mathrm{rt}}$, we show that $\mathcal{N}_{\mathrm{rt}}=V_{\textnormal{rt-eff}}^{1/2}\cZ_{1}-V_{\mathrm{eff}}^{1/2}\cZ_{2}$
with $V_{\textnormal{rt-eff}}=V_{\mathrm{rt}}-V_{\mathrm{eff}}$. Due to the independence between
$\cZ_{1}$ and $\cZ_{2}$, $\mathcal{N}_{\mathrm{rt}}\mid(\cZ_{1}^{\mathrm{\scriptscriptstyle T}}\cZ_{1}\geq c_{\gamma})$
is a mixture distribution
\[
\mathcal{N}_{\mathrm{rt}}\mid(\cZ_{1}^{\mathrm{\scriptscriptstyle T}}\cZ_{1}\geq c_{\gamma})\sim V_{\textnormal{rt-eff}}^{1/2}\cZ_{c_{\gamma}}^{\mathrm{t}}-V_{\mathrm{eff}}^{1/2}\cZ_{2},
\]
which mixes a non-normal component, where $\cZ_{c}^{\mathrm{t}}$ denotes
the truncated normal distribution $\cZ_{1}\mid(\cZ_{1}^{\mathrm{\scriptscriptstyle T}}\cZ_{1}\geq c)$,
and a normal component. For illustration, Figure \ref{fig:representation}
demonstrates the geometry of the decomposition of distributions with
scalar variables.
Third, we formally characterize the distribution of $\cZ_{c}^{\mathrm{t}}$,
a multivariate normal distribution with ellipsoid truncation \citep{tallis1963elliptical}.
This step enables us to quantify the asymptotic bias and variance
of the proposed estimator; see Section \ref{subsec:Asymptotic-bias-and}.
\begin{figure}
\centering{}\includegraphics[scale=0.4]{representation}
\begin{itemize}
\item {\scriptsize{}$\mathcal{N}_{\mathrm{rt}}=V_{\textnormal{rt-eff}}^{1/2}\cZ_{1}-V_{\mathrm{eff}}^{1/2}\cZ_{2}$
and $\mathcal{N}_{\mathrm{rt}}\mid(\cZ_{1}^{\mathrm{\scriptscriptstyle T}}\cZ_{1}\geq c_{\gamma})\sim V_{\textnormal{rt-eff}}^{1/2}\cZ_{1}\mid(\cZ_{1}^{\mathrm{\scriptscriptstyle T}}\cZ_{1}\geq c_{\gamma})-V_{\mathrm{eff}}^{1/2}\cZ_{2}$ }{\scriptsize\par}
\item {\scriptsize{}$\mathcal{N}_{\mathrm{eff}}=-V_{\mathrm{eff}}^{1/2}\cZ_{2}$ and $\mathcal{N}_{\mathrm{eff}}\mid(\cZ_{1}^{\mathrm{\scriptscriptstyle T}}\cZ_{1}<c_{\gamma})\sim\mathcal{N}_{\mathrm{eff}}$}{\scriptsize\par}
\end{itemize}
\caption{\label{fig:representation}Representation of the normal distributions
$\mathcal{N}_{\mathrm{rt}}$ and $\mathcal{N}_{\mathrm{eff}}$ based on $\cZ_{1}$
and $\cZ_{2}$ with $p=1$}
\end{figure}
Let $F_{p}(\cdot)$ be the cumulative distribution function (CDF)
of a $\chi_{p}^{2}$ random variable, and $F_{p}(\cdot;\lambda)$
be the CDF of a $\chi_{p}^{2}(\lambda)$ random variable, where $\chi_{p}^{2}$
and $\chi_{p}^{2}(\lambda)$ are the central Chi-square distribution
and the non-central Chi-square distribution with the non-centrality
parameter $\lambda$, respectively. Theorem \ref{Thm:elas} summarizes
the asymptotic distribution of $\widehat{\psi}_{\mathrm{elas}}$.
\begin{theorem}\label{Thm:elas} Suppose assumptions in Theorem \ref{Thm:Consistency-DML}
hold except that Assumption \ref{Asump:rand-rwd} may be violated.
Let $\cZ_{1}$ and $\cZ_{2}$ be independent normal random vectors
with mean $\mu_{1}=\Sigma_{SS}^{-1/2}\eta$ and $\mu_{2}=V_{\mathrm{eff}}^{1/2}\eta$,
respectively, and covariance $I_{p\times p}$. Let $\cZ_{c}^{\mathrm{t}}$
be the truncated normal distribution $\cZ_{1}\mid(\cZ_{1}^{\mathrm{\scriptscriptstyle T}}\cZ_{1}\geq c)$.
Let the elastic integrative estimator $\widehat{\psi}_{\mathrm{elas}}$ be
obtained by solving (\ref{eq:EE}). Then, $n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$
has a limiting mixture distribution
\begin{equation}
\mathcal{M}(\gamma;\eta)=\begin{cases}
\mathcal{M}_{1}(\gamma;\eta)=V_{\textnormal{rt-eff}}^{1/2}\cZ_{c_{\gamma}}^{\mathrm{t}}-V_{\mathrm{eff}}^{1/2}\cZ_{2}, & \text{w.p. }\xi,\\
\mathcal{M}_{2}(\eta)=-V_{\mathrm{eff}}^{1/2}\cZ_{2}, & \text{w.p. }1-\xi,
\end{cases}\label{eq:MixC1}
\end{equation}
where the mixing probability $\xi$ and the means $(\mu_{1},\mu_{2})$ specialize as follows.
\begin{enumerate}
\item Under ${\rm H}_{0},$ $\mu_{1}=\mu_{2}=0$ and $\xi=1-F_{p}(c_{\gamma})=\gamma$.
\item Under ${\rm H}_{a},$ $\mu_{1}=\mu_{2}=\pm\infty$ and $\xi=1$; i.e.,
(\ref{eq:MixC1}) reduces to a normal distribution with mean $0$
and variance $V_{\mathrm{rt}}$.
\item Under ${\rm H}_{a,n},$ $\mu_{1}=\Sigma_{SS}^{-1/2}\eta$ , $\mu_{2}=V_{\mathrm{eff}}^{1/2}\eta$
with $\eta\in\mathbb{R}^{p}$, and $\xi=1-F_{p}(c_{\gamma};\lambda)$, where
$\lambda=\eta^{\mathrm{\scriptscriptstyle T}}\Sigma_{SS}^{-1}\eta$.
\end{enumerate}
\end{theorem}
In Theorem \ref{Thm:elas}, $\mathcal{M}(\gamma;\eta)$ in (\ref{eq:MixC1}) is
a general characterization of the asymptotic distribution of $n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$.
It implies different asymptotic behaviors of $n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$
depending on whether Assumption \ref{Asump:rand-rwd} is strongly,
weakly or not violated. First, ${\rm H}_{a}$ corresponds to the situation
where Assumption \ref{Asump:rand-rwd} is strongly violated. Under
${\rm H}_{a}$, $T$ rejects the RW data (i.e., $\cZ_{1}^{\mathrm{\scriptscriptstyle T}}\cZ_{1}\geq c_{\gamma}$
holds) with probability converging to one, $\cZ_{c_{\gamma}}^{\mathrm{t}}$
becomes $\cZ_{1}$, and $\mathcal{M}(\gamma;\eta=\pm\infty)$ becomes
$V_{\textnormal{rt-eff}}^{1/2}\cZ_{1}-V_{\mathrm{eff}}^{1/2}\cZ_{2}$, a normal distribution
with mean $0$ and variance $V_{\mathrm{rt}}$. As expected, under ${\rm H}_{a}$,
$n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$ is asymptotically normal
and regular. Second, ${\rm H}_{0}$ and ${\rm H}_{a,n}$ correspond
to the situations when Assumption \ref{Asump:rand-rwd} is not violated
and weakly violated, respectively. Under ${\rm H}_{0}$ and ${\rm H}_{a,n}$,
$T$ has positive probabilities of accepting and rejecting the RW
data, $\widehat{\psi}_{\mathrm{elas}}$ switches between $\widehat{\psi}_{\mathrm{eff}}$
and $\widehat{\psi}_{\mathrm{rt}}$, and $n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$
follows the limiting mixture distribution $\mathcal{M}(\gamma;\eta)$,
indexed by $\eta$. Although the exact form of $\mathcal{M}(\gamma;\eta)$
is complicated, the entire distribution and summary statistics such
as mean, variance and quantiles can be simulated by rejective sampling;
see Section \ref{subsec:rej}. Importantly, under ${\rm H}_{0}$ and
${\rm H}_{a,n}$, $n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$ is non-normal
and non-regular. The non-regularity is determined by the local parameter
$\eta$, which entails that the asymptotic distribution of $n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$
may change abruptly when ${\rm H}_{0}$ is slightly violated. It is
worth emphasizing that the local asymptotics provide a better approximation
of the finite-sample properties of the test and estimators
than the fixed asymptotics do.
\subsection{Asymptotic bias and MSE \label{subsec:Asymptotic-bias-and}}
Based on Theorem \ref{Thm:elas}, it is important to understand the
asymptotic behaviors of $\cZ_{c}^{\mathrm{t}}$ and the truncated multivariate
normal distribution in general. Toward that end, we derive the moment
generating functions (MGFs) of such distributions in the supplementary
material, which shed light on the moments of $n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$.
Corollary \ref{cor:var} provides the analytical formula of the asymptotic
bias and MSE of $n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$.
\begin{corollary}\label{cor:var}
Suppose assumptions in Theorem \ref{Thm:Consistency-DML} hold except
that Assumption \ref{Asump:rand-rwd} may be violated.
\begin{enumerate}
\item Under ${\rm H}_{0},$ the bias and MSE of $n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$
are $\textnormal{bias}=0$ and $\textnormal{mse}=V_{\mathrm{eff}}+V_{\textnormal{rt-eff}}\{1+F_{p}(c_{\gamma})-2F_{p+2}(c_{\gamma})\}.$
\item Under ${\rm H}_{a},$ the bias and MSE of $n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$
are $\textnormal{bias}=0$ and $\textnormal{mse}=V_{\mathrm{rt}}.$
\item Under ${\rm H}_{a,n},$ the bias and MSE of $n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$
are
\begin{equation}
\textnormal{bias}(\gamma,\eta)=V_{\mathrm{eff}}\eta\{F_{p}(c_{\gamma};\lambda)-2F_{p+2}(c_{\gamma};\lambda)\},\label{eq:bias2}
\end{equation}
and
\begin{eqnarray}
\textnormal{mse}(\gamma,\eta) & = & V_{\mathrm{eff}}+V_{\textnormal{rt-eff}}\{1+F_{p}(c_{\gamma};\lambda)-2F_{p+2}(c_{\gamma};\lambda)\}\label{eq:mse2}\\
& & +\eta V_{\mathrm{eff}}^{\otimes2}\eta^{\mathrm{\scriptscriptstyle T}}\{F_{p}(c_{\gamma};\lambda)-2F_{p+2}(c_{\gamma};\lambda)\}^{2}\nonumber \\
& & +\eta V_{\mathrm{eff}}^{\otimes2}\eta^{\mathrm{\scriptscriptstyle T}}\left\{ 1-F_{p}(c_{\gamma};\lambda)+4F_{p+2}(c_{\gamma};\lambda)-4F_{p+4}(c_{\gamma};\lambda)\right\} \nonumber \\
& & -\eta V_{\mathrm{eff}}^{\otimes2}\eta^{\mathrm{\scriptscriptstyle T}}\frac{\left\{ 1+F_{p}(c_{\gamma};\lambda)-2F_{p+2}(c_{\gamma};\lambda)\right\} ^{2}}{1-F_{p}(c_{\gamma};\lambda)},\nonumber
\end{eqnarray}
with $\lambda=\eta^{\mathrm{\scriptscriptstyle T}}\Sigma_{SS}^{-1}\eta$.
\end{enumerate}
\end{corollary}
Corollary \ref{cor:var} enables us to demonstrate the potential advantages
and disadvantages of $\widehat{\psi}_{\mathrm{elas}}$ compared with $\widehat{\psi}_{\mathrm{rt}}$
and $\widehat{\psi}_{\mathrm{eff}}$ under different scenarios.
\begin{enumerate}
\item Under ${\rm H}_{0}$, because $1+F_{p}(c_{\gamma})-2F_{p+2}(c_{\gamma})\leq1$
and hence $\mathbb{V}_{a}\{n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})\}\leq V_{\textnormal{rt-eff}}+V_{\mathrm{eff}}=V_{\mathrm{rt}},$
$\widehat{\psi}_{\mathrm{elas}}$ gains efficiency over $\widehat{\psi}_{\mathrm{rt}}$,
and the equality holds if and only if $\gamma=1$ (i.e., $c_{\gamma}=0$).
Moreover, because $1+F_{p}(c_{\gamma})-2F_{p+2}(c_{\gamma})\geq1-F_{p+2}(c_{\gamma})\geq0$ and hence
$\mathbb{V}_{a}\{n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})\}\geq V_{\mathrm{eff}},$
$\widehat{\psi}_{\mathrm{elas}}$ loses efficiency compared with $\widehat{\psi}_{\mathrm{eff}}$,
and the equality holds if and only if $\gamma=0$ (i.e., $c_{\gamma}=\infty$).
\item Under ${\rm H}_{a}$, $\widehat{\psi}_{\mathrm{elas}}$ reduces to $\widehat{\psi}_{\mathrm{rt}}$
in large samples, correspondingly $\mathbb{V}_{a}\{n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})\}=V_{\mathrm{rt}}$.
\item Under ${\rm H}_{a,n}$, for a given $\gamma\in(0,1)$, the bias and
MSE of $\widehat{\psi}_{\mathrm{elas}}$ increase with the local parameter
$\eta.$ If $\gamma=0$ (implying $c_{\gamma}=\infty$ and $F_{k}(c_{\gamma};\lambda)=1$
for any integer $k$), $\widehat{\psi}_{\mathrm{elas}}$ becomes $\widehat{\psi}_{\mathrm{eff}}$
in large samples, and hence the asymptotic bias and MSE of $\widehat{\psi}_{\mathrm{elas}}$
are $-V_{\mathrm{eff}}\eta$ and $V_{\mathrm{eff}}+\eta V_{\mathrm{eff}}^{\otimes2}\eta^{\mathrm{\scriptscriptstyle T}}$,
which can be obtained from $\textnormal{bias}(0,\eta)$ in (\ref{eq:bias2}) and $\textnormal{mse}(0,\eta)$
in (\ref{eq:mse2}). If $\gamma=1$ (implying $c_{\gamma}=0$ and
$F_{k}(c_{\gamma};\lambda)=0$ for any integer $k$), $\widehat{\psi}_{\mathrm{elas}}$
becomes $\widehat{\psi}_{\mathrm{rt}}$ in large samples, and hence the asymptotic
bias and MSE of $\widehat{\psi}_{\mathrm{elas}}$ are $0$ and $V_{\mathrm{rt}}$,
which can be obtained from $\textnormal{bias}(1,\eta)$ in (\ref{eq:bias2}) and $\textnormal{mse}(1,\eta)$
in (\ref{eq:mse2}). This observation motivates our adaptive selection
of $\gamma$ in Section \ref{subsec:Adaptive-selection} in order
to produce an elastic integrative estimator that has small bias and
mean squared error for any plausible value of $\eta$.
\end{enumerate}
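
For reference, the bias and MSE formulas in Corollary \ref{cor:var}(3)
are straightforward to evaluate numerically. A Python sketch for scalar
$\psi_{0}$ ($p=1$ and $\gamma\in(0,1)$; all inputs are hypothetical),
using the central and non-central Chi-square CDFs:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2, ncx2

def F(c, df, lam):
    return ncx2.cdf(c, df, lam) if lam > 0 else chi2.cdf(c, df)

def bias_mse(gamma, eta, V_eff, V_rt, Sigma_SS, p=1):
    """Asymptotic bias and MSE of Corollary 1(3) for p = 1."""
    c = chi2.ppf(1 - gamma, p)
    lam = eta ** 2 / Sigma_SS
    k = F(c, p, lam) - 2 * F(c, p + 2, lam)
    h2 = (eta * V_eff) ** 2          # eta V_eff^{otimes 2} eta'
    bias = V_eff * eta * k
    mse = (V_eff + (V_rt - V_eff) * (1 + k) + h2 * k ** 2
           + h2 * (1 - F(c, p, lam) + 4 * F(c, p + 2, lam)
                   - 4 * F(c, p + 4, lam))
           - h2 * (1 + k) ** 2 / (1 - F(c, p, lam)))
    return bias, mse
\end{verbatim}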
\subsection{Illustration of the asymptotic distributions by rejective sampling\label{subsec:rej}}
\begin{figure}
\subfloat[\label{fig:Pink-object}Mixing components.]{\centering{}\includegraphics[scale=0.4]{mixture}
}\subfloat[\label{fig:A-star}Mixture and normal distributions.]{\begin{centering}
\includegraphics[scale=0.4]{mixVSnorm}
\par\end{centering}
\centering{}
}
\subfloat[\label{fig:Pink-object-1-1}Mixture distributions with $\rho$ (rho).]{\centering{}\includegraphics[scale=0.4]{mixturerho}
}\subfloat[\label{fig:A-star-1-1}Mixture distributions with $\gamma$ (gamma)]{\centering{}\includegraphics[scale=0.4]{mixture80to99}
}
\caption{\label{fig:mix}Illustration of the limiting distributions under ${\rm H}_{0}$.
In both Panels (a) and (b), Mixture is $\mathcal{M}(\gamma;\eta=0)$.
In Panel (a): Mixture.C1 is the mixing component $-V_{\textnormal{rt-eff}}^{1/2}\cZ_{c_{\gamma}}^{\mathrm{t}}+V_{\mathrm{eff}}^{1/2}\cZ_{2}$,
and Mixture.C2 is the mixing component $V_{\mathrm{eff}}^{1/2}\cZ_{2}$. In
Panel (b) N\_eff is $V_{\mathrm{eff}}^{1/2}\cZ_{2}$, and N\_rt is $V_{\mathrm{rt}}^{1/2}\cZ_{1}$.
The value of $\gamma$ is $0.8$. Panel (c) shows the mixture distributions
with $\gamma=0.8$ and $\rho\in\{0.1,0.33,0.5,1\}$. Panel (d) shows
the mixture distributions with $\rho=1$ and $\gamma\in\{0.10,0.25,0.50,0.90\}$.}
\end{figure}
\begin{figure}
\subfloat[\label{fig:Pink-object-1}Mixing components.]{\centering{}\includegraphics[scale=0.4]{mixturev2}
}\subfloat[\label{fig:A-star-1}Mixture and normal distributions.]{\begin{centering}
\includegraphics[scale=0.4]{mixVSnormv2}
\par\end{centering}
\centering{}
}
\subfloat[\label{fig:Pink-object-1-1-1}Mixture distributions with $\rho$ (rho).]{\centering{}\includegraphics[scale=0.4]{mixturerhov2}
}\subfloat[\label{fig:A-star-1-1-1}Mixture distributions with $\gamma$ (gamma)]{\centering{}\includegraphics[scale=0.4]{mixture80to99v2}
}
\caption{\label{fig:mix-1}Illustration of the limiting distributions under
${\rm H}_{a,n}$. In both Panels (a) and (b), Mixture is $\mathcal{M}(\gamma;\eta=1)$.
In Panel (a): Mixture.C1 is the mixing component $-V_{\textnormal{rt-eff}}^{1/2}\cZ_{c_{\gamma}}^{\mathrm{t}}+V_{\mathrm{eff}}^{1/2}\cZ_{2}$,
and Mixture.C2 is the mixing component $V_{\mathrm{eff}}^{1/2}\cZ_{2}$. In
Panel (b) N\_eff is $V_{\mathrm{eff}}^{1/2}\cZ_{2}$, and N\_rt is $V_{\mathrm{rt}}^{1/2}\cZ_{1}$.
The value of $\gamma$ is $0.8$. Panel (c) shows the mixture distributions
with $\gamma=0.8$ and $\rho\in\{0.1,0.33,0.5,1\}$. Panel (d) shows
the mixture distributions with $\rho=1$ and $\gamma\in\{0.10,0.25,0.50,0.90\}$.}
\end{figure}
We can use rejective sampling to simulate the asymptotic distribution
$\mathcal{M}(\gamma;\eta)$ of $n^{1/2}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$.
For illustration, consider scalar random variables $\cZ_{1}$ and
$\cZ_{2}$ with $p=1,$ varying values of $\eta$, and ${\mathcal{I}}_{\mathrm{rw}}={\mathcal{I}}_{\mathrm{rt}}=1$.
We simulate $\cZ_{c_{\gamma}}^{\mathrm{t}}$ by rejective sampling: draw
values $z^{*}$ from the standard normal distribution and only retain
the values that satisfy $z^{*2}\geq c_{\gamma}$.
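
A minimal Python sketch of this scheme in the same toy setting ($p=1$,
${\mathcal{I}}_{\mathrm{rt}}={\mathcal{I}}_{\mathrm{rw}}=1$; all names
are hypothetical). Conditionally on rejection, the retained draws of
$\cZ_{1}$ follow the truncated normal $\cZ_{c_{\gamma}}^{\mathrm{t}}$,
so the returned array follows the mixture in (\ref{eq:MixC1}):
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def sample_mixture(gamma, eta, rho=1.0, size=100000, seed=0):
    """Draws from M(gamma; eta) with p = 1 and I_rt = I_rw = 1."""
    rng = np.random.default_rng(seed)
    V_rt, V_eff = 1.0 / rho, 1.0 / (rho + 1.0)
    Sigma_SS = 1.0 / rho + 1.0     # Gamma' I_rt Gamma + I_rw
    c = chi2.ppf(1 - gamma, 1)
    Z1 = rng.normal(eta / np.sqrt(Sigma_SS), 1.0, size)
    Z2 = rng.normal(np.sqrt(V_eff) * eta, 1.0, size)
    reject = Z1 ** 2 >= c          # the test rejects the RW data
    return np.where(reject,
                    np.sqrt(V_rt - V_eff) * Z1 - np.sqrt(V_eff) * Z2,
                    -np.sqrt(V_eff) * Z2)
\end{verbatim}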
Figure \ref{fig:mix} displays the limiting distributions with $\eta=0$
corresponding to ${\rm H}_{0}$. Figure \ref{fig:mix} (a) illustrates
the mixture distribution $\mathcal{M}(\gamma;0)$ and its two mixing
components. Mixture.C1 is the mixing component $-V_{\textnormal{rt-eff}}^{1/2}\cZ_{c_{\gamma}}^{\mathrm{t}}+V_{\mathrm{eff}}^{1/2}\cZ_{2}$,
which centers at zero and has two modes. Mixture.C2 is the mixing
component $V_{\mathrm{eff}}^{1/2}\cZ_{2}$, which centers at zero and is normally
distributed. The mixture distribution mixes the two components; it
has lighter tails than Mixture.C1 but heavier tails than Mixture.C2.
Figure \ref{fig:mix} (b) compares $\mathcal{M}(\gamma;0)$ with $\mathcal{N}_{\mathrm{eff}}$
and $\mathcal{N}_{\mathrm{rt}}$. It is clear that the variances decrease
from $\mathcal{N}_{\mathrm{rt}}$ to $\mathcal{M}(\gamma;0)$ and then to
$\mathcal{N}_{\mathrm{eff}}$. Moreover, the mixture distribution depends
on factors other than $V_{\mathrm{eff}}$ and $V_{\mathrm{rt}}$, including $\rho$
and $\gamma$. Figure \ref{fig:mix} (c) illustrates the mixture distribution
with $\gamma=0.8$ and $\rho\in\{0.1,0.33,0.5,1\}$. As discussed
before, $\rho$ is the relative sample size of the RT data compared
with the RW data. As $\rho$ increases, the mixture distribution becomes
narrower. Figure \ref{fig:mix} (d) illustrates the mixture distribution
with $\rho=1$ and $\gamma\in\{0.10,0.25,0.50,0.90\}$. As $\gamma$
decreases, the mixture distribution becomes narrower. In this situation,
a small value of $\gamma$ is desirable.
Figure \ref{fig:mix-1} displays the limiting distributions with $\eta=1$
corresponding to ${\rm H}_{a,n}$. Figure \ref{fig:mix-1} (a) illustrates
the mixture distribution $\mathcal{M}(\gamma;1)$ and its two mixing
components. Mixture.C1 and Mixture.C2 center at non-zero values. The
mixture distribution mixes the two components and is non-normal. From
Figure \ref{fig:mix-1} (b), $\mathcal{N}_{\mathrm{rt}}$ is unbiased and
$\mathcal{N}_{\mathrm{eff}}$ is biased, and $\mathcal{M}(\gamma;1)$ reduces
the bias of $\mathcal{N}_{\mathrm{eff}}$. Figure \ref{fig:mix-1} (c) illustrates
the mixture distribution with $\gamma=0.8$ and $\rho\in\{0.1,0.33,0.5,1\}$.
As $\rho$ increases, the mixture distribution becomes narrower but
bias increases. Figure \ref{fig:mix-1} (d) illustrates the mixture
distribution with $\rho=1$ and $\gamma\in\{0.10,0.25,0.50,0.90\}$.
As $\gamma$ decreases, the mixture distribution becomes narrower
but is more biased. In this situation, a large value of $\gamma$
is desirable.
\subsection{Inference}
The nonparametric bootstrap has been shown to be successful
in many situations. However, it requires that the given estimator
be smooth. This requirement prevents the use of the nonparametric bootstrap
for $\widehat{\psi}_{\mathrm{elas}}$, because the indicator function
in (\ref{eq:EE}) renders $\widehat{\psi}_{\mathrm{elas}}$ a non-smooth estimator
\citep{shao1994bootstrap}. We formally show in the supplementary
material the inconsistency of the nonparametric bootstrap for $\widehat{\psi}_{\mathrm{elas}}$.
Alternatively, \citet{laber2011adaptive} proposed an adaptive confidence
interval for the test error in classification, a non-regular statistic,
by bootstrapping the upper and lower bounds of the test error. Similar
to \citet{laber2011adaptive}, we propose an adaptive procedure for
robust inference of $\psi_{0}$ accommodating the strength of violation
of Assumption \ref{Asump:rand-rwd} in finite samples.
Let $e_{k}$ be a $p$-vector of zeros except that the $k$th component
is one, and let $e_{k}^{\mathrm{\scriptscriptstyle T}}\psi_{0}$ be the $k$th component of $\psi_{0}$,
for $k=1,\ldots,p$. Because the asymptotic distribution of $n^{1/2}e_{k}^{\mathrm{\scriptscriptstyle T}}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$
differs under the local and fixed alternatives, we propose different
strategies for constructing CIs: under ${\rm H}_{a,n}$, the asymptotics
are non-standard, and we construct a least favorable CI that guarantees
good coverage properties uniformly over possible values of the local
parameter $\eta$; under ${\rm H}_{a}$, the asymptotics are standard,
and we construct the usual Wald CI based on the normal limiting distribution.
First, under ${\rm H}_{a,n}$, Assumption \ref{Asump:rand-rwd} is
weakly violated and the strength of violation in finite samples is
determined by the local parameter $\eta$. For a fixed $\eta,$ let
$\widehat{Q}_{k,\alpha}(\eta)$ be the approximated $\alpha$-th quantile
of $\mathcal{M}(\gamma;\eta)$, which can be obtained by rejective
sampling. We can then construct a $(1-\alpha)$ confidence interval of
$n^{1/2}e_{k}^{\mathrm{\scriptscriptstyle T}}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$ as $[\widehat{Q}_{k,\alpha/2}(\eta),\widehat{Q}_{k,1-\alpha/2}(\eta)]$.
Different CIs are required for different values of $\eta$. To accommodate
different possible values of $\eta$, one solution is to construct
the least favorable CI by taking the infimum of the lower bound of
the CI $\widehat{Q}_{k,\alpha/2}(\eta)$ and the supremum of the upper
bound of the CI $\widehat{Q}_{k,1-\alpha/2}(\eta)$ over all possible
values of $\eta$. However, the range of $\eta$ can be very wide,
rendering the least favorable CI non-informative. We identify the
plausible values of $\eta$ as following a multivariate normal distribution
with mean $n^{-1/2}\sum_{i\in\mathcal{B}}\widehat{S}_{\mathrm{rw},\widehat{\psi}_{\mathrm{rt}}}(V_{i})$
and variance $\widehat{\Sigma}_{SS}$. Let $\widetilde{\alpha}=1-(1-\alpha)^{1/2}$,
such that $(1-\widetilde{\alpha})^{2}=1-\alpha$, and let $\mathcal{B}_{1-\widetilde{\alpha}}$
be a bounded region of $\eta$ with probability $1-\widetilde{\alpha}$.
We construct the $(1-\alpha)$ least favorable CI for $n^{1/2}e_{k}^{\mathrm{\scriptscriptstyle T}}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$
as $[\inf_{\eta\in\mathcal{B}_{1-\widetilde{\alpha}}}\widehat{Q}_{k,\widetilde{\alpha}/2}(\eta),\sup_{\eta\in\mathcal{B}_{1-\widetilde{\alpha}}}\widehat{Q}_{k,1-\widetilde{\alpha}/2}(\eta)]$.
Here, using the wider $(1-\widetilde{\alpha})$ quantile range of
$\widehat{Q}_{k}(\eta)$ instead of the $(1-\alpha)$ quantile range
is necessary to guarantee the coverage of $(1-\alpha)$ due to ignoring
other possible values of $\eta$ outside $\mathcal{B}_{1-\widetilde{\alpha}}$.
Second, under ${\rm H}_{a},$ Assumption \ref{Asump:rand-rwd} is
strongly violated. As shown in Theorem \ref{Thm:elas}, $n^{1/2}e_{k}^{\mathrm{\scriptscriptstyle T}}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$
is regular and asymptotically normal, denoted by $\mathcal{M}(\gamma;\pm\infty)$.
Therefore, a $(1-\alpha)$ confidence interval of $n^{1/2}e_{k}^{\mathrm{\scriptscriptstyle T}}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$
can be constructed based on the $\alpha/2$- and $(1-\alpha/2)$-th
quantiles of the normal distribution $\mathcal{M}(\gamma;\pm\infty)$,
denoted by $[\widehat{Q}_{k,\alpha/2}(\pm\infty),\widehat{Q}_{k,1-\alpha/2}(\pm\infty)]$.
Finally, because the least favorable CI may be unnecessarily wide
under ${\rm H}_{a}$, we require a strategy to distinguish between
${\rm H}_{a,n}$ corresponding to finite values of $\eta$ and ${\rm H}_{a}$
corresponding to $\eta=\pm\infty$. To do this, we use the test statistic
$T$. Under ${\rm H}_{a,n}$, $T=O_{\pr}(1)$, while under ${\rm H}_{a},$
$T$ diverges to infinity. Therefore, we specify a sequence of thresholds $\{\kappa_{n}:n\geq1\}$
that diverges to infinity as $n\rightarrow\infty$ and compare $T$
to $\kappa_{n}$. Many choices of $\kappa_{n}$ can be considered,
e.g., $\kappa_{n}=\{\log(n)\}^{1/2}$, which is similar to the BIC
criterion \citep{cheng2008robust,andrews2010inference}. If $T\leq\kappa_{n}$,
we choose the local alternative strategy to construct the least favorable
CI, and if $T>\kappa_{n}$, we choose the fixed alternative strategy
to construct a normal CI, leading to an elastic CI
\begin{equation}
{\rm ECI}_{k,1-\alpha}=\begin{cases}
[\inf_{\eta\in\mathcal{B}_{1-\widetilde{\alpha}}}\widehat{Q}_{k,\widetilde{\alpha}/2}(\eta),\sup_{\eta\in\mathcal{B}_{1-\widetilde{\alpha}}}\widehat{Q}_{k,1-\widetilde{\alpha}/2}(\eta)], & \text{if }T\leq\kappa_{n},\\{}
[\widehat{Q}_{k,\alpha/2}(\pm\infty),\widehat{Q}_{k,1-\alpha/2}(\pm\infty)], & \text{if }T>\kappa_{n}.
\end{cases}\label{eq:elastCI}
\end{equation}
\begin{theorem}\label{Thm:elas-1} Suppose assumptions in Theorem
\ref{Thm:Consistency-DML} hold except that Assumption \ref{Asump:rand-rwd}
may be violated. The asymptotic coverage rate of the elastic CI of
$n^{1/2}e_{k}^{\mathrm{\scriptscriptstyle T}}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})$ in (\ref{eq:elastCI})
satisfies
\begin{eqnarray*}
& & \lim_{n\rightarrow\infty}\pr\left\{ n^{1/2}e_{k}^{\mathrm{\scriptscriptstyle T}}(\widehat{\psi}_{\mathrm{elas}}-\psi_{0})\in{\rm ECI}_{k,1-\alpha}\right\} \geq1-\alpha,
\end{eqnarray*}
and the equality holds under ${\rm H}_{a}$.
\end{theorem}
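
As a schematic illustration of (\ref{eq:elastCI}) for $p=1$, the
sketch below reuses the hypothetical sample_mixture helper from
Section \ref{subsec:rej}; the plausible-$\eta$ region is the normal
interval described above, and all inputs are assumptions of the sketch.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def elastic_ci(T, n, gamma, eta_hat, se_eta, rho=1.0, alpha=0.05):
    if T > np.sqrt(np.log(n)):            # fixed-alternative branch
        q = norm.ppf(1 - alpha / 2) / np.sqrt(rho)   # sqrt(V_rt)
        return -q, q
    a = 1 - np.sqrt(1 - alpha)            # tilde alpha
    half = norm.ppf(1 - a / 2) * se_eta   # bounded region for eta
    etas = np.linspace(eta_hat - half, eta_hat + half, 21)
    lo = min(np.quantile(sample_mixture(gamma, e, rho), a / 2)
             for e in etas)
    hi = max(np.quantile(sample_mixture(gamma, e, rho), 1 - a / 2)
             for e in etas)
    return lo, hi
\end{verbatim}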
\subsection{Adaptive selection of $\gamma$\label{subsec:Adaptive-selection}}
The selection of $\gamma$ involves the bias-variance tradeoff and
is therefore important in determining the MSE of $\widehat{\psi}_{\mathrm{elas}}$.
Corollary \ref{cor:var} indicates that under ${\rm H}_{a,n}$, the
MSE of $\widehat{\psi}_{\mathrm{elas}}$ in (\ref{eq:mse2}) involves two
terms: Term 1 is $V_{\mathrm{eff}}+V_{\textnormal{rt-eff}}\{1+F_{p}(c_{\gamma};\lambda)-2F_{p+2}(c_{\gamma};\lambda)\}$,
and the remaining Term 2 collects the terms involving $\eta V_{\mathrm{eff}}^{\otimes2}\eta^{\mathrm{\scriptscriptstyle T}}$.
If $\eta$ is small, the MSE is dominated by Term 1, which can be
made small if we select a small $\gamma$; while if $\eta$ is large,
the MSE is dominated by Term 2, which can be made small if we select
a large $\gamma.$
The above observation motivates an adaptive selection of $\gamma.$
We propose to estimate $\eta$ by $\widehat{\eta}=n^{-1/2}\sum_{i\in\mathcal{B}}\widehat{S}_{\mathrm{rw},\widehat{\psi}_{\mathrm{rt}}}(V_{i})$
and select $\gamma$ that minimizes $\textnormal{mse}(\gamma;\widehat{\eta})$,
where $\textnormal{mse}(\gamma;\eta)$ is given by (\ref{eq:mse2}) or approximated
by rejective sampling. In practice, we can specify a grid of values
from $0$ to $1$ for $\gamma$, denoted by $\mathcal{G}$, simulate
the distribution of $\mathcal{M}(\gamma;\widehat{\eta})$ for all
$\gamma\in\mathcal{G}$, and finally choose $\gamma$ to be the one
in $\mathcal{G}$ that minimizes the MSE of $\mathcal{M}(\gamma;\widehat{\eta})$.
As corroborated by simulation, the selection strategy is effective
in the sense that when the signal of violation is weak, the selected
value of $\gamma$ is small and when the signal of violation is strong,
the selected value of $\gamma$ is large.
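
A sketch of this grid search for $p=1$, reusing the hypothetical
bias_mse helper from Section \ref{subsec:Asymptotic-bias-and}:
\begin{verbatim}
import numpy as np

def select_gamma(eta_hat, V_eff, V_rt, Sigma_SS,
                 grid=np.linspace(0.01, 0.99, 99)):
    """Pick the gamma on the grid minimizing mse(gamma; eta_hat)."""
    mses = [bias_mse(g, eta_hat, V_eff, V_rt, Sigma_SS)[1]
            for g in grid]
    return grid[int(np.argmin(mses))]
\end{verbatim}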
\section{Simulation\label{sec:Simulation}}
We evaluate the finite-sample performance of the proposed elastic
estimator via simulation, focusing on its robustness against unmeasured
confounding and the adaptive inference. Specifically, we compare the RT estimator,
the efficient combining estimator, and the elastic estimator under
settings that vary the strength of unmeasured confounding in the RW
data.
We first generate populations of size $10^{5}$. For each population,
we generate the covariate $X=(1,X_{1},X_{2},X_{3})^{\mathrm{\scriptscriptstyle T}}$, where $X_{j}\sim\mathrm{Normal }(0,1)$
for $j=1,2,3$, and the treatment effect modifier is $Z=(1,X_{1},X_{2})^{\mathrm{\scriptscriptstyle T}}$.
We generate $Y(a)$ by
\begin{eqnarray}
Y(a)\mid X & = & \mu(X;b)+a\times\tau(Z)+\epsilon(a),\nonumber \\
\mu(X;b) & = & X_{1}+bX_{3},\label{eq:mu}\\
\tau(Z) & = & \psi_{1}+\psi_{2}X_{1}+\psi_{3}X_{2},\nonumber \\
\epsilon(a) & \sim & \mathrm{Normal }(0,1),\nonumber
\end{eqnarray}
for $a=0,1$. Throughout the simulation, the parameter of interest
$\psi=(\psi_{1},\psi_{2},\psi_{3})^{\mathrm{\scriptscriptstyle T}}=(1,1,1)^{\mathrm{\scriptscriptstyle T}}$ is fixed; however,
we vary $b$ to indicate the different strengths of unmeasured confounding
in the analysis (violation of Assumption \ref{Asump:rand-rwd}). We
then generate two samples from the target population. We generate
the RT selection indicator by $\delta\mid X\sim\mathrm{Bernoulli}\{\pi_{\delta}(X)\},$
where $\text{logit}\{\pi_{\delta}(X)\}=-6.5+X_{1}+X_{2}.$ Under this selection
mechanism, the selection rate is around $0.3\%$, which results in
$m\approx300$ RT subjects, similar to the CALGB 9633 trial in the
motivating application. We also take a random sample of size $n=1000$
from the population to form an RW sample. In the RT sample, the treatment
assignment is $A\mid X,\delta=1\sim\mathrm{Bernoulli}\{e_{1}(X)\},$
where $e_{1}(X)=0.5$. In the RW sample, $A\mid X,\delta=0\sim\mathrm{Bernoulli}\{e_{0}(X)\}$,
where $\text{logit}\{e_{0}(X)\}=1-2X_{1}-2X_{3}$. The observed outcome $Y$
in both samples is $Y=AY(1)+(1-A)Y(0)$.
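
For concreteness, a Python sketch of this data-generating process (a
toy reimplementation; the seed and variable names are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N_pop, b = 10 ** 5, 0.51
X = rng.normal(size=(N_pop, 3))              # X1, X2, X3
Z = np.column_stack([np.ones(N_pop), X[:, 0], X[:, 1]])
tau = Z @ np.array([1.0, 1.0, 1.0])          # psi = (1, 1, 1)
mu = X[:, 0] + b * X[:, 2]
Y0 = mu + rng.normal(size=N_pop)
Y1 = mu + tau + rng.normal(size=N_pop)

# RT sample: logistic selection (~0.3%), randomized treatment.
p_sel = 1 / (1 + np.exp(6.5 - X[:, 0] - X[:, 1]))
rt = rng.binomial(1, p_sel) == 1
A_rt = rng.binomial(1, 0.5, rt.sum())
Y_rt = np.where(A_rt == 1, Y1[rt], Y0[rt])

# RW sample: random subset of size 1000, confounded treatment.
rw = rng.choice(N_pop, size=1000, replace=False)
e0 = 1 / (1 + np.exp(-(1 - 2 * X[rw, 0] - 2 * X[rw, 2])))
A_rw = rng.binomial(1, e0)
Y_rw = np.where(A_rw == 1, Y1[rw], Y0[rw])
\end{verbatim}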
To assess the robustness of the elastic integrative estimator against
unmeasured confounding, we consider the omission of $X_{3}$ in all
estimators, resulting in unmeasured confounding in the RW data. The
strength of unmeasured confounding is indexed by $b$ in (\ref{eq:mu});
high values of $b$ indicate strong levels of unmeasured confounding
and vice versa. We specify the range of $b$ by exponentiating seven
values in a regular grid from $3$ to $5$ and dividing by $m^{1/2}$,
i.e., $b\in\{0.10,0.17,0.29,0.51,0.89,1.54,2.69\}$. We compare the
following estimators for the HTE parameter $\psi_{0}$:
\begin{enumerate}
\item $\widehat{\psi}_{\mathrm{rt}}$: the covariate adjustment approach of \citet{lu2014asimplemethod}
that fits $Y_{i}$ against the adjusted covariate $(A_{i}-0.5)Z_{i}$
based only on the RT data;
\item $\widehat{\psi}_{\mathrm{eff}}$: the efficient integrative estimator solving
(\ref{eq:optimal ee});
\item $\widehat{\psi}_{\mathrm{elas}}$: the proposed elastic integrative estimator
solving (\ref{eq:EE}) with adaptive selection of $\gamma$.
\end{enumerate}
For $\widehat{\psi}_{\mathrm{eff}}$ and $\widehat{\psi}_{\mathrm{elas}}$, we estimate
the propensity score function by a logistic sieve model and the outcome
mean function by a linear sieve model, each with the power series $X$,
$X^{2}$ and their two-way interactions (omitting $X_{3}$).
The CIs are constructed for $\widehat{\psi}_{\mathrm{rt}}$ and $\widehat{\psi}_{\mathrm{eff}}$
based on the nonparametric bootstrap with the bootstrap size $100$
and for $\widehat{\psi}_{\mathrm{elas}}$ based on the elastic approach with
$\kappa_{n}=\{\log(n)\}^{1/2}$.
\begin{figure}
\centering
\begin{centering}
\includegraphics[scale=0.8]{elastres1}
\par\end{centering}
\vspace{0.5cm}
\caption{\label{fig:violin}Violin plots of estimators of $\psi^{\mathrm{\scriptscriptstyle T}}=(\psi_{1},\psi_{2},\psi_{3})$
subtracting the true values varying the strength of unmeasured confounding.
In each plot, the three estimators $\widehat{\psi}_{\mathrm{rt}}$, $\widehat{\psi}_{\mathrm{eff}}$,
and $\widehat{\psi}_{\mathrm{elas}}$ are labeled by ``rt'', ``eff'',
and ``elas'', respectively. Each column of the plots corresponds
to a different strength of unmeasured confounding labeled by ``b'';
each row of the plots corresponds to a different component of $\psi$:
``psi: 1'' for $\psi_{1}$, ``psi: 2'' for $\psi_{2}$, ``psi:
3'' for $\psi_{3}$.}
\end{figure}
\begin{table}
\caption{\label{table:cvg}Simulation results: coverage rates of the 95\% confidence
intervals and the ratio ($\times10^{2}$) of the MSE of each estimator
to the MSE of $\widehat{\psi}_{\mathrm{rt}}$}
\centering
\vspace{0.5cm}
\resizebox{\textwidth}{!}
\begin{tabular}{cccccccccccccccccc}
\toprule
& \multicolumn{9}{c}{Coverage rate} & \multicolumn{7}{c}{Ratio of MSEs} & \tabularnewline
\cmidrule(lr){2-10} \cmidrule(lr){11-17}
& \multicolumn{3}{c}{RT} & \multicolumn{3}{c}{Eff} & \multicolumn{3}{c}{Elastic} & \multicolumn{3}{c}{Eff} & \multicolumn{3}{c}{Elastic} & & \tabularnewline
\midrule
$b$ & $\psi_{1}$ & $\psi_{2}$ & $\psi_{3}$ & $\psi_{1}$ & $\psi_{2}$ & $\psi_{3}$ & $\psi_{1}$ & $\psi_{2}$ & $\psi_{3}$ & $\psi_{1}$ & $\psi_{2}$ & $\psi_{3}$ & $\psi_{1}$ & $\psi_{2}$ & $\psi_{3}$ & & $\gamma$\tabularnewline
\midrule
0.1 & 94.1 & 94.8 & 93.5 & 94.1 & 94.7 & 94.8 & 95.1 & 96.4 & 96.7 & 3 & 4 & 4 & 10 & 6 & 8 & & 0.13\tabularnewline
0.17 & 94.3 & 94.9 & 93.8 & 92.1 & 94.5 & 95.1 & 94.2 & 96.6 & 96.3 & 3 & 4 & 4 & 11 & 6 & 8 & & 0.13\tabularnewline
0.29 & 94.9 & 94.9 & 93.9 & 80.7 & 94.3 & 92.8 & 93.3 & 96.0 & 95.7 & 6 & 4 & 5 & 14 & 6 & 9 & & 0.15\tabularnewline
0.51 & 94.9 & 95.0 & 94.0 & 19.6 & 89.9 & 86.3 & 91.5 & 94.3 & 93.4 & 26 & 5 & 6 & 31 & 9 & 14 & & 0.33\tabularnewline
0.89 & 95.1 & 95.7 & 94.2 & 0.0 & 74.6 & 59.0 & 95.5 & 93.6 & 92.1 & 142 & 12 & 17 & 51 & 14 & 26 & & 0.82\tabularnewline
1.54 & 95.3 & 95.7 & 94.0 & 0.0 & 48.9 & 14.0 & 95.2 & 94.4 & 93.2 & 547 & 26 & 60 & 55 & 19 & 36 & & 0.82\tabularnewline
2.69 & 95.7 & 95.3 & 93.6 & 0.0 & 31.4 & 0.8 & 94.5 & 94.5 & 93.5 & 1289 & 41 & 163 & 78 & 27 & 48 & & 0.88\tabularnewline
\bottomrule
\end{tabular}
}
\end{table}
Figure~\ref{fig:violin} presents the violin plots based on 2000
simulated datasets. Each column of the plots corresponds to a different
strength of unmeasured confounding indexed by $b$. Table \ref{table:cvg}
reports the coverage rates of $95\%$ CIs, the ratios of the MSE of
the estimator and the MSE of $\widehat{\psi}_{\mathrm{rt}}$, and the selected
$\gamma$. The covariate adjustment estimator based only on the RT
data $\widehat{\psi}_{\mathrm{rt}}$ is unbiased across different scenarios,
and the coverage rates are close to the nominal level. However, the
variability of $\widehat{\psi}_{\mathrm{rt}}$ is large, due to the small
sample size of the RT sample. The efficient integrative estimator
$\widehat{\psi}_{\mathrm{eff}}$ gains efficiency over $\widehat{\psi}_{\mathrm{rt}}$
by leveraging the large sample size of the RW data. However, the bias
of $\widehat{\psi}_{\mathrm{eff}}$ increases and the coverage rate deviates further
from the nominal level as $b$ increases. As shown by the ratios of
the MSE of $\widehat{\psi}_{\mathrm{eff}}$ and the MSE of $\widehat{\psi}_{\mathrm{rt}}$,
for small values of $b$, the ratio is smaller than one due to the
small bias and small variance of $\widehat{\psi}_{\mathrm{eff}}$; while for
large values of $b$, the ratio is larger than one due to the large
bias of $\widehat{\psi}_{\mathrm{eff}}$. The elastic integrative estimator
$\widehat{\psi}_{\mathrm{elas}}$ with the adaptive selection of $\gamma$
has small biases across all scenarios regardless of the strength of
unmeasured confounding. As demonstrated in the last column of Table
\ref{table:cvg}, the selected $\gamma$ increases as $b$ increases,
which shows that the proposed adaptive selection strategy is effective.
Moreover, compared with $\widehat{\psi}_{\mathrm{rt}}$, $\widehat{\psi}_{\mathrm{elas}}$
has smaller variances and MSEs by integrating the RW data across all
scenarios, suggesting that the integration of the RW data is beneficial
to improve estimation efficiency. The coverage rates of the elastic
CIs are all close to the nominal level for all settings with different
values of $b$.
\section{Application\label{sec:Application}}
\begin{table}
\centering\caption{\label{tab:covbal}Covariate means by treatment group in the CALGB
9633 trial sample and the NCDB sample.}
\vspace{0cm}
\begin{tabular}{lccccccc}
\toprule
& & & age & tumor size & male & squamous & white\tabularnewline
& $N$ & $A$ & (years) & (cm) & (y/n) & (y/n) & (y/n)\tabularnewline
\midrule
RT: CALGB 9633 & 156 & $A$=1 & 60.6 & 4.62 & 64.1\% & 40.4\% & 90.4\%\tabularnewline
& 163 & $A$=0 & 61.1 & 4.57 & 63.8\% & 39.3\% & 88.3\%\tabularnewline
RW: NCDB & 4263 & $A$=1 & 63.9 & 5.19 & 54.3\% & 35.6\% & 88.6\%\tabularnewline
& 10903 & $A$=0 & 69.4 & 4.67 & 54.8\% & 40.5\% & 90.0\%\tabularnewline
\bottomrule
\end{tabular}
\end{table}
We illustrate the potential benefit of the proposed elastic estimator
to evaluate the effect of adjuvant chemotherapy for early-stage resected
non-small cell lung cancer (NSCLC) using the CALGB 9633 data and a
large clinical oncology database, the NCDB. In CALGB 9633, we include
$319$ patients with $163$ randomly assigned to observation ($A=0$)
and $156$ randomly assigned to chemotherapy ($A=1$). The comparable
sample from the NCDB includes $15337$ patients diagnosed with NSCLC
between years 2004 -- 2016 in stage IB disease, with $11021$ on observation
and $4316$ receiving chemotherapy after surgery.
The numbers of treated and controls are relatively balanced in the
CALGB 9633 trial while they are unbalanced in the NCDB. We include
five covariates in the analysis: gender ($1=\textrm{male}$, $0=\textrm{female}$),
age, indicator for histology ($1=\textrm{squamous}$, $0=\textrm{non-squamous}$),
race ($1=\text{white},0=\text{non-white}$), and tumor size in cm.
The outcome is the indicator of cancer recurrence within 3 years after
the surgery, i.e. $Y=1$ if recurrence occurred and $Y=0$ otherwise.
Table \ref{tab:covbal} reports the covariate means by treatment group
in the two samples. Due to treatment randomization, covariates are
balanced between the treated and the control in the CALGB 9633 trial
sample. In contrast, due to the lack of treatment randomization, covariates are
highly unbalanced in the NCDB sample. It can be seen that older patients
with squamous histology and smaller tumor sizes are more likely to choose the
conservative option, observation. Moreover, we cannot rule out the possibility
of unmeasured confounders in the NCDB sample.
\begin{figure}
\centering
\begin{centering}
\includegraphics[scale=0.8]{tsize}
\par\end{centering}
\vspace{0cm}
\caption{\label{fig:tsize}Estimated treatment effect as a function of the
(standardized) tumor size along with the $95\%$ Wald confidence intervals:
tumor size{*}$=($tumor size$-4.82)/1.72$}
\end{figure}
We assume a linear HTE function with tumor size as the treatment effect
modifier. We compare the same set of estimators and variance estimators
considered in the simulation study. Table \ref{tab:real-results}
reports the results. Figure \ref{fig:tsize} shows the estimated treatment
effect as a function of the standardized tumor size. Due to the limited
sample size of the trial, none of the components of $\widehat{\psi}_{\mathrm{covj.rt}}$
is significant. By directly combining the trial sample and the
NCDB sample, $\widehat{\psi}_{\mathrm{eff}}$ reveals that adjuvant chemotherapy
is significantly effective in reducing cancer recurrence within 3 years after
the surgery, and that patients with larger tumor sizes benefit more from
adjuvant chemotherapy. However, this finding may be subject to unmeasured confounding
biases of the NCDB sample. In the proposed elastic integrative analysis,
the test statistic is $T=1.9$, and hence there is no strong evidence
that the NCDB presents hidden confounding in our analysis. As a result,
the elastic integrative estimator $\widehat{\psi}_{\mathrm{elas}}$ remains
the same as $\widehat{\psi}_{\mathrm{eff}}$. Reflecting the pre-testing
procedure, the estimated standard error of $\widehat{\psi}_{\mathrm{elas}}$
is larger than that of $\widehat{\psi}_{\mathrm{eff}}$. From Figure \ref{fig:tsize},
patients with tumor sizes in $[4.82+1.72\times(-0.67),4.82+1.72\times(2.18)]=[3.67,8.57]$
significantly benefit from adjuvant chemotherapy in reducing
cancer recurrence within 3 years after the surgery.
\begin{table}
\caption{\label{tab:real-results}Point estimate, standard error and 95\% Wald
confidence interval of the causal risk difference between adjuvant
chemotherapy and observation based on the CALGB 9633 trial sample
and the NCDB sample: tumor size{*}$=($tumor size$-4.82)/1.72$}
\centering
\vspace{0cm}
\begin{tabular}{ccccccccc}
\toprule
& \multicolumn{4}{c}{Intercept ($\psi_{0,1}$)} & \multicolumn{4}{c}{$\text{tumor size}^{*}$ ($\psi_{0,2}$)}\tabularnewline
& Est. & S.E. & \multicolumn{2}{c}{C.I.} & Est. & S.E. & \multicolumn{2}{c}{C.I.}\tabularnewline
\midrule
rt & -0.094 & 0.054 & (-0.202, & 0.015) & 0.002 & 0.055 & (-0.107, & 0.111)\tabularnewline
eff & -0.076 & 0.0083 & (-0.093, & -0.059) & -0.026 & 0.009 & (-0.043, & -0.009)\tabularnewline
elas & -0.076 & 0.0196 & (-0.115, & -0.037) & -0.026 & 0.029 & (-0.084, & 0.032)\tabularnewline
\bottomrule
\end{tabular}
\end{table}
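As a quick numerical check of Table \ref{tab:real-results}, the reported
$95\%$ Wald intervals are recovered, up to rounding of the reported standard
errors, as $\mathrm{Est.}\pm1.96\times\mathrm{S.E.}$; for instance, for the
two components of the elastic estimator:
\begin{verbatim}
for est, se in [(-0.076, 0.0196), (-0.026, 0.029)]:
    print(round(est - 1.96 * se, 3), round(est + 1.96 * se, 3))
# -0.114 -0.038  and  -0.083 0.031, matching the table up to rounding
\end{verbatim}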
\section{Concluding remarks\label{sec:Concluding-remarks}}
The proposed elastic estimator integrates ``high-quality small data''
with ``big data'' to simultaneously leverage small but carefully
controlled unbiased experiments and massive but possibly biased RW
datasets for HTEs. Most causal inference methods require the assumption
of no unmeasured confounding. However, this assumption may not
hold for the RW data due to the uncontrolled real-world data collection
mechanism and is unverifiable based only on the RW data. Utilizing
the design advantage of RTs, we are able to gauge the reliability
of the RW data and decide whether or not to use RW data in an integrative
analysis. The elastic integrative estimator gains efficiency over
the RT-only estimator by integrating reliable, high-quality RW data,
and it automatically detects the existence of bias in the
RW data and gears to the RT data. The proposed estimator is non-regular
and belongs to pre-test estimation by construction \citep{giles1993pre}.
To demonstrate the non-regularity issue, we characterize the distribution
of the elastic integrative estimator under local alternatives, which
provides a better approximation of its finite-sample behavior. Moreover,
we provide a data-adaptive selection of the threshold in the testing
procedure that guarantees a small MSE of the estimator, as well as elastic
confidence intervals that are valid under all hypotheses ${\rm H}_{0}$,
${\rm H}_{a,n}$ and ${\rm H}_{a}.$
We have assumed that the treatment effect function is correctly specified.
In future work, we will derive tests based on over-identification
restrictions \citep{yang2015gof} for evaluating a treatment
effect model. Moreover, to evaluate the treatment effect modifications
of adjuvant chemotherapy, the set of treatment effect modifiers is
suggested based on subject matter knowledge. Without such knowledge,
it is important to identify the true treatment effect modifiers among
a set of variables. We will develop a variable selection procedure
for identifying effect modifiers. The insight is that we can create
a larger number of estimating functions than the number of parameters.
The problem of effect modifier selection falls into the recent work
of \citet{chang2017new} on high-dimensional statistical inferences
with over-identification.
The current framework allows the outcome to be continuous or binary.
In cancer clinical trials, survival outcomes are most common. Following
\citet{yang2018semiparametric}, we will consider the structural failure
time model for the HTE that specifies the relationship of the potential
baseline failure time $T(0)$ and the actual observed failure time
$T$. We assume that, given any $X$, $T(0)\sim T\exp\{A\tau_{\psi_{0}}(X)\},$
where $\sim$ means ``has the same distribution as.'' This model
entails that the treatment effect is to accelerate or decelerate the
failure time compared to the baseline failure time $T(0)$. Intuitively,
$\exp\{A\tau_{\psi_{0}}(X)\}$ describes the relative increase/decrease
in the failure time had the patient received treatment compared to
had the treatment always been withheld and the effect rate of the
treatment can possibly be modified by $\tau_{\psi_{0}}(X)$. Then,
we define $H_{\psi}$ in (\ref{eq:def of H(k)}) as $H_{\psi}=T\exp\{A\tau_{\psi}(X)\}$
to mimic the baseline failure time $T(0)$. A unique challenge in
the survival outcome setting is the possibility of censoring, which
prevents observing all $T$'s. In our future work, we will develop
elastic algorithms to combine the RT and RW data for right-censored
survival outcomes.
Because the RW data were not collected for research purposes, data
quality issues such as measurement errors and missingness may arise. Moreover,
data collected from RTs may not be available in RW data or vice versa,
resulting in data structure misalignment \citep{li2020causal}. Developing
principled methods to deal with these practical issues will be our
future work.
\section*{Supplementary material}
Supplementary material online includes technical details and proofs.
\bibliographystyle{dcu}
Let
$X_1$, $X_2$, $\dots$
be a sequence of random variables
with the common distribution function
$F$.
Let us consider the $L$-statistic
$$
L_n=\frac{1}{n}\sum_{i=1}^nc_{ni}h(X_{n:i}),\eqno(1)
$$
where
$X_{n:1}\le\ldots\le X_{n:n}$
are the order statistics based on the sample
$\{X_i, i\le n\}$,
$h$
is a measurable function called a {\it kernel},
$c_{ni}$, $i=1,\dots, n$,
are some constants called {\it weights}.
The aim of this paper is to establish the strong law of large
numbers (SLLN) for $L$-statistics (1) based on sequences of
weakly dependent random variables. Similar problems were
considered in \cite{aaronson} and \cite{gilat}, where the SLLN was proved for
the aforementioned $L$-statistics based on stationary ergodic
sequences. For example, in \cite{gilat} the case of linear kernels
($h(x)=x$) and {\it asymptotically regular} weights was considered,
i.~e.
$$
c_{ni}=n\int\limits_{(i-1)/n}^{i/n} J_n(t)\,dt,\eqno(2)
$$
with $J_n$ denoting an integrable function. In addition,
the existence of a function $J$ such that for all
$t\in(0,1)$
$$
\int\limits_0^t J_n(s)\,ds\to\int\limits_0^t J(s)\,ds
$$
was imposed there.
The statistics (1) with linear kernels and {\it regular} weights, i.~e.
$J_n\equiv J$
in (2), were considered in \cite{aaronson}.
In the present paper we relax the regularity assumption on
$c_{ni}$
and, furthermore, consider the
$L$-statistics (1) based on both stationary ergodic sequences and $\varphi$-mixing
sequences. We also do not impose {\it monotonicity} of the kernel in (1).
Note, that if
$h$
is a monotonic function, then the
$L$-statistic (1) can be represented as a statistic
$$
\frac{1}{n}\sum_{i=1}^nc_{ni}Y_{n:i},
$$
based on a sample
$\{Y_i=h(X_i), i\le n\}$
(see \cite{bakl} for more detail).
As an auxiliary result we obtain the Glivenko--Cantelli theorem
for $\varphi$-mixing sequences.
\section{Notations and Results}
\subsection{Assumptions and notations}
We first introduce our main notations. Let
$F^{-1}(t)=\inf\{x:F(x)\ge t\}$
be the quantile function corresponding to the distribution function
$F$
and let
$U_1$, $U_2$, $\dots$
be a sequence of
uniformly distributed on
$[0,1]$
random variables. Due to the fact
that joint distributions of random vectors
$(X_{n:1},\dots, X_{n:n})$
and
$(F^{-1}(U_{n:1}),\dots, F^{-1}(U_{n:n}))$
coincide, we have that
$$
L_n\stackrel{d}{=}\frac{1}{n}\sum^n_{i=1}c_{ni}H(U_{n:i}),
$$
where
$H(t)=h(F^{-1}(t))$,
and
$\stackrel{d}{=}$
denotes the
equality in distribution. Let us consider a sequence of functions
$c_n(t)=c_{ni}$, $t\in((i-1)/n, i/n]$, $i=1$, \dots, $n$,
$c_n(0)=c_{n1}$.
It is not difficult to see that in this case we have:
$$
L_n=\int\limits_0^1 c_n(t)H(G^{-1}_n(t))\,dt,
$$
where
$G_n^{-1}$
is the quantile function corresponding to the
empirical distribution function
$G_n$
based on the sample
$\{U_i, i\le n\}$.
We also introduce the following notation:
$$
\mu_n=\int\limits_0^1 c_n(t)H(t)\,dt,
$$
$$
C_n(q)=\left\{\begin{array}{ll}n^{-1}\sum\limits_{i=1}^n|c_{ni}|^q\quad&\mbox{if}\quad 1\le q<\infty,\\
\max\limits_{i\le n}|c_{ni}|\quad&\mbox{if}\quad q=\infty.\end{array}\right.
$$
Further we will use the following conditions on the weights
$c_{ni}$
and the function
$H$:\vspace{2mm}
(i) the function
$H$
is continuous on
$[0,1]$
and
$\sup\limits_{n\ge1}C_n(1)<\infty$.
(ii)
${\bf E}|h(X_1)|^p<\infty$
and
$\sup\limits_{n\ge1}C_n(q)<\infty$ ($1\le p<\infty$, $1/p+1/q=1$).
\vspace{2mm}
Assumptions (i) and (ii)
guarantee the existence of
$\mu_n$.
We also note that
$C_n(\infty)=\|c_n\|_{\infty}=\sup\limits_{0\le t\le1}|c_n(t)|$
and
$C_n(q)=\|c_n\|_q^q=\int\limits_0^1 |c_n(t)|^q\,dt$
for
$1\le q<\infty$.
\subsection{SLLN for ergodic and stationary sequences}
Let us formulate our main statement for stationary ergodic sequences.
\begin{theorem}
Let
$\{X_n, n\ge 1\}$
be a strictly stationary and ergodic
sequence and let either {\rm (i)} or {\rm (ii)} hold.
Then, as $n\to\infty$,
$$
L_n-\mu_n\to0\quad\mbox{a. s.}\eqno(3)
$$
\end{theorem}
\textsc{Remark.}
Let us consider the case of regular weights:
$$
c_{ni}=n\int\limits_{(i-1)/n}^{i/n}J(t)\,dt.
$$
Then
$$
L_n=\sum_{i=1}^nH(U_{n:i})\int\limits_{(i-1)/n}^{i/n}J(t)\,dt=
\int\limits_0^1J(t)H(G_n^{-1}(t))\,dt.
$$
Hence, assuming $c_n(t)=J(t)$ in Theorem~1, we have
$$
L_n\to\int\limits_0^1J(t)H(t)\,dt\quad\mbox{a. s.}
$$
Also note that the convergence
$\mu_n\to\mu$, $|\mu|<\infty$,
yields that
$L_n\to\mu$
a.~s. In particular, if
$c_n(t)\to c(t)$
uniformly in
$t\in[0,1]$,
then
$\mu_n\to\int\limits_0^1c(t)H(t)\,dt$.
Without the requirement that the coefficients
$c_{ni}$
are regular
one can easily construct an example in which the
assumptions of Theorem~1 are satisfied, but the sequence
$c_{n}(t)$
does not converge in any reasonable sense to a limit function.
Let, for simplicity,
$h(x)=x$
and let
$X_1$ be uniformly distributed on [0, 1].
Set
$c_{ni}=(i-1)\delta_n$
for
$1\le i\le k$
and
$c_{ni}=(2k-i)\delta_n$
for
$k+1\le i\le 2k$, $k=k(n)=[n^{1/2}]$, $\delta_n=n^{-1/2}$.
Thus, the
function $c_{n}(t)$ is defined on the interval $[0, 2k/n]$. On the remaining part of $[0, 1]$ we extend
$c_{n}(t)$ periodically
with period $2k/n$: $c_n(t)=c_n(t-2k/n)$, $2k/n\le t\le 1$
(see also \cite[p.~138]{bakl}).
Note that
$0\le c_{n}(t)\le 1$.
One can show that in
this case
$\mu_n\to 1/4$.
In view of this fact we have that the
assumptions of Theorem~1 are satisfied and, consequently,
$$
L_n\to 1/4\quad\mbox{a. s.}
$$
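Heuristically, $c_n$ oscillates with period $2k/n\to0$ and has mean
approximately $1/2$ over each period, so
$\mu_n=\int_0^1 c_n(t)t\,dt\to\frac{1}{2}\int_0^1 t\,dt=1/4$.
The following short numerical sketch (illustrative only and not part of the
proof; it takes the $U_i$ independent and uniform) is consistent with this
limit:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
k = int(n ** 0.5)           # k = [n^{1/2}]
delta = n ** -0.5           # delta_n = n^{-1/2}

# Triangular weights of period 2k, extended cyclically over i = 1..n.
i = np.arange(1, n + 1)
m = (i - 1) % (2 * k) + 1   # position within the period, 1..2k
c = np.where(m <= k, (m - 1) * delta, (2 * k - m) * delta)

x = np.sort(rng.random(n))  # order statistics of a Uniform[0,1] sample
print(np.mean(c * x))       # approximately 0.25
\end{verbatim}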
\subsection{SLLN for $\varphi$-mixing sequences}
We will now formulate our main statement for mixing sequences.
Let us define the mixing coefficients:
$$
\varphi(n)=\sup_{k\ge 1}\sup\{|\mathbf{P}(B|A)-\mathbf{P}(B)|: A\in\mathcal{F}_1^k, B\in\mathcal{F}_{k+n}^\infty, \mathbf{P}(A)>0\},
$$
where
$\mathcal{F}_1^k$
and
$\mathcal{F}_{k+n}^\infty$
denote the
$\sigma$-fields generated by
$\{X_i, 1\le i\le k\}$
and
$\{X_i, i\ge k+n\}$
respectively.
The sequence
$\{X_i, i\ge 1\}$
is called {\it $\varphi$-mixing} (uniform mixing)
if
$\varphi(n)\to0$
as
$n\to\infty$.
\begin{theorem}
Let $\{X_n, n\ge 1\}$ be a $\varphi$-mixing sequence of identically
distributed random variables such that
$$
\sum_{n\ge 1}\varphi^{1/2}(2^n)<\infty,\eqno(4)
$$
and let any of the conditions {\rm (i)} or {\rm (ii)} hold. Then
the statement {\rm (3)} remains true.
\end{theorem}
The proof of Theorem~2 essentially uses the result of Lemma~1
below. The statement (a) of Lemma~1 is the SLLN for
$\varphi$-mixing sequences. The statement (b) is a
Glivenko--Cantelli-type result for $\varphi$-mixing sequences and
is of independent interest. We note that neither in Theorem~2 nor
in Lemma~1 do we assume stationarity of the sequence
$\{X_n\}$.
\begin{lemma}
Let $\{X_n, n\ge 1\}$ be a $\varphi$-mixing sequence of identically
distributed random variables such that the statement {\rm (4)} holds.
Then
(a) for any function
$f$
such that
${\bf E}|f(X_1)|<\infty$,
$$
\frac{1}{n}\sum_{i=1}^nf(X_i)\to\mathbf{E}f(X_1)\quad\mbox{a. s.}\eqno(5)
$$
(b)
$$
\sup_{-\infty<x<\infty}|F_n(x)-F(x)|\to0\quad\mbox{a. s.},\eqno(6)
$$
where
$F_n$ is the empirical distribution function based on the sample
$\{X_i, i\le n\}$.
\end{lemma}
\section{Proofs}
\subsection{Proof of Theorem 1}
\begin{lemma}
Let the function
$H$
be continuous on
$[0,1]$.
Then
$$
\sup_{0\le t\le1}|H(G_n^{-1}(t))-H(t)|\to0\quad\mbox{a. s.}\eqno(7)
$$
\end{lemma}
\textsc{Proof} of Lemma~2.
Using the equality
$$
\sup_{0\le t\le1}|G^{-1}_n(t)-t|=\sup_{0\le t\le1}|G_n(t)-t|
$$
(see, for example, \cite[p.~95]{sh-well})
and the Glivenko--Cantelli theorem for stationary ergodic sequences, we get
$$
\sup_{0\le t\le1}|G^{-1}_n(t)-t|\to0\quad\mbox{a. s.},
$$
i.~e.
$G^{-1}_n(t)\to t$
a.~s. uniformly in
$t\in[0,1]$
as
$n\to\infty$.
Since the function
$H$
is uniformly continuous on the compact
$[0,1]$,
it follows that
$H(G_n^{-1}(t))\to H(t)$
a. s. uniformly in
$t\in[0,1]$.
This concludes the proof.
Let the condition (i) hold. Now, by Lemma~2,
$$
|L_n-\mu_n|\le\int\limits_0^1|c_n(t)||H(G_n^{-1}(t))-H(t)|\,dt
$$
$$
\le
C_n(1)\sup_{0\le t\le 1}|H(G_n^{-1}(t))-H(t)|\to0\quad\mbox{a. s.}
$$
Consequently, the proof of Theorem~1 for the first case is complete.
\begin{lemma}
Let
${\bf E}|h(X_1)|^p<\infty$.
Then
$$
\int\limits_0^1|H(G_n^{-1}(t))-H(t)|^p\,dt\to0\quad\mbox{a. s.}\eqno(8)
$$
\end{lemma}
\textsc{Proof} of Lemma~3.
First note that the set of all continuous on the interval
$[0,1]$
functions is everywhere dense in
$L_p[0,1]$, $1\le p<\infty$.
Therefore, for any
$\varepsilon>0$
and any function
$f\in L_p[0,1]$
there exists a function
$f_{\varepsilon}$
continuous on
$[0,1]$
such that
$\int\limits_0^1|f(t)-f_{\varepsilon}(t)|^p\,dt<\varepsilon$.
Since
$\mathbf{E}|h(X_1)|^p=\int\limits_0^1|H(t)|^p\,dt<\infty$,
this implies that there exists a function
$H_{\varepsilon}$
continuous on
$[0,1]$
such that
$$
\int\limits_0^1|H(t)-H_{\varepsilon}(t)|^pdt<\varepsilon/2.
$$
Further,
$$
\int\limits_0^1|H(G_n^{-1}(t))-H(t)|^p\,dt\le3^{p-1}
\int\limits_0^1|H(t)-H_{\varepsilon}(t)|^p\,dt
$$
$$
+3^{p-1}\int\limits_0^1|H(G_n^{-1}(t))-H_{\varepsilon}(G_n^{-1}(t))|^p\,dt+
3^{p-1}\int\limits_0^1|H_{\varepsilon}(G_n^{-1}(t))-H_{\varepsilon}(t)|^p\,dt.\eqno(9)
$$
From Lemma~2 it follows that
$H_{\varepsilon}(G_n^{-1}(t))\to H_{\varepsilon}(t)$
a.~s. uniformly in
$t$
as
$n\to\infty$.
Hence, the last integral on the right hand side of (9) converges to zero a. s. as
$n\to\infty$.
Now let us consider the second integral. By the ergodic theorem for stationary sequences,
\begin{eqnarray*}
&&
\int\limits_0^1|H(G_n^{-1}(t))-H_{\varepsilon}(G_n^{-1}(t))|^p\,dt\\
&=&\frac{1}{n}\sum_{i=1}^{n}|H(U_i)-H_{\varepsilon}(U_i)|^p
\to_{\mbox{a. s.}}\mathbf{E}|H(U_1)-H_{\varepsilon}(U_1)|^p\\
&=&\int\limits_0^1|H(t)-H_{\varepsilon}(t)|^p\,dt<\varepsilon/2.
\end{eqnarray*}
Consequently,
$$
\limsup_{n\to\infty}\int\limits_0^1|H(G_n^{-1}(t))-H(t)|^p\,dt<3^{p-1}\varepsilon\quad\mbox{a. s.}
$$
Since
$\varepsilon$
is arbitrary, we obtain (8).
Now let the assumption (ii) hold. Using H\"{o}lder's inequality, we get
$$
|L_n-\mu_n|\le C_n^{1/q}(q)\left(\int\limits_0^1|H(G_n^{-1}(t))-H(t)|^p\,dt\right)^{1/p}\quad\mbox{for }p>1,
$$
and
$$
|L_n-\mu_n|\le C_n(\infty)\int\limits_0^1|H(G_n^{-1}(t))-H(t)|\,dt\quad\mbox{for }p=1.
$$
The statement (3) follows from Lemma~3.
This completes the proof of Theorem~1.
\subsection{Proof of Theorem 2}
We now prove Lemma~1.
Note that for any measurable function $f$
the sequence
$\{f(X_n), n\ge 1\}$
has its $\varphi$-mixing coefficient bounded by the corresponding
coefficient of the initial sequence, since
for any measurable $f$
the $\sigma$-field generated by $\{f(X_n), n\ge 1\}$ is contained in the
$\sigma$-field generated by $\{X_n, n\ge 1\}$.
Therefore, if the sequence $\{X_n, n\ge 1\}$ is $\varphi$-mixing,
then so is the sequence $\{f(X_n), n\ge 1\}$.
Hence, the condition (4) holds
for mixing coefficients of the sequence
$\{f(X_n), n\ge 1\}$.
The statement (5) follows from the SLLN for
$\varphi$-mixing sequences
(see \cite[p.~200]{linlu}).
The statement (6) is an immediate corollary of (5) and the standard argument of the classical Glivenko--Cantelli theorem.
The proof of Theorem~2 is similar to the proof of Theorem~1. Indeed, the statement (7)
follows from the Glivenko--Cantelli theorem (6); using the SLLN (5), we get the statement (8).
Thus the proof of Theorem~2 is complete.
Graphene has been proposed for use in many applications, ranging from solar cells \cite{Wang2008} to photon detectors \cite{Vora2011,Yan2011,Betz2012a,Fong2012a}. The low heat capacity \cite{Fong2012a} of graphene makes it an appealing candidate for detecting low energy photons. In particular, a detector of far-IR terahertz (THz) photons is of great interest for both laboratory experiments \cite{Schmutt} and astronomical studies in space-based observatories \cite{Karasik}. Transition edge sensors (TESs) using the superconducting transition of titanium (Ti) have been proposed as a detector for photons in the far-IR \cite{Karasik2005}. These TES detectors have achieved single-photon sensitivity in the near- and mid-IR \cite{Lita2008,Karasik}. Quantum capacitance\cite{Stone2012} and kinetic inductance \cite{Day2003,mkid} detectors are also emerging as candidates for detecting single THz photons.
Unlike a transition edge sensor biased on its superconducting transition, the resistance of graphene is not significantly temperature dependent for the conditions where it is most promising for photon detection. An alternate readout mechanism must be employed. We have previously modeled the use of Johnson noise to measure the absorption of THz photons in graphene \cite{McKitterick2013}. This method of reading out the electron temperature introduces an additional source of noise \cite{Dicke1946}, Eq.~\ref{eq:dicke} below, that must be considered when evaluating detector performance. To determine the sensitivity of the graphene detector, the change in temperature due to an incident THz photon, $\Delta T$, must be compared to the uncertainty in the temperature measurement, $\delta T$.
There are two dominant contributions to the temperature uncertainty: intrinsic temperature fluctuations due to the exchange of energy (e.g., phonons, photons, hot electrons) between the graphene and its environment ($\delta T_\mathrm{intr}$) \cite{Mather1984} and apparent temperature fluctuations due to the inaccuracy of the Johnson-noise readout ($\delta T_\mathrm{readout}$). These sources of noise add in quadrature and are presented below:\\
\begin{minipage}{.5\linewidth}
\begin{equation}
\label{eq:intr}\delta T_\mathrm{intr}\approx\sqrt{\frac{k_\mathrm{B}(T^2+T_0^2)}{2C}}
\end{equation}
\end{minipage
\begin{minipage}{.5\linewidth}
\begin{equation}
\label{eq:dicke}\delta T_\mathrm{readout}=\frac{T_\mathrm{a}+T}{\sqrt{B\tau}}.
\end{equation}
\end{minipage}
Here, $T$ denotes the electron temperature, $T_0$ is the substrate temperature, $C$ is the heat capacity, $T_\mathrm{a}$ is the noise temperature of the amplifier, $B$ is the measurement bandwidth, and $\tau$ is the time interval over which the temperature is being averaged. With a smaller thermal conductance, the thermal relaxation time of the system is increased, allowing the noise due to the readout to be reduced by averaging for a longer time. For graphene to be a practical detector, it is critical to achieve very low values of thermal conductance.
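As a simple illustration, the two contributions combine in quadrature, $\delta T=(\delta T_\mathrm{intr}^2+\delta T_\mathrm{readout}^2)^{1/2}$; a minimal numerical sketch (with $C$, $B$ and $\tau$ taken from the values quoted in this article, while $T$, $T_0$ and $T_\mathrm{a}$ are placeholder values of plausible magnitude) is:
\begin{verbatim}
import numpy as np

kB = 1.380649e-23  # Boltzmann constant (J/K)

def delta_T(T, T0, C, Ta, B, tau):
    """Total rms temperature uncertainty: intrinsic and
    readout contributions added in quadrature."""
    dT_intr = np.sqrt(kB * (T**2 + T0**2) / (2 * C))
    dT_readout = (Ta + T) / np.sqrt(B * tau)
    return np.hypot(dT_intr, dT_readout)

print(delta_T(T=0.3, T0=0.1, C=2e-22, Ta=0.5, B=150e6, tau=0.5e-6))
\end{verbatim}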
There are additional constraints on the device. Graphene samples need to be fabricated with a very low heat capacity and used at a low bath temperature ($T_\mathrm{0}\sim 100$~mK) in order to realize the necessary sensitivity to detect single THz photons. Operating with this low heat capacity, however, means that an incident THz photon will cause a temperature rise that is significantly larger than the equilibrium temperature of the bath ($T\gg T_\mathrm{0}$) so the standard equilibrium formula for Eq.~\ref{eq:intr} that takes $T=T_0$ is no longer valid. In this case, where $T\gg T_0$, the temperature used to calculate the noise of the system for a particular averaging interval is taken to be the average electron temperature over that interval, $T=T_\mathrm{avg}$.
By using previously measured values for thermal conductance \cite{Betz2012} and heat capacity \cite{Fong2012a}, and extrapolating to lower temperatures, we performed calculations to determine the ideal operating conditions for a graphene detector. We find that the best detection of single photons occurs for parameters such that $T_\mathrm{avg}\gg T_0$. An approximate upper bound on heat capacity of $2\times 10^{-22}$~J/K at 100~mK is necessary to achieve good performance in detecting single THz photons \cite{McKitterick2013}. This value of heat capacity corresponds to a 4~$\mu\mathrm{m}^2$ flake of graphene with a carrier density of $n=10^{12}~\mathrm{cm}^{-2}$. In Fig.~\ref{fig:hista}, we present histograms of theoretical photon counts of single 1~THz photons for a hypothetical graphene detector with these characteristics; these histograms illustrate the ensemble behavior over many detection events.
In Fig.~\ref{fig:hista}, the histogram on the right represents the relative probability of reading out a certain change in temperature, $\Delta T_\mathrm{det}$, when a 1~THz photon arrives at the detector. Similarly, the histogram at the left represents the relative probability of observing an apparent change in temperature when no photons are absorbed in the averaging interval. If this detector had no noise, these histograms would resemble delta functions and be centered either at the average temperature change due to the arrival of a photon, or at zero detected temperature change. The broadening of the histograms is due to the contributions of the noise outlined in Eqs.~\ref{eq:intr} and \ref{eq:dicke}.
The detection peak for single-photon events is well-separated from the histogram of counts when no photons arrive at the detector. Using the calculated thermal relaxation time of $\tau=0.5~\mu$s and setting a threshold on the minimum change in electron temperature detected, $\Delta T_\mathrm{det}$, of 200~mK, one can limit the dark (zero-photon) counts to the background count rate, taken to be 1000 photons per second \cite{Karasik2011a}, without losing many single-photon events. This is an effective photon counter. With a cold, tunable frequency pre-filter, this graphene photon-counting detector could provide THz spectroscopy at the single-photon level \cite{Karasik2005}.
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\includegraphics[width=.48\columnwidth]{hist-oldmod.eps}
\caption{(Color online) Normalized histograms for photon counts using a previously reported thermal conductance \cite{Betz2012} scaled to 100~mK and a heat capacity of $2\times 10^{-22}~$J/K at 100~mK. The amplifier used for these calculations is described in McKitterick et al. \cite{McKitterick2013}.}
\label{fig:hista}
\end{wrapfigure}
In order to determine whether the parameters assumed are valid, it is necessary to measure how the thermal conductance behaves at low bath temperatures. We assume there are three cooling pathways for the hot electrons: coupling to phonon modes ($G_\mathrm{el-ph}$), emission of microwave photons ($G_\mathrm{photon}$) \cite{Schmidt2004}, and out-diffusion of hot electrons ($G_\mathrm{diff}$), yielding a total thermal conductance of $G_\mathrm{tot}=G_\mathrm{el-ph}+G_\mathrm{photon}+G_\mathrm{diff}$.
To achieve the low value of thermal conductance that is necessary for THz photon detection, the cooling pathway due to hot electrons diffusing out the leads needs to be suppressed and the electron-phonon coupling must be very weak at low temperatures. Superconducting contacts have been proposed to virtually eliminate $G_\mathrm{diff}$ at low temperatures, as demonstrated for lead (Pb) contacts \cite{Borzenets2011}. However, Pb is unsuitable for the applications proposed herein due to its formation of a thick oxide barrier. Effective confinement of electrons in graphene by other large-gap superconductors has not previously been reported.
Theoretical predictions for the electron-phonon coupling suggest a power law form for the electron-phonon cooling, $G_\mathrm{el-ph}\propto T^p$, with $p$ equal to 2, 3, or 4 \cite{Song2011,Chen2012,Tse2009}. Previous measurements of electron-phonon thermal conductance at cryogenic temperatures have yielded inconsistent results. Two groups \cite{Fong2012a,Betz2012} have found $G_\mathrm{el-ph}\propto T^3$ above 2~K, but their extracted values for electron-phonon coupling strength differ by over two orders of magnitude. Other studies have observed a $T^2$ dependence for electron-phonon thermal conductance from 10 to 300~K \cite{graham2012photocurrent,Betz2012a} and 100~mK to 700~mK \cite{Borzenets2012}.
In this article, we present preliminary measurements on graphene fabricated on silicon dioxide (SiO$_2$) that aim to provide a better understanding of $G_\mathrm{el-ph}$ and establish the capability of superconducting contacts on graphene to block $G_\mathrm{diff}$.
\section{Methods}
\begin{figure}[]\sidecaption
\centering
\includegraphics[width=.6\columnwidth]{RvsT.eps}
\caption{(Color online) (A) dV/dI as a function of temperature measured with a $1~\mu$A AC bias. Above 10~K, the resistance is primarily due to the NbN leads. (B) Schematic of graphene device (not to scale). A gate voltage is used to tune the carrier density and Ti is used as an adhesion layer for the NbN. (C) Optical micrograph of NbN lead structure. The spacing between the leads is approximately 1$~\mu$m.}
\label{fig:gsamp}
\end{figure}
The graphene samples used in this study are fabricated at Stony Brook University, where graphene is exfoliated onto a silicon substrate (15-20~$\Omega\cdot$cm at 300~K) with a 500~nm thick layer of SiO$_2$. Niobium nitride (NbN) leads are deposited on the graphene flake with a Ti adhesion layer. A thin layer of Pd is deposited between the Ti and NbN to reduce interaction between the two layers during deposition. The sample presented here has a NbN transition temperature of approximately 10~K with graphene dimensions $10\times 4~\mathrm{\mu m}$ ($10\times 1~\mathrm{\mu m}$ between leads). This transition is due to the superconducting resistance change of the NbN leads.
The samples are mounted in a vacuum can that is cooled to approximately 2~K in a pumped liquid helium cryostat. Measurements of thermal conductance are performed by heating up the electrons with a DC current and using the microwave readout shown in Fig.~\ref{fig:tcon}A to measure the emitted Johnson noise.
The Johnson noise is rectified using a diode to present a DC voltage to a digital multimeter. To convert this voltage to the electron temperature, we set $V_\mathrm{DC}=0$ so that $T=T_0$, increase the temperature of the graphene sample using a resistive heater, and record the resulting linear change in the voltage read out by our diode.
Using this calibration, we find the electron temperature as a function of the DC heating power due to an applied current. We then use the electron temperature to determine the thermal conductance, $G=\frac{dP}{dT}.$
We present this in Fig.~\ref{fig:tcon}B along with the value of $G_\mathrm{diff}$ calculated from the sample resistance.
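Numerically, this derivative can be sketched as follows (a minimal illustration with made-up example arrays standing in for the measured data):
\begin{verbatim}
import numpy as np

T = np.array([2.0, 3.0, 4.0, 5.0])          # electron temperatures (K)
P = np.array([0.1, 0.4, 1.0, 2.0]) * 1e-9   # example heating powers (W)
G = np.gradient(P, T)                       # thermal conductance G = dP/dT
\end{verbatim}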
\section{Discussion}
For large values of electron temperature, adding a power law term to $G_\mathrm{diff}$ describes the thermal conductance fairly well and yields a power of 2 for the fit parameter $p$. This is consistent with recent theory \cite{Song2011} predicting that disorder-assisted scattering, described as supercollisions, can dominate the electron-phonon cooling, which results in $G_\mathrm{el-ph}\propto T^2$. The disorder-assisted scattering theory predicts a crossover temperature, $T_*$, below which $G_\mathrm{el-ph}\propto T^3$. In that article, the authors expect $T_*\gtrsim 10$~K for typical carrier densities. For our data, the large contribution of $G_\mathrm{diff}$ below approximately 8~K dominates over the contribution of electron-phonon cooling at these temperatures, preventing the extraction of $T_*$.
\begin{figure}[]
\centering
\includegraphics[width=.8\linewidth]{eTemp-h2.eps}
\caption{(Color online) (A) Schematic of experimental setup for Johnson-noise measurements. LP and BP stand for low-pass and band-pass respectively. (B) Calculated thermal conductance from measured electron temperature. $G_\mathrm{diff}=\frac{12\mathcal{L}T}{R}$ is calculated from the measured electron temperature and device resistance. $\mathcal{L}=2.44\times 10^{-8}~\mathrm{W\Omega K^{-2}}$ is the Lorentz number and $R$ is the electrical resistance \cite{Prober1993}. Here $G_\mathrm{diff}$ is computed assuming non-superconducting contacts.}
\label{fig:tcon}
\end{figure}
At temperatures lower than 5~K, this fit ceases to accurately describe the measured thermal conductance. We believe that the significant drop of the measured $G$ below the fit line is the result of the superconducting contacts blocking the hot electrons from diffusing out of the graphene.
\begin{figure}[]
\centering
\includegraphics[width=.8\linewidth]{hist-new.eps}
\caption{(Color online) (A) Estimated electron-phonon contribution to thermal conductance, assuming $T_*=10$~K and an effective area of $4~\mathrm{\mu m^2}$. $G_\mathrm{rad}$ is calculated using a coupled bandwidth of 150~MHz. (B) Normalized histograms for photon counts using the thermal conductance from (A) and a heat capacity of $2\times 10^{-22}~$J/K at 100~mK. The dashed line indicates a threshold $\Delta T_\mathrm{det}$ which would have few dark counts, but would count approximately $85\%$ of the single-photon events.}
\label{fig:hist-new}
\end{figure}
For the purposes of THz detection, we now consider a graphene flake similar to the one measured in Fig.~\ref{fig:tcon}, but with an area ten times smaller, corresponding to the optimal heat capacity found in previous calculations \cite{McKitterick2013}. At 100~mK, a device with these dimensions may permit a Josephson current \cite{Borzenets2012}. That would not be desirable, due to the excess noise at the finite DC voltages needed to read out the Johnson noise. In order to couple to the device, it may be necessary to use superconductor-insulator-graphene (SIG) contacts with a thin tunnel barrier to suppress the Josephson current. For Johnson-noise readout at microwave frequencies, the finite impedance of the tunnel barriers would be short-circuited by the tunnel junction capacitance. The device would be impedance matched to the readout circuit.
To evaluate the performance at 100~mK, we made several assumptions about the behavior of the thermal conductance below 1~K. First, we assume that $G_\mathrm{diff}\rightarrow 0$ for $T<1$~K so that $G=G_\mathrm{el-ph}+G_\mathrm{photon}$, where $G_\mathrm{photon}=k_\mathrm{B}B$ and $B$ is the coupled bandwidth to an impedance-matched load. We assume that $G_\mathrm{el-ph}\propto T^3$ below 10~K; this is shown in Fig.~\ref{fig:hist-new}. From these assumptions, the cooling power due to electron-phonon coupling is given by $P_\mathrm{el-ph}=\frac{A}{4T_*}(T^4-T_0^4)$. We can then use the heat flow equation to calculate the temperature of the electrons as a function of time after absorbing a THz photon at 100~mK, as outlined in \cite{McKitterick2013}. From the calculation of electron temperature as a function of time, we determine the rms apparent temperature fluctuations, $\delta T$. The expected histograms of 1~THz photon counts are shown in Fig.~\ref{fig:hist-new}.
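A minimal sketch of this heat-flow calculation (illustrative only: $A$ is a placeholder coupling constant, and treating the heat capacity $C$ as constant overestimates the initial temperature jump, since in the full calculation $C$ grows with $T$):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

h = 6.62607e-34           # Planck constant (J s)
C = 2e-22                 # heat capacity (J/K)
T0 = 0.1                  # bath temperature (K)
Tstar, A = 10.0, 1.5e-13  # placeholder electron-phonon parameters

def dTdt(t, T):
    # C dT/dt = -P_el-ph(T), with P_el-ph = A/(4 T*) (T^4 - T0^4)
    return -A / (4 * Tstar) * (T**4 - T0**4) / C

T_init = T0 + h * 1e12 / C   # temperature jump after a 1 THz photon
sol = solve_ivp(dTdt, [0, 2e-6], [T_init])
print(sol.y[0, -1])          # electron temperature 2 microseconds later
\end{verbatim}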
This detector would have sufficient resolving power to count single 1~THz photons. By setting a threshold $\Delta T_\mathrm{det} = 300$~mK, this detector would limit the dark counts to the background count rate (taken to be 1000 photons per second \cite{Karasik2011a}) while still recording approximately $85\%$ of single-photon events. With a well matched antenna, this detector would achieve a single-photon detection efficiency for 1~THz photons much greater than any device reported so far \cite{Komiyama}.
There are still open questions about the cooling of hot electrons in graphene. Previous measurements above 1~K are not in agreement. Also, it is not clear what area of the graphene contributes to the electron-phonon thermal conductance. At high temperatures, $G_\mathrm{el-ph}\gg G_\mathrm{diff}$, and the effective area should be much smaller than the total area. At low temperatures, a larger area may contribute. Finally, the effect of the substrate on electron-phonon cooling is still not well understood. These questions define the goals for future studies.
\section{Conclusions}
The measurements we present here are consistent with graphene being a good single-photon THz detector. To fully evaluate the effectiveness of this type of detector, measurements need to be made down to the base temperature of 100~mK. Such measurements would serve both to provide a better understanding of the electron-phonon cooling and to demonstrate the capability of superconducting contacts to effectively confine hot electrons. Both of these goals are critical to the design of a graphene THz photon detector.
{\footnotesize
\bibliographystyle{Science}
Large-scale knowledge graphs (KGs) support many downstream natural language processing tasks such as question answering and response generation. However, a large number of important facts are missing from existing KGs, which significantly limits their applicability. Therefore, automated reasoning, or the ability for computing systems to make new inferences from observed evidence, has attracted much attention from the research community. In recent years, there have been surging interests in designing machine learning algorithms for complex reasoning tasks, especially in large knowledge graphs where the countless entities and links have posed great challenges to traditional logic-based algorithms. Specifically, we situate our study in this large KG multi-hop reasoning scenario, where the goal is to design an automated inference model to complete the missing links between existing entities in large KGs. For example, if the KG contains a fact like \textit{president}(\textit{BarackObama}, \textit{USA}) and \textit{spouse}(\textit{Michelle, BarackObama}), then we would like the machines to complete the missing link \textit{livesIn}(\textit{Michelle}, \textit{USA}) automatically. Systems for this task are essential to complex question answering applications.
To tackle the multi-hop link prediction problem, various approaches have been proposed. Some earlier works like PRA~\cite{lao2011random,gardner2014incorporating,gardner2013improving} use bounded-depth random walk with restarts
to obtain paths. More recently, DeepPath~\cite{xiong2017deeppath} and MINERVA~\cite{das2017go}, frame the path-finding problem as a Markov Decision Process (MDP) and utilize reinforcement learning (RL) to maximize the expected return. Another line of work along with ours are Chain-of-Reasoning~\cite{das2016chains} and Compositional Reasoning~\cite{neelakantan2015compositional}, which take multi-hop chains learned by PRA as input and aim to infer its relation.
Here we frame the KG reasoning task as two sub-steps, i.e. ``Path-Finding'' and ``Path-Reasoning''. We found that most of the related research focuses on only one of the steps, which leads to a major drawback---the lack of interaction between the two steps. More specifically, DeepPath~\cite{xiong2017deeppath} and MINERVA~\cite{das2017go} can be interpreted as enhancing the ``Path-Finding'' step while compositional reasoning~\cite{neelakantan2015compositional} and chains of reasoning~\cite{das2016chains} can be interpreted as enhancing the ``Path-Reasoning'' step. DeepPath is trained to find paths more efficiently between two given entities while being agnostic to whether the entity pairs are positive or negative, whereas MINERVA learns to reach target nodes given an entity-query pair while being agnostic to the quality of the searched path\footnote{MINERVA assigns constant rewards to all paths reaching the destination while ignoring their qualities.}. In contrast, chains of reasoning and compositional reasoning only learn to predict the relation given paths while being agnostic to the path-finding procedure. The lack of interaction prevents the model from understanding more diverse inputs and makes the model very sensitive to noise and adversarial samples.
In order to increase the robustness of existing KG reasoning models and handle noisier environments, we propose to combine the two steps into a whole from the perspective of latent variable graphical models. This graphical model views the paths as discrete latent variables and the relation as the observed variable, with a given entity pair as the condition; thus the path-finding module can be viewed as a prior distribution used to infer the underlying links in the KG. In contrast, the path-reasoning module can be viewed as the likelihood distribution, which classifies underlying links into multiple classes. With this assumption, we introduce an approximate posterior and design a variational auto-encoder~\cite{kingma2013auto} algorithm to maximize the evidence lower bound. This variational framework closely incorporates the two modules into a unified framework and trains them jointly. Through active cooperation and interaction, the path finder can take into account the value of the searched paths and resort to more meaningful paths. Meanwhile, the path reasoner receives more diverse paths from the path finder and generalizes better to unseen scenarios.
Our contributions are three-fold:
\begin{itemize}
\item We introduce a variational inference framework for KG reasoning, which tightly integrates the path-finding and path-reasoning processes to perform joint reasoning.
\item We have successfully leveraged negative samples into training and increase the robustness of existing KG reasoning model.
\item We show that our method can scale up to large KG and achieve state-of-the-art results on two popular datasets.
\end{itemize}
The rest of the paper is organized as follows. In Section~\ref{sec:related} we outline related work on KG embedding, multi-hop reasoning, and variational auto-encoders. We describe our variational knowledge reasoner \textsc{Diva} in Section~\ref{sec:model}. Experimental results are presented in Section~\ref{sec:exp}, and we conclude in Section~\ref{sec:conclusion}.
\section{Related Work}
\label{sec:related}
\subsection{Knowledge Graph Embeddings}
Embedding methods to model multi-relation data from KGs have been extensively studied in recent years~\cite{nickel2011three,bordes2013translating,socher2013reasoning,lin2015learning,trouillon2017knowledge}. From a representation learning perspective, all these methods are trying to learn a projection from symbolic space to vector space. For each triple $(e_s, r, e_d)$ in the KG, various score functions can be defined using either vector or matrix operations. Although these embedding approaches have been successful in capturing the semantics of KG symbols (entities and relations) and achieving impressive results on knowledge base completion tasks, most of them fail to model multi-hop relation paths, which are indispensable for more complex reasoning tasks. Besides, since all these models operate solely in latent space, their predictions are barely interpretable.
\subsection{Multi-Hop Reasoning}
The Path-Ranking Algorithm (PRA) method is the first approach to use a random walk with restart mechanism to perform multi-hop reasoning. Later on, some research studies~\cite{gardner2014incorporating,gardner2013improving} have revised the PRA algorithm to compute feature similarity in the vector space. These formula-based algorithms can create a large fan-out area, which potentially undermines the inference accuracy. To mitigate this problem, a Convolutional Neural Network (CNN)-based model~\cite{toutanova2015representing} has been proposed to perform multi-hop reasoning. Recently, DeepPath~\cite{xiong2017deeppath} and MINERVA~\cite{das2017go} view the multi-hop reasoning problem as a Markov Decision Process and leverage REINFORCE~\cite{williams1992simple} to efficiently search for paths in large knowledge graphs. These two methods are reported to achieve state-of-the-art results; however, both models use heuristic rewards to drive the policy search, which could make them sensitive to noise and adversarial examples.
\subsection{Variational Auto-encoder}
Variational Auto-Encoders~\cite{kingma2013auto} are a popular means of performing approximate posterior inference in large-scale scenarios, especially in neural networks. Recently, VAEs have been successfully applied to various complex machine learning tasks like image generation~\cite{mansimov2015generating}, machine translation~\cite{zhang2016variational}, sentence generation~\cite{guu2017generating} and question answering~\cite{zhang2017variational}. The work closest to ours is \citet{zhang2017variational}, which proposes a variational framework to understand the variability of human language about entity referencing. In contrast, our model uses a variational framework to cope with the complex link connections in large KGs. Unlike most previous research on VAEs, both \citet{zhang2017variational} and our model use discrete variables as the latent representation to infer the semantics of a given entity pair. More specifically, we view the generation of the relation as a stochastic process controlled by a latent representation, i.e. the connected multi-hop links existing in the KG. Though the potential link paths are discrete and countable, their number is still very large and poses challenges to direct optimization. Therefore, we resort to the variational auto-encoder as our approximation strategy.
\section{Our Approach}
\label{sec:model}
\subsection{Background}
Here we formally define the background of our task. Let $\mathcal{E}$ be the set of entities and $\mathcal{R}$ be the set of relations. Then a KG is defined as a collection of triple facts $(e_s, r, e_d)$, where $e_s, e_d \in \mathcal{E}$ and $r \in \mathcal{R}$. We are particularly interested in the problem of relation inference, which seeks to answer questions of the form $(e_s, ?, e_d)$; this setting is slightly different from standard link prediction, which answers questions of the form $(e_s, r, ?)$. Next, in order to tackle this classification problem, we assume that there is a latent representation for a given entity pair in the KG, i.e. the collection of linking paths; these hidden variables can reveal the underlying semantics between the two entities. Therefore, the link classification problem can be decomposed into two modules -- acquiring underlying paths (Path Finder) and inferring the relation from the latent representation (Path Reasoner).
\paragraph{Path Finder}
The state-of-the-art approach~\cite{xiong2017deeppath,das2017go} is to view this process as a Markov Decision Process (MDP). A tuple $\langle S, A, P\rangle$ is defined to represent the MDP, where $S$ denotes the current state, e.g. the current node in the knowledge graph, $A$ is the set of available actions, e.g. all the outgoing edges from the state, while $P$ is the transition probability describing the state transition mechanism. In the knowledge graph, the transition of the state is deterministic, so we do not need to model the state transition $P$.
\paragraph{Path Reasoner}
The common approach~\cite{lao2011random,neelakantan2015compositional,das2016chains} is to encode the path as a feature vector and use a multi-class discriminator to predict the unknown relation. PRA~\cite{lao2011random} proposes to encode paths as binary features to learn a log-linear classifier, while \cite{das2016chains} applies a recurrent neural network to recursively encode the paths into hidden features and uses vector similarity for classification.
\subsection{Variational KG Reasoner (\textsc{Diva})}
Here we draw a schematic diagram of our model in~\autoref{fig:graphic-model}. Formally, we define the objective function for the general relation classification problem as follows:
\begin{align}
\small
\begin{split}
Obj =& \sum_{(e_s, r, e_d) \in D} \log p(r|(e_s, e_d))\\
=& \sum_{(e_s, r, e_d) \in D} \log \sum_L p_{\theta}(L|(e_s, e_d)) p(r|L)
\end{split}
\end{align}
where $D$ is the dataset, $(e_s, r, e_d)$ is the triple contained in the dataset, and $L$ is the latent connecting paths. The evidence probability $p(r|(e_s, e_d))$ can be written as the marginalization of the product of two terms over the latent space.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{graphic-model}
\caption{The probabilistic graphical model of our proposed approach. Arrows with dotted border represent the approximate posterior, which is modeled as a multinomial distribution over the whole link space. Arrows with solid border represent the prior and likelihood distributions. }
\label{fig:graphic-model}
\end{figure}
However, this evidence probability is intractable since it requires summing over the whole latent link space. Therefore, we propose to maximize its variational lower bound as follows:
\begin{align}
\small
\begin{split}
ELBO =& \expect{L \sim q_{\varphi}(L|r, (e_s, e_d))} [\log p_{\theta}(r|L)] - \\
& D_{KL}(q_{\varphi}(L|r, (e_s, e_d))||p_{\beta}(L|(e_s, e_d)))
\end{split}
\end{align}
Specifically, the ELBO~\cite{kingma2013auto} is composed of three different terms -- likelihood $p_{\theta}\big(r|L)$, prior $p_{\beta}\big(L|(e_s, e_t))$, and posterior $q_{\varphi}\big(L|(e_s, e_d),r)$. In this paper, we use three neural network models to parameterize these terms and then follow~\cite{kingma2013auto} to apply variational auto-encoder to maximize the approximate lower bound. We describe these three models in details below:
\paragraph{Path Reasoner (Likelihood).} Here we propose a path reasoner using Convolutional Neural Networks (CNN)~\cite{lecun1995convolutional} and a feed-forward neural network. This model takes a path sequence $L = \{a_1, e_1, \cdots, a_i, e_i, \cdots a_n, e_n\}$ and outputs a softmax probability over the relation set $R$, where $a_i$ denotes the $i$-th intermediate relation and $e_i$ denotes the $i$-th intermediate entity between the given entity pair. Here we first project them into embedding space and concatenate the $i$-th relation embedding with the $i$-th entity embedding as a combined vector, which we denote as $\{ f_1, f_2, \cdots, f_n \}$ with $f_i \in \mathcal{R}^{2E}$. As shown in~\autoref{fig:conv}, we pad the embedding sequence to a length of $N$. Then we design three convolution layers with window sizes of $(1 \times 2E), (2 \times 2E), (3 \times 2E)$, input channel size $1$ and filter size $D$. After the convolution layer, we use $(N \times 1), (N-1 \times 1), (N-2 \times 1)$ max pooling over the convolution feature maps. We then concatenate the three resulting vectors as a combined vector $F \in \mathcal{R}^{3D}$. Finally, we use a two-layer MLP with an intermediate hidden size of $M$ to output a softmax distribution over the relation set $R$.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{conv}
\caption{Overview of the CNN Path Reasoner.}
\label{fig:conv}
\end{figure}
\begin{gather}
F = f(f_1, f_2, \cdots, f_N)\\
p(r|L; \theta) = softmax(W_r F + b_r)
\end{gather}
where $f$ denotes the convolution and max-pooling function applied to extract reasoning path feature $F$, and $W_r, b_r$ denote the weights and bias for the output feed-forward neural network.
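A minimal PyTorch-style sketch of this path reasoner (our own illustrative re-implementation, not released code; $N$ and the number of relation classes are placeholder values, and batching/padding details are simplified):
\begin{verbatim}
import torch
import torch.nn as nn

E, D, M, N, R = 200, 128, 400, 8, 21   # R includes the "n/a" class

class PathReasoner(nn.Module):
    def __init__(self):
        super().__init__()
        # Three convolutions with window sizes (1,2E), (2,2E), (3,2E).
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, D, (w, 2 * E)) for w in (1, 2, 3)])
        self.mlp = nn.Sequential(
            nn.Linear(3 * D, M), nn.ReLU(), nn.Linear(M, R))

    def forward(self, f):
        # f: (B, N, 2E) padded sequence of [relation; entity] embeddings
        x = f.unsqueeze(1)                         # (B, 1, N, 2E)
        feats = [conv(x).squeeze(3).max(dim=2).values  # pool over time
                 for conv in self.convs]
        F = torch.cat(feats, dim=-1)               # (B, 3D)
        return torch.softmax(self.mlp(F), dim=-1)  # p(r|L)
\end{verbatim}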
\paragraph{Path Finder (Prior).} Here we formulate the path finder $p(L|(e_s, e_d))$ as an MDP problem, and recursively predict actions (an outgoing relation-entity edge $(a, e)$) at every time step based on the previous history $h_{t-1}$ as follows:
\begin{gather}
c_t = ReLU(W_{h} [h_t;e_d] + b_{h})\\
p((a_{t+1}, e_{t+1}) | h_t, \beta) = softmax(A_t c_t)
\end{gather}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{path-finder}
\caption{An overview of the path finder model. Note that $r_q$ (query relation) exists in the approximate posterior while disappearing in the path finder model and $e_t$ represents the target entity embedding, $c_{\tau}$ is the output of MLP layer at time step $\tau$, $a', e'$ denotes the connected edges and ends in the knowledge graphs.}
\label{fig:rnn}
\end{figure}where $h_t \in \mathcal{R}^H$ denotes the history embedding, $e_d \in \mathcal{R}^E$ denotes the entity embedding, $A_t \in \mathcal{R}^{|A| \times 2E}$ is the outgoing-edge matrix, which stacks the concatenated embeddings of all outgoing edges, with $|A|$ denoting the number of outgoing edges; we use $W_h$ and $b_h$ to represent the weight and bias of the feed-forward neural network outputting the feature vector $c_t \in \mathcal{R}^{2E}$. The history embedding $h_t$ is obtained using an LSTM network~\cite{hochreiter1997long} to encode all the previous decisions as follows:
\begin{align}
h_t = LSTM(h_{t-1}, (a_t, e_t))
\end{align}
As shown in~\autoref{fig:rnn}, the LSTM-based path finder interacts with the KG at every time step and decides which outgoing edge $(a_{t+1}, e_{t+1})$ to follow; the search procedure terminates either when the target node is reached or when the maximum number of steps is reached.
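A minimal PyTorch-style sketch of one decision step of the path finder (again our own illustrative code; tensor shapes follow the notation above):
\begin{verbatim}
import torch
import torch.nn as nn

E, H = 200, 200   # embedding and history sizes used in our experiments

class PathFinderStep(nn.Module):
    def __init__(self):
        super().__init__()
        self.cell = nn.LSTMCell(2 * E, H)   # encodes (a_t, e_t) pairs
        self.mlp = nn.Linear(H + E, 2 * E)  # W_h [h_t; e_d] + b_h

    def forward(self, edge_emb, state, e_d, A_t):
        # edge_emb: (B, 2E) embedding of the last action (a_t, e_t)
        # A_t:      (B, |A|, 2E) stacked outgoing-edge embeddings
        h, c = self.cell(edge_emb, state)
        c_t = torch.relu(self.mlp(torch.cat([h, e_d], dim=-1)))
        logits = torch.einsum('bae,be->ba', A_t, c_t)
        return torch.softmax(logits, dim=-1), (h, c)
\end{verbatim}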
\paragraph{Approximate Posterior.} We formulate the posterior distribution $q(L|(e_s, e_d), r)$ following a similar architecture to the prior. The main difference lies in the fact that the posterior approximator is aware of the relation $r$, and can therefore make more relevant decisions. The posterior borrows the history vector $h_t$ from the finder, while its feed-forward neural network is distinctive in that it also takes the relation embedding into account. Formally, we write its outgoing distribution as follows:
\begin{align}
\begin{split}
u_t = ReLU(W_{hp} [h_t;e_d;r] + b_{hp})\\
q((a_{t+1}, e_{t+1}) | h_t; \varphi) = softmax(A_t u_t)
\end{split}
\end{align}
where $W_{hp}$ and $b_{hp}$ denote the weight and bias for the feed-forward neural network.
\subsection{Optimization}
In order to maximize the ELBO with respect to the neural network models described above, we follow VAE~\cite{kingma2013auto} to interpret the negative ELBO as two separate losses and minimize them jointly using gradient descent:
\paragraph{Reconstruction Loss.} Here we name the first term of negative ELBO as reconstruction loss:
\begin{align}
J_R = \expect{L \sim q_{\varphi}(L|r, (e_s, e_d))} [-\log p_{\theta}(r|L)]
\end{align}
This loss function encourages reconstructing the relation $r$ from the latent variable $L$ sampled from the approximate posterior; optimizing it jointly not only helps the approximate posterior to obtain paths unique to a particular relation $r$, but also teaches the path reasoner to reason over multiple hops and predict the correct relation.
\paragraph{KL-divergence Loss.} We name the second term as KL-divergence loss:
\begin{align}
\small
J_{KL} = D_{KL}(q_{\varphi}(L|r, e_s, e_d)|p_{\beta}(L|e_s, e_d))
\end{align}
This loss function pushes the prior distribution towards the posterior distribution. The intuition behind it lies in the fact that an entity pair already implies its relation; therefore, we can teach the path finder to approach the approximate posterior as closely as possible. At test time, when we have no knowledge about the relation, we use the path finder in place of the posterior approximator to search for high-quality paths.
\paragraph{Derivatives.} We show the derivatives of the loss function with respect to the three different models. For the approximate posterior, we re-weight the KL-divergence loss and design a joint loss function as follows:
\begin{align}
J = J_R + \lambda_{KL} J_{KL}
\end{align}
where $\lambda_{KL}$ is the re-weight factor to combine these two losses functions together. Formally, we write the derivative of posterior as follows:
\begin{align}
\small
\begin{split}
\frac{\partial J}{\partial \varphi} =\expect{L \sim q_{\varphi}(L))}[& -f_{re}(L) \frac{\partial \log{q_{\varphi}(L| (e_s, e_d), r)}}{\partial \varphi}]
\end{split}
\label{eq:path-finder}
\end{align}
where $f_{re}(L) = \log{p_{\theta}} + \lambda_{KL} \log\frac{p_{\beta}}{q_{\varphi}}$ denotes the reward based on the probability assigned by the path reasoner. In practice, we found that a large KL-regularizer $\log \frac{p_{\beta}}{q_{\varphi}}$ causes severe instability during training, therefore we keep a low $\lambda_{KL}$ value in our experiments~\footnote{we set $\lambda_{KL}=0$ throughout our experiments.}.
For the path reasoner, we also optimize its parameters $\theta$ with regard to the reconstruction as follows:
\begin{align}
\frac{\partial J_R}{\partial \theta} = \expect{L \sim q_{\varphi}(L)} -\frac{\partial \log{p_{\theta}(r|L)}}{\partial \theta}
\label{eq:path-reasoner}
\end{align}
For the path finder, we optimize its parameters $\beta$ with regard to the KL-divergence to teach it to infuse the relation information into the found links.
\begin{align}
\frac{\partial J_{KL}}{\partial \beta} = \expect{L \sim q_{\varphi}(L)} -\frac{\partial \log{p_{\beta}(L|(e_s, e_d))}}{\partial \beta}
\label{eq:prior}
\end{align}
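A condensed, self-contained PyTorch-style sketch of these three updates (our own illustrative code; random tensors stand in for the model outputs on $K$ sampled paths, we take $\lambda_{KL}=0$ as in our experiments, and in the actual training of Algorithm~\ref{alg:diva} the updates alternate rather than being applied jointly):
\begin{verbatim}
import torch

K = 20                                             # rollouts per sample
log_q = torch.randn(K, requires_grad=True)         # log q_phi(L|e_s,e_d,r)
log_p_r = torch.randn(K, requires_grad=True)       # log p_theta(r|L)
log_p_prior = torch.randn(K, requires_grad=True)   # log p_beta(L|e_s,e_d)

reward = log_p_r.detach()                  # f_re(L) with lambda_KL = 0
loss_posterior = -(reward * log_q).mean()  # REINFORCE update (posterior)
loss_reasoner = -log_p_r.mean()            # reconstruction loss (reasoner)
loss_finder = -log_p_prior.mean()          # KL-driven update (path finder)
(loss_posterior + loss_reasoner + loss_finder).backward()
\end{verbatim}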
\begin{algorithm}[t]
\caption{The \textsc{Diva} Algorithm.}\label{alg:diva}
\begin{algorithmic}[1]
\Procedure{Training \& Testing}{}
\State\hskip-\ALG@thistlm \emph{Train}:
\For{episode $\leftarrow$ 1 to N}
\State Rollout K paths from posterior $p_{\varphi}$
\If{Train-Posterior}
\State $\varphi \leftarrow \varphi - \eta \times \frac{\partial L_r}{\partial \varphi}$
\ElsIf{Train-Likelihood}
\State $\theta \leftarrow \theta - \eta \times \frac{\partial L_r}{\partial \theta}$
\ElsIf{Train-Prior}
\State $\beta \leftarrow \beta - \eta \times \frac{\partial L_{KL}}{\partial \beta}$
\EndIf
\EndFor
\State\hskip-\ALG@thistlm \emph{Test MAP}:
\State Restore initial parameters $\theta, \beta$
\State Given sample $(e_s, r_q, (e_1, e_2, \cdots, e_n))$
\State $L_i \leftarrow BeamSearch(p_{\beta}(L|e_s, e_i))$
\State $S_i \leftarrow \frac{1}{|L_i|}\sum_{l \in L_i} p_{\theta}(r_q|l)$
\State Sort $S_i$ and find positive rank $ra^+$
\State $MAP \leftarrow \frac{1}{1 + ra^+}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\paragraph{Train \& Test} During training, in contrast to preceding methods like~\newcite{das2017go,xiong2017deeppath}, we also exploit negative samples by introducing a pseudo ``n/a'' relation, which indicates ``no-relation'' between two entities. Therefore, we decompose the data sample $(e_q, r_q, [e_1^-, e_2^-, \cdots, e_n^+])$ into a series of tuples $(e_q, r_q', e_i)$, where $r_q'=r_q$ for positive samples and $r_q'=n/a$ for negative samples. During training, we alternately update the three sub-modules with SGD. During testing, we apply the path finder to beam-search the top paths for all tuples and rank them based on the scores assigned by the path reasoner. More specifically, we present the pseudocode in~\autoref{alg:diva}.
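As a concrete illustration of the test-time ranking, the MAP contribution of a single query can be sketched as follows (made-up scores; by convention the last candidate is the positive one):
\begin{verbatim}
import numpy as np

# Scores S_i assigned by the path reasoner to candidate tail entities.
scores = np.array([0.10, 0.52, 0.05, 0.41])
rank_plus = np.sum(scores[:-1] > scores[-1])  # negatives ranked higher
print(1.0 / (1 + rank_plus))                  # here: 1 / (1 + 1) = 0.5
\end{verbatim}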
\subsection{Discussion}
We here interpret the update of the posterior approximator in~\autoref{eq:path-finder} as a special case of REINFORCE~\cite{williams1992simple}, where we use Monte-Carlo sampling to estimate the expected return $\log p_{\theta}(r|L)$ for the current posterior policy. This formula is very similar to DeepPath and MINERVA~\cite{xiong2017deeppath,das2017go} in the sense that the path-finding process is described as an exploration process to maximize the policy's long-term reward. Unlike these two models, which assign heuristic rewards to the policy, our model assigns the model-based reward $\log{p_{\theta}(r|L)}$, which is more sophisticated and considers more implicit factors to distinguish between good and bad paths.
Besides, our update formula for the path reasoner~(\autoref{eq:path-reasoner}) is similar to chain-of-reasoning~\cite{das2016chains}: both models aim at maximizing the likelihood of the relation given the multi-hop chain. However, our model is distinctive in that the obtained paths are sampled from a dynamic policy; by exposing more diverse paths to the path reasoner, it can generalize to more conditions. Through the active interactions and collaborations of the two models, \textsc{Diva} is able to comprehend more complex inference scenarios and handle noisier environments.
\section{Experiments}
\label{sec:exp}
To evaluate the performance of \textsc{Diva}, we explore the standard link prediction task on two different-sized KG datasets and compare with state-of-the-art algorithms. Link prediction ranks a list of target entities $(e_1^-, e_2^-, \cdots, e_n^+)$ given a query entity $e_q$ and a query relation $r_q$. The dataset is arranged in the format of $(e_q, r_q, [e_1^-, e_2^-, \cdots, e_n^+])$, and the evaluation score (Mean Average Precision, MAP) is based on the ranked position of the positive sample.
\subsection{Dataset and Setting}
We perform experiments on two datasets, whose statistics are described in~\autoref{tab:stat}. FB15k-237~\cite{toutanova2015representing} is sampled from FB15k~\cite{bordes2013translating}; we follow DeepPath~\cite{xiong2017deeppath} and select 20 relations including Sports, Locations, Film, etc. Our NELL dataset is downloaded from the released dataset\footnote{https://github.com/xwhan/DeepPath}, which contains 12 relations for evaluation. Both datasets contain negative samples obtained by using the PRA code released by~\newcite{lao2011random}. For each query $r_q$, we remove all the triples with $r_q$ and $r_q^{-1}$ during reasoning. During training, we set the number of rollouts to 20 for each training sample and update the posterior distribution using the Monte-Carlo REINFORCE~\cite{williams1992simple} algorithm. During testing, we use a beam of 5 to approximate the whole search space for the path finder. We follow MINERVA~\cite{das2017go} in setting the maximum reasoning length to 3, which lowers the burden on the path-reasoner model. For both datasets, we set the embedding size $E$ to 200, the history embedding size $H$ to 200, and the convolution kernel feature size $D$ to 128; the hidden size of the MLP for both the path finder and the path reasoner is set to 400.
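For reference, these settings can be collected into a single configuration; the field names below are our own shorthand, not identifiers from the released code.
\begin{verbatim}
config = dict(
    num_rollouts=20,   # posterior samples per training sample
    beam_size=5,       # test-time beam for the path finder
    max_hops=3,        # maximum reasoning length
    emb_size=200,      # embedding size E
    hist_size=200,     # history embedding size H
    kernel_feats=128,  # convolution kernel feature size D
    mlp_hidden=400,    # MLP hidden size (finder and reasoner)
    lambda_kl=0.0,     # KL re-weighting factor
)
\end{verbatim}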
\begin{table}[t]
\centering
\small
\begin{tabular}{|l|c|c|c|c|}
\hline
Dataset & \#Ent & \#R & \#Triples & \#Tasks\\
\hline
FB15k-237 & 14,505 & 237 & 310,116 & 20\\
\hline
NELL-995 & 75,492 & 200 & 154,213 & 12\\
\hline
\end{tabular}
\caption{Dataset statistics.}
\label{tab:stat}
\end{table}
\subsection{Quantitative Results}
We mainly compare with the embedding-based algorithms~\cite{bordes2013translating,lin2015learning,ji2015knowledge,wang2014knowledge}, PRA~\cite{lao2011random}, MINERVA~\cite{das2017go}, DeepPath~\cite{xiong2017deeppath} and Chain-of-Reasoning~\cite{das2016chains}; in addition, we evaluate the standalone CNN path-reasoner from \textsc{Diva}. We also try to directly maximize the marginal likelihood $p(r|e_s, e_d) = \sum_L p(L|e_s, e_d) p(r|L)$ using only the prior and likelihood models, following MML~\cite{guu2017language}, which enables us to assess the benefit of introducing an approximate posterior. We first report our results for NELL-995 in~\autoref{tab:nell-result}, which is known to be a relatively easy dataset on which many existing algorithms already achieve high accuracy. We then test our methods on FB15k-237~\cite{toutanova2015representing} and report our results in~\autoref{tab:fb-result}; this dataset is much harder than NELL and arguably closer to real-world scenarios.
\begin{table}[t]
\centering
\small
\begin{tabular}{|l|c|c|}
\hline
Model & 12-rel MAP & 9-rel MAP\\
\hline
PRA~\cite{lao2011random} & 67.5 & - \\
\hline
TransE~\cite{bordes2013translating} & 75.0 &- \\%74.93
\hline
TransR~\cite{lin2015learning} & 74.0 & -\\
\hline
TransD~\cite{ji2015knowledge} & 77.3 & -\\
\hline
TransH~\cite{wang2014knowledge} & 75.1 & -\\
\hline
MINERVA~\cite{das2017go} & - & \textbf{88.2}\\%88.16
\hline
DeepPath~\cite{xiong2017deeppath} & 79.6 & 80.2\\%80.16
\hline
RNN-Chain~\cite{das2016chains} & 79.0 & 80.2 \\
\hline
\hline
CNN Path-Reasoner & 82.0 & 82.2 \\
\hline
\textsc{Diva} & \textbf{88.6} & 87.9 \\
\hline
\end{tabular}
\caption{MAP results on the NELL dataset. Since MINERVA~\cite{das2017go} only takes 9 relations out of the original 12, we report the known results for both versions of the NELL-995 dataset.}
\label{tab:nell-result}
\end{table}
\begin{table}[t]
\centering
\small
\begin{tabular}{|l|c|}
\hline
Model & 20-rel MAP\\
\hline
PRA~\cite{lao2011random} & 54.1\\
\hline
TransE~\cite{bordes2013translating} & 53.2 \\
\hline
TransR~\cite{lin2015learning} & 54.0 \\
\hline
MINERVA~\cite{das2017go} & 55.2 \\
\hline
DeepPath~\cite{xiong2017deeppath} & 57.2 \\
\hline
RNN-Chain~\cite{das2016chains} & 51.2 \\
\hline
\hline
CNN Path-Reasoner & 54.2 \\
\hline
MML~\cite{guu2017language} & 58.7 \\
\hline
\textsc{Diva} & \textbf{59.8} \\
\hline
\end{tabular}
\caption{Results on the FB15k-237 dataset. Note that MINERVA's result is based on our own implementation.}
\label{tab:fb-result}
\end{table}
We also evaluate our model on the FB15k-237 20-relation subset with the HITS@N score. Since our model only deals with the relation classification problem $(e_s, ?, e_d)$ with $e_d$ as input, it is hard to directly compare with MINERVA~\cite{das2017go}. We therefore compare with the chain-RNN~\cite{das2016chains} and CNN Path-Reasoner models; the results are shown in~\autoref{tab:fb-result-hit}. Note that the HITS@N score is computed against relations rather than entities.
\begin{table}[!htb]
\small
\centering
\begin{tabular}{|l|c|c|}
\hline
Model & HITS@3 & HITS@5\\
\hline
RNN-Chain~\cite{das2016chains} & 0.80 & 0.82\\
\hline
CNN Path-Reasoner & 0.82 & 0.83 \\
\hline
\textsc{Diva} & \textbf{0.84} & \textbf{0.86}\\
\hline
\end{tabular}
\caption{HITS@N results on the FB15k-237 dataset.}
\label{tab:fb-result-hit}
\end{table}
\paragraph{Result Analysis}
We can observe from~\autoref{tab:fb-result} and~\autoref{tab:nell-result} that our algorithm significantly outperforms most of the existing algorithms, achieves results very similar to MINERVA~\cite{das2017go} on the NELL dataset, and achieves state-of-the-art results on FB15k-237. We conclude that our method is able to deal with more complex reasoning scenarios and is more robust to adversarial examples. We also observe that our CNN Path-Reasoner outperforms the RNN-Chain~\cite{das2016chains} on both datasets; we speculate that this is due to the short lengths of the reasoning chains, from which the CNN can extract more useful information.
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{beam-size-graph}
\caption{MAP results for varying beam sizes and the occurrence of each error type w.r.t.\ beam size. A beam size that is too large or too small causes performance to drop.}
\label{fig:beam-vis}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{error-analysis}
\caption{Error analysis for the NELL and FB15k-237 link prediction tasks. Since the FB15k-237 dataset uses placeholders for entities, we are not able to analyze whether an error comes from KG noise.}
\label{fig:error-analysis}
\end{figure}
From the two pie charts in~\autoref{fig:error-analysis}, we observe that in NELL-995 very few errors come from the path reasoner, since the paths are very short; a large proportion contain only a single hop. In contrast, most of the failures on the FB15k-237 dataset come from the path reasoner, which fails to classify the multi-hop chain into the correct relation. This analysis demonstrates that FB15k-237 is a much harder dataset and may be closer to real-life scenarios.
\begin{table*}[t]
\centering
\small
\begin{tabular}{lll}\toprule
\multicolumn{1}{l}{Type} & \multicolumn{1}{l}{Reasoning Path} & \multicolumn{1}{l}{Score} \\ \midrule
Negative & athleteDirkNowitzki $\rightarrow$ \textit{(athleteLedSportsteam)} $\rightarrow$ sportsteamMavericks & 0.98 \\
Positive & athleteDirkNowitzki $\rightarrow$ \textit{(athleteLedSportsteam)} $\rightarrow$ sportsteamDallasMavericks & 0.96 \\
\textit{Explanation} & \textit{``maverick'' is equivalent to ``dallas-maverick'', but is treated as a negative sample} & - \\ \midrule
Negative & athleteRichHill $\rightarrow$ \textit{(personBelongsToOrganization)} $\rightarrow$ sportsteamChicagoCubs & 0.88 \\
Positive & athleteRichHill $\rightarrow$ \textit{(personBelongsToOrganization)} $\rightarrow$ sportsteamBlackhawks & 0.74 \\
\textit{Explanation} & \textit{Rich Hill plays in both sports teams, but the knowledge graph only includes one} & - \\
\midrule
\multirow{2}*{Negative} & coachNikolaiZherdev $\rightarrow$ \textit{(athleteHomeStadium)} $\rightarrow$ stadiumOreventvenueGiantsStadium & \\
& $\rightarrow$ \textit{(teamHomestadium$^{-1}$)} $\rightarrow$ sportsteamNewyorkGiants & 0.98 \\
\multirow{2}*{Positive} & coachNikolaiZherdev $\rightarrow$ \textit{(athleteHomeStadium)} $\rightarrow$ stadiumOreventvenueGiantsStadium & \\
& $\rightarrow$ \textit{(teamHomestadium$^{-1}$)} $\rightarrow$ sportsteam-rangers & 0.72 \\
\textit{Explanation} &\textit{The home stadium accommodates multiple teams; therefore, the logic chain is not valid} & - \\
\bottomrule
\end{tabular}
\caption{The three samples illustrate three frequent error types: the first belongs to ``duplicate entity'', the second to ``missing entity'', and the last is due to ``wrong reasoning''. Note that parenthesized terms denote relations, while non-parenthesized terms denote entities.}
\label{tab:failure}
\end{table*}
\subsection{Beam Size Trade-offs}
Here we are especially interested in studying the impact of different beam sizes on the link prediction tasks. With a larger beam size, the path finder can obtain more linking paths; meanwhile, more noise is introduced, which poses greater challenges for the path reasoner to infer the relation. With a smaller beam size, the path finder struggles to find connecting paths between positive entity pairs, while eliminating many noisy links. We therefore summarize three different error types and investigate how their frequencies change with the beam size:
\begin{enumerate}
\item No paths are found for positive samples, while paths are found for negative samples; we denote this as Neg$>$Pos=0.
\item Paths are found for both positive and negative samples, but the reasoner assigns higher scores to the negative samples; we denote this as Neg$>$Pos$>$0.
\item No paths are found for either positive or negative samples in the knowledge graph; we denote this as Neg=Pos=0.
\end{enumerate}
We draw the curves for MAP and the error ratios in~\autoref{fig:beam-vis}, from which the trade-offs are easily observed. We found that a beam size of 5 optimally balances the burden between the path finder and the path reasoner, so we keep this beam size for all the experiments.
\subsection{Error Analysis}
In order to investigate the bottleneck of \textsc{Diva}, we take a subset of the validation dataset and summarize the causes of the different kinds of errors. Roughly, we classify errors into three categories: 1) KG noise: errors caused by the KG itself, e.g., some important relations are missing, some entities are duplicated, or some nodes do not have valid outgoing edges. 2) Path-Finder error: errors caused by the path finder, which fails to arrive at the destination. 3) Path-Reasoner error: errors caused by the path reasoner, which assigns a higher score to negative paths. We draw two pie charts to demonstrate the sources of reasoning errors in the two reasoning tasks.
\subsection{Failure Examples}
We also show some failure samples in~\autoref{tab:failure} to help understand where the errors come from. We can conclude that the ``duplicate entity'' and ``missing entity'' problems are mainly caused by the knowledge graph or the dataset, and the link prediction model has limited capability to resolve them. In contrast, the ``wrong reasoning'' problem is mainly caused by the reasoning model itself and can be improved with better algorithms.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we propose a novel variational inference framework for knowledge graph reasoning. In contrast to prior studies that use a random walk with restarts~\cite{lao2011random} or explicit reinforcement learning for path finding~\cite{xiong2017deeppath}, we situate our study in the context of variational inference in latent variable probabilistic graphical models. Our framework seamlessly integrates the path-finding and path-reasoning processes into a unified probabilistic framework, leveraging the strength of neural-network-based representation learning methods. Empirically, we show that our method achieves state-of-the-art performance on two popular datasets.
\section{Acknowledgement}
The authors would like to thank the anonymous reviewers for their thoughtful comments. This research
was sponsored in part by the Army Research Laboratory under cooperative agreements W911NF09-2-0053 and NSF IIS 1528175. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein.
|
1,116,691,500,982 | arxiv | \section{\label{}}
\section{Introduction}
In July 2012, the ATLAS and CMS collaborations reported the discovery of a new particle
with a mass of $125$ GeV\cite{LHC}.
Data revealed that the phenomenological profile of the new particle resembles that of the Higgs
boson of the Standard Model (SM).
The SM has been established as an effective theory at low energy scales below $\mathcal{O}(100)\ \mathrm{GeV}$.
However, there are some unsolved problems.
In the SM, neutrinos are massless, although neutrino oscillation phenomena, which require nonzero neutrino masses, have been observed\cite{SK}.
The SM does not contain any candidate which can explain the dark matter relic abundance\cite{WMAP,Ade:2013zuv}.
From the theoretical viewpoint, the SM suffers from the problem that quantum corrections to the squared Higgs boson mass diverge quadratically.
A weakly interacting massive particle (WIMP) is a good dark matter candidate.
The relic density and the annihilation cross section of the WIMP are related to each other if the WIMP was in thermal equilibrium in the early universe.
In order to reproduce the observed relic density, an annihilation cross section of $\order{\mathrm{pb}}$ is required.
This cross section implies a new physics energy scale of $\order{1\ \mbox{TeV}}$ by dimensional analysis.
These parameter regions are explored by dark matter direct and indirect detection experiments and by the Large Hadron Collider (LHC).
In this paper, we focus on a model extended by introducing right-handed neutrinos and supersymmetry (SUSY).
The right-handed neutrinos generate Dirac neutrino masses.
The extended model with SUSY particles can avoid the hierarchy problem because of the cancellations between the quantum corrections to the squared Higgs boson mass from the SM particles and those from their superpartners.
The superpartners of the right-handed neutrinos, the right-handed sneutrinos, can mix with the left-handed ones.
If the lighter mixed sneutrino is the lightest SUSY particle (LSP), it acts as a WIMP candidate\cite{ArkaniHamed:2000bq,Belanger:2010cd,Dumont:2012ee}.
Earlier works analyzed parameter regions assuming gaugino mass universality and found allowed regions in which the mixed sneutrino dark matter is heavier than half the Higgs boson mass.
The lighter mass regions, however, are excluded by the limits on the relic density and on the branching ratio of the invisible Higgs boson decay.
We explore GeV-mass mixed sneutrino dark matter scenarios without gaugino mass universality.
In this mass region,
dark matter direct detection experiments are insensitive because of their energy thresholds, and
a large sneutrino trilinear coupling triggers a vacuum deeper than the SM-like one.
We calculate the decay rate of the false vacuum, which was neglected in earlier works, and impose the resulting vacuum stability bound on the parameter space.
We show that there is a region consistent with all phenomenological constraints, and that the allowed region can be probed by searches for the invisible Higgs boson decay at future colliders.
\section{Model}
The mixed sneutrino model contains right-handed neutrinos $\nu_{Ri}$ and sneutrinos $\widetilde{\nu}_{Ri}$ in addition to the usual particles of the Minimal Supersymmetric Standard Model (MSSM).
Here, the index $i=1,\ 2,\ 3$ denotes the generation.
Neutrino Yukawa interaction, sneutrino soft mass, and sneutrino trilinear coupling terms are introduced to the MSSM Lagrangian.
The soft masses and the trilinear couplings among the right-handed sneutrino, the left-handed slepton doublet $\widetilde{\ell}_i$, and the Higgs doublet with hypercharge $Y=+1/2$, $h_u$ are written as
\begin{eqnarray}
\Delta {\cal L}_{\rm soft} = m^2_{\widetilde N_i} |\widetilde \nu_{Ri} |^2 +
A_{\tilde\nu_i} \widetilde \ell_i \widetilde \nu_{Ri}^* h_u + {\rm h.c.} \,,
\end{eqnarray}
where $m_{\widetilde N_i}$ denote the soft mass parameters of the right-handed sneutrinos,
and $A_{\tilde\nu_i}$ are the sneutrino trilinear coupling constants.
After the Higgs bosons develop vacuum expectation values,
the couplings contribute to non-diagonal components of the sneutrino mass matrix.
The mass matrix for one generation is given by
\begin{eqnarray}
{\cal M}^2_{\tilde\nu} =
\left(
\begin{array}{cc}
{m}^2_{\widetilde{L}} +\frac{1}{2} m^2_Z \cos 2\beta & \frac{1}{\sqrt{2}} A_{\tilde\nu}\, v \sin\beta\\
\frac{1}{\sqrt{2}} A_{\tilde\nu}\, v \sin\beta& {m}^2_{\widetilde{N}}
\end{array}\right) \, ,
\label{eq:sneutrino_tree}
\end{eqnarray}
where $m_{\widetilde{L}}$ denotes the soft mass parameter of the left-handed sleptons.
The sum of the squared vacuum expectation values (the ratio of the vacuum expectation values) is written as $v^2 = v_1^2 + v_2^2$ ($\tan\beta = v_2/v_1$), where
$v_1\ (v_2)$ is the vacuum expectation value of the Higgs doublet with $Y=-1/2\ (Y=+1/2)$.
Therefore, the left- and right-handed sneutrinos mix, and one obtains the mass eigenstates,
\begin{eqnarray}
\tilde\nu_1 = \cos\theta_{\tilde\nu} \, \tilde\nu_R - \sin\theta_{\tilde\nu}\, \tilde\nu_L \,, \quad
\tilde\nu_2 = \sin\theta_{\tilde\nu} \, \tilde\nu_R + \cos\theta_{\tilde\nu}\, \tilde\nu_L .
\end{eqnarray}
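As an illustration, the mass eigenvalues and the mixing angle follow from numerically diagonalizing Eq.~(\ref{eq:sneutrino_tree}); the parameter values in the sketch below are hypothetical and are not a benchmark point of our scan.
\begin{verbatim}
import numpy as np

# Hypothetical inputs in GeV (illustration only)
mL, mN, A_snu = 200.0, 25.0, 23.0
v, tan_beta, mZ = 246.0, 10.0, 91.2
beta = np.arctan(tan_beta)

off = A_snu * v * np.sin(beta) / np.sqrt(2.0)
M2 = np.array([[mL**2 + 0.5*mZ**2*np.cos(2*beta), off],
               [off, mN**2]])

vals, vecs = np.linalg.eigh(M2)  # ascending eigenvalues
m1, m2 = np.sqrt(vals)           # physical sneutrino masses
# In the (nu_L, nu_R) basis, the nu_L component of the
# lighter eigenstate gives sin(theta).
sin_theta = abs(vecs[0, 0])
print(m1, m2, sin_theta)
\end{verbatim}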
In this paper, we consider third-generation mixed sneutrino WIMP scenarios, assuming that the first- and second-generation sneutrinos are heavy enough to evade all experimental limits.
\section{Constraints}
We discuss phenomenological constraints on the GeV-mass sneutrino WIMP scenarios.
We use the experimental results listed in TABLE \ref{tab:expresulte} in order to analyze the parameter space.
\begin{table}
\begin{center}
\caption{Observables and experimental constraints.}
\begin{tabular}{|c||c|c|}
\hline
Observable & Experimental result\\ \hline \hline
$\Omega h^2$ & $0.1196 \pm 0.0062\ (95\%\ \mathrm{CL})$ \cite{Ade:2013zuv}\\ \hline
$\sigma_{\rm N}^{\rm SI}$ & $(m_{\rm DM},\ \sigma_{\rm N}^{\rm SI})$ constraints \\
& from LUX \cite{Akerib:2013tjd} and SuperCDMS \cite{Agnese:2014aze} \\ \hline
$\sigma_{\rm ann} v$ & $(m_{\rm DM},\ \sigma_{\rm ann}v)$ constraint \\
& from FermiLAT \cite{Ackermann:2013yva} \\ \hline
$\Delta \Gamma (Z \rightarrow \mathrm{inv.} )$ & $< 2.0\ \mbox{MeV} \ (95\%\ \mathrm{CL})$
\cite{ALEPH:2005ab} \\ \hline
$\mathrm{Br}(h \rightarrow \mathrm{inv.} )$ & $< 0.29 \ (95\%\ \mathrm{CL})$ \cite{ATLAS-CONF-2015-004} \\ \hline
$m_{\tilde{\tau}_R}$ & $> 90.6\ \mathrm{GeV} \ (95\%\ \mathrm{CL})$ \cite{Aad:2014yka} \\ \hline
$ m_{\widetilde{\chi}^{\pm}_1}$ & $> 420\ \mathrm{GeV} \ (95\%\ \mathrm{CL})$ \cite{Aad:2014yka} \\ \hline
$ m_{\tilde{g}}$ & $> 1.4 \ \mathrm{TeV} \ (95\%\ \mathrm{CL})$
\cite{Aad:2014lra, Chatrchyan:2014lfa} \\ \hline
\end{tabular}
\label{tab:expresulte}
\end{center}
\end{table}
The GeV-mass sneutrino dark matter tends to annihilate via neutralinos.
Thus, the annihilation cross section depends on the neutralino masses as well as on the sneutrino mixing angle.
For $M_{\widetilde{B}}\ll M_{\widetilde{W}}$,
the relic abundance is given approximately by
\begin{equation}
\Omega h^2 \sim 0.1 \times
\left ( \frac{\sin \theta_{\tilde{\nu}}}{0.1} \right)^{-4}
\left( \frac{m_{\tilde{\chi}^0_1}}{1\ \mathrm{GeV}} \right)^2\, .
\label{eq:rough omega}
\end{equation}
If the lightest neutralino mass is around $1$ GeV and the mixing angle is around $0.1$,
one obtains the correct relic abundance.
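Evaluating the scaling of Eq.~(\ref{eq:rough omega}) numerically is immediate; a small sketch:
\begin{verbatim}
def omega_h2(sin_theta, m_chi):
    """Rough relic-abundance scaling; m_chi in GeV."""
    return 0.1 * (sin_theta / 0.1) ** (-4) * m_chi ** 2

print(omega_h2(0.1, 1.0))   # ~0.1, the observed abundance
\end{verbatim}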
The couplings of dark matter candidates are constrained by direct and indirect detection experiments~\cite{Akerib:2013tjd,Agnese:2014aze,Ackermann:2013yva}.
However, direct detection experiments are insensitive to the GeV-mass region because of their energy thresholds.
Indirect detection experiments, which observe charged particles and photons from dark matter annihilation, do not impose limits on GeV-mass sneutrino dark matter either.
Let us turn to constraints from collider experiments.
The invisible decays of the $Z$ and Higgs bosons are explored by the LEP2 and LHC experiments, respectively~\cite{ALEPH:2005ab, ATLAS-CONF-2015-004}.
In our model, both decay rates are proportional to $\sin^4\theta_{\widetilde{\nu}}$, so the experimental limits translate into upper bounds on the sneutrino mixing angle.
Production of electroweak superparticles at the LHC gives rise to signals with two or three leptons.
In our model, the lightest chargino (next-to-lightest neutralino) decays into one tau (two taus) plus missing energy.
We therefore impose the constraint from the search for two or three taus~\cite{Aad:2014yka}.
The LEP2 and LHC experiments also search for mono-photon events with missing energy~\cite{Achard:2003tx,Aad:2014tda}.
These constraints are not severe, since the corresponding cross section is sufficiently small in our model~\cite{Belanger:2010cd}.
In the MSSM, a large trilinear soft SUSY-breaking term can trigger a vacuum deeper than the SM-like one~\cite{ccb}.
In our model, the sneutrino trilinear coupling is large.
Tracing the scalar potential along the $D$-flat direction, $|h_u^0| = |\tilde{\nu}_L| = |\tilde{\nu}_R| = a$,
we find that $\theta_{\tilde{\nu}} \gtrsim 2\times 10^{-12}$
implies a lepton-number-breaking global minimum.
We therefore calculate the decay rate of the false vacuum and check that the resulting lifetime of the universe is sufficiently long.
The vacuum meta-stability bound imposes the upper limit $\theta_{\tilde{\nu}}\leq 0.52$ for $m_{\tilde{\nu}_1} = 0.1\ \mbox{GeV}$.
The upper limit is relaxed in the larger mass region, since the trilinear coupling constant is proportional to $m_{\tilde{\nu}_2}^2-m_{\tilde{\nu}_1}^2$, as the following relation shows.
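Indeed, equating the off-diagonal element of Eq.~(\ref{eq:sneutrino_tree}) with that of the rotated mass-eigenvalue matrix gives
\begin{eqnarray}
\frac{1}{\sqrt{2}} A_{\tilde\nu}\, v \sin\beta =
\sin\theta_{\tilde\nu} \cos\theta_{\tilde\nu}
\left( m_{\tilde\nu_2}^2 - m_{\tilde\nu_1}^2 \right) ,
\end{eqnarray}
so that, at a fixed mixing angle, a smaller mass splitting requires a smaller trilinear coupling.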
\section{Analysis}
We perform a scan over the parameter region listed in TABLE~\ref{tab:inputparameter},
and show the phenomenological constraints in the $(m_{\tilde{\nu}_1},\ \sin\theta_{\tilde{\nu}})$ plane in FIG.~\ref{fig:general1g}~\cite{update}.
\begin{table}[]
\begin{center}
\caption{Parameters and reference values/scan bounds.
}
\label{tab:inputparameter}
\begin{tabular}{|c||c|}
\hline
Parameter & Reference value/Scan bound\\ \hline \hline
$ \mu $ & $500\ \mathrm{GeV} $ \\ \hline
$ \tan \beta$ & $10 $ \\ \hline
$m_{\tilde{\nu}_2}$ & $125\ \mathrm{GeV} $ \\ \hline
$m_{\tilde{\tau}_R}$ & $120\ \mathrm{GeV}$ \\ \hline
$M_{\widetilde{W}}$ & $500\ \mathrm{GeV}$ \\ \hline
$m_{\tilde{\nu}_1}$ & $[0.1\ \mathrm{GeV},\ 10\ \mathrm{GeV}] $ \\ \hline
$\sin \theta_{\tilde{\nu}}$& $[0.01,\ 0.3] $ \\ \hline
$M_{\widetilde{B}}$ & $[0.1\ \mathrm{GeV},\ 20\ \mathrm{GeV}]$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[]
\includegraphics{region_v2.eps}
\caption{\footnotesize The results of our parameter scan for light
mixed sneutrino WIMP scenarios in the $(m_{{\tilde \nu}_1},
\sin \theta_{\tilde{\nu}})$ plane. The yellow (light-gray) and pink
(dark-gray) regions are excluded by the constraints of the relic
abundance \cite{Ade:2013zuv} and the Higgs boson invisible decay
\cite{ATLAS-CONF-2015-004}, respectively. We also show the upper
limits of the spin-independent elastic WIMP-nucleon cross section by
the LUX (blue dotted line) \cite{Akerib:2013tjd} and the SuperCDMS
(dark-green line) \cite{Agnese:2014aze}. The black dashed (red
solid) line denotes the Higgs boson invisible decay branching
fraction of $10\%$ ($2\%$). }
\label{fig:general1g}
\end{figure}
The colored regions are ruled out by the constraints of the relic abundance and the Higgs boson invisible decay.
The white region is consistent with the vacuum stability bound as well as the experimental constraints.
The allowed region will be narrowed by searches for the Higgs boson invisible decay at future colliders.
The high-luminosity LHC, with a center-of-mass energy of $\sqrt{s}=14\ \mbox{TeV}$ and an integrated luminosity of $L=3000\ \mathrm{fb}^{-1}$, can set upper limits of $\mathrm{Br}(h \rightarrow \mathrm{inv.}) <8.0\%\ (95\%\ \mathrm{CL})$~\cite{ATL-PHYS-PUB-2013-014} and
$\mathrm{Br}(h \rightarrow \mathrm{inv.}) <6.4\%\ (95\%\ \mathrm{CL})$~\cite{CMS:2013xfa}.
The International Linear Collider (ILC) can constrain the branching ratio down to $0.69\%\ (95\%\ \mathrm{CL})$~\cite{Ishikawa}.
The ILC is capable of excluding mixed sneutrino dark matter scenarios for $0.1\ \mathrm{GeV} \leq m_{\tilde{\nu}_1} < 3\ \mathrm{GeV}$.
\section{Conclusion}
We have analyzed GeV-mass mixed sneutrino dark matter scenarios relaxing the gaugino mass universality and imposing the vacuum stability bound.
If the mass of the lightest neutralino is of the order of GeV, these scenarios are consistent with all the phenomenological constraints.
The allowed region can be probed through searches for the Higgs boson invisible decay, although dark matter direct detection experiments are insensitive to the GeV-mass region.
\begin{acknowledgments}
The work presented here is done in collaboration with Mitsuru Kakizaki, Eun-Kyung Park, and Jae-hyeon Park to whom I express my deep thanks for fruitful collaborations.
\end{acknowledgments}
\bigskip
\section{Introduction}
\label{sec:Intro}
Complex phenomena are everywhere in the physical world. Typically, these emerge from simple interactions among elements in a network, such as atoms making up molecules or organisms in a society. Despite their diversity, it is possible to approach these subjects with a common set of tools, using numerical and statistical techniques to relate microscopic details to emergent macroscopic properties~\cite{Thurner2018}. There has long been a trend of applying these tools to the brain, the archetypal complex system, and much of neuroscience is concerned with relating electrical activity in networks of neurons to psychological and cognitive phenomena~\cite{CognitiveNeurosciences}. In particular, there is a growing body of experimental evidence~\cite{Boly2013} that neural firing patterns can be strongly related to the level of conscious arousal in animals.
In humans, the level of consciousness varies from very low in coma and under deep general anaesthesia, to very high in fully wakeful states of conscious arousal~\cite{Laureys2012}. With current technology, precise discrimination between unconscious vegetative states and minimally conscious states is particularly difficult and remains a clinical challenge~\cite{NeurologyOfConsciousness}. Therefore, substantial improvement in the accuracy of determining such conscious states from neural recording data would have significant societal impact. Towards this goal, neural data have been analysed using various techniques and notions of \textit{complexity} in search of the most reliable measure of consciousness~\cite{Engemann2018, Sitt2014}.
One of the most successful techniques to date in distinguishing levels of conscious arousal is the \textit{perturbational complexity index}~\cite{Massimini2005, Casali2013, Casarotto2016}, which measures the neural activity patterns that follow a perturbation of the brain through magnetic stimulation. The evoked patterns are processed through a pipeline and finally summarised using Lempel-Ziv complexity~\cite{Casali2013}. This method is inspired by a theory of consciousness, called \textit{integrated information theory} (\textbf{IIT})~\cite{Tononi2004, Tononi2016}, which proposes that a high level of conscious arousal should be correlated with the amount of so-called \textit{integrated information}, or the degree of differentiated integration in a neural system (see Ref.~\cite{Oizumi2014} for details). While there are various ways to capture this essential concept~\cite{Mediano2019, Barrett2011}, one way to interpret integrated information is as the amount of information a system loses about its own future or past states, based on its current state, when the system is minimally disconnected~\cite{Tegmark2016, Oizumi2016PNAS, Oizumi2016PLOS}.
These complexity measures, inspired by IIT, are motivated by the fundamental properties of conscious phenomenology, such as informativeness and integratedness of any experience \cite{Tononi2004}. While there are ongoing efforts to accurately translate these phenomenological properties into mathematical postulates~\cite{Oizumi2014}, such translation often contains assumptions about the underlying process which are not necessarily borne out in reality. For example, the derived mathematical postulates in IIT assume Markovian dynamics, i.e., that the future evolution of a neural system is determined statistically by its present state~\cite{Barrett2011}. Moreover, IIT requires computing the correlations across all possible partitions between subsystems, which is computationally heavy~\cite{Tegmark2016} in relation to methods which do not require such partitioning to work. Assuming that the hierarchical causal influences in the brain would manifest as oscillations across a range of frequencies and spatial regions~\cite{Buzsaki2006}, non-Markovian temporal correlations likely play a significant role in explaining any experimentally measurable behaviours, including the level of conscious arousal. There is therefore, scope for applying more general notions of complexity to meaningfully distinguish macroscopic brain states that support consciousness.
A conceptually simple approach to quantifying the complexity of time series data, such as the fluctuating potential in a neuron, is to construct the minimal model which statistically reproduces it. Remarkably, this minimal model, known as an \textit{epsilon machine} (\ensuremath{\epsilon\text{-machine}}{}), can be found via a systematic procedure which has been developed within the field of computational mechanics~\cite{CrutchPRL1989, epsilonMachines2, CrutcharXiv2017}. Crucially, \ensuremath{\epsilon\text{-machines}}{} account for multiple temporal correlations contained in the data and can be used to quantify the \emph{statistical complexity} of a process -- the minimal amount of information required to specify its state. As such, they have been applied across various fields, ranging from neuroscience~\cite{Haslinger2009, Klinkner2006} and psychology~\cite{CSSR2} to crystallography~\cite{Varn2004} and ecology~\cite{Boschetti2008}, to the stock market~\cite{Park2007}. Lastly, unlike IIT, the \ensuremath{\epsilon\text{-machine}}{} analysis can be performed on data coming from a single channel.
In this paper, we use the statistical complexity derived from an \ensuremath{\epsilon\text{-machine}}{} analysis of neural activity to distinguish states of conscious arousal in fruit flies (\textit{D. melanogaster}). We analyse neural data collected from flies under different concentrations of isoflurane~\cite{CohenEneuro2016, CohenEneuro}. By analysing signals from individual electrodes and disregarding spatial correlations, we find that statistical complexity distinguishes between the two states of conscious arousal through temporal correlations alone. In particular, as the degree of temporal correlations increases, the difference in complexity between the wakeful and anaesthetised states becomes larger. In addition to measuring complexity, the \ensuremath{\epsilon\text{-machine}}{} framework also allows us to assess the temporal irreversibility of a process: the difference in the statistical structure of the process when read forwards versus backwards in time. This may be particularly important for wakeful brains, which are thought to be sensitive to the statistical structure of the environment, which runs forward in time~\cite{CohenEneuro, Hohwy2013, Tononi2010}. Using the nuanced characterisation of temporal information flow offered by the \ensuremath{\epsilon\text{-machine}}{} framework~\cite{cryptCrutch}, we then analyse the time irreversibility and crypticity of the neural signals to further distinguish the conscious states. We find that the asymmetry in information structure between forward and reverse-time neural signals is reduced under anaesthesia.
The present approach singularly differentiates between highly random and highly complex information structure; accounts for temporal correlations beyond the Markov assumption; and quantifies temporal asymmetry of the process. None of the standard methods possesses all of these features within a single unified framework. Before presenting these results in detail in Sec.~\ref{sec:Complexity} and discussing their implications in Sec.~\ref{sec:discussion/conclusion}, we begin with a brief overview of the \ensuremath{\epsilon\text{-machine}}{} framework we will use for our analysis.
\section{Theory: $\epsilon$-Machines and statistical complexity} \label{sec:Background}
To uncover the underlying statistical structure of neural activity that characterises a given conscious state, we treat the measured neural data, given by voltage fluctuations in time, as discrete time series. To analyse these time series, we use the mathematical tools of computational mechanics, which we outline in this section. We start with a general discussion on the ways to use time series data to infer a model of a system while placing \ensuremath{\epsilon\text{-machines}}{} in this context. Next, we explain how we construct \ensuremath{\epsilon\text{-machines}}{} in practice. Finally, we show how this can be used to extract a meaningful notion of statistical complexity of a process.
\subsection{From time series to $\epsilon$-Machines}
\label{sec:Bkg-eMs}
In abstract terms, a discrete-time series is a sequence of symbols $\mathbf{r} = (r_0, \ldots, r_{k}, \ldots)$ that appear over time, one after the other~\cite{Rabiner1989}. Each element of $\mathbf{r}$ corresponds to a symbol from a finite alphabet $\mathcal{A}$ observed at the discrete time step labelled by the subscript $k$. The occurrence of a symbol, at a given time step, is random in general and thus the process, which produces the time series, is stochastic~\cite{DoobStochastic}. However, the symbols may not appear in a completely independent manner, i.e., the probability of seeing a particular symbol may strongly depend on symbols observed in the past. These temporal correlations are often referred to as \textit{memory}, and they play an important role in constructing models that are able to predict the \textit{future} behaviour of a given stochastic process~\cite{Gu2012}.
Relative to an arbitrary time $k$, let us denote the future and the past partitions of the complete sequence as $\mathbf{r} = (\cev{r}, \vec{r})$, where the past and the future are $\cev{r} = (\ldots, r_{k-2},r_{k-1})$ and $\vec{r} = (r_{k}, r_{k+1}, \ldots)$ respectively. In general, for the prediction of the immediate future symbol $r_k$, knowledge of the past $\ell$ symbols $\cev{r}_{\ell} :=(r_{k-\ell}, \ldots, r_{k-2}, r_{k-1})$, may be necessary. The number of past symbols we need to account for in order to optimally predict the future sequence is called the Markov order~\cite{Gagniuc2017}.
In general, the difficulty of modelling a time series increases exponentially with its Markov order. However, not all distinct pasts lead to unique future probability distributions, leaving room for compression in the model. In a seminal work, Crutchfield and Young showed the existence of a class of models, which they called $\epsilon$-machines, that are provably the optimal predictive models for a non-Markovian process under the assumption of statistical stationarity~\cite{CrutchPRL1989, epsilonMachines2}. Constructing the $\epsilon$-machine is achieved by partitioning sets of \textit{partial} past observations $\cev{r}_{\ell}$ into \textit{causal states}. That is, two distinct sequences of partial past observations $\cev{r}_{\ell}$ and $\cev{r}_{\ell}'$ belong to the same causal state $S_i \in \mathcal{S}$, if the probability of observing a specific $\vec{r}$ given $\cev{r}_{\ell}$ or $\cev{r}_{\ell}'$ is the \textit{same}; that is
\begin{gather}
\cev{r}_{\ell} \sim_\epsilon \cev{r}_{\ell}' \quad\text{if}\quad P(\vec{r} \;|\; \cev{r}_{\ell}) = P(\vec{r} \;|\; \cev{r}_{\ell}'),
\label{eq:equivRelation}
\end{gather}
where $\sim_\epsilon$ indicates that two histories correspond to the same causal state. The conditional probability distributions in Eq.~\eqref{eq:equivRelation} may always be estimated from a finite set of statistically stationary data via the naive maximum likelihood estimate, given by $P(r_k|\cev{r}_{\ell}) =\nu(r_k,\cev{r}_{\ell})/\nu(\cev{r}_{\ell})$, where $\nu(X)$ is the frequency of occurrence of sub-sequence $X$ in the data. For the case of non-stationary data, the probabilities obtained by this method will produce a non-minimal model that corresponds to a time-averaged representation of the time series. We now discuss how to practically construct an \ensuremath{\epsilon\text{-machine}}{} for a given time series.
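As a minimal illustration of this estimation step (a sketch only; the full reconstruction is performed by the CSSR algorithm described next), the conditional distributions for a binary sequence can be estimated by counting, and histories with matching predictive distributions can then be grouped:
\begin{verbatim}
from collections import Counter, defaultdict

def predictive_dists(seq, ell):
    """Estimate P(next symbol | past of length ell) by counting."""
    joint, marginal = Counter(), Counter()
    for k in range(ell, len(seq)):
        past = seq[k - ell:k]
        joint[(past, seq[k])] += 1
        marginal[past] += 1
    return {past: {s: joint[(past, s)] / n for s in "01"}
            for past, n in marginal.items()}

# Group pasts with (numerically) identical predictive distributions;
# CSSR replaces this exact match with a statistical (KS) test.
dists = predictive_dists("0110100110010110" * 500, ell=2)
groups = defaultdict(list)
for past, d in dists.items():
    groups[tuple(round(d[s], 2) for s in "01")].append(past)
print(groups)   # each group is a candidate causal state
\end{verbatim}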
\subsection{Constructing \ensuremath{\epsilon\text{-machines}}{} with the CSSR algorithm}
\label{sec:Bkg-cssr}
Several algorithms have been developed to construct \ensuremath{\epsilon\text{-machines}}{} from time series data~\cite{Tino2001, CrutchPRL1989, Crutchfield1990}. Here, we briefly explain the \textit{Causal State Splitting Reconstruction} \textbf{(CSSR)} algorithm~\cite{CSSR2}, which we use in this work to infer \ensuremath{\epsilon\text{-machines}}{} predicting the statistics of neural data we provide as input.
The CSSR algorithm proceeds to iteratively construct sets of causal states accounting for longer and longer sub-sequences of symbols. In each iteration, the algorithm first estimates the probabilities $P(r_k|\cev{r}_{\ell})$ of observing a symbol conditional on each length $\ell$ prior sequence and compares them with the distribution $P(r_k | \mathcal{S} = S_i)$ it would expect from the causal states it has so far reconstructed. If $P(r_k|\cev{r}_{\ell}) = P(r_k | \mathcal{S} = S_i)$ for some causal state, then $\cev{r}_{\ell}$ is identified with it. If the probability is found to be different for all existing $S_i$, then a new causal state is created to accommodate the sub-sequence. By constructing new causal states only as necessary, the algorithm guarantees a minimal model that describes the non-Markovian behaviour of the data (up to a given memory length), and hence the corresponding \ensuremath{\epsilon\text{-machine}}{} of the process.
The CSSR algorithm compares probability distributions via the \emph{Kolmogorov-Smirnov} \textbf{(KS)} test~\cite{Massey1951, Hollander2013}. The hypothesis that $P(r_k|\cev{r}_{\ell})$ and $P(r_k | \mathcal{S} = S_i)$ are identical up to statistical fluctuations is rejected by the KS test at the significance level $\sigma$ when a distance $\mathcal{D}_{KS}$~\footnote{The distance $\mathcal{D}_{KS} = \max | F(r_k | \mathcal{S} = S_i) - F(r_k | \unexpanded{\cev{r}}_{\ell})|$, where $F(r_k | \mathcal{S} = S_i)$ and $F(r_k | \unexpanded{\cev{r}}_{\ell})$ are cumulative distributions of $P(r_k | \mathcal{S} = S_i)$ and $P(r_k | \unexpanded{\cev{r}}_{\ell})$ respectively.} is greater than tabulated critical values of $\sigma$~\cite{Miller1956}. In other words, $\sigma$ sets a limit on the accuracy of the history grouping by parametrising the probability that an observed history $\cev{r}_{\ell}$ belonging to a causal state $S_i$, is mistakenly split off and placed in a new causal state $S_j$. Our analysis, in agreement with Ref.~\cite{CSSR2}, found that the choice of this value does not affect the outcome of CSSR within the tested range of $0.001 < \sigma < 0.01$. As a result, we set $\sigma = 0.005$.
As it progresses, the CSSR algorithm compares future probabilities for longer sub-sequences, up to a maximum past history length of $\lambda$, which, together with $\sigma$, is the only parameter that must be selected prior to running CSSR. If the considered time series is generated by a stochastic process of Markov order $\ell$, choosing $\lambda < \ell$ results in poor prediction because the inferred $\ensuremath{\epsilon\text{-machine}}{}$ cannot capture the long-memory structures present in the data. Despite this, the CSSR algorithm will still produce an \ensuremath{\epsilon\text{-machine}}{} that is consistent with the approximate future statistics of the process up to order-$\lambda$ correlations~\cite{CSSR2}. Given sufficient data, choosing $\lambda \geq \ell$ guarantees convergence on the true $\ensuremath{\epsilon\text{-machine}}{}$. One important caveat to note is that the time complexity of the algorithm scales asymptotically as $\mathcal{O} (|\mathcal{A}|^{2\lambda+1})$, putting an upper limit on the longest history length that is computationally feasible to use.
Furthermore, the finite length of the time series data implies an upper limit on an `acceptable' value of $\lambda$. Estimating $P(r_k | \cev{r}_{\lambda})$ requires sampling strings of length $\lambda$ from the finite data sequence. Since the number of such strings grows exponentially with $\lambda$, a value of $\lambda$ that is too long relative to the size $N$ of the data will result in a severely under-sampled estimation of the distribution. A distribution $P(r_k | \cev{r}_{\lambda})$ that has been estimated from an under-sampled space is almost never equal to $P(r_k | \mathcal{S} = S_i)$, resulting in the algorithm creating a new causal state for every string of length $\lambda$ it encounters. A bound for the largest permissible history length is $L(N) \geq \log_2 N/\log_2 |\mathcal{A}|$, where $L(N)$ denotes the maximum length for a given data size of $N$~\cite{MartonAoP, CoverThomas}. Once these considerations have been taken into account, the \ensuremath{\epsilon\text{-machine}}{} produced by the algorithm provides us with a meaningful quantifier of the complexity of the process generating the time series, as we now discuss.
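For the recordings analysed below (binary alphabet, $N = 18{,}000$ samples per channel), this bound is straightforward to evaluate:
\begin{verbatim}
import math

N, alphabet_size = 18_000, 2
L_max = math.log2(N) / math.log2(alphabet_size)
print(L_max)   # ~14.1; cf. L(N) ~ 14 quoted in the analysis below
\end{verbatim}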
\subsection{Measuring the complexity and asymmetry of a process}
The output of the CSSR algorithm is the set of causal states and rules for transitioning from one state to another. That is, CSSR gives a Markov chain represented by a digraph~\cite{CrutchPRL1989, Gagniuc2017} $G(V,E)$ consisting of a set of vertices $v_i \in V$ and directed edges $\{i,j\} \in E$, e.g. Figs.~\ref{fig:workflow}(c) and (d). Using these rules, one can find $P(S_i)$, which represents the probability that the \ensuremath{\epsilon\text{-machine}}{} is in the causal state $S_i$ at any given time. The Shannon entropy of this distribution quantifies the minimal number of bits of information required to optimally predict the future process; this measure, first introduced in Ref.~\cite{CrutchPRL1989}, is called the \emph{statistical complexity}:
\begin{gather}
C_{\mu} := H\left[\mathcal{S}\right] = -\sum_i P(S_i) \log P(S_i).
\label{eq:statComplexity}
\end{gather}
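Operationally, computing $C_\mu$ reduces to finding the stationary distribution of the causal-state Markov chain and taking its Shannon entropy. A minimal sketch, assuming the transitions are supplied as a row-stochastic matrix over causal states:
\begin{verbatim}
import numpy as np

def statistical_complexity(T):
    """C_mu in bits for a row-stochastic causal-state matrix T."""
    vals, vecs = np.linalg.eig(T.T)   # left eigenvectors of T
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = np.abs(pi) / np.abs(pi).sum()   # stationary P(S_i)
    pi = pi[pi > 0]
    return float(-(pi * np.log2(pi)).sum())

# Toy two-state machine with stationary distribution (5/6, 1/6).
T = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(statistical_complexity(T))   # approximately 0.65 bits
\end{verbatim}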
Formally, the causal states of a time series depend upon the direction in which the data is read~\cite{cryptCrutch}. The main consequence of this result is that the set of causal states obtained by reading the time series in the forward direction $\mathcal{S}^+$, are not necessarily the same as those obtained by reading the time series in the reverse direction $\mathcal{S}^-$. Naturally, this corresponds to potential differences in forward and reverse-time processes and the associated complexities, which is known as \textit{causal irreversibility}
\begin{gather}
\Xi := C_{\mu}^{+} - C_{\mu}^{-},
\label{eq:CausalIrrev}
\end{gather}
capturing the time-asymmetry of the process.
Another (stronger) measure of time-asymmetry is \textit{crypticity}:
\begin{gather}
d := 2C_{\mu}^{\pm}-C_{\mu}^{+}-C_{\mu}^{-}.
\label{eq:crypticity}
\end{gather}
This quantity measures the amount of information hidden in the forwards and reverse \ensuremath{\epsilon\text{-machines}}{} that is not revealed in the future or past time series, respectively. Specifically, it combines the information that must be supplemented to determine the forwards \ensuremath{\epsilon\text{-machine}}{} given the reverse \ensuremath{\epsilon\text{-machines}}{} and the information to determine reverse \ensuremath{\epsilon\text{-machines}}{} given the forwards \ensuremath{\epsilon\text{-machine}}{}. In each case, this is equivalent to the difference between the complexity of a \emph{bidirectional} \ensuremath{\epsilon\text{-machine}}{}, denoted $C_{\mu}^{\pm}$~\cite{cryptCrutch}, and that of the corresponding unidirectional machine. Throughout this manuscript, we implicitly refer to the usual forward-time statistical complexity $C_{\mu}^+$ when writing $C_{\mu}$, unless otherwise stated.
Finally, an operational measure for time-asymmetry is defined by the \textit{microscopic irreversibility}, which quantifies how statistically distinguishable the forwards and reverse \ensuremath{\epsilon\text{-machines}}{} are, in terms of the sequences of symbols they produce. If the forward-time \ensuremath{\epsilon\text{-machine}}{} produces the same sequences with similar probabilities to the reverse-time \ensuremath{\epsilon\text{-machine}}{}, then the process is reversible. Should a sequence available to $M^+$ be impossible for $M^-$ to produce, then the process is strictly irreversible. Here, we assess the distinguishability between two \ensuremath{\epsilon\text{-machines}}{} by estimating the asymptotic rate of (symmetric) \emph{Kullback-Leibler} (\textbf{KL}) \emph{divergence} $\mathcal{D}_{KLS}$ between long output sequences; this measure is commonly applied to stochastic models~\cite{Yang2019}. Specifically
\begin{gather}
\mathcal{D}_{KLS} = \mathcal{D}_{KL}(M^+ \| M^-) + \mathcal{D}_{KL}(M^- \| M^+),
\end{gather}
where $\mathcal{D}_{KL}$ is the regular, non-symmetric estimated KL divergence rate~\cite{Rached2004}. The KL divergence can be proved to be a unique measure that satisfies all of the theoretical requirements of information-geometry~\cite{amari2016book, Oizumi2016PNAS, amari2018}.
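As a sketch of how such an estimate may be obtained in practice (the two unifilar machines below, written as \texttt{\{state: \{symbol: (probability, next state)\}\}} tables, are hypothetical illustrations rather than reconstructions from the fly data), one can sample a long sequence from each machine and score it under both:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def step(m, state):
    """Emit one symbol from a unifilar machine and advance its state."""
    syms = list(m[state])
    probs = [m[state][s][0] for s in syms]
    s = rng.choice(syms, p=probs)
    return s, m[state][s][1]

def kl_rate(m_from, m_to, n=200_000, burn=1_000):
    """Estimate D_KL(m_from || m_to) per symbol by sampling m_from."""
    a, b, total = next(iter(m_from)), next(iter(m_to)), 0.0
    for i in range(n):
        s, a_next = step(m_from, a)
        p = m_from[a][s][0]
        # If m_to cannot emit s the process is strictly irreversible;
        # a tiny floor keeps this sketch's estimate finite.
        q, b = m_to[b][s] if s in m_to[b] else (1e-12, b)
        if i >= burn:
            total += np.log2(p / q)
        a = a_next
    return total / (n - burn)

# Hypothetical forward- and reverse-time machines (illustration only).
M_fwd = {"A": {"0": (0.7, "A"), "1": (0.3, "B")},
         "B": {"0": (0.4, "A"), "1": (0.6, "B")}}
M_rev = {"X": {"0": (0.6, "X"), "1": (0.4, "Y")},
         "Y": {"0": (0.5, "X"), "1": (0.5, "Y")}}
print(kl_rate(M_fwd, M_rev) + kl_rate(M_rev, M_fwd))
\end{verbatim}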
A few remarks are in order: in general, any one of the above measures vanishing does not imply that the other measures must also vanish. For instance, consider the case where the \emph{structures} of the forward ($M^+$) and reverse-time ($M^-$) \ensuremath{\epsilon\text{-machines}}{} are different but they happen to have the same complexities, i.e., $C_{\mu}^{+} = C_{\mu}^{-}$. Then, clearly we have $\Xi = 0$ but $d \ne 0$ and $\mathcal{D}_{KLS} \ne 0$. On the other hand, consider the case when $M^+$ and $M^-$ are the same; here, we have $\Xi = \mathcal{D}_{KLS} = 0$, yet we may not have $d = 0$. This means that vanishing $\mathcal{D}_{KLS}$ implies that $\Xi=0$ (but not the converse, and not $d=0$). This turns out to be an interesting extremal case because, while the forward and reverse processes are identical, the non-vanishing crypticity accounts for the information required to synchronise the corresponding \ensuremath{\epsilon\text{-machines}}{}, i.e., producing the joint statistics of the paired \ensuremath{\epsilon\text{-machines}}. Moreover, we can conclude that microscopic irreversibility is a stronger measure than causal irreversibility; this comes at the expense of computational cost, i.e., the former is harder to compute than the latter. In essence, each measure above represents a different notion of temporal asymmetry, with its own operational significance. Causal irreversibility and crypticity are information-theoretic constructs, while microscopic irreversibility is an information-geometric construct.
In the next section, we describe the experimental and analytical methods, as well as the results: that the statistical complexity and temporal asymmetry of the neural time series, taken from fruit flies, significantly differ between states of conscious arousal.
\begin{figure*}[th]
\centering
\includegraphics[width=\textwidth]
{figures/logicv4.png}
\caption{Evolution of experimental data from neural signals to \ensuremath{\epsilon\text{-machines}}.
\textbf{(a)} Representative schematic of \textit{D. melanogaster} brain (modified from Ref.~\cite{Paulk2013b}) depicted with probe and approximate channel locations. Each channel $c\in [1,15]$ samples around a localised region in the brain, with numerical labels ordered from the central ($c=1$) to peripheral ($c=15$) regions.
\textbf{(b)} Example reading of a processed local field potential \textbf{(LFP)} for a single channel. Points along the x-axis represent LFP measurements at each sampling time step. The median LFP measurement of the sample is shown as the grey line bisecting data. LFP binarisation is determined via splitting over the median with the encoding scheme $0:$ LFP $\leq$ Median, and $1:$ otherwise. The \ensuremath{\epsilon\text{-machines}}{} are inferred by using the binary string as the input to the CSSR algorithm.
\textbf{(c)} Digraph representation of the CSSR-inferred \ensuremath{\epsilon\text{-machine}}{} for channel 1 readings of fly 1 under anaesthesia ($0.6$ vol.\% isoflurane) with $\sigma = 0.005$ and $\lambda = 3$. Graph vertices correspond to causal states. Vertex labels distinguishing causal states are assigned arbitrarily and do not imply state equivalence across multiple graphs. Directed edges correspond to transitions between causal states. Edge labels denote the probability (2 significant figures) of a transition occurring, and edge colour encodes the emitted symbol upon making the transition (1: Red, 0: Blue). The histories stored in the causal states for this \ensuremath{\epsilon\text{-machine}}{} are visualised in Fig.~\ref{fig:em-with-histories}.
\textbf{(d)} Digraph representation of \ensuremath{\epsilon\text{-machine}}{} for the wakeful ($0$ vol.\% isoflurane) level of conscious arousal for the same channel, fly, $\sigma$, and $\lambda$ as in (c). We report the forward-time statistical complexities $C_{\mu}^{a} = 1.88$ and $C_{\mu}^{w} = 2.96$ for (c) and (d) respectively.}
\label{fig:workflow}
\end{figure*}
\section{Experimental results and analysis}
\label{sec:Complexity}
\subsection{Methods}
\label{sec:Methods}
We analysed local field potential \textbf{(LFP)} data from the brains of awake and isoflurane-anaesthetised \emph{D. melanogaster} (Canton S wild type) flies. Here, we briefly provide the essential experimental outline that is necessary to understand this paper. The full details of the experiment are presented in Refs.~\cite{CohenEneuro, CohenEneuro2016}. LFPs were recorded by inserting a linear silicon probe (Neuronexus 3mm-25-177) with 16 electrodes separated by 25 $\mu$m. The probe covered approximately half of the fly brain and recorded neural activity as illustrated in Fig.~ \ref{fig:workflow}(a). A tungsten wire inserted into the thorax acted as the reference. The LFPs at each electrode were recorded for 18s while the fly was awake and 18s more after the fly was anaesthetised (isoflurane, 0.6\% by volume, through an evaporator). Flies' unresponsiveness during anaesthesia was confirmed by the absence of behavioural responses to a series of air puffs, and recovery was also confirmed after isoflurane gas was turned off~\cite{CohenEneuro2016}.
We used data sampled at 1kHz for the analysis~\cite{CohenEneuro2016}, and to obtain an estimate of local neural activity, the 16 electrodes were re-referenced by subtracting adjacent signals, giving 15 channels, which we parametrise as $c \in [1,15]$. Line noise was removed from the recordings, followed by linear de-trending and removal of the mean. The resulting data is a fluctuating voltage signal, which is time-binned (1ms bins) and binarised by splitting over the median, leading to a time series; see Fig.~\ref{fig:workflow}(b).
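The binarisation step amounts to a median split per channel; a minimal sketch (with random numbers standing in for the processed LFP trace) reads:
\begin{verbatim}
import numpy as np

def binarise(lfp):
    """Encode an LFP trace: '0' if <= median, '1' otherwise."""
    med = np.median(lfp)
    return "".join("1" if x > med else "0" for x in lfp)

# Stand-in for one 18 s channel sampled at 1 kHz (N = 18,000 bins);
# the real input is the de-trended, mean-subtracted LFP signal.
series = binarise(np.random.default_rng(1).standard_normal(18_000))
\end{verbatim}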
For each of the 13 flies in our data set, we considered 30 time series of length $N = 18,000$. These correspond to the 15 channels, labelled numerically from the central to peripheral region as depicted in Fig.~\ref{fig:workflow}(a), and the two states of conscious arousal. Using the CSSR algorithm~\cite{CSSR2}, we constructed \ensuremath{\epsilon\text{-machines}}{} for each of these time series as a function of maximum memory length within the range $\lambda \in [2,11]$, measured in milliseconds. This is below the memory length $L(N) \sim 14$ beyond which we would be unable to reliably determine transition probabilities for a sequence of length $N$ (see Sec.~\ref{sec:Bkg-cssr})~\footnote{$L(N) \sim 14$ only serves as a lower bound on $\lambda$, past which CSSR is guaranteed to return incorrect causal states for the neural data. In practice, this may occur at even lower memory lengths than this limit. We observed this effect marked by an exponential increase in the number of inferred causal states for $\lambda > 11$, and thus excluded these memory lengths from the study.}. For a given time direction $\xi \in \{+:\text{forward},-:\text{reverse}\}$, we recorded the resulting $3,900$ \ensuremath{\epsilon\text{-machine}}{} structures and their corresponding statistical complexities $C_{\mu}^{(\xi,\psi)}$, and grouped them according to their respective level of conscious arousal, $\psi \in \{w,a\}$ for awake and anaesthesia, channel location, $c$, and maximum memory length, $\lambda$. Thus, the statistical complexity we computed in a given time direction is a function of the set of parameters $\{\psi,c,\lambda\}$ for each fly, $f$. We also determined the irreversibility $\Xi$, crypticity $d$, and symmetric KL divergence rate $\mathcal{D}_{KLS}$ for each fly and again grouped them over the same set of parameters $\{\psi,c,\lambda\}$. While we found that not all the data is strictly stationary, in that the moving means of the LFP signals were not normally distributed, the conclusions we draw from them are still broadly valid. As mentioned in Sec.~\ref{sec:Bkg-eMs}, \ensuremath{\epsilon\text{-machines}}{} reconstructed from approximately stationary data are time-averaged models, and are likely to \emph{underestimate} the true statistical complexity of the corresponding neural processes.
We are principally interested in the differences the informational quantities $Q^\psi \in \{C_{\mu}^{(\xi,\psi)},\; \Xi^{\psi},\; d^{\psi},\; \mathcal{D}^{\psi}_{KLS}\}$ have over states of conscious arousal and thus consider
\begin{gather}
\Delta Q := Q^{w} - Q^{a}, \label{eq:Dcmu}
\end{gather}
for fixed values of $\{f, c, \lambda\}$. Positive values of $\Delta Q$ indicate higher complexities observed in the wakeful state relative to the anaesthetised one. Finally, we use the notation $\langle Q^{\psi} \rangle_x$ to denote taking an average of any information quantity $Q^{\psi}$, over a specific parameter $x \in \{f, c, \lambda\}$. For example $\langle \Delta C_{\mu}^+ \rangle_f$ means taking the fly-averaged difference in forward-time statistical complexity.
To assess the significance of each of the parameters $\psi$, $c$, $\lambda$, and $\xi$, or some combination of them, have on the response of the elements in the set $Q$ across flies, we conducted a statistical analysis using linear mixed effects modelling~\cite{Harrison2018lme} (\textbf{LME}). The LME analysis describes the response of a given quantity $\mathcal{Q}$ by modelling it as a multidimensional linear regression of the form
\begin{gather}
\mathcal{Q} = \v{F}\gv{\beta} + \v{R}\v{b} + \mathcal{E}.
\label{eq:glme}
\end{gather}
The resulting model in Eq.~\eqref{eq:glme} consists of a family of equations where $\mathcal{Q}$ is the vector allowing for different responses of a quantity $Q$ for each specific fly, channel location, level of conscious arousal, and time direction where applicable. Memory length $\lambda$, channel location $c$, state of conscious arousal $\psi$, and time direction $\xi$ (again, where applicable) are the parameters that $\mathcal{Q}$ responds to. To account for variations in the response caused by interactions between parameters (e.g. between memory length and channel location), we included them in the model. Letting $X = \{\lambda,c,\psi,\xi\}$ be the set of the parameters which may induce responses, we can write the set of all non-empty combinations of them as $\mathcal{F} = \{\lambda,c,\psi,\xi, \lambda c, \lambda\psi,\ldots,\lambda c \psi \xi\} = \bigcup_{k=1}^{|X|}\binom{X}{k}$. The elements in $\mathcal{F}$ are known as the \emph{fixed effects} of Eq.~\eqref{eq:glme}, and are contained as elements within the matrix $\v{F}$. The vector $\gv{\beta}$ contains the regression coefficients describing the strength of each of the fixed effects $\mathcal{F}$.
In addition to fixed effects affecting the response of an element of $Q$ in our experiment, we also took into account any variation in response caused by known \emph{random effects}. In particular, we expected stronger response variations to be caused by correlations occurring between the channels within a single fly, compared to between channels across flies. These random effects are contained as elements of the matrix $\v{R}$, and the vector $\v{b}$ encodes the regression coefficients describing their strengths. Finally, the vector $\mathcal{E}$ describes the normally-distributed \emph{unknown} random effects in the model. The regression coefficients contained in the vectors $\gv{\beta}$ and $\v{b}$, were obtained via maximum likelihood estimation such that $\mathcal{E}$ are minimised. The explicit form of Eq.~\eqref{eq:glme} used in this analysis is detailed in the Appendix~\ref{app:lme}.
With the full linear mixed effects model given by Eq.~\eqref{eq:glme}, we tested the statistical significance of a fixed effect in $\mathcal{F}$. This was accomplished by comparing the log-likelihood of the full model with all fixed effects, to the log-likelihood of a reduced model with the effect of interest removed~\cite{Bates2015} (regression coefficients associated with the effect are removed). This comparison between the likelihood models is given by $\Lambda = 2 (h_{\text{full}} -h_{\text{reduced}})$, where $\Lambda$ is the likelihood ratio, $h_{\text{full}}$ is the log-likelihood of full model, and $h_{\text{reduced}}$ is the log-likelihood of the model with the effect of interest removed.
Under the null hypothesis, when a fixed effect does not have any influence on an informational quantity $Q$, i.e., the regression coefficients for the effect are vanishing, the likelihood ratio $\Lambda$ is $\chi^2$ distributed with degrees of freedom equal to the difference in the number of coefficients between the models. Therefore, we considered any fixed effect in the set $\mathcal{F}$ to have a statistically significant effect on a quantity if the probability of obtaining the likelihood ratio given the relevant $\chi^2$ distribution was less than 5\% ($p<0.05$). Thus, for each significant effect we report the fixed effect being tested, i.e., an element of $\mathcal{F}$, the obtained likelihood ratio $\chi^2(n-1)$ with $n$ associated degrees of freedom, and the associated probability $p$ of obtaining the statistic under the null hypothesis.
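To illustrate this model-comparison procedure, the sketch below (in Python, using the \texttt{statsmodels} package) fits a full and a reduced model by maximum likelihood and applies the likelihood-ratio test to the three-way interaction. The synthetic data, the column names, and the random-intercept-per-fly structure are simplifying assumptions; the full random-effect specification is given in Appendix~\ref{app:lme}.
\begin{verbatim}
import numpy as np, pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic stand-in: one row per (fly, channel, lambda, condition).
rng = np.random.default_rng(2)
rows = [(f, c, lam, psi,
         0.1 * lam + 0.02 * lam * (psi == "w") + rng.normal(0, 0.2))
        for f in range(13) for c in range(1, 16)
        for lam in range(2, 12) for psi in ("w", "a")]
df = pd.DataFrame(rows, columns=["fly", "ch", "lam", "psi", "C_mu"])

# reml=False: maximum likelihood, as required for the ratio test.
full = smf.mixedlm("C_mu ~ lam * C(ch) * C(psi)", df,
                   groups=df["fly"]).fit(reml=False)
reduced = smf.mixedlm("C_mu ~ (lam + C(ch) + C(psi))**2", df,
                      groups=df["fly"]).fit(reml=False)

LR = 2 * (full.llf - reduced.llf)             # likelihood ratio
dof = len(full.params) - len(reduced.params)  # removed coefficients
print(stats.chi2.sf(LR, dof))                 # p-value under H0
\end{verbatim}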
In addition to all the quantities in the set $Q^\psi$, the LME analysis and likelihood ratio test were also performed for $\Delta Q$, in order to find the significant interaction effects of the parameters. Here, we also modelled $\Delta Q$ as dependent on a fixed effect in $\mathcal{F}$ as in Eq.~\eqref{eq:glme}, but excluding the parameter $\psi$ as it was already implicitly considered. Once the significant effects of memory length, level of conscious arousal, and channel location were characterised with our statistical analysis, we followed with post-hoc, paired $t$-tests for elements in $Q^{\psi}$ given by
\begin{gather}
t = \frac{\langle \Delta Q\rangle_f}{s_f / \sqrt{|f|}},
\label{eq:ttest-cmu}
\end{gather}
where $s_f$ is the standard deviation of $\langle \Delta Q \rangle_f$, and $|f| = 13$ is the sample size. The paired $t$-tests examine the nature of interactions between the parameters on a given quantity over the two states of conscious arousal. Positive $t$-scores indicate a quantity is larger for the wakeful state. We present the results of these analyses in the following sections, sorted categorically by whether time-direction is considered.
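Before turning to the results, we note that for a single $(\lambda, c)$ cell, Eq.~\eqref{eq:ttest-cmu} is the standard one-sample $t$-test of the per-fly differences against zero; a sketch with hypothetical values:
\begin{verbatim}
import numpy as np
from scipy import stats

# Hypothetical per-fly differences <Delta Q>_f at one (lambda, c)
# cell; n = 13 flies, illustrative numbers only.
delta_q = np.array([0.21, 0.35, 0.10, 0.44, 0.05, 0.30, 0.18,
                    0.27, 0.39, 0.08, 0.22, 0.15, 0.33])
t, p = stats.ttest_1samp(delta_q, popmean=0.0)
print(t, p)   # positive t: larger Q in the wakeful state
\end{verbatim}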
\subsection{Results}
\label{sec:results}
\subsubsection{Forward-time complexity results}
In order to observe the effects of isoflurane on neural complexity, we began by visually inspecting the structure of the reconstructed \ensuremath{\epsilon\text{-machines}}{} for the two levels of conscious arousal for the forward-time direction. We took special interest in observing the differences in the characteristics of the two groups of \ensuremath{\epsilon\text{-machines}}{} heralding the two levels of conscious arousal. Here, memory length $\lambda$ plays an important role. At a given $\lambda$, the maximum number of causal states that may be generated scales according to $|\mathcal{A}|^{\lambda}$~\cite{CSSR2}. In our case, the alphabet is binary, $\mathcal{A} = \{0,1\}$. This greatly restricts the space of \ensuremath{\epsilon\text{-machine}}{} configurations available for short history lengths~\cite{Johnson2010}. For $\lambda = 2$ we can observe up to four distinct configurations, which is unlikely to reveal the difference based on conscious states. Given the previous findings~\cite{CohenEneuro}, we generally expected that the data from the wakeful state would present more complexity than those from the anaesthetised state.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/DC_Lmax_channel_cond.pdf}
\caption{Colour map of statistical complexity response averaged over $(n=13)$ flies $\langle C^{(+,\psi)}_{\mu} \rangle_f$, during wakefulness (left) and anaesthesia (right), over channel location and memory length $\lambda$, measured in milliseconds. Hatched cells in the right sub-figure show regions where $C_{\mu}$ did not decrease under anaesthesia.}
\label{fig:Cmu-raw-response}
\end{figure}
Visual inspection of the directed graphs indeed suggested higher \ensuremath{\epsilon\text{-machine}}{} complexity during the wakeful state compared to the anaesthetised state, at a given set of parameters $\{f,c,\lambda\}$. In particular, the data from the anaesthetised state tended to result in fewer causal states and overall reduced graph connectivity. Panels (c) and (d) of Fig.~\ref{fig:workflow} are examples of \ensuremath{\epsilon\text{-machines}}{} (channel 1 data recorded from fly 1, at maximum memory length $\lambda = 3$), where a simpler \ensuremath{\epsilon\text{-machine}}{} is derived from the data under the anaesthetised condition. Differentiating between two conscious arousal states by visual inspection quickly becomes impractical because of the large number of \ensuremath{\epsilon\text{-machines}}{}. Moreover, for large values of $\lambda$ the number of causal states is exponentially large and it becomes difficult to see the difference in two graphs. To overcome these challenges, we looked at a simpler index, the statistical complexity $C_\mu$, to differentiate between conscious arousal states. To systematically determine the relationships between $C_\mu$ and the set of variables $\{c,f,\psi\}$ we employed the LME analysis outlined in Sec.~\ref{sec:Methods}. We first tested whether $\lambda$ significantly affects $C_{\mu}$. We found $\lambda$ to indeed have a significant effect on $C_{\mu}$ ($\lambda$, ${\chi}^2(1)=443.64$, $p<10^{-16}$). Fig.~\ref{fig:Cmu-raw-response} shows that independent of the conscious arousal condition or channel location, $C_{\mu}$ increases with larger $\lambda$. This indicates that the Markov order of the neural data is much larger than the largest memory length ($\lambda=11$) we consider. Nevertheless, we have enough information to work with.
We then sought to confirm if the complexity of \ensuremath{\epsilon\text{-machines}}{} during anaesthesia are reduced, as suggested from visual inspection. Our statistical analysis indicates that $C_{\mu}$ is not invariably reduced during anaesthesia ($\psi$, ${\chi}^2(1)=0.212$, $p=0.645$) at all levels of $\lambda$ and all channel locations. This means that $C_\mu$ cannot simply indicate the causal arousal state without some additional information about time ($\lambda$) or space ($c$). In addition, we found that neither $c$ alone nor $c\psi$ strongly affects $C_\mu$. However, we found significant reductions in complexity when either the level of conscious arousal or the channel location, interacted with memory length ($\lambda\psi$, ${\chi}^2(1)=14.63$, $p=1.31\times10^{-4}$) and (${c\lambda}$, $\chi^2(14)=42.876$, $p=8.97\times 10^{-5}$) respectively. Moreover, the three-way interaction also had a strong effect ($\lambda\psi c$, ${\chi}^2(14)=24.00$, $p=0.0458$).
As the three-way interaction between $\lambda$, $\psi$, and $c$ complicates interpretation of their effects, we performed a second LME analysis where we modelled \ensuremath{\Delta C_{\mu}}{} instead of $C_{\mu}$, thus accounting for $\psi$ implicitly. In doing so, we investigated whether the change in statistical complexity due to anaesthesia is affected by memory length $\lambda$ or channel location $c$. Using this model, we found a non-significant effect of $c$ on \ensuremath{\Delta C_{\mu}}{}, while a significant effect of $\lambda$ on \ensuremath{\Delta C_{\mu}}{} was seen ($\lambda$, ${\chi}^2(1)=20.97$, $p=4.65\times10^{-6}$), indicating that \ensuremath{\Delta C_{\mu}}{} overall changes with $\lambda$. Specifically, \ensuremath{\Delta C_{\mu}}{} tended to increase with larger $\lambda$ when ignoring channel location, as is evident in Fig.~\ref{fig:DCmulti} (top). Further, explaining our previous interaction between $\lambda$ and $\psi$, \ensuremath{\Delta C_{\mu}}{} was not clearly larger than $0$ for small memory length ($\lambda=2$; the top panel of Fig.~\ref{fig:DCmulti}). This suggests that the information to differentiate between states of conscious arousal is contained in higher order correlations. We also found that the interaction between ${\lambda}$ and channel location has a significant effect on \ensuremath{\Delta C_{\mu}}{} ($\lambda c$, ${\chi}^2(14)=37.19$, $p=6.90\times10^{-4}$), indicating that the effect of $\lambda$ is not constant across channels. Given that \ensuremath{\Delta C_{\mu}}{} overall increases with $\lambda$, we considered that the largest \ensuremath{\Delta C_{\mu}}{} should occur at the largest $\lambda$. Fig.~\ref{fig:DCmulti} (bottom) examines \ensuremath{\Delta C_{\mu}}{} across channels at $\lambda=11$.
To further break down the interaction between $\lambda$ and $c$, we performed a one sample $t$-test at each value of memory length and channel location to find regions in the parameter space $(\lambda, c)$ where $C_{\mu}$ reliably differentiates wakefulness from anaesthesia across flies. We plot the $t$-statistic at each parameter combination in the top-left panel of Fig.~\ref{fig:heatmap}, outlining regions in the parameter space where \ensuremath{\Delta C_{\mu}}{} is significantly greater than $0$ (with $p<0.05$, uncorrected, two-tailed), finding that the majority of the significance map is directed towards positive values of the $t$-statistic. However, only a subset of $(\lambda,c)$ cells contain values which are significantly different from $0$. Interestingly, we observed that for $\lambda=2$, \ensuremath{\Delta C_{\mu}}{} is actually significantly negative, corresponding to greater complexity during anaesthesia, not during wakefulness. This marks $\lambda=2$ as anomalous relative to other levels of $\lambda$, and this reversal of the direction of the effect of anaesthesia likely contributed to the interaction between $\lambda$ and $\psi$.
Disregarding $\lambda=2$, we find \ensuremath{\Delta C_{\mu}}{} to be significantly greater than $0$ for channels 1, 3, 5-7, 9, 10, and 13, at varying levels of $\lambda$. As expected from our reported interaction between $\lambda$ and $c$, we observe \ensuremath{\Delta C_{\mu}}{} to already be significantly greater than $0$ at small $\lambda$ for channels 5-7, while \ensuremath{\Delta C_{\mu}}{} only became significantly greater at larger $\lambda$ for channels 1, 3, 10 and 13. Further, other channels such as the most peripheral channel ($c=15$) did not have \ensuremath{\Delta C_{\mu}}{} significantly greater than $0$ at any $\lambda$. All significance results, due to LME tests, are reported in Table~\ref{tab:LMEresults}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/DC-multi.pdf}
\caption{Statistical complexity differences $\Delta C_{\mu} =C_{\mu}^{w} - C_{\mu}^{a}$ of \ensuremath{\epsilon\text{-machines}}{} between states of conscious arousal for: \textbf{(Top)} increasing memory length $\lambda$. Grey lines indicate complexity averages over channels per fly $(n=13)$, $\langle \Delta C_{\mu}\rangle_{c}$, while the blue line denotes the average over both channels and flies $\langle \Delta C_{\mu} \rangle_{c,f}$. Error bars are $95\%$ confidence intervals of the population. \textbf{(Bottom)} maximum memory length $\lambda = 11$ (in milliseconds), mapped throughout the fly brain (channels). Grey and red lines indicate the result per fly and the average over $(n=13)$ flies, $\langle \Delta C_{\mu} \rangle_{f}$, respectively. Error bars correspond to the $95\%$ confidence intervals over the sample of flies.}
\label{fig:DCmulti}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]
{figures/tScoreQuad.pdf}
\caption{Colour map of two-tailed paired $t$-scores over channel location and memory length $\lambda$ for statistical complexity differences $\langle\Delta C^+_{\mu}\rangle_f = \langle C_{\mu}^{(+,w)} - C_{\mu}^{(+,a)}\rangle_f$ (top left); causal irreversibility differences $\langle \Delta \Xi \rangle_f = \langle \Xi^{w} - \Xi^{a} \rangle_f$ (top right); crypticity differences $\langle\Delta d\rangle_f = \langle d^{w} - d^{a}\rangle_f$ (bottom left); and the differences in KL divergence rate $\langle\Delta \mathcal{D}_{KLS}\rangle_f = \langle \mathcal{D}_{KLS}^{w} - \mathcal{D}_{KLS}^{a}\rangle_f$ (bottom right). The dotted lines indicate the memory length and channel locations that exceed $p < 0.05$ (uncorrected). The colour scale is consistent across all subplots.}
\label{fig:heatmap}
\end{figure*}
The above results suggest that the measured difference in complexity is present across various brain regions (top-left panel of Fig.~\ref{fig:heatmap}), and that it grows as longer temporal correlations are taken into account (up to the largest value $\lambda=11$ tested). While Fig.~\ref{fig:DCmulti} shows a continued increase in the difference of statistical complexity, $\Delta C_{\mu}$, as a function of $\lambda$, we did not analyse longer history lengths, due to limitations in the amount of the data and stability of the estimation of $C_{\mu}$. In addition to this general observation of increasing \ensuremath{\Delta C_{\mu}}{} with $\lambda$, we observe that, remarkably, some brain regions discriminate the conscious arousal states with a history length of only 3. One trivial explanation for this effect is that under anaesthesia, the required memory length is indeed $\lambda =2$, while the optimal $\lambda$ for awake is much larger. However, a quick observation of Fig.~\ref{fig:Cmu-raw-response} rules out this simple possibility; under both wakeful and anaesthetised states, $C_{\mu}$ continues to increase.
It is likely, however, that the tested range for $\lambda$ remains below the Markov order of the neural data; this is clearly indicated by the lack of a plateau in statistical complexity in Fig.~\ref{fig:DCmulti}. This suggests that we are far from saturating the Markov order of the process, and with more data we would be able to further distinguish between the two states. Future analyses with longer time series would also contribute to our understanding of the Markov order (maximum memory length) differences between the two states of conscious arousal. Nevertheless, our results, in Figs.~\ref{fig:DCmulti} and~\ref{fig:heatmap}, demonstrate that saturation of Markov order is not required for discrimination between conscious arousal states. This finding has a practical implication about the empirical utility of \ensuremath{\epsilon\text{-machines}}{}; even if the history length is too low, the inferred \ensuremath{\epsilon\text{-machine}}{} and its statistical complexity can be useful. We now discuss the temporal asymmetry of neural processes.
\subsubsection{Temporal asymmetry}\label{sec:cryp}
Unlike other complexity measures, we obtain a distinct \ensuremath{\epsilon\text{-machine}}{} from each given time series, and for each direction we read the time series, i.e., forward or backward in time. Based on the notion that wakeful brains should be better at predicting the next sensory input~\cite{CohenEneuro}, we expect that anaesthesia should alter the information structures depending on the time direction. Our expectation translates to the following three hypotheses:
\begin{enumerate}
\item Causal irreversibility ($\Xi := C_{\mu}^{+} - C_{\mu}^{-})$, which is purely based on the summary measure of statistical complexity, should be higher for awake but lower for anaesthetised brains;
\item Crypticity ($d := 2C_{\mu}^{\pm}-C_{\mu}^{+}-C_{\mu}^{-}$) should be higher for wakeful than anaesthetised brains;
\item Symmetric KL divergence rate ($\mathcal{D}_{KLS}$) should behave similarly.
\end{enumerate}
On visual inspection of the variation in $\Xi$ for the wakeful and anaesthetised conditions, both appeared close to zero, suggesting that $\Xi$ would not have significant dependence on the condition. This impression was confirmed statistically with two-tailed $t$-tests against zero with corrections for multiple comparisons, as shown in the top-right panel of Fig.~\ref{fig:heatmap}. Thus, Hypothesis 1 above, that irreversibility should be higher for wakeful over anaesthetised brains, is not supported by the data. However, as mentioned earlier, vanishing $\Xi$ does not imply that either $d=0$ or $\mathcal{D}_{KLS}=0$. To rule out the possibility that the information structure of \ensuremath{\epsilon\text{-machines}}{} are different when read forwards, as opposed to backwards, depending on the condition, we also tested the latter two hypotheses.
With respect to crypticity, first, visual inspection of the two-tailed $t$-score map, which compares crypticity for the wakeful $d^{w}$ and anaesthetised $d^{a}$ conditions (bottom-left panel of Fig.~\ref{fig:heatmap}) strongly implies that crypticity is larger in the former compared to the latter. This difference is largest over channels 5-7 and 9. To systematically evaluate this impression, we used LME statistical analysis (described in Sec.~\ref{sec:Methods}) to determine the relationships between crypticity, $d$, and the set of variables $\{c,\lambda,\psi\}$ we employ. As expected, we found that both memory length ($\lambda$) and level of conscious arousal ($\psi$) significantly affected crypticity ($\lambda$, $\chi^2(1)=470.5$, $p<10^{-16}$) and ($\psi$, $\chi^2(1)=5.896$, $p=0.0152$) respectively. Crypticity also depended on a significant interaction between memory length and condition ($\lambda \psi$, $\chi^2(1)=6.119$, $p=0.0134$). Specific increases in crypticity around the middle brain region (bottom-left panel of Fig.~\ref{fig:heatmap}) were also evident, with a strong interaction between channel location and memory length ($\lambda c$, $\chi^2(14)=35.86$, $p=1.09\times10^{-3}$), which is similar to the result obtained for \ensuremath{\Delta C_{\mu}}{}. This LME analysis, together with the direction of effects in Fig.~\ref{fig:heatmap} (bottom-left) strongly confirms our Hypothesis 2.
Furthermore, as a more direct measure of microscopic structure, we also analysed the symmetric KL divergence rate, $\mathcal{D}_{KLS}$. Again, the two-tailed $t$-score map (Fig.~\ref{fig:heatmap}, bottom-right panel) showed support for our hypothesis. Our formal statistical analysis with LME confirmed a critical interaction between memory length and condition ($\lambda \psi$, $\chi^2(1) = 15.37$, $p<10^{-16}$), meaning that time-asymmetric information structure is lost due to anaesthesia, especially when a long memory length is taken into account. We also note other significant effects: mainly the effect of memory length ($\lambda$, $\chi^2(1) = 127.4$, $p<10^{-16}$) and the interaction between memory length and channel location ($\lambda c$, $\chi^2(14) = 85.81$, $p<10^{-16}$). Again, all significant results, due to LME tests, are reported in Table~\ref{tab:LMEresults}.
\begin{figure*}[th]
\centering
\includegraphics[width=\textwidth]
{figures/bidirectional_v4.pdf}
\caption{Exemplary digraph representations of \ensuremath{\epsilon\text{-machines}}{} for wakeful \textbf{(a-d)} and anaesthetised \textbf{(e-g)} conditions for forward-time \textbf{(a, e)}, reverse-time \textbf{(c, f)}, and bidirectional \textbf{(d, g)} analyses, all constructed from channel 5 in fly 7, at memory length $\lambda=3$. Panel \textbf{(b)} gives an example emission sequence and causal state sequence for the forward and reverse-time \ensuremath{\epsilon\text{-machine}}{} pair (a) and (c). The vertex labelling denoting causal states in (a-d) is consistent across panels, showing how the forward and reverse-time \ensuremath{\epsilon\text{-machines}}{} compose into the bidirectional \ensuremath{\epsilon\text{-machine}}{}.
The \ensuremath{\epsilon\text{-machines}}{} for the wakeful condition have statistical complexities of $C_{\mu}^{(+,w)} = 1.76$, $C_{\mu}^{(-,w)} = 1.50$, and $C_{\mu}^{(\pm,w)}=3.25$. In this example the process is irreversible according to all three quantities.
The \ensuremath{\epsilon\text{-machines}}{} for the anaesthetised condition have statistical complexities of $C_{\mu}^{(+,a)} = C_{\mu}^{(-,a)} = 1.0$ and $C_{\mu}^{(\pm,a)} = 1.9989$. The process is causally and microscopically reversible, but has nonzero crypticity.}\label{fig:fwdbwd}
\end{figure*}
Taken together, these results show that the relative complexity of the forward versus reverse direction, as measured by causal irreversibility, does not distinguish between the wakeful and anaesthetised states. However, our crypticity results demonstrate that, under anaesthesia, the structures of the forward and reverse processes are relatively similar, whereas during wakefulness their structures differ. Fig.~\ref{fig:fwdbwd} demonstrates this effect with exemplar \ensuremath{\epsilon\text{-machines}}{} reconstructed from a representative channel, from which we derived six distinct \ensuremath{\epsilon\text{-machines}}{}: three for wakeful (a, c, d), and three for anaesthetised (e-g) flies. Panel (b) shows how the time series and the transitions in the causal states of forward, reversed, and bidirectional \ensuremath{\epsilon\text{-machines}}{} are related.
Our finding that causal irreversibilities were not above zero for wakeful brains corresponds to the fact that the complexities of forward and reverse \ensuremath{\epsilon\text{-machines}}{} were not significantly different. However, the bidirectional \ensuremath{\epsilon\text{-machines}}{} for the wakeful condition were substantially more complex than those for the anaesthetised condition. The statistical complexity of bidirectional \ensuremath{\epsilon\text{-machines}}{} should equal that of forward or reverse \ensuremath{\epsilon\text{-machines}}{} if the process is completely time symmetric and deterministic~\cite{cryptCrutch}, resulting in zero crypticity. However, for non-deterministic processes, additional information may be needed to synchronise the forward and reversed processes, which would mean $d>0$. For instance, in Fig.~\ref{fig:fwdbwd}(e,f), if we are told that the forward machine is in causal state $A$, we need extra information to determine whether the reversed machine is in causal state $X$ or $Y$. Yet, the detailed structures of the forward and reverse machines are the same in this example. Our analysis is supplemented with a study of the symmetric KL divergence rate between the forward and backward processes, which measures the distance between the reconstructed \ensuremath{\epsilon\text{-machines}}{}. In other words, crypticity and the symmetric KL divergence rate quantify two different notions of temporal asymmetry; the former is information-theoretic, and the latter is information-geometric. Indeed, in general we find that the processes in the two directions differ in both ways and, further, that their difference varies significantly between conditions, as shown in the bottom two panels of Fig.~\ref{fig:heatmap}.
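For concreteness, the summary quantities discussed above can be recovered directly from the complexities quoted in the caption of Fig.~\ref{fig:fwdbwd}; the following sketch simply applies the definitions of $\Xi$ and $d$ to those numbers.
\begin{verbatim}
# Definitions: Xi = C+ - C-,  d = 2*C_pm - C+ - C-
def xi_and_d(c_fwd, c_rev, c_bi):
    return c_fwd - c_rev, 2 * c_bi - c_fwd - c_rev

# Wakeful example (channel 5, fly 7, lambda = 3):
print(xi_and_d(1.76, 1.50, 3.25))   # (0.26, 3.24): irreversible and cryptic

# Anaesthetised example:
print(xi_and_d(1.0, 1.0, 1.9989))   # (0.0, 1.9978): reversible, yet d > 0
\end{verbatim}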
\section{Discussion}
\label{sec:discussion/conclusion}
Discovering a reliable measure of conscious arousal in animals and humans remains one of the major outstanding challenges of neuroscience. The present study addresses this challenge by connecting a complexity measure to the degree of conscious arousal, taking a step toward strengthening the links between physics, complexity science, and neuroscience: we have taken tools from the first two fields and applied them to a problem in the third. Namely, we have studied the statistical complexity and time asymmetry of neural recordings in the brains of flies over two states of conscious arousal: awake and anaesthetised. We have demonstrated that differences between these macroscopic states can be revealed both by the statistical complexity of local electrical fluctuations in various brain regions and by various measures of temporal asymmetry of the hidden models that explain their behaviour. Specifically, we have analysed the single-channel signals from electrodes embedded in the brain using the \ensuremath{\epsilon\text{-machine}}{} formalism, and quantified the statistical complexity $C_{\mu}$, the causal and microscopic reversibility $\Xi$ \& $\mathcal{D}_{KLS}$, and the crypticity $d$ of the recorded data for 15 channels in 13 flies over two states of conscious arousal. We find that the statistical complexity is larger on average when a fly is awake than when the same fly is anaesthetised ($\Delta C_{\mu} > 0$; Figs.~\ref{fig:DCmulti} and~\ref{fig:heatmap}), and that the structural complexity of information and its time reversibility, captured by crypticity and the KL rate, are also reduced under anaesthesia ($\Delta d > 0$ and $\Delta \mathcal{D}_{KLS} > 0$;~Fig.~\ref{fig:heatmap}).
As we have demonstrated in this study, the local information contained within a single channel can carry information about global conscious states, which are believed to arise from interactions among many neurons. Theoretically, single channels can reflect the complexity of multiple channels through the concept of Sugihara causality~\cite{Sugihara2012}. This arises because any one region of the brain causally interacts with the rest of the brain, so that the temporal correlations in a single-channel time series contain information about the spatial correlations, i.e., information that would be contained in multiple channels. With this logic, Ref.~\cite{Tajima2015} infers the complexity of the multi-channel interactions from the temporal structure of a single-channel time series. This is often known as the backflow of information in non-Markovian dynamics~\cite{BreuerPRL2009}. The periodic structure of statistical complexity observed across channels in Fig.~\ref{fig:Cmu-raw-response} demonstrates an unexpected example of spatial effects present in our study -- one that was not observed with conventional LFP analyses. This observation may provide a motivation for multi-channel analyses.
While we already find differences between conscious states in the single-channel based \ensuremath{\epsilon\text{-machine}}{} analysis, it would be beneficial to extend the present analysis to the multi-channel scenario, in which \ensuremath{\epsilon\text{-machines}}{} can be contrasted with the methods of IIT~\cite{Casali2013, Casarotto2016, Tononi2004, Tononi2016, Oizumi2014, Mediano2019, Barrett2011, Tegmark2016, Oizumi2016PNAS, Oizumi2016PLOS}. Formal comparison of the power of the proposed methods to distinguish conscious states (such as those in Refs.~\cite{Engemann2018,Sitt2014}) will contribute to refining models and theories of consciousness.
Our results can be informally compared with a previous study, in which the \emph{power spectra} of the same data were analysed in the frequency domain~\cite{CohenEneuro}. There, a principal observation was the power in low-frequency signals in central and peripheral regions, which was more pronounced in the central region (corresponding to channels 1-6 in this study). Our \ensuremath{\epsilon\text{-machine}}{} analysis here reveals that the region between periphery and centre (channels 5-7) shows the most consistent difference in $C_{\mu}$ across memory lengths $\lambda > 2$. Ultimately, the reason for this difference lies in our distinct approach, insofar as \ensuremath{\epsilon\text{-machines}}{} are provably the optimal predictive models of a large class of time series, taking into account higher-order correlations and memory structure~\cite{CrutchPRL1989, epsilonMachines2}. Thus, our application of \ensuremath{\epsilon\text{-machines}}{} contrasts with the power spectra analysis by considering these higher-order correlations for very high-frequency signals, instead of only two-point correlations in both high- and low-frequency signals. Finally, the top-left panel of Fig.~\ref{fig:heatmap} shows that in regions corresponding to channels 1 and 13, the differences between the conditions are only seen at high values of $\lambda$.
Our multi-time analysis further reveals an interesting effect when we look more closely at, e.g., the anaesthetised \ensuremath{\epsilon\text{-machine}}{} example shown in Fig.~\ref{fig:workflow}(c). When we examine the binary strings belonging to each causal state, we find a clear split between active (consecutive strings of ones) and inactive (consecutive strings of zeros) neural behaviour, corresponding to the left and right hand sides of Fig.~\ref{fig:em-with-histories} respectively. Previous studies have demonstrated an increase in low-frequency LFP and EEG power for mammals and birds during sleep and anaesthesia, mediated by similar neural states of activity and inactivity known as `up' and `down' states~\cite{Sarasso2015, Lewis2012}. A similar phenomenon has recently been observed in sleep-deprived flies~\cite{raccuglia2019}. Consistent with other studies, our study, using general anaesthesia, does not observe these slow oscillations. Future studies with more formal comparisons between up and down states and \ensuremath{\epsilon\text{-machines}}{}, in both theory and computer simulations, may be a fruitful avenue for further research in this regard.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/sleep-c1-f1-L3-histories.pdf}
\caption{\ensuremath{\epsilon\text{-machine}}{} for the same channel, fly, and conscious state as Fig.~\ref{fig:workflow}(c), but with the histories stored in each causal state explicitly stated. The sequences after the asterisk $*$ represent the sequences of symbol observations, with the most recent observed symbol on the far right. Sequences collected within a causal state (grey circle) have significantly different future statistics from the observed sequences in other causal states. The red lines emit a ``1'' upon transition, and the blue lines emit ``0''s.}
\label{fig:em-with-histories}
\end{figure}
An analysis in terms of \ensuremath{\epsilon\text{-machines}}{} has also allowed us to discriminate between levels of conscious arousal by examining causal structures found in both forward and reverse time directions. Based on our previous finding~\cite{CohenEneuro}, as well as related concepts in temporal predictive processing~\cite{Hohwy2013, Friston2010, Tani1999, Bastos2012} and causal matching~\cite{Tononi2010}, we hypothesised that the wakeful brain may be tuned to causal structures of the world, which run forward in time, and thus that \ensuremath{\epsilon\text{-machines}}{} would be more complex for forward than reverse readings. Further, we hypothesised that such temporally tuned structural matching would be lost under anaesthesia. Our results (Sec.~\ref{sec:cryp}) are highly intriguing in three ways. First, near-zero \emph{causal} irreversibility indicates that reducing the structural complexity to a simple index is not enough to capture effects on the information structure that are sensitive to the direction of time. This is the case regardless of the level of consciousness (at least at the timescales of this study). Second, nonzero crypticity indicates that the underlying information structure is not symmetric in time. More precisely, the signals themselves encode different amounts of information when run forwards as opposed to backwards. Third, the KL divergence rate analysis definitively demonstrated the existence of greater temporal irreversibility in the wakeful as opposed to the anaesthetised state. Having said this, we are limited in drawing strong conclusions due, in part, to the relatively small observed effect size of $\Xi$, likely a consequence of our relatively small data set. Despite this, even at millisecond time scales, our study successfully identifies significant differences in the time direction of the neural recordings.
Identifying the decrease in temporal reversibility due to anaesthesia, in tandem with complexity, is of broad interest in neuroscience. While some physicists and neuroscientists have conjectured links among physics, the brain, and even consciousness through the lens of the direction of time, their accounts have remained rather speculative, and not built on any solid theoretical foundations (for related and alternative theoretical foundations, see the work by Cofr\'{e} and colleagues~\cite{cofre2019,cofre2018}). For example, using reversely played movies, the sensitivity to the direction of time has been shown to differ across brain regions in humans~\cite{hasson2008}. In animal studies, some populations of neurons (in the hippocampus) are known to become activated in a particular sequential order while the animal experiences a particular event. For example, in anticipation of the event, the neurons activate in a forward direction, but in retrospection, the neurons activate in reverse order~\cite{Diba2007}. While direct links between these empirical findings and the \ensuremath{\epsilon\text{-machine}}{} framework remain elusive, we foresee that our unified theoretical and analytical framework can potentially bridge this gap in the future.
Our study is not the first to apply complexity measures in consciousness research. Indeed, many definitions and measures of complexity have been proposed in the literature (see Ref.~\cite{Edmonds1997} for a list). Moreover, there is a flow of ideas going the other way as well~\cite{PhysRevLett.119.225301, PhysRevA.97.052320, QIIT}. However, many, if not most, of these measures cannot account for temporal correlations (memory) or temporal asymmetry, or differentiate between random and structured processes. Our interdisciplinary study, based on \ensuremath{\epsilon\text{-machines}}{}, opens up new possibilities; physics can improve its theoretical constructs through the application of tools to empirical data, while neuroscience can benefit from rigorous quantitative tools that have proven their physical basis across different spatio-temporal scales. Among those complexity measures, $C_{\mu}$ can be easily interpreted in terms of temporal structure~\cite{whyCmu}, as it has a direct relation to process predictability and memory requirements. We emphasise that the statistical complexity $C_{\mu}$ derived from \ensuremath{\epsilon\text{-machines}}{} drastically differentiates itself from other scalar complexity indices such as Lempel-Ziv complexity~\cite{schartner2015}. For one, Lempel-Ziv complexity is maximal for a random noise process, whereas the statistical complexity of the same process is zero (see Eq.~\eqref{eq:statComplexity}). In addition, the notion of temporal reversibility available in the \ensuremath{\epsilon\text{-machine}}{} framework has no counterpart in Lempel-Ziv complexity. This is a critical difference since it is known that a low-complexity forward-time \ensuremath{\epsilon\text{-machine}}{} consisting of only two causal states can have a very high-complexity reverse-time \ensuremath{\epsilon\text{-machine}}{} with countably infinite states~\cite{Ellison2011}. Thus, explicitly considering the influence of time is critical for addressing questions about complexity. When coupled with our results, we can conclude that anaesthetised brains become less structured, more random, more reversible, and approach a stochastic process with a smaller memory capacity compared to wakeful brains.
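To make this contrast tangible, the sketch below compares a fair-coin sequence with a perfectly periodic one using the classic Lempel-Ziv (1976) phrase-counting scheme (the standard Kaspar--Schuster implementation); it is an illustration of the general point above, not a computation performed in this study. For the coin sequence the \ensuremath{\epsilon\text{-machine}}{} has a single causal state, so $C_{\mu}=0$ exactly, even though its Lempel-Ziv complexity is near maximal; the period-2 sequence instead has two equally likely causal states (its phase), giving $C_{\mu}=1$ bit despite a minimal phrase count.
\begin{verbatim}
import numpy as np

def lz76(s):
    """Number of phrases in the Lempel-Ziv (1976) parsing of string s."""
    i, k, l = 0, 1, 1
    k_max, c, n = 1, 1, len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:              # a new phrase is complete
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

rng = np.random.default_rng(1)
noise = "".join(rng.choice(list("01"), size=1000))   # C_mu = 0
periodic = "01" * 500                                # C_mu = 1 bit
print(lz76(noise), lz76(periodic))   # many phrases vs. very few
\end{verbatim}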
Overall, our results suggest that measures of complexity extracted from \ensuremath{\epsilon\text{-machines}}{} might be able to identify further structures that are affected by anaesthesia at different spatial and temporal scales. It is also likely that applying a similar analysis to other data sets, in particular human EEG data, will lead to new discoveries regarding the relationship between consciousness and complexity that can be retrieved simply at the single-channel level.
\begin{acknowledgements}
RNM, FAP, NT, KM acknowledge support from Monash University's Network of Excellence scheme and the Foundational Questions Institute (FQXi) grant on \textit{Agency in the Physical World}. AZ was supported through Monash University's Science-Medicine Interdisciplinary Research grant. DC was funded by an Overseas JSPS Postdoctoral Fellowship. NT was funded by Australian Research Council Discovery Project grants (DP180104128, DP180100396). NT and CD were supported by a grant (TWCF0199) from Templeton World Charity Foundation, Inc. We thank Felix Binder, Alec Boyd, Mile Gu, Rhiannon Jeans, and Jayne Thompson for valuable comments. KM is
supported through Australian Research Council Future Fellowship FT160100073.
\end{acknowledgements}
\section*{Appendix}
\subsection{Linear mixed-effects model}
\label{app:lme}
In this section, we demonstrate an example of an LME analysis for the case of statistical complexity $C_{\mu}$ in the forward time direction. For the case when the time direction $\xi$ is included as an effect, the only change this makes to the procedure is an increase in the dimensions of the effects matrix $\mathbf{F}$. Performing an LME analysis on the other quantities used in this study, such as crypticity or the KL rate, follows the same procedure outlined here.
The main goal of the LME analysis we perform in this study is to determine the degree to which memory length ($\lambda$), channel location ($c$), level of conscious arousal ($\psi$), and their combinations contribute to statistical complexity $C_{\mu}$. LME accomplishes this by modelling statistical complexity as a general linear regression equation (Eq.~\eqref{eq:glme}), whose response is predicted by the aforementioned parameters $\lambda$, $c$, and $\psi$. In this Appendix, we show the exact form of the linear regression equation used in this analysis, while referring to the terminology introduced in the methods (Sec.~\ref{sec:Methods}).
We begin by restating Eq.~\eqref{eq:glme} for the case of statistical complexity, $\mathcal{C} = \v{F}\gv{\beta} + \v{R}\v{b} + \mathcal{E}$, which has the form of a general multidimensional linear equation. We will set aside the right hand side of the equality for now. On the left hand side, statistical complexity takes the form of a column vector $\mathcal{C}$. Each row corresponds to the unique response of $C_{\mu}$ at a specific selection of parameters. There is a general freedom of choice associated with the number of parameters one assigns to the elements of $\mathcal{C}$. We index the rows with fly number $f$, channel location $c$, and conscious arousal state $\psi$. That is, the $(i,j,k)$th element is
\begin{gather}
[\mathcal{C}]_{(i,j,k)} = C^{(i,j,k)}_\mu.
\end{gather}
In other words, it is the $i$th fly's $j$th channel in the $k$th condition. Thus, $\mathcal{C}$ has length $|f|\times|c|\times|\psi|=390$. Each $C_{\mu}$ in this vector is a function of $\lambda$.
\begin{table*}[t]
\footnotesize
\centering
\begin{tabular}{c|ll|ll|ll}
\hline \hline
$Q$ & \multicolumn{2}{c|}{$1^{st}$ Order} & \multicolumn{2}{c|}{$2^{nd}$ Order} & \multicolumn{2}{c}{$3^{rd}$ Order} \\ \hline
$C_{\mu}^+$ & $\lambda:\;\chi^2(1)=443.64$ & $\;\;p<10^{-16}$ & $\lambda c:\;\chi^2(14)=42.876$ & $\;\;p = 8.97\times 10^{-5}$ & $\lambda c \psi:\; \chi^2(14)=24.00$ & $\;\;p=0.0458$ \\
{} & {} & {} & $\lambda \psi:\;\chi^2(1)=14.63$ & $\;\;p=1.31\times 10^{-4}$ & {} & {} \\
&&&&&& \\
$\Delta C_{\mu}^{+}$ & $\lambda:\; \chi^2(1)=20.97$ & $\;\;p=4.65\times 10^{-6}$ & $\lambda c:\; \chi^2(14)=37.19$ & $\;\;p=6.90\times 10^{-4}$ & {} & {} \\
&&&&&& \\
$\Xi$ & $\psi:\; \chi^2(1)=4.870$ & $\;\;p=0.0273$ & $\lambda \psi:\; \chi^2(1)=5.565$ & $\;\;p=0.0183$ & $\lambda c \psi:\; \chi^2(14)=31.79$ & $\;\;p=4.29\times 10^{-3}$ \\
{} & $\lambda:\; \chi^2(1)=6.725$ & $\;\;p=9.51\times 10^{-3}$ & {} & {} & {} & {} \\
&&&&&& \\
$d$ & $\psi:\; \chi^2(1) = 5.896$ & $\;\;p=0.0152$ & $\lambda \psi:\; \chi^2(1)=6.119$ & $\;\;p=0.0134$ & {} & {} \\
{} & $\lambda:\;\chi^2(1)=460.5$ & $\;\;p<10^{-16}$ & $\lambda c:\;\chi^2(14)=35.86$ & $\;\;p=1.09\times 10^{-3}$ & {} & {} \\
&&&&&& \\
$\mathcal{D}_{KLS}$ & $\lambda:\; \chi^2(1)=127.4$ & $\;\;p<10^{-16}$ & $\lambda c:\; \chi^2(14)=85.81$ & $p<10^{-16}$ & {} & {} \\
{} & {} & {} & $\lambda \psi:\; \chi^2(1)=127.4$ & $\;\;p<10^{-16}$ & {} & {} \\
\hline \hline
\end{tabular}
\caption{Significant $\chi^2$ and $p$ values of effects of channel $c$, memory length $\lambda$, and condition $\psi$, for informational quantities $Q$ obtained via LME analysis. First-order effects correspond to significant channel, memory, or condition responses on an informational quantity, while second- and third-order effects correspond to interactions between these effects. $\chi^2$ values are reported with $n-1$ degrees of freedom in the parentheses, corresponding to the number of effects removed under the null model, described in Sec.~\ref{sec:Methods}.}
\label{tab:LMEresults}
\end{table*}
The matrix $\v{F}$ introducing the set of fixed effects $\mathcal{F} = \{\lambda, c, \psi, \lambda c, \lambda \psi, c \psi, \lambda c \psi \}$ into the model (known in the context of general linear models as the \emph{design matrix}) can then be represented as $\v{F} = (\v{F}_1,\dots, \v{F}_{13})^T$, with each element corresponding to the design matrix of a specific fly. These individual fly response matrices can be explicitly expressed as
\begin{gather}
\v{F}_f =
\begin{pmatrix}
\vec{\gv{\lambda}} & \v{D} & \vec{\gv\Psi}_W &
\lambda\v{D} & \lambda\vec{\gv{\Psi}}_W & \v{D}_{\Psi_W} & \lambda\v{D}_{\Psi_W}\\
\vec{\gv{\lambda}} & \v{D} & \vec{\gv\Psi}_A &
\lambda\v{D} & \lambda\vec{\gv{\Psi}}_A & \v{D}_{\Psi_A} & \lambda\v{D}_{\Psi_A}
\end{pmatrix},
\end{gather}
where $\vec{\gv{\lambda}} = (\lambda,\ldots,\lambda)^{T}$ and $\vec{\gv{\Psi}}_X = (\Psi_X,\ldots,\Psi_X)^{T}$ are column vectors of length $15$ containing the predictor variables of memory length and level of conscious arousal, respectively; $\v{D}$ is the $15\times 15$ identity matrix which ``selects out'' the channel of interest, and $\v{D}_{\Psi_X} = \text{diag}(\Psi_X,\ldots, \Psi_X)$ is the $15\times 15$ matrix which ``selects out'' the condition of interest correlated with the level of conscious arousal, where
\begin{gather}
\Psi_{W (A)} =
\begin{cases}
1 & \text{if $\psi=$ wakeful (anaesthetised)} \\
0 & \text{otherwise}.
\end{cases}
\end{gather}
In a similar fashion, the expression for the matrix containing the random effects $\v{R}$ can be determined. For the case of our study, we only consider random effects arising due to correlations between channels within a specific fly. The result of this is an adjustment to the intercept of the linear model for each fly and channel combination. Therefore, the random effects matrix $\v{R}$ is simply an identity matrix of dimension $390$.
The accompanying elements of the random effects vector $\v{b}$ consist of regression coefficients $b_{fc}$ describing the strength of each intercept adjustment.
The LME analysis, including coefficient fitting and log-likelihood estimation, was carried out by running \texttt{fitlme.m} in MATLAB R2018b.
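For readers who prefer an open-source route, an analogous likelihood-ratio test can be sketched with \texttt{statsmodels} in Python. The long-format column names below are hypothetical, and the random-effect structure (a single intercept per fly) is a simplification of the per-fly-and-channel intercept adjustment described above; the example tests the $\lambda\psi$ interaction, mirroring the $\chi^2(1)$ entries of Table~\ref{tab:LMEresults}.
\begin{verbatim}
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Hypothetical long-format table, one row per (fly, channel, cond, lam):
df = pd.read_csv("cmu_long_format.csv")  # columns: cmu, lam, channel, cond, fly

full = smf.mixedlm("cmu ~ lam*C(channel) + C(channel)*C(cond) + lam:C(cond)",
                   df, groups=df["fly"]).fit(reml=False)
null = smf.mixedlm("cmu ~ lam*C(channel) + C(channel)*C(cond)",
                   df, groups=df["fly"]).fit(reml=False)

# Likelihood-ratio test for the lam:cond (lambda-psi) interaction.
lr = 2 * (full.llf - null.llf)
ddf = len(full.fe_params) - len(null.fe_params)
print(f"chi2({ddf}) = {lr:.2f}, p = {chi2.sf(lr, ddf):.3g}")
\end{verbatim}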
The Gamma-ray Large Area Space Telescope (GLAST) is scheduled to be
launched in October 2007 and will operate for 5-10 years in a
low-earth orbit. Unlike its predecessor, the Compton Gamma-ray
Observatory (CGRO), GLAST is not intended to make pointed
observations. Instead, it will operate primarily as an all-sky monitor
in which it continuously scans the sky, rocking $\pm 35^{\circ}$ about
the zenith every 90-minute orbit. GLAST will carry two instruments:
\begin{enumerate}
\item the Large Area Telescope (LAT), the main GLAST instrument,
sensitive to gamma rays between 20 MeV and 300 GeV, and
\item the GLAST Burst Monitor (GBM), dedicated to detecting gamma-ray
bursts (GRBs) between 8 keV and 25 MeV.
\end{enumerate}
\noindent Both instruments have completed all environmental testing and are
currently being integrated onto the spacecraft.
The GBM \cite{vonKienlin04} consists of twelve NaI crystal detectors
with sensitivity from 8 keV to 1 MeV and two BGO crystal detectors
with sensitivity from 150 keV to 30 MeV. The instrument has a field of
view of 9.5 sr (the entire sky not occulted by the Earth) and $\sim$
$12\%$ energy resolution at 511 keV. The GBM is capable of on-board
localizations of $<15^{\circ}$ in 1.8 seconds and $2-3^{\circ}$ within
several seconds to a few minutes. It is anticipated to detect $\sim$
200 GRBs per year, $> 50$ of which will be in the field of view of the
LAT.
The LAT \cite{michelson06} is a pair-conversion instrument. In each of
16 precision trackers, 14 layers of tungsten foil facilitate pair
conversion and 18 layers of X-Y pairs of single-sided silicon strip
detectors measure the pair tracks. The pair-initiated shower deposits
its energy in a calorimeter, composed of 1536 CsI crystals located at
the bottom of the LAT. A segmented array of plastic scintillators
surrounding the instrument detects charged particles as they enter and
is used to veto background events depending on energy and on the
correspondence of the hit scintillator tiles with tracks found in the
tracker.
Table \ref{LAT capabilities} \cite{lat_url} summarizes the LAT
performance. With a field of view of 2.4 sr, the LAT will ``see''
$20\%$ of the sky at any instant and will scan the entire sky once
every two orbits, or three hours. The predicted one-year sensitivity
is $F (E>100~\textrm{MeV}) > 3 \times 10^{-9}~\textrm{cm}^{-2}\,\textrm{s}^{-1}$ for a point source with a differential photon
spectrum proportional to $E^{-2}$ observed at high latitude. The
brightest point sources will be localized to $\sim0.4'$ and the
weakest sources to several arcminutes. The LAT will be much more
sensitive than its predecessor, the EGRET instrument aboard CGRO; in
one day, it will detect (at $5\sigma$) the weakest sources that EGRET
detected during the entire CGRO mission. The LAT is projected to
detect thousands of gamma-ray sources over the lifetime of the GLAST
mission.
\begin{center}
\begin{table}[h]
\caption{\label{LAT capabilities}LAT capabilities.}
\centering
\begin{tabular}{@{}ll}
\br
Parameter&Present Design Value\\
\mr
Peak Effective Area&$10,000~\textrm{cm}^2$ at 10 GeV\\
Energy Resolution, 100 MeV, on-axis&$9\%$\\
Energy Resolution, 10-300 GeV, on-axis&$<15\%$\\
PSF, $68\%$, on-axis, 10 GeV (100 MeV) & $0.09^{\circ} (3.4^{\circ}) $\\
Field of view & $2.4~\textrm{sr}$\\
Source Location Determination & $< 0.4'$\\
\br
\end{tabular}
\end{table}
\end{center}
\section{Blazar physics with the GLAST LAT}
The LAT is expected to advance the scientific understanding of all
types of gamma-ray emitting objects, including Solar System sources
like the Sun and Moon, Galactic sources like supernova remnants and
pulsars, and extragalactic sources such as active galaxies and
GRBs. It will map the structured diffuse emission from the Milky Way
and will detect, or perhaps resolve, the diffuse extragalactic
emission as well. The LAT may also detect gamma rays from dark matter
annihilation and will almost certainly find new categories of
gamma-ray sources. Each of these topics is covered in
\cite{michelson06}. Here we have chosen to concentrate on one type of
gamma-ray emitter, blazars, a population with significant scientific
overlap with ground-based TeV telescopes. We explore the potential of
LAT observations for understanding the physics of AGN jets.
\subsection{\label{sec: variability}Monitoring variability}
The frequency and uniformity of the sky coverage of the LAT will allow
sensitive, evenly-sampled monitoring of AGN variability across the
sky. Figure \ref{fig: variability} shows a 55-day synthetic light
curve that includes stochastic variability and a moderately bright
flare (solid line). The data points indicate fluxes derived for
one-day intervals from simulated LAT data. The data were analyzed
using an unbinned maximum likelihood technique that is being developed
as a standard analysis tool. The inset shows the hardness ratios ($F(E>1~\textrm{GeV}) / F(E<1~\textrm{GeV})$) recovered from the
likelihood analysis vs. the true hardness ratios, indicating that
hardness ratios can be accurately measured on daily timescales, even
in low states. The horizontal line indicates the threshold for a
public data release in the first year; fluxes and flux ratios on any
object whose flux above 100 MeV exceeds $2 \times 10^{-6}
\textrm{cm}^{-2} \textrm{s}^{-1}$ will be released to the community
for follow-up observations and monitoring \cite{drp}. The right-hand
plot in Figure \ref{fig: variability} shows a close-up of the flare
with 12-hour time intervals. During moderate flares like the one
shown, fluxes can be measured to better than $10\%$ accuracy and
spectral indices can be measured to better than $5\%$ on 12-hour time
scales. Over the duration of the GLAST mission, the LAT is expected to
measure daily fluxes from thousands of sources with this level of
accuracy. Twelve-hour and hourly spectra can be measured for
approximately 100 and ten sources, respectively.
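To illustrate the counting statistics behind such daily flux and hardness points, the toy Poisson estimate below simulates a one-day measurement in two energy bands; the exposure and band fluxes are invented placeholders, and the sketch ignores the LAT's energy-dependent response and background, so it is not the likelihood analysis described above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

exposure = 8.0e3 * 2.0e4        # toy one-day exposure: cm^2 x s
flux_soft = 1.8e-7              # true flux below 1 GeV  [cm^-2 s^-1]
flux_hard = 5.0e-8              # true flux above 1 GeV  [cm^-2 s^-1]

# Poisson-fluctuated counts in each band.
n_soft = rng.poisson(flux_soft * exposure)
n_hard = rng.poisson(flux_hard * exposure)

# Estimated hardness ratio with simple Poisson error propagation.
hardness = (n_hard / exposure) / (n_soft / exposure)
rel_err = np.sqrt(1.0 / n_soft + 1.0 / n_hard)
print(f"hardness = {hardness:.3f} +/- {hardness * rel_err:.3f}")
\end{verbatim}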
\begin{figure}[h]
\includegraphics[width=20pc]{daily_lc.eps}%
\includegraphics[width=20pc]{flare_lc.eps}\hspace{2pc}%
\begin{minipage}[b]{40pc}\caption{\label{fig: variability}\emph{left}: A 55-day synthetic blazar light curve (solid line) plus one-day LAT exposures (data points). The inset shows the recovered vs. true hardness ratios. \emph{right}: Close-up of the flare indicated in the left panel, with LAT data in 12-hour exposures.}
\end{minipage}
\end{figure}
\subsection{Time-resolved spectral energy distributions}
The level of performance indicated in Section \ref{sec: variability}
suggests that the LAT will be able to measure the high-energy emission
from dozens of blazars on timescales of several hours. The synchrotron
cooling timescale for a population of relativistic electrons in the
inner jet can be several days for reasonable choices of the jet
parameters \cite{bottcher02}. Therefore, within the context of
leptonic models, 12-hour LAT spectra represent snapshots of the
particle distribution as it cools, and the LAT can track changes in
the gamma-ray spectral index as the highest-energy electrons
preferentially lose their energy to inverse-Compton scattering. In the
simplest SSC models, the free parameters are the magnetic field $B$,
the particle spectral index $p$, the lower- and upper-energy cutoffs
$\gamma_1$ and $\gamma_2$, respectively, the size of the emitting
region $R$, and the bulk Lorentz factor $\Gamma$. Each LAT snapshot
constrains $B$, $p$, $\gamma_1$, and $\gamma_2$. If, in addition,
simultaneous X-ray observations that resolve the shortest variability
timescales are available, then these measure $R$ and $\Gamma$. The
X-ray spectral energy distributions (SEDs) also independently
constrain $B$, $p$, $\gamma_1$, and $\gamma_2$. We would expect the
constraint on $B$ to be particularly stringent with such a set of
observations, if indeed it remains constant as the electron population
cools.
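For orientation, the day-scale cooling quoted above follows from the textbook synchrotron loss rate (a standard result, not a formula from the works cited here): in the comoving frame, an electron of Lorentz factor $\gamma$ in a magnetic field $B$ cools on a timescale
\begin{equation}
t_{\rm syn} = \frac{6\pi m_e c}{\sigma_T \gamma B^2}
\simeq 7.7\times 10^{8} \left(\frac{B}{1~{\rm G}}\right)^{-2} \gamma^{-1}~{\rm s},
\end{equation}
so that, for instance, $B \sim 0.1$ G and $\gamma \sim 10^{5}$ give $t_{\rm syn} \sim 10^{6}$ s, i.e. several days, in line with the estimate of \cite{bottcher02}.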
\subsection{Time-averaged SEDs}
The estimated number of blazars that GLAST will detect ranges from at
least a thousand \cite{dermer06} to several thousand \cite{stecker96,
chiang98, mucke00}. The majority of these will be faint, and long
integration times will be required to build up a reasonable
high-energy SED. Here we explore the physics that can be probed with
SEDs that measure only the time-averaged properties of the jet. In
particular, we consider the case of a week of observations of
Markarian 501 (Mrk 501). In 1997, Mrk 501 was monitored by radio,
optical, X-ray ($2<E<12 ~ \textrm{keV,} ~ 20<E<200 ~ \textrm{keV}$),
and TeV ($E>800 ~ \textrm{GeV}$) telescopes simultaneously, and two
week-long epochs in medium and high states of activity were used to
fit SSC models \cite{petry00}. The modeling was realistic in that it
evolved the electron population self-consistently as it
cooled. Unfortunately, because no data existed on the rising edge of
the inverse-Compton peak, the models could not constrain $B$ or
$\gamma_1$, and so these parameters were fixed at nominal values. In
Figure \ref{fig: mrk 501}, we show the models for the 1997 medium- and
high-state epochs (solid lines). The X-ray points in Figure \ref{fig:
mrk 501} represent 25.2 ks (or 1 hour per day for a week) from a
BeppoSAX-like instrument; these cover the low-energy peak of the
SED. The gamma-ray points assume a week's worth of sky survey
observations with the LAT.
As Figure \ref{fig: mrk 501} shows, joint LAT and VERITAS
observations of Markarian 501, and of other high-frequency-peaked BL
Lac objects, will cover the entire high-energy peak of the SED. This
is an extremely powerful measurement for understanding the origin of
the high-energy emission, and such broad high-energy coverage will not
be possible until the launch of GLAST. In the context of leptonic
models, the LAT coverage of the low-energy half of the inverse-Compton peak can
constrain $B$ and $\gamma_1$, unlike the previous modeling of
\cite{petry00}. If simultaneous X-ray data are also available that
cover the low-energy peak of the SED, then the overall energetics of
the inner jet are known. We can directly measure the relative
contributions of synchrotron and inverse-Compton cooling in the
jet. This type of complete, simultaneous coverage constrains all of
the parameters of simple SSC models: $B$, $p$, $\gamma_1$, $\gamma_2$,
$\Gamma$, and $R$.
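The statement that variability timescales measure $R$ rests on the standard light-crossing (causality) argument, which we recall here for completeness (this relation is generic, not specific to the modeling of \cite{petry00}):
\begin{equation}
R \lesssim \frac{c \, \Delta t_{\rm var} \, \delta}{1+z},
\end{equation}
where $\delta$ is the Doppler factor of the emitting region and $z$ the source redshift, so that a measured $\Delta t_{\rm var}$ ties $R$ to $\delta$ and hence to $\Gamma$.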
\begin{figure}[h]
\includegraphics[width=20pc]{plot_both_fortevconf.eps}\hspace{2pc}%
\begin{minipage}[b]{20pc}\caption{\label{fig: mrk 501}The SSC models in high and medium states from \cite{petry00} (solid lines) are used to predict the LAT counts from a week of observations in survey mode. The points show the predicted LAT and X-ray counts from a binned likelihood analysis, and the shaded band indicates the $3\sigma$ LAT error from an unbinned likelihood analysis. The U-shaped line indicates the VERITAS sensitivity expected from 15 hours of observations (courtesy of R. Ong).}
\end{minipage}
\end{figure}
\section{Conclusions}
We have described the two GLAST instruments and explored the
constraints that LAT observations can make on leptonic emission models
of AGN jets. We emphasize that none of the results shown require
pointed LAT observations; they are all achievable with the all-sky
scanning mode of observing. Of course, the most interesting findings
may be from sources where the LAT data rule out a simple SSC
picture. In these cases, either more complicated leptonic modeling or
hadronic modeling must be invoked. Finally, it is clear from the
examples here that in order to optimize the scientific return of GLAST
for blazars, simultaneous multi-wavelength data are essential,
especially from X-ray satellites and from TeV instruments such as
VERITAS and H.E.S.S.
\section*{References}
Slow roll inflation\footnote{For a general discussion
and references on inflation see \cite{lcovi:ly98}} requires
the flatness conditions $\epsilon\ll 1$ and $|\eta|\ll 1$
on the potential, where
\begin{equation}
\epsilon \equiv \frac{1}{2} M_{Pl}^2
\left( \frac{V'}{V} \right)^2 ; \,\,\,
\eta \equiv M_{Pl}^2{V^{\prime\prime}\over V} \,,
\end{equation}
and $M_{Pl} \equiv (8\pi G)^{-1/2} = 2.4\times 10^{18}
\mbox{GeV}$.
The first condition is easily satisfied in most inflationary
models where the inflaton lies near to an extremum of the
potential, but the second is problematic. In fact, during
inflation in a generic supergravity potential all scalar fields
\cite{lcovi:dine,lcovi:coughlan}, and in particular the inflaton
\cite{lcovi:cllsw}, acquire a contribution to the mass-squared
of magnitude $V/M_{Pl}^2$, which spoils this condition.
Very few proposals that do not rely on some sort of fine
tuning have been put forward to solve this problem~\cite{lcovi:ly98};
specific types of K\"ahler potential
and superpotential can succeed in canceling the dangerous
contribution, as happens in no--scale supergravity.
The aim of this paper is to investigate instead the proposal of
Stewart~\cite{lcovi:st97,lcovi:st97bis}: in this scenario
the contribution to the inflaton mass is unsuppressed at
high scales, but loop corrections can flatten the inflaton
potential to realize sufficient inflation without any
significant fine-tuning.
We will explore the potential in a general model of this kind
and find the region of parameter space allowed by the observed
magnitude and spectral index of the curvature perturbation.
We will finally discuss the naturalness of such a picture
and the consequences of future observations.
\section{The running mass models}
In the model proposed by Stewart \cite{lcovi:st97}, slow-roll
inflation occurs, with the following Renormalization Group (RG)
improved potential
for the canonically normalized inflaton field $\phi$:
\begin{equation}
V =V_0 + \frac{1}{2} m_\phi^2 (\phi) \phi^2 +
\frac{1}{2} m_\psi^2 (\phi) \psi^2
+ \frac{1}{4} \lambda (\phi) \phi^2 \psi^2
+ \cdots \,.
\label{vinf}
\end{equation}
The constant term $V_0$ is supposed to come from the supersymmetry
breaking and to dominate at all relevant field values.
Non--re\-nor\-ma\-li\-zable terms, represented by the dots,
give the potential a minimum at large $\phi$,
but they are supposed to be negligible during inflation.
The last two terms also vanish during inflation, since $\psi =0$,
but are responsible for the hybrid exit from the inflationary period.
The inflaton mass-squared and all the other parameters
depend on the renormalization scale $Q$, and following
\cite{lcovi:st97,lcovi:st97bis} we have taken $ Q=\phi $,
where now $\phi$ denotes the classical v.e.v. of the inflaton field
during inflation.
Such a choice of the renormalization scale minimizes the one loop
correction to the potential, since the main contribution
goes like $\ln (\phi/Q)$ for $\phi$ larger than any other scale,
and therefore the potential in eq. (\ref{vinf}) is effectively
equivalent to the full one loop potential.
If the inflaton v.e.v. is not the dominant scale, then some other
choice of $Q$ will be appropriate and the simplification we have
made is no longer viable. We will assume that the
inflaton v.e.v. is the dominant scale up to the end of inflation.
At the Planck scale, $m_\phi^2 (M_{Pl})$ is supposed to have
the generic magnitude
\begin{equation}
|m^2_0 |=|m^2_\phi (M_{Pl}) | \sim {V_0 \over M^2_{Pl}}
\label{mexpect1}
\end{equation}
coming from supergravity corrections \cite{lcovi:cllsw,lcovi:clr98}.
Without running, this would give $|\eta|\sim 1$, preventing
slow--roll inflation. But at field values below the
Planck scale, the RG drives $m^2_\phi (\phi)$
to small values, corresponding to $|\eta(\phi)|\ll 1$, and slow--roll
inflation can take place. We have in fact that the slow-roll parameters
are given in our case by
\begin{eqnarray}
\epsilon &=& {M^2_{Pl}\phi^2\over 2 V_0^2}\left[ m^2_\phi (\phi) + {1\over 2}
{d m^2_\phi \over d\ln (\phi)} \right]^2\\
\eta &=& {M^2_{Pl}\over V_0} \left[ m^2_\phi (\phi) + {3\over 2}
{d m^2_\phi \over d\ln (\phi)} + {1\over 2} {d^2 m^2_\phi \over d\ln^2(\phi)}
\right];
\end{eqnarray}
since the derivatives of $m^2_\phi$ are suppressed by the coupling constant,
both $\epsilon$ and $\eta$ are small around the value of $\phi$ where
the inflaton mass vanishes, and inflation can successfully take place
under such conditions.
Since in this model the $\eta$ parameter changes considerably
as $\phi$ decreases, slow-roll inflation will continue until some
epoch $\phi_{end}$, when either the critical value
$\phi_c = \sqrt{- 2 m^2_\psi/\lambda} $ is reached or $\eta(\phi)$ becomes of
order $1$.
To reduce the number of parameters involved in our analysis, we will
assume the latter to be the case, so that both the flattening of the
potential and the end of slow roll inflation are due to the
mass running; the critical value $\phi_c$ will then be reached after
a brief phase of fast--roll, which should not change the number of
e-folds considerably.
We have then that the number of e-folds generated while the inflaton
runs from value $\phi$ to $\phi_{end}$ is given in the slow
roll approximation by
\begin{eqnarray}
{\cal N}(\phi) &=& \int_{\phi_{end}}^{\phi} d\phi {V \over M^2_{Pl} V'}
\nonumber\\
&=& {V_0\over M^2_{Pl}} \int_{\phi_{end}}^{\phi}
{d\ln(\phi) \over m^2_\phi(\phi) +
{1 \over 2} {d m^2_\phi(\phi)\over d\ln(\phi)}}
\label{lcovi:N}
\end{eqnarray}
where $ m^2_\phi(\phi) $ is given by solving the RG equations.
\section{The scale dependence of the spectral index}
The scale dependence of the spectral index in this kind of model
is closely related to the RG equation of the inflaton mass.
The case of a gauge coupling dominated running has been studied
in \cite{lcovi:st97bis,lcovi:clr98,lcovi:cl98} while the Yukawa
dominated running has been considered in \cite{lcovi:co98}.
We will review the two extreme cases in a toy model in the
following, concentrating on the case where inflation takes place
while the inflaton rolls from the region $m^2_\phi \simeq 0$
towards the origin.
A useful way to understand the general behaviour is to
consider the linear approximation for the running inflaton
mass
\begin{equation}
m^2_\phi (\phi) \simeq - {V_0\over M^2_{Pl}} \left[
\mu^2_\star + c \ln (\phi/\phi_\star) \right]
\end{equation}
where the $\star $ denotes values of the variables where
$V'$ vanishes and the constant $c$ is small and proportional
to the relevant coupling.
We have then that the observational quantities can all be written
as functions of the three dimensionless parameters
\begin{eqnarray}
c &=& - {M^2_{Pl}\over V_0}{d m^2_\phi \over d\ln(\phi)}\Big|_{\phi=\phi_\star}
= - 2 \mu^2_\star \\
\tau &=& - |c| \ln(\phi_\star/ M_{Pl}) \\
\sigma &=& \lim_{\phi\rightarrow\phi_\star} c e^{c {\cal N}(\phi)}
\ln (\phi_\star/\phi),
\end{eqnarray}
where ${\cal N}(\phi) $ is given by eq.~(\ref{lcovi:N}). Assuming
the linear approximation to hold up to the end of inflation, this
expression simplifies to
\begin{eqnarray}
{\cal N}(\phi) &=& - {1\over c} \int_{\phi_{end}}^{\phi}
{ d\ln(\phi)\over \ln (\phi/\phi_\star)}\\
&=& - {1\over c} \ln \left| {\ln \left( \phi /\phi_\star\right)
\over \ln \left( \phi_{end} /\phi_\star\right)} \right| .
\end{eqnarray}
Note that the e-folding number is inversely proportional to the
coupling (contained in $c$), so that a small coupling automatically gives
sufficient inflation.
We see also that the parameter $\sigma$ gives a direct measure
of the departure from the linear approximation at $\phi_{end}$:
if the approximation held all the way, $\sigma$ would be given by
$ \sigma = \pm 1+c$ for $\phi \rightarrow \phi_\star$,
i.e. it would be a number of order 1. In general the approximation
breaks down well before $\phi_{end}$, and $\sigma$ can also take very
large values.
We obtain then for the spectral index in the linear approximation:
\begin{equation}
n({\cal N})-1= 2\sigma e^{-c{\cal N}} - 2 c;
\label{lcovi:n-1}
\end{equation}
while the COBE normalization imposes a constraint on $V_0$:
\begin{equation}
{V_0^{1/2}\over M^2_{Pl}} = 5.3\times 10^{-4} |\sigma| \exp
\left[ -{\tau\over c} - c {\cal N}_{COBE}
- {\sigma \over c} e^{-c {\cal N}_{COBE}} \right].
\end{equation}
So for every particular model, the experimental constraints
limit the range of the parameters allowed. We see in
Fig. \ref{lcovi:1} the region in the $\sigma - c$ plane compatible
with a spectral index $|n-1| \leq 0.2 $ at ${\cal N} = 50$.
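As a numerical illustration of eq.~(\ref{lcovi:n-1}) and of the COBE normalization (the parameter values below are arbitrary sample points in the first quadrant, not fits):
\begin{verbatim}
import numpy as np

def spectral_index(NN, c, sigma):
    """n(N) - 1 in the linear approximation."""
    return 2.0 * sigma * np.exp(-c * NN) - 2.0 * c

def v0_quarter(c, sigma, tau, N_cobe=50):
    """V0^{1/4} in Planck units from the COBE normalization."""
    v0_half = 5.3e-4 * abs(sigma) * np.exp(
        -tau / c - c * N_cobe - (sigma / c) * np.exp(-c * N_cobe))
    return v0_half ** 0.5

c, sigma, tau = 0.05, 1.0, 0.3       # sample point with positive c, sigma
print(spectral_index(50, c, sigma))  # n - 1 = 0.064, inside |n-1| <= 0.2
print(v0_quarter(c, sigma, tau))     # V0^{1/4} / M_Pl ~ 1.4e-4
\end{verbatim}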
\begin{figure}[t!]
\centerline{
\epsfig{file=lcovifig1a.ps,height=3in,width=3in}
}
\vspace{10pt}
\caption{
Lines of constant spectral index in the $\sigma - c$ plane
for ${\cal N} = 50$; assuming that this e-folding number
corresponds to COBE scales, the allowed region is between the
long--dashed and dotted lines.
}
\label{lcovi:1}
\end{figure}
Note that every quadrant corresponds to a different type of inflationary
model:
\begin{itemize}
\item{positive $c$ implies that the inflaton mass changes sign from
negative to positive, and therefore a maximum in the RG improved potential
develops around $m^2_{\phi}\simeq 0$; in such a case the sign of $\sigma$
indicates whether the inflaton is rolling towards the origin ($\sigma>0$) or
towards large field values ($\sigma<0$), always away from the maximum;}
\item{negative $c$ implies on the contrary that the inflaton mass changes
sign from positive to negative, and a minimum in the RG improved potential
develops around $m^2_{\phi}\simeq 0$; again the sign of $\sigma$ is
related to the direction of the inflaton's motion: $\sigma >0$ means
that the inflaton is rolling towards the minimum from large field
values, while $\sigma <0 $ means from small field values.}
\end{itemize}
The COBE normalization gives additional bounds on the value of $\tau$
for every choice of $c,\sigma$, since $V_0$ certainly has to be larger
than the scale of nucleosynthesis (for instant reheating we have in fact
$T_{RH} = V_0^{1/4}$, but usually $T_{RH} < V_0^{1/4}$).
\section{A simple toy model}
Let us consider as an example the case of the superpotential
\begin{equation}
W = \lambda S \mbox{\rm Tr} \left(\phi_1 \phi_2\right)
\label{W}
\end{equation}
where $S$ is a singlet chiral superfield, while the $\phi_i$ are chiral
superfields
in the adjoint representation of the gauge group $SU(N)$.
We can easily compute the scalar potential given by (\ref{W})
in the limit of unbroken supersymmetry and, writing the adjoint fields in
the fundamental basis\footnote{We define the fundamental representation
of $SU(N)$ $t_a$ such that $\mbox{\rm Tr}
\left( t_a t_b \right) = {1\over 2}
\delta_{ab} $ and $[t_a,t_b] = f_{abc} t_c $, while for the adjoint
representation, e.g. $T^a_{ij} = f_{aij} $, we have
$\mbox{\rm Tr} \left( T_a T_b \right) = N \delta_{ab} $.}
$\phi_i = \phi_i^a t_a $, it is given by:
\begin{equation}
V = {\lambda^2 \over 4} | \phi_1^a \phi_2^a|^2 + {\lambda^2 \over 4} |S|^2
(| \phi_1^a |^2 + |\phi_2^a |^2) + {|D_a|^2 \over 2}
\label{scalpot}
\end{equation}
where $S,\phi_i$ indicate now the scalar components of
the chiral multiplets, summation over $a$ is implicit and
\begin{equation}
D_a = i {g\over 2} f_{abc} \left( \phi_1^{b*} \phi_1^c
+ \phi_2^{b*} \phi_2^c \right)
\label{Dterm}
\end{equation}
with $g$ denoting the $SU(N)$ gauge coupling.
We see clearly that a flat direction exists for
\begin{eqnarray}
S &=& 0 \\
\phi_1^a \phi_2^a &=& 0 \\
f_{abc} \ \phi_i^{b*} \phi_i^c &=& 0.
\end{eqnarray}
This is not the most general case and other flat directions are present,
parameterized by gauge invariant polynomials \cite{fd}.
In the following we will consider the case when the inflaton is
one of the components of the charged fields, i.e. we will take
$\phi = {\rm Re} \left[ \phi_1^a\right] $ to be the inflaton, while all the other
fields are supposed to vanish during inflation.
Then the potential for the inflaton is reduced only to the soft susy
breaking terms \cite{ssb} and assumes the form of eq. (\ref{vinf}),
where $V_0$ is a cosmological constant that is generated by some other
sector of the theory and is canceled in the true vacuum by the v.e.v.
of a field in our sector, playing the role of the $\psi$.
From supergravity, we expect all the susy breaking scalar masses,
respectively $m_S$ and $m_i$ for the singlet and charged fields, to be of
order of $V^{1/2}_0/M_{Pl}$ and the trilinear parameter $Y$, in
$ {Y \over 2} \lambda S \phi_1^a \phi_2^a + h.c.$, to be of the
same order, as $V_0^{1/4}$ is the scale of explicit supersymmetry
breaking during inflation. Note however that while the contribution
to the scalar masses coming from $V_0$ is always present, the trilinear
coupling $Y_0$ does not always receive a contribution proportional to
$V^{1/2}_0$.
Moreover, at the end of inflation $V_0$ vanishes and the susy breaking
parameters will be connected instead to the gravitino mass in the
usual way, so in principle the susy breaking parameters during and
after inflation are different.
In order to write the RG improved potential, we will need to consider
the one loop renormalization group equations for all our parameters
and extract the behaviour of the inflaton mass.
Following \cite{ma93}, we write down the equations for our particle content.
The gauge field strength $\alpha = g^2/(4\pi) $ and the gaugino mass satisfy
\begin{eqnarray}
{d\alpha\over dt} &=& {\beta\over 2\pi} \alpha^2 \label{RGEalpha}\\
{d\tilde m\over dt} &=& {\beta\over 2\pi} \alpha \tilde m
\label{RGEmtilde}
\end{eqnarray}
where $t= \ln (Q) $ is the renormalization scale and
$\beta = - N$ in our case of $SU(N)$ with two matter superfields in the
adjoint representation ($\beta = -3N + n_{adj} N$).
These two equations are independent of the others and their
solution is
\begin{eqnarray}
\alpha (t) &=& {\alpha_0 \over 1 - {\beta\over 2\pi} \alpha_0 t}
= {\alpha_0 \over 1 + \tilde \alpha_0 t}\\
\tilde m (t) &=& {\tilde m_0 \over \alpha_0} \alpha (t)
\end{eqnarray}
where $\tilde\alpha_0 = N \alpha_0/(2\pi) $ and a $0$ subscript
denotes quantities at the Planck scale.
For the Yukawa coupling, which we can always take real by absorbing
its phase into the definition of the singlet field $S$,
we have instead
\begin{equation}
{d\lambda\over dt} = - N {\alpha\over \pi} \lambda +
{\lambda\over 16\pi^2} (N^2+1) |\lambda |^2
\label{RGEyukawa}
\end{equation}
while for the soft susy breaking masses the equations can be
cast in a simple form using the variables
\begin{eqnarray}
m^2_{1-2} &=& m^2_1 - m^2_2, \\
m^2_{1-S} &=& m^2_1 - {1\over N^2-1} m^2_S
\end{eqnarray}
and $m^2_S$, where $m_S, m_i$ are respectively the susy breaking
masses of $S, \phi_i$ and $Y$ is the susy breaking trilinear
coupling.
In fact we have:
\begin{eqnarray}
{dm^2_{1-2} \over dt} &=& 0 \\
{dm^2_{1-S} \over dt} &=& - {2 N \alpha\over \pi} \tilde m^2
\label{RGE1Smass}\\
{dm^2_S\over dt} &=& {N^2 +1 \over 8\pi^2} |\lambda|^2 m^2_S +
{N^2 -1 \over 8\pi^2} |\lambda|^2 \left[ 2 m^2_{1-S} - m^2_{1-2} +
{|Y|^2\over 2} \right].
\label{RGESmass}
\end{eqnarray}
The trilinear term will have instead the equation
\begin{equation}
{dY\over dt} = {1\over 32\pi^2} (N^2+1) Y |\lambda|^2 +
{2\over\pi} N\alpha\tilde m .
\label{RGEtrilinear}
\end{equation}
These form a system of coupled differential equations.
We will in the next sections consider approximate solutions
in different cases and obtain the running inflaton mass.
\section{Dominant gauge coupling}
For $\alpha \gg \lambda^2$ a model independent analysis has been
made in \cite{lcovi:clr98}. In this case we can neglect the $\lambda^2$
terms and the inflaton mass running does not depend on the Yukawa
coupling.
We have then for both charged fields
\begin{equation}
m^2_i (t) = m^2_{i,0} - 2 \tilde m^2_0 \left[ 1- {1\over
( 1 + \tilde \alpha_0 t)^2}\right].
\label{mrun1}
\end{equation}
Notice that eq.(\ref{mrun1}) gives in general a solution of
eq.(\ref{RGE1Smass}), even in the case of a non-negligible Yukawa coupling.
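The closed form in eq.~(\ref{mrun1}) can be checked numerically by integrating the gauge-sector RGEs directly; the short sketch below does this with \texttt{scipy}, using illustrative initial values (masses squared in units of $V_0/M^2_{Pl}$) that are not meant to be realistic.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N = 3                                   # SU(N); arbitrary choice
alpha0, mg2_0, m2_0 = 0.04, 1.0, -1.0   # Planck-scale values (illustrative)
at0 = N * alpha0 / (2 * np.pi)          # tilde{alpha}_0

def rges(t, y):                         # t = ln(phi/M_Pl) < 0
    alpha, mg2, m2 = y
    dalpha = -(N / (2 * np.pi)) * alpha**2        # beta = -N
    dmg2 = -(N / np.pi) * alpha * mg2             # gaugino mass squared
    dm2 = -(2 * N / np.pi) * alpha * mg2          # eq. (RGE1Smass), lambda -> 0
    return [dalpha, dmg2, dm2]

t_end = -5.0
sol = solve_ivp(rges, (0.0, t_end), [alpha0, mg2_0, m2_0], rtol=1e-10)

analytic = m2_0 - 2 * mg2_0 * (1 - 1 / (1 + at0 * t_end)**2)  # eq. (mrun1)
print(sol.y[2, -1], analytic)           # the two values agree
\end{verbatim}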
In this case we can easily translate our three parameters into
physical ones and we have:
\begin{eqnarray}
c &=& 2 \tilde \alpha_0 A_0 \left[ 1+{\mu^2_0\over A_0}\right]^{3/2}\\
\tau &=& 2 A_0 \left(1+{\mu^2_0\over A_0}\right) \left[
\sqrt{1+{\mu^2_0\over A_0}}-1 \right]\\
\ln (\sigma) &=& 2 \left( 1+{\mu^2_0\over A_0}\right)
\left[ {1\over\sqrt{1+{\mu^2_0\over A_0}}} -
{1\over\sqrt{1+{\mu^2_0+1\over A_0}}}\right] \nonumber\\
& & +\ln \left[4\left(A_0+\mu^2_0\right)\right]
+\ln\left[ {\sqrt{1+{1\over \mu^2_0+A_0}}-1\over
\sqrt{1+{1\over \mu^2_0+A_0}}+1} \right]
\end{eqnarray}
where $\mu^2_0 = |m^2_{1,0}| M^2_{Pl}/V_0$ and $A_0 = 2\tilde m^2_0
M^2_{Pl}/V_0$ parameterize the scalar and gaugino mass-squared values at
the Planck scale in units of $V_0/M^2_{Pl}$.
Then the allowed region for the $c, \sigma$ parameters shown in
the first quadrant of Fig.~1 gives the bounds on
physical parameters shown in Fig.~2.
\begin{figure}[t!]
\centerline{
\epsfig{file=lcovifig1b.ps,height=3in,width=3in}
}
\vspace{10pt}
\caption{ Lines of constant spectral index in the case of gauge
dominated running of the inflaton mass in the plane
$\mu^2_0 = |m^2_\phi(M_{Pl})| M^2_{Pl}/V_0$ vs
$ A_0 = 2 \tilde m^2(M_{Pl}) M^2_{Pl}/V_0$
for a gauge coupling $\tilde \alpha_0 = N/(2\pi)
\,\alpha (M_{Pl}) = 0.01$. Also the lines of constant
$V_0$ are displayed in units of $M^4_{Pl}$ for ${\cal N}_{COBE} = 45$.
This region corresponds to positive $\sigma$ and $c$ \cite{lcovi:cl98} .
}
\label{lcovi:2}
\end{figure}
Notice that in this particular case an inflaton mass squared
of order $V_0/M^2_{Pl}$ at the Planck scale is acceptable, and the
running efficiently flattens the potential, provided that the
gaugino mass is sufficiently large. As shown in the graph
for a specific choice of the gauge coupling, gaugino
masses larger than the scalar one generally have to be assumed.
For consistency we have also to find the range of values of
$\lambda$ where this approximation is reliable: naturally the
limit $\lambda \rightarrow 0$ violates our assumption
$\phi_{end} > \phi_c$, so that we have a lower bound on $\lambda$:
\begin{equation}
\lambda_0^2 \geq 4 V_0 \exp \left[ {2\over \tilde \alpha_0}
\left( 1-{1\over \sqrt{1+{V_0+|m^2_{1,0}| \over A_0}} } \right)
\right].
\label{boundlambda}
\end{equation}
As we can see, this bound is very sensitive to the value of the gauge
coupling and also to $V_0$; in general $\alpha_0$ has to be of the
order of $0.01$ or so to give a non-negligible allowed region for the
initial masses and a non-negligible allowed range for $\lambda$.
\section{Dominant Yukawa coupling}
In this case the equations become similar to those for
uncharged fields. We can therefore consider at the same time
the model where the $\phi_i$ are just two singlet fields, substituting
in the following $ N^2 \rightarrow 2$. This substitution
amounts to considering only one degree of freedom instead of the
$N^2-1$ of a field in the adjoint representation of $SU(N)$.
The solutions for the scalar masses are given by:
\begin{eqnarray}
m^2_S (t) &=& {N^2-1\over N^2+1} \left[ (m^2_{S,0} + m^2_{1,0} +
m^2_{2,0} + Y^2_0 ) {1\over 1-\tilde\lambda_0^2 t } \right. \nonumber \\
& & \left. -Y_0^2 {1\over \sqrt{1-\tilde\lambda_0^2 t}}
- m^2_{1,0} - m^2_{2,0} + {2\over N^2-1}
m^2_{S,0} \right] \\
m^2_i (t) &=& m^2_{i,0} + {1\over N^2-1} (m^2_S (t)-m^2_{S,0}),
\end{eqnarray}
where the subscript $0$ again indicates the initial values (defined
at the Planck scale) and $\tilde \lambda^2_0 = {N^2+1 \over 2\pi^2}
\lambda_0^2$.
We can see that in this case the initial conditions determine
whether one of the masses changes sign and an extremum in the
potential is reached; generally the singlet mass appears to have
the stronger running since it interacts with more fields.
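To see this sign change explicitly, one can evaluate the closed-form solution for $m^2_S(t)$ given above at universal negative initial masses and vanishing $Y_0$; the numbers below (in units of $V_0/M^2_{Pl}$) are purely illustrative.
\begin{verbatim}
import numpy as np

def ms2(t, N, lam2, mS0, m10, m20, Y0=0.0):
    """Closed-form m_S^2(t); lam2 denotes tilde{lambda}_0^2."""
    u = 1.0 - lam2 * t
    pref = (N**2 - 1) / (N**2 + 1)
    return pref * ((mS0 + m10 + m20 + Y0**2) / u - Y0**2 / np.sqrt(u)
                   - m10 - m20 + 2.0 * mS0 / (N**2 - 1))

N, lam2, m2 = 5, 0.05, -1.0             # universal masses m_S^2 = m_i^2 = -1
for t in (0.0, -5.0, -10.0, -15.0):     # t = ln(phi/M_Pl) decreasing
    print(t, ms2(t, N, lam2, m2, m2, m2))
# m_S^2 starts at -1 and turns positive near t ~ -11: the singlet mass
# is the first to change sign, anticipating the discussion that follows.
\end{verbatim}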
One possibility for inflation in this case is to have negative
initial masses and choose the inflaton to be the field whose
mass first becomes positive.
In the case of universal initial masses such a field turns out
to be the singlet; we have then that inflation happens in the
$S$ direction\footnote{Note that also the direction $S\neq 0,
\phi_i^a=0 $ is a flat direction of our potential.}
and we can easily compute the parameters $c,\sigma,
\tau$ for vanishing $Y_0$:
\begin{eqnarray}
c &=& {4 (N^2-2)^2\over 3 N^4-1} \tilde\lambda_0\mu^2_0\\
\tau &=& {2 (N^2-2)\over 3 (N^2-1)} \mu^2_0\\
\ln (\sigma) &=& {1\over {2 (N^2-2)\over N^2+1} \mu^2_0-1}-
\ln \left[ 1 - {N^2+1\over 2 (N^2-2) \mu^2_0}\right],
\end{eqnarray}
where $\mu^2_0 = |m^2_S(M_{Pl})| M^2_{Pl}/V_0$.
In this case we have only two physical parameters to play with,
but an acceptable region exists as shown in Fig.~3: for initial
values of $\mu^2_0$ of order 1, a Yukawa coupling of order $0.05$
is needed to flatten the potential and $V_0$ is fixed by the
COBE normalization to be of order $10^{12-14}$~GeV.
Notice that, in contrast with the previous case of dominant gauge
coupling, now $\mu^2_0$ plays both the role of the scalar
mass and of the gaugino mass and therefore a large initial
$\mu^2_0$ is needed in order for the running to be efficient.
As plotted in Fig. 3, $\mu^2_0$ has to be larger than $0.5$,
otherwise the $\eta$ parameter never becomes of order 1 and the
end of inflation has to be defined by the critical value
$s_c$. In such a case the expression for $\sigma$ is much more
involved and we will not consider it.
\begin{figure}[t!]
\centerline{
\epsfig{file=lcovifig3.eps,height=3in,width=4.5in}
}
\vspace{10pt}
\caption{ Lines of constant spectral index in the case of Yukawa
dominated running of the inflaton mass in the plane
$\mu^2_0 = |m^2_S(M_{Pl})| M^2_{Pl}/V_0$ vs
$ \tilde \lambda^2_0$ for $N \gg 1$ and ${\cal N}_{COBE} = 45$.
Also the lines of constant $V_0$ are displayed in units of $M^4_{Pl}$.
This region again corresponds to positive $\sigma$ and $c$.}
\label{lcovi:3}
\end{figure}
Inflation is also possible along the charged field direction,
but only for non universal masses and low values of $N$.
Taking as an example the case $N=2$, we have in terms of the
physical quantities, $\mu^2_0 = |m^2_{i,0}| M^2_{Pl}/V_0$
and $\xi^2_0 = m^2_{i,0}/m^2_{S,0}$:
\begin{eqnarray}
c &=& {(\xi^2_0-3)^2\over 5(\xi^2_0+2)} \tilde\lambda_0\mu^2_0\\
\tau &=& {\xi^2_0-3\over \xi^2_0+2} \mu^2_0\\
\ln (\sigma) &=& {1\over {\xi^2_0-3\over 5} \mu^2_0-1}-
\ln \left[ 1 - {5\over (\xi^2_0-3) \mu^2_0}\right].
\end{eqnarray}
In such case $\xi^2_0 > 3$, i.e. a singlet mass larger than the
charged fields mass, is needed for flattening the potential
(resembling the gaugino mass larger than scalar mass requirement
for the gauge dominated case).
The bounds on the physical parameter for this non universal case
are given in Fig.~4.
\begin{figure}[t!]
\centerline{
\epsfig{file=lcovifig4.eps,height=3in,width=4.5in}
}
\vspace{10pt}
\caption{ Lines of constant spectral index in the case of Yukawa
dominated running of the inflaton mass for the non universal case.
It is shown the plane $\mu^2_0 = |m^2_{i,0}| M^2_{Pl}/V_0$ vs
$ \xi^2_0 = m^2_{S,0}/m^2_{i,0}$ for $N =2$, $\tilde \lambda^2_0=0.1$
and ${\cal N}_{COBE} = 45$. Also the lines of constant
$V_0$ are displayed in units of $M^4_{Pl}$.}
\label{lcovi:4}
\end{figure}
Another option is that of universal initial positive masses
driven negative, or very small for what regards the inflaton,
by the Yukawa coupling like in the case of the radiative EW
breaking in the MSSM. In such a picture not only would the
quantum corrections be responsible for the flattening of the
potential, but also for the triggering of the hybrid--type
end of inflation. Such a scenario would correspond to the
quadrants with negative $c$ in Fig.~1.
\section{Conclusions}
Quantum corrections can be strong enough to cancel the
supergravity contribution of order $V_0/M^2_{Pl}$ to the
inflaton mass and allow slow roll inflation to take place.
The requirement to have the spectral index in the experimental
range tightly constrains the parameter space of the specific
models. Surprisingly, viable regions of the parameter
space nevertheless exist for reasonable values of the couplings in the
different scenarios of gauge coupling or Yukawa coupling
dominance.
Since the running of the inflaton mass has to be substantial
to give the cancelation, in this class of models the spectral
index has a significant variation on cosmological scales,
surely within the reach of the Planck satellite \cite{lcovi:planck}.
For example in the case of gauge coupling dominance the spectral
index changes by $0.1$ or so in the ten e-foldings corresponding
to cosmological scales \cite{lcovi:clr98} and such a large
variation could be observed or excluded even before the launch of
Planck, by the improvement of the data on the power spectrum of
density perturbations. The scale dependence of the spectral index
can be parameterized by eq. (\ref{lcovi:n-1}), assuming the linear
approximation to be valid when cosmological scales left the
horizon.
\section*{Acknowledgments}
I am very grateful and indebted to David H. Lyth and
Leszek Roszkowski with whom this work has been done.
I would like to thank H. V. Klapdor-Kleingrothaus and
the organizers of BEYOND 99 for the very interesting
workshop and for financial support.
This work was supported by PPARC grant GR/L40649.
|
1,116,691,500,986 | arxiv | \section{Introduction}
Modern gravitational theories are based on the geometrical description of
the gravitational field. For instance, in the framework of General Relativity
the space-time is a Riemannian manifold and the gravitational field appears
as a metric tensor field on this manifold.
It has been known for a long time that non-Riemannian geometry gives an
appropriate basis for new gravitational theories. In particular, the theory
where the torsion field is called upon for the description of the gravitational
field along with the metric is of special interest. In the framework of
General Relativity only the Energy-Momentum Tensor of matter fields is the
source of gravity. At the same time, in theories with torsion one can
consider the Spin Tensor of matter as an additional source of the
gravitational interaction [1,2]. The torsion field naturally arises within the
gauge approach to gravity [3,4]. Thus this kind of theory possesses
better conceptual features, and is interesting for investigation.
Reviews of gravity with torsion can be found in Refs. [1,2,5 - 8].
Some questions related to our subject, namely the equation for a particle with
spin $\frac{1}{2}$, have been
discussed in the papers [15 - 17].
If torsion really exists, the investigation of its coupling with matter
fields is of crucial importance for the understanding of this phenomenon.
The interaction of free matter fields with an external torsion field has
been considered in a number of papers (see, for example,
[3 - 6, 9 - 11,18,19] and
references therein). Some aspects of the theory of interacting matter fields
in an external gravitational field with torsion have been discussed in [12,13]
(see also [6]). As was shown in [7,8], the requirement
of multiplicative
renormalizability forces us to introduce a nonminimal interaction of torsion
with spinor and scalar fields. The renormalization
group analysis of GUT's in an external gravitational field with
torsion shows that the
interaction of matter fields with torsion increases in a strong gravitational
field. Therefore one can conclude that torsion has more essential
manifestations at high energies. On the other hand, the interaction
with torsion is weakened at low energies.
This fact suggests a possible reason for the
absence of torsion in modern experimental data. Note that the low-energy
manifestations of torsion are quite interesting. In particular, the
investigation
of the weakly relativistic limit for the spin 1/2 field in an
external torsion field gives some new predictions which may turn out to
be the basis for the experimental search for torsion [14].
Does the torsion field really exist? A definite answer can be obtained only
on an experimental basis. The purpose of the present paper is to consider the
theoretical grounds for experimental tests which can detect possible
torsion effects.
Since the torsion field is an element of the gravitational interaction, this
field must couple with matter in a universal way. Therefore we can suppose that
torsion interacts with all particles which have nontrivial spin.
The investigation of the torsion - matter coupling is usually motivated by
possible cosmological applications. Here we discuss possible low-energy
manifestations of the torsion field, and show that the torsion field may lead
to some phenomena in microscopic physics. Of course we do not claim the
existence of torsion, but only consider a way to test this fact.
The paper is organized as follows. In section 2 we write the action of
the spinor Dirac field in an external gravitational field with torsion.
We introduce the nonminimal interaction of torsion with the spinor field, which
is the only way to obtain a consistent quantum theory [12,13] (see also [7]).
In section 3
the weakly relativistic approximation to the Dirac
equation in external torsion and electromagnetic
fields is constructed. The generalized
(due to torsion-dependent terms) Pauli equation contains new terms which
are different from the standard electromagnetic ones.
In section 4 the quasiclassical equations of motion for the
weakly relativistic particle with spin $\frac{1}{2}$ in external torsion
and electromagnetic fields are derived. These equations contain the standard
terms corresponding to the interaction with the electromagnetic field and also
some new terms related to torsion. In contrast with the usual point of view,
we find that these terms have a different structure compared with the
electromagnetic ones.
We use these new terms in section 5, where a brief description of the
possible experiments is given.
\section{Spinor field in an external gravitational field with torsion}
Let us start with the basic notation for gravity with torsion.
In the space-time with metric $g_{\mu\nu}$ and torsion
$T^\alpha_{\;\beta\gamma}$ the connection
$\bar{\Gamma}^\alpha_{\;\beta\gamma}$ is nonsymmetric, and
$$
\bar{\Gamma}^\alpha_{\;\beta\gamma} -
\bar{\Gamma}^\alpha_{\;\gamma\beta} =
T^\alpha_{\;\beta\gamma} \eqno(1)
$$
If one introduces the metricity condition
$\bar{\nabla}_\mu g_{\alpha\beta} = 0$, where the covariant derivative
$\bar{\nabla}_\mu$ is constructed on the base of
$\bar{\Gamma}^\alpha_{\;\beta\gamma}$, then the following solution for the
connection
$\bar{\Gamma}^\alpha_{\;\beta\gamma}$ can be easily found:
$$
\bar{\Gamma}^\alpha_{\;\beta\gamma} = {\Gamma}^\alpha_{\;\beta\gamma} +
K^\alpha_{\;\beta\gamma} \eqno(2)
$$
where ${\Gamma}^\alpha_{\;\beta\gamma}$ is the standard symmetric Christoffel
symbol and $K^\alpha_{\;\beta\gamma}$ is the contorsion tensor
$$
K^\alpha_{\;\beta\gamma} = \frac{1}{2} \left( T^\alpha_{\;\;\beta\gamma} -
T^{\;\alpha}_{\beta\;\gamma} - T^{\;\alpha}_{\gamma\;\beta} \right)
\eqno(3)
$$
It is convenient to divide the torsion field into three irreducible components,
which are: the trace $T_{\beta} = T^\alpha_{\;\beta\alpha}$, the pseudotrace
$S^{\nu} = \varepsilon^{\alpha\beta\mu\nu}T_{\alpha\beta\mu}$ and the tensor
$q^\alpha_{\;\beta\gamma}$, which satisfies the conditions
$$
q^\alpha_{\;\beta\alpha} = 0,\;\;\;\;\; \;\;\;\;
\varepsilon^{\alpha\beta\mu\nu}q_{\alpha\beta\mu} =0
$$
Then the torsion field can be written in the form
$$
T_{\alpha\beta\mu} = \frac{1}{3} \left( T_{\beta}g_{\alpha\mu} -
T_{\mu}g_{\alpha\beta} \right) - \frac{1}{6} \varepsilon_{\alpha\beta\mu\nu}
S^{\nu} + q_{\alpha\beta\mu} \eqno(4)
$$
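As a consistency check of the decomposition (4), together with the definitions
of $T_\beta$ and $S^\nu$, one can verify numerically that subtracting the trace
and pseudotrace parts from a generic torsion tensor leaves a residual part
$q_{\alpha\beta\mu}$ with vanishing trace and pseudotrace. The following
sketch performs this check; the conventions $\varepsilon_{0123}=+1$ and
signature $(+,-,-,-)$ are our assumptions for the illustration.
\begin{verbatim}
import numpy as np
from itertools import permutations

eta = np.diag([1.0, -1.0, -1.0, -1.0])          # flat metric
eta_inv = np.linalg.inv(eta)

eps = np.zeros((4, 4, 4, 4))                    # eps_{alpha beta mu nu}
for p in permutations(range(4)):
    inv = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    eps[p] = (-1.0) ** inv

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4, 4))
T = A - A.transpose(0, 2, 1)        # T_{alpha beta mu}, antisym. in (beta, mu)

# trace T_beta = T^alpha_{beta alpha} and pseudotrace S^nu
T_vec = np.einsum('ad,dba->b', eta_inv, T)
eps_up = np.einsum('ai,bj,mk,nl,ijkl->abmn',
                   eta_inv, eta_inv, eta_inv, eta_inv, eps)
S_up = np.einsum('abmn,abm->n', eps_up, T)

trace_part = (np.einsum('b,am->abm', T_vec, eta)
              - np.einsum('m,ab->abm', T_vec, eta)) / 3.0
pseudo_part = -np.einsum('abmn,n->abm', eps, S_up) / 6.0
q = T - trace_part - pseudo_part                # remaining tensor part

print(np.allclose(np.einsum('ad,dba->b', eta_inv, q), 0.0))   # True
print(np.allclose(np.einsum('abmn,abm->n', eps_up, q), 0.0))  # True
\end{verbatim}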
Now we consider the Dirac field $\psi$
in an external gravitational field with torsion.
Various aspects of the interaction of the Dirac field with torsion
have been discussed in the literature (see, for example, [1,2, 15 - 18]).
It is well known that the standard way to introduce the minimal interaction
with external fields requires the substitution of the partial
derivatives $\partial_\mu$ by the covariant ones. The covariant derivatives
of the spinor field $\psi$ are defined as follows
$$
\bar{\nabla}_\mu \psi = \partial_{\mu}\psi + \frac{i}{2}w_\mu^{\; a b}
\sigma_{a b}\psi
$$
$$
\bar{\nabla}_\mu \bar{\psi} = \partial_{\mu}\bar{\psi} -
\frac{i}{2}w_\mu^{\;a b}\bar{\psi}\sigma_{a b} \eqno(5)
$$
where $w_\mu^{\;a b}$ are the components of spinor connection. We use the
standard representation for the Dirac matrices (see, for example, [20]).
$$
\beta = \gamma^0 = \left(\matrix{1 &0\cr 0 &-1\cr} \right)
$$
$$
\vec{\alpha} = \gamma^0 \vec{\gamma} =
\left(\matrix{0 &\vec{\sigma}\cr \vec{\sigma} &0\cr} \right)
$$
$$
\gamma_5 = \gamma^0 \gamma^1 \gamma^2 \gamma^3, \;\;\;\;
\sigma_{a \; b} = \frac{i}{2}(\gamma_a \gamma_b - \gamma_b \gamma_a)
$$
The vierbein $e_\mu^a$ obeys the equations $e_\mu^a e_{\nu a} = g_{\mu\nu}$,
$e_\mu^ae^{\mu b} = \eta^{ab}$, where $\eta^{ab}$ is the Minkowski metric. The
gamma matrices in curved space-time are introduced as $\gamma^\mu =
e_a^\mu \gamma^a$ and obviously satisfy the metricity condition
$\bar{\nabla}_\mu \gamma^{\beta} = 0$.
The metricity condition enables us to find the explicit expression
for the spinor connection which agrees with (2):
$$
w_\mu^{\;a b} = \frac{1}{4} (e_\nu^b \partial_\mu e^{\nu a} -
e_\nu^a \partial_\mu e^{\nu b}) + \bar{\Gamma}^\alpha_{\;\nu\beta}
(e^{\nu a}e_\alpha^b - e^{\nu b}e_\alpha^a) \eqno(6)
$$
If the metric is flat, then from (6) it follows that
$$
w_\mu^{\; a b} = K^\alpha_{\;\nu\beta}
(e^{\nu a}e_\alpha^b - e^{\nu b}e_\alpha^a) \eqno(7)
$$
The action of the spinor field minimally coupled to torsion has the form
$$
S = \int d^4 x \;e \; \{ \frac{i}{2}\bar{\psi}\gamma^\mu \bar{\nabla}
_\mu \psi - \frac{i}{2}\bar{\nabla}_\mu\bar{\psi}\gamma^\mu\psi +
m\bar{\psi}\psi \} \eqno(8)
$$
where $m$ is the mass of the Dirac field and
$e = \det \parallel e_\mu^a\parallel$. In what follows we consider only the
torsion effects and therefore restrict ourselves to the special case
of a flat metric. So we put $g_{\mu\nu} = \eta_{\mu\nu}$ but keep
$T^\alpha_{\;\beta\gamma}$ arbitrary. The expression (8) can be rewritten in
the form
$$
S = \int d^4 x\{i\bar{\psi}\gamma^\mu(\partial_\mu+\frac{i}{8}\gamma_5
S_\mu)\psi+m\bar{\psi}\psi\} \eqno(9)
$$
One can see that the spinor field minimally interacts only with the
pseudovector part $S_\mu$ of the torsion tensor. The nonminimal interaction is
more complicated.
There are strong reasons to introduce the nonminimal coupling of the form
$$
S = \int d^4 x\{i\bar{\psi}\gamma^\mu(\partial_\mu
+i\eta_1\gamma_5S_\mu+i\eta_2T_\mu)\psi+m\bar{\psi}\psi\} \eqno(10)
$$
Here $\eta_1,\eta_2$ are the dimensionless parameters of the
nonminimal coupling of spinor fields with torsion. The minimal
interaction corresponds to the values $\eta_1 = \frac{1}{8},\;\;\eta_2 = 0$.
The introduction of the nonminimal interaction may look artificial.
Within the classical theory one can explain the use of a nonminimal
action only as an attempt to explore the more general case. However,
the situation is different in the quantum region, where the nonminimal
interaction is a necessary condition for the consistency of the theory.
The reason is the following. It is well known that the interaction of
quantum fields leads to divergences and therefore
renormalization is needed. As was shown in
[12, 13], the requirement of multiplicative
renormalizability forces us to introduce the nonminimal interaction of torsion
with spinor and scalar fields.
\section{The equation of motion for the spinor field in the
weakly relativistic approximation}
Let us consider a spin $\frac{1}{2}$ particle in external torsion
and electromagnetic fields. The equation of motion follows from (10)
with the usual electromagnetic addition:
$$
i \hbar\frac{\partial \psi}{\partial t} = \{ c\vec{\alpha}\vec{p}-
e\vec{\alpha}\vec{A} - \eta_1 \vec{\alpha}\vec{S}\gamma_5 -
\eta_2 \vec{\alpha}\vec{T}+
$$
$$
+ e\Phi + \eta_1 \gamma_5 S_0 + \eta_2 T_0 + m c^2 \beta \}\psi
\eqno(11)
$$
Here the dimensional constants $\hbar$ and $c$ are taken into account, and
$A_\mu = (\Phi, \vec{A}),\;\; T_\mu = (T_0, \vec{T}),\;\; S_\mu = (S_0,
\vec{S})$.
Following the standard procedure we write (see, for example, [20])
$$
\psi = \left(\matrix{\varphi\cr\chi\cr}\right)
\exp\left( -\frac{imc^2t}{\hbar}\right) \eqno(12)
$$
Within the weakly relativistic approximation $\chi \ll \varphi$. From
equations (11), (12) it follows that
$$
(i\hbar\frac{\partial}{\partial t}-
\eta_1 \vec{\sigma}\vec{S} - e\Phi - \eta_2 T_0)\varphi=
$$
$$
= (c\vec{\sigma}\vec{p} - e\vec{\sigma}\vec{A} - \eta_1 S_0 -
\eta_2 \vec{\sigma}\vec{T})\chi
\eqno(13a)
$$
and
$$
(i\hbar\frac{\partial}{\partial t}-
\eta_1 \vec{\sigma}\vec{S} - e\Phi - \eta_2 T_0 + 2mc^2 ) \chi =
$$
$$
= (c\vec{\sigma}\vec{p} - e\vec{\sigma}\vec{A} - \eta_1 S_0 -
\eta_2 \vec{\sigma}\vec{T})\varphi
\eqno(13b)
$$
Now we keep only the term $2mc^2\chi$ on the left side of (13b), and
then it is possible to find $\chi$ from (13b). In the leading order in
$\frac{1}{c}$ we arrive at the following equation for $\varphi$:
$$
i \hbar\frac{\partial \varphi}{\partial t} =
\{ \eta_1 \vec{\sigma}\vec{S} + e\Phi + \eta_2 T_0 +
$$
$$
+ \frac{1}{2mc^2} (c\vec{\sigma}\vec{p} - e\vec{\sigma}\vec{A} -
\eta_1 S_0 - \eta_2 \vec{\sigma}\vec{T} )^2 \} \varphi \eqno(14)
$$
The last equation is easily rewritten in the Schr\"odinger form
$$
i\hbar\frac{\partial \varphi}{\partial t} =
\hat{H} \varphi \eqno(15)
$$
where the Hamiltonian has the form
$$
\hat{H} = \frac{1}{2m} \vec{\pi}^2 + B_0 + \vec{\sigma}\vec{Q}
$$
$$
\vec{\pi} = \vec{P} - \frac{e}{c}\vec{A} - \frac{\eta_2}{c}\vec{T} -
\frac{\eta_1}{c}\vec{\sigma}S_0
$$
$$
B_0 = e\Phi + \eta_2 T_0 - \frac{1}{mc^2}\eta_1^2 S_0^2
$$
$$
\vec{Q} = \eta_1\vec{S} + \frac{\hbar}{2mc}(e\vec{H} + \eta_2\;rot\vec{T})
\eqno(16)
$$
Here $\vec{H} = rot\vec{A}$ is the magnetic field strength.
Equations (15), (16) are the analog of the Pauli equation in the
case of external torsion and electromagnetic fields.
The expression for the Hamiltonian (16) indicates the possible physical
effects of the torsion field, which is especially clear when compared with
the electromagnetic terms. For example, the quantity
$T_0$ looks like the scalar potential $\Phi$, and $\vec{T}$ looks like the
vector
potential $\vec{A}$. The quantities $\vec{S}$ and $rot\;\vec{T}\;$ may
play the role of the magnetic field. However, there is some difference
between the torsion and electromagnetic sectors. The term
$- \frac{1}{mc}\eta_1 S_0 \vec{p}\vec{\sigma}\;$ does not have an
analog in quantum electrodynamics.
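To illustrate the two-level structure encoded in (16), the following sketch
diagonalizes the $2\times 2$ Hamiltonian for constant fields and vanishing
electromagnetic background; the numerical values of $\eta_1$, $S_3$ and $B_0$
are illustrative assumptions of ours.
\begin{verbatim}
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

eta1, S3, B0 = 1.0e-3, 0.5, 0.0        # illustrative values
Q = np.array([0.0, 0.0, eta1 * S3])    # Q = eta1*S, no magnetic field

H = B0 * np.eye(2) + sum(Q[k] * sigma[k] for k in range(3))
E = np.linalg.eigvalsh(H)
print(E[1] - E[0], 2 * eta1 * S3)      # splitting equals 2*eta1*S3
\end{verbatim}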
\section{The equation of motion for the particle with spin $\frac{1}{2}$ in an
external torsion field.}
If we consider (16) as the Hamiltonian operator of some quantum
particle, then the corresponding classical energy has the form
$$
H = \frac{1}{2m} \vec{\pi}^2 + B_0 + \vec{\sigma}\vec{Q}
\eqno(17)
$$
where $\vec{\pi}, B_0, \vec{Q}$ are defined by (16) and
$\vec{\pi} = m \vec{v}$. Here $\vec{v} = \dot{\vec{x}}$
is the velocity of the particle. From (17) follows the expression for
the
canonically conjugate momentum $\vec{p}$:
$$
\vec{p} = m\vec{v} + \frac{e}{c}\vec{A} + \frac{\eta_2}{c}\vec{T} +
\frac{\eta_1}{c}\vec{\sigma}S_0 \eqno(18)
$$
One can consider $\vec{\sigma}$ as the coordinate of internal degrees of
freedom, corresponding to spin.
Let us now perform the canonical quantization of the theory. To do this we
introduce the operators of coordinate $\hat{x}_i$, momentum $\hat{p}_i$ and
spin $\hat{\sigma}_i$ and impose the equal-time commutation relations of
the following form:
$$
\left[\hat{x}_i, \hat{p}_j\right] = i\hbar \delta_{ij}, \;\;\;\;\;
\left[\hat{x}_i, \hat{\sigma}_j \right] =
\left[\hat{p}_i, \hat{\sigma}_j\right] = 0,
$$
$$
\left[\hat{\sigma}_i,\hat{\sigma}_j \right] = 2i\varepsilon_{ijk}
\hat{\sigma}_k
\eqno(19)
$$
The Hamiltonian operator $\hat{H}$ which corresponds to the energy (17) is
easily
constructed in terms of the operators $\hat{x}_i, \hat{p}_i, \hat{\sigma}_i$,
and then these operators obey the equations of motion
$$
i\hbar \frac{d\hat{x}_i}{dt} = \left[\hat{x}_i, H \right],
$$
$$
i\hbar \frac{d\hat{p}_i}{dt} = \left[\hat{p}_i, H \right],
$$
$$
i\hbar \frac{d\hat{\sigma}_i}{dt} = \left[\hat{\sigma}_i, H \right],
\eqno(20)
$$
After the computation of the commutators in (20) we obtain the
explicit form of the operator equations of motion. Now we can omit
all the terms which vanish when $\hbar \rightarrow \; 0$. Then the
classical equations arise, which can be interpreted as the
(quasi)classical equations of motion for the particle in
external torsion and electromagnetic fields. Note that the operator
ordering problem is irrelevant because of the
$\hbar \rightarrow \; 0$ limit. The straightforward calculations
lead to the equations
$$
\frac{d\vec{x}}{dt} = \frac{1}{m} \left( \vec{p} - \frac{e}{c}\vec{A} -
\frac{\eta_2}{c}\vec{T} - \frac{\eta_1}{c}\vec{\sigma}S_0 \right) = \vec{v},
\eqno(21a)
$$
$$
\frac{d\vec{v}}{dt} = e\vec{E} + \frac{e}{c}\left[ \vec{v}\times\vec{H} \right]
+ \frac{\eta_2}{c}\left[ \vec{v}\times rot\;\vec{T} \right] -\eta_2\; grad\;T_0
-
$$
$$
- \frac{\eta_2}{c}\frac{\partial\vec{T}}{\partial t} -
\eta_1\left(\vec{\sigma}\cdot\nabla \right)\vec{S} -
\eta_1\left[ \vec{\sigma}\times rot\vec{S} \right]
- \frac{\eta_1}{c}\vec{\sigma}\frac{\partial S_0}{\partial t} +
$$
$$
+ \frac{\eta_1}{c} \{ \left(\vec{v}\cdot\sigma\right) grad S_0 -
\left(\vec{v}\cdot grad S_0 \right)\vec{\sigma} \}
+\frac{1}{mc^2}\;\eta_1^2\; grad (S_0^2) -
\frac{\eta_1}{c}S_0\frac{d\vec{\sigma}}{dt}, \eqno(21b)
$$
$$
\frac{d\vec{\sigma}}{dt} = \left[ \vec{R}\times\vec{\sigma} \right]
$$
$$
\vec{R} = \frac{2\eta_1}{\hbar}\left[ \vec{S} - \frac{1}{c}\vec{v}S_0 \right]
+ \frac{e}{mc}\vec{H} + \frac{\eta_2}{mc}\; rot\vec{T}
\eqno(21c)
$$
Here $\vec{E}$ is the strength of the external electric field. Equations (21)
contain torsion-dependent terms which have the same symmetries as
the usual electromagnetic terms. Indeed, the $T_\mu$-dependent terms are in
perfect analogy with the $A_\mu$-dependent terms. However, equations (21)
contain some terms which have a qualitatively new structure.
All these terms contain
$S_\mu$, which is the more relevant part (with respect to the interaction with
the
matter fields) of the torsion tensor.
Thus we see that the standard claim concerning the magnetic-field analogy of
torsion effects is not completely correct, and there exists a serious
difference between the magnetic field and torsion.
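The qualitative content of (21c), namely precession of $\vec{\sigma}$ about
the effective vector $\vec{R}$ with $|\vec{\sigma}|$ conserved, is easily
exhibited numerically. The sketch below integrates (21c) for a constant
$\vec{R}$ whose magnitude is an illustrative choice of ours.
\begin{verbatim}
import numpy as np

R = np.array([0.0, 0.0, 2.0])          # constant effective vector R
rhs = lambda s: np.cross(R, s)         # d sigma / dt = R x sigma

s, dt = np.array([1.0, 0.0, 0.0]), 1.0e-3
for _ in range(5000):                  # classical Runge-Kutta 4
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2)
    k4 = rhs(s + dt * k3)
    s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

print(np.linalg.norm(s))               # ~1 : |sigma| is conserved
print(s[2])                            # ~0 : component along R is fixed
\end{verbatim}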
\section{Possible experimental investigations of the torsion field}
Let us now consider the Schr\"odinger equation (15) with the Hamiltonian
operator (16) for a vanishing electromagnetic field. How can the torsion
field
manifest itself? It is evident that the torsion field can modify
the particle spectrum. These modifications have a form similar to the ones
which arise in an electromagnetic field. At the same time, other modifications
are possible due to the qualitatively new terms like
$\;\frac{1}{mc}\eta_1 S_0 \vec{p}\vec{\sigma}\;$ in (16).
It is natural to suppose that the possible interaction with torsion is feeble
enough that one can consider it as a perturbation. This
perturbation
may lead to the splitting of known spectral lines, and hence one can hope
to find a torsion signal within spectral analysis experiments.
In particular, one can expect the splitting of spectral lines even for the
simple hydrogen atom. Now we consider the particular case of
$\;T_\mu = 0,\;\;
S_\mu = const\;$ and estimate the possible spectrum modifications. In this
particular case the Hamiltonian operator is
$$
\hat{H} = \frac{1}{2m}\hat{\pi}^2 + \eta_1 \left( \hat{\vec{S}} -
\frac{1}{2mc}\hat{S}_0\hat{\vec{p}} - \frac{1}{2mc}\hat{\vec{p}}
\hat{S}_0 \right)\cdot \hat{\vec{\sigma}}
$$
$$
\vec{\pi} = \vec{p} - \frac{e}{c}\vec{A} \eqno(22)
$$
In the framework of the nonrelativistic approximation $|\vec{p}| \ll
mc$, and hence the $S_0$-dependent terms in the brackets can be omitted.
The
remaining term $\eta_1\vec{S}\vec{\sigma}$ allows the standard
interpretation and gives the contribution $\pm \eta_1 S_3$ to the
spectrum. Thus, if the $S_3$ component of the torsion tensor is not
equal to zero, the energy level is split into two sublevels with
the difference $2 \eta_1 S_3$. If now a weak transverse magnetic
field is switched on, then transitions between the new levels arise
and energy absorption takes place at the magnetic field frequency $\omega =
\frac{2\eta_1 S_3}{\hbar}$. Note that the situation is typical of
magnetic resonance experiments; however, in this case the effect arises
due to torsion, not magnetic field effects.
It is natural to call this effect the torsion resonance. Taking into
account the previous considerations, we arrive at the conclusion that the
described effect can be explored at different scales: torsion -
induced spin resonance in atoms, the torsion electron resonance and
the torsion nuclear resonance in a medium. Note that experiments
related to torsion-induced splitting of energy levels were recently
considered in [23].
The next kind of possible experiments related to torsion are
the ones which deal with the particle equation of motion (21). Let the
electromagnetic field be absent.
Then, according to (21), the interaction with torsion twists the
particle trajectory, and therefore any charged particle may be the
source of electromagnetic radiation. The structure of the radiation
field then enables one to look for torsion effects.
Of course, some evident effects like the precession of spin in an
external torsion field also follow from (21). This effect has been
described already in Refs. [1,2,15,16]. It is interesting that from
(21c) it follows that the direction of the precession of spin depends on the
velocity of the particle. Therefore even a weak torsion field may
affect the observable precession of spin in a magnetic field
at high temperature.
To obtain a complete picture of the torsion influence on the energy
spectrum modifications, as well as on the radiation of charged
particles, it is necessary to make a detailed and systematic
investigation of the solutions of equations (15), (21) for
various external field structures. The main difficulty is that there
are no experimental data for the values of the coupling
constants. That is why it is impossible to give numerical estimates
for the mentioned physical effects.
Note that the inflationary cosmological model with torsion predicts a tiny
value of the torsion field, which has to be very slowly varying in the
modern epoch [21]. Thus there are some reasons to look for the evidence of a
weak (effectively global) axial vector in the Universe, and to try to give an
upper
bound for the torsion field from modern experiments.
From the results of Refs. [12,13,7] it follows that the interaction of
torsion with matter fields is essentially weakened at low energies due to
quantum effects. That is why we have very small hope to observe torsion in
low-energy experiments. On the other hand, even a very weak interaction with
torsion may be responsible for some symmetry violation because of the
pseudovector nature of the vector $S_\mu$.
Indeed, such effects are essentially related to high-energy physics, and our
consideration would have to be extended. In any case, the results of the above
analysis may be useful for a qualitative understanding of the structure of
the torsion - matter interaction.
\section{Acknowledgments}
The authors appreciate useful conversations with Professors G.Cognola,
T.Kinoshita, I.B.Khriplovich, T.Muta and S.Zerbiny.
One of the authors (I.Sh.) wishes to thank the Particle Physics Group
at Hiroshima University and the Theory Division at KEK for kind hospitality.
\newpage
|
1,116,691,500,987 | arxiv | \section{Introduction}
This talk is a special one at a workshop dedicated to nonperturbative
methods in baryon physics. It discusses the other side of
things, namely perturbative QCD (pQCD) applied to baryons, with particular
emphasis on applications to exclusive and semi-exclusive reactions.
We will start out in the next section discussing what I will call ``standard
old stuff,'' reviewing methods of calculation and scaling and normalization
predictions that are well known to many, and seeing in what kinematic regime
pQCD seems to work and how well it works there. I might say now that I am
an optimist, thinking that pQCD results can be valid when momentum transfers
are only a few GeV. The ``standard old stuff'' will come in three headings,
namely the scaling behavior expected for amplitudes at high momentum
transfer, with comparison to data, the polarization behavior expected for
amplitudes at high momentum transfer, with comparison to data, and some
review of results that have been obtained in the few cases where normalized
calculations are possible.
To balance the old, section~\ref{newstuff} will present a selection of new
initiatives using pQCD, focusing on semi-exclusive reactions and
connections between low and high momentum transfer behavior of
$\Delta(1232)$ electroproduction.
\section{Standard Old Stuff}
\subsection{Scaling---expectations and data}
Perturbative QCD for exclusive reactions~\cite{bl80} begins by drawing all
the relevant lowest order Feynman diagrams. There can be many for a given
process and calculating all of them can be time consuming. However, the
scaling behavior is generally the same for all the diagrams, and can be
ferreted out relatively easily. The general categories of processes are
form factors at high momentum transfer, or quasi-elastic reactions at
high $s$ at fixed large $\theta_{CM}$. An example of the latter,
specifically for
$\gamma p \rightarrow \pi^+ n$, is given in the Figure below. The
momentum transfer dependence comes from the internal propagators---a
$1/Q^2$ for each gluon propagator (where $Q$ is some momentum scale) and
a $1/Q$ for each quark propagator---and a factor $Q$ for each quark
line~\cite{cg84,pire}.
\begin{figure} [h] \label{piphoto}
\vglue -2.5cm
\hskip 2cm \epsfysize 4cm \epsfbox{piphoto.eps}
\vglue -2mm
\caption{One lowest order diagram for $\gamma p \rightarrow \pi^+ n$.}
\end{figure}
The amplitude represented by this diagram has four quark lines and three
each of internal quark and gluon propagators. Hence
\begin{equation}
{\cal M} \propto Q^4 Q^{-3} (Q^2)^{-3} = Q^{-5} \propto s^{-5/2},
\end{equation}
\noindent and the differential cross section is
\begin{equation}
{d \sigma \over dt} = {1\over 16 \pi s^2} |{\cal M}|^2 \propto s^{-7}.
\end{equation}
Does it work? Here is a plot of $s^7 d\sigma / dt$ vs. $s$ for
$\theta_{CM}=90^\circ$,
\begin{figure} [h] \label{jimmypi}
\centerline{ \epsfysize 4.7cm \epsfbox{jimmypi.eps} }
\vglue -2mm
\caption{Scaled cross section for $\gamma p \rightarrow \pi^+ n$.}
\end{figure}
\noindent
The bumps at low $s$ are resonance excitations, and the pQCD
expectation appears to succeed just above the resonance region.
Form factors for electron elastic or quasi-elastic scattering from a
hadron with $N$ constituents generally go like,
\begin{equation} \label{falloff}
F(Q^2) \propto 1/(Q^2)^{N-1}.
\end{equation}
\noindent For baryon elastic or transition form factors this means
$F \propto 1/Q^4$. (At least the leading form factor falls like this: there
may be form factors that are zero to leading order, which then fall faster.)
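The bookkeeping behind these counting rules is mechanical enough to put in a
few lines of code. The sketch below is just that bookkeeping, written by us
for illustration: $n$ counts the elementary fields in the initial plus final
state, and the form factor exponent is the power of $Q^2$ in
Eq.~(\ref{falloff}).
\begin{verbatim}
def dsigma_dt_exponent(n_fields):
    # M ~ s^((4-n)/2)  =>  dsigma/dt = |M|^2/(16 pi s^2) ~ s^(2-n)
    return 2 - n_fields

# gamma p -> pi+ n : 1 photon + (3 + 2 + 3) quark lines = 9 fields
print(dsigma_dt_exponent(9))      # -7

def form_factor_exponent(n_constituents):
    # power of Q^2 in F(Q^2) ~ 1/(Q^2)^(N-1)
    return 1 - n_constituents

print(form_factor_exponent(3))    # -2, i.e. F ~ 1/Q^4 for a baryon
\end{verbatim}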
Paul Stoler~\cite{stoler} has produced the following plots:
\begin{figure} [h]
\centerline{\epsfysize 3 cm \epsfbox{stoler.eps}}
\label{stoler}
\vglue 3.5mm
\caption{Form factors for two transition form factors, divided by
$F_{\rm dipole}$}
\end{figure}
\vglue -3mm
\noindent For reasons of space, we have shown only the nucleon
to $N(1535)$ and to $\Delta(1232)$ transition form factors. The dipole form
is $(1+Q^2/0.71 {\rm GeV}^2)^{-2}$, so a flat curve is what pQCD predicts.
There are also plots for the elastic case and the $N(1688)$ region, which
look rather like the $N(1535)$. Hence the pQCD results are
successful, except for the $\Delta(1232)$.
The $\Delta(1232)$ falls faster than the others. There is a
reason within the pQCD framework for this and a discussion will come in
section~\ref{ddr}. Also, there has been a suggestion
that the $N(1535)$ is a $\Lambda K$ bound state. This makes the
minimum Fock component a 5 constituent state, with a faster form factor
falloff according to Eqn~(\ref{falloff}). This is
not supported by the data.
\subsection{Polarization---expectations and data} \label{helicity}
The scaling rules tell us the leading scaling behavior, assuming nothing
else suppresses the amplitude further. In particular, there can be
further suppression if the helicity conservation rules are violated.
The basic rule is that, neglecting quark mass and binding, the quark
helicity is conserved in interactions with either gluons or
photons. If all interactions are at close range, the orbital
angular momentum of the quarks can be neglected, and then the
helicity of the hadrons overall must be conserved. Each unit violation
of the helicity conservation rule costs a factor of $O(m/Q)$ where $m$ is
some mass scale and $Q$ is some momentum transfer scale~\cite{cg84,pire}.
The nucleon electromagnetic form factors give a simple example. Thinking
in the Breit frame, a transverse photon with helicity $+1$ hitting a
nucleon with helicity $+1/2$ gives a final state nucleon also of helicity
$+1/2$. Hadron helicity is conserved; The previous rules apply. The
result in terms of $G_M$ comes from
\begin{equation}
G_+ = {1\over 2m_N} \langle R, \lambda^\prime = {1\over 2} |
\epsilon_\mu^{(+)} \cdot j^\mu(0) | N, \lambda = {1\over 2} \rangle
= {Q\over m_N \sqrt{2}} G_M \propto {1\over Q^3}
\end{equation}
\noindent and so one gets $G_M \propto 1/Q^4$, which is well known to
be true. However, bringing in a longitudinal photon leads to a final
helicity of $-1/2$, and so the amplitude should be suppressed by a power
of $Q$, and
\begin{equation}
G_0 = {1\over 2m_N} \langle R, \lambda^\prime = {1\over 2} |
\epsilon_\mu^{(0)} \cdot j^\mu(0) | N, \lambda = {1\over 2} \rangle
= G_E \propto {1\over Q^4} .
\end{equation}
Thus for the Pauli
form factor $F_2$ (using $\tau \equiv Q^2/4m_N^2$),
\begin{equation}
F_2 = {G_M - G_E \over 1 + \tau} \propto {1 \over Q^6}.
\end{equation}
\noindent Comparing to $F_1$ in the figure
($F_1 = (G_E + \tau G_M) / (1 + \tau) \propto 1/Q^4$), one
sees that this prediction from hadron helicity conservation proves to be
true in nature~\cite{bosted}.
\begin{figure} [h]
\centerline{ \epsfysize 4.5cm \epsfbox{bosted92.eps} }
\label{bosted92}
\caption{Checking the $F_2$ scaling behavior vs. data.}
\end{figure}
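A quick numerical illustration of this power counting: assuming, purely for
the sake of the example, dipole forms $G_E=G_D$ and $G_M=\mu_p G_D$, the
combinations $Q^4F_1$ and $Q^6F_2$ flatten out at large $Q^2$.
\begin{verbatim}
import numpy as np

m_N, mu_p = 0.938, 2.793
G_D = lambda Q2: 1.0 / (1.0 + Q2 / 0.71) ** 2

for Q2 in (5.0, 10.0, 20.0, 30.0):
    tau = Q2 / (4.0 * m_N**2)
    GE, GM = G_D(Q2), mu_p * G_D(Q2)
    F1 = (GE + tau * GM) / (1.0 + tau)
    F2 = (GM - GE) / (1.0 + tau)
    print(Q2, Q2**2 * F1, Q2**3 * F2)   # both columns level off
\end{verbatim}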
\subsection{Normalized calculations} \label{ddr}
When normalized calculations can be done, they become the heart of the
perturbative predictions for exclusive reactions. For example, for some
typical form factor the whole high momentum transfer calculation is
\begin{equation}
F(Q^2)= \int [dx] [dy] \phi(x,Q^2) T(x,y,Q^2) \phi(y,Q^2)
\end{equation}
\noindent Here $\phi(x)$ is the distribution amplitude for the final
baryon, simply related to its wave function, and describes finding three
quarks with substantially parallel momenta, with a tolerance related to
the scale $Q $, and with momentum fractions $x_i$; $\phi(y)$ is the same
for the initial state. The distribution amplitudes are only weakly
dependent on $Q$. The main, power law, $Q$ dependence comes from the
amplitude $T$, which describes one quark absorbing a large momentum
transfer
$Q$ and sharing it with the other quarks so they are all parallel moving
in the final state. It is calculated in perturbation theory.
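To make the structure of this convolution concrete, here is a deliberately
toy numerical sketch: an asymptotic-like distribution amplitude
$\phi(x)=120\,x_1x_2x_3$ on the simplex and a schematic hard kernel carrying
the $1/Q^4$ behavior of the two gluon exchanges. Both ingredients are
stand-ins of ours, not the actual nucleon calculation.
\begin{verbatim}
import numpy as np

def simplex_grid(n):                 # crude grid on x1+x2+x3=1
    pts = [(i / n, j / n, 1.0 - (i + j) / n)
           for i in range(1, n) for j in range(1, n - i)]
    return np.array(pts), 1.0 / n**2   # ~equal-weight cells

phi = lambda x: 120.0 * x[:, 0] * x[:, 1] * x[:, 2]

def T_hard(x, y, Q2):                # schematic gluon-exchange kernel
    return 1.0 / (Q2**2 * x[:, None, 1] * x[:, None, 2]
                  * y[None, :, 1] * y[None, :, 2])

x, w = simplex_grid(40)
for Q2 in (4.0, 8.0, 16.0):
    F = w * w * np.einsum('i,ij,j->', phi(x), T_hard(x, x, Q2), phi(x))
    print(Q2, Q2**2 * F)             # Q^4 F(Q^2) is flat by construction
\end{verbatim}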
The wave functions
or distribution amplitudes cannot be calculated in perturbation
theory. One gets them by using QCD
sum rules to obtain moments of the wave functions, which become constraints on
model wave functions; model wave functions have been offered, for
the nucleon, by CZ and COZ (Chernyak, Oglublin, Zhitnitsky), KS
(King-Sachrajda) and GS (Gari-Stefanis).
These all lead to good results for proton $G_M$ (of course),
\begin{equation}
Q^3 G_+(p \rightarrow p) \approx 0.75 {\rm \ GeV}^3 ,
\end{equation}
\noindent with
\begin{equation} \label{asymp}
Q^3 G_+(N \rightarrow \Delta) \approx 0.08 {\rm\ GeV}^{3}
\end{equation}
\noindent and
\begin{equation}
Q^3 G_+(p \rightarrow N^*(1535)) \approx 0.46 {\rm\ GeV}^{3} .
\end{equation}
\noindent For definiteness, these use KS for the nucleon and CP
(Carlson-Poor) for the $\Delta$ and $S_{11}$ (with
apologies to FOZZ (Farrar, Oglublin, Zhang, Zhitnitsky) and BP
(Bonekamp-Pfeil))~\cite{delta}.
The asymptotic $\Delta$ transition amplitude is small. Hence what we see
in the data shown earlier is still the subleading part of the transition.
A deep reason for this is not known. Still, we can claim that the DDR
(Disappearing Delta Resonance) is understood within pQCD.
A quick summary of this quick review is that pQCD has a
decent record in explaining data at high
but feasible momentum transfers, for single baryons.
\section{New Initiatives} \label{newstuff}
\subsection{Semi-exclusive reactions}
A semi-exclusive reaction is one where one or a few, but not
all, of the hadrons in a final state are observed. We will focus on pion
photoproduction~\cite{many,acw12},
$\gamma p \rightarrow \pi X$.
We will also suppose that the transverse
momentum of the pion is high, and that the recoil mass $m_X$ is
high. These provisos ensure that perturbation theory can
be used in the calculations.
We hope to learn or supplement what we know about:
$\bullet$ the polarized and unpolarized gluon distributions of
the target,
$\bullet$ the quark distributions for high x, and
$\bullet$ the pion wave function at short range.
To proceed, let the transverse momenta be high enough (say $k_\perp >
2$ GeV) so that vector meson dominance is a small contributor. The pion
in
$\gamma p \rightarrow \pi X$ comes either from a parton emerging in
some direction and fragmenting (so that the pion is part of a jet)
or---at the very highest transverse momenta---directly as part of the
short range process (whence the pion is kinematically isolated).
Where fragmentation dominates, about 1/3 to 1/2
of the rate comes from gluon targets in the proton. Note the
importance of the high pion transverse momentum, and not just
for allowing perturbative calculations. There has to be a
recoiling particle, hence the process must be higher order.
Then it is possible for the gluon target process to be of the same
order of magnitude as a quark target process.
One quantity to consider is
\begin{equation}
E \equiv A_{LL} \equiv
{d\sigma_{R+} - d\sigma_{R-} \over
d\sigma_{R+} + d\sigma_{R-} }
\end{equation}
\noindent
as a function of $k_\perp$. The $R$ refers to the right handed
polarization of the photon, and the ``$\pm$'' gives the helicity of the
target proton. The corresponding quantity for the subprocess
$\gamma g \rightarrow q \bar q$
is $(-)100$\%, so that there is a possibility of great sensitivity
to the gluon polarization. This is borne out by actual calculations
using a variety of proposed gluon distributions in the
proton~\cite{acw12}.
We will close this section with one more comment. At lower
energies it is harder to find a fragmentation region between
the direct pion production and VMD regions. Help may be
available in fishing out gluon target events by looking at two jets
or two hadrons $180^\circ$ apart in azimuthal angle. Think of
the two parton level diagrams,
\begin{figure} [ht]
\centerline{\epsfysize .75 in \epsfbox{diagrams.eps} }
\caption{`Gluon fusion' and `quark Compton' subgraphs for pion
photoproduction.}
\end{figure}
\noindent Fragmenting $q$'s give faster hadrons than fragmenting
glue. Perhaps observing two pions with some cut
like each
$k_\perp$ above 1.5 GeV suffices to ensure that
gluon fusion dominates quark Compton~\cite{twojets} even
at CEBAF with 12 GeV.
\subsection{Approach to pQCD in $\Delta(1232)$ electroproduction}
Electroproduction of the $\Delta(1232)$,
$\gamma^* + N \rightarrow \Delta$,
is a tough place to see pQCD at
work for two reasons. One is that the low $Q^2$ starting point is so
different from the asymptotic ending point. In terms of the multipole
amplitudes, the quark model expectation, borne out by data, is that
the so-called electromagnetic ratio (EMR) or
$E_{1+}/M_{1+}$ is essentially zero at low $Q^2$, whereas the high $Q^2$
pQCD prediction is that the same ratio is unity. The other is that the
leading term asymptotically is unusually small, as we have already noted
in section~\ref{ddr}.
Since pQCD seems to work at a few GeV$^2$ in more normal cases,
we~\cite{cm98} thought we should examine how the probably delayed approach
to the pQCD result might go as a function of $Q^2$. We did so by
choosing simple forms that would give the correct results at low and
high $Q^2$ and that obeyed a few principles. We worked using the
language of helicity amplitudes, say the $G_+$ and $G_-$ defined in
section~\ref{helicity}. The principles were basically three: the
falloffs of $G_+$ and $G_-$ should be $1/Q^3$ and $1/Q^5$
asymptotically; another is that there should be a kinematic zero in the
amplitude at a (timelike) $Q^2$ where the $\Delta$ does not recoil when
produced off a standing nucleon; and another is the high $Q^2$
normalization (with due regard for the uncertainties of the calculation)
of
$G_+$ that was quoted in section~\ref{ddr}.
At the photon point, $Q^2=0$, the overall normalization of the two
helicity amplitudes was fixed by comparing to existing data. The size of
$G_-$, essentially given by the mass parameter governing its falloff in
$Q^2$, was also determined from helicity-unseparated data on
$\Delta$ electroproduction. Some tweaking of the $G_+$ mass parameter
was also needed: there was some information about $E_{1+}/M_{1+}$ at 3
GeV$^2$ even before the recent CEBAF data was released. Results of
our fits are shown in the Figure below.
\begin{figure} [h]
\centerline{\epsfysize 4 cm
\epsfbox{emr.eps}}
\label{emr}
\caption{The electromagnetic ratio for $\Delta$ electroproduction}
\end{figure}
The solid curve is our preferred fit; the dashed curve is a naive fit
that did not fit the unseparated data well, and the not so different
dotted curve has an asymptotic $G_+$ that was in our opinion too large
even given generous uncertainties in the calculated value. It appears
that even in this tough situation there will be some push toward the pQCD
result by 10 GeV$^2$ momentum transfer.
\medskip
We have only mentioned two new initiatives because of space and time
limitations. Others exist, notably~\cite{dvc} the idea of off-forward
parton distributions and applications to deeply virtual Compton
scattering and meson electroproduction, and also new work on
inclusive/exclusive connections.
\begin{acknowledge}
I thank the organizers of this excellent workshop for their hard work,
my collaborators on the projects described in the ``new initiatives
section,'' namely Andrei Afanasev, Nimai Mukhopadhyay, Chris Wahlquist,
and the NSF for support under grant PHY-9600415.
\end{acknowledge}
\makeatletter \if@amssymbols%
\clearpage
\else\relax\fi\makeatother
|
1,116,691,500,988 | arxiv | \section{Introduction}
\label{sec;introduction}
In \citep{approximations_of_set_valued_functions} the approximation of set-valued functions mapping $[a,b]$ to compact subsets of $\mathbb{R}^d$ is discussed, and theoretical results, regarding the adaptation of the operator of polynomial interpolation from real-valued functions to set-valued functions, have been established. The main idea of this adaptation is to replace operations between numbers by operations between sets. More precisely, given a finite number of samples of a set-valued function $F$, $\{F(x_{i})\}_{i=0}^{N}$, we find all metric chains (see \ref{eq:metric_chains}) connecting these sample sets. The {\bf metric polynomial interpolant} of the set-valued function $F$ at a point $x$ is then defined as the union of the $\mathbb{R}^d$ values at $x$ of the $\mathbb{R}^d$-valued polynomials interpolating the metric chains.
Another way of viewing the problem is the reconstruction of a set in $\mathbb{R}^{d+1}$ from its parallel cross-sections, which are compact sets in $\mathbb{R}^{d}$. For example, 3D object reconstruction from 2D cross-sections is an important problem in geometric modelling, where various algorithms have been proposed, e.g. \cite{Bajaj}, \cite{Boissonnant}, \cite{KelsDyn}, \cite{Levin1986}.
In this work, we limit our research to set-valued functions mapping $[a,b]$ to compact subsets of $\mathbb{R}$. Our contribution in Section 2 is an efficient algorithm, which finds a small sub-collection of the collection of all metric chains built on the samples $\{F(x_{i})\}_{i=0}^{N}$, which we term \textit{significant metric chains}. These significant metric chains are sufficient for reconstructing an approximation of the graph of the set-valued function.
We demonstrate the results of our new algorithm on Lipschitz continuous set-valued functions, and choose $\{x_i\}_{i=0}^N$ to be the roots of the Chebyshev polynomial of degree $N+1$. We show that the algorithm “reconstructs” the graph of $F$, which is a $2D$ object, from its $1D$ samples with an approximation rate of $O\Big(\frac{\log{N}}{N}\Big)$, as predicted by the theory in \cite{approximations_of_set_valued_functions}.
In Section 3 we modify the theoretical and the algorithmic results to achieve a better approximation rate. In particular, we obtain a rate of $O(h^{4})$, where $h$ is the maximal distance between adjacent interpolation points. This is done under the additional assumption that the boundaries of the graph of $F$ are $C^{4}$-smooth.
According to our conclusions in Section 2, the maximal error occurs in the vicinity of the points of topology change of $F$ (PCTs). Thus, we suggest a method for high order approximation of the points of change of topology, which results in decreasing the interpolation error. Another factor contributing to the improvement in the decay of the error is due to the use of spline interpolation.
We demonstrate our algorithm using a "not-a-knot" cubic spline interpolation at equally spaced points. By modifying the algorithm of the previous section, we can separate the holes in the graph of $F$ from each other, and use individual spline approximations for the boundaries of each hole.
In Sections 2 and 3 we deal with set-valued functions (SVFs) with Lipschitz type holes. In Section 4 we extend
our algorithm to deal with SVFs whose holes have upper and lower boundaries, which are $C^{2k}$ with H\"older type singularities at both PCTs. More specifically, assuming the hole is in the interval $[c,d]$, we consider the case where the first derivative of its boundaries diverges as $x\to c^{+}$ at a rate of $|x-c|^{-\frac{1}{2}}$ and as $x\to d^{-}$ at a rate of $|x-d|^{-\frac{1}{2}}$.
We further assume the hole is defined as the interior of a closed boundary curve $\Gamma\in C^{2k}$, such that every vertical cross-section at $x\in(c,d)$ cuts the curve at two points.
We develop an algorithm for deriving high order approximations to holes of H\"older type singularity. We remark here that the algorithm suggested in \cite{Levin1986} fails in approximating such holes in the neighborhood of the PCT's. The algorithm suggested here starts with deriving a high order approximation to the location of the singular points, i.e., the PCT's.
Next, this information is used for computing local singular approximations of the upper and lower boundary functions $g$ and $h$ near the PCT's. Afterwards, we subtract these local approximations in order to regularize the given data of $g$ and $h$. Finally, a spline approximation is applied to the regularized data, and the final approximation is obtained by adding back the local singular elements. We present a detailed description of our algorithm and an error analysis for the approximation of the PCTs.
\section{Preliminaries}
\subsection{Preliminaries on sets and on set-Valued functions}\label{pre_set_svf}
In this section we present definitions, notation and operations relevant to our work:
\begin{itemize}
\item The set of all compact non-empty subsets of $\mathbb{R}^{d}$ is denoted by $K(\mathbb{R}^{d})$.
\item For given two sets $V, W\in K(\mathbb{R}^{d})$, the \textbf{Hausdorff metric}, which measures the distance between $V$ and $W$, is defined as
\begin{equation}
d_{H}(V,W)=\max{\bigg\{\max_{v\in V}{d(v,W)},\max_{w\in W}{d(w,V)}\bigg\}},
\end{equation}
where $d(v,W)=\min_{w\in W}{\big\{|v-w|\big\}}$ and $|\cdot|$ is the Euclidean distance.
\item The set of all metric pairs of two given sets $V, W\in K(\mathbb{R}^{d})$ is
\begin{equation}
\Pi(V,W)=\bigg\{(v,w)\in V\times W: v\in \Pi_{V}(w) \ \text{or}\ w\in \Pi_{W}(v)\bigg\},
\end{equation}
where $\Pi_{V}(w)=\big\{ v\in V:|v-w|=d(w,V) \big\}$.
\item The collection of \textbf{Metric Chains} of a finite sequence of compact sets $\{V_{i}\in K(\mathbb{R}^{d})\}_{i=0}^{N}$ is
\begin{equation}
\label{eq:metric_chains}
MC\bigg(\{V_{i}\}_{i=0}^{N}\bigg)=\bigg\{
(v_{0},...,v_{N}) : (v_{i},v_{i+1})\in \Pi(V_{i},V_{i+1}),\ i=0,\dots,N-1\bigg\}.
\end{equation}
Note that $MC\bigg(\{V_{i}\}_{i=0}^{N}\bigg)$ depends on the order of the sets. (A computational sketch of the Hausdorff metric, the metric pairs and the metric chains for finite samples is given right after this list.)
\item A \textbf{Metric Linear Combination} of a finite sequence of compact sets $\big\{V_{i}\in K(\mathbb{R}^{d})\big\}_{i=0}^{N}$ is
\begin{equation}
\label{eq:mlc}
\bigoplus_{i=0}^{N}\lambda_{i}V_{i}=\bigg\{
\sum_{i=0}^{N}{\lambda_{i}v_{i}} : (v_{0},...,v_{N})\in MC\bigg(\{V_{i}\}_{i=0}^{N}\bigg)
\bigg\},
\end{equation} where $\lambda_{i}\in \mathbb{R},\ 0\leq i\leq N$.
\item A set of points $X=\{x_{0},...,x_{N}\}$ is a partition of the interval $[a,b]$ if\\ $a\leq x_{0} < ... < x_{N}\leq b$. The "norm" of $X$ is $|X|=\max_{i}\{{|x_{i+1}-x_{i}|}\}$ for $0\leq i\leq N-1$.
\item A function $F:[a,b]\to K(\mathbb{R}^{d})$ is called a set-valued function (SVF).
\item A set-valued function $F$ is called H\"older continuous, with respect to the Hausdorff metric, if there exists a constant $\mathcal{C}>0$ such that
\begin{equation}
d_{H}(F(x),F(y))\leq \mathcal{C}|x-y|^{\alpha},\quad x,y\in [a,b].
\end{equation}
where $\alpha\in(0,1]$. We denote the collection of all H\"older continuous functions on $[a,b]$ with the constants $\mathcal{C}$ and $\alpha$ by $Hol_{\alpha}([a,b];\mathcal{C})$. In the special case $\alpha=1$, $F$ is called Lipschitz continuous function. We denote the collection of all Lipschitz continuous functions on $[a,b]$ with a constant $\mathcal{L}>0$ by $Lip([a,b];\mathcal{L})$.
\item For a set-valued function $F:[a,b]\to K(\mathbb{R}^d)$, we define the graph of $F$ by
\begin{equation}
Graph(F)=\big\{(x,y):x\in[a,b],\ y\in F(x)\big\}.
\end{equation}
\comment{
\item In \cite{the_metric_integral_of_set_valued_functions}, the linear approximation operators that are adapted from real-valued functions to SVFs, are of the form
\begin{equation}
\label{eq:lo}
A_{X}f(x)=\sum_{i=0}^{N}{a_{i}(x)f(x_{i})},
\end{equation}
where the function $f:[a,b]\to\mathbb{R}$, and $X$ is a partition of $[a,b]$.
\item For $F:[a,b]\to K(\mathbb{R}^{d})$ and $X$ a partition of $[a,b]$, the \textbf{Metric Operator} $A^{M}_{X}$ of $F$ has the form
\begin{equation}
A^{M}_{X}F(x)=\bigoplus_{i=0}^{N}a_{i}(x)F(x_{i}).
\end{equation}}
\item $K^{*}(\mathbb{R})$ is a subspace of $K(\mathbb{R})$. Each value of $F:[a,b]\to K^{*}(\mathbb{R})$ is a union of a finite number of compact intervals. The method presented in this section applies to such $F$.
\item A \textbf{Point of Change of Topology} (PCT) of a set-valued function $F:[a,b]\to K^{*}(\mathbb{R})$ is a point $(x,y)\in Graph(F)$ such that for small enough $\epsilon > 0$ there exists $\delta > 0$ such that for each $z\in[x-\delta,x+\delta]\setminus\{x\}$, $F(z)$ and $F(x)$ have different topology, i.e. $|F(x)\cap B_{\epsilon}(y)|\neq|F(z)\cap B_{\epsilon}(y)|$, where $B_{\epsilon}(y)=[y-\epsilon,y+\epsilon]$ and $|\cdot|$ denotes the number of maximal intervals in a given set.
\item A \textbf{Hole} $H$ of a set-valued function $F:[a,b]\to K^{*}(\mathbb{R})$ is a set of the form
\begin{equation}
\label{eq:definition_of_hole}
H=\big\{(x,y):g(x)<y<h(x),x\in(c,d)\big\}\not\subset Graph(F),
\end{equation}
where $g,h:[c,d]\to \mathbb{R}$, $g(c)=h(c)$, $g(d)=h(d)$ and $g(x),h(x)\in F(x)$ for $x\in[c,d]$. We note that the points $(c, g(c))$ and $(d, g(d))$ are PCTs of $F$.
\comment{\item We consider $F:[a,b]\to{K}^{*}(\mathbb{R})$ with $M<\infty$ holes $\{H_{j}\}_{j=1}^{M}$ with $\big\{h_{j},g_{j}\big\}_{j=1}^{M}$ their boundary functions, defined on respective intervals $[c_i,d_i]$, $a<c_i$, $d_i<b$. Let $u,\ell:[a,b]\to \mathbb{R}$ be real-valued functions representing the upper and the lower boundaries of $F$. We further assume that $F(a)$ and $F(b)$ are convex.}
\item For $F:[a,b]\to{K}^{*}(\mathbb{R})$, we define $u,\ell:[a,b]\to \mathbb{R}$ to be the real-valued functions representing the upper and the lower boundaries of $F$.
\item {\bf The class $\mathcal{F}([a,b], M)$}: In this work we consider the class of set-valued functions denoted by $\mathcal{F}([a,b], M)$, where $M\in\mathbb{N}$, with the following properties:
\begin{enumerate}
\item For $F\in \mathcal{F}([a,b], M)$, $Graph(F)$ has separable $M$ holes $\{H_i\}_{i=1}^M$ (i.e. the closures of the holes are disjoint).
\item A hole $H_i$ is defined on an interval $[c_i,d_i]\subset(a,b)$, with lower and upper boundary functions $g_i$ and $h_i$. Each hole $H_i$ is simple, namely, it is defined as the interior of a closed boundary curve $\Gamma_i$, such that every vertical cross-section at $x\in(c_i,d_i)$ cuts $\Gamma_i$ at two points.
\item The curves $\{\Gamma_i\}$ do not intersect the upper and the lower boundaries of $Graph(F)$.
\item We further assume that $F(a)$ and $F(b)$ are convex.
\end{enumerate}
\item We denote the set of functions $\big\{u,\ell\big\}\bigcup\big\{g_{j}\big\}_{j=1}^{M}\bigcup\big\{h_{j}\big\}_{j=1}^{M}$ by $\partial F$. Note that $\partial F$ consists of all the boundary functions of $Graph(F)$.
\end{itemize}
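To fix ideas, here is the computational sketch referred to in the list above:
the Hausdorff distance, the metric pairs and the metric chains for samples
represented by finite point sets (e.g. the interval endpoints used later).
The function names and the sample data are ours, introduced for illustration.
\begin{verbatim}
def dist_to_set(v, W):
    return min(abs(v - w) for w in W)

def hausdorff(V, W):
    return max(max(dist_to_set(v, W) for v in V),
               max(dist_to_set(w, V) for w in W))

def metric_pairs(V, W):
    pairs = set()
    for v in V:                       # w in Pi_W(v)
        d = dist_to_set(v, W)
        pairs |= {(v, w) for w in W if abs(v - w) == d}
    for w in W:                       # v in Pi_V(w)
        d = dist_to_set(w, V)
        pairs |= {(v, w) for v in V if abs(v - w) == d}
    return pairs

def metric_chains(samples):
    # all (f_0,...,f_N) whose consecutive entries are metric pairs
    chains = [(v,) for v in samples[0]]
    for V, W in zip(samples, samples[1:]):
        P = metric_pairs(V, W)
        chains = [c + (w,) for c in chains for w in W if (c[-1], w) in P]
    return chains

V0, V1 = [0.0, 1.0], [0.0, 0.4, 0.6, 1.0]
print(hausdorff(V0, V1))              # 0.4
print(metric_chains([V0, V1]))
\end{verbatim}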
\subsubsection{Representing the SVF by the boundaries functions}\label{subsub1}
\hfill
\medskip
The SVF $F$ may be defined using all the above boundary functions, as follows:
For $x\in [a,b]$ we identify all the holes' intervals $\{[c_i,d_i]\}_{i\in I(x)} $ containing $x$. If $\#I(x)=J(x)>0$, we order the corresponding boundary values $\{g_i(x)\}$, $i\in I(x)$, in ascending order, and index the relevant holes according to this ordering $\{H_{i_j}\}_{j=1}^{J(x)}.$ The set $F(x)$ may be expressed as
\begin{equation}\label{Fatx0}
F(x)=[\ell(x),g_{i_1}(x)]\cup\bigcup_{j=1}^{J(x)-1}[h_{i_j}(x),g_{i_{j+1}}(x)]\cup[h_{i_J}(x),u(x)].
\end{equation}
If $J(x)=0$,
\begin{equation}\label{Fatx1}
F(x)=[\ell(x),u(x)].
\end{equation}
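For illustration, the representation (\ref{Fatx0})--(\ref{Fatx1}) translates
directly into code; the hole data below is an assumption made up for the
example.
\begin{verbatim}
def F_of_x(x, lower, upper, holes):
    # holes: list of (c, d, g, h); g and h bound the hole from
    # below and above on (c, d)
    active = sorted((g(x), h(x)) for (c, d, g, h) in holes if c < x < d)
    intervals, lo = [], lower(x)
    for g_x, h_x in active:           # solid part between the holes
        intervals.append((lo, g_x))
        lo = h_x
    intervals.append((lo, upper(x)))
    return intervals

holes = [(0.25, 0.75,
          lambda x: 0.5 - (x - 0.25) * (0.75 - x),
          lambda x: 0.5 + (x - 0.25) * (0.75 - x))]
print(F_of_x(0.5, lambda x: 0.0, lambda x: 1.0, holes))
# [(0.0, 0.4375), (0.5625, 1.0)]
\end{verbatim}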
In this work we consider the approximation of a set-valued function from a finite number of its samples, and we consider three cases specified by the smoothness class of the boundary functions $\big\{u,\ell\big\}\bigcup\big\{g_{j}\big\}_{j=1}^{M}\bigcup\big\{h_{j}\big\}_{j=1}^{M}$.
In Section \ref{sec:computed_svf_interpolant} we consider the case of boundary functions of Lipschitz type. In Section \ref{C4boundaries} we assume the boundary functions are $C^4$. In Section \ref{Holderboundaries} we deal with SVFs whose holes have upper and lower boundaries of H\"older type with H\"older exponent $\frac{1}{2}$ at both PCTs.
\section{Approximated metric polynomial interpolant} \label{sec:computed_svf_interpolant}
Paper \cite{approximations_of_set_valued_functions} presents a theoretical method for the interpolation of set-valued functions by the \textit{metric polynomial interpolant}. Inspired by the definition of the \textit{metric polynomial interpolant}, we present an efficient algorithm for approximating a set-valued function $F$ from a finite number of its samples. We term the output of our algorithm the \textit{approximated metric polynomial interpolant}.
For Lipschitz continuous
$F$ and for Chebyshev interpolation points the approximated metric polynomial interpolant approximates $F$ at the same rate as the \textit{metric polynomial interpolant}.
\subsection{The metric polynomial interpolant}
In this section we present the adaptation of the classical polynomial interpolation operators in Lagrange form to set-valued functions, and present an upper bound of the error in an important special case.
Recall that for a real-valued function $f\in C[a,b]$ the Lagrange form of the polynomial interpolation operator at a partition $X\subset[a,b]$ is given by
\begin{equation}\label{Lagrange}
\mathcal{P}_{X}f(x)=\sum_{i=0}^{N}{l_{i}(x)f(x_{i})}.
\end{equation}
For SVF approximation we use the metric analogue of the polynomial interpolation operator:
\begin{definition}(\cite{approximation_of_set_valued_functions}, Section 7.4.3)
Let $F:[a,b]\to K(\mathbb{R}^{d})$ be a set-valued function and $X\subset[a,b]$ be a partition. Let $\big\{(x_{i},F(x_{i}))\big\}_{i=0}^{N}$ be a data set consisting of the samples of $F$ at $X$. The \textbf{metric polynomial interpolation operator} is given by
\begin{equation}\label{PMX}
\mathcal{P}^{M}_{X}F(x)=\bigoplus_{i=0}^{N}l_{i}(x)F(x_{i})=\bigg\{
\sum_{i=0}^{N}l_{i}(x)f_{i}:(f_{0},...,f_{N})\in MC\Big(\big\{F(x_{i})\big\}_{i=0}^{N}\Big)
\bigg\},
\end{equation}
where $l_{i}(x)$ is defined as in the real-valued case.
\end{definition}
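In practice, given one metric chain (for instance enumerated as in the sketch
of Section \ref{pre_set_svf}), the corresponding summand of (\ref{PMX}) is
just a Lagrange interpolating polynomial evaluated at $x$. A minimal sketch,
with names of our own choosing:
\begin{verbatim}
def lagrange_basis(X, i, x):
    out = 1.0
    for j, xj in enumerate(X):
        if j != i:
            out *= (x - xj) / (X[i] - xj)
    return out

def chain_value(X, chain, x):
    # value at x of the polynomial interpolating (x_i, f_i)
    return sum(lagrange_basis(X, i, x) * f for i, f in enumerate(chain))

X = [0.0, 0.5, 1.0]
print(chain_value(X, (0.0, 0.4, 1.0), 0.25))  # one point of P^M_X F(0.25)
\end{verbatim}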
It is shown in \cite{approximation_of_set_valued_functions} that for $F\in Lip([a,b],\mathcal{L})$ the metric polynomial interpolant approximates $F$, in the Hausdorff metric, with approximation rate $O(\log(N)/N)$, where $X$ is the set of $N$ Chebyshev points in $[a,b]$ (the roots of the $N$-th degree
Chebyshev polynomial in $[a,b]$).
However, the elegant formula (\ref{PMX}) representing the metric polynomial interpolant is not a practical one, since, for most cases, in particular for $F\in \mathcal{F}([a,b],M)$, the set of metric chains is infinite. We present below an efficient algorithm for computing an SVF approximation for $F\in \mathcal{F}([a,b],M)$, which gives the same approximation rate as the metric polynomial interpolant.
\subsection{The algorithm}
In this section we present our new method for an efficient computation of an interpolant to $F$. We do it by approximating the boundaries of the graph of $F$. First, we introduce and recall notions and notation used in the presentation of our method.
\subsubsection{Notions and notation}
\begin{itemize}
\item $\mathcal{N}$ is a \textbf{Maximal Interval} in a set $A\in K^{*}(\mathbb{R})$, if there is no interval $\mathcal{M}\neq \mathcal{N}$ satisfying $\mathcal{N}\subset\mathcal{M}\subset A$.
\item \textbf{Samples of an SVF:}
Given a set of interpolation points $X=\{x_{i}\}_{i=0}^{N}$, the samples of an SVF $F:[a,b]\to K^{*}(\mathbb{R})$ at these points are $\{F(x_{i})\}_{i=0}^{N}$. Each sample has the form
\begin{equation}\label{sample}F(x_{i})=\bigcup_{j=0}^{M_i}I_{i,j},
\end{equation}
where, for $j=0,...,M_i$, $I_{i,j}=[a_{2j}^{[i]},a_{2j+1}^{[i]}]$ are the maximal intervals in $F(x_i)$, with
$a_{j}^{[i]}< a_{j+1}^{[i]}$ for $j=0,...,2M_i$.
\item The set of {\bf approximated points of change of topology} in $F(x_{i})$, for $0\leq i\leq N$, is defined by
$$APCT(F,x_{i})=\bigg\{ p=\frac{\max{(I_{j,k})}+\min{(I_{j,k+1})}}{2}:\ |j-i|=1,\ 0\le k< M_j,\ p\in F(x_{i}),\ p\notin F(x_{j})\bigg\}.$$
In words, $p$ is the midpoint of a gap between two consecutive maximal intervals of a neighboring sample $F(x_{j})$, which belongs to $F(x_{i})$ but not to $F(x_{j})$ (see the computational sketch following this list).
\item {\bf Extended PCT points:} The sets of right and left extended PCT points in $F(x_{i})$, for $0\leq i\leq N$, are defined by
$$EP_{R}(F,x_{i})=\Big\{p\in F(x_i):\exists j<i\ \text{ s.t. } p\in APCT(F,x_{j}) \text{ and } p\in F(x_\ell),\ j<\ell<i\Big\},$$
$$EP_{L}(F,x_{i})=\Big\{p\in F(x_i):\exists j>i\ \text{ s.t. } p\in APCT(F,x_{j}) \text{ and } p\in F(x_\ell),\ i<\ell<j\Big\}.$$
\item \textbf{Discrete Samples:}
Given a sample $F(x_i)$ defined by (\ref{sample}), the discrete sample is the set of the end points of its maximal intervals,
$$
\zeta \big(F(x_{i})\big)= \big\{a^{[i]}_{0},a^{[i]}_{1}, \ldots, a^{[i]}_{2M_{i}},a^{[i]}_{2M_{i}+1}\big\}.
$$
Note that $\zeta \big(F(x_{i})\big)$ consists of boundary points of $Graph(F)$.
\item The set of \textbf{Significant Metric Chains} of a given finite set of samples $\big\{F(x_{i})\big\}_{i=0}^{N}$, is a subset of $MC_{F,X}=MC\big(\{F(x_{i})\}_{i=0}^{N}\big)$ given by:
\begin{equation}
\begin{aligned}
SMC\bigg(\big\{F(x_{i})\big\}_{i=0}^{N}\bigg)=\bigg\{
\big(f_{0},...,f_{N}\big)\in MC_{F,X}:\forall i,\ 0\leq i\leq N-1, (f_{i},f_{i+1})\in \Pi\big(T_{i},T_{i+1}\big)\bigg\},
\end{aligned}
\end{equation}
where $T_{i}=\zeta\big(F(x_{i})\big)\cup APCT(F,x_{i})\cup EP_{R}(F,x_{i})\cup EP_{L}(F,x_{i}),\quad 0\leq i\leq N$.
We denote the set of the polynomials that interpolate the set of significant metric chains by $P_{SMC(\{F(x_{i})\}_{i=0}^{N})}$.
\end{itemize}
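To make these notions concrete, the following Python sketch computes the discrete samples and the APCTs from a given list of samples. The data layout and the function names are ours and serve only as an illustration; it is a minimal sketch assuming each sample is given as a sorted list of disjoint maximal intervals.
\begin{verbatim}
# A minimal sketch (our own data layout): samples[i] is the sorted
# list of disjoint maximal intervals (lo, hi) of F(x_i).

def discrete_sample(sample):
    """End points of the maximal intervals: zeta(F(x_i))."""
    return [p for (lo, hi) in sample for p in (lo, hi)]

def in_sample(p, sample):
    """Membership test: p in F(x_i)."""
    return any(lo <= p <= hi for (lo, hi) in sample)

def apct(samples, i):
    """Approximated PCTs in F(x_i): midpoints of gaps of a neighboring
    sample that belong to F(x_i) but not to that neighbor."""
    points = []
    for j in (i - 1, i + 1):              # neighbors with |j - i| = 1
        if not 0 <= j < len(samples):
            continue
        nb = samples[j]
        for k in range(len(nb) - 1):
            p = (nb[k][1] + nb[k + 1][0]) / 2
            if in_sample(p, samples[i]) and not in_sample(p, nb):
                points.append(p)
    return points

# Toy data: a hole is present only in the middle sample.
samples = [[(-1.0, 1.0)], [(-1.0, -0.2), (0.2, 1.0)], [(-1.0, 1.0)]]
print(discrete_sample(samples[1]))          # [-1.0, -0.2, 0.2, 1.0]
print(apct(samples, 0), apct(samples, 2))   # [0.0] [0.0]
\end{verbatim}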
\subsubsection{A description of the algorithm for the approximation of $F\in \mathcal{F}([a,b],M)$}
\begin{enumerate}
\item \textbf{Create the discrete samples}:
We substitute each one of the given samples by a discrete sample, obtained from the given sample by replacing each maximal interval by its two end points.
\item \textbf{Find all significant metric chains}:
This step is done by utilizing a tree data structure. Initially the $i^{\text{th}}$ layer of the tree consists of $2M_{i}+2$ nodes, each containing a point of the $i^{\text{th}}$ discrete sample. At the end of this step each path in the tree represents a significant metric chain.
First, the algorithm identifies all {\bf APCTs}. Each identified APCT is added as a new node to the corresponding layer, and connected to the two nodes corresponding to its metric pair. Next, the {\bf extended PCT points} are added to the corresponding layers.
Then the algorithm scans all the layers of the tree and connects all metric pairs between two consecutive layers.
\item \textbf{Compute the real-valued interpolants of the significant metric chains}:
By using a known algorithm for computing real-valued polynomial interpolation, the algorithm computes a set of polynomial interpolants. Each one of these polynomials interpolates the values of one of the significant metric chains at the points $X$.
\item
{\bf Extracting the approximations to the boundaries of $F$:}
Due to the structure of the significant metric chains,
the upper and lower boundaries, $u$ and $\ell$, of $Graph(F)$ are each interpolated by one of the interpolation polynomials computed in the previous step. Also, the boundary of each of the $M$ holes in $Graph(F)$ is approximated by at least one pair of interpolants, one approximating its upper boundary and one its lower boundary; both polynomials interpolate the two APCTs of that hole.
The algorithm extracts these approximations to the boundaries of $Graph(F)$ by relating each significant metric chain either to $u$, to $\ell$, or to an upper or lower boundary of one of the $M$ holes.
In case there is more than one pair of interpolating polynomials for the boundary of a hole, an arbitrary pair is taken as the approximant of that boundary. The latter case is nongeneric.
\item{\bf Construct the approximation $\tilde F(x)$:}
Using the approximations to the boundaries of $Graph(F)$, we apply the procedure in Section \ref{subsub1} for defining a set-valued function using its boundary functions.
\end{enumerate}
\medskip
{\bf Conclusion: The interpolation property.}
Since the approximated boundary functions interpolate the exact boundary functions at the sample points, it follows that $\tilde F(x_i)=F(x_i)$ for all $x_i\in X$.
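The following Python sketch illustrates steps 3 and 5 for a single hole: each significant metric chain is a vector of values at the sample points, its interpolating polynomial is computed (here in the numerically stable barycentric form), and a cross-section of $\tilde F$ is assembled from the boundary interpolants. The chain values and the outer boundaries below are hypothetical stand-ins for the output of step 2.
\begin{verbatim}
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def chebyshev_points(a, b, n):
    """The n+1 roots of the Chebyshev polynomial of degree n+1 on [a, b]."""
    k = np.arange(n + 1)
    t = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))
    return (a + b) / 2 + (b - a) / 2 * t

a, b, n = -1.0, 1.0, 10
x = chebyshev_points(a, b, n)

# Hypothetical chain values for the lower/upper boundaries of one hole
# (in the algorithm these come from step 2, including the APCT values).
g_chain = 0.3 * x**2 - 0.5
h_chain = 0.5 - 0.3 * x**2

g_tilde = BarycentricInterpolator(x, g_chain)   # step 3
h_tilde = BarycentricInterpolator(x, h_chain)

# Interpolation property: the approximants reproduce the data at X.
assert np.allclose(g_tilde(x), g_chain)

# Step 5 on a cross-section t: a union of intervals between boundaries,
# with -1 and 1 as stand-ins for ell(t) and u(t).
t = 0.37
print([(-1.0, float(g_tilde(t))), (float(h_tilde(t)), 1.0)])
\end{verbatim}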
\subsection{Error analysis}
By the theory in \cite{approximation_of_set_valued_functions}, a Lipschitz continuous set-valued function $F$ is approximated at the rate $O\left(\log(N)/N\right)$ in the Hausdorff metric,
by the \textit{metric polynomial interpolant},
interpolating its values at the $N+1$ Chebyshev points (the roots of the Chebyshev polynomial of degree $N+1$). The metric polynomial interpolant is the set of polynomials which interpolate all the metric chains defined by the values of $F$ at the $N+1$ Chebyshev points.
Based on the values of $F$ at the $N+1$ Chebyshev points, our algorithm generates an interpolant $\tilde{F}$, interpolating $F$ at the Chebyshev points. Here we show that $\tilde{F}$ approximates $F$ with error (in the Hausdorff metric) of order $O(\log(N)/N)$ as $N\to\infty$, using only a small subset of metric chains (significant metric chains).
Before stating this result as a theorem, we state three results
which are needed in the proof of the theorem.
The first result is
Theorem 9.3.4 from \cite{approximation_of_set_valued_functions}.
\begin{theorem}
\label{Th:9.3.4}
If $F\in Lip([a,b],\mathcal{L})$, then for any $f\in\partial F$
\begin{equation}
\label{th:lip_svf}
f\in Lip(D_{f},\mathcal{L}),
\end{equation}
with $D_{f}$ the domain of definition of $f$.
\end{theorem}
\begin{lemma}\label{lemma:Cheb}
Let $f\in Lip([a,b],L)$, and let $X=\{x_i\}_{i=0}^N$ be the $N+1$ Chebyshev points in $[a,b]$. Consider the polynomial $p_N\in\Pi_N$ interpolating the perturbed data $f^*(x_i)=f(x_i)+e_i$, where $|e_i|\le \epsilon$ for $0\le i\le N$. Then
\begin{equation}
\|f-p_N\|_\infty\le C_1\frac{\log(N)}{N}+C_2\epsilon\log(N).
\end{equation}
\end{lemma}
\begin{proof}
By the theory of polynomial interpolation to Lipschitz continuous real-valued functions at Chebyshev points \cite{BL}, and since $f\in Lip([a,b],L)$, it follows that if $q_N$ interpolates the values $\{f(x_i)\}_{i=0}^N$, then
$\|f-q_N\|_\infty\le C_1\log(N)/N$ for $N$ large.
The polynomial $p_N$ interpolates values which are an $\epsilon$-perturbation of the values of $f$.
Considering the Lagrange form of the interpolation operator in (\ref{Lagrange}),
it follows that
\begin{equation}
q_N(x)-p_N(x)=\sum_{i=0}^{N}{l_{i}(x)(q_N(x_{i})-p_N(x_i))},\ \ x\in [a,b].
\end{equation}
By \cite{BL}, the Lebesgue constant of the interpolation operator at Chebyshev points is $O(\log(N))$ as $N\to \infty$, namely, $\sum_{i=0}^{N}|l_{i}(x)|\le C\log(N)$.
Therefore, it follows that
\begin{equation}
|q_N(x)-p_N(x)|\le C_2\epsilon\log(N),\ \ x\in [a,b].
\end{equation}
Finally,
\begin{equation}
|f(x)-p_N(x)|\le |f(x)-q_N(x)|+|q_N(x)-p_N(x)|\le C_1\frac{\log(N)}{N}+C_2\epsilon\log(N).
\end{equation}
\end{proof}
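As a numerical sanity check of Lemma \ref{lemma:Cheb} (not part of the proof), the following sketch interpolates the Lipschitz function $f(x)=|x|$ at Chebyshev points, perturbs the data at the level $\epsilon=\pi/N$, and verifies that the error divided by $\log(N)/N$ remains bounded.
\begin{verbatim}
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def chebyshev_points(n):
    k = np.arange(n + 1)
    return np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))

rng = np.random.default_rng(0)
f = np.abs                        # Lipschitz on [-1, 1] with L = 1
t = np.linspace(-1, 1, 4001)
for n in (16, 64, 256, 1024):
    x = chebyshev_points(n)
    eps = np.pi / n               # perturbation level of the Lemma
    p = BarycentricInterpolator(x, f(x) + rng.uniform(-eps, eps, x.size))
    err = np.max(np.abs(f(t) - p(t)))
    print(n, err, err / (np.log(n) / n))   # last column stays bounded
\end{verbatim}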
The third result is a general lemma on sets which is proved in Appendix A.
\begin {lemma}
\label{Lemma:sets}
Let $A_1, A_2, B_1,B_2$ be subsets of $\mathbb {R}^d$. Then
$$ d_H\big(A_1\cup A_2,B_1\cup B_2\big)\le \max\big\{d_H(A_1,B_1),d_H(A_2,B_2)\big\}.$$
\end{lemma}
Equipped with these results we turn to the main theorem of this section:
\begin{theorem}\label{Thm1}
Let $F\in Lip([a,b],L)\cap \mathcal{F}([a,b],M)$, and let $\tilde{F}_N$ be the output of our algorithm from input consisting of samples of $F$ at the $N+1$ Chebyshev points. Then
for $x\in[a,b]$,
\begin{equation}
\label{eq:error_a}
d_{H}(\tilde{F}_N(x),F(x))=O\Big(\frac{\log{N}}{N}\Big) \quad
\textrm{as}\ \ N\to\infty\ .
\end{equation}
\end{theorem}
\begin{proof}
To study the approximations of the boundary functions of the holes, let us first consider the case of a single hole, $H$. We assume the hole is spanned in the interval $[c,d]$, and we let $h$ and $g$ be the functions
defined on $[c,d]$ which describe the upper and the lower boundaries of the hole respectively. The point $(c,h(c))=(c,g(c))$ and the point $(d,h(d))=(d,g(d))$ are the left and right points of change of topology (PCTs) in the graph of $F$.
We further simplify the geometry assuming that
\begin{equation}\label{assumption3}
h(c),h(d)\in (\max_{x\in [a,b]}\ell(x),\min_{x\in [a,b]}u(x)).
\end{equation}
By Theorem~\ref{Th:9.3.4} $h,g\in Lip([c,d],\mathcal{L})$ and $u,\ell\in Lip([a,b],\mathcal{L})$. Our algorithm constructs interpolating polynomials at Chebyshev points in $[a,b]$, which requires us to extend $h$ and $g$ to the whole interval. We extend both $h$ and $g$ by the constant $h(c)=g(c)$ on $[a,c]$ and by the constant $h(d)=g(d)$ on $[d,b]$. We denote the extended functions by $h^*$ and $g^*$, and it is easy to verify that these functions are in $Lip([a,b],\mathcal{L})$, since $F\in Lip([a,b],L)$.
Let $X=\{x_i\}_{i=0}^N$ be the $N+1$ Chebyshev points in $[a,b]$, and let $\bar X=\{x_i\}_{i=n}^m=X\cap [c,d]$. Given the set-valued data $\{F(x_i)\}_{i=0}^N$, our algorithm identifies the points $\bar X$, and extracts the values of the functions $h$ and $g$ at these points,
$$\{h(x_i),\ g(x_i)\},\ \ \ n\le i \le m.$$
The values $h(c)=g(c)$ and $h(d)=g(d)$ are not given, and the algorithm approximates the left PCT, $(c,h(c))$, by the point
$(x_{n-1},(g(x_{n})+h(x_{n}))/2)$, since $(g(x_{n})+h(x_{n}))/2$ is a metric pair with both $g(x_n)$ and $ h(x_n)$. Similarly the right PCT of the hole, $(d,h(d))$, is approximated by the point
$(x_{m+1},(g(x_{m})+h(x_{m}))/2) .$
The algorithm constructs two significant metric chains on $X$,
$$\{h_\ell\}_{i=0}^{n-1},h(x_{n}),h(x_{n+1})\ldots h(x_m),\{h_r\}_{i=m+1}^N ,$$
and
$$\{g_\ell\}_{i=0}^{n-1},g(x_{n}),g(x_{n+1})\ldots
g(x_m),\{g_r\}_{i=m+1}^N, $$ where
\begin{equation}
\label{eq:hl_and_hr}
h_\ell=g_\ell=(g(x_{n})+h(x_{n}))/2, \qquad
h_r=g_r=(g(x_{m})+h(x_{m}))/2.
\end{equation}
Next, we analyze the approximation of $h^*$ by our algorithm. The result also applies for the approximation of $g^*$.
Our algorithm computes the polynomial $p_N\in \Pi_N$, which interpolates the data $\{(x_i,h_\ell)\}_{i=0}^{n-1}$, $\{(x_i,h(x_i))\}_{i=n}^m$, $\{(x_i,h_r)\}_{i=m+1}^{N}$. In order to prove the theorem, we show that
\begin{equation}\label{OlogNoN}
\|h^*-p_N\|_\infty=O\bigg(\frac{\log{N}}{N}\bigg), \quad \text{as}\ N \to \infty.
\end{equation}
Since both $g$ and $h$ are in $Lip([c,d],\mathcal{L})$,
since $h(c)=g(c)$, and since the maximal distance between two adjacent points in $X$ is bounded by $\frac{\pi}{N}$, it follows that
\begin{equation}
|h^*(c)-h_\ell|=\Big|h(c)-\frac{g(x_{n})+h(x_{n})}{2}\Big|=\frac{1}{2}\big|(g(c)-g(x_{n}))+(h(c)-h(x_{n}))\big|.
\end{equation}
Thus,
\begin{equation}\label{LpiN}
|h^*(c)-h_\ell|\le \mathcal{L}|c-x_{n}|\le \mathcal{L}|x_{n-1}-x_{n}|\le \mathcal{L}\frac{\pi}{N}.
\end{equation}
Using Lemma \ref{lemma:Cheb} for $f=h^*$ and $\epsilon \le\mathcal{L}\frac{\pi}{N} $ it follows that
\begin{equation}\label{hminusp}
\|h^*-p_N\|_\infty\le C\frac{\log(N)}{N}.
\end{equation}
We denote $\tilde{h}^*_N\equiv p_N$, and analogously, we define $\tilde{g}^*_N$ to be the $N$-th degree polynomial interpolation to $g^*$ at the points $X$. By the arguments leading to
\eqref{hminusp} we conclude that $\|g^*-\tilde g^*_N\|_\infty\le C\frac{\log N}{N}$.
\medskip
Let us consider the restriction of both $g^*$ and $\tilde g_N^*$ to the interval $[\tilde c, \tilde d]\equiv [x_{n-1},x_{m+1}]$, denoting the restrictions as $g_e$ and $\tilde g_{e}$. Note that $g_e$ is an extension of $g$ to $[\tilde c, \tilde d]$ on which the approximated hole is defined. In the same manner we define $h_e$ and $\tilde h_{e}$. The functions $\tilde g_{e}$ and $\tilde h_{e}$ define the lower and the upper boundaries of the approximated hole $\tilde H$, approximating $H$. Moreover, the approximation $\tilde F\sim F$ is defined by the approximated boundary functions using the procedure in Section \ref{subsub1}.
We also use here $\tilde \ell\equiv \tilde{\ell}_N\text{ and }\tilde u\equiv\tilde{u}_N$, the interpolants to $\ell\text{ and }u$, with approximation error of order $O\Big(\frac{\log N}{N}\Big)$ (using \cite{BL}).
For $x\in [a,b]\setminus [\tilde c, \tilde d]$, $F(x)=[\ell(x),u(x)]$ while $\tilde F(x)=[\tilde \ell(x),\tilde u(x)]$,
and it follows that
$$
d_{H}(\tilde{F}(x),F(x))=O\Big(\frac{\log{N}}{N}\Big),\
\text{ as } N\to\infty\ .
$$
For $x\in [\tilde c,\tilde d]$
$$
F(x)=\Big[\ell(x),g^*(x)\Big]\cup\Big[h^*(x),u(x)\Big],
$$
whereas
$$
\tilde{F}_N(x)=\Big[\tilde{\ell}(x),\tilde{g}_{e}(x)\Big]\cup\Big[\tilde{h}_{e}(x),\tilde{u}(x)\Big].
$$
Observing that the end-points of $I_1=\big[\ell(x),g^*(x)\big]$ are approximated by the corresponding end-points of $\tilde{I}_1=\big[\tilde{\ell}(x),\tilde{g}_{e}(x)\big]=\big[\tilde{\ell}_N(x),\tilde{g}^*_N(x)\big]$ with error of order $O\Big(\frac{\log N}{N}\Big)$, it is easy to conclude that
$$
d_H(I_1,\tilde{I}_1)=O\bigg(\frac{\log N}{N}\bigg).
$$
Similarly, for $I_2=\big[h^*(x),u(x)\big]$ and $\tilde{I}_2=\Big[\tilde{h}_{e}(x),\tilde{u}(x)\Big]$, we get $d_H(I_2,\tilde{I}_2)=O\Big(\frac{\log N}{N}\Big)$.
Then, by Lemma~\ref{Lemma:sets}, we conclude that
$$d_H\big(F(x),\tilde{F}(x)\big)=O\bigg(\frac{\log{N}}{N}\bigg)\ \ \text{as}\ N\to \infty.$$
Let us extend the error analysis to the case of $M$ holes.
Considering the significant metric chains approximating the holes,
we define related {\bf auxiliary functions} as follows:
For each hole $H_i$ we have its boundary functions $g_i$ and $h_i$ defined on $[c_i,d_i]$. We extend each of these functions to the left and to the right, using their values at the PCTs, as we did above for the APCTs.
For example, we extend $h_i$ to the right with a constant value $h_i(d_i)$, until it intersects a boundary of another hole $H_j$, or one of the boundary functions $u$ or $\ell$. From the intersection point we follow the boundary function of $H_j$ ($h_j$ or $g_j$) until we reach the right PCT of $H_j$. From this point we continue to the right with the PCT value, and so on till we reach $x=b$.
In case the extension intersects $u$ or $\ell$, we follow that boundary function up to $x=b$. Similarly, we extend $h_i$ to the left up to $x=a$, and do the same for $g_i$. We denote the extended functions by $h_{e,i}$ and $g_{e,i}$, and it is easy to verify that these functions are Lipschitz.
Note that for every significant metric chain there is a corresponding extended function $h_{e,i}$ or $g_{e,i}$.
The polynomial interpolating the data of a significant metric chain also interpolates the associated extended function along the parts lying on the boundary functions. In between such parts, the deviations in the interpolated values are due to the deviation between the relevant PCT and APCT values, which is of order $O(1/N)$ as $N\to \infty$, as shown in the case of one hole. As in that case, it follows that the interpolating polynomials approximate the extended functions with approximation order $O(\log(N)/N)$.
Let us review the definition of the approximation $\tilde F$.
For each hole $H_i$, there are two significant metric chains, passing through its left and right APCTs, and following the data on its lower and upper boundaries.
Define the polynomials interpolating these data as $\tilde g_{e,i}$ and $\tilde h_{e,i}$ respectively. These polynomials interpolate the extended functions $g_{e,i}$ and $h_{e,i}$ along the parts lying on the boundary functions, and, as noted above, approximate them with order $O(\log(N)/N)$.
Denote the left and the right APCTs of $H_i$ by $\tilde c_i$ and $\tilde d_i$, and the restrictions of $\tilde g_{e,i}$ and of $\tilde h_{e,i}$ to $[\tilde c_i,\tilde d_i]$ by $\tilde g_i$ and $\tilde h_i$ respectively. Also denote the restrictions of $g_{e,i}$ and of $h_{e,i}$ to $[\tilde c_i,\tilde d_i]$ by $g^*_i$ and $h^*_i$ respectively.
As in the case of one hole, it follows that as $N\to \infty$,
\begin{equation}\label{Eg}
\|g^*_i-\tilde g_i\|_{\infty,[\tilde c_i,\tilde d_i]}=O\Big(\frac{\log(N)}{N}\Big),
\end{equation}
and
\begin{equation}\label{Eh}
\|h^*_i-\tilde h_i\|_{\infty,[\tilde c_i,\tilde d_i]}=O\Big(\frac{\log(N)}{N}\Big).
\end{equation}
Viewing the presentation of $F$ by the procedure described in Section \ref{subsub1}, we note that the same SVF $F$ is obtained if we replace there the functions $\{g_i\}$ by $\{g^*_i\}$, $\{h_i\}$ by $\{h^*_i\}$, and the intervals $\{[c_i,d_i]\}$ by $\{[\tilde c_i,\tilde d_i]\}$.
Both $F$ and $\tilde F$ are defined using the procedure described in Section \ref{subsub1}: $F$ via the extended boundary functions, and $\tilde F$ with all the original boundary functions replaced by their corresponding approximants, in both cases over the intervals $\{[\tilde c_i,\tilde d_i]\}$. Denoting by $J^*(x)$ the number of these intervals containing $x$, and indexing the corresponding holes $i_1,\dots,i_{J^*(x)}$ in ascending order of their boundary values, it follows that if $J^*(x)>0$,
\begin{equation}\label{Fatxextended}
F(x)=[\ell(x),g^*_{i_1}(x)]\cup\bigcup_{j=1}^{J^*(x)-1}[h^*_{i_j}(x),g^*_{i_{j+1}}(x)]\cup[h^*_{i_{J^*(x)}}(x),u(x)],
\end{equation}
and, if $J^*(x)=0$,
$F(x)=[\ell(x),u(x)]$.
Similarly, the approximation $\tilde F(x)$ is defined using the set of approximated boundary functions: If $J^*(x)>0$,
\begin{equation}\label{Fappr}
\tilde F(x)=[\tilde \ell(x),\tilde g_{i_1}(x)]\cup\bigcup_{j=1}^{J^*(x)-1}[\tilde h_{i_j}(x),\tilde g_{i_{j+1}}(x)]\cup[\tilde h_{i_{J^*(x)}}(x),\tilde u(x)].
\end{equation}
If $J^*(x)=0$, $\tilde F(x)=[\tilde \ell(x),\tilde u(x)]$.
The index $J^*(x)$ in (\ref{Fappr}) is the same as the index in (\ref{Fatxextended}) since we use the same definition intervals $\{[\tilde c_i,\tilde d_i]\}$ for the boundary functions, and the holes are separated.
To complete the proof of Theorem \ref{Thm1}, we observe that each interval in (\ref{Fatxextended}) has a corresponding interval in (\ref{Fappr}). Using the estimates in (\ref{Eg}) and (\ref{Eh}), it follows that the Hausdorff distance between corresponding intervals is $O(\log(N)/N)$ as $N\to \infty$, and the proof is completed by using Lemma \ref{Lemma:sets}.
\end{proof}
\subsection{Numerical results}
We demonstrate the interpolation process on one SVF, denoted by $F_A$ and displayed in Figure \ref{fig:example_A2}, which is explicitly given by
\[F_A(x)=\begin{cases}
[\ell_{A},u_{A}],\quad &x\in[-1,-0.981]\cup[-0.0188,0.153]\cup[0.847,1],\\
\big[\ell_{A},h_{A_{1}}\big]\cup\big[g_{A_{1}},h_{A_{2}}\big]\cup\big[g_{A_{2}},u_{A}\big],\quad &x\in[-0.981,-0.0188],\\
\big[\ell_{A},h_{A_{3}}\big]\cup\big[g_{A_{3}},u_{A}\big],\quad &x\in[0.153,0.847].
\end{cases}
\]
where
\begin{align*}
&u_{A}=\tanh{(-x)} + 1,\quad\ell_{A}=-\tanh{(-x)}-1\\
&h_{A_{1}}=-\frac{1}{\cosh(2x + 1)},\quad g_{A_{1}}=\frac{1}{\cosh(2x + 1)}-\frac{4}{3}\\
&h_{A_{2}}=-\frac{1}{\cosh(2x + 1)}+\frac{4}{3},\quad g_{A_{2}}=\frac{1}{\cosh(2x + 1)}\\
&h_{A_{3}}=-\frac{1}{\cosh(2x - 1)}+ \frac{4}{5},\quad g_{A_{3}}=\frac{1}{\cosh(2x - 1)} - \frac{4}{5}
\end{align*}
\subsubsection{The figures}
\begin{figure}[!ht]
\centering
\subfloat[][The Set-Valued Function $F_A$]{\includegraphics[width=.4\textwidth]{Examples_A/2.png}}\quad
\subfloat[][Approximation with 10 samples]{\includegraphics[width=.4\textwidth]{Examples_A/2_10.png}}\\
\subfloat[][Approximation with 20 samples]{\includegraphics[width=.4\textwidth]{Examples_A/2_20.png}}\quad
\subfloat[][Approximation with 30 samples]{\includegraphics[width=.4\textwidth]{Examples_A/2_30.png}}
\caption{The set-valued function $F_A$ and its approximations. Each approximation is represented by vertical blue lines, drawn on the graph of the original function, which is colored in yellow.}
\label{fig:example_A2}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{Examples_A/2_error.png}
\caption{The ratio between the interpolation error and $\frac{\log{(N)}}{N}$ as a function of the number of interpolation points $N$ for the set-valued function $F_A$.}
\label{fig:error_example_A2}
\end{figure}
Figure \ref{fig:example_A2} consists of four sub-figures. The first sub-figure shows the graph of $F_A$; the last three show interpolants corresponding to different numbers of interpolation points $N$. Each interpolant is represented by vertical blue lines, drawn on the graph of the original function, which is colored in yellow. We measure the interpolation error by
$$
\text{Maximum Error}=\max_{j}{\Big\{d_{H}\big(F(\xi_{j}),\Tilde{F}(\xi_{j})\big)\Big\}},
$$
where $\{\xi_{j}\}_{j=1}^{2N}$ is a set of $2N$ equidistant points in $[a,b]$. We plot
$$
G=\frac{\text{Maximum Error}}{\frac{\log{N}}{N}},
$$
as a function of the number of the interpolation points $N$.
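Computing the Maximum Error requires the Hausdorff distance between two finite unions of intervals. A minimal sketch of this one-dimensional computation (our own implementation, assuming each union is given as a sorted list of disjoint closed intervals) is:
\begin{verbatim}
def dist_point(p, intervals):
    """Distance from a point p to a union of closed intervals."""
    return min(max(lo - p, 0.0, p - hi) for (lo, hi) in intervals)

def sup_dist(A, B):
    """sup over a in A of dist(a, B); A, B are sorted lists of
    disjoint closed intervals.  The sup is attained either at an
    end point of A or at a gap midpoint of B lying inside A."""
    cand = [p for (lo, hi) in A for p in (lo, hi)]
    mids = [(B[k][1] + B[k + 1][0]) / 2 for k in range(len(B) - 1)]
    cand += [m for m in mids if any(lo <= m <= hi for (lo, hi) in A)]
    return max(dist_point(p, B) for p in cand)

def hausdorff(A, B):
    return max(sup_dist(A, B), sup_dist(B, A))

# Example: a cross-section with a hole vs. one without.
print(hausdorff([(-1.0, -0.1), (0.1, 1.0)], [(-1.0, 1.0)]))   # 0.1
\end{verbatim}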
\subsubsection{Conclusions from the figures}
\label{conclusions_section_1}
\begin{enumerate}
\item Figure \ref{fig:example_A2} demonstrates that the interpolation error decreases as $N$ increases in accordance with the theory.
Most importantly, we observe that the maximal error occurs near the PCTs. This observation suggests that to reduce the maximal error we need to approximate the location of the PCTs more accurately. This is done in the next section.
\item As seen from Figure \ref{fig:error_example_A2}, the maximal error divided by $\frac{\log{N}}{N}$ is bounded for each $N$. This indicates that the error decays at the rate predicted by the theoretical result (\ref{eq:error_a}).
\end{enumerate}
\section{Improved Approximation of set-valued functions with $C^4$ boundaries}\label{C4boundaries}
In this section we modify the algorithm and improve the theoretical results of the previous section, achieving a better rate of decay of the interpolation error. We obtain the rate $O(|X|^{4})$, where $|X|$ is the mesh size of the partition $X$ determining the interpolation points. This is done under the assumption that the boundaries of $Graph(F)$ are $C^{4}$.
According to our conclusions in Section \ref{conclusions_section_1}, the maximal error occurs in the vicinity of the PCTs. Thus, we suggest a method for achieving a high-order approximation of the points of change of topology, which results in decreasing the overall interpolation error. Another factor contributing to the improvement in the approximation order is the use of ``not-a-knot'' cubic spline interpolation.
By modifying the algorithm of the previous section, we can separate the holes from each other. Therefore, we first discuss the case of only one hole $H$, defined by (\ref{eq:definition_of_hole}), with $C^4$ boundaries. Later on we extend the result to approximating a set-valued function $F$ whose graph has several holes with $C^4$ boundaries.
\subsection{Notions and notation}\label{Notions}
\begin{enumerate}
\item For the simplicity of the presentation, we use here the uniform partition,
\begin{equation}
X=\{x_i=a+i\Delta\}_{i=0}^N,\ \ \Delta=\frac{b-a}{N},
\end{equation}
although the algorithm and the approximation results apply for a general partition $X$.
\item A set of \textbf{Boundary Metric Chains} of a given finite set of samples $\big\{F(x_{i})\big\}_{i=0}^{N}$, at a partition $X$, is given by:
\begin{equation}
\begin{aligned}
BMC\bigg(\big\{F(x_{i})\big\}_{i=0}^{N}\bigg)=\bigg\{
\big(f_{n},...,f_{m}\big): \exists\big(f_{0},...,f_{n},...,f_{m},...,f_{N}\big)\in SMC\Big(\big\{F(x_{i})\big\}_{i=0}^{N}\Big) \\ \quad\land\quad \exists \eta\in\partial F \text{ s.t. } f_{k}=\eta(x_{k}),\quad k=n,...,m \bigg\}.
\end{aligned}
\end{equation}
\item Let the hole be defined on the interval $(c,d)$ where
$a<c<d<b$, and let $\{x_i\}_{i=n}^m$ be all points of $X$ in $(c,d)$.
\item It is easy to see that a boundary metric chain derived for the hole $H$ is a significant metric chain restricted to $[c,d]$.
\end{enumerate}
\subsection{A description of the algorithm}\label{Thealgorithm}
\begin{enumerate}
\item \textbf{Identifying the hole}:
We modify the algorithm of the previous section so it identifies the cross-sections cutting the hole $H$. Then, instead of producing the set of \textbf{Significant Metric Chains}, the algorithm produces the set of \textbf{Boundary Metric Chains}. Thus, we get the two boundary metric chains of the hole $H$, $\big\{g(x_n),g(x_{n+1}),\cdots,g(x_m)\big\}$ and $\big\{h(x_n),h(x_{n+1}),\cdots,h(x_m)\big\}$ for the lower and the upper boundaries respectively.
\item \textbf{Approximating the right and left PCTs of $H$}:
In order to find an approximation of the left PCT, we interpolate the first four values of $g$ and of $h$ at $\{x_i\}_{i=n}^{n+3}$, using two cubic polynomial interpolants, $\Tilde{g}_{L}$ and $\Tilde{h}_{L}$, approximating the lower and upper left parts of the hole boundaries. Then
we find the intersection point of these two polynomials. If such an intersection exists and lies in the interval $[x_{n}-\Delta,x_{n}]$, then it gives a better approximation, $\big(\Tilde{c},\Tilde{g}_{L}(\Tilde{c})\big)$, of the location of the left PCT of $H$. However, if $\Tilde{c} < x_{n}-\Delta$, then we take the approximation of the PCT to be the one computed by the algorithm of the previous section, at $x_n-\Delta$; see Lemma \ref{lem:location_of_pct_inverval}. A similar procedure is applied near the right PCT of $H$, using cubic interpolation at the points $\{x_i\}_{i=m-3}^m$, to yield the approximation $\big(\Tilde{d},\Tilde{g}_{R}(\Tilde{d})\big)$ of the right PCT (see the computational sketch following this list).
\item \textbf{Approximating the lower and the upper boundaries of $H$}:
In order to approximate the boundaries of $H$, we use ``not-a-knot'' cubic spline interpolation on the extended data-sets $\big\{\big(\Tilde{c},\Tilde{g}_{L}(\Tilde{c})\big),\big(\Tilde{d},\Tilde{g}_{R}(\Tilde{d})\big)\big\}\cup\big\{(x_i,g(x_i))\big\}_{i=n}^{m}$ and $\big\{\big(\Tilde{c},\Tilde{h}_{L}(\Tilde{c})\big),\big(\Tilde{d},\Tilde{h}_{R}(\Tilde{d})\big)\big\}\cup\big\{(x_i,h(x_i))\big\}_{i=n}^{m}$ to obtain the approximations of the lower and the upper boundaries respectively.
\end{enumerate}
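The following Python sketch illustrates step 2, under the assumption that the four leftmost boundary values of the hole have already been extracted; the function name and the toy boundary data are ours.
\begin{verbatim}
import numpy as np

def estimate_left_pct(x4, g4, h4, delta):
    """Step 2 sketch: fit cubics g_L, h_L through the four leftmost
    boundary values and intersect them; fall back to x4[0] - delta
    when no root lands in [x4[0] - delta, x4[0]]."""
    gL = np.polyfit(x4, g4, 3)       # cubic through (x_i, g(x_i))
    hL = np.polyfit(x4, h4, 3)
    roots = np.roots(hL - gL)        # zeros of psi = h_L - g_L
    roots = roots[np.isreal(roots)].real
    lo, hi = x4[0] - delta, x4[0]
    roots = roots[(roots >= lo) & (roots <= hi)]
    if roots.size == 0:
        return lo, np.polyval(gL, lo)
    c = roots.max()                  # the admissible root nearest x_n
    return c, np.polyval(gL, c)

# Toy hole with C^4 boundaries: g = x**2 - 1, h = 1 - x**2,
# left PCT at c = -1 where h - g = 2 - 2*x**2 vanishes.
delta = 0.05
x4 = -1.0 + delta * np.arange(1, 5)
c, y = estimate_left_pct(x4, x4**2 - 1, 1 - x4**2, delta)
print(c, y)                          # close to (-1, 0)
\end{verbatim}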
\subsection{Error Analysis for one hole}
In this sub-section, we derive an error estimate for the output of the above algorithm. First we analyze the error in approximating the locations of the PCTs,
and then we consider the approximation of the boundaries of the hole.
\subsubsection{Error estimate of the PCT approximation}
We find the approximation order of the left PCT $\big(c,g(c)\big)$ (the proof is similar for the right PCT). We estimate the Euclidean distance between the original and approximated PCT,
\begin{equation}
\label{eq:B_pct_error_expression}
E=\sqrt{|c-\Tilde{c}|^{2}+|g(c)-\Tilde{g}_{L}(\Tilde{c})|^{2}}.
\end{equation}
\begin{lemma}
\label{lem:location_of_pct_inverval}
The left PCT is in the interval $[x_n-\Delta,x_n)$ and the right PCT is in the interval $(x_m,x_m+\Delta]$.
\end{lemma}
The proof follows from the definition of $\{x_i\}_{i=n}^m$.
According to the above lemma, we focus on the interval $[x_{n}-\Delta,x_{n}]$ for approximating the left PCT. We denote by $I^{*}$ the open interval between $c$ and $\Tilde{c}$, and let $\phi=h-g$ and $\psi=\Tilde{h}_{L}-\Tilde{g}_{L}$.
\begin{proposition}
\label{prop:pct_estimate_b2}
If $\phi^{'}(c)=C>0$, then, step 2 in the above algorithm approximates the left PCT with error $E=O(\Delta^{4})$ as $\Delta\to 0$.
\end{proposition}
\begin{proof}
Finding the intersection of $\Tilde{g}_{L}$ and $\Tilde{h}_{L}$ is equivalent to finding the root of $\psi$. Note that $\psi$ is a cubic polynomial which interpolates $\phi$ at $\{x_n,x_{n+1},x_{n+2},x_{n+3}\}$, and, by definition, $\psi(\Tilde{c})=0$ and $\phi(c)=0$. Moreover, $\phi\in C^4([c,d])$, and it can be extended as a $C^4$ function on $\mathbb{R}$. Using the error formulae for polynomial interpolation it follows that
\begin{equation}\label{OD4}
|\phi(x)-\psi(x)|=O(\Delta^4)
\end{equation}
as $\Delta\to 0$,
in an $O(\Delta)$ neighborhood of $x_n$.
By the assumption that $\phi^{'}(c)=C>0$, it follows that, for a small enough $\Delta$, $\phi^{'}(x)>\tilde C>0$ in an $O(\Delta)$ neighborhood of $x_n$.
Observing that
$$\phi(c\pm \Delta)=\pm C\Delta +O(\Delta^2),$$
and in view of (\ref{OD4}), it follows that for a small enough $\Delta$, $\psi$ changes sign in $[c-\Delta,c+\Delta]$, hence
$\tilde c\in [c-\Delta,c+\Delta]$.
To estimate $|c-\tilde c|$ we employ the Mean Value Theorem,
\begin{equation}\label{MVT}
\frac{|\phi(c)-\phi(\tilde c)|}{|c-\Tilde{c}|}=|\phi'(\xi)|>\tilde C.
\end{equation}
Since $\phi(c)=0$, it follows that
$|c-\tilde c|<\frac{|\phi(\tilde c)|}{\tilde C}.$
Using (\ref{OD4}), $
|c-\tilde c|<\frac{|\psi(\tilde c)+O(\Delta^4)|}{\tilde C},$ and recalling that $\psi(\tilde c)=0$,
\begin{equation}\label{cminusct}
|c-\tilde c|=O(\Delta^4).
\end{equation}
To evaluate the error in the $y$-coordinate of the approximated PCT, we estimate $|g(c)-\Tilde{g}_{L}(\Tilde{c})|$.
Using the Mean Value Theorem, and since $\Tilde{g}_{L}'$ is bounded near $c$, we obtain
\begin{equation}
\label{eq:pct_y_error_b}
|g(c)-\Tilde{g}_{L}(\Tilde{c})|=|\Tilde{g}_{L}(c)-\Tilde{g}_{L}(\Tilde{c})+O(\Delta^4)|\leq |\Tilde{g}_{L}'(\xi)||c-\Tilde{c}|+O(\Delta^4)=O(\Delta^4),
\end{equation}
where $\xi$ is between $c$ and $\Tilde{c}$.
Combining the last two error estimates, we conclude for the error $E$ in approximating the location of the left PCT that $$E=O(\Delta^4),$$
as $\Delta\to 0$.
\end{proof}
\subsubsection{Estimating the approximation error}\hfill
\medskip
As defined above, we approximate the lower and upper boundaries of $H$, using ``not-a-knot'' cubic spline interpolation on the data-sets $$\big\{\big(\Tilde{c},\Tilde{g}_{L}(\Tilde{c})\big),\big(\Tilde{d},\Tilde{g}_{R}(\Tilde{d})\big)\big\}\cup\big\{(x_i,g(x_i))\big\}_{i=n}^{m}$$
and
$$\big\{\big(\Tilde{c},\Tilde{h}_{L}(\Tilde{c})\big),\big(\Tilde{d},\Tilde{h}_{R}(\Tilde{d})\big)\big\}\cup\big\{(x_i,h(x_i))\big\}_{i=n}^{m}.$$ We denote the resulting approximations to $g$ and $h$ on the interval $[\tilde c, \tilde d]$ by $\tilde g$ and $\tilde h$ respectively. This is performed for each hole in $Graph(F)$. We also use ``not-a-knot'' cubic spline interpolation to approximate the functions $\ell$ and $u$ describing the lower and upper boundaries of $Graph(F)$, denoting the corresponding approximations by $\tilde \ell$ and $\tilde u$. The approximation of the boundaries of $Graph(F)$ induces the definition of the approximation $\tilde F$ of the set-valued function $F$.
For simplicity of presentation we introduce the definition of $\tilde F$ and the error analysis for the case of one hole. A full error analysis for the case of several holes is presented in Section \ref{Mholes}. The method of approximating the PCTs and the boundaries of a hole with H\"older type singularities (Section \ref{Holderboundaries}) is different, but the method of extending the approximation results to the case of several holes applies there as well.
\begin{definition}\label{Def2}{\bf The approximation $\tilde F(x)$.}
\begin{equation}\label{Def3eq}
\tilde F(x)=\begin{cases}
[\tilde \ell(x),\tilde u(x)], &x\in[a,\tilde c]\cup[\tilde d,b],\\
\big[\tilde \ell(x),\tilde g(x)\big]\cup\big[\tilde h(x),\tilde u(x)\big], &x\in[\tilde c, \tilde d].
\end{cases}
\end{equation}
\end{definition}
We recall that for $f\in C^4([\alpha,\beta])$ the ``not-a-knot'' cubic spline interpolant $s(f;x)$ satisfies the following error estimate (see \cite{numerical_analysis}, Chapter 2.3.4):
\begin{equation}\label{notaknot}
\norm{f(x)-s(f;x)}_{\infty,[\alpha,\beta]}\leq C\Delta^{4} \norm{f^{(4)}}_{\infty,[\alpha,\beta]},
\end{equation}
where $\Delta$ is the maximal distance between consecutive knots, and $C$ is a constant independent of $f$ and $\Delta$.
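In a practical implementation, the spline of step 3 can be computed by a library routine; e.g., the following sketch uses \texttt{scipy.interpolate.CubicSpline} with \texttt{bc\_type='not-a-knot'}, with hypothetical stand-ins for the boundary data and the approximated PCT abscissae (in the algorithm the end values would be $\Tilde{g}_{L}(\Tilde{c})$ and $\Tilde{g}_{R}(\Tilde{d})$; here exact values are used for simplicity).
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

# Step 3 sketch with hypothetical stand-ins: c_t, d_t play the roles
# of the approximated PCT abscissae, and g is a stand-in C^4 boundary.
c_t, d_t = -0.98, 0.98
xs = np.linspace(-0.9, 0.9, 10)          # sample points inside (c, d)
g = np.cos
knots = np.concatenate(([c_t], xs, [d_t]))
vals = np.concatenate(([g(c_t)], g(xs), [g(d_t)]))
g_spline = CubicSpline(knots, vals, bc_type='not-a-knot')

t = np.linspace(c_t, d_t, 1001)
print(np.max(np.abs(g_spline(t) - g(t))))   # O(Delta^4) for C^4 data
\end{verbatim}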
\begin{proposition}
\label{prop:error_estimate_b}
For $x\in [a,b]$, $d_H(F(x),\Tilde{F}(x))=O(\Delta^{4})$ as $\Delta\to 0$.
\end{proposition}
\begin{proof}
Case 1: If $c < \tilde c$ and $x\in [c,\tilde c]$, then $F(x)$ has a 1-dimensional hole of length $|\phi(x)|$, while $\tilde F(x)$ has none. Since $\phi(c)=0$ and $|\phi'|$ is bounded near $c$, it follows from (\ref{cminusct}) that $\phi(x)=O(\Delta^4)$. By (\ref{notaknot}) the approximations to $\ell$ and to $u$ are also of accuracy $O(\Delta^4)$. Altogether,
$$d_H(F(x),\Tilde{F}(x))=\max\{|\ell(x)-\tilde \ell(x)|,|u(x)-\tilde u(x)|, 0.5|\phi(x)|\}=O(\Delta^4).$$
Case 2: If $\tilde c < c$ and $x\in [\tilde c,c]$, then $\tilde F(x)$ has a 1-dimensional hole of length $|\psi(x)|$, while $F(x)$ has none. Since $\psi(\tilde c)=0$ and $|\psi'|$ is bounded, it follows as above that
$$d_H(F(x),\Tilde{F}(x))=\max\{|\ell(x)-\tilde \ell(x)|,|u(x)-\tilde u(x)|, 0.5|\psi(x)|\}=O(\Delta^4).$$
Case 3: If $x<\min\{c,\tilde c\}$ or $x>\max\{d,\tilde d\}$, then both $F(x)$ and $\tilde F(x)$ are intervals, and
$$d_H(F(x),\Tilde{F}(x))=\max\{|\ell(x)-\tilde \ell(x)|,|u(x)-\tilde u(x)|\}=O(\Delta^4).$$
If $\max\{c,\tilde c\}<x<\min\{d,\tilde d\}$, then both $F(x)$ and $\tilde F(x)$ have a hole, and by Lemma \ref{Lemma:sets} and (\ref{notaknot}),
$$d_H(F(x),\Tilde{F}(x))\le\max\{|\ell(x)-\tilde \ell(x)|,|g(x)-\tilde g(x)|,|h(x)-\tilde h(x)|,|u(x)-\tilde u(x)|\}=O(\Delta^4).$$
The approximation near the right PCT of the hole is treated in the same manner as Cases 1 and 2.
\iffalse
We find the error estimate for the left side of the hole (it is applied the same for the right side). First, we start with the error estimate of the approximation of the hole near the left PCT, and denote it by $e_{pct}(F,\Tilde{F})$. We observe that for each $x$ between $c$ and $\Tilde{c}$, $F(x)$ and $\Tilde{F}(x)$ have different topology, i.e. one has a $1$-dimensional hole whereas the other doe not have. This difference is due to the error of approximating the PCT point. In this case, $haus\big(F(x),\Tilde{F}(x)\big)$ equals the half length of the $1$-dimensional hole.
As depicted in figure \ref{fig:error_pct}
\begin{equation}
\label{eq:e_pct_definition}
e_{pct}(F,\Tilde{F})=
\begin{cases}
\frac{1}{2}\phi(\Tilde{c}), & c>\Tilde{c} \\
\frac{1}{2}\psi(c), & c\leq\Tilde{c}
\end{cases}
\end{equation}
By (\ref{eq:pct_x_error_b})
\begin{equation}
\label{eq:phi_of_tilde_c_estimate_and_psi_of_c}
\phi(\Tilde{c})=O(\Delta^{4}), \qquad
\psi(c)=O(\Delta^{4}),
\end{equation}
and combining (\ref{eq:phi_of_tilde_c_estimate_and_psi_of_c}) and (\ref{eq:e_pct_definition}), we obtain
\begin{equation}
\label{eq:error_estimate_near_pct_b}
e_{pct}(F,\Tilde{F})=O(\Delta^{4}).
\end{equation}
Next, we analyse the error estimate of the approximation of the hole along the upper and lower boundaries, and we denoted it by $e_{bnd}(F,\Tilde{F})$. In contrast with the case near the PCTs, $F(x)$ and $\Tilde{F}(x)$ have the same topology, i.e. they both have a $1$-dimensional hole.
We recall that for $f\in C^4$ on $[a,b]$. If $s(f;x)$ is a "not-a-knot" cubic spline interpolation operator, then, for every $x\in[a,b]$ (\cite{numerical_analysis},Chapter 2.3.4):
\begin{equation}
\label{eq:cubic_spline_error_bound}
\norm{f^{(r)}(x)-s^{(r)}(f;x)}_{\infty}\leq C_{r}\big|h\big|^{4-r} \norm{f^{(4)}}_{\infty},\quad r=0,1,2,3,4
\end{equation}
where $C_{r}$ is a constant independent of $f$ and $h$.\\
Since the approximation is based on cubic spline interpolation and since $f,g\in C^{4}$, using the error estimate above
\begin{equation}
\label{eq:error_estimate_along_boundaries_b}
e_{bnd}(F,\Tilde{F})\leq C_{0}|\Delta|^{4}\max{\bigg\{\norm{g^{(4)}}_{\infty},\norm{h^{(4)}}_{\infty}\bigg\}}=O(\Delta^4)
\end{equation}
Finally, using (\ref{eq:error_estimate_near_pct_b}) and (\ref{eq:error_estimate_along_boundaries_b}), the total error estimate of the approximation of the hole is
\begin{equation}
\label{eq:total_error_estimate_b}
e(F,\Tilde{F})=\max{\big\{e_{pct}(F,\Tilde{F}), e_{bnd}(F,\Tilde{F})\big\}}=O(\Delta^{4}).
\end{equation}\fi
\end{proof}
\subsection{Numerical results}
We demonstrate the process of approximating the left PCT, as well as the decay rate of the interpolation error, on one SVF $F$, displayed in Figure \ref{fig:example_B1} and given explicitly by
\[ F(x)=\begin{cases}
\big[-e^{x},e^{x}\big],\qquad &x\in[-1,1]\setminus[-x_a,x_a],\\
\Big[-e^{x},-\frac{\cos{(3x)}}{3}\Big]\bigcup\Big[\frac{\cos{(2x)}}{2}, e^{x}\Big],\qquad &x\in[-x_a,x_a],
\end{cases}
\]
where $-x_a$ and $x_a$ are the roots of $f(x)=\frac{\cos{(2x)}}{2}+\frac{\cos{(3x)}}{3}$.
\subsubsection{The figures}
\begin{figure}[!ht]
\centering
\subfloat[][The Set-Valued Function $F$]{\includegraphics[width=.4\textwidth]{Examples_B/example_1.png}}\quad
\subfloat[][Approximation with 10 samples]{\includegraphics[width=.4\textwidth]{Examples_B/example_1_10.png}}\\
\subfloat[][Approximation with 14 samples]{\includegraphics[width=.4\textwidth]{Examples_B/example_1_14.png}}\quad
\subfloat[][Approximation with 18 samples]{\includegraphics[width=.4\textwidth]{Examples_B/example_1_18.png}}
\caption{The set-valued function $F$ and its approximations, zoomed in on the red rectangle in (a). Each approximation is represented by vertical blue lines, drawn on the graph of the original function, which is colored in yellow.}
\label{fig:example_B1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{Examples_B/maximum_error_example_1.png}
\caption{The interpolation error divided by $\Delta^{4}$, as a function of the number of interpolation points ($N$) for the set-valued function $F$. Here, $\Delta=\frac{b-a}{N-1}$.}
\label{fig:maximum_error_example_B1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{Examples_B/pct_error_example_1.png}
\caption{The error in the approximation of the left PCT divided by $\Delta^{4}$, as a function of the number of interpolation points ($N$) for the set-valued function $F$. Here, $\Delta=\frac{b-a}{N-1}$.}
\label{fig:pct_error_example_B1}
\end{figure}
Figure \ref{fig:example_B1}, corresponding to the above SVF, consists of four sub-figures. The first sub-figure shows the graph of the original function, with the close-up view area near the left PCT bounded by a red rectangle. The last three sub-figures show a zoomed-in view of three interpolants corresponding to different numbers of interpolation points $N$. Each interpolant is represented by vertical blue lines, drawn on the graph of the original function, which is colored in yellow.
We show the rate of decay of the interpolation error for the above SVF in Figure \ref{fig:maximum_error_example_B1}. The error is measured by
$$
\text{Maximum Error}=\max_{j}{\Big\{d_{H}\big(F(\xi_{j}),\Tilde{F}(\xi_{j})\big)\Big\}},
$$
where $\{\xi_{j}\}_{j=1}^{400}$ is a set of $400$ equidistant points in $[a,b]$. We plot
$$
G=\frac{\text{Maximum Error}}{\Delta^{4}},
$$
with $\Delta=\frac{b-a}{N-1}$, as a function of the number of the interpolation points $N$.
Finally, we show the rate of decay of the error of approximating the left PCT of the above SVF in Figure \ref{fig:pct_error_example_B1}. We plot the value of
$$
\tilde{E}=\frac{E}{\Delta^{4}},
$$
as a function of the number of the interpolation points $N$. Here, $E$ is the error of the approximation of the left PCT as in (\ref{eq:B_pct_error_expression}).
\subsubsection{Conclusions from the figures}
\begin{enumerate}
\item Figure \ref{fig:example_B1} demonstrates that the PCT approximation error decreases as $N$ increases in accordance with the theoretical rate proved in Proposition \ref{prop:pct_estimate_b2}.
\item As seen from Figures \ref{fig:maximum_error_example_B1} and \ref{fig:pct_error_example_B1}, the maximal error and the PCT approximation error, both divided by $\Delta^{4}$, remain bounded as $N$ increases. This indicates that the errors decay at the rates predicted by Propositions \ref{prop:pct_estimate_b2} and \ref{prop:error_estimate_b}.
\end{enumerate}
\subsection{Error analysis - $M$ holes}\label{Mholes}
For $F\in\mathcal{F}([a,b],M)$ (defined in Section \ref{pre_set_svf}),
the SVF approximation $\tilde F(x)$ is defined as follows:
For each hole $H_i$ we apply the approximation algorithm described in Section \ref{Thealgorithm} for the case of one hole. The outcome includes approximations $\tilde h_i\sim h_i$, $\tilde g_i\sim g_i$ on an interval $[\tilde c_i,\tilde d_i]$ approximating the interval $[c_i,d_i]$. In order to compare $h_i$ with $\tilde h_i$ we extend each of them to a larger interval $[c_{e,i},d_{e,i}]$, $c_{e,i}=c_i-\epsilon$, $d_{e,i}=d_i+\epsilon$. The extensions are defined as in equations (\ref{he1}), (\ref{tildehe1}), and they are denoted $h_{e,i}$ and $\tilde h_{e,i}$. Analogously, we define extensions $g_{e,i}$ and $\tilde g_{e,i}$ of $g_i$ and $\tilde g_i$.
\begin{equation}\label{he1}
h_{e,i}(x) =
\begin{cases}
h_i(c_i) &\quad x\in [c_{e,i},c_i)\\
h_i(x) &\quad x\in [c_i,d_i]\\
h_i(d_i) &\quad x\in (d_i,d_{e,i}]\\
\end{cases},
\end{equation}
\begin{equation}\label{tildehe1}
\tilde h_{e,i}(x) =
\begin{cases}
\tilde h_i(\tilde c_i) &\quad x\in [c_{e,i},\tilde c_i)\\
\tilde h_i(x) &\quad x\in [\tilde c_i,\tilde d_i]\\
\tilde h_i(\tilde d_i) &\quad x\in (\tilde d_i,d_{e,i}]\\
\end{cases}.
\end{equation}
By the assumptions on the holes, there exists an $\epsilon$ such that the extensions do not cross the boundaries of $Graph(F)$. Moreover, for a small enough $\Delta$ the interval $[c_{e,i},d_{e,i}]$ contains both intervals $[\tilde c_i,\tilde d_i]$ and $[c_i,d_i]$.
Similarly to the expression for $F$ in (\ref{Fatx0}),
$F$ may be re-formulated using the above extended boundary functions, as follows:
For $x\in [a,b]$ we identify all the intervals $\{[c_{e,i},d_{e,i}]\}_{i\in I(x)} $ containing $x$. If $\#I(x)=J(x)>0$, we order the corresponding boundary values $\{g_{e,i}(x)\}$, $i\in I(x)$, in ascending order, and index the relevant holes according to this ordering $\{H_{i_j}\}_{j=1}^{J(x)}.$ The set $F(x)$ is then re-expressed as
\begin{equation}\label{Fatxe4}
F(x)=[\ell(x),g_{e,i_1}(x)]\cup\bigcup_{j=1}^{J(x)-1}[h_{e,i_j}(x),g_{e,i_{j+1}}(x)]\cup[h_{e,i_{J(x)}}(x),u(x)].
\end{equation}
The approximation $\tilde F(x)$ is similarly defined as
\begin{equation}\label{tildeFatxe4}
\tilde F(x)=[\tilde \ell(x),\tilde g_{e,i_1}(x)]\cup\bigcup_{j=1}^{J(x)-1}[\tilde h_{e,i_j}(x),\tilde g_{e,i_{j+1}}(x)]\cup[\tilde h_{e,i_{J(x)}}(x),\tilde u(x)].
\end{equation}
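In code, assembling a cross-section of the form (\ref{Fatxe4}) from the extended boundary data reduces to sorting the active holes and alternating between hole bottoms and tops. The following Python sketch assumes each hole is given by its extended interval and extended boundary functions; degenerate pieces, where $g_{e,i}=h_{e,i}$ on an extension, are harmless.
\begin{verbatim}
def cross_section(x, lower, upper, holes):
    # Return F(x) as a list of closed intervals; `holes` contains tuples
    # (c_e, d_e, g_e, h_e) with extended lower/upper boundary functions.
    active = sorted((g_e(x), h_e(x))
                    for c_e, d_e, g_e, h_e in holes if c_e <= x <= d_e)
    intervals, level = [], lower(x)
    for g_x, h_x in active:             # holes ordered by ascending g_e(x)
        intervals.append((level, g_x))  # solid piece below the next hole
        level = h_x                     # resume above the hole's top
    intervals.append((level, upper(x)))
    return intervals
\end{verbatim}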
\begin{theorem}\label{TheoremC4}
Let $F$ be an SVF such that $Graph(F)$ has $M$ separable holes with $C^{4}$ boundaries. Defining the approximation by (\ref{tildeFatxe4}), for a small enough $\Delta$,
\begin{equation}
d_H(F(x),\tilde F(x))\le C\Delta^4.
\end{equation}
\end{theorem}
\begin{proof}
Each interval in (\ref{Fatxe4}) has a corresponding interval in (\ref{tildeFatxe4}), and by Proposition \ref{prop:error_estimate_b} it is clear that the Hausdorff distance between corresponding intervals is of order
$O(\Delta^4)$ as $\Delta\to 0$.
The proof is completed using the result in Lemma \ref{Lemma:sets}.
\end{proof}
\section{Set-Valued Functions with Holes of H\"older Type}\label{Holderboundaries}
\subsection{Introduction}
So far we have dealt with SVFs of Lipschitz type, i.e., with holes defined by Lipschitz continuous boundary functions (see Theorem \ref{th:lip_svf}). In this section, we extend
our algorithm to deal with SVFs whose holes have upper and lower boundaries of H\"older type with H\"older exponent $\frac{1}{2}$ at both PCTs. Here again, we present the computation procedure and the error analysis for the case of an SVF with one hole defined on the interval $[c,d]\subset[a,b]$. Finally, we show how to deal with the case of several holes. We assume the hole is defined as the interior of a closed boundary curve $\Gamma\in C^{2k},\ k\in \mathbb{N}$, such that every vertical cross-section at $x\in(c,d)$ cuts the curve at two points as in Figure \ref{fig:example_C}. We further assume that $\Gamma$ has non-zero curvature at both PCTs. As before, we let the upper and lower boundaries of the hole be defined by the functions $h$ and $g$ respectively.
In the next lemma we examine the behavior of $h$ and $g$ near the left PCT, namely at $(c,h(c))$.
\begin{definition}{\bf Local series approximation}
\hfill
\medskip
Let $\{\alpha_j\}_{j=0}^J$, $\alpha_0\ge 0$, be an increasing real sequence. We say that $\sum_{j=0}^Ja_j(x-c)^{\alpha_j}$ is a local series approximation of $f(x)$ at $x=c$ if, for every $0\le k\le J$,
$$|f(x)-\sum_{j=0}^ka_j(x-c)^{\alpha_j}|=o(|x-c|^{\alpha_{k}}),\ \ \text{as}\ x\to c.$$
\end{definition}
The local series approximation concept is compatible with the concept of an asymptotic expansion \cite{Dingle}.
\begin{lemma}{\bf Local series approximations of $h$ and $g$}
\hfill
\medskip
Consider a hole with a $C^{2k}$ boundary $\Gamma$ and assume $\Gamma$ has non-zero curvature at the PCTs of the hole. Then $h$ and $g$ have local series approximations in powers of $(x-c)^{\frac{1}{2}}$ at $x=c$, and, analogously, in powers of $|x-d|^{\frac{1}{2}}$ at $x=d$.
\end{lemma}
\begin{proof}
W.l.o.g., we assume $c=0$ and $h(c)=0$.
Reflecting $\Gamma$ about the line $y=x$, and using the non-zero curvature assumption at the PCT, it follows that the inverse function $h^{-1}$ has the local power series expansion:
$$h^{-1}(y)\sim \sum_{j=2}^{2k} a_jy^j, \ a_2> 0.$$
Let $\psi(y)=\sqrt{h^{-1}(y)}$, then $\psi(0)=0$ and $\psi'(0)=\sqrt{a_{2}}>0$. By the series reversion formula (see Abramowitz and Stegun 1972, p. 16 \cite{AbStegun}), $\psi(y)$ is invertible in a neighborhood of $y=0$, and it has a local power series expansion of the form
$$\psi^{-1}(x)=\sum_{j=1}^{2k} b_jx^j .$$
Using the relation $(u\circ v)^{-1}=v^{-1}\circ u^{-1}$, with $u(t)=\sqrt{t}$ and $v=h^{-1}$, we obtain
$$\psi^{-1}(x)=h(x^2),$$
which implies, for $x\ge 0$,
$$h(x)=\psi^{-1}(\sqrt{x})=\sum_{j=1}^{2k} b_j x^{j/2} .$$
\end{proof}
Altogether, it follows that $h,g\in C^{2k}(c,d)$, with local expansions near $c$ and $d$ of the form
\begin{equation}\label{sqrtexp1}
h(x)\sim h_c^{[2k]}(x)=\sum_{j=0}^{2k} c_j (x-c)^{j/2}, \quad \text{ as } x \to c^{+},
\end{equation}
and
\begin{equation}\label{sqrtexp2}
h(x)\sim h_d^{[2k]}(x)=\sum_{j=0}^{2k} d_j |x-d|^{j/2}, \quad \text{ as } x \to d^{-}.
\end{equation}
\begin{corollary}\label{Coro51}
\begin{equation}\label{c2kclosedcd}
\hat h^{[2k]}\equiv h- h_c^{[2k]}-h_d^{[2k]}\in C^{k}[c,d].
\end{equation}
\end{corollary}
A similar local expansion holds for $g$.
For example, circular or elliptic holes fulfill these conditions.
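As a sanity check of the lemma, the half-power expansion can be produced symbolically for a circular hole; the sketch below (using SymPy, with the left PCT placed at the origin) exhibits only half-integer powers of $x-c$, as asserted.
\begin{verbatim}
import sympy as sp

s = sp.symbols('s', positive=True)   # s = sqrt(x - c), with c = 0
# Unit circle centred at (1, 0): the upper boundary of the hole is
# h(x) = sqrt(2x - x^2) = sqrt(x)*sqrt(2 - x), hence h(s^2) = s*sqrt(2 - s^2).
print(sp.series(s * sp.sqrt(2 - s**2), s, 0, 6))
# sqrt(2)*s - sqrt(2)*s**3/4 - sqrt(2)*s**5/32 + O(s**6)
\end{verbatim}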
In this section, we develop an algorithm for constructing high order approximations to SVFs with holes of the above type. We remark here that the algorithm suggested in \cite{Levin1986} fails in approximating such holes in the neighborhood of the PCTs.
{\bf The approximation procedure} starts with deriving a high order approximation to the PCTs.
Next, this information is used for computing local approximations of the form (\ref{sqrtexp1}) for the upper and lower boundary functions $g$ and $h$ near the PCTs. Afterwards, in view of (\ref{c2kclosedcd}), we subtract these local approximations in order to regularize the given data of $h$ and $g$. Finally, a spline approximation is applied to the regularized data, and the final approximation is obtained by adding those previously subtracted local approximations at the PCTs.
\begin{figure}[ht]
\centering{{\includegraphics[width=5cm]{lip.png} }}
\qquad{{\includegraphics[width=5cm]{Non_lip.png} }}
\caption{An illustration of two SVFs, one with a H\"older type hole (right) and one with a Lipschitz type hole (left).}%
\label{fig:example_C}
\end{figure}
\subsection{A Description of The Algorithm}\label{Algorithm5}
We consider a set-valued function $F$ with only one hole $H$, and samples given at equally spaced points $X=\big\{{x_i}\big\}_{i=0}^{N}$, where $x_{i}=a+i\Delta,\ \Delta=\frac{b-a}{N}$. As in the previous section, we let $\{x_i\}_{i=n}^{m}$ be all the sample points in $(c,d)$. We assume that $m-n > 2k$.
The first two steps of the algorithm are identical to the corresponding steps in the description of the algorithm of the previous section.
\begin{enumerate}
\item \textbf{Approximating the functions $u$ and $\ell$ describing the lower and the upper boundaries of the graph of $F$}
As in the previous section.
\item \textbf{Identifying the hole}
As in the previous section.
\item \textbf{Approximating the right and left PCTs of $H$}:
In order to approximate the location of the left PCT, we swap the $x$-coordinate and the $y$-coordinate of the data points near the PCT by reflecting the graph of the hole across the line $y=x$ (see Figure \ref{fig:reflection}). Afterwards, we find a polynomial $p_{2k-1}(y)$ which interpolates the points
\begin{align*}
\Big\{\big(g(x_{i}),x_{i}\big)\Big\}_{i=n}^{n+k-1}\bigcup\Big\{\big(h(x_{i}),x_{i}\big)\Big\}_{i=n}^{n+k-1}.
\end{align*}
\begin{figure}[ht]
\centering
\subfloat[\centering Left side of a hole]{{\includegraphics[width=5cm]{before_reflection.png}}}
\qquad
\subfloat[\centering Left side of a hole after the reflection across $y=x$]{{\includegraphics[width=5cm]{after_reflection.png}}}
\caption{An illustration of swapping between the $x-$coordinate and $y-$coordinate of the data points of the left side of a hole, which is equivalent to reflection across the line $y=x$.}%
\label{fig:reflection}
\end{figure}
Next, we find the minimum point $\big(\Tilde{y}_{p},p_{2k-1}(\Tilde{y}_{p})\big)$ of $p_{2k-1}(y)$ over the interval $[g(x_{n+k-1}),h(x_{n+k-1})]$. Finally, we define the approximation of the left PCT $\big(c,g(c)\big)$ to be $(p_x,p_y)\equiv \big(p_{2k-1}(\Tilde{y}_{p}),\Tilde{y}_{p}\big)$. For the right PCT we apply a similar procedure, defining its approximation as $(q_x,q_y)$, using the maximum point of an analogous polynomial interpolating the reflected data near the right PCT. A numerical sketch of this step and of Step 4 is given after the enumeration.
\item \textbf{Approximating the lower and upper boundaries of the hole}:
In this step, we build a function that approximates the upper boundary function $h(x)$ (a similar procedure is applied for the lower boundary). The function $h(x)$, $x\in [c,d]$, is known to be $C^{2k}$ in $(c,d)$, with singularities at $c$ and $d$ of the form (\ref{sqrtexp1}) and (\ref{sqrtexp2}). The approximation procedure suggested here is based upon the observation (\ref{c2kclosedcd}), implying that $\hat h^{[2k]}$ can be efficiently approximated using spline interpolation over $[c,d]$.
In our problem we do not know the expansions $h_c^{[2k]}$ and $h_d^{[2k]}$; in particular, we do not know $c$ and $d$. Instead,
we find two approximations $P\sim h_c^{[2k]}$ and
$Q\sim h_d^{[2k]}$. Following the singular behavior in (\ref{sqrtexp1}) and (\ref{sqrtexp2}), we look for $P$ and $Q$ of the form
\begin{equation}\label{sqrtexp3}
P(x)=\sum_{j=0}^{r} p_j (x-p_x)^{j/2},
\end{equation}
\begin{equation}\label{sqrtexp4}
Q(x)=\sum_{j=0}^{r} q_j |x-q_x|^{j/2},
\end{equation}
where $P$ interpolates the set of points
\begin{equation}
\label{eq:first_data_set}
\Big\{\big(p_{x},p_{y}\big)\Big\}\bigcup\Big\{\big(x_{j},h(x_{j})\big)\Big\}_{j=n}^{n+r-1},
\end{equation}
and $Q$ interpolates the set of points
\begin{equation}
\label{eq:second_data_set}
\Big\{\big(x_{j},h(x_{j})\big)\Big\}_{j=m-r+1}^{m}\bigcup\Big\{\big(q_{x},q_{y}\big)\Big\}.
\end{equation}
Here $r$ is a free parameter to be determined according to the desired approximation order.
Afterwards, we compute a ``not-a-knot'' cubic spline $S(x)$ interpolating the following set of data points
\begin{equation}\label{Rdata}
\quad\Big\{\big( p_{x},p_{y}-R(p_{x})\big)\Big\}
\bigcup\Big\{\big(x_{i},h(x_{i})-R(x_{i})\big)\Big\}_{i=n}^{m}
\bigcup\Big\{\big( q_{x},q_{y}-R(q_{x})\big)\Big\},
\end{equation}
where $R(x)=P(x)+Q(x)$. As explained above, the subtraction of $P$ and $Q$ is intended to eliminate the singular behavior of $h$ near the PCTs.
Finally, the upper boundary of the hole is approximated by the function
\begin{equation}
U(x)=S(x)+P(x)+Q(x).
\end{equation}
In a similar way we compute the approximation of the lower boundary of the hole.
\end{enumerate}
These approximations, together with the approximation of the upper and the lower boundaries of the graph of $F$, define the final approximation $\tilde F$ of the set-valued function $F$.
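To make Steps 3 and 4 concrete, the following Python sketch implements the left-side computations for sampled boundary data. This is an illustration only: the function names and the NumPy/SciPy choices are ours, the right PCT is handled analogously, and SciPy's \texttt{CubicSpline} applies the not-a-knot boundary condition by default.
\begin{verbatim}
import numpy as np
from numpy.polynomial import Polynomial
from scipy.interpolate import CubicSpline

def left_pct(xs, gs, hs, k):
    # Step 3 (left side): reflect the data across y = x, interpolate by a
    # polynomial of degree 2k-1 in y, and minimise it over the admissible
    # interval; xs, gs, hs hold the samples at x_n, ..., x_{n+k-1}.
    p = Polynomial.fit(np.concatenate([gs[:k], hs[:k]]),
                       np.concatenate([xs[:k], xs[:k]]), deg=2 * k - 1)
    cand = [r.real for r in p.deriv().roots()
            if abs(r.imag) < 1e-10 and gs[k - 1] <= r.real <= hs[k - 1]]
    cand += [gs[k - 1], hs[k - 1]]
    y_p = min(cand, key=lambda y: p(y))
    return p(y_p), y_p                          # (p_x, p_y)

def singular_interp(p_x, p_y, ts, ys, r):
    # Step 4: interpolate {(p_x, p_y)} U {(t_i, y_i)} in the basis
    # {|x - p_x|^(j/2)}; a Vandermonde system in sqrt(|x - p_x|).
    s = np.sqrt(np.abs(np.concatenate([[p_x], ts]) - p_x))
    coef = np.linalg.solve(s[:, None] ** np.arange(r + 1),
                           np.concatenate([[p_y], ys]))
    return lambda x: (np.sqrt(np.abs(np.asarray(x, float) - p_x))[..., None]
                      ** np.arange(r + 1)) @ coef

# Regularize and spline (Step 4, continued), with P, Q from singular_interp:
#   R = lambda x: P(x) + Q(x)
#   S = CubicSpline(nodes, h_vals - R(nodes))   # not-a-knot by default
#   U = lambda x: S(x) + R(x)
\end{verbatim}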
\subsection{Error Analysis}\label{EA}
The error analysis is composed of four steps:
\begin{itemize}
\item Estimating the error in approximating the location of the PCT $(c,h(c))$.
\item Bounding the error involved in the approximations $P\sim h_c^{[2k]}$ and $Q\sim h_d^{[2k]}$.
\item Estimating the error in approximating the values $\{\hat h^{[2k]}(x_i)\}$ by $\{h(x_i)-R(x_i)\}$.
\item Bounding the spline approximation error and combining all error estimates.
\end{itemize}
\subsubsection{ Error Analysis of the Approximation of the PCTs}\label{PCTappr}
\hfill
\medskip
In this section we find the approximation order in approximating the left PCT $\big(c,g(c)\big)$. A similar result applies for the approximation of the right PCT.
The polynomial $p_{2k-1}$ defined in Step 3 of the algorithm uses the data at $x_i=a+i\Delta$, $n\le i \le n+k-1$. Since $(c,g(c))$ is the left PCT, it follows that for a small enough $\Delta$, $g$ and $h$ are invertible over the interval $[c,x_{n+k-1}]$.
We define the following function
\begin{align*}
\phi(y) =
\begin{cases}
g^{-1}(y), &\quad y\in\big[g(x_{n+k-1}), g(c)\big)\\
h^{-1}(y), &\quad y\in\big[h(c),h(x_{n+k-1})\big]\\
\end{cases},
\end{align*}
recalling $g(c)=h(c)$. We observe that $\phi\in C^{2k}[g(x_{n+k-1}),h(x_{n+k-1})]$ since $\Gamma\in C^{2k}$. By definition, the polynomial $p_{2k-1}(y)$ interpolates $\phi(y)$ over the interval $\big[g(x_{n+k-1}),h(x_{n+k-1})\big]$. Since $h$ and $g$ have the local expansions (\ref{sqrtexp1}) and (\ref{sqrtexp2}), the maximal distance between the interpolation points for $p_{2k-1}(y)$ can be estimated as
$\Delta_{y}=O\big(\sqrt{\Delta}\big)$ as $\Delta\to 0$.
We note that by the definition of the inverse function, we have $\phi(y_{p})=c$ where $y_p=h(c)=g(c)$. Thus, we denote the actual left PCT by $\big(\phi(y_{p}),y_{p}\big)$. Recall that the approximated left PCT is $(p_x,p_y)\equiv \big(p_{2k-1}(\Tilde{y}_{p}),\Tilde{y}_{p}\big)$. In the following we estimate the Euclidean distance between the original and approximated PCT,
\begin{equation}
\label{eq:error_pct_approx_c}
E_{k}=\sqrt{\big(\phi(y_{p})-p_{2k-1}(\Tilde{y}_{p})\big)^{2}+\big(y_{p}-\Tilde{y}_{p}\big)^{2}}.
\end{equation}
\begin{proposition}\label{Prop5}
Using the above algorithm for approximating the left PCT, $E_{k}=O\big(\Delta^{k-0.5}\big)$ as $\Delta\to 0$.
In particular, $|c-p_x|=O(\Delta^k)$
and $|h(c)-p_y|=O(\Delta^{k-1/2})$.
\end{proposition}
\begin{proof}
By the assumption of non-zero curvature of $\Gamma$ at the PCT, it follows that for a small enough $\Delta$, $\phi''(y)>C>0$ for all $y\in\big[g(x_{n+k-1}),h(x_{n+k-1})\big]$. By the mean value theorem,
\begin{equation}
\label{al:error_estimate_phi_tag_c}
\frac{|\phi'(y_{p})-\phi'(\Tilde{y}_{p})|}{|y_{p}-\Tilde{y}_{p}|}=\frac{|0-\phi'(\Tilde{y}_{p})|}{|y_{p}-\Tilde{y}_{p}|}=|\phi''(y_{c})|>C>0,
\end{equation}
where $y_{c}$ is between $y_{p}$ and $\Tilde{y}_{p}$. Using the estimate for the error in approximating the derivative by polynomial interpolation, and recalling $p_{2k-1}'(\Tilde{y}_{p})=0$, we obtain
\begin{align*}
\frac{|\phi'(\Tilde{y}_{p})|}{|y_{p}-\Tilde{y}_{p}|}=\frac{|p_{2k-1}'(\Tilde{y}_{p})+O(\Delta_{y}^{2k-1})|}{|y_{p}-\Tilde{y}_{p}|}=\frac{|0+O(\Delta_{y}^{2k-1})|}{|y_{p}-\Tilde{y}_{p}|}=\frac{O(\Delta_{y}^{2k-1})}{|y_{p}-\Tilde{y}_{p}|},
\end{align*}
and using (\ref{al:error_estimate_phi_tag_c}) we obtain
\begin{equation}
\label{eq:pct_x_error_c}
|y_{p}-\Tilde{y}_{p}|=O(\Delta_{y}^{2k-1}).
\end{equation}
Moreover, by using the Mean Value Theorem, the interpolation error estimate, and $(\ref{eq:pct_x_error_c})$
\begin{equation}
\label{eq:pct_y_error_c1}
|\phi(y_{p})-p_{2k-1}(\Tilde{y}_{p})|\leq |\phi(y_{p})-\phi(\Tilde{y}_{p})|+O(\Delta_{y}^{2k})= |\phi'(\xi)||y_{p}-\Tilde{y}_{p}|+O(\Delta_{y}^{2k}),
\end{equation}
where $\xi$ is between $y_{p}$ and $\Tilde{y}_{p}$. Since $\phi'(y_p)=0$, $|\phi'(\xi)|\le C|y_{p}-\Tilde{y}_{p}|$. Hence,
\begin{equation}
\label{eq:pct_y_error_c2}
|\phi(y_{p})-p_{2k-1}(\Tilde{y}_{p})|\leq
C|y_{p}-\Tilde{y}_{p}|^2+O(\Delta_{y}^{2k})=O(\Delta_{y}^{4k-2})+O(\Delta_{y}^{2k})=O(\Delta_{y}^{2k}).
\end{equation}
Recalling $\phi(y_{p})=c$ and $y_p=h(c)=g(c)$, and the notation for the approximated PCT, $(p_x,p_y)=(p_{2k-1}(\tilde y_p),\tilde y_p)$, it follows from $(\ref{eq:pct_x_error_c})$ and $(\ref{eq:pct_y_error_c2})$ that $|c-p_x|=O(\Delta^k)$
and $|h(c)-p_y|=O(\Delta^{k-1/2})$.
Combining the error estimates we conclude that
\begin{equation}
\label{eq:error_estimate_pct_c}
E_{k}=O(\Delta_{y}^{2k-1})=O(\Delta^{k-\frac{1}{2}}).
\end{equation}
\end{proof}
\subsubsection{The error in approximating the local expansion near the PCT}
\hfill
\medskip
To analyze the approximations $P\sim h_c^{[2k]}$ and $Q\sim h_d^{[2k]}$ we employ the following interpolation lemma:
\begin{lemma}\label{lemmaT}
Let $T(s)$ be a local power series of $f\in C^{r+1}[a,b]$ at $0\in (a,b)$,
$$T(s)=\sum_{j=0}^ra_js^j,$$
and let $p_r(s)=\sum_{j=0}^rb_js^j$ interpolate the data
$\{(s_i,f(s_i))\}_{i=0}^r$, where $s_i\in [0,r\delta]$ and $\delta$ is the maximal distance between interpolation points.
Then, for $0\le j\le r$,
\begin{equation}\label{InterpolationLemma}
|b_j-a_j|=O(\delta^{r+1-j}), \ \ \text{as}\ \delta\to 0.
\end{equation}
\end{lemma}
The result (\ref{InterpolationLemma}) follows using standard estimates for the approximation of a function and its derivatives by polynomial interpolation, applied at $s=0$.
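The rates in (\ref{InterpolationLemma}) are easy to confirm numerically; in the sketch below (our own illustration, with $f=\exp$ and $r=4$) the $j$-th coefficient error shrinks roughly like $\delta^{r+1-j}$ as $\delta$ is halved.
\begin{verbatim}
import numpy as np
from math import factorial
from numpy.polynomial import polynomial as P

r = 4
taylor = np.array([1.0 / factorial(j) for j in range(r + 1)])  # a_j for exp
for delta in (0.1, 0.05, 0.025):
    s = np.linspace(0.0, r * delta, r + 1)  # interpolation points in [0, r*delta]
    b = P.polyfit(s, np.exp(s), r)          # interpolant coefficients b_j
    print(delta, np.abs(b - taylor))        # entry j decays ~ delta^(r+1-j)
\end{verbatim}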
Let $\tilde P(x)=\sum_{j=0}^{r} \tilde p_j (x-c)^{j/2}$ interpolate the function $h$
at the data set $\{c,\{x_{j}\}_{j=n}^{n+r-1}\}.$
We recall that $h(x)$ has a local series expansion of the form
$$h_c^{[2k]}(x)=\sum_{j=0}^{2k} c_j (x-c)^{j/2}.$$
Setting $s=(x-c)^{1/2}$, the problem is transformed into approximating $h(s^2+c)\sim \sum_{j=0}^{2k} c_j s^{j}$ by polynomial interpolation at the points
$s_0=0$ and $\{s_{j-n+1}=(x_{j}-c)^{1/2}\}_{j=n}^{n+r-1}.$ Noting that the maximal distance between the interpolation points $\{s_j\}_{j=0}^r$ is $O(\Delta^{1/2})$, and using the above lemma, we obtain for $0\le j\le r$,
$$|\tilde p_j-c_j|=O(\Delta^{\frac{r+1-j}{2}}), \ \ \text{as}\ \Delta\to 0.$$
Here it also follows that $\tilde p_0=c_0=h(c)$.
Since the location of the PCT is unknown, we cannot compute $\tilde P$. Instead, we compute $P(x)=\sum_{j=0}^{r} p_j (x-p_x)^{j/2}$ by interpolating the data
\begin{equation}
\Big\{\big(p_{x},p_{y}\big)\Big\}\bigcup\Big\{\big(x_{j},h(x_{j})\big)\Big\}_{j=n}^{n+r-1},
\end{equation}
and we need to estimate the errors $|p_j-c_j|$.
By Proposition \ref{Prop5} we have
$|c-p_x|=O(\Delta^{k})$ and $|h(c)-p_y|=O(\Delta^{k-1/2})$.
Let us compare the interpolation problems for $\tilde P$ and for $P$, using the Lagrange interpolation formula. We observe that in the denominators of the Lagrange polynomials for $P$ there is an $O(\Delta^{k-3/2})$ perturbation relative to those for $\tilde P$. It follows that $$p_0=p_y=c_0+O(\Delta^{k-\frac{1}{2}}),$$
and for $1\le j\le r$
\begin{equation}\label{pjmcj}
|p_j-c_j|=O(\Delta^{\frac{r+1-j}{2}})+O(\Delta^{k-\frac{3}{2}}).
\end{equation}
Similar estimates hold for the approximation $Q(x)$.
Comparing $P$ and $h_c^{[2k]}$, the other source of discrepancy is due to the different expansion points, $p_x\ne c$. The major influence comes from the leading singular terms in the power series expansions. Using the estimate $|c-p_x|=O(\Delta^k)$ as $\Delta\to 0$, it follows that
\begin{equation}\label{cmxp}
||x-c|^\frac{1}{2}-|x-p_x|^\frac{1}{2}|\le C|c-p_x|^\frac{1}{2}=O(\Delta^\frac{k}{2}),\ \ \text{as}\ \Delta\to 0.
\end{equation}
The contribution of other terms in the expansion is of higher order in $\Delta$.
\subsubsection{The error in the spline approximation to $\hat h^{[2k]}$.}
\hfill
\medskip
For $k\ge s$, $s$ even, Corollary \ref{Coro51} gives $\hat h^{[2k]}\in C^{k}[c,d]\subseteq C^{s}[c,d]$.
Using a spline interpolant $\hat S$ of order $s$ to approximate $\hat h^{[2k]}$ yields the uniform approximation error
$$
|\hat h^{[2k]}(x)-\hat S(x)|\le C\Delta^s,\ \ \forall x\in [c,d].
$$
However, since $\hat h^{[2k]}$ is unavailable, we apply the spline interpolation to the data (\ref{Rdata}), which approximates the exact data of $\hat h^{[2k]}$.
Using the estimates (\ref{pjmcj}), (\ref{cmxp}), we have that
\begin{equation}\label{h2kmR}
|h_c^{[2k]}(x)+h_d^{[2k]}(x)-R(x)|=O(\Delta^{\frac{r}{2}+\frac{1}{2}})+O(\Delta^{k-\frac{3}{2}})+O(\Delta^{\frac{k}{2}}),\ \ x\in[\max\{c,p_x\},c+K\Delta],
\end{equation}
where $K$ is independent of $\Delta$. Using $r=4$ and $k=5$ gives
\begin{equation}\label{h2kmR2}
|h_c^{[10]}(x)+h_d^{[10]}(x)-R(x)|=O(\Delta^{\frac{5}{2}}),\ \ x\in[\max\{c,p_x\},c+K\Delta].
\end{equation}
In the following we assume $k\ge 3$, which implies that the term $O(\Delta^{k-\frac{3}{2}})$ is dominated by $O(\Delta^{\frac{k}{2}})$ and may be dropped. A similar estimate holds near the right PCT.
Approximating the PCT locations using $k$ sample points, approximating the local series expansions by $P$ and $Q$ using the approximated PCTs and $r$ samples, and approximating the resulting regularized data by a not-a-knot cubic spline interpolant $S$ on $[p_x,q_x]$, we define the final approximation to $h$ as
\begin{equation}\label{tildeh}
\tilde h(x)=S(x)+P(x)+Q(x),\ \ x\in [p_x,q_x].
\end{equation}
We notice that $h$ is defined on the interval $[c,d]$, while $\tilde h$ is defined on $[p_x,q_x]$. In order to compare between $h$ and $\tilde h$ we extend each of them to a larger interval $[c_e,d_e]$, $c_e=p_x-\epsilon$, $d_e=q_x+\epsilon$, as follows:
\begin{equation}\label{he}
h_e(x) =
\begin{cases}
h(c) &\quad x\in [c_e,c)\\
h(x) &\quad x\in [c,d]\\
h(d) &\quad x\in (d,d_e]\\
\end{cases},
\end{equation}
\begin{equation}\label{tildehe}
\tilde h_e(x) =
\begin{cases}
\tilde h(p_x) &\quad x\in [c_e,p_x)\\
\tilde h(x) &\quad x\in [p_x,q_x]\\
\tilde h(q_x) &\quad x\in (q_x,d_e]\\
\end{cases}.
\end{equation}
Using the estimates in Proposition \ref{Prop5}, $|c-p_x|$ and $|d-q_x|$ are both bounded by $C\Delta^k$. Hence, choosing $\epsilon\ge C\Delta^k$ guarantees that $[c_e,d_e]$ contains both intervals $[c,d]$ and $[p_x,q_x]$.
All the above estimates lead to the following approximation theorem:
\begin{proposition}\label{Prop6}
Using spline interpolation of order $s$ for $S$, and assuming $k\ge 3$,
\begin{equation}\label{tildehapp}
\|h_e-\tilde h_e\|_{\infty,[c_e,d_e]}=O(\Delta^{\frac{r}{2}+\frac{1}{2}})+O(\Delta^{s})+O(\Delta^{\frac{k}{2}}), \ \ \text{as}\ \ \Delta\to 0.
\end{equation}
\end{proposition}
A similar construction, with a similar approximation estimate, holds for the approximation $\tilde g_e$ to $g_e$.
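Numerically, the comparison between the extended boundary functions can be carried out as in the following sketch (the sampling density is an arbitrary choice):
\begin{verbatim}
import numpy as np

def extend(f, lo, hi):
    # constant extension of f from [lo, hi] to the real line, as in (he)/(tildehe)
    return lambda x: f(np.clip(x, lo, hi))

def sup_error(h, c, d, h_tilde, p_x, q_x, eps, n=2000):
    # sup-norm distance between the extended boundaries on [c_e, d_e]
    h_e, h_te = extend(h, c, d), extend(h_tilde, p_x, q_x)
    xs = np.linspace(p_x - eps, q_x + eps, n)
    return np.max(np.abs(h_e(xs) - h_te(xs)))
\end{verbatim}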
\subsubsection{The case of $M$ holes}\label{Mholes5}
\hfill
\medskip
Let $F$ be an SVF such that $Graph(F)$ has $M$ separable holes $\{H_i\}_{i=1}^M$ (i.e., the closures of the holes are disjoint). The hole $H_i$ is defined on an interval denoted by $[c_i,d_i]\subset(a,b)$, and we assume that it is simple, namely, it is the interior of a closed boundary curve $\Gamma_i$ such that every vertical cross-section at $x\in(c_i,d_i)$ cuts the curve at two points. We further assume that the curves $\{\Gamma_i\}$ do not intersect each other, do not intersect the upper and the lower boundaries of $Graph(F)$, and that each $\Gamma_i$ has non-zero curvature at both PCTs of $H_i$. Let the upper and lower boundaries of $H_i$ be defined by the functions $h_i$ and $g_i$ respectively. We also recall the functions $u$ and $\ell$ defining the upper and lower boundaries of $Graph(F)$.
The SVF approximation $\tilde F(x)$ is defined as follows: For each hole $H_i$ we apply the approximation algorithm described in Section \ref{Algorithm5} for the case of one hole. The outcome includes approximations $\tilde h_i\sim h_i$, $\tilde g_i\sim g_i$ on an interval $[\tilde c_i,\tilde d_i]\equiv [p_{x,i},q_{x,i}]$ approximating the interval $[c_i,d_i]$.
We continue the analysis as in Section \ref{Mholes}, using the same idea of extended functions on extended intervals, and using the same definitions therein.
\begin{theorem}\label{Theorem5}
Let $F$ be an SVF such that $Graph(F)$ has $M$ separable holes with $C^{2k}$ boundary curves having non-zero curvature at the PCTs. Defining the approximation by (\ref{tildeFatxe4}), for a small enough $\Delta$,
\begin{equation}
d_H(F(x),\tilde F(x))\le C_1\Delta^{\frac{r}{2}+\frac{1}{2}}+C_2\Delta^{s}+C_3\Delta^{\frac{k}{2}}.
\end{equation}
\end{theorem}
\begin{proof}
Each interval in (\ref{Fatxe4}) has a corresponding interval in (\ref{tildeFatxe4}), and by (\ref{tildehapp}) it is clear that the Hausdorff distance between corresponding intervals is of order
$$O(\Delta^{\frac{r}{2}+\frac{1}{2}})+O(\Delta^{s})+O(\Delta^{\frac{k}{2}}), \ \ \text{as}\ \ \Delta\to 0.$$
The proof is completed using the result in Lemma \ref{Lemma:sets}.
\end{proof}
\subsection{Numerical Results}
We demonstrate the process of approximating the left PCT, as well as the decay rate of the approximation error, on a set-valued function with an elliptic hole, displayed in Figure \ref{fig:example_C1}, which is explicitly given by
\[ F(x)=\begin{cases}
\big[-\frac{3}{2},\frac{3}{2}\big],\qquad &x\in\big[-1,1\big]\setminus\big[-\frac{1}{2},\frac{1}{2}\big],\\
\Big[-\frac{3}{2}, -\sqrt{1-4x^2}\Big]\bigcup\Big[\sqrt{1-4x^2}, \frac{3}{2}\Big],\qquad &x\in\big[-\frac{1}{2} ,\frac{1}{2}\big].
\end{cases}
\]
\begin{figure}[!ht]
\centering
\subfloat[][The Set-Valued Function $F$]{\includegraphics[width=.4\textwidth]{Examples_C/example_2.png}}\quad
\subfloat[][Approximation with 30 samples]{\includegraphics[width=.4\textwidth]{Examples_C/example_2_30.png}}\\
\subfloat[][Approximation with 35 samples]{\includegraphics[width=.4\textwidth]{Examples_C/example_2_35.png}}\quad
\subfloat[][Approximation with 40 samples]{\includegraphics[width=.4\textwidth]{Examples_C/example_2_40.png}}
\caption{The set-valued function $F$ and its approximations, zoomed in near the left PCT of the hole. Each approximation is represented by vertical blue lines, drawn on the graph of the original function, which is colored in yellow.}
\label{fig:example_C1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{Examples_C/maximum_error_example_2.png}
\caption{The rate of decay of the interpolation error as a function of the number of interpolation points ($N$) for the set-valued function $F$ and for three values of $k$, $s=3$ and $r=4$.}
\label{fig:maximum_error_example_C1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{Examples_C/pct_error_example_2.png}
\caption{The rate of decay of the error in the location of the left PCT as a function of the number of interpolation points ($N$) for the set-valued function $F$ and for three values of $k$, $s=3$ and $r=4$.}
\label{fig:pct_error_example_C1}
\end{figure}
Figure \ref{fig:example_C1} consists of four sub-figures. The first sub-figure shows the graph of the original function, with the close-up area near the left PCT bounded by a red rectangle. The last three sub-figures show a zoomed-in view of the approximations corresponding to different numbers of samples. The approximant values are represented by vertical blue lines, drawn on the graph of the original function, which is colored in yellow.
We show the rate of decay of the approximation error for the above SVF in Figure \ref{fig:maximum_error_example_C1}. The error is estimated by
$$
\text{Maximum Error}=\max_{j}{\Big\{d_{H}\big(F(\xi_{j}),\Tilde{F}(\xi_{j})\big)\Big\}},
$$
where $\{\xi_{j}\}_{j=1}^{400}$ is a set of equidistant points in $[a,b]$.
Recall that the interpolating polynomial for approximating the PCT is of degree $2k-1$ (see the description of the algorithm). We plot
$$
G_{k,r,s}=\frac{\log{(\text{Maximum Error}})}{\log{(\Delta)}},\qquad k=2,3,4,\qquad s=3,\qquad r=4,
$$
with $\Delta=\frac{b-a}{N-1}$, as a function of the number of the interpolation points $N$.
Finally, we show the rate of decay of the error of approximating the left PCT of the above SVF in Figure \ref{fig:pct_error_example_C1}. We plot the value of
$$
\tilde{E}_{k}=\frac{\log{(E_{k})}}{\log{(\Delta)}},\qquad k=2,3,4,
$$
as a function of the number of the interpolation points $N$. Here, $E_{k}$ is the error of the approximation of the left PCT defined in (\ref{eq:error_pct_approx_c}).
Figure \ref{fig:example_C1} demonstrates that the approximation error of the left PCT decreases as $N$ increases in accordance with the theory.
Figure \ref{fig:pct_error_example_C1} shows that the decay rate of the approximation of the left PCT improves as $k$ increases as predicted by the theoretical result (\ref{eq:error_estimate_pct_c}).
\section{Introduction}
The nearby \citep[d=3.6\,Mpc;][]{freedman94} star-forming galaxy M82 has been subject to
frequent radio monitoring at centimetric wavelengths with the VLA from
the early 1980s \citep{b1,b2}, and with MERLIN from the early 1990s
\citep{b3,b4}. Of order 60 compact radio sources have been
identified within the central kpc of M82, the majority of which are
thought to be recent supernova remnants which have exploded within the
last 2000 years. The origin of 46 of these objects has been determined
by the study of their radio spectral indices; 30 are considered to be
supernova remnants, and 16 are thought to be compact H{\sc ii} regions \citep{b5}.
Radio monitoring at intervals of around a year has shown that there is
an additional population of radio transient sources whose origin is
unknown. To date two transient sources have been detected, and each
for only a single monitoring epoch implying that their lifetimes are
typically less than a year. \citet{b6} detected the compact radio
source 41.5+597 in M82 with the VLA in February 1981. At that epoch,
the object had a flux density of 7.1\,mJy and 2.6\,mJy at wavelengths
of 6 and 2\,cm respectively, implying that the source possessed a
steep radio spectral index of $\alpha$=$-$0.9 (S=$\nu$$^{\alpha}$). By
October 1983 the source had faded to below the detection threshold of
their VLA monitoring observations with an upper limit of 1.5\,mJy at
6\,cm. In a series of 6\,cm MERLIN monitoring observations starting in
the early 1990s, no emission was found at the position of 41.5+597 to
limits of $\sim$60$\mu$Jy, and in the deepest 6\,cm MERLIN
observations of M82 to date, made in 2002 \citep{b7} no emission was
found to a limit of 20\,$\mu$Jy.
In July 1992 \citet{b3} detected a second transient, 40.59+55.8 with
MERLIN at 6\,cm with a flux density of 1.2\,mJy. Subsequent MERLIN
monitoring at 21\,cm in April/May 1993 failed to detect emission at
the position of the transient to a limit of 300\,$\mu$Jy. Furthermore
it was not detected by deep MERLIN imaging at 6\,cm in February 1999
and April 2002 with limits of 35 and 21\,$\mu$Jy respectively
\citep{b5,b7}. Both 41.5+597 and 40.59+55.8 lie outside the dynamical
centre of M82 and since neither has given rise to a radio supernova
remnant, they may be examples of stellar binary microquasar
systems. If so, they would be the first to be discovered in the radio
outside the Milky Way.
Recently, \citet{brunthaler09a,b8} reported the detection of radio emission from a
new bright supernova in M82 (SN\,2008iz) which is thought to have flared
during the last week of March 2008 \citep{b10}. The appearance of
SN\,2008iz around 45 years after the previous supernova
\citep[43.31+59.2,][]{b11} is consistent with the radio supernova rate for M82 of a new
supernova approximately every 15 to 30 years \citep{b3,b7}. Subsequent
enhanced MERLIN monitoring of M82 has resulted in the detection of a
new radio source in the central region of the galaxy
\citep{b9}. This faint new radio source was discovered in observations
taken 1-5th May 2009 and was not present in images taken $\sim$1 week
earlier, on the 25th April 2009 (see Fig.\,\ref{Maps1}). Using closely spaced MERLIN (and VLBI) observations between
April 2009 and January 2010, it has been possible, for the first time, to
study the detailed evolution of one of the M82 transient source
population.
\vspace*{-0.5cm}
\section[]{Observations and Data Reduction}
The detection of both this new source and the campaign of continued
flux density monitoring of the evolving SN\,2008iz motivated a series of
radio monitoring observations of M82. MERLIN observations of M82 were
made between late April 2009 and January 2010 at 4994 and 6668.4\,MHz,
and 1658\,MHz.
All observations were made in wide-field mode, with parallel hands of
circular polarisation, measured over 16\,MHz of bandwidth correlated
into 32 frequency channels. The primary flux density calibrator
3C\,286 was used to set the flux density scale and the unresolved
bright calibrator OQ208 was used to calibrate the amplitudes and
bandpass responses. Throughout each epoch observations were
interspersed with scans on the nearby phase reference source
J095910+693217, with an assumed position of RA
09$\hbox{$^{\rm h}$}\,59\hbox{$^{\rm m}$}\,10\fsec6391$, Dec 69$\degr\,32\hbox{$^{\prime}$}\,17\farcs723$
(J2000).
Data from each epoch were independently reduced using standard methods
applying phase corrections determined from the phase reference
source, J095910+693217, and the data were weighted appropriately to account
for the relative sensitivities of the individual antennas. Following calibration, a large field encompassing
the entire radio extent of M82 at this resolution was imaged, using multiple
imaging facets and fully accounting for wide-field imaging effects. At each
different reference frequency all epochs were imaged in an identical
manner and the images were restored with a circular Gaussian beam
appropriate for the {\it uv} spacing of the baseline lengths and the
weighting applied to the gridded data during imaging.
\begin{center}
\begin{figure*}
\includegraphics[width=15.5cm,angle=0]{muxlow_fig2.eps}
\vskip -0.15cm
\caption{The {\it left-hand panel} shows the radio light curve of SN\,2008iz and the new transient source
reported over the first 150 days of its existence. MERLIN 4994 and
1658\,MHz data are plotted as black and blue crosses
respectively. MERLIN observations at 6668.4\,MHz are not
plotted. 1.4\,GHz eVLBI data are shown as blue stars, 5 and 1.6\,GHz
VLBA are shown as black and blue open diamonds respectively
\citep{brunthaler09d}. The 5\,GHz light curve for the SN\,2008iz
\citep{b10} derived from single dish Urumqi observations is shown in
green. The flux densities measured from these MERLIN data at 5\,GHz
for two nearby compact remnants 44.01+59.6 and 43.31+59.2 which are
known to have a constant flux density \citep{b6,ulvestad94} are also shown as
pale blue triangles. An enlargement of the shaded region showing the
light curve for the new radio transient is shown in the {\it
right-hand panel}.}
\label{lightcurve1}
\end{figure*}
\end{center}
\vspace*{-1.0cm}
\section{Results}
\subsection{Radio lightcurve}
Radio lightcurves of the new MERLIN radio source are shown in
Fig.\,\ref{lightcurve1}, along with a composite MERLIN and Urumqi
5\,GHz light curve for SN\,2008iz \citep{b10,beswick09}. The Urumqi
5\,GHz observations do not resolve M82, but the observed variations in
the total flux density are dominated by SN\,2008iz. The new MERLIN radio
source reached a flux density of $\sim$600-700\,$\mu$Jy at 4994\,MHz
by early May 2009, showing greater than a factor of 5 increase in flux
density within 8 days. The flux density of the source has remained
approximately constant throughout all subsequent
observations. Simultaneous VLBA observations at 1.6 and 4.8\,GHz
observed on 30th April 2009, 3 days prior to the initial MERLIN 5\,GHz
detection, show this source to have an initial spectral index of
$-$0.7 \citep{brunthaler09d}. MERLIN monitoring observations at 1658,
4994 and 6668.4\,MHz show no significant variations in the spectral
index of this source throughout its first 150 days.
The evolution of the radio light curve for SN\,2008iz, \citep[left-hand panel of
Fig.\,\ref{lightcurve1} and discussed in detail by][]{b10}, is typical for a core-collapse
supernova, showing a rapid rise in flux density followed by a
power-law decline \citep[see for example][]{weiler02}. In comparison the new MERLIN
radio source is $\sim$100 times fainter than SN\,2008iz, and shows
significantly different flux density evolution with little or no
detectable variation following a very rapid initial rise.
\vspace*{-0.5cm}
\subsection{Position and size of this new radio source}
This new MERLIN source was detected on 3rd May 2009 at a position of
RA 09$\hbox{$^{\rm h}$}\,55\hbox{$^{\rm m}$}\,52\fsec5083$, Dec
69$\degr\,40\hbox{$^{\prime}$}\,45\farcs410$ (J2000) with an astrometric
error of 5\,mas in each coordinate. This position is within 3\,mas of
the VLBA detection of this source on the 30th April 2009 \citep{brunthaler09d}.
The position of the new MERLIN source has been measured in each epoch
relative to the position of the phase reference source and relative to
other bright, static radio sources in M82, such as SN\,2008iz,
41.95$+$57.5, 43.31$+$59.2 and 44.01$+$59.6. Over the first 50 days of
monitoring, including 6 MERLIN and 3 VLBI epochs, the fitted position
of the source shows evidence for an east-to-west positional shift
of $\sim$10$\pm$5\,mas. This equates to an apparent proper motion of
$\sim$0.2\,mas\,day$^{-1}$, equivalent to an apparent superluminal
motion of $\sim$4.2c at the distance of M82. Subsequent data from 29th
June 2009 (58 days after 1st May 2009) onwards show the source position
to be consistent with its initial position measured on 3rd May
2009. Thus, considering that the positional shift is at the limit
achievable with these data the detection of any proper motion can only
be considered as tentative at this early epoch.
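The quoted apparent speed follows directly from the measured shift and the adopted distance; a quick check (plain Python, rounded constants) gives:
\begin{verbatim}
MAS_TO_RAD = 1e-3 / 206265.0
MPC_TO_M, C = 3.086e22, 2.998e8

theta_dot = 0.2 * MAS_TO_RAD / 86400.0   # ~0.2 mas/day in rad/s
v_app = theta_dot * 3.6 * MPC_TO_M       # apparent transverse speed, m/s
print(v_app / C)                         # ~4.2, i.e. apparently superluminal
\end{verbatim}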
The highest resolution image with MERLIN was observed on 3rd January
2010, 247\,d after 1st May 2009, at a
frequency of 6.7\,GHz. From these data the source is partially
resolved with a deconvolved Gaussian fitted size of $15^{+5}\!\!\!\!\!\!_{-6}$\,mas.
\section {Discussion: nature of this new radio source}
Historical transients 41.5+597 \citep{b6} and 40.59+558 \citep{b3}
were each detected in only one monitoring epoch implying lifetimes of
less than a few months to a year. The new transient is broadly
similar to the earlier detections in flux density (within a factor of
10) and spectral index, although its longevity may soon indicate that
it is a different type of object. To date, beyond the radio detection
reported here, no confirmed detection of this new source has been
made at any other waveband in either archival or contemporaneous
observations, including X-ray \citep{kong09} and at K-band using
Gemini \citep{fraser09} and the Nordic Optical Telescope (S. Mattila,
private comm.).
We must consider the possibility that the new object is a background radio
source that has brightened significantly. The area of sky that has
been subjected to monitoring observations is the central nuclear
$\sim$1\,kpc of M82, an area of approximately 45$\times$15\,arcsec in
extent. The probability of finding a background AGN
system of $\sim$1\,mJy at 5\,GHz within this area is $\sim$1 in 550
\citep{prandoni06}. However, this object is extremely unusual in that it has
brightened by at least a factor of 5 (detection flux density/3$\sigma$
non-detection flux density) on a timescale of $\sim$1 week. Since
$<$1$\%$ of the faint background source population exhibit such violent intrinsic
variability, this reduces the probability by at least two orders
of magnitude \citep[e.g.][]{carilli03}.
The position of this new source lies at high Galactic latitude
(+40$\degr$). Considering this, and the fact that no detection of this new
source has been made at any other waveband, the source is consistent with being either extremely
optically faint and/or highly obscured by material within M82. Thus, on the balance of probability, we conclude that the new
transient is neither a foreground nor a background source and that it must
lie within M82.
\begin{center}
\begin{figure}
\includegraphics[width=8.0cm]{muxlow_fig3.eps}
\vskip -0.15cm
\caption{Combined MERLIN and VLA 5\,GHz false colour image of the region surrounding
the new radio source location, taken prior to its discovery. The position of the new source is
marked by an X (size not equal to the astrometric error). The positions, with associated errors, of the dynamical centre of M82 derived
via several multiwavelength methods \citep{weliachew84}
are also shown as plus signs ($+$). The two brightest compact radio sources in this image are 44.01+59.6 (left) and 43.31+59.2 (right).
\label{dyncentre}
\end{figure}
\end{center}
\vspace*{-1.2cm}
\subsection{A faint and unusual radio supernova?}
Typically radio supernovae emit high brightness temperature
synchrotron emission which initially is absorbed at longer wavelengths. As the supernova shell
expands this absorption decreases resulting in a rapid turn-on of
emission at shorter wavelengths followed by a later turn-on at longer
wavelengths, with an associated evolution in radio
spectral index. After reaching its peak luminosity a supernova then normally follows
a power-law decline \citep[e.g. SN\,2008iz. Fig\,\ref{lightcurve1} and ][]{weiler02}.
The peak luminosity of this new source at 5\,GHz is
$\sim$1$\times10^{18}$\,W\,Hz$^{-1}$. This is 3 orders of magnitude
less than the peak observed from Type-Ib/c supernovae and comparable
to the limits on the radio emission from Type-Ia
supernovae, which so far have not been detected \citep{panagia06}. Whilst this source is two orders of magnitude fainter
than SN\,2008iz (see Fig.\,\ref{lightcurve1}), its luminosity is
comparable to some faint nearby Type-II radio
supernovae \citep[e.g. SN\,2004dj, SN\,1987A;][]{beswick05,turtle87}. The peak luminosity and the
rise time of Type-II supernovae can be empirically related \citep[see Eq. 20
of][]{weiler02}. Following this relationship a
source of this luminosity should reach its peak flux density at 5\,GHz
between 3 and 11\,days after the supernova detonation. This timescale is consistent with the rise time
observed (Fig.\,\ref{lightcurve1}) and supports the scenario that
this source is a Type-II supernova.
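The quoted spectral luminosity follows from the peak flux density and the adopted distance; a quick check (plain Python, rounded constants):
\begin{verbatim}
import math
d = 3.6 * 3.086e22                      # 3.6 Mpc in metres
S = 700e-6 * 1e-26                      # 700 uJy in W m^-2 Hz^-1
print(f"{4 * math.pi * d**2 * S:.1e}")  # ~1.1e+18 W/Hz, as quoted
\end{verbatim}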
However, there are several observational discrepancies with the
hypothesis that this is a faint supernova. Firstly, following the very
rapid initial rise in flux density (Figs.\,\ref{Maps1},
\ref{lightcurve1}) the light curve for this object shows no
significant evolution and in particular no power-law decline. Whilst
this is not common for supernovae \citep{weiler02}, a plateau in the
radio light curve as observed here could result from the expanding
supernova shell interacting with a denser interstellar medium at later
times. Thus, whilst atypical, this lack of power-law flux density
decay cannot rule out the supernova hypothesis. Secondly, the spectral
index, as measured via simultaneous multi-frequency observations prior
to the source reaching its peak flux density, was $-$0.7 and has shown
no apparent evolution in the subsequent 150 days. This characteristic
is in contrast to the spectral evolution expected for young radio
supernovae, and can only be accounted for if the source had already
evolved past its peak at centimetre radio frequencies before 30th
April 2009.
The initial expansion velocities of typical radio supernovae have been
measured to be $\sim$23,000\,km\,s$^{-1}$
\citep[e.g. SN\,2008iz,][and references therein]{brunthaler09c,brunthaler10,weiler02}. Thus at an age of
$\sim$250 days a radio supernova would typically have an angular size
of $\ifmmode\stackrel{<}{_{\sim}}\else$\stackrel{<}{_{\sim}}$\fi$2\,mas at the distance of M82. Our most recent and highest
resolution observations were taken on 3rd January 2010, when the
source was at least 250 days old. At this epoch the source is
tentatively resolved with a Gaussian fitted size of
$15^{+5}\!\!\!\!\!\!_{-6}$\,mas (see Section 3.2). If this source is a
supernova this size would require either a mildly relativistic
expansion velocity \citep[similar to that recently reported for
SN2007gr and SN2009bb,][]{paragi10,soderberg10} or that its age is
significantly underestimated.
\vspace*{-0.2cm}
\subsection{Accretion around a massive collapsed object?}
The steep radio spectral index from birth (and the possible detection
of apparent superluminal motion) supports the hypothesis that the
transient may be associated with an accretion disc around a
massive collapsed object in the nuclear region of M82. We suggest two possible scenarios.
\vspace*{-0.4cm}
\subsubsection{An AGN in the nucleus of M82}
The transient lies close to, but a few arcsec to the West of the
dynamical centre of M82 as derived from radio, optical, NeII, and
$^{13}$CO kinematic studies \citep[][see Fig.~3]{weliachew84}. The position is also displaced from the ridge-line
of the extended radio emission which is thought to be associated with
the integrated emission from ejected plasma over the recent
star-formation history of M82. Unless the region of nuclear
star-formation and the dynamical centre are significantly displaced
from the centre of the gravitational potential, it would seem unlikely
that this object is associated with a central super-massive black hole
(SMBH) in M82. Emission from such an object has, to date, never been
detected. It is possible that this could be emission from a system
associated with a second SMBH acquired from a dwarf galaxy merging
with M82, but there is no supporting evidence of such a merger from
observations; however M82 is very disturbed and is interacting with
other galaxies in the M81-M82 group (Yun, Ho \& Lo 1994).
\vspace*{-0.3cm}
\subsubsection{Radio emission from an extragalactic microquasar?}
Alternatively, this source may be the result of
some form of flaring microquasar event in M82. The
700\,$\mu$Jy flux density of this source in M82 is equivalent to a
$\sim$90\,Jy source at a distance of 10\,kpc. The brightest
microquasar flares that have been seen in the Galaxy at centimetric
wavelengths are from Cygnus X-3 which flares to several tens of Jy
\citep{g1}. However, whilst such flares are close to the required
luminosity seen in the transient, the light curves of known Galactic
microquasars differ significantly from that seen for this object, with
Galactic flares peaking and then decaying away on a timescale of days
to weeks. The radio luminosity of the transient is comparable with the
ultraluminous X-ray source found in the nearby dwarf galaxy NGC 5408
\citep{Lang}. Both objects possess a steep radio spectral index, however
to date, no variable X-ray source has been detected at this position in M82.
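The 90\,Jy equivalence quoted above is a simple inverse-square scaling of the observed flux density:
\begin{verbatim}
S_m82, d_m82, d_gal = 700e-6, 3.6e6, 1.0e4   # Jy; pc; pc (10 kpc)
print(S_m82 * (d_m82 / d_gal) ** 2)          # ~91 Jy at a Galactic 10 kpc
\end{verbatim}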
Some form of relativistic jet could account for the observed steep
spectral index since in Galactic microquasars any optically thick
state typically evolves to thin within hours of turn on
\citep{Mirabel99,Fender04}. This scenario is compatible with the possible
detection of superluminal proper motion and the elongation seen at
late times. The strong jet-disk coupling would thus imply the presence
of an accretion disk around a massive collapsed object, although the
nature of this collapsed object remains unclear. If the transient is
some form of microquasar, its luminosity suggests that it is likely to
be associated with a massive black hole system of some type. This
could range from an extreme form of X-ray binary to an
intermediate-mass black hole system. However, the very high luminosity
and temporal longevity of the transient imply that this type of
accretion object is unusual and has not yet been seen within our
Galaxy.
\vspace*{-0.5cm}
\section{Conclusions}
This new source could be any of the above possibilities although each
of the proposed scenarios has difficulty in explaining all of the
observed properties. At present this source
has been detected for $>$9 months and shows no immediate signs of
fading. Depending upon its longevity it may represent
another example of a relatively short-lived faint radio source
population in M82. If so it would be the third example seen
in $\sim$30\,years of observations; thus a lower limit on their occurrence
rate is $\sim$1 every 10\,years, depending on their
lifetimes. If this population is associated with faint supernovae, it
will have significant implications for the radio-derived
supernova rate of M82. Alternatively, if this source is associated
with a microquasar flare event whose occurrence is related to
the host galaxy's star-formation rate, we would expect to see a
comparable event in our own Galaxy around once every 100\,years. Regular
monitoring observations with new sensitive, high resolution imaging
arrays, such as e-MERLIN and the EVLA, will be required to determine
the size and nature of any such population, both in M82 and in other
star-forming galaxies.
Global VLBI observations at 1.6 and 5\,GHz with milliarcsecond
resolution were taken in late 2009 and are awaiting
correlation. Images of the transient from these new data will
constrain the nature of this exciting source.
\vspace*{-0.7cm}
\section*{Acknowledgments}
MERLIN is operated by The University of Manchester
on behalf of the Science and Technology Facilities Council.
We thank R. Spencer for useful discussion.
\section{Introduction}
Spoken Language Understanding (SLU)~\citep{tur:2011} is a crucial task of most modern systems interacting with humans by speech. SLU refers to the ability of a machine to extract semantic information from speech signals in a form which is amenable for further processing including dialogue, voice commands, information retrieval, etc.
With the emergence of smart assistants in computer, smart phones and smart speakers, SLU is not only a great research area but has also become a key technology for the industry, as evidenced by several challenges including major companies such as Amazon with its Alexa prize of several million dollars to be distributed to university teams demonstrating ground breaking progress in spoken conversational AI\footnote{\url{https://developer.amazon.com/alexaprize}}.
From a computing perspective, the classical approach to SLU since the early age of spoken language systems \citep{Sears1988,hemphill1990} has been a pipeline of Automatic Speech Recognition (ASR) -- to transcribe the speech signal into a textual representation -- which feeds a Natural Language Understanding (NLU) module -- to extract semantic labels from the transcription. The main problem of such an approach is the cascading error effect: any error at the ASR level has a dramatic impact on the NLU part. This is why a large set of different approaches has been proposed to take the uncertainty of the ASR hypotheses into account within the NLU module and improve the robustness of the whole chain. This `classical' approach is the default one in most industrial applications and is still an active research area \citep{simonnet2017asr}.
However, since \cite{qian2017exploring}, Neural End-to-End SLU (E2E SLU) has emerged to benefit both from the performing architecture of Deep Neural Networks (DNN) and from the joint optimisation of the ASR and NLU parts.
Although the classical pipeline SLU model is still competitive, E2E SLU research has shown that the joint optimisation is an efficient way to handle the problem of cascading errors \citep{serdyuk2018towards,desot2019slu,desot2019towards,ghannay2018end}. In particular, it has been shown that perfect ASR transcriptions are \textit{not} necessary to predict intents and concepts \citep{ghannay2018end}.
However, E2E SLU is more than just a way to perform joint optimisation. Indeed, in Neural End-to-End SLU, contrary to the classical pipeline, the decision stage has a direct access to the acoustic signal (e.g. prosody features). Therefore, an important question is to know \textbf{which signal characteristics (and other linguistic properties) are effectively used by the E2E SLU model}. To the best of our knowledge there has been little research to shed light on this important question.
Therefore, the goal of this study is to perform a comprehensive analysis of the linguistic features and abilities that are better exploited by E2E SLU than pipeline SLU.
In particular, this paper addresses the following research questions:
\begin{enumerate}
\item \textbf{Can the cascade error effect of the pipeline SLU approach be avoided?} Although this has been shown in other research, there are still too few studies
for the answer to be taken for granted. Hence we present a comparison between the pipeline and E2E SLU approaches evaluated in a realistic voice command context for a language other than English: French.
\item \textbf{Is the model effectively exploiting acoustic information to perform concept and intent prediction?} We are not aware of studies having seriously explored this question.
We believe that an E2E model accesses the \textit{acoustic} levels to infer concepts and intents directly from speech. By accessing this information the model can avoid the \textit{cascade of errors} introduced by the interaction between the ASR and NLU models in a pipeline SLU method.
\item \textbf{Would an E2E SLU model be more robust to variations in vocabulary and syntax?} While grammatical robustness may seem to concern only the NLU task, we are not aware of research investigating whether an E2E model presents a better ability to process grammatical variation. Hence, we present an acoustic and grammatical performance analysis to assess the ability of E2E SLU models to handle variation at these two levels.
\end{enumerate}
Part of the comparison between pipeline and E2E SLU has been published in \citep{desot2019towards,desot2019slu}.
However, this paper presents updated experiments and a transfer learning approach that has not been presented before. Furthermore, the comprehensive acoustic and symbolic performance analysis was never published and constitutes the core of this article.
In this paper, Section~\ref{sec:SLU_state_of_the_art} gives a brief overview of the state of the art dedicated to pipeline and E2E SLU, as well as the few studies deeply analysing E2E SLU performance. Our whole approach is described in Section~\ref{sec:method}, where we recap the motivations, the baseline pipeline SLU and the target E2E SLU approaches before introducing the evaluation strategy. Section~\ref{sec:Data_test_train_overview} summarises the artificial speech training data generation and describes the held-out real speech test set acquired in a real smart home. Section~\ref{sec:Experiments_and_results} presents the results of the experiments with the baseline SLU and the E2E SLU trained by transfer learning. It reports superior performances for the E2E SLU model despite its lower performance on the speech transcription task compared to the pipeline model. Sections~\ref{sec:Acoustic_impact_on_E2E_SLU_prediction} and~\ref{sec:Symbolic_impact_on_E2E_SLU_prediction} present the analysis at the acoustic and grammatical levels, respectively. The E2E SLU model is shown to be more robust than the pipeline one to noise and grammatical variation, while its ability to benefit from prosodic information is less clear. These findings are discussed in Section~\ref{sec:Discussion} before reaching the conclusion in Section~\ref{sec:Conclusion}.
\section{Related work} \label{sec:SLU_state_of_the_art}
Although there is a rising interest in End-to-End (E2E) SLU that jointly performs the ASR and NLU tasks, E2E models have not yet definitively superseded pipeline approaches.
\subsection{Pipeline SLU} \label{sec:SLU_SOTA_pipeline}
A typical SLU pipeline approach is composed of an ASR and a NLU module. ASR output hypotheses are fed into an NLU model aiming to extract the meaning from the input transcription. The main problem with such an approach is the dependence on the transcription output from the ASR module causing error propagation and reducing the performance of the NLU module.
Hence, to deal with the uncertainty conveyed by the ASR, several methods incorporate the handling of \textit{N-best hypotheses}. For instance, in \cite{he2003data}, the ASR module (HMM) is followed by an NLU module using a Hidden Vector State model (HVS) for concept prediction.
A rescoring is applied to the N-best word hypotheses from word lattices as output from the speech recogniser. Parse scores from the semantic parser are then combined with the language model likelihoods.
Another strategy to decrease error propagation is the use of \textit{confidence measures}. These were used by
\cite{sudoh2006incorporating} for concept prediction, augmenting Japanese ASR transcriptions with concept labels. In their case, a concept label was associated with ASR transcriptions by an SVM model only if the confidence measures were above a certain threshold.
N-best list hypotheses and ASR output confidence measures were also exploited using weighted voting strategies \citep{zhai2004using}. Since the $n$\textsuperscript{th} hypothesis transcription contains more errors than the $(n-1)$\textsuperscript{th} hypothesis, voting mechanisms were used to improve performance. For instance, a concept was considered correct if it was predicted in more than 30\% of the N-best hypotheses per reference sentence.
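As an illustration, a minimal sketch of such a voting rule is given below (the concept sets for the hypotheses and the 30\% threshold are illustrative; this is not the original implementation of \citep{zhai2004using}):
\begin{verbatim}
from collections import Counter

# Hypothetical concept predictions for the 5-best ASR hypotheses
# of a single utterance.
nbest_concepts = [{"action", "device"}, {"action"}, {"action", "room"},
                  {"action", "device"}, {"device"}]

votes = Counter(c for hyp in nbest_concepts for c in hyp)
# Accept a concept if it is predicted in more than 30% of the hypotheses.
accepted = {c for c, v in votes.items() if v / len(nbest_concepts) > 0.30}
print(sorted(accepted))  # ['action', 'device']
\end{verbatim}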
A third strategy is to use \textit{word confusion networks}. For instance
\cite{hakkani2006beyond} improve the transition between ASR and SLU concept prediction, using word confusion networks obtained from ASR word lattices instead of simply using ASR one-best hypotheses \citep{mangu2000finding}. Word confusion networks provide a compact representation of multiple aligned ASR hypotheses along with word confidence scores. Their transitions are weighted by the acoustic and language model probabilities.
More recently, acoustic word embeddings for ASR error detection were trained through a convolutional neural network (CNN) based ASR model to detect erroneous words \citep{simonnet2017asr}. This approach was combined with word confusion networks and posterior probabilities as confidence measures for concept prediction. The output of the ASR model was fed to a conditional random field (CRF) model and an attention-based RNN NLU model.
In \cite{liu2020jointly}, SLU models using word confusion networks are compared with 1-best hypothesis and N-best lists (N=10) for concept label and value prediction. The ASR posterior probabilities are integrated in a pretrained BERT based SLU model \citep{devlin-etal-2019-bert}. The word confusion network is fed into the BERT encoder, and integrated into vector representation. The output layer is a concept label and value classifier.
Finally, another strategy, particularly adapted when aligned labels are missing (e.g., different from an aligned BIO scheme), is to use a \textit{sequence generation} instead of a \textit{sequence labelling} approach. In \citep{desot2019towards,desot2019slu}, \textit{unaligned NLU} data was used to train a BiLSTM seq2seq attention-based model as NLU module. Despite a lower prediction accuracy than aligned models, it provides the flexibility to infer slot labels from imperfect transcriptions and speech with disfluencies.
\subsection{E2E SLU} \label{sec:SLU_SOTA_E2E}
Only recently has SLU been conceived as a joint processing of the ASR and NLU tasks, which decreases the error propagation between the two modules.
Furthermore, such a model has \textit{access to the acoustic and prosodic levels}, which can have a positive impact on the performance of SLU. For instance,
\cite{serdyuk2018towards} trained a sequence-to-sequence (seq2seq) model on clean and noisy speech data to infer intents directly from audio MFCC. Such an approach showed that some prosodic aspects of the speech signal were exploited by the E2E model for intent classification (e.g., question vs imperative voice).
E2E SLU is also driven by the intuition that recognising speech,\textit{word by word}, is \textit{not} necessary.
In
\citep{ghannay2018end}, the Baidu Deep Speech ASR system \citep{hannun2014deep} was trained on transcriptions enriched with concept labels. Eight concepts were injected into the ASR transcriptions as symbolic labels. In order to reduce the importance that the connectionist temporal classification (CTC) cost function \citep{graves2006connectionist,watanabe2017hybrid,ueno2018acoustic} assigns to each character and to draw more attention to the concept symbols, all character sequences not related to a concept label were replaced by one and the same symbol.
Different from these E2E models that predict intent and concept labels directly from speech, a \textit{transfer learning} technique allows the training of a complete E2E model through sub-tasks (for instance, forcing a hidden layer to predict phonemes), thereby providing an easier learning path. Combined with curriculum learning, which presents the easy examples before the more complex ones during training, convergence of the learning algorithms is accelerated \citep{krueger2009flexible}. For instance, in
\cite{lugosch2019speech}, a transfer learning for intent prediction is applied by training first an ASR model and then adapting it to an SLU task to predict concepts
that are finally mapped to intents.
In \cite{caubriere2019curriculum,caubriere2020we}, this type of E2E SLU is performed for concept prediction, using the Baidu Deep Speech ASR tool.
A phase of training an ASR model is followed by three phases of learning concepts of increasing complexity. The approach showed a clear gain in performance compared with a classical pipeline approach on the French MEDIA corpus.
\subsection{E2E SLU analysis} \label{sec:SLU_SOTA_E2E_analysis}
Although E2E SLU has recently emerged as an alternative to pipeline SLU systems, we were not able to find research offering an in-depth performance analysis.
In
\cite{caubriere2019curriculum}, an \textit{Error analysis} of their E2E SLU system is performed. They show that concept deletion errors are not mainly caused by the ASR capability of the system, but occur as a consequence of a segmentation problem. On top of that, unseen concepts are better predicted using a transfer learning approach. In the study of
\cite{rao2020speech}, a pipeline SLU system is compared with an E2E SLU model for Amazon Alexa, where the interface between ASR and NLU is a shared 1-best hidden layer. They show that joint ASR and NLU training improves SLU performances for ASR erroneous output transcriptions that impact NLU performances in a pipeline model.
In
\cite{denisov2020pretrained}, a pretrained ESPnet ASR model encoder was combined with transformer based pretrained contextual BERT embedding. They analysed the fine-tuning of the E2E SLU \textit{layers}.
Results indicate that fine-tuning the ASR encoder layers is more beneficial than the NLU layers. This would mean that the acoustic representation must be adapted to the concept extraction task and is thus different from the ASR task.
This brief state of the art shows that handling the cascading error effect has been the main focus of pipeline SLU studies and the main motivation for E2E systems. Despite a gain in performance for E2E SLU and some studies analysing the potential causes of such a gain, the impact of the E2E SLU model's access to the acoustic level has not been investigated in depth. To the best of our knowledge, no other study has analysed the impact of acoustic features on E2E SLU performance or the robustness of such a model to grammatical mismatch.
\section{Method} \label{sec:method}
Before presenting the overall approach of the paper,
we recall the research questions mentioned in the introduction.
The state of the art section showed that one of the main drawbacks of a pipeline SLU is the cascade of errors effect. Hence, the main objective of previous and recent SLU approaches was to reduce the impact of ASR errors on NLU performances. Our goal in this study is not only to avoid this cascade of errors but also to \textbf{understand what advantages an end-to-end (E2E) SLU approach can offer over a traditional pipeline approach} (this will be detailed in Section \ref{sec:Experiments_and_results}).
Given that the E2E approach extracts semantics directly from the acoustic signal, an interesting research question we address in this paper is \textbf{ whether the E2E model exploits prosodic information to infer intent and concepts from the speech signal} (this will be the subject of Section \ref{sec:Acoustic_impact_on_E2E_SLU_prediction}).
Another drawback of the classical ASR and NLU systems that can impact the NLU part negatively is the problem of out of vocabulary words (OOV) and unusual syntactic structures. By modelling linguistic phenomena at a finer granularity, we want to verify \textbf{whether an E2E SLU model is more robust to vocabulary and syntactic variations} (Section \ref{sec:Symbolic_impact_on_E2E_SLU_prediction} will detail this study).
This section details the overall approach we have followed to address the above challenges.
\subsection{Overall approach}
The solution we propose consists in considering the SLU problem as a slot-filling problem applied to the field of voice command in a smart home.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[node distance=2.5cm, auto, text width=2cm]
\node [draw] (corpus) {Corpus acquisition};
\node [draw,right of=corpus] (SLU) {Definition of models};
\node [draw,right of=SLU] (SLU_app) {Model training};
\node [draw,right of=SLU_app] (eval) {Evaluation \& analysis};
\path [draw,->]
(corpus) -- (SLU)
(SLU) -- (SLU_app)
(SLU_app) -- (eval) ;
\end{tikzpicture}
\caption{Overview of SLU development steps\label{fig:demarche}}
\end{figure}
Figure~\ref{fig:demarche} describes the steps of the approach.
The first step was to collect a
\textit{test corpus} to evaluate the SLU approaches on realistic data. To address the lack of \emph{training} data, we chose an expert-based artificial corpus generation approach. In order to reproduce a realistic situation, no data from the test corpus was used to train the models.
Then, we defined two baseline systems: the pipeline SLU and E2E SLU.
\textit{The pipeline SLU} is a combination of two state-of-the-art ASR and NLU modules. \textit{The End-to-end SLU model} is a pyramidal RNN multi-task model that combines a CTC cost function and an attention-based encoder and decoder. Theoretically, this type of model is capable of handling OOV words; it is therefore interesting to study whether the interaction between attention and CTC can strengthen the robustness of such a model on test data with high \textit{linguistic variability}. The SLU objective and the two SLU baselines are further introduced in the following Section~\ref{sec:SLU_approach}.
Once the models were defined, they were trained on the generated corpus. The pipeline training is detailed in Section~\ref{subsec:pipeline_SLU} and
the E2E training in Section~\ref{subsec:E2E_SLU}.
Finally, once the models have been trained for the task, we performed various evaluations to assess to which extent the results are correlated to external factors such as noise condition, gender, pitch variation, syntactic complexity, etc. The correlation measures used for the study are introduced in Section~\ref{sec:analysis_measures} while the experiments are detailed in Sections~\ref{sec:Acoustic_impact_on_E2E_SLU_prediction} and~\ref{sec:Symbolic_impact_on_E2E_SLU_prediction}.
\subsection{SLU approach} \label{sec:SLU_approach}
The targets of our SLU experiments are commands without linguistic context and with one intent per utterance.
The notion of intent is close to that of a \textit{speech act} which is the speaker's communicative activity in which an utterance produces an effect on its interlocutor \citep{crystal:2011}. In this study, the intent is the type of act addressed to the home automation system. For example, the statement ``Allume la lumière'' (\emph{turn on the light}) conveys the intent to the smart home to change the state of an object. To characterise the voice command, it is also necessary to identify its \textit{concepts} or \textit{slots} representing the most important information (i.e., entities and actions). This process is called \textit{slot-filling} \citep{tur:2011}.
Figure~\ref{fig:schema_architects_SLU_SEQ_SLU_E2E} shows the two ways of extracting intent and slots we followed in this paper. In the pipeline case (on the top of the figure), the input utterance is first transcribed by the ASR module to be analysed by an NLU module that generates a sequence of
\texttt{concepts}
that are supposed to be found in the speech input. In the example, the intent is \texttt{set\_device} while the concepts are the action with value \texttt{turn\_on} and the device with value \texttt{light}. In the E2E case (on the bottom of the figure), the SLU target is seen as an enriched transcription task. The SLU model is trained to surround the transcribed words with specific characters such as \string^ to delimit an action or \} to delimit a device. Intents are classified using a special character at the beginning and end of a transcription (here @ means the \texttt{set\_device} intent).
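A minimal sketch of this enrichment step is shown below (the symbol table follows the example above; the function and variable names are ours):
\begin{verbatim}
# Symbols follow the paper's scheme: ^ delimits an action, } a device,
# and @ marks the intent class (here set_device).
SLOT_SYM = {"action": "^", "device": "}"}
INTENT_SYM = {"set_device": "@"}

def enrich(chunks, intent):
    """Turn (text, slot) chunks into an enriched E2E SLU target string."""
    parts = []
    for text, slot in chunks:
        sym = SLOT_SYM.get(slot)
        parts.append(f"{sym}{text}{sym}" if sym else text)
    mark = INTENT_SYM[intent]
    return f"{mark} {' '.join(parts)} {mark}"

print(enrich([("allume", "action"), ("la lumière", "device")], "set_device"))
# @ ^allume^ }la lumière} @
\end{verbatim}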
\tikzstyle{rect} = [draw, rectangle, fill=white!20, text width=5em, text centered, minimum height=2em]
\begin{figure}[!ht]
\begin{tikzpicture}[node distance=1cm, auto]
\node [rect,draw=none] (step1) {{\small ``Allume la lumière*''}};
\node [right of=step1, node distance=2.5cm] (uk_1) {{\footnotesize \emph{*Turn on the light}}};
\node [rect, below right of=step1, node distance=2.5cm] (step3) {E2E SLU};
\node [rect, above right of=step1, node distance=2.5cm] (step4) {ASR};
\node [rect, draw=none,right of=step4, node distance=2.5cm] (step5) {{\small``Allume les lumières**''}};
\node [below right of=step5, node distance=2cm] (uk_2) {{\footnotesize \emph{**Turn on the lights}}};
\node [rect, right of=step5, node distance=2.75cm] (step6) {NLU};
\node [rect,draw=none, right of=step6, node distance=3cm, text width=3cm] (step7) {{\small intent[set\_device], action[turn\_on], device[light]}};
\node [rect,draw=none, right of=step3, node distance=8.3cm, text width=3cm] (step8) {{\small@ \string^allume\string^ \}la lumière\} @}};
\path [draw ,->] (step1) -- (step4);
\path [draw ,->] (step1) -- (step3);
\path [draw ,->] (step4) -- (step5);
\path [draw ,->] (step5) -- (step6);
\path [draw ,->] (step6) -- (step7);
\path [draw ,->] (step3) -- (step8);
\end{tikzpicture}
\caption{Comparison of pipeline and E2E SLU tasks}
\label{fig:schema_architects_SLU_SEQ_SLU_E2E}
\end{figure}
\subsection{Baseline pipeline SLU} \label{Baseline_pipeline_slu}
As in \citep{desot2019towards}, the ASR component of our pipeline SLU is the Kaldi tool, nnet2 version. This neural-network ASR training framework allows training with large amounts of data using multiple GPUs or multi-core machines. It uses speaker adapted features from the GMM (Gaussian Mixture Model) system, so a first pass of GMM decoding and adaptation is required \citep{povey2011kaldi,povey2014parallel}.
Mel Frequency Cepstral Coefficients (MFCC) were used as input features. Kaldi also allows using several adaptation methods of the acoustic models to the speaker, such as {\it Maximum Likelihood Linear Regression} (MLLR) \citep{Leggetter1995}, {\it Constrained Maximum Likelihood Linear Regression} (fMLLR) \citep{digalakis1996speaker} and {\it Speaker Adaptive Training} (SAT) \citep{anastasakos1996compact}. As the ASR component has to interact with the NLU module in a pipeline system in the real time setting of a smart home, the nnet2 \textit{online} version was also used.
Regarding the NLU module, we approach it as a \textit{sequence generation task} with \emph{unaligned} data. The SLU task is seen as a translation problem where the input must be abstracted to generate output intent classes and concept labels. For that reason, the NLU module was a seq2seq bi-directional LSTM encoder and decoder attention-based model.
This was our strategy to decrease errors of our baseline pipeline SLU model due to the imperfect transcription output of the ASR component, which impacts the NLU. Using an unaligned approach, the model should learn to associate several words with one slot label without aligned data. Furthermore, a classical BIO alignment cannot be assumed for pipeline and E2E SLU when the input data consists of spontaneous speech with disfluencies, which often cause ASR deletion and insertion errors. Hence, a robust NLU sub-part is needed that can handle these and that can be trained with unaligned labels. In \cite{mishakova2019learning}, we showed that such an NLU approach is competitive with state-of-the-art \emph{aligned} NLU CRF models \citep{Jeong2008} and also with DNN-based models \citep{Mesnil2015,Bapna2017,Liu2016,Huang2017} that treat the NLU problem as a \textit{sequence labelling task}.
\subsection{End-to-end SLU}
The E2E approach as outlined in \cite{desot2019slu} was based on the ESPnet ASR toolkit \citep{watanabe2018espnet}. It integrates the Kaldi data preparation, extracts Mel filter-bank features, and combines Chainer and PyTorch deep learning tools \citep{tokui2015chainer,paszke2017automatic}. The encoder consists of a very deep convolutional neural network (VGG) followed by six bidirectional pyramidal subsampling bi-LSTM layers. Figure~\ref{fig:ESPnet_CTC_attention_ML-RNN} includes an overview of the ESPnet architecture.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{./ctc-att-rnn-espnet.png}
\caption{ESPnet architecture with the different training/inference strategies (CTC, attention, ML RNN)}
\label{fig:ESPnet_CTC_attention_ML-RNN}
\end{figure}
Convolutional neural networks (CNNs) have achieved great success in image recognition \citep{cho2018multilingual}. In the context of ASR, CNNs are usually used as feature extractors, while the HMM part is typically replaced by RNNs that provide a distribution over sequences directly \citep{zhang2016towards}.
The success of using CNNs in ASR tasks can be attributed to the use of local filtering and max pooling in the CNN architecture. This combination turns out to be a better strategy than a GMM model that represents the entire frequency spectrum as a whole. Another benefit is better robustness against ambient noise. In order to apply filtering locally, a frequency scale is needed that can be divided into a number of local bands.
Therefore, MFCC features are not suitable because of their DCT-based (Discrete Cosine Transform) decorrelation transform: Gaussian mixture models need decorrelated input feature dimensions, whereas CNNs can benefit from correlated inputs. For that reason, filter-bank features are better suited for local filtering using CNNs. Max pooling and filtering are also an alternative to SAT (Speaker Adaptive Training) and MLLR (Maximum Likelihood Linear Regression), which transform speech features into a canonical speaker space \citep{abdel2012applying}.
In ESPnet, the mapping from acoustic features to character sequences is performed by a \textit{hybrid} multi-task learning that combines CTC \citep{amodei2016deep,graves2006connectionist} and attention \citep{bahdanau2014neural}. The attention mechanism allows a more flexible alignment that focuses on the important features and character sequences, whereas CTC enforces a monotonic alignment; the hybrid approach finds a balance between the two objectives.
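This trade-off is commonly expressed as an interpolated multi-task objective, with $\lambda \in [0,1]$ the CTC weight (we reproduce the generic form from the hybrid CTC/attention literature cited above):
\begin{equation}
\mathcal{L}_{\text{MTL}} = \lambda \, \mathcal{L}_{\text{CTC}} + (1 - \lambda) \, \mathcal{L}_{\text{att}}
\end{equation}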
To move the ESPnet model from a transcription objective to an SLU objective, the transcriptions were enriched with symbols representing intent classes and concept labels, as described in Figure~\ref{fig:schema_architects_SLU_SEQ_SLU_E2E}. This strategy has previously been used with success by \cite{ghannay2018end,desot2019slu,desot2019towards}.
\subsection{Analysis measures}\label{sec:analysis_measures}
An E2E SLU approach has access to the \textit{acoustic} and \textit{prosodic} information of the input signal. Therefore, in the analysis we intend to assess to which extent the model is capable of exploiting para-linguistic information to infer semantic information.
Table \ref{tab:Evaluation_model_acoust_pros_symb} gives an overview of the analysis levels that we consider in the study.
\begin{table}[!bh]
\caption{
Analysis levels of ASR, pipeline and E2E SLU on the VocADom@A4H test set}
\label{tab:Evaluation_model_acoust_pros_symb}
\centering
\begin{footnotesize}
\begin{tabular}{|l|ccccccc|ccccc|ccccc|cc|}
\hline
\textbf{Analysis} &\multicolumn{12}{c|}{\textbf{Acoustic}} & \multicolumn{7}{c|}{\textbf{Symbolic}} \\
\textbf{level} &\multicolumn{12}{c|}{\textbf{}} & \multicolumn{7}{c|}{\textbf{}} \\
\hline
&\multicolumn{7}{c|}{\textbf{Source}} & \multicolumn{5}{c|}{\textbf{Prosody}} & \multicolumn{5}{c|}{\textbf{Lexical}} & \multicolumn{2}{c|}{\textbf{Syntax}}\\
\hline
\textbf{Analysis:} &\multicolumn{4}{c|}{\textbf{Noise}} & \multicolumn{3}{c|}{\textbf{Gender}} & \multicolumn{2}{c}{\textbf{Avg. F0}} & \multicolumn{3}{c|}{\textbf{}} & \multicolumn{5}{c|}{\textbf{OOV}} & \multicolumn{2}{c|}{\textbf{Variation}} \\
\hline
\textbf{Hypothesis:} & \multicolumn{19}{l|}{(1)
Does the E2E model benefit from \textbf{prosodic, acoustic information}?
} \\
\textbf{} & \multicolumn{19}{l|}{(2)
Is the E2E model more robust to \textbf{lexical} and \textbf{syntactic} variations?} \\
\hline
\end{tabular}
\end{footnotesize}
\end{table}
The acoustic analysis was devoted to two main aspects: the robustness to the variability of the source (here, the background noise and gender) and the capability of the model to exploit acoustic features. The robustness to OOV and syntactic variability was studied under the term `symbolic' in order to distinguish it from the purely acoustic part of the study. For each study, the analysis consisted in generating or extracting specific stimuli to feed the models, then measuring the \emph{performance} of each model at various levels (intent, slot-filling, speech recognition) and assessing their \emph{correlations} with the acoustic or symbolic features of the input stimuli. We describe below which performance and correlation measures we used in the study.
For assessing the intent prediction performance, since the task consists in choosing for each utterance one possible intent among a restricted set of classes, we used classic classification measures: recall $= \frac{TP}{TP+FN}$ and precision $= \frac{TP}{TP+FP}$, where TP is {\it True Positive}, FP is {\it False Positive}, and FN is {\it False Negative}. \textit{Precision} expresses the proportion of correctly predicted intents in the set of \textit{predictions}, whereas \textit{recall} expresses the rate of correct predictions among the set of instances to be predicted. The F-measure or F1 score provides a single score that balances both precision and recall and is calculated as follows,
\begin{equation}
F1 = 2 \times \dfrac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}
\end{equation}
For slot-filling performance
we used the concept error rate (CER), which is defined as the ratio of the sum of deleted, inserted and confused concepts w.r.t. a Levenshtein alignment for a given reference concept string \citep{hahn2008comparison}. In this paper, we calculated the CER in a similar way, but we did not take the label sequence order into account since we used a generative approach. Hence, a reference sequence {\tt action, device} provides the same information as a hypothesis such as {\tt device, action}.
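As a sketch, one possible order-insensitive implementation compares the concept multisets, counting a paired deletion and insertion once (as a confusion); this is an illustration of the definition above rather than our exact scoring script:
\begin{verbatim}
from collections import Counter

def cer(reference, hypothesis):
    """Order-insensitive concept error rate (illustrative)."""
    ref, hyp = Counter(reference), Counter(hypothesis)
    deleted = sum((ref - hyp).values())    # concepts missing from hypothesis
    inserted = sum((hyp - ref).values())   # spurious concepts in hypothesis
    # A paired deletion + insertion corresponds to one confused concept.
    errors = max(deleted, inserted)
    return 100.0 * errors / sum(ref.values())

print(cer(["action", "device"], ["device", "action"]))  # 0.0
print(cer(["action", "device"], ["device"]))            # 50.0
\end{verbatim}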
As for the E2E approach, transcriptions are enriched with symbols that represent concepts and intents. We only consider the symbol sequences of the reference and hypothesis concepts. Figure~\ref{fig:CER_E2E_example} shows how the labels are extracted from the outputs to compute the CER. In this example, the hypothesis transcription shows an erroneous deletion of the symbol \verb+}+ and thus obtains a CER score of 50\%.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{./CER_E2E_example.png}
\caption{E2E SLU - calculation of the concept error rate that is symbolically represented}
\label{fig:CER_E2E_example}
\end{figure}
For the ASR level, performances were evaluated using the WER. The WER is defined as the sum of \textit{insertions} ($I$), \textit{deletions} ($D$) and \textit{substitutions} ($S$) of words, relative to the number of words $N$ in a manually verified reference transcription. The alignment between the reference and hypothesis transcriptions is obtained by dynamic programming so as to find the alignment leading to the minimal WER. The WER is calculated as follows~:
\begin{equation}
\text{WER} = \dfrac{I + S + D}{N} \times 100
\end{equation}
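For illustration, a straightforward dynamic-programming implementation is sketched below (the example reuses the transcriptions of Figure~\ref{fig:schema_architects_SLU_SEQ_SLU_E2E}):
\begin{verbatim}
def wer(reference, hypothesis):
    """Word error rate via Levenshtein alignment of word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: minimal edit cost between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                                      # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                                      # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i-1][j-1] + (ref[i-1] != hyp[j-1])   # substitution/match
            d[i][j] = min(sub, d[i-1][j] + 1, d[i][j-1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("allume la lumière", "allume les lumières"))   # ~66.7
\end{verbatim}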
As a final evaluation step, our main strategy is to compute the correlation of the resulting SLU and ASR performances with respect to the characteristics of the input stimuli.
We measure \textit{Pearson} and \textit{Spearman's rank} correlations between performances (CER, WER, F1) and input features (e.g. \textit{energy}).
Pearson's correlation coefficient $r$ measures the strength of the association between two variables $x$ and $y$ assuming a normal distribution of values. Pearson's correlation is calculated as follows, where $n$ denotes the number of elements~:
\begin{equation}
r = \frac{\sum_{i=1}^{n} \left(x_{i} - \overline{x} \right) \left( y_{i} - \overline{y} \right)}
{ \sqrt{\sum_{i=1}^{n}\left(x_{i} - \overline{x} \right)^2} \; \sqrt{\sum_{i=1}^{n}\left(y_{i} - \overline{y} \right)^2} }
\end{equation}
$r$ varies between $-1$ and $1$. There is no correlation when $r$ is equal to $0$, while $1$ indicates a perfect positive correlation and $-1$ a perfect negative correlation.
\textit{Spearman's rank correlation}, $r_{s}$, measures correlations between \textit{ranked variables} and is calculated as follows, where $x_{i}$ and $y_{i}$ denote the ranks of the paired observations and $n$ the number of elements~:
\begin{equation}
r_{s} = 1 - \frac{6\sum_{i=1}^{n} \left(x_{i} - y_{i} \right)^2 }{n(n^2-1)}
\end{equation}
In order to determine whether the resulting correlation coefficient is significant, the $p$-value is often used. It is calculated in hypothesis testing to determine whether or not to reject the null hypothesis. The $p$-value for the Pearson or Spearman correlation coefficient ($coef$) relies on the Student $t$ distribution, with
\begin{equation}
t = coef \sqrt{\frac{n - 2}{1 - coef^2}}
\end{equation}
We reject the null hypothesis, $H_{0}: coef = 0$, if the $p$-value is less than 0.05 ($p < 0.05$). The $p$-value under the alternative hypothesis $H_{1}: coef \neq 0$ (two-tailed test) is computed as follows~:
\begin{equation}
p = 2 \times P(T > |t|),
\end{equation}
where $P$ denotes the probability and $T$ follows a distribution $t$ with $n - 2$ degrees of freedom. We distinguish between correlations for which $p<0.05$ that we mark with $^*$
and correlations for which $p<0.01$ that we denote with $^{**}$.
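In practice, these coefficients, their $p$-values and the significance marks can be computed as sketched below (the per-utterance score lists are hypothetical placeholders for, e.g., WER and CER values):
\begin{verbatim}
from scipy import stats

wer = [22.9, 46.5, 30.1, 58.7, 14.7, 38.6]   # hypothetical per-utterance WER
cer = [33.8, 51.9, 36.2, 65.1, 26.9, 39.7]   # hypothetical per-utterance CER

def mark(p):
    # Significance marks used in our tables: ** for p<0.01, * for p<0.05
    return "**" if p < 0.01 else "*" if p < 0.05 else ""

r, p_r = stats.pearsonr(wer, cer)
rs, p_rs = stats.spearmanr(wer, cer)
print("Pearson  r  = %.2f%s" % (r, mark(p_r)))
print("Spearman rs = %.2f%s" % (rs, mark(p_rs)))
\end{verbatim}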
In order to measure and compare the robustness of the pipeline and E2E SLU models to lexical and grammatical variations (see Section \ref{sec:Symbolic_impact_on_E2E_SLU_prediction}), we verified the impact of an increased OOV (Out Of Vocabulary words) on SLU performances by gradually replacing domain specific vocabulary by synonyms that do \textit{not} occur in the training set.
We also measured the impact of the syntactic variability in the speech of our target users by predicting concepts and intents of the test data where we inserted syntactic structures and disfluencies that hardly occurred in the training set (Table \ref{tab:Evaluation_model_acoust_pros_symb}).
Our hypothesis is that the learning of the E2E SLU model, that combines the CTC and attention mechanisms, can enhance the robustness of the E2E model to \textit{linguistic variability}.
\section{Data} \label{sec:Data_test_train_overview}
To acquire target corpora in sufficient size to train deep neural network models, we used a combination of realistic data, synthetic data and out-of-domain data.
Realistic data was acquired within a real smart home with naive users, using a Wizard-of-Oz strategy to acquire diverse and contextualised voice commands. Indeed, since our target users are senior adults, they tend to deviate from any overly strict grammar \citep{Moller2008,Takahashi2003,Vacher2015}, hence the need for a realistic corpus that accounts for the rich set of possible sentences and pronunciations. This corpus is briefly introduced in Section~\ref{sec:test_data} and was made available to the community \citep{portet2019context}.
Acquiring such a realistic corpus is highly time-consuming and leads to an amount of data far too small for machine learning. We tackled this problem by automatically generating a domain-specific \textit{synthetic speech training} corpus using Natural Language Generation controlled by the semantic space of the smart home. This synthetic generation is described in Section~\ref{sec:synthdata}.
Finally, since the artificial generation presents a good semantic coverage but a poor diversity, we also collected other out-of-domain corpora that can be used either to enrich the training data or to perform transfer learning. This collection of corpora is listed in Section~\ref{sec:hand_out_datasets}.
\subsection{VocADom@A4H test data}\label{sec:test_data}
The VocADom@A4H corpus \citep{portet2019context,desot2018towards} includes about twelve hours of speech data and was acquired in realistic conditions in the Amiqual4Home smart home\footnote{https://amiqual4home.inria.fr}. Eleven participants uttered voice commands while performing activities of daily living for about one hour in different rooms including a kitchen, a living room, a bedroom and a bathroom. Out-of-sight experimenters reacted to participants' voice commands following a wizard-of-Oz strategy to add naturalness to the corpus. Furthermore, experimenters were also present in the home to act as visitors of the participant. At the end, the corpus consists of a mixture of spontaneous and read voice commands, with lexical and syntactic variation. Some of the utterances were recorded with background noise (use of vacuum cleaner, radio, tv etc.).
Each voice command was prefixed with a keyword (chosen by the participant in a list) to activate the smart home (e.g., ``Minouche, lower the blind of the bedroom'' where `Minouche' was one of the possible keywords). The participants' and experimenters' speech was semi-automatically transcribed and then corrected manually. For SLU, the data was manually annotated with intent classes and slot labels whose semantics were defined in accordance with the smart home capabilities. A description of the semantic labels for the slots is provided in Figure~\ref{fig:slots_tree}.
At the end of the annotation process, 6,747 utterances constitute the dataset. As shown in Table~\ref{tab:corpora_test}, it consists of voice commands (\textit{intents(1)}) and utterances other than voice commands (\textit{none intents(2)}). This \textit{realistic} corpus is the held-out test set used for all our SLU experiments. It is freely available for research purposes at \href{http://vocadom.imag.fr}{vocadom.imag.fr}. For more information about this corpus, the reader is referred to \citep{portet2019context}.
\begin{table*}[t]
\caption{VocADom@A4H test data overview}
\label{tab:corpora_test}
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{VocADom@A4H}~&\textbf{utterances}~&~\textbf{words}~&~\textbf{intents}~&\textbf{slot labels}\\
\hline
intents(1)& 2612 & 430 & 7 & 14 \\
none intents(2) & 4135 & 1326 & 1 & - \\
\hline
complete(3) & 6747 & 1462 & 8 & 14 \\
\hline
\end{tabular}
\end{table*}
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{./slotstreesymb.png}
\caption{Tree structure of concepts with symbolic representation}
\label{fig:slots_tree}
\end{figure}
\subsection{Automatic artificial training data generation}\label{sec:synthdata}
To gather a large amount of data with a broad coverage, we used an expert-based Natural Language Generation (NLG) approach \citep{Gatt2018}. An NLG approach is more easily controlled compared to a constrained RNN language model for data augmentation \citep{Hou2018}. The generation was performed in two phases:
\begin{enumerate}
\item Voice command utterances were generated as text and semantically annotated at the same time.
\item Generated textual voice commands were fed to a speech synthesizer to provide a complete corpus of speech annotated with semantic information.
\end{enumerate}
The NLG system was based on the NLTK python library feature-based context free grammar (FCFG) \citep{bird2009natural} allowing for sentence generation, and for features (i.e. slot information) to be attached to the final output sentences. The grammar defines intents as a composition of their possible constituents, with constraints on generation.
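A toy sketch of this grammar-based generation is given below (the rules are illustrative and much simpler than the actual VocADom grammar, which additionally attaches slot features to the generated words):
\begin{verbatim}
import nltk
from nltk.parse.generate import generate

grammar = nltk.CFG.fromstring("""
S -> KEYWORD ACTION DEVICE
KEYWORD -> 'vocadom'
ACTION -> 'allume' | 'éteins'
DEVICE -> 'la' 'lumière' | 'la' 'lampe'
""")

for words in generate(grammar):
    # Each command would be annotated, e.g. intent[set_device],
    # action[...], device[...], before speech synthesis.
    print(" ".join(words))
\end{verbatim}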
The semantic space consisted of four general intent classes:
\begin{itemize}
\item \texttt{contact}, which allows a user to place a call;
\item \texttt{set}, to make changes to the state of objects in the smart home;
\item \texttt{get}, to query the state of objects as well as properties of the world at large;
\item \texttt{check}, to check the state of an object.
\end{itemize}
Slot labels are divided into eight categories: the \texttt{action} to perform, the \texttt{device} to act on, the \texttt{location} of the device or action, the \texttt{person} or \texttt{organization} to be contacted, a device \texttt{component}, a device \texttt{setting} and the \texttt{property} of a location, device, or world.
Syntactical variation was also part of the grammar design.
Similar to the test set, each voice command includes a keyword to activate the smart home.
Maximising all combinations of semantic labels that result in meaningful utterances, the grammar generated about 77k phrases. These generated sentences were automatically annotated with 17 different concepts and eight different intent classes, based on the general categories defined by our semantic space. An overview of the concepts is presented in Figure~\ref{fig:slots_tree}, while Table~\ref{tab:intents_overview} provides an overview of the intents and compares the number of utterances per intent between the artificial and realistic corpora. The generation process and its evaluation is detailed
in \citep{desot2018towards}.
\begin{table}[ht]
\caption{Distribution of utterances broken down by intent for the Artificial (Artif.) train and VocADom@A4H (Real.) test corpus with examples}
\label{tab:intents_overview}
\centering
\begin{tabular}{|p{0.3\linewidth}p{0.07\linewidth}p{0.3\linewidth}p{0.07\linewidth} p{0.07\linewidth}|}
\hline
\textbf{Intent} & \textbf{Symb.} & \textbf{Example} & \multicolumn{2}{c|}{\textbf{\# utterances}} \\
& & & \textbf{Artif.} & \textbf{Real.} \\
\hline
{\tt Check\_device} & \# & {\it minouche is the window open~?} & 2754 & 284\\
{\tt Contact} & [ & {\it vocadom call a doctor} & 567 & 114\\
{\tt Get\_room\_property} & \{ & {\it berenio what's the temperature~?} & 9 & 3\\
{\tt Get\_world\_property} & ] & {\it ulysse what's the time~?} & 9 & 3\\
{\tt None} & & {\it the window is open} & - & 4135\\
{\tt Set\_device} & @ & {\it hestia lower the blinds} & 63,288 & 2178\\
{\tt Set\_device\_property} & \_ & {\it ichefix decrease the TV volume} & 7290 & 9 \\
{\tt Set\_room\_property} & \& & {\it chanticou decrease the temperature} & 3564 & 21 \\
\hline
\end{tabular}
\end{table}
The semantic annotation part of the synthetic corpus was generated in two versions: one for the pipeline and one for the E2E approach. Table~\ref{tab:unaligned_corpora_format} provides examples of these two formats. For the E2E SLU approach, the artificial corpus transcriptions are enriched with intent class and slot label symbols \citep{desot2019towards,desot2019slu}. A similar approach was applied in \citep{ghannay2018end}; however, our transcriptions are enriched with \textit{both} intent and slot label symbols.
\begin{table*}[t]
\caption{Artificial corpus format for the generative approach (pipeline) and the enriched transcriptions approach (E2E)}
\label{tab:unaligned_corpora_format}
\setlength{\tabcolsep}{10pt}
\centering
\begin{tabular}{|l|}
\hline
{\bf Format for generative NLU}\\
\hspace{16 mm}{\bf Pipeline SLU} (``vocadom close the door")\\
\verb|(Source) vocadom ferme la porte|\\
\verb|(Target) intent[set_device], action[close], device[door]|\\
\hline
{\bf Format for symbolically enriched transcription}\\
\hspace{16 mm}{\bf E2E SLU} (``vocadom switch on the light")\\
\verb|(Source + Target labels injected)|\\
\verb|@ VocADom ^allume^ }la lumière} @|\\
\verb|SET_DEVICE| intent class symbol \verb|@|/
\verb|Action| slot symbol \verb|^| / \\
\verb|Device| slot symbol \verb|}|\\
\hline
\end{tabular}
\end{table*}
As an SLU system extracts slot labels and intent classes from speech, we used a speech synthesizer to generate spoken utterances for the 77k artificial sentences, using the open-source Ubuntu SVOX\footnote{https://launchpad.net/ubuntu/+source/svox} female French voice\footnote{https://doc.ubuntu-fr.org/svoxpico}.
\subsection{Collection of realistic data sets}\label{sec:hand_out_datasets}
The artificial corpus, though of great semantic coverage, does not cover the diversity of speech that can be found in the test data. For instance, Table~\ref{tab:corpora_test} reports that \texttt{none} is the majority intent class in the VocADom@A4H test set. Furthermore, the artificial corpus contains only artificial speech produced by one synthetic voice. In order to increase the number of \texttt{none} intents in the artificial training data as well as to add voice diversity, the ESLO2 corpus \citep{serpollet2007large} of conversational French speech was added to the training set. Sentences unrelated to voice command intent were extracted (i.e. \texttt{none} intent) and manually filtered. Only out of domain utterances were kept for collecting \texttt{none} intent training data. Furthermore, similarly to the VocADom@A4H corpus, it contains frequent disfluencies.
The small domain specific SWEET-HOME corpus, with distant voice commands \citep{vacher2014b} was also added to the training data.
Table~\ref{tab:corpora} includes an overview of the complete SLU training set consisting of the artificial corpus and the SWEET-HOME and ESLO2 corpora.
The perplexity (\textit{perpl.}) and OOV with respect to the test set are provided using a 3-gram language model learned on each corpus. It can be seen that the vocabulary of the artificial corpus is quite poor despite a good semantic coverage. Because of this small vocabulary and the strict syntactic patterns, the perplexity and OOV stay high. By contrast, the SWEET-HOME corpus has a relatively low perplexity even with such a small amount of data. Finally, the large vocabulary of ESLO2 yields the smallest OOV rate.
\begin{table*}[t]
\caption{Comparison of SLU training and test data (OOV = test set words not seen in training data)}
\label{tab:corpora}
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|l|}
\hline
\textbf{training}~&\textbf{utterances}~&~\textbf{words}~&~\textbf{intents}~&\textbf{slots}~&\textbf{perpl.}~&\textbf{OOV}~&\textbf{speech}\\
\textbf{set}~&\textbf{}~&~\textbf{}~&~\textbf{}~&\textbf{}~&\textbf{}~&\textbf{}~&\textbf{(hours)}\\
\hline
Artif. & 77,481 & 187 & 7 & 17 & 124.41 & 307 & 81.25\\
Sweet-Home & 1412 & 480 & 6 & 7 & 49.33 & 343 & 2.5\\
Eslo2 & 161,699 & 29,149 & 1 & - & 151.90 & 211 & 126\\
\hline
Total & 240,592 & 30,821 & 8 & 17 & 372.06 & 235 & 209.75
\\
\hline
\end{tabular}
\end{table*}
\section{Training of the pipeline and E2E SLU models: impact of the cascade error effect} \label{sec:Experiments_and_results}
\subsection{Baseline pipeline SLU} \label{subsec:pipeline_SLU}
The ASR and NLU modules of the pipeline SLU were trained separately.
For the ASR module, a large acoustic model was trained using 472.65 hours of \textit{Real Speech} data from the French corpora ESTER1 \citep{galliano2005ester} \index{corpus!ESTER1} and ESTER2 \citep{galliano2009ester}, \index{corpus!ESTER2} REPERE \index{corpus!REPERE} \citep{giraudel2012repere}, ETAPE \citep{gravier2012etape}, \index{corpus!ETAPE} BREF120 \citep{tan2006french}, \index{corpus!BREF120} AD \citep{vacher2008preliminary}, \index{corpus!AD} SWEET-HOME \citep{vacher2014b}, \index{corpus!SWEET-HOME} CIRDO \citep{vacher2016cirdo} \index{corpus!CIRDO} and the corpus of spontaneous speech ESLO2 \citep{serpollet2007large}, and also the small domain specific SWEET-HOME corpus (Table \ref{tab:corpus_modacoust_Kaldi}).
For Kaldi NNET2 (Section \ref{Baseline_pipeline_slu}), an architecture of 4 hidden layers with 1024 hidden units, a Softmax layer with 4748 units and SGD (Stochastic Gradient Descent) was used. The learning rate started at 0.01 and ended at 0.0001, with a batch size of 128. The total training lasted 253 hours, using 1 GeForce GTX TITAN Black GPU. Acoustic features were 13-dimensional MFCC features. For further detail, the reader is referred to \citep{desot2019slu,desot2019towards,desot2020corpus}.
\begin{table}[t]
\caption{French corpora used to train an acoustic model using the ASR tool Kaldi
}
\label{tab:corpus_modacoust_Kaldi}
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{|l|c|c|c|}
\hline
\textbf{Corpus}~&~\textbf{\# hours of speech} \\
\hline
ESTER1 & 100 \\
ESTER2 & 100 \\
REPERE & 60\\
ETAPE & 30\\
BREF120 & 51.50\\
AD & 0.5\\
SWEET-HOME & 2.5\\
CIRDO & 2\\
ESLO2 & 126\\
\hline
Total & 472.65\\
\hline
\end{tabular}
\end{table}
For the seq2seq NLU module described in Section~\ref{subsec:pipeline_SLU}, input words were first passed to a 300-unit embedding layer. The encoder and decoder were each a single layer of 500 units. Adam optimiser was used with a batch size of 10, using gradient clipping at a norm of 2.0. Dropout was set to 0.2 and training continued for 10,000 steps with a learning rate of 0.0001. Input sequence length was set to 50 and output sequence length to 20, with a beam search of size 4.
The training data was the combined semantically annotated data sets: artificial, SWEET-HOME and the filtered ESLO2 utterances without intent (Table~\ref{tab:corpora}).
Once the ASR and NLU models were trained, inference with the pipeline approach simply consisted in feeding the best ASR transcription generated by Kaldi NNET2 to the seq2seq NLU module.
Table \ref{tab:eval_SLU_tab} shows that the differences in performance between the pipeline SLU approach (Pipeline SLU) and the NLU model alone are larger for slot prediction than for intent prediction.
\begin{table}[th]
\caption{Pipeline and E2E SLU performances, \% F1-score - Concept Error Rate - WER on VocADom@A4H. $\dagger$ with ESPNet as ASR}
\label{tab:eval_SLU_tab}
\setlength{\tabcolsep}{4pt}
\centering
\begin{tabular}{|l|lcccc|}
\hline
\textbf{Model} ~&~ \textbf{Hours} ~&~ \textbf{(\%) TTS} ~&~ \textbf{Intent} ~&~ \textbf{Slot} ~&~ \textbf{WER} \\
& \textbf{of speech} ~&~ \textbf{in train} ~&~ \textbf{F1-score} ~&~ \textbf{CER} ~&~ \\ \hline
\textbf{Pipeline:} & & & & & \\
ASR & 472.65 & 0.00 & - & - & 22.92 \\
NLU & - & - & 85.51 & 33.78 & -\\
SLU & 472.65 & 0.00 & \bf{84.21} & \bf{36.24} & -\\
\hline
\textbf{E2E:} & & & & & \\
ASR & 553.90 & 14.67 & - & - & 46.50 \\
SLU & 553.90 & 14.67 & 47.31 & 51.87 & - \\
\hline
\end{tabular}
\end{table}
\subsection{E2E SLU with transfer learning} \label{subsec:E2E_SLU}
For the E2E experiments, we used ESPnet default settings \citep{desot2019slu} in order to train on speech data, with slots and intents symbolically injected into the transcriptions. Since the training was end-to-end, the training data was composed of the ASR training data (472.65 hrs) plus the NLU training data (81.25 hrs), for which the artificial dataset was synthesised (cf. Section~\ref{sec:Data_test_train_overview}). This represents 553.9 hours of training data.
Table~\ref{tab:eval_SLU_tab} reports the results on the VocADom@A4H test set (\textit{E2E, ASR}). On the ASR task, the E2E model exhibits a far worse WER than the pipeline model. Similarly, on the SLU task, the E2E model performs far worse in terms of both CER and F1.
The large amount of data for the E2E learning, far from allowing generalisation, mainly led to unbalanced learning. In the training data, the open-domain part was too large to allow the domain data to properly drive the training.
Another way to take advantage of a large non-domain-specific dataset and a small domain-specific dataset is to use a transfer learning approach (cf. state of the art in Section~\ref{sec:SLU_state_of_the_art}). This means pre-training the model on an ASR task with a large amount of speech data in order to make it learn the input representations of the acoustic signal. Once the model is learned, training is restarted on an SLU task with other data sets designed specifically to learn concepts and intents.
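The weight-transfer mechanism is sketched below in PyTorch style (the module shapes, checkpoint name and symbol counts are illustrative, not the actual ESPnet code):
\begin{verbatim}
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Toy encoder-decoder standing in for the ESPnet architecture."""
    def __init__(self, n_outputs):
        super().__init__()
        self.encoder = nn.LSTM(80, 320, num_layers=6, bidirectional=True)
        self.decoder = nn.LSTM(640, 320)
        self.proj = nn.Linear(320, n_outputs)

asr = Seq2Seq(n_outputs=45)                      # plain character set
asr.load_state_dict(torch.load("asr_step1.pt"))  # hypothetical checkpoint

slu = Seq2Seq(n_outputs=45 + 25)                 # characters + SLU symbols
slu.encoder.load_state_dict(asr.encoder.state_dict())  # reuse acoustic layers
# The decoder and output projection are re-initialised; training then
# resumes on the SLU data sets (data(2), data(3), data(4)).
\end{verbatim}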
The complete process consists of 4 steps (Figure \ref{fig:transfer_learning_datasets_eng} and Table \ref{tab:intent_slot_symbols_and_asterisk}): 3 steps for the prediction of concepts and a fourth step for intent prediction:
\begin{figure}[tb]
\centering
\includegraphics[width=1\linewidth]{./transfer_learning_datasets_eng.png}\\
\caption{Transfer learning - Concept and intent prediction}
\label{fig:transfer_learning_datasets_eng}
\end{figure}
\begin{enumerate}
\item In the first step, an ASR model is trained (16 epochs) on the set of real and artificial speech utterances (553.9h) (\textit{data(1)});
\item In the second step, the model is trained (12 epochs) on 37k \textit{real} speech utterances, a subset of \textit{data(1)}, whose transcriptions contain symbolic concepts specific to the home automation domain (\textit{data(2)});
\item The third step is a training (9 epochs) on 3800 artificial corpus sentences whose concepts are \textit{missing} or \textit{under-represented} in the \textit{real} \textit{data(2)}. 1651 utterances containing under-represented concepts were also taken from \textit{data(2)}: we thus combined \textit{transfer} learning with a \textit{data duplication} technique, resulting in 5451 utterances (\textit{data(3)});
\item The final step is a training (11 epochs) on intent utterances (\textit{data(4)}), which contain 11K utterances of real and artificial speech.
\end{enumerate}
\begin{table}[th]
\caption{E2E SLU - intent and concept symbols}
\label{tab:intent_slot_symbols_and_asterisk}
\centering
\begin{tabular}{|l|}
\hline
\textbf{Concept} (data(3))\\ \hline
hestia s'il vous plaît \^{}baisser\^{} \}la lampe\} $>$de la chambre$>$ \\
\hspace{35 mm}\texttt{action}
\hspace{2 mm}\texttt{device}
\hspace{2 mm}\texttt{location-room} \\
(hestia please decrease light in the room) \\\hline
\textbf{Intent + Concept} (data(4)) \\
\hline
@@ hestia s'il vous plaît \^{}baisser\^{} \}la lampe\} $>$de la chambre$>$ @@\\
\hspace{0 mm}\texttt{set\_device}
\hspace{15 mm}\texttt{action}
\hspace{2 mm}\texttt{device}
\hspace{2 mm}\texttt{location-room} \\
\hline
\textbf{Intent + Concept - without words outside slots } (data(4*)) \\ \hline
@@ hestia * * * \^{}baisser\^{} \}la lampe\} $>$de la chambre$>$ @@\\
\hspace{0 mm}\texttt{set\_device}
\hspace{1 mm}\texttt{action}
\hspace{2 mm}\texttt{device}
\hspace{2 mm}\texttt{location-room} \\
\hline\end{tabular}
\end{table}
\begin{table}[th]
\caption{Pipeline and E2E SLU performances (intent F1-score and concept CER) with and without transfer learning}
\label{tab:eval_ESPNET_transfert_notransfert_slu}
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{Model} & \textbf{Intent (\%)} & \textbf{Concept (\%)} \\
& \textbf{F1-score} & \textbf{CER}\\
\hline
\textbf{Training without transfer learning} :& & \\
Pipeline SLU & \textbf{84.21} & \emph{36.24} \\
E2E SLU & 47.31 & 51.87 \\
\hline
E2E SLU Data(3) & - & 69.11\\
\hline
\textbf{Transfer learning E2E SLU} :& & \\
Data(1) $\rightarrow$ Data(2) & - & 42.19\\
Data(2) $\rightarrow$ Data(3) & - & \textbf{32.12}\\
Data(2) $\rightarrow$ Data(3) $\rightarrow$ Data(4) & 68.13 & - \\
Data(2) $\rightarrow$ Data(3) $\rightarrow$ Data(4*) & \emph{74.57} & - \\
\hline
\end{tabular}
\end{table}
Results are presented in Table~\ref{tab:eval_ESPNET_transfert_notransfert_slu}, which shows performance results for all phases of transfer learning. The baseline result with Kaldi (\textit{Pipeline SLU}) and the E2E approach are shown for comparison (\textit{E2E SLU-small} is not used as baseline since it includes 1k utterances from the test set). The results for the transfer learning from the ASR (\textit{Data(1)}) to the SLU task (\textit{Data(2)}), (\textit{Data(1) $\rightarrow$ Data(2)}), show lower performances than the approach with a reduced data set. In order to verify that our results are \textit{truly} based on transfer learning, we compared performances with a model trained \textit{only} on the 5k utterances of \textit{Data(3)}.
Training only on \textit{Data(3)} shows its limits (the CER amounts to 69.11\%). On the other hand, a training on the SLU task \textit{Data(2)} that is transferred to the SLU task \textit{Data(3)} (\textit{Data(2) $\rightarrow$ Data(3)}) shows its efficiency for concept prediction, performing better (CER $=$ 32.12\%) than the baseline pipeline SLU approach (\textit{Pipeline SLU}), which obtains a CER of 36.24\%.
These results indicate the relevance of transfer learning for E2E SLU.
For intent prediction, we continued the transfer learning principle using intent data (\textit{Data(4)}). For this intent learning task, on top of the transcriptions that are augmented with concept symbols, intent symbols were inserted (Table \ref{tab:intent_slot_symbols_and_asterisk}). Table \ref{tab:eval_ESPNET_transfert_notransfert_slu} shows that we could not outperform the pipeline SLU intent prediction performance. The best transfer learning results for intent prediction were obtained using the \textit{Data(2) $\rightarrow$ Data(3) $\rightarrow$ Data(4*)} model, where word tokens ``outside concepts'' have been replaced by asterisk symbols. Nevertheless, the latter model outperforms the reduced model's intent prediction (\textit{ESPnet-small}).
\subsection{ASR impact on E2E SLU} \label{subsec:ASR_impact_on_E2E_SLU}
In Section \ref{subsec:E2E_SLU}, we have shown that, with transfer learning, the E2E SLU model outperforms the pipeline SLU approach for concept prediction in spite of ASR hypothesis transcriptions that are far from perfect. Hence, we expect a weak correlation between per-utterance WER values from ASR and CER values from transfer-learning-based E2E SLU concept prediction. In order to verify this, we calculated the Pearson and Spearman correlation coefficients between the WER value per utterance and the CER value per utterance. Table \ref{tab:Correlations_Pearson_Spearman_WER_RAP_CER_SLU_E2E1} shows highly significant correlations, which are however not as strong as one would expect. Thus an improvement on the ASR task should have a positive impact on the SLU task, but ASR performance alone is far from predictive of E2E SLU performance. This answers our first question by confirming that E2E approaches are indeed a way to diminish the cascade-of-errors effect of the ASR task on the NLU task.
\begin{table}[th]
\caption{Pearson and Spearman correlations E2E ASR - E2E SLU}
\label{tab:Correlations_Pearson_Spearman_WER_RAP_CER_SLU_E2E1}
\centering
\begin{tabular}{|l|ll|}
\hline
\textbf{Model} & \textbf{WER} (\%) & \textbf{CER} (\%)\\
\hline
ASR E2E & {\bf46.50} & - \\
E2E SLU & - & {\bf32.12} \\
\hline
\textbf{Correlation coef.} & & \\
Pearson (r) & 0.26$^{**}$ & \\
Spearman (r$_s$) & 0.25$^{**}$ & \\
\hline
\end{tabular}
$^*$ means $p<0.05$ ; $^{**}$ means $p<0.01$
\end{table}
\section{Acoustic Analysis of E2E SLU prediction} \label{sec:Acoustic_impact_on_E2E_SLU_prediction}
Unlike a sequential SLU approach, an E2E SLU approach has access to \textit{acoustic} information of the input signal. It is therefore relevant to verify whether the model exploits para-linguistic indices to infer semantic information.
\subsection{Acoustic information impact in the E2E model}
\begin{figure}[b!t]
\begin{minipage}{0.4\linewidth}
\scriptsize
\centering
\includegraphics[width=0.9\linewidth]{./encoderdecoder.png}
\caption{Utterance with attention for concept labels
}\label{fig:attentions_concepts}
\end{minipage}\hfill
\begin{minipage}{0.5\linewidth}
\includegraphics[width=1\linewidth]{./no_concepts.png}
\caption{Utterance without concept labels}\label{fig:attentions_noconcepts}
\end{minipage}
\end{figure}
One way to qualitatively analyse an E2E neural model is to inspect its attention map. Using ESPnet, an attention heat map was generated for the test set utterance ``ulysse baissez le store de la chambre'' ({\it ulysse lower the blind in the room}), with concept labels \verb|action| 'lower' and \verb|device| 'the blind'. The yellow arrows in Figure \ref{fig:attentions_concepts}, pointing to the lighter color areas, show increased attention on the concept labels, especially around the hat and brace symbols that represent the concept labels \verb|action| and \verb|device| respectively.
On top of that, pitch and energy were measured for the same utterance using \textit{Praat}\footnote{https://www.fon.hum.uva.nl/praat/}. Figure \ref{fig:pitchenergy} shows a pitch contour (blue line) that rises on the concepts.
In order to exclude that the increased attention around the symbols is caused by white-spaces ($<$space$>$), another attention heat map for a test set utterance \textit{without} concepts was generated for the utterance ``ah bah ça tombe bien alors'' ({\it ah well that's good then}) and clearly does not show any increased attention around white-spaces (Figure \ref{fig:attentions_noconcepts}). This indicates that the E2E SLU model seems to learn that the concept symbols are more important than the other character symbols. This result led us to the research question whether E2E SLU benefits from \textit{acoustic} information in order to predict concepts and intents and if correlations exist between prosody on the one hand and the prediction of slot and intent labels on the other hand.
\begin{figure}[hbt]
\centering
\includegraphics[width=1\linewidth]{./pitchenergy.png}\\
\caption{Pitch and energy for the utterance ``ulysse lower the blind in the room''}
\label{fig:pitchenergy}
\end{figure}
\subsection{Background noise}
\begin{table}[th]
\caption{ASR and SLU performances - voice commands with background noise}
\label{tab:Performances_ASR_Kaldi_ESPnet_NENOISE200}
\centering
\begin{tabular}{|l|cc|cc|r|}
\hline
\textbf{Background} & \multicolumn{2}{c|}{\textbf{Pipeline SLU}}& \multicolumn{2}{c|}{\textbf{E2E SLU}}& \textbf{\#}\\
\textbf{noise} & \textbf{WER (\%)} & \textbf{CER (\%)} & \textbf{WER (\%)}& \textbf{CER (\%)}& \textbf{utt.}\\
\hline
\textbf{All:} & & & & &\\
\hspace{0.2cm} \textbf{M\&F} &38.58&57.80&57.53&\bf 39.73&204\\
\hspace{0.2cm} \textbf{M} & 30.78 & 54.98 & 52.74 & \bf 34.05 & 152\\
\hspace{0.2cm} \textbf{F} & 58.72 & 65.06 & 69.87 & \bf 54.38 & 52\\
\hline
\textbf{Vacuum} & & & & &\\
\textbf{cleaner:} & & & & &\\
\hspace{0.2cm} \textbf{M\&F} &57.00&77.62&59.00&\bf 53.79&108\\
\hspace{0.2cm} \textbf{M} & 46.64 & 75.46 & 54.23 & \bf 47.82 & 72\\
\hspace{0.2cm} \textbf{F} & 77.75 & 81.94 & 71.31 & \bf 65.74 & 36\\
\hline
\textbf{Radio\&TV:} & & & & & \\
\hspace{0.2cm} \textbf{M\&F} &20.31&35.77&56.31&\bf 25.94&75\\
\hspace{0.2cm} \textbf{M} & 18.08 & 36.64 & 53.30 & \bf 20.90 & 58\\
\hspace{0.2cm} \textbf{F} & 27.96 & \bf 32.84 & 66.57 & 43.13 & 17\\
\hline
\textbf{Fan:} & & & & & \\
\hspace{0.2cm} \textbf{M\&F} &14.74&32.69&65.36&\bf 26.92&21\\
\hspace{0.2cm} \textbf{M} & 13.18 & \bf 24.99 & 62.84 & 38.88 & 17\\
\hspace{0.2cm} \textbf{F} & 18.27 & 49.99 & 71.03 & \bf 0 & 4\\
\hline
\end{tabular}
\end{table}
The VocADom@A4H test set was recorded in the presence of background noise such as radio, television, etc. Some of these noise classes have high-frequency components, for example those of vacuum cleaners. We randomly selected utterances and annotated them with background noise labels until about 10\% of the test set utterances with voice commands containing background noise (204 utterances) were available.
Table \ref{tab:Performances_ASR_Kaldi_ESPnet_NENOISE200} shows that E2E SLU outperforms pipeline SLU (\textit{All}, CER), especially for utterances with high-pitched vacuum cleaner background noise. Although the pipeline Kaldi ASR module outperforms ESPnet ASR in general, the performance of both models is closer for utterances with vacuum cleaner background noise. This is the case for female speakers (\textit{F}) as well as for male speakers (\textit{M}). These results can be related to those of \citep{qian2016very}, according to which an E2E ASR system is more robust in processing noisy speech thanks to the CNN (Convolutional Neural Network) layers in its architecture. However, here it is not the ASR task that benefits from this but the SLU task. It also seems that the Kaldi module exhibits a much larger WER variation between genders than ESPnet. However, this behavior is reversed for the CER (since the pipeline NLU only processes words), suggesting that the E2E SLU is influenced by acoustic features late in the prediction process, which leads to this gender bias.
\subsection{Pitch and energy}
Pitch and energy are known to impact ASR performances. In \citep{goldwater2010words}, the prosodic characteristics related to an increased ASR WER were studied, and the authors concluded that \emph{pitch} and intensity have an impact at extreme values.
Pitch is a perceptual, frequency-related attribute of a sound wave and cannot be measured directly.
However, two tones can be considered to have the same pitch if they share the same F0 values. Regarding speech, rising and falling pitch contours help define prosody \citep{plack2005overview}. According to \citep{stehwien2016exploring} and \citep{su2018perceivable}, most of the words related to concepts also carry a pitch accent and can point to the most salient semantic information; in order to verify this, these studies examined the correlation between pitch variations and concepts. They inspired us to analyze the relationships between pitch, energy and SLU performances.
The study was performed at utterance level on the test corpus VocADom@A4H. For each utterance, the ASR performance (WER) was computed for both pipeline and E2E models and the mean F0 and mean energy were computed using the \textit{Praat} software.
Two F0 values were computed using two settings: \begin{itemize} \item a band-pass filter between 75 and 600Hz, typically containing male and female speaker F0 values; \item no filter.\end{itemize}
Hence, for the F0 values without filter, high-pitched background noise values are also included.
As the target of this study is the extraction of concepts and intents, the correlations were computed both for the entire test set and, for comparison, for the 2612 test set utterances containing a voice command.
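As an illustration, these per-utterance measures can be obtained with the \texttt{parselmouth} Python bindings to \textit{Praat}. The following sketch assumes that each utterance is available as a WAV file; since \textit{Praat} computes F0 within an explicit search range rather than with an actual band-pass filter, the widened range used below to approximate the ``no filter'' setting is an assumption:
\begin{small}
\begin{lstlisting}[language=Python]
# Sketch: mean F0 (band-passed and unrestricted) and mean
# energy per utterance, via the parselmouth Praat bindings.
import numpy as np
import parselmouth

def utterance_measures(wav_path):
    snd = parselmouth.Sound(wav_path)
    # F0 restricted to 75-600 Hz (typical male/female range).
    pitch_bp = snd.to_pitch(time_step=0.01,
                            pitch_floor=75.0, pitch_ceiling=600.0)
    # Widened search range approximating the "no filter" case,
    # so high-pitched background noise is included as well.
    pitch_nf = snd.to_pitch(time_step=0.01,
                            pitch_floor=30.0, pitch_ceiling=2000.0)
    intensity = snd.to_intensity()
    def mean_f0(pitch):
        f0 = pitch.selected_array['frequency']
        return float(np.mean(f0[f0 > 0]))  # drop unvoiced frames
    return (mean_f0(pitch_bp), mean_f0(pitch_nf),
            float(np.mean(intensity.values)))
\end{lstlisting}
\end{small}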
\begin{table}[th]
\caption{Pearson and Spearman correlations energy and pitch with WER of pipeline ASR (Kaldi) and E2E ASR (ESPnet) }
\label{tab:Correlations_Pearson_Spearman_WER_RAP_CER_SLU_E2E}
\centering
\begin{tabular}{|l|cc|cc|cc|}
\hline
\textbf{Correlation} & \multicolumn{2}{c|}{\textbf{Energy}}& \multicolumn{2}{c|}{\textbf{Pitch (no filter)}}& \multicolumn{2}{c|}{\textbf{Pitch (75-600Hz)}} \\
& \textbf{Kaldi}& \textbf{ESPnet} & \textbf{Kaldi} & \textbf{ESPnet} & \textbf{Kaldi} & \textbf{ESPnet}\\
\hline
\hline
\textbf{full dataset}: & & & & & &\\
Pearson (r) & \textbf{-0.22$^{**}$} & -0.07$^{**}$ & -0.02 & 0.003 & 0.05$^{**}$ & 0.007 \\
Spearman (r$_s$) & -0.14$^{**}$ & -0.08$^{**}$ & -0.07$^{**}$ & -0.03$^*$ & 0.01 & 0.002 \\
\hline
\hline
\textbf{voice commands}: & & & & & &\\
\textbf{only}: & & & & & &\\
Pearson (r) & 0.04$^*$ & 0.04$^*$ & \textbf{0.23$^{**}$} & 0.18$^{**}$ & 0.09$^{**}$ & 0.06$^*$ \\
Spearman (r$_s$)& 0.05$^*$ & 0.06$^{**}$ & 0.10$^{**}$ & 0.19$^{**}$ & 0.06$^{**}$ & 0.08$^{**}$ \\
\hline
\end{tabular}
$^* p<0.05$; $^{**} p<0.01$
\end{table}
Table \ref{tab:Correlations_Pearson_Spearman_WER_RAP_CER_SLU_E2E} shows that energy is significantly correlated with the Word Error Rate, but has a substantial impact only for Kaldi in the full data-set case. In all other cases, in particular for voice commands, the correlation is negligible.
Regarding pitch, it is more strongly correlated with WER for utterances containing voice commands.
This is especially the case for pitch values computed without a filter.
We further analyzed this effect in three steps:
\begin{enumerate}
\item For the voice command utterances, the F0 values between 75 and 600Hz were computed every 0.01 seconds.
\item Timestamps for the word boundaries of the reference and hypothesis transcripts, with symbolic concept labels, were generated by applying a forced alignment. The timestamps of the F0 values on the one hand and of the word boundaries on the other hand were then aligned.
\item Finally, for both pipeline and E2E SLU, we counted the reference concept labels carrying the highest F0 value in the utterance that were correctly predicted in the hypothesis (a minimal sketch of this procedure is given below).
\end{enumerate}
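The sketch below illustrates steps 1--3 for a single utterance; it assumes that the forced alignment has already produced a list of \texttt{(word, start, end, concept)} tuples (all names are illustrative), and it returns the word carrying the highest F0 together with its concept label, if any:
\begin{small}
\begin{lstlisting}[language=Python]
# Sketch: align the F0 track with word boundaries and find
# the word carrying the highest F0 value in the utterance.
import parselmouth

def highest_f0_word(wav_path, aligned_words):
    # aligned_words: (word, start_s, end_s, concept_or_None)
    # tuples obtained from the forced alignment.
    pitch = parselmouth.Sound(wav_path).to_pitch(
        time_step=0.01, pitch_floor=75.0, pitch_ceiling=600.0)
    times = pitch.xs()
    f0 = pitch.selected_array['frequency']
    best, best_f0 = None, 0.0
    for word, start, end, concept in aligned_words:
        mask = (times >= start) & (times <= end) & (f0 > 0)
        if mask.any() and f0[mask].max() > best_f0:
            best_f0 = float(f0[mask].max())
            best = (word, concept)
    return best  # (word, concept) carrying the highest F0
\end{lstlisting}
\end{small}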
The frequency list in Table \ref{tab:Freq_words_concepts_highest_F0_per_utt} shows that 3 concept labels (\texttt{device}, \texttt{action}, \texttt{location-room}) are among the 10 most frequent words and concepts carrying the highest F0 value per utterance. 47.79\% of all voice commands in the test set (1222 out of 2557) contain a concept consisting of words with the highest F0 values per utterance. It turns out that speakers, when talking to the home automation system, make more effort in uttering commands and thus speak with an increased intonation, which results in higher F0 values for the words belonging to the concepts of the command.
\begin{table}[!ht]
\caption{Frequency of words and associated concepts with highest F0 value per utterance over 2557 voice commands of the test set}
\label{tab:Freq_words_concepts_highest_F0_per_utt}
\centering
\begin{scriptsize}
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{Frequency} & \textbf{Word} & \textbf{Frequency} & \textbf{Word}\\
\hline
538&\textbf{\} (device)}&62&vocadom\\
509&\textbf{\^{} (action)}&59&est-ce\\
163&cirrus&51&hé\\
160&dis&47&hestia\\
131&ulysse&43&chanticou\\
131&\textbf{$>$ (location-room)}&39&allô\\
105&téraphim&37&que\\
84&ichéfix&35&messire\\
72&minouche&32&, (device-setting)\\
\hline
\end{tabular}
\end{scriptsize}
\end{table}
For the 1222 voice commands (reference utterances) whose highest pitch value falls on a concept, we calculated whether these concepts were correctly predicted by the pipeline and E2E SLU systems. As shown in Table \ref{tab:ref_voice_command_concepts_highest_F0_in_hyp}, a slightly higher percentage of concepts is retrieved by the E2E model as compared to the pipeline SLU model. Although the difference is small, this might indicate that the E2E SLU is slightly less impacted by pitch effects than the pipeline SLU.
\begin{table}[!hb]
\caption{Reference voice command concepts with highest F0 values in hypothesis transcriptions}
\label{tab:ref_voice_command_concepts_highest_F0_in_hyp}
\centering
\begin{tabular}{|l|l|}
\hline
\textbf{SLU model} & \textbf{concept ref. in hyp.(\%)} \\
\hline
Pipeline & 74.22\\
E2E & \textbf{75.00}\\
\hline
\end{tabular}
\end{table}
\subsection{Impact of MFCC and fbank features}
Another way to check the impact of pitch on ASR and SLU performance is to remove pitch variation from the test set utterances. To this end, average F0 values per speaker were calculated. Using Praat, all test utterances of each speaker were resynthesized with the resulting average F0 as a flat pitch. As a next step, an E2E ASR model was also trained using \textit{MFCC} features instead of \textit{fbank} features, in order to compare the performance of ESPnet with the same acoustic features as used for Kaldi.
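The resynthesis can be reproduced along the following lines with the \texttt{parselmouth} interface to \textit{Praat} (a sketch; the per-speaker mean F0 is assumed to be precomputed):
\begin{small}
\begin{lstlisting}[language=Python]
# Sketch: resynthesize an utterance with a flat pitch contour
# set to the speaker's average F0 (Praat manipulation objects).
import parselmouth
from parselmouth.praat import call

def flatten_pitch(wav_path, speaker_mean_f0, out_path):
    snd = parselmouth.Sound(wav_path)
    manip = call(snd, "To Manipulation", 0.01, 75, 600)
    # Replace the pitch tier by a constant tier at the mean F0.
    tier = call("Create PitchTier", "flat", 0, snd.duration)
    call(tier, "Add point", snd.duration / 2, speaker_mean_f0)
    call([tier, manip], "Replace pitch tier")
    flat = call(manip, "Get resynthesis (overlap-add)")
    flat.save(out_path, "WAV")
\end{lstlisting}
\end{small}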
Table \ref{VocADom@A4H_Performances_RAP_SLU_pitch_removal}
shows that, for the ASR module of the pipeline SLU (\textit{Pipeline ASR}), the performance of Kaldi (MFCC) on data \textit{without} pitch variation is superior to its performance on data \textit{with} pitch variation. However, \textit{E2E ASR} and \textit{E2E SLU} performances on data \textit{with} pitch variation are superior to those on utterances \textit{without} pitch variation, especially with fbank features.
Finally, utterances from male (\textit{M(1)}) and female (\textit{F(1)}) speakers were evaluated separately. These utterances were compared with those of male (\textit{M(2)}) and female (\textit{F(2)}) speaker samples with a pitch \textit{above} the per-speaker average. The results show that performances for \textit{M(2)} and \textit{F(2)} with \textit{deletion} of pitch variation are significantly worse than with pitch variation. This indicates that the E2E SLU is more robust to pitch variation.
\begin{table}[th]
\caption{ASR and SLU performances (\%), deletion of pitch variation}
\label{VocADom@A4H_Performances_RAP_SLU_pitch_removal}
\centering
\begin{tabular}{|l|l|cc|cc|}
\hline
\textbf{Model} & \textbf{Acoust.} &\multicolumn{2}{c|}{\textbf{No pitch var.}}& \multicolumn{2}{c|}{\textbf{Pitch var.}} \\
& \textbf{param.} & \textbf{WER} & \textbf{CER } & \textbf{WER}& \textbf{CER} \\
\hline
\textbf{Pipeline ASR:} & & & & & \\
\textbf{Kaldi } & MFCC & \textbf{21.48} & - & 22.92 & - \\
\hline
\textbf{E2E ASR:} & & & & & \\
\textbf{ESPNet ASR:}
& MFCC & 49.90 & - & 47.60 & - \\
& fbank & 50.20 & - & \textbf{46.50} & - \\
\hline
\textbf{E2E SLU:} & fbank & - & 40.02 & - & \textbf{32.12} \\
\textbf{M(1).} & fbank & - & 41.94 & - & 32.90 \\
\textbf{F(1).} & fbank & - & 36.58 & - & 30.74 \\
\textbf{F0 $>$ F0 avg.} & & & & & \\
\textbf{M(2).} & fbank & - & 53.32 & - & 42.40 \\
\textbf{F(2).} & fbank & - & 37.89 & - & 32.36\\
\hline
\end{tabular}
\end{table}
\section{Grammatical Analysis of E2E SLU prediction} \label{sec:Symbolic_impact_on_E2E_SLU_prediction}
Although some pipeline approaches to SLU are able to handle OOV words and syntactic variation, the E2E approach builds its own internal representation of utterances and has character string generation as its final target. This model should thus be more robust to OOV words than classical ASR/NLU modules. To assess whether the E2E SLU model handles grammatical variation better than a pipeline SLU system, we generated new input stimuli for which we controlled the \textit{linguistic variation}, in particular at the \textit{lexical} and \textit{syntactic} levels. This is detailed in the two following sections.
\subsection{Out of vocabulary words (OOV)}
In order to measure the impact of an increased OOV rate, we gradually replaced the test set vocabulary of some specific concepts with words that did not appear in the training set. To measure an increasing difficulty, this experiment was performed in 4 steps (a sketch of the substitution procedure is given after the example below): \begin{enumerate} \item Step 1: {\tt action} and {\tt device-setting}; \item Step 2: Step 1 and {\tt device}; \item Step 3: Step 2 and {\tt location}; \item Step 4: Step 3 and {\tt key-words}. \end{enumerate}
The following example shows a voice command (``vocadom turn on the kettle'') with symbolic intent and concepts \textit{before} (1) and \textit{after} (2) insertion of OOV words (Step 4):
\begin{small}
\begin{lstlisting}
(1) @ ah vocadom euh ^allume^ }la bouilloire} @
(ah vocadom uh turn on the kettle)
(2) @ ah ursule euh ^enclenche^ }la bouillotte} @
(ah ursule uh switch on the kettle)
\end{lstlisting}
\end{small}
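The substitution itself amounts to a simple dictionary replacement over the concept-labeled tokens, applied cumulatively step by step; a sketch (the replacement table shown is only illustrative):
\begin{small}
\begin{lstlisting}[language=Python]
# Sketch: replace in-vocabulary concept words by OOV variants.
# The mapping is illustrative; the real tables cover the concept
# classes added at each step (action, device, location, ...).
OOV_MAP = {
    "allume": "enclenche",       # action   (Step 1)
    "bouilloire": "bouillotte",  # device   (Step 2)
    "vocadom": "ursule",         # key-word (Step 4)
}

def substitute_oov(tokens):
    return [OOV_MAP.get(tok, tok) for tok in tokens]
\end{lstlisting}
\end{small}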
Table \ref{tab:OOV_setup} shows that substituted words in step 4 represent 26.15 \% of the total number of word tokens (31k) and 3.48 \% of the total number (1462) of word types (vocabulary).
\begin{table}[th]
\caption{Vocadom@A4H - ratio OOV total words}
\label{tab:OOV_setup}
\centering
\begin{scriptsize}
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Substitutions} & \textbf{\#Word} & \textbf{\#Words} & \textbf{(\%) Word} & \textbf{(\%) Total} \\
& \textbf{Type} & & \textbf{Type} & \textbf{Words} \\ \hline
Step 1 & 22 & 1785 & 1.50 & 5.72 \\
Step 2 & 34 & 4276 & 2.32 & 13.70 \\
Step 3 & 41 & 5516 & 2.80 & 17.68 \\
Step 4 & 51 & 8160 & 3.48 & 26.15 \\
\hline
\end{tabular}
\end{scriptsize}
\end{table}
Once the test set had been altered with OOV words, the speech utterances were generated with the same TTS tool as used for the artificial corpus generation in Section~\ref{sec:synthdata}. The resulting utterances were then fed to the two SLU models. However, the E2E SLU was trained on data containing artificial speech, which was not the case for the pipeline SLU approach. Hence, for a fair comparison, the resulting E2E ASR model was used as the ASR front-end for the pipeline SLU.
\begin{table}[!ht]
\caption{Impact of OOV on SLU performances (\%)}
\label{tab:Evaluation_VocADom@A4H_SLU_pipeline_OOV}
\centering
\begin{tabular}{|l|cc|cc|ccc|}
\hline
\textbf{Model} &\multicolumn{4}{c|}{\textbf{Pipeline}}&\multicolumn{3}{c|}{\textbf{E2E}} \\
&\multicolumn{2}{c|}{\textbf{NLU}}& \multicolumn{2}{c|}{\textbf{ASR+NLU}} & \multicolumn{3}{c|}{\textbf{SLU}} \\
& \textbf{CER} & \textbf{F1} & \textbf{CER}& \textbf{F1}& \textbf{WER} & \textbf{CER}& \textbf{F1}\\
\hline
Compl. real & 33.78 & 85.51 & 36.24 & 84.21& 46.50 & 32.12 & 74.57\\
\hline
Compl. synth. & - & - & 37.07 & 83.34 & 39.30 & 25.00 & 53.70 \\
\hline
\textbf{OOV:} & & & & & & & \\
Step 1 & 37.75 & 81.50 & 45.43 & 79.56& 44.00 & 30.75 & 50.39\\
Step 2 & 53.77 & 72.39 & 62.03 & 72.48 & 53.20 & 46.75 & 50.26\\
Step 3 & 63.01 & 69.58 & 68.07 & 70.29 & 52.50 & 50.89 & 51.59\\
Step 4 & 90.45 & 63.66 & 86.44 & 65.03 & 55.90 & 58.80 & 51.43 \\
\hline
\textbf{Diff.} & 56.67 & 21.85 & 49.37 & 18.31& \textbf{16.6} & \textbf{33.8} & \textbf{2.27} \\
\hline
\end{tabular}
\end{table}
Table \ref{tab:Evaluation_VocADom@A4H_SLU_pipeline_OOV} shows ASR and SLU performances according to the rate of OOV words. The \textit{Compl. real} line corresponds to the original Vocadom@A4H test set, while the \textit{Compl. synth.} line corresponds to the test set whose speech has been synthesized through TTS. Using the E2E ASR for the pipeline SLU did increase the CER (ASR+NLU column) but had a small impact on the intent prediction. Overall, for all models, performances for concept (\textit{CER}) and intent prediction (\textit{F1}-score) deteriorate with increased OOV rates. The differences (\textit{Diff.}) between \textit{Compl. synth.} and \textit{Step 4} are much smaller for the E2E model than for the pipeline SLU model for both concept and intent prediction. This suggests that the E2E model is more robust to OOV words.
It must be noticed that the intent prediction of the E2E model decreases dramatically with synthetic speech. This is due to an increased error rate for the {\tt None} intent, which consisted only of real speech in the training data, whereas we used synthetic evaluation data for the OOV impact evaluation. This is further evidence that acoustic features play a role up to the higher decision stages of the E2E model.
\subsection{Syntactic variation}
In this section, we measure the robustness of both SLU models for syntactic variability, predicting concepts and intents on test data with progressive syntactic variability in two steps:\begin{enumerate}
\item Step 1, 32 verbs belonging to the {\tt action} concept were replaced by more complex syntactic constructions;
\item Step 2, substitutions in Step 1 have been augmented by disfluencies surrounding the words of 18 labeled concepts of {\tt device}.
\end{enumerate}
The following example ('Vocadom turn on the kettle') shows a voice command containing an intent and symbolic concepts from the test set \textit{before} (1) and \textit{after} (2) insertion of more complex syntactic constructions and disfluencies (Step 2):
\begin{small}
\begin{lstlisting}
(1) @ vocadom euh ^allume^ }la bouilloire} @
(vocadom uh turn on the kettle)
(2) @ vocadom euh pourrais-tu ^allumer^ la la }bouilloire} @
(vocadom uh could you turn on the the kettle)
\end{lstlisting}
\end{small}
We also generated text-to-speech based on the resulting modified test sets, which we evaluated in the same way as for the OOV test setup, as outlined in the previous section.
\begin{table}[!ht]
\caption{Impact of syntactic variation on SLU performances (\%)}
\label{tab:Evaluation_VocADom@A4H_SLU_E2E_OOV}
\centering
\begin{tabular}{|l|cc|cc|ccc|}
\hline
\textbf{Model} &\multicolumn{4}{c|}{\textbf{Pipeline}}&\multicolumn{3}{c|}{\textbf{E2E}} \\
&\multicolumn{2}{c|}{\textbf{NLU}}& \multicolumn{2}{c|}{\textbf{ASR+NLU}} & \multicolumn{3}{c|}{\textbf{SLU}} \\
& \textbf{CER} & \textbf{F1} & \textbf{CER}& \textbf{F1}& \textbf{WER} & \textbf{CER}& \textbf{F1}\\
\hline
Compl. real & 33.78 & 85.51 & 36.24 & 84.21& 46.50 & 32.12 & 74.57\\
\hline
Compl. synth. & - & - & 37.07 & 83.34 & 39.30 & 25.00 & 53.70 \\
\hline
\textbf{Synt. var.:} & & & & & & & \\
Step 1 & 38.41 & 81.06 & 50.40 & 77.45 & 44.40 & 16.29 & 52.59\\
Step 2 & 38.34 & 81.19 & 52.75 & 76.36 & 50.90 & 22.07 & 49.09\\
\hline
\textbf{Diff.} & 4.56 & 4.32 & 15.68 & 6.98& \textbf{11.60} & \textbf{2.93} & 4.61\\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:Evaluation_VocADom@A4H_SLU_E2E_OOV}
shows that the differences (\textit{Diff.}) between the performances of concept and intent prediction for test data with the complete original syntactic structure (\textit{Compl. synth.}) on the one hand, and test utterances with modified syntactic structure (\textit{Synt. var., Step 2}) on the other hand, are smaller for the E2E model than for the sequential SLU model. This again indicates a greater robustness of the E2E model to increased syntactic variation.
Table \ref{tab:Evaluation_VocADom@A4H_SLU_E2E_OOV} also shows that the performance of the E2E model for concept prediction improves with a more complex syntax. This may be due to an average sentence length of 15 words for the artificial corpus utterances, while the average sentence length for the (original) test set utterances is only 5. The increased syntactic variation also increases the length of the evaluation utterances, which consequently approaches the average length of the artificial corpus utterances.
\begin{table}[th]
\caption{Impact of utterances length in the real test set case}
\label{tab:SLU_performances_on_longest_utterances}
\centering
\begin{tabular}{|l|cc|cc|}
\hline
\textbf{Test set} & \multicolumn{2}{c|}{\textbf{Pipeline SLU}}&
\multicolumn{2}{c|}{\textbf{E2E SLU}}\\
\textbf{} & \textbf{Concept} & \textbf{Intent} & \textbf{Concept}& \textbf{Intent}\\
\textbf{} & \textbf{CER (\%)} & \textbf{F1 (\%)} & \textbf{CER (\%)}& \textbf{F1 (\%)}\\
\hline
\hline
\textbf{Compl. real} & 36.24 & 84.21 & 32.12 & 74.57\\
6747 utterances & & & &\\
\hline
\textbf{Compl. real $>$ 7 words} & 38.62 & 80.01 & 33.18 & 68.23\\
1461 utterances & & & & \\
\hline
\textbf{Diff.} & \bf 2.38 & 4.2 & {\bf1.06} & 6.34\\
\hline
\end{tabular}
\end{table}
To test whether such robustness of the E2E SLU model is observable on the \textit{real} test data (and not only on synthetic data), we extracted the more complex utterances from the test set based on their length. Since
the average sentence length of the test set is seven words, the 1461 longer utterances of the original test set (out of 6747 total utterances) were labelled more complex. Although a higher number of words does not necessarily mean more complex sentences, the two are nevertheless correlated in the voice command case. The performances are shown in Table~\ref{tab:SLU_performances_on_longest_utterances}. The first row recalls the results on the whole test set while the second row shows performances computed on long utterances only. As with the synthetic dataset, the E2E SLU concept prediction error increases far less with the length of the sentences than that of the pipeline SLU (Table \ref{tab:SLU_performances_on_longest_utterances}, Diff.). Furthermore, similarly to Table~\ref{tab:eval_ESPNET_transfert_notransfert_slu}, we can see that the E2E SLU could not outperform the pipeline SLU intent prediction performances.
The reader can find a summary of the experiments measuring the acoustic and grammatical impact on the E2E SLU model compared with the pipeline SLU model in Table~\ref{tab:Evaluation_aperçu_results}.
\section{Discussion}\label{sec:Discussion}
To deal with the lack of data, our strategy consisted in generating artificial utterances for voice commands. To deal with the bottleneck of the distance between real speech test data and artificial speech training data, we applied a \textit{transfer learning} approach. An initial model was pre-trained on a large out-of-domain data set composed of real speech, and this model was then fine-tuned on the artificial but in-domain speech. This allowed us to take advantage of a large non-domain-specific data set and a small domain-specific data set. This approach outperformed the pipeline SLU for concept prediction. Our data augmentation technique is close to \citep{li2018training,lugosch2019using}, who reported better concept prediction performances with a real speech model augmented with artificial speech than with an acoustic model trained only on real speech. \cite{li2018training} reported optimal performances for their E2E ASR model using an acoustic model trained on 50\% synthetic speech data and 50\% real speech.
On the other hand, \textit{intent} prediction did not sufficiently benefit from transfer learning. A possible explanation is that ESPnet, being an ASR tool, functions at a level that is too local to be able to perform the global abstraction required for intent prediction.
Regarding the analysis of performances for the pipeline SLU model, the performance differences between the NLU module and the complete sequential SLU system often remain high, despite the strategies used to reduce them. Since the current state of the art indicates that good SLU performance requires good ASR performance, we compared the pipeline and E2E SLU approaches on an ASR task learned on equivalent data. Our experiments show that the WER of the E2E ASR is significantly higher than that of the pipeline ASR module. However, the E2E SLU approach shows better SLU performance for concept prediction, using the same tool (ESPnet) as the E2E ASR. On top of that, our correlation tests in Section \ref{subsec:ASR_impact_on_E2E_SLU} showed that perfect ASR is not necessary to obtain good E2E SLU performance. It is, however, essential in the case of a pipeline approach, as we have demonstrated in \citep{desot2019towards} for intent prediction and in \citep{desot2019slu} for concept prediction. \textbf{This answers our first question and confirms the state of the art: the E2E model reduces the cascade of errors effect.}
The E2E approach infers concepts and intents conveyed by an utterance directly from the acoustic signal. Our experiments in Section~\ref{sec:Acoustic_impact_on_E2E_SLU_prediction} reveal that prosodic information allows the model to point to the most important semantic information. There are indications that higher pitch values improve the performance of the E2E SLU approach, which turns out to be more robust to noisy speech as compared to the pipeline model. This indicates that the convolutional network of the E2E SLU model seems to benefit more from the correlated and richer filter-bank features than from the MFCC features used by the pipeline ASR model. \textbf{This answers our second research question by showing that the E2E SLU model uses prosodic information to infer concepts and is able to learn a feature representation more robust to noise than the pipeline model.}
As our target users are senior adults who tend to deviate easily from a fixed grammar of voice commands, we tested the SLU approaches with an increased amount of OOV words and inserted more syntactic variation into the test corpus utterances. In these two cases, the E2E SLU model proved to be more robust than the pipeline approach, while both models had been exposed to the same data to learn the concepts to extract (same NLU training set). \textbf{This answers our third research question. The E2E SLU model is indeed more robust to syntactic variation than the pipeline model.}
\section{Conclusion and future work}\label{sec:Conclusion}
Our answer to the bottleneck of the cascade of errors effect between the ASR and NLU modules of a pipeline SLU model was an E2E SLU model that extracts intents and concepts directly from the signal. This approach based on deep neural networks allowed us to avoid the \textit{cascade of errors} by performing a joint learning of these two tasks in one and the same model. By comparing our E2E SLU approach with a pipeline baseline approach, composed of a state-of-the-art ASR system and an NLU module, trained on data specific to the home automation field, we were able to show that the E2E SLU approach gives the best performance in terms of concept prediction. A possible solution to improve E2E SLU intent prediction is adding a decoder to the ESPnet architecture in order to train and predict concepts and intents jointly. A similar multi-task learning has already been applied for NLU in \citep{Liu2016}.
We can confirm one of the conclusions of the study of \cite{stehwien2016exploring} and \cite{su2018perceivable} that prosodic information can point to the most important semantic information. We have shown that the E2E SLU model exploits prosodic information which favours its performance in predicting intents and in particular concepts. On top of that the E2E SLU model shows more robustness as compared to the pipeline approach for processing target users syntactic variation and an increased OOV rate.
A transfer learning allowed us to decrease the \textit{acoustic} distance between the artificial speech of the training data and the real speech of the test set. To further decrease this distance, speech synthesis based on a larger number of voices could be generated and added to the training data. Another possibility is training a neural speech synthesis model such as \textit{Tacotron} \citep{wang2017tacotron, li2018training} for one or more speakers of the real SWEET-HOME corpus. The resulting model could then be used to generate new synthetic utterances that would be closer to the reference corpus.
One of the strengths of the E2E SLU approach is its processing of noisy speech data, which makes this approach very suitable in a realistic smart home situation where voice commands must be extracted from utterances with various background noises. In addition to background noise, residents are far away from the microphones. This leads to acoustic signals distorted by reverberation, depending on the acoustics of the room. To better assess and understand the performance of the E2E SLU system in such a realistic situation, as a next step, the distant-microphone recordings of the test corpus should be tested. These are the recordings made by the four arrays of 4 microphones integrated into the ceiling of the Amiqual4Home smart home. The reverberation issue could be addressed by augmenting our training data with \textit{room impulse response} (RIR) data. However, the acquisition of real RIR data is not trivial. \cite{ko2017study} show that acoustic models trained on simulated RIR data are competitive with those trained on real RIR data. Therefore, a technique for augmenting our training data with simulated RIR data could be explored.
\section*{Acknowledgements}\label{sec:Acknowledgements }
This work is part of the VOCADOM project funded by the French National Research Agency (Agence Nationale de la Recherche) / ANR-16-CE33-0006. It was also partially supported by MIAI@Grenoble-Alpes (ANR-19-P3IA-0003).
\bibliographystyle{model5-names}
\section*{Introduction}
A well-known question relating the geometric and arithmetic properties of symmetric domains is the following expectation of Oort (posed in \cite{oortconj}, see \cite{moor}): for $g$ sufficiently large, does the moduli space ${\mathcal A}_g$ of complex principally polarized abelian varieties contain any Shimura subvariety contained generically in the Torelli locus~${\mathcal J}_g$ --- the locus of Jacobians of smooth genus $g$ curves? A finite number of examples of Shimura curves contained in the Torelli locus have been constructed by de Jong, Moonen, Mumford, Noot, Oort, and others, most of them arising as Galois covers of $\mathbb{P}^1$ with varying branch points --- see the survey \cite{moor} for the history of the problem, details, and references; see also \cite{fred2} for more examples arising as non-abelian Galois covers.
From the other direction, a lot of work has been devoted to proving that for $g$ sufficiently large the Torelli locus contains no special subvarieties. The latest progress in this direction has been made by Chen, Lu and Zuo, who proved that there do not exist Shimura curves of Mumford type, or of maximal variation, or contained in the locus of hyperelliptic Jacobians, for some explicit low bounds on $g$, see \cite{luzuo,chenluzuo}.
Our modest contribution in this note is an explicit construction --- by writing down families of period matrices --- of infinitely many Shimura curves contained in ${\mathcal J}_4$.
\begin{Thm}\label{thm:main}
For a dense set of points $z_1,z_2$ in a 2-dimensional complex ball, the one-parameter family of $4\times 8$ period matrices, given by
$$(Z_1\,|\, Z_2) = \left(\smallmatrix 3\tau & 0 &\ 3\tau +3 & 0 \\
0 &\ Z_1^{(3)}(z_1,z_2) & 0 &\ Z_2^{(3)}(z_1,z_2) \endsmallmatrix\right)B^{-1}$$
where $\tau\in\mathbb{H}$ is the parameter of the family, $B$ is given by~\eqref{eq:STbasechange}, and $Z_1^{(3)},Z_2^{(3)}$ are given by Proposition \ref{prop:genus3}, defines a Shimura curve contained generically in the locus of Jacobians of smooth curves of genus~4.
\end{Thm}
Our examples include the Shimura-Teichm\"uller curve of genus~4
(\cite{moellerST}) obtained for the values
$$z_1 \= \tfrac{-2\zeta^3 + \zeta^2 + \zeta -3}2, \qquad z_2 \= 3^{-1/4}
\, \tfrac{\zeta^3 -2\zeta^2 +1}2.$$
These Shimura curves are not of maximal degeneration or of Mumford type in the sense of \cite{luzuo}.
Unlike our purely analytic construction of infinitely many Shimura curves contained in the locus of hyperelliptic Jacobians of genus three \cite{grmo}, the construction in this note is geometric, using $\mathbb{Z}/3$ Galois covers of elliptic curves and the associated Prym map studied by Pirola (\cite{pirola}).
\medskip
We announced our results at several talks starting in February 2014, including at Oberwolfach, Paris Jussieu, and Roma Tre. At the last stage of preparing our manuscript, the preprint of Frediani, Penegini, Porru \cite{frediani} appeared, which studies much more generally under what conditions families of covers of elliptic curves may lead to Shimura curves. They independently discovered our examples, and much more, while we are able to compute the period matrices explicitly using the Shimura description.
\subsection*{Notation}
We denote by $\moduli$ the moduli space of smooth complex curves,
and by $\AVmoduli$ the moduli space of complex principally polarized abelian
varieties (ppav). We denote by ${\rm Jac} \,:\moduli\to\AVmoduli$ the Torelli map
that sends the curve to its Jacobian. The main question we study here is
the existence of Shimura curves that are contained generically in
${\mathcal J}_g:={\rm Jac} \,(\moduli)$. A {\em Kuga fiber space} is an inclusion $j$ of
a $\mathbb{Q}$-algebraic group $G$ into $\Sp(2g,\mathbb{Q})$, such that an arithmetic
lattice $\Gamma \subset G_\mathbb{R}$ maps to $\Sp(2g,\mathbb{Z})$ and such that the
intersection of the maximal compact
subgroup $U(g) \subset \Sp(2g,\mathbb{R})$ with $j(G_\mathbb{R})$ is a maximal compact
subgroup $K_\mathbb{R}$ of $G_\mathbb{R}$. We identify a Kuga fiber space with the
subvariety of the moduli stack
$$ \Gamma \backslash G_\mathbb{R}/K_\mathbb{R} \hookrightarrow \Sp(2g,\mathbb{Z}) \backslash \Sp(2g,\mathbb{R})/\mathop{U}(g) = \AVmoduli.$$
A {\em Shimura subvariety} of $\AVmoduli$ is the image of such an inclusion
that moreover contains a CM point. One-dimensional Kuga fiber spaces
$\Gamma \backslash G_\mathbb{R}/K_\mathbb{R}$ are called Kuga curves, one-dimensional
Shimura varieties are called Shimura curves.
\par
We refer to \cite{moor}
for a detailed discussion of the history and importance of the problem,
its relationship with the Andr\'e-Oort and Coleman conjectures, the current
status, and further references.
\medskip
In Section~1 we review Pirola's construction and explain how it gives
examples of Shimura curves. In Section~2 we compute the period matrix of
the Shimura-Teichm\"uller curve in genus~4. explicitly. In Section~3 we use
this to compute the necessary isogeny data to compute the data
of the period matrices for all the Shimura curves that we construct ---
we hope that this section may be of independent use as a working guide
for applying Shimura's construction.
\section{Pirola's Hurwitz space of cyclic triple covers of elliptic curves by genus four curves}
In \cite{pirola} Pirola studied the space of genus four smooth curves $X$
that are Galois triple covers $p:X\to E$ of elliptic curves $E$, totally
ramified over three points. We denote by ${\mathcal H}$ the Hurwitz space
of such covers, which thus admits a finite map $\pi:{\mathcal H}\to
\moduli[1,3]$ and a generically injective map $\phi:{\mathcal H}\to\moduli[4]$.
Pirola's interest in ${\mathcal H}$ stemmed from disproving a conjecture of Xiao on the fixed parts of families of curves.
\par
Given a point $p: X \to E$ in ${\mathcal H}$ we define
$${\rm Prym}(X/E) := {\rm Ker}\Big({\rm Jac} \,(X) \to {\rm Jac} \,(E)=E\Big)$$
to be the (generalized) Prym variety of the covering. The polarization
on ${\rm Prym}(X/E)$ is given by the
restriction of the principal polarization from ${\rm Jac} \,(X)$, and is of
type $(1,1,3)$, so that we get the Prym map $P:{\mathcal H}\to\AVmoduli[{3,(1,1,3)}]$
to the moduli space of $(1,1,3)$-polarized abelian threefolds. Using deformation
theory, Pirola showed \cite[Section~2]{pirola} that the image $P({\mathcal H})$ is
two-dimensional. Since $\dim{\mathcal H}=3$, the generic fiber of the map $P$ is one-dimensional. These fibers are the Kuga curves we are looking for.
\par
\begin{Prop}
For any $A\in P({\mathcal H})\subset\AVmoduli[3,(1,1,3)]$ the one-parameter family of four-dimensional
Jacobians ${\rm Jac} \,(P^{-1}(A))$ is isogenous to the product of $A$ (as a trivial family over
a base) and the universal family of elliptic curves with a suitable level structure.
\end{Prop}
\begin{proof}
For any $(p:X\to E)\in P^{-1}(A)$ the Jacobian ${\rm Jac} \,(X)$ is isogenous to the
product of the fixed abelian threefold $A$ and the elliptic curve $E$, and
the elliptic curve varies as we vary along the family. Thus what remains
to be seen is that the family of elliptic curves that is thus obtained is
indeed the universal family of elliptic curves with some level structure, and not
some ramified cover of it.
\par
The family of curves $y^6=x(x+1)(x-t)$ studied in detail in the next section
is a member of this family for some~$A$; in fact, the period matrix will
be computed below (see~\eqref{Z3special}). This family is a Shimura curve
as discussed in \cite{moellerST} and computed again below, hence for this~$A$
the family of elliptic curves is as claimed.
\par
On the other hand, the isogeny that allows one to write ${\rm Jac} \,(X)$ as a product is
constant over all of $P({\mathcal H})$, since the possible isogenies are countable.
This implies that the uniformization of the elliptic curve does not vary
in the family over $P({\mathcal H})$.
\end{proof}
\par
\begin{Cor}
For any $A\in P({\mathcal H})$, the image of the one-parameter family of curves
$\phi(P^{-1}(A))\subset\moduli[4]$ under the Torelli map ${\rm Jac} \,$ is a
Kuga curve in $\AVmoduli[4]$ generically contained in the locus of Jacobians
of smooth curves.
\end{Cor}
\begin{proof}
This is obvious from the preceding proposition. Alternatively, one can
invoke the criterion from \cite{movizu} as the universal family of abelian
varieties over
such a curve is globally isogenous to the product of a trivial family and
the varying family of elliptic curves which reaches the Arakelov bound.
\end{proof}
\begin{Rem}
Note that we did not make any attempt to characterize how many times the closure of the Kuga curve $C_A$ constructed above
intersects the reducible locus, nor how many times $C_A$ intersects the hyperelliptic locus.
Consequently, we do not claim a priori that any of the curves $C_A$ reaches the Arakelov bound,
just that $C_A$ is the intersection of a Kuga curve in $\AVmoduli[4]$ with $\moduli[4]$.
\end{Rem}
\begin{Rem}
If $A$ is a CM point, then the Kuga curve constructed
above is in fact a Shimura curve --- indeed, just take the elliptic curve to be CM, and then the
abelian fourfold is isogenous to a product of a CM abelian threefold and a CM elliptic curve. Thus within the two-dimensional family of Kuga curves parameterized by $P({\mathcal H})$ we have an everywhere dense collection
of Shimura curves.
\end{Rem}
\section{The period matrix for the genus four Shimura-Teichm\"uller curve}
We recall that there is precisely one Shimura curve in genus~4 which happens
to be also a Teichm\"uller curve (\cite{moellerST}). It parametrizes the
family of curves given by equations
$$X_t := \lbrace y^6 = x(x+1)(x-t)\rbrace, $$
where $t$ is the parameter of the family.
In this section we compute explicitly the period matrix of
$X_t$ and then compute explicitly the isogeny between ${\rm Jac} \,(X_t)$
and ${\rm Prym}(X/E) \times E$.
\par
\begin{Prop}
In the bases of homology and 1-forms defined below, the period matrix of $X_t$
is given by
$$(Z_1 \, |\, Z_2) = \left( \begin{smallmatrix*}[r]
\tau & \tau &0 &\ -\tau - 1 &\, \ 1 &1 &0 &-1 \\
\zeta^2 - 1 & \ \ 1 &-\zeta^2 + 1 &1\,& 1 &-\zeta^2 &\zeta^2 &\ -\zeta^2 + 1\\
\zeta^3 - \zeta & \ -\zeta^3 & \ -2\zeta^3 + 2\zeta^2 + \zeta - 1 &\ \zeta^2 - \zeta \,&1 &\ \zeta^2 - 1 &\zeta^3 - \zeta^2 - 2\zeta + 2 &\zeta^2 \\
-\zeta^3 + \zeta &\zeta^3 &2\zeta^3 + 2\zeta^2 - \zeta - 1 &\zeta^2 + \zeta \,&1 &\zeta^2 - 1 &\ -\zeta^3 - \zeta^2 + 2\zeta + 2 &\zeta^2\\
\end{smallmatrix*} \right)$$
where $\zeta := \zeta_{12} = e^{2\pi i/12}$ is a primitive $12$-th root of unity
and where $t = \lambda(\tau)$ for $\lambda: \mathbb{H} \to \mathbb{P}^1\setminus\{0,1,\infty\}$.
\end{Prop}
\par
Our strategy is similar to that used by Guardia
for the Shimura-Teichm\"uller curve in genus 3 in
\cite[Sections~3 and~4]{Guardia}, except that our situation is
somewhat more involved, as the only automorphism of a generic point of our
family is $\alpha:(x,y) \mapsto (x,\zeta^2 y)$. We thus apply the fact that
most eigenspaces of the action of the automorphism on the cohomology are
unitary local systems, allowing us to determine much of the period matrix
from suitable special values of $t$.
\begin{proof}
We decompose the cohomology $H^1(X_t,\mathbb{Z})$ into a direct sum of
the eigenspaces for the action of~$\alpha$. The eigenvalues are powers of $\zeta^2$, i.e.~sixth
roots of unity, and the dimensions of the eigenspaces can be computed
by the well-known formulas for cyclic covers (see e.g.~\cite{bouwprank}).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline&&&&&\\
i & 1 & 2 & 3 & 4 & 5 \\
[-\halfbls] &&&&&\\
\hline&&&&& \\
${\rm rank}(H^1(X_t,\mathbb{Z}))_{\zeta^{2i}}$ & 2 & 1 & 2 & 1 & 2 \\
[-\halfbls] &&&&&\\
\hline&&&&&\\
${\rm dim}(H^0(X,\Omega^1_X))_{\zeta^{2i}}$ & 0 & 0 & 1 & 1 & 2 \\
[-\halfbls] &&&&&\\
\hline
\end{tabular}
\end{table}
In fact, a basis of eigenforms on $X$ is given by
$\omega_1 = dx/y^3$, $\omega_2 = dx/y^4$, $\omega_3 = dx/y^5$, $\omega_4 = x\,dx/y^5$.
Consequently, all but the $(-1)$-eigenspace are unitary and thus
the corresponding lines in the period matrix do not depend on~$t$.
\par
It suffices to compute the period matrix locally near some
fixed $t_0$. We take $t_0\in\mathbb{R}_{>0}$, and let $X = X_{t_0}$. The first step
is to define a suitable basis of $H_1(X,\mathbb{Z})$ and of $H^0(X,\Omega^1_X)$.
With the branch points aligned as $(-1,0,t_0,\infty)$ we perform the
branch cuts and draw two paths as in Figure~\ref{fig:paths}.
\begin{figure}[ht]
\begin{tikzpicture}[scale=1
\tikzset{arrow data/.style 2 args={%
decoration={%
markings,
mark=at position #1 with \arrow{#2}},
postaction=decorate}
}%
\draw (0,0) node(1){} -- (.825,0) node(2){} -- (1.65,0) node(3){} -- (2.475,0) node(4){} -- (3.3,0)
node(5){} -- (4.125,0) node(6){} -- (4.95,0) node(7){};
\fill (1) circle (1.2pt)
(3) circle (1.2pt)
(5) circle (1.2pt)
(7) circle (1.2pt);
\draw [arrow data={0.1}{stealth}, arrow data={0.4}{stealth},
arrow data={0.6}{stealth}, arrow data={0.9}{stealth}] plot [smooth, tension=.7]
coordinates {(2.475,0) (1.3,1.5) (.825,0) (1.3,-1.5) (2.475,0)
(3.65,1.5) (4.125,0) (3.65,-1.5) (2.475,0)};
\draw (7.25,0) node(8){} -- (8.075,0) node(9){} -- (8.9,0) node(10){} -- (9.725,0) node(11){} -- (10.55,0)
node(12){} -- (12.1,0) node(13){};
\fill (8) circle (1.2pt)
(10) circle (1.2pt)
(12) circle (1.2pt)
(13) circle (1.2pt);
\draw [arrow data={0.1}{stealth}, arrow data={0.4}{stealth},
arrow data={0.6}{stealth}, arrow data={0.9}{stealth}] plot [smooth, tension=.7]
coordinates {(8.075,0) (6.9,1.5) (6.425,0) (6.9,-1.5) (8.075,0)
(9.25,1.5) (9.725,0) (9.25,-1.5) (8.075,0)};
\tikzstyle{every node}=[font=\scriptsize]
\node (1) at (0,-0.27) {$-1$};
\node (3) at (1.65,-0.27) {$0$};
\node (5) at (3.3,-0.27) {$t_0$};
\node (7) at (4.95,-0.27) {$\infty$};
\node (8) at (7.25,-0.27) {$-1$};
\node (10) at (8.9,-0.27) {$0$};
\node (12) at (10.55,-0.27) {$t_0$};
\node (13) at (12.1,-0.27) {$\infty$};
\end{tikzpicture}
\caption{Paths on $\mathbb{P}^1(\mathbb{C})$ and ...} \label{fig:paths}
\end{figure}
Next, we define the loops $F$ and $G$ as lifts of these paths to $X_t$
as in Figure~\ref{fig:lifts}
\begin{figure}[ht]
\begin{tikzpicture}[scale=1]
\tikzset{arrow data/.style 2 args={%
decoration={%
markings,
mark=at position #1 with \arrow{#2}},
postaction=decorate}
}%
\draw (0,0) node(1){} -- (.825,0) node(2){} -- (1.65,0) node(3){} -- (2.475,0) node(4){} -- (3.3,0)
node(5){} -- (4.125,0) node(6){} -- (4.95,0) node(7){};
\fill (1) circle (1.2pt)
(2) circle (1.2pt)
(3) circle (1.2pt)
(4) circle (1.2pt)
(5) circle (1.2pt)
(6) circle (1.2pt)
(7) circle (1.2pt);
\draw [arrow data={0.1}{stealth}, arrow data={0.4}{stealth},
arrow data={0.6}{stealth}, arrow data={0.9}{stealth}] plot [smooth, tension=.7]
coordinates {(2.475,0) (1.3,1.5) (.825,0) (1.3,-1.5) (2.475,0)
(3.65,1.5) (4.125,0) (3.65,-1.5) (2.475,0)};
\draw (7.25,0) node(8){} -- (8.075,0) node(9){} -- (8.9,0) node(10){} -- (9.725,0) node(11){} -- (10.55,0)
node(12){} -- (12.1,0) node(13){};
\fill (8) circle (1.2pt)
(9) circle (1.2pt)
(10) circle (1.2pt)
(11) circle (1.2pt)
(12) circle (1.2pt)
(13) circle (1.2pt);
\draw [arrow data={0.1}{stealth}, arrow data={0.4}{stealth},
arrow data={0.6}{stealth}, arrow data={0.9}{stealth}] plot [smooth, tension=.7]
coordinates {(8.075,0) (6.9,1.5) (6.425,0) (6.9,-1.5) (8.075,0)
(9.25,1.5) (9.725,0) (9.25,-1.5) (8.075,0)};
\tikzstyle{every node}=[font=\scriptsize]
\node (1) at (0,-0.27) {$-1$};
\node (3) at (1.65,-0.27) {$0$};
\node (5) at (3.3,-0.27) {$t_0$};
\node (7) at (4.95,-0.27) {$\infty$};
\node (8) at (7.25,-0.27) {$-1$};
\node (10) at (8.9,-0.27) {$0$};
\node (12) at (10.55,-0.27) {$t_0$};
\node (13) at (12.1,-0.27) {$\infty$};
\node (A) at (1.65,1.5) {$6$};
\node (B) at (.825,-1.5) {$1$};
\node (C) at (4.125,-1.5) {$2$};
\node (D) at (4.125,1.5) {$F$};
\node (E) at (6.4,1.5) {$1$};
\node (F) at (8.6,1.5) {$6$};
\node (G) at (9.725,1.5) {$G$};
\node (H) at (9.725,-1.5) {$2$};
\node (I) at (7.5,-1.5) {$1$};
\end{tikzpicture}
\caption{... their lifts to $X$.} \label{fig:lifts}
\end{figure}
with the convention that the lower left end of the `butterfly'
is always on sheet number one. \footnote{We follow the sheet numbering convention
that clockwise loops around $-1$, $0$ and $t$ are
given by the permutation $\gamma=(123456)$, while the
loop around $\infty$ is given by $\gamma^{-3}= (14)(25)(36)$.
The action of $\alpha$ corresponds to a clockwise turn and thus
decreases the sheet number by one. We also use the convention that
the intersection number is plus one if the intersection is pointing
as the fingers of the right hand.
}
The set of paths $\{ u_{1+k} = \alpha^k(F), u_{7+\ell} = \alpha^\ell(G),\,
k,\ell=0,\ldots,5\}$ has intersection matrix $M = \langle u_i, u_j \rangle$
given by
$$
{
M= \left( \begin{smallmatrix*}[r
0 & \ -1 &0 & 0 & 0 & 1 & \ -1 & 1 & 0 & 0 &0 & 0 \\
1 & 0 &\ -1 & 0 & 0 & 0 & 0 & -1 & 1 & 0 &0 & 0 \\
0 & 1 &0 & \ -1 & 0 & 0 & 0 & 0 & -1 & 1 &0 & 0 \\
0 & 0 &1 & 0 & \ -1 & 0 & 0 & 0 & 0 &-1 &1 & 0 \\
0 & 0 &0 & 1 & 0 & \ -1 & 0 & 0 & 0 & 0 &-1 &1 \\
-1 & 0 &0 &0 & 1 & 0 & 1 & 0 & 0 & 0 &0 &-1 \\
1 & 0 &0 & 0 & 0 & -1 & 0 & \ -1 & 0 & 0 &0 & 1 \\
-1 & 1 &0 & 0 & 0 & 0 & 1 & 0 & \ -1 & 0 &0 & 0 \\
0 & -1 &1 & 0 & 0 & 0 & 0 & 1 & 0 & \ -1 &0 & 0 \\
0 & 0 &-1 & 1 & 0 & 0 & 0 & 0 & 1 &0 &\ -1 & 0 \\
0 & 0 &0 & -1 & 1 & 0 & 0 & 0 & 0 & 1 &0 &\ -1 \\
0 & 0 &0 & 0 & -1 & 1 & \ -1 & 0 & 0 & 0 &1 &0 \\
\end{smallmatrix*} \right)}\,.
$$
Since the $8\times 8$ minor of the intersection matrix corresponding to $u_1,u_2,u_3,u_4,u_7,u_8,u_9,u_{10}$ is
non-degenerate, it follows that these paths generate $H_1(X,\mathbb{Z})$. We compute that
the homology classes
\begin{equation*}\begin{aligned}
e_1 := u_1,\quad & e_2 := u_3, &\ e_3 := u_1 - u_3 + u_5 + u_6,\quad
& e_4:= u_2 -u_5-u_8\\
e_5 := u_7,\quad & e_6 := u_9, &\ e_7:= u_2+u_3-u_5+u_7, \quad& e_8 := u_1 + u_2 + u_4 + u_6\\
\end{aligned}\end{equation*}
form a symplectic basis for $H_1(X,\mathbb{Z})$.
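Such a change of basis is conveniently double-checked by machine: the following sketch (in Python, with $M$ and the classes $e_i$ entered as printed above) computes the Gram matrix of $e_1,\ldots,e_8$, which should reproduce a standard symplectic form up to sign conventions; any other entry would point to a transcription typo in the matrix or in the classes above.
\begin{verbatim}
# Sanity-check sketch: Gram matrix of e_1,...,e_8 w.r.t. M.
import numpy as np
M = np.array([
    [ 0,-1, 0, 0, 0, 1,-1, 1, 0, 0, 0, 0],
    [ 1, 0,-1, 0, 0, 0, 0,-1, 1, 0, 0, 0],
    [ 0, 1, 0,-1, 0, 0, 0, 0,-1, 1, 0, 0],
    [ 0, 0, 1, 0,-1, 0, 0, 0, 0,-1, 1, 0],
    [ 0, 0, 0, 1, 0,-1, 0, 0, 0, 0,-1, 1],
    [-1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0,-1],
    [ 1, 0, 0, 0, 0,-1, 0,-1, 0, 0, 0, 1],
    [-1, 1, 0, 0, 0, 0, 1, 0,-1, 0, 0, 0],
    [ 0,-1, 1, 0, 0, 0, 0, 1, 0,-1, 0, 0],
    [ 0, 0,-1, 1, 0, 0, 0, 0, 1, 0,-1, 0],
    [ 0, 0, 0,-1, 1, 0, 0, 0, 0, 1, 0,-1],
    [ 0, 0, 0, 0,-1, 1,-1, 0, 0, 0, 1, 0]])
E = np.array([   # e_1,...,e_8 in the basis u_1,...,u_12
    [1,0,0,0,0,0,0,0,0,0,0,0], [0,0,1,0,0,0,0,0,0,0,0,0],
    [1,0,-1,0,1,1,0,0,0,0,0,0], [0,1,0,0,-1,0,0,-1,0,0,0,0],
    [0,0,0,0,0,0,1,0,0,0,0,0], [0,0,0,0,0,0,0,0,1,0,0,0],
    [0,1,1,0,-1,0,1,0,0,0,0,0], [1,1,0,1,0,1,0,0,0,0,0,0]])
print(E @ M @ E.T)
\end{verbatim}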
To compute the period matrix in this basis, it suffices to determine $f_i = \int_F \omega_i$ and $g_i = \int_G \omega_i$,
since $\alpha$ acts on $\omega_i$ by a power of $\zeta$ and since
\begin{equation} \label{eq:alphaomega}
\int_{\alpha^k(F)} \omega_i = \int_{F} \alpha^{-k}(\omega_i).
\end{equation}
First, we use the intermediate 3-to-1 covering of $\mathbb{P}^1$ by the family
of elliptic curves
${\mathcal E}_t: y^3 = x(x+1)(x-t)$.
This covering is a constant family of elliptic curves, since there is
no ramification at infinity. We let $q:{\mathcal X}_t \to {\mathcal E}_t, (x,y) \mapsto (x,y^2)$
be the quotient map. The holomorphic one-form on ${\mathcal E}_t$ is $\omega_E = dx/y^2$
and its pullback is $q^* \omega_E = \omega_2$. Since the family is constant, we may suppose $t=1$; then there is
an additional automorphism $\beta_E: (x,y) \mapsto (-x,\zeta y)$ besides
$\alpha_E: (x,y) \mapsto (x,\zeta^2 y)$. We view ${\mathcal E}_t$ as a
three-sheeted covering of the projective line and decompose each sheet into two
half-planes, which we number and then label $U$ and $L$ for upper and lower. Then
$\beta_E$ maps $1U$ to $1L$, maps $1L$ to $3U$, and so on --- consistently with the fact that $\beta_E^2 = \alpha_E$ decreases the
sheet number by one. Now denoting $F_E:=q(F)$ and $G_E:=q(G)$, we have
$\beta_E(G_E) = -\alpha_E^{-1}(F_E)$ and $\beta_E(F_E) = - \alpha_E^2(G_E)$. Hence
$$ \int_{F_E} \omega_E \= - \int_{\beta_E(G_E)} \alpha_E^* \omega_E \= - \zeta^{-4} \int_{G_E}
(\beta_E^{-1})^* \omega_E \= \zeta^{-2} \int_{G_E} \omega_E$$
and consequently $g_2 = \int_G q^*\omega_E = \int_{G_E} \omega_E =
\zeta^2 \int_F \omega_2 = \zeta^2 f_2$ gives us the row in the period matrix corresponding to the integrals of $\omega_2$.
The holomorphic one-form $\omega_1$ is the pullback of the one-form from the
non-isotrivial family
of elliptic curves $y^2=x(x+1)(x-t)$.
We keep~$f_1$ and~$g_1$ as indeterminates and determine the rest
of the first row using~\eqref{eq:alphaomega}.
\par
Finally, $\omega_3$ and $\omega_4$ belong to a unitary local system
and we may calculate their
periods at any special point. We choose $t=1$, where ${\mathcal X}_1$ has the extra
automorphism $\beta: (x,y) \mapsto (-x,\zeta y)$ with $\beta^2 = \alpha$.
Similarly to the situation for ${\mathcal E}_t$, we see that $\beta$ maps the
sheets by the pattern $1U\mapsto 1L \mapsto 6U \mapsto 6L \mapsto \cdots$.
Consequently,
$\beta(G) = -\alpha^{-1}(F)$ (and $\beta(F) = - \alpha^2(G)$). We thus get
$$ f_3 = \int_{F} \omega_3 = - \int_{\beta(G)} \alpha^* \omega_3 = - \zeta^{-5} \int_{G}
(\beta^{-1})^* \omega_3 = \zeta^{-5} \int_{G} \omega_3$$
and $f_4 = - \zeta^{-5} g_4$ due to an extra minus sign appearing when pulling
back by $\beta^{-1}$. Another application of the same argument implies that for
our special point $t=1$ we have $f_4 = \zeta^{-3}g_4$.
\par
The previous calculations also imply that we may normalize the
differentials so that $f_i=1$ for $i=1,2,3,4$. We let $\tau = g_1$,
and altogether the period matrix with respect to the bases $\{\omega_1,\ldots,\omega_4\}$ and
$\{e_1,\ldots,e_8\}$ is as stated in the proposition.
\end{proof}
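As a numerical sanity check of this proposition (and of the transcription of the matrix), one may test the Riemann bilinear relations at a sample value of~$\tau$: for a period matrix $(Z_1\,|\,Z_2)$ with respect to a symplectic basis, $Z_1 Z_2^T$ must be symmetric and the associated hermitian form must be definite. The following sketch in Python enters the matrix as printed; a large residual would point to a typo rather than disprove the statement.
\begin{verbatim}
import numpy as np
z = np.exp(2j * np.pi / 12)        # zeta_12
tau = 0.3 + 1.7j                   # sample point in the upper half-plane
Z1 = np.array([
    [tau, tau, 0, -tau - 1],
    [z**2 - 1, 1, -z**2 + 1, 1],
    [z**3 - z, -z**3, -2*z**3 + 2*z**2 + z - 1, z**2 - z],
    [-z**3 + z, z**3, 2*z**3 + 2*z**2 - z - 1, z**2 + z]])
Z2 = np.array([
    [1, 1, 0, -1],
    [1, -z**2, z**2, -z**2 + 1],
    [1, z**2 - 1, z**3 - z**2 - 2*z + 2, z**2],
    [1, z**2 - 1, -z**3 - z**2 + 2*z + 2, z**2]])
print(np.abs(Z1 @ Z2.T - Z2 @ Z1.T).max())    # symmetry residual
H = 1j * (Z2 @ Z1.conj().T - Z1 @ Z2.conj().T)
print(np.linalg.eigvalsh(H))                  # should be one-signed
\end{verbatim}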
\par
\medskip
While the automorphism~$\alpha$ of order~$6$ may not deform to any genus four curves
outside the family $X_t$ above (and from our description it would in fact follow
that it does not, but we will not need this result), the automorphism $\varphi = \alpha^2$
of order~$3$ by definition does deform to the Hurwitz space ${\mathcal H} $. Thus in what follows
the cover $p: X \to E$ given by $(x,y) \mapsto (x,y^3)$ that is obtained as the quotient
by $\varphi$ will be most important to us.
\par
We want to exhibit an isogeny ${\rm Jac} \,(X) \to {\rm Prym}(X/E) \times E$.
On the level of one-forms, this is easy, since the first row of
the period matrix $(Z_1\,|\, Z_2)$ corresponds
to $\omega_1$, which is $\varphi$-invariant. Moreover, we identify the period
lattice $\Lambda$ of ${\rm Jac} \,(X) = \mathbb{C}^4 /\Lambda$ with $\mathbb{Z}^8$ and
consider the sublattices $\Lambda_E$ resp.\ $\Lambda_P$ of $\Lambda$ that
are orthogonal to the real and imaginary part of the second through
fourth row (resp.\ first row) of $(Z_1\,|\, Z_2)$. Considering $\Lambda_E$
as a sublattice of $\mathbb{Z}^8$ and writing its components as the first and
fifth column of a base change matrix $B$, and proceeding similarly with $\Lambda_P$,
we obtain the matrix
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:STbasechange}
B := \left( \begin{smallmatrix*}[r]
1 &\ \ 0 &\ -1 &\ -1 & 1 &\ \ 0 &\ \ 1 &-2\\
1 &0 & 1 & 2 & 1 &0 &0 & 1\\
0 &1 & 0 & 0& 0& 0& 0& 0\\
-1 &0 & 0 & 1& \ -1& 0& 1& \ -1\\
0 &0 & 0 & 1& 1& 0& 0& 0\\
0 &0 & 0 & 1& 1& 0& 1& 0\\
0 &0 & 0 & 0& 0& 1& 0& 0\\
1 &0 & 0& 1& 0& 0& 0& 1\\
\end{smallmatrix*} \right)
\ee
such that
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:Z1Z2B}
(Z_1\,|\, Z_2) B = \left(\smallmatrix 3\tau & 0 &\ 3\tau +3 & 0 \\
0 &\ Z_1^{(3)} & 0 &\ Z_2^{(3)} \endsmallmatrix\right)
\ee
is indeed the period matrix of the product of $E$ and ${\rm Prym}(X/E)$, where the latter has period matrix
$(Z_1^{(3)}\,|\, Z_2^{(3)})$ given by
\begin{equation}\label{Z3special}
\left( \begin{smallmatrix*}[r]
-\zeta^2 + 1 &\ -\zeta^2 + 2 & -3\zeta^2 + 6 & \zeta^2 & 0& -3\zeta^2 + 3 \\
-2\zeta^3 + 2\zeta^2 + \zeta - 1 & 2\zeta^3 + \zeta &\ -3\zeta^3 + 3\zeta^2 &
\phantom{-} \zeta^3 - \zeta^2 - 2\zeta + 2 & \phantom{-}\zeta^3 + 2\zeta^2 - 2\zeta - 1 & \ -3\zeta^3 + 3\zeta \\
\phantom{-}2\zeta^3 + 2\zeta^2 - \zeta - 1 & \phantom{-}2\zeta^3 - \zeta &
\phantom{-} 3\zeta^3 + 3\zeta^2 &\ -\zeta^3 - \zeta^2
+ 2\zeta + 2 &\ -\zeta^3 + 2\zeta^2 + 2\zeta - 1 &\phantom{-}3\zeta^3 - 3\zeta \\
\end{smallmatrix*} \right)\,.
\end{equation}
We check that the polarization $J_3 = B^T J B $, where $J$ is the standard principal polarization matrix with ones
on the diagonal of the upper right block, is indeed of type $(1,1,3)$.
Furthermore, we see that the $(1,1,3)$-polarized abelian threefold with the period matrix $(Z_1^{(3)}\,|\, Z_2^{(3)})$ indeed admits an order three automorphism, preserving the polarization $J_3$, given by the diagonal matrix ${\rm diag}(\zeta^4,\zeta^8,\zeta^8)$ acting on
the left and the inverse of
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:M3}
M_3 := \left( \begin{smallmatrix*}[r]
0 &0 &0 &-1& 0& 0\\
0 &1 &0 &0& -3& 3\\
0 &0& 1& 0& 1& 0\\
1 &0 &0 &\ -1 &0 &0\\
0 &0 &-3& 0 &\ -2 &0\\
0 &\ -1 &\ -3 &0 &0&\ -2\\
\end{smallmatrix*} \right)
\ee
acting on the right.
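These claims are again easy to check by machine. In the sketch below (with $B$ and $J$ entered as printed), $\det(B)$ gives the index of the sublattice $\Lambda_E \oplus \Lambda_P$ in $\Lambda$, i.e.\ the degree of the isogeny, and the Smith normal form of the restriction of $B^T J B$ to the Prym columns $2,3,4,6,7,8$ reveals the polarization type; for type $(1,1,3)$ one expects the elementary divisors $1,1,1,1,3,3$.
\begin{verbatim}
import sympy as sp
from sympy import ZZ
from sympy.matrices.normalforms import smith_normal_form
B = sp.Matrix([
    [ 1, 0, -1, -1,  1, 0, 1, -2],
    [ 1, 0,  1,  2,  1, 0, 0,  1],
    [ 0, 1,  0,  0,  0, 0, 0,  0],
    [-1, 0,  0,  1, -1, 0, 1, -1],
    [ 0, 0,  0,  1,  1, 0, 0,  0],
    [ 0, 0,  0,  1,  1, 0, 1,  0],
    [ 0, 0,  0,  0,  0, 1, 0,  0],
    [ 1, 0,  0,  1,  0, 0, 0,  1]])
J = sp.zeros(8)                  # standard principal polarization
J[0:4, 4:8] = sp.eye(4)
J[4:8, 0:4] = -sp.eye(4)
G = B.T * J * B
prym = [1, 2, 3, 5, 6, 7]        # 0-based indices of the Prym columns
print(B.det())                   # index of the sublattice
print(smith_normal_form(G.extract(prym, prym), domain=ZZ))
\end{verbatim}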
\section{Families of abelian threefolds with complex multiplication}
\label{sec:A3CM}
The moduli spaces of abelian varieties with given endomorphism ring
and polarization, nowadays called PEL-Shimura varieties, have been
constructed by Shimura in \cite{shimura}. This construction is presented in
the textbook \cite[Chapter~9]{bl}, the notation of which we follow, and
to which we refer for further details.
\par
In the previous section we dealt with the Shimura-Teichm\"uller curve~${\mathcal X}$,
whose general Jacobian has a $\mathbb{Z}/6$ automorphism. Our goal now is to
explicitly construct $(1,1,3)$-polarized abelian threefolds with an order
three automorphism inducing complex multiplication by $\mathbb{Z}[\rho]$ with
$\rho:=\zeta^4$ a primitive third root of unity. The moduli space of polarized
abelian threefolds with such an endomorphism is two-dimensional.
\par
\begin{Prop}\label{prop:genus3}
There exists an irreducible component of the moduli space of $(1,1,3)$
polarized abelian threefolds with a $\mathbb{Z}/3$ automorphism such that
the $3\times 6$ period matrices are $(Z_1^{(3)}(z_1, z_2) \,|\, Z_2^{(3)}(z_1, z_2))$
for a suitable choice of basis, where the matrix $Z_1^{(3)}$ is equal to
\bas
\left(\begin{smallmatrix*}[r]
3^{-1/4}(\zeta^2+3\zeta+1)\, z_2 & (\zeta^2+3\zeta+1)(z_1+1)
&\ \ (-\zeta^3+8\zeta+3)(z_1+1) \\
-2\zeta^3 +2\zeta^2 + \zeta -1 &3^{3/4}\,(\zeta^3+\zeta^2-1) \,z_2 -3\zeta^2(z_1-1)
& a_{23}\\
2\zeta^3 + 2\zeta^2 -\zeta -1 &\ \ \ 3^{3/4}\,(\zeta^3+\zeta^2-1) \,z_2 +
(4\zeta^3 + 2\zeta^2 -5\zeta -4)(z_1-1)
& a_{33}\\
\end{smallmatrix*} \right)
\eas
where
$$
\scriptstyle
a_{23}\=3^{3/4} (3\zeta^3 + 3\zeta^2-2\zeta -1) \,z_2 - 3(\zeta^2-3\zeta-1) (z_1 -1)
$$
and
$$
\scriptstyle
a_{33}\=
3^{3/4}\,(3\zeta^3+3\zeta^2-\zeta-4) \,z_2 + (10\zeta^3 + \zeta^2 -17\zeta -11)
\,z_1 + (8\zeta^3 +2\zeta^2 -13\zeta -11);$$
while the matrix $Z_2^{(3)}$ is equal to
\bas
\left(\begin{smallmatrix*}[r]
3^{-1/4}(3\zeta^3 + \zeta^2 - 3\zeta -2) \,z_2&\ \ (3\zeta^3 + \zeta^2 - 3\zeta -2)
(z_1+ 1) & \ \ b_{13} \\
\zeta^3 -\zeta^2 -2\zeta +2 & \ \ 3(z_1+1) + 3^{3/4}(\zeta^3 -\zeta +1)\,z_2
& b_{23} \\
-\zeta^3 -\zeta^2 -2\zeta +2 & \ \ 3^{3/4}(-\zeta^3 +\zeta +1)\,z_2
+ (\zeta^3 + 2\zeta^2 + 4\zeta + 2 )(z_1+1)
& b_{33} \\
\end{smallmatrix*} \right)
\eas
where
\bas \scriptstyle
b_{13} & \= \scriptstyle (8\zeta^3 + 3\zeta^2 -7\zeta -3) \,z_1
\+ 10\zeta^3 + 6\zeta^2 -8\zeta -6\\
\scriptstyle b_{23} &\= \scriptstyle (9\zeta^3 - 3)\,z_1
\,-\, 3^{3/4}(12\zeta^3 -3\zeta^2 -9\zeta +9)\,z_2 \+ 9\zeta^3 -3\zeta^2 \\
\scriptstyle b_{33} &\= \scriptstyle (\zeta^3 + 10\zeta^2 + 10\zeta
+ 7)\,z_1 \,-\, 3^{3/4}(\zeta^3 - \zeta^2 -3\zeta -3)\,z_2 \+ 5\zeta^3 + 5\zeta^2+8\zeta +2
\eas
for $(z_1,z_2)$ in some complex $2$-ball.
Moreover, the $3\times 6$ period matrix obtained for
$$z_1 \= \tfrac{-2\zeta^3 + \zeta^2 + \zeta -3}2, \qquad z_2 \= 3^{-1/4}
\, \tfrac{\zeta^3 -2\zeta^2 +1}2.$$
is precisely equal to the matrix \eqref{Z3special} obtained in the previous
section.
\end{Prop}
Once we have proven this proposition, our main result follows immediately.
\begin{proof}[Proof of Theorem \ref{thm:main}]
Indeed, once we have determined the period
matrices $(Z_1^{(3)}(z_1, z_2) \,|\,
Z_2^{(3)}(z_1, z_2))$ of the Pryms appearing in our construction, to obtain the period matrices of the genus four Jacobians that
form Shimura curves, we will substitute them into the form of period matrices
given by \eqref{eq:Z1Z2B} and undo the base change by the matrix $B$ given by \eqref{eq:STbasechange}.
\end{proof}
\par
\medskip
Thus it remains to prove the proposition using Shimura's construction.
However, the construction of universal families by Shimura involves
quite a number of choices. So this section is a guide to the practical
use of Shimura's construction: how to adapt the choices so that a given
period matrix appears as a member of the family.
\par
\begin{proof}[Proof of Proposition \ref{prop:genus3}]
For the proof, we follow Shimura's original construction, adapted and specialized to our case,
as explained in \cite[Chapter 9]{bl}, to which we refer for all justifications of the steps.
In our case the endomorphism ring is the maximal order $\mathbb{Z}[\rho]$
in the field $K = \mathbb{Q}(\rho)$, while the Rosati involution is simply the
complex conjugation, and its fixed field is $K_0=\mathbb{Q}$, of which $K$ is thus a quadratic extension.
Abelian threefolds with this endomorphism ring are given as quotients of the $2$-ball
$$ B_2 := \{ z = \left(\begin{smallmatrix} z_1 \\ z_2 \end{smallmatrix} \right)
\in \mathbb{C}^2: |z_1|^2 + |z_2|^2 < 1 \}.$$
Following Shimura, to construct {\em all} such abelian threefolds (for complex multiplication
by {\em any} order in~$K$) one chooses any pair $({\mathcal M},T)$ consisting of a free rank~$6$ submodule ${\mathcal M}$
of $K^3$ and a non-degenerate signature $(2,1)$ matrix $T \in \operatorname{Mat}_{3\times 3}(K)$ such that
$\overline{T}^T = -T$, satisfying
$$ {\rm tr}^K_\mathbb{Q}(a^T T \overline{b}) \in \mathbb{Z} \quad \text{for all} \quad a,b \in {\mathcal M}\,.$$
To construct such $({\mathcal M},T)$, recall that the signature (2,1) condition means that
there exists a matrix $W \in {\rm GL}_3(\mathbb{C})$ satisfying
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:defW}
T \= W^T \mat {I_2} 00{-i} \overline{W}\,,
\ee
where we denote by $I_2$ the product of $i$ and the $2\times 2$ identity matrix.
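Note, in passing, that any $T$ of the form \eqref{eq:defW} is automatically anti-Hermitian: writing $D$ for the diagonal matrix appearing there, we have
\bes
\overline{T}^T \= \left(\overline{W}^T\,\overline{D}\, W\right)^T \= W^T\,\overline{D}\,\overline{W} \= -W^T D\, \overline{W} \= -T\,,
\ees
since $\overline{D} = -D$.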
To any $z \in B_2$ we associate the linear map $J_z: \mathbb{C}^6 \to \mathbb{C}^3$
given by left multiplication by the matrix
$$ J_z := \mat{(z^T,1)W} 0 0 {(I_2,z)\overline{W}}\,$$
and we let $j: {\mathcal M} \to \mathbb{C}^6$ be given by one embedding of $K \to \mathbb{C}$ in the
first three coordinates and the complex conjugate embedding in the remaining
three coordinates. Then the abelian variety $X_z := \mathbb{C}^3/J_z(j({\mathcal M}))$ with the
polarization
$$ H := \left(\begin{smallmatrix}
|z|^{-1} & 0 \\ 0 & (I_2 - \overline{z} z^T)^{-1} \\
\end{smallmatrix}\right) $$
has the desired endomorphism, since by a direct matrix computation one checks that
$$ a J_z(j(b)) \= J_z(j(ab)) \quad \text{for all} \quad a \in K\, \quad\text{and}\ b\in{\mathcal M}\, ,$$
and hence the action of $K$ on $\mathbb{C}^3$ is compatible with the lattice $\Lambda= J_z(j({\mathcal M}))$ (see \cite[Chapter~9.3]{bl}).
Moreover, $\operatorname{Im} H$ is indeed an integer-valued bilinear form on $\Lambda$, as one checks by computing
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:polidentity}
(\operatorname{Im} H)\left(J_z(j(a)),J_z(j(b))\right) \ = {\rm tr}^K_\mathbb{Q}(a^T T \overline{b})
\ee
for all $a,b \in {\mathcal M}$.
\medskip
Consequently, our goal is to find a pair $({\mathcal M},T)$ such that the period matrix
$(Z_1^{(3)}\,|\, Z_2^{(3)})$ of the Shimura-Teichm\"uller curve, given by \eqref{Z3special}, appears in the family of period matrices $J_z(j({\mathcal M}))$
for some $(z_1, z_2) \in B_2$. To find ${\mathcal M}$, note that
since $(Z_1^{(3)}\,|\, Z_2^{(3)})$ has complex multiplication by a maximal order, we must have ${\mathcal M} \cong \mathbb{Z}[\rho]^3$.
Hence we have to pick three elements in $\mathbb{Z}^6$ that together with their
$\rho$-images (with $\rho$ acting by left multiplication by the matrix $M_3$ given
in~\eqref{eq:M3}, computed for the special case) generate $\mathbb{Z}^6$. A choice of such elements
is given for example by $u_1 = e_1$, $u_2 = e_2 + e_5$, $u_3 = 2e_3 + e_5 -e_6$, and $u_{3+k} = \rho u_k$
for $k=1,2,3$.
Let now $L$ be the base change matrix from $\mathbb{Z}^6$ to the basis $u_1,\ldots,u_6$. We compute
$$ L^T J_3 L \= \left(\begin{smallmatrix*}[r]
0 & 0 &0 \\ 0 & 0 & 1 \\ 0 &\ -1 &\ \ 0 \\
\end{smallmatrix*} \right), \quad L^T M_3^T J_3 L \ = \left(\begin{smallmatrix*}[r]
-1 & 0 & 0 \\ 0& 0& 1 \\ 0 &\ \ 2&\ \ 9 \\
\end{smallmatrix*} \right).
$$
Now to make a choice of $T$, we use \eqref{eq:polidentity} and the fact that $\operatorname{Im} H$ is the intersection form on $\Lambda$. The matrix
$$T \= \left(\begin{smallmatrix*}[r]
\tfrac23 \zeta^4 + \tfrac1 3 & 0 & 0\\ 0 &0& -\zeta^4 \\ 0 &\ \ -\zeta^4-1
&\ \ -6\zeta^4 - 3 \\
\end{smallmatrix*} \right)
$$
has the desired property~\eqref{eq:polidentity}, since ${\rm tr}^K_\mathbb{Q}(T + \overline{T})
= L^T J_3 L$ and since ${\rm tr}^K_\mathbb{Q}(\zeta^4T + \overline{\zeta^4T}) = L^T M_3^T J_3 L$.
We can now find a $W$ satisfying \eqref{eq:defW}, and we choose
$$ W \= \left(\begin{smallmatrix*}[r]
0 & \ \ 3 - \zeta & 0 \\
3^{-1/4} & 0 & 0 \\
0 & 1 & \ \ 3-i \\
\end{smallmatrix*} \right).
$$
Substituting all of these choices, for Shimura's form of the period matrices we finally obtain
\bas
(Z_1^{(S)}(z_1,z_2)\,|\, Z_2^{(S)}(z_1,z_2)) = \qquad\qquad\qquad\qquad\qquad\qquad \\
\left(\begin{smallmatrix*}[r]
3^{-1/4}\,z_2\, &\ z_1 + 1\, & (3-\zeta)\,z_1 + 3-i\, &
\ 3^{-1/4}\;\zeta^4\, z_2\, &\, \zeta^4(z_1 + 1)\, & ((3-\zeta)\,z_1 + 3-i)\zeta^4\\
0 \,& z_1 + 1\,&\ (3-i)(z_1+1) + 3-\zeta^{-1}&
0 \,& (z_1 + 1)\zeta^8 \, &\,\ ((3-i)(z_1+1) + (3-\zeta^{-1})) \zeta^8\\
3^{-1/4}\,& z_2 \,& (3+i)\,z_2 \, &
3^{-1/4}\; \zeta^8\,& \zeta^8\,z_2 \,& (3+i) \zeta^8\,z_2 \\
\end{smallmatrix*} \right)
\eas
\par
Finally, the last indeterminacy in the choice of the period matrices is
that we can choose different eigenforms within the two eigenspaces. This
means we can further have a base change matrix $C = (c_{ij})$ for holomorphic
one-forms and are looking for $z=(z_1,z_2)$ such that
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:solveforspec}
\left(\begin{smallmatrix}
c_{11} & 0 & 0 \\
0 & c_{22} & c_{23} \\
0 & c_{32} & c_{33} \\
\end{smallmatrix} \right) (Z_1^{(S)}(z)\,|\, Z_2^{(S)}(z)) \= (Z_1^{(3)}\,|\, Z_2^{(3)}) \,.
\ee
This system has a unique solution, given by
\par
\bas \scriptstyle \begin{matrix*}[l]
\scriptstyle c_{11} \= \zeta^2 + 3\zeta + 1, &\scriptstyle
\ c_{22} \= -3\zeta,&\scriptstyle
c_{23} \= 3^{3/4}\,(\zeta^3-\zeta^2+1), \\
& \scriptstyle\ c_{32} \= 4\zeta^3 +2\zeta^2 -5\zeta -4, &\scriptstyle
c_{33} \= 3^{3/4}\,(\zeta^3+\zeta^2-1)
\end{matrix*}
\eas
and $(z_1,z_2)$ as stated in the proposition.
With these values of $c_{ij}$, the left hand side of~\eqref{eq:solveforspec}
is the family given in the statement of the proposition.
\end{proof}
\section{Introduction} \label{sec:Intro}
A \emph{stream} is a sequence $S = ((d_1,\delta_1),(d_2,\delta_2),\ldots,(d_{N},\delta_{N}))$, where $d_i\in[n]$ are called the items or elements in the stream and $\delta_i\in\mathbb{Z}$ is an update to the $d_i$th coordinate of an implicitly defined $n$-dimensional vector.
Specifically, the \emph{frequency} of $d\in[n]$ after $k\leq N$ updates is
\[f^{(k)}_d = \sum \{\delta_j| j\leq k, d_j=d\},\] and the implicitly defined vector $f:= f^{(N)}$ is commonly referred to as the \emph{frequency vector} of the stream $S$.
Let $M:=\max\{n,|f^{(k)}_d|:d\in[n],0\leq k\leq N\}$ and $m=\sum_d |f_d|$; thus, it requires $O(\log{M})$ bits to exactly determine the frequency of a single item.
This model is commonly known as the \emph{turnstile} streaming model, as opposed to the \emph{insertion-only} model which has $\delta_i=1$, for all $i$, but is the same otherwise.
In an insertion-only stream $N=m\geq M$.
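For concreteness, the following Python sketch (purely illustrative; the function names and the toy stream are our own) accumulates the frequency vector of a short turnstile stream and evaluates a frequency negative moment on its support.
\begin{verbatim}
import numpy as np

def frequency_vector(stream, n):
    # stream: iterable of updates (d, delta), items d in {0, ..., n-1}
    f = np.zeros(n, dtype=int)
    for d, delta in stream:
        f[d] += delta
    return f

def negative_moment(f, p):
    # F_p = sum of |f_d|^p over the support of f, for p < 0
    nz = np.abs(f[f != 0]).astype(float)
    return np.sum(nz ** p)

S = [(0, 2), (3, 1), (0, -1), (2, 4), (3, 1)]  # toy turnstile stream
f = frequency_vector(S, n=4)                   # f = [1, 0, 4, 2]
print(negative_moment(f, p=-1.0))              # 1 + 1/4 + 1/2 = 1.75
\end{verbatim}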
Streams model computing scenarios where the processor has very limited access to the input.
The processor reads the updates one-at-a-time, without control of their order, and is tasked to compute a function on the frequency vector.
The processor can perform its computation exactly if it stores the entire vector $f$, but this may be undesirable or even impossible when the dimension of $f$ is large.
Thus, the goal is to complete the computation using as little storage as possible.
Typically, exact computation requires storage linear in $n$, so we seek approximations.
Given a stream with frequencies $f_d$, for $d\in[n]$, we consider the problem of approximating the frequency negative moments, specifically $F_p=\sum |f_d|^p$ where $p<0$ and the sum is taken over all items $d\in[n]$ with nonzero frequency.
We characterize, up to factors of $O(\epsilon^{-1}\log^2{n}\log{M})$ in the turnstile model and $O(\epsilon^{-1}\log{M})$ in the insertion-only model, the space necessary to produce a $(1\pm\epsilon)$-approximation to $F_p$, for $p<0$, in terms of the accuracy $\epsilon$, the dimension $n$, and the $L^1$ length $m$ of $f$.
Negative moments, also known as ``inverse moments'', of a probability distribution have found several applications in statistics.
Early on, they were studied in application to sampling and estimation problems where the sample size is random~\cite{stephan1945expected,grab1954tables} as well as in life-testing problems~\cite{mendenhall1960approximation}.
More recently, they appear in the design of multi-center clinical trials \cite{jones2004approximating} and in the running time analysis of a quantum adiabatic algorithm for $3$-\textsc{SAT}~\cite{znidaric2005asymptotic,znidaric2006exponential}.
$F_0/F_{-1}$ is the harmonic mean of the (nonzero) frequencies in the insertion-only model, and more generally, the value $(F_p/F_0)^{1/p}$ is known as the $p$th power mean~\cite{bullen2003handbook}.
The harmonic mean is the truest average for some types of data, for example speeds, parallel resistances, and P/E~ratios~\cite{reilly2004handbook}.
To our knowledge this is the first paper to consider streaming computation of the frequency negative moments and the first to determine the precise dependence of the space complexity of streaming computations on $m$.
In fact, in the process of characterizing the storage necessary to approximate the frequency negative moments, we actually characterize the space complexity of a much larger class of streaming sum problems.
Specifically, given any nonnegative, nonincreasing function $g:\mathbb{N}\to\mathbb{R}$ we determine to within a factor of $O(\epsilon^{-1}\log^2{n}\log {M})$ the space necessary to approximate
\[g(f):=\sum_{d\in\supp(f)}g(|f_d|),\]
where $\supp(f):=\{d\in[n]:f_d\neq0\}$ is the support of $f$.
Furthermore, the sketch providing a $(1\pm\epsilon)$-approximation for $g(f)$ is universal for a $(1\pm\epsilon)$-approximation for any nonnegative nonincreasing function with the same or smaller space complexity as $g$.
This partially answers a question of Nelson~\cite{sublinear_open_30} -- which families of functions admit universal sketches?
The attention on $m$ is warranted; in fact, the complexity in question depends delicately on this parameter.
If we forget about $m$ for a moment, then a standard reduction from the communication problem $\textsc{Index}$ implies that computing a $(1\pm\frac{1}{2})$-approximation to $F_p$, for $p<0$, requires $\Omega(n)$ bits of storage -- nearly enough to store the entire vector $f$.
However, the reduction requires $m=\Omega(n^{1-1/p})$ (recall that $p<0$).
If $m=o(n^{1-1/p})$ then, as we show, one can often get away with $o(n)$ bits of memory.
The next two sections outline our approach to the decreasing streaming sum problem and state our main results.
Section~\ref{sec: background} reviews previous work on streaming sum problems.
In Section~\ref{sec: frequency negative moments} we show how our results solve the frequency negative moments problem.
Sections~\ref{sec: lower bounds} and~\ref{sec: upper bounds} prove the main results.
Finally, Section~\ref{sec: computing ps} and Appendix~\ref{app: details} describe the implementation details for the streaming setting.
\subsection{Preliminaries}
Let $\mathcal{F} = \{f\in\mathbb{N}^n:\sum f_d\leq m\}$ and let $\mathcal{T}$ and $\mathcal{I}$ denote the sets of turnstile streams and insertion-only streams, respectively, that have their frequency vector $f$ satisfying $|f|\in\mathcal{F}$.
The set $\mathcal{F}$ is the set of all nonnegative frequency vectors with $L^1$~norm at most~$m$.
Clearly, $\mathcal{F}$ is the image under coordinate-wise absolute value of the set of all frequency vectors with $L^1$ norm at most $m$.
We assume $n\leq m$.
In order to address the frequency negative moments problem we will address the following more general problem.
Given a nonnegative, nonincreasing function $g:\mathbb{N}\to\mathbb{R}$, how much storage is needed by a streaming algorithm that $(1\pm\epsilon)$-approximates $g(f)$, for the frequency vector $f$ of any stream $S\in\mathcal{T}$ or $S\in\mathcal{I}$?
Equivalently, we can assume that $g(0)=0$, $g$ is nonnegative and nonincreasing on the interval $[1,\infty)$, and extend the domain of $g$ to $\mathbb{Z}$ by requiring it to be symmetric, i.e., $g(-x)=g(x)$.
Therefore, $g(f) = \sum_{d=1}^{n}g(f_d)$.
For simplicity, we call such functions ``decreasing functions''.
A randomized algorithm $\mathcal{A}$ is a \emph{turnstile streaming $(1\pm\epsilon)$-approximation algorithm} for $g(f)$ if
\[P\left\{(1-\epsilon)g(f)\leq \mathcal{A}(S)\leq (1+\epsilon)g(f)\right\}\geq \frac{2}{3}\]
holds for every stream $S\in\mathcal{T}$, and insertion only algorithms are defined analogously.
For brevity, we just call such algorithms ``approximation algorithms'' when $g$, $\epsilon$, and the streaming model are clear from the context.
We consider the maximum number of bits of storage used by the algorithm $\mathcal{A}$ with worst case randomness on any valid stream.
A sketch is a, typically randomized, data structure that functions as a compressed version of the stream.
Let $\mathcal{G}\subseteq\mathbb{R}^\mathbb{N}\times(0,1/2]$.
We say that a sketch is \emph{universal} for a class $\mathcal{G}$ if for every $(g,\epsilon)\in\mathcal{G}$ there is an algorithm that, with probability at least $2/3$, extracts from the sketch a $(1\pm\epsilon)$-approximation to $g(f)$.
The probability here is taken over the sketch as well as the extraction algorithm.
Our algorithms assume a priori knowledge of the parameters $m$ and $n$, where $m=\|f\|_1$ and $n$ is the dimension of $f$.
In practice, one chooses $n$ to be an upper bound on the number of distinct items in the stream.
Our algorithm remains correct if one instead only knows $m\geq \|f\|_1$, however if $m\gg\|f\|_1$ the storage used by the algorithm may not be optimal.
We assume that our algorithm has access to an oracle that computes $g$ on any valid input.
In particular, the final step of our algorithms is to submit a list of frequencies, i.e., a sketch, as inputs for $g$.
We do not count the storage required to evaluate $g$ or to store its value.
\subsection{Our results}\label{sec: our results}
Our lower bound is proved by a reduction from the communication complexity of disjointness wherein we parameterize the reduction with the coordinates of $|f|$, the absolute value of a frequency vector.
The parameterization has the effect of giving a whole collection of lower bounds, one for each frequency vector among a set of many.
Specifically, if $f\in\mathcal{F}$ and $g(f)\leq \epsilon^{-1}g(1)$ then we find an $\Omega(|\supp(f)|)$ lower bound on the number of bits used by any approximation algorithm.
This naturally leads us to the following nonlinear optimization problem
\begin{equation}\label{eq: ps definition}
\sigma(\epsilon,g,m,n):=\max\left\{|\supp(f)|: f\in\mathcal{F}, g(f)\leq\epsilon^{-1}g(1)\right\},
\end{equation}
which gives us the ``best'' lower bound.
We will use $\sigma=\sigma(\epsilon,g,m,n)$ when $\epsilon$, $g$, $m$, and $n$ are clear from the context.
Our main lower bound result is the following.
\begin{theorem}\label{thm: decreasing lower bound}
Let $g$ be a decreasing function, then any $k$-pass insertion-only streaming $(1\pm\epsilon)$-approximation algorithm requires $\Omega(\sigma/k)$ bits of space.
\end{theorem}
Before we consider approximation algorithms, let us consider a special case. Suppose there is an item~$d^*$ in the stream that satisfies $g(f_{d^*})\geq\epsilon g(f)$.
An item such as $d^*$ is called an $\epsilon$-heavy hitter.
If there is an $\epsilon$-heavy hitter in the stream, then $g(1)\geq g(f_{d^*})\geq\epsilon g(f)$ which implies $|\supp(f)|\leq\sigma$, by the definition of $\sigma$.
Of course, in this case it is possible to compute $g(f)$ with $O(\sigma\log{M})$ bits in one pass in the insertion-only model and with not much additional space in the turnstile model simply by storing a counter for each element of $\supp(f)$.
Considering the $\Omega(\sigma)$ lower bound, this is nearly optimal.
However, it only works when $f$ contains an $\epsilon$-heavy hitter.
Our approximation algorithm is presented next.
It gives a uniform approach for handling all frequency vectors, not just those with $\epsilon$-heavy elements.
\begin{algorithm}
\begin{algorithmic}[1]
\State Compute $\sigma=\sigma(\epsilon,g,m,n)$ and let
\begin{equation}\label{eq: sampling probability}
q\geq\min\left\{1,\frac{9\sigma}{\epsilon|\supp(f)|}\right\}.
\end{equation}
\State Sample pairwise independent random variables $X_d\sim\text{Bernoulli}(q)$, for $d\in[n]$, and let $W=\{d\in\supp(f):X_d=1\}$.
\State Compute $f_d$, for each $d\in W$.
\State Output $q^{-1}\sum_{d\in W}g(f_d)$.
\end{algorithmic}
\caption{$(1\pm\epsilon)$-approximation algorithm for $g(f)$.}
\label{algo: approximate}
\end{algorithm}
Algorithm~\ref{algo: approximate} simply samples each element of $\supp(f)$ pairwise independently with probability $q$.
The expected sample size is $q|\supp(f)|$, so in order to achieve optimal space we take equality in Equation~\ref{eq: sampling probability}.
The choice yields, in expectation, $q|\supp(f)|=O(\sigma/\epsilon)$ samples.
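As a minimal illustration of Algorithm~\ref{algo: approximate}, the following Python sketch runs it offline, i.e., assuming the full frequency vector is available; the pairwise-independent Bernoulli variables are simulated with a standard $2$-universal hash. The function names are ours, and the actual streaming implementation is the one described in Appendix~\ref{app: details}.
\begin{verbatim}
import numpy as np

def estimate_g(f, g, sigma, eps, prime=2_147_483_647, seed=0):
    # Offline sketch of Algorithm 1; prime should exceed the dimension n.
    supp = np.flatnonzero(f)
    q = min(1.0, 9 * sigma / (eps * len(supp)))
    rng = np.random.default_rng(seed)
    a, b = int(rng.integers(1, prime)), int(rng.integers(0, prime))
    # X_d = 1 iff h(d) = (a*d + b) mod prime falls below q*prime; these
    # indicators are (approximately) Bernoulli(q) and pairwise
    # independent, by 2-universality of the hash family.
    W = [d for d in supp if (a * int(d) + b) % prime < q * prime]
    return sum(g(abs(int(f[d]))) for d in W) / q
\end{verbatim}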
Section~\ref{sec: computing ps} and Appendix~\ref{app: details} explain how to implement the algorithm for the streaming setting and the correctness is established by the following theorem.
It is proved in Section~\ref{sec: upper bounds}.
\begin{theorem}\label{thm: decreasing sketches}
There is a turnstile streaming algorithm that, with probability at least $2/3$, outputs a $(1\pm\epsilon)$-approximation to $g(f)$ and uses $O(\epsilon^{-1}\sigma\log^2(n)\log(M))$ bits of space.
The algorithm can be implemented in the insertion-only model with $O(\epsilon^{-1}\sigma\log(M)+\log^2{n})$ bits of space.
\end{theorem}
It is worthwhile to remark that the suppressed constants in the asymptotic bounds of Theorems~\ref{thm: decreasing lower bound} and~\ref{thm: decreasing sketches} are independent of $g$, $\epsilon$, $m$, and $n$.
The optimization problem \eqref{eq: ps definition} reappears in the proof of Theorem~\ref{thm: decreasing sketches}.
The key step is the observation mentioned above.
Namely, for the particular frequency vector~$f$ that is our input, if there is an item $d$ satisfying $g(|f_d|)\geq \epsilon g(f)$ then $|\supp(f)|\leq\sigma$.
Let us now emphasize a particular feature of this algorithm.
Previously, we commented that choosing equality in \eqref{eq: sampling probability} is optimal in terms of the space required.
However, Algorithm~\ref{algo: approximate} is still correct when the inequality is strict.
Notice that the sketch is just a (pairwise independent) random sample of $\supp(f)$ and its only dependence on $g$ and $\epsilon$ is through the parameter $\sigma/\epsilon$.
Let $g'$ and $\epsilon'$ be another decreasing function and error parameter satisfying $\frac{\sigma(\epsilon',g',m,n)}{\epsilon'} \leq \frac{\sigma(\epsilon,g,m,n)}{\epsilon}$, then
\[q' = \min\left\{1,\frac{9\sigma'}{\epsilon'|\supp(f)|}\right\}\leq q = \min\left\{1,\frac{9\sigma}{\epsilon|\supp(f)|}\right\}.\]
In particular, this means that the sketch that produces an $(1\pm\epsilon)$-approximation to $g(f)$ also suffices for an $(1\pm\epsilon')$-approximation to $g'$.
For example, if one takes $g'\geq g$, pointwise with $g'(1)=g(1)$, then $\sigma(\epsilon,g',m,n)\leq\sigma(\epsilon,g,m,n)$ so one can extract from the sketch $(1\pm\epsilon)$-approximations to $g(f)$ and $g'(f)$, each being separately correct with probability $2/3$.
Thus, the sketch is universal for any decreasing function $g'$ and accuracy $\epsilon'$ where $\sigma(\epsilon',g',m,n)\leq\sigma(\epsilon,g,m,n)$.
In the context of the frequency negative moments, this implies that the sketch yielding a $(1\pm\epsilon)$-approximation to $F_p$, for $p<0$, is universal for $(1\pm\epsilon)$-approximations of $F_{p'}$, for all $p<p'<0$.
Computing the sketch requires a priori knowledge of $\sigma$.
If one over-estimates $\sigma$ the algorithm remains correct, but the storage used increases.
To know $\sigma$ requires knowledge of $m$, or at least a good upper bound on $m$.
This is a limitation, but there are several ways to mitigate it.
If one does not know $m$ but is willing to accept a second pass through the stream, then using the algorithm of \cite{kane2010exact} one can find a $(1\pm\frac{1}{2})$-approximation to $m$ with $O(\log{M})$ bits of storage in the first pass and approximate $g(f)$ on the second pass.
A $(1\pm\frac{1}{2})$-approximation to $m$ is good enough to determine $\sigma$ to within a constant, which is sufficient for the sketch.
Alternatively, one can decide first on the space used by the algorithm and, in parallel within one pass, run the algorithm and approximate $m$.
After reading the stream one can determine for which decreasing functions $g$ and with what accuracy~$\epsilon$ does the approximation guarantee hold.
\subsection{Background}\label{sec: background}
Much of the effort dedicated to understanding streaming computation, so far, has been directed at the frequency moments $F_p = \sum |f_i|^p$, for $0<p<\infty$, as well as $F_0$ and $F_\infty$, the number of distinct elements and the maximum frequency respectively.
In the turnstile model, $F_0$ is distinguished from $L^0=|\supp(f)|$, the number of elements with a nonzero frequency.
The interest in the frequency moments began with the seminal paper of Alon, Matias, and Szegedy~\cite{alon1996space}, who present upper and lower bounds of $O(\epsilon^{-2}n^{1-1/p})$ and $\Omega(n^{1-5/p})$, respectively, on the space needed to find a $(1\pm\epsilon)$-approximation to $F_p$, and a separate $O(\epsilon^{-2}\log{m})$ space algorithm for $F_2$.
Since then, many researchers have worked to push the upper and lower bounds closer together.
We discuss only a few of the papers in this line of research, see \cite{woodruff2014data} and the references therein for a more extensive history of the frequency moments problem.
To approximate $F_p$, Alon, Matias, and Szegedy inject randomness into the stream and then craft an estimator for $F_p$ on the randomized stream.
A similar approach, known as stable random projections, is described by Indyk~\cite{indyk2006stable} for $F_p$, when $0<p\leq 2$ (also referred to as $\ell_p$ approximation).
Kane, Nelson, and Woodruff~\cite{kane2010exact} show that Indyk's approach, with a more careful derandomization, is optimal.
Using the method of stable random projections, Li~\cite{li2008estimators} defined the so-called \emph{harmonic mean estimator} for $F_p$, when $0<p<2$, which improves upon the sample complexity of previous methods.
We stress that this is not an estimator for the harmonic mean of the frequencies in a data stream, rather it is an estimator for $F_p$ that takes the form of the harmonic mean of a collection of values.
For $p>2$, the AMS approach was improved upon~\cite{coppersmith2004improved,ganguly2004estimating} until a major shift in the design of streaming algorithms began with the algorithm of Indyk and Woodruff~\cite{indyk2005optimal} that solves the frequency moments problem with, nearly optimal, $n^{1-2/p}(\frac{1}{\epsilon}\log{n})^{O(1)}$ bits.
Their algorithm introduced a recursive subsampling technique that was subsequently used to further reduce space complexity \cite{bhuvanagiri2006simpler,braverman2010recursive}, which now stands at $O(\epsilon^{-2}n^{1-2/p}\log{n})$ in the turnstile model~\cite{ganguly2011polynomial} with small $\epsilon$ and $O(n^{1-2/p})$ in the insertion-only model with $\epsilon=\Omega(1)$~\cite{braverman2014approximating}.
Recently, there has been a return to interest in AMS-type algorithms motivated by the difficulty of analyzing algorithms that use recursive subsampling.
``Precision Sampling'' of Andoni, Krauthgamer, and Onak~\cite{andoni2011streaming} is one such algorithm that accomplishes nearly optimal space complexity without recursive subsampling.
Along these lines, it turns out that one can approximate $g(f)$ by sampling elements~$d\in[n]$ with probability roughly $q_d \approx g(f_d)/\epsilon^2 g(f)$, or larger, and then averaging and scaling appropriately, see Proposition~\ref{prop: sampling archetype}.
Algorithm~\ref{algo: approximate} takes this approach, and also fits in the category of AMS-type algorithms.
However, it is far from clear how to accomplish this sampling optimally in the streaming model for a completely arbitrary function $g$.
A similar sampling problem has been considered before.
Monemizadeh and Woodruff~\cite{monemizadeh20101} formalized the problem of sampling with probability $q_d = g(f_d)/ g(f)$ and then go on to focus on $L_p$ sampling, specifically $g(x)=|x|^p$, for $0\leq p\leq 2$.
In follow-up work, Jowhari, S\u aglam, and Tardos offer $L_p$ sampling algorithms with better space complexity~\cite{jowhari2011tight}.
As far as the frequency moments lower bounds go, there is a long line of research following AMS~\cite{baryossef2002information,chakrabarti2003near,gronemeier2009asymptotically,andoni2013tight} that has led to a lower bound matching the best known turnstile algorithm of Ganguly~\cite{ganguly2011polynomial} to within a constant~\cite{li2013tight}, at least for some settings of $m$ and $\epsilon$.
The insertion-only algorithm of Braverman et al.~\cite{braverman2014approximating} matches the earlier lower bound of Chakrabarti, Khot, and Sun~\cite{chakrabarti2003near}.
For a general function~$g$ not much is known about the space-complexity of approximating $g(f)$.
Most research has focused on specific functions.
Chakrabarti, Do Ba, and Muthukrishnan~\cite{chakrabarti2006estimating} and Chakrabarti, Cormode, and Muthukrishnan~\cite{chakrabarti2007near} sketch the Shannon Entropy. Harvey, Nelson, and Onak~\cite{harvey2008sketching} approximate the Renyi~$\log(\|f\|_{\alpha}^\alpha)/(1-\alpha)$, Tsallis~$(1-\|f\|_{\alpha}^\alpha)/(\alpha-1)$, and Shannon entropies.
Braverman, Ostrovsky, and Roytman~\cite{braverman2010zero,braverman2014universal} characterized nonnegative, nondecreasing functions that have polylogarithmic-space approximation algorithms and present a universal algorithm, based on the subsampling technique, for the same.
Guha, Indyk, McGregor~\cite{guha2007sketching} study the problem of sketching common information divergences between the streams, i.e., statistical distances between the probability distributions with p.m.f.s $e/\|e\|_1$ and $f/\|f\|_1$.
\section{The frequency negative moments}\label{sec: frequency negative moments}
Before proving Theorems~\ref{thm: decreasing lower bound} and~\ref{thm: decreasing sketches}, let us deploy them to determine the streaming space complexity of the frequency negative moments.
It will nicely illustrate the trade-off between the length of the stream and the space complexity of the approximation.
The first step is to calculate $\sigma(\epsilon,g,m,n)$, where $g(x)=|x|^p$, for $x\neq0$ and $p<0$, and $g(0)=0$.
There is a maximizer of \eqref{eq: ps definition} with $L^1$ length $m$ because $g$ is decreasing.
The convexity of $g$ implies that $\sigma \leq \max\{s\in\mathbb{R}: s(m/s)^p\leq \epsilon^{-1}\}$, and $\sigma$ is at least the minimum of $n$ and $\max\{s\in\mathbb{N}: s(m/s)^p\leq \epsilon^{-1}\}$ by definition.
Thus, we can take $\sigma =\min\left\{n,\theta\left( \epsilon^{\frac{-1}{1-p}}m^{\frac{-p}{1-p}} \right)\right\}$.
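To see where this expression comes from, solve the constraint $s(m/s)^p\leq \epsilon^{-1}$ for $s$:
\[
s\left(\frac{m}{s}\right)^p\leq\frac{1}{\epsilon}
\iff s^{1-p}\leq \epsilon^{-1}m^{-p}
\iff s\leq \epsilon^{\frac{-1}{1-p}}m^{\frac{-p}{1-p}},
\]
where the direction of the inequalities is preserved because $1-p>1>0$.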
This gives us the following corollary to Theorems~\ref{thm: decreasing lower bound} and~\ref{thm: decreasing sketches}.
\begin{corollary}
Let $p<0$.
Any $(1\pm\epsilon)$-approximation algorithm for $F_p$ requires $\Omega(\min\{n,\epsilon^{\frac{-1}{1-p}}m^{\frac{-p}{1-p}}\})$ bits of space.
Such an approximation can be found with $O(\epsilon^{-\frac{2-p}{1-p}}m^{\frac{-p}{1-p}}\log^2{n}\log{M})$ bits in a turnstile stream and $O(\epsilon^{-\frac{2-p}{1-p}}m^{\frac{-p}{1-p}}\log{M})$ bits in an insertion-only stream.
\end{corollary}
For example, taking $p=-1$ we find that the complexity is approximately $\frac{\sigma}{\epsilon} = \min\{n,\theta(\epsilon^{-3/2}m^{1/2})\}$.
This is also the space complexity of approximating the harmonic mean of the nonzero frequencies.
It is apparent from the formula that the relationship between $m$ and $n$ is important for the complexity.
\section{Lower bounds for decreasing streaming sums}\label{sec: lower bounds}
It bears repeating that if $g(x)$ decreases to $0$ as $x\to\infty$ then one can always prove an $\Omega(n)$ lower bound on the space complexity of approximating $g(f)$.
However, the stream needed for the reduction may be very long (as a function of $n$).
Given only the streams in $\mathcal{T}$ or $\mathcal{I}$, those with $L^1$-length $m$ or less, a weaker lower bound may be the best available.
The present section proves this ``best'' lower bound, establishing Theorem~\ref{thm: decreasing lower bound}.
The proof uses a reduction from the communication complexity of disjointness, see the book of Kushilevitz and Nisan~\cite{kushilevitz1996communication} for background on communication complexity.
The proof strategy is to parameterize the lower bound reduction in terms of the frequencies $f$.
Optimizing the parameterized bound over $f\in\mathcal{F}$ gives the best possible bound from this reduction.
The proof of Theorem~\ref{thm: decreasing lower bound} is broken up into two lemmas.
The first lemma is used in the reduction from $\textsc{Disj}(s)$, the $s$-element disjointness communication problem.
It will show up again later when we discuss a fast scheme for computing $\sigma$ for general functions.
\begin{lemma}\label{lem: decreasing lower bound booster}
Let $y_i\in\mathbb{R}_{\geq 0}$, for $i\in[s]$, and let $v:\mathbb{R}\to\mathbb{R}_{\geq 0}$.
If $\sum y_i\leq Y$ and $\sum v(y_i)\leq V$, then there exists $i$ such that $\frac{s}{2} y_i\leq Y$ and $\frac{s}{2} v(y_i)\leq V$.
\end{lemma}
\begin{proof}
Without loss of generality $y_1\leq y_2\leq\cdots \leq y_s$.
Let $i_j$, $j\in [s]$, order the sequence such that $v(y_{i_1})\leq v(y_{i_2})\leq \cdots\leq v(y_{i_s})$ and let $I = \{i_j|j\leq \lfloor s/2\rfloor +1\}$.
By the Pigeon Hole Principle, there exists $i\in I$ such that $i\leq \lfloor s/2\rfloor +1$.
Thus $\frac{s}{2}y_i\leq \sum_{j=\lfloor s/2\rfloor+1}^s y_{j} \leq Y$ and $\frac{s}{2} v(y_i)\leq \sum_{j=\lfloor s/2\rfloor+1}^s v(y_{i_j}) \leq V$.
\end{proof}
\begin{lemma}\label{lem: decreasing lower bound}
Let $g$ be decreasing and $\epsilon>0$.
If $f = (y,y,\ldots,y,0,\ldots,0)\in\mathcal{F}$ and $g(f)\leq\epsilon^{-1}g(1)$,
then any $k$-pass $(1\pm\epsilon)$-approximation algorithm requires $\Omega(|\supp(f)|/k)$ bits of storage.
\end{lemma}
\newenvironment{proofsketch}{\paragraph{Sketch of proof.}}{}
\begin{proofsketch}
Let $\mathcal{A}$ be an $(1\pm\epsilon)$-approximation algorithm.
We use a reduction from the communication complexity of $\textsc{Disj}(s)$, where $s=\lfloor |\supp(f)|/2\rfloor$.
Alice is given $A\subseteq [s]$ and Bob is given $B\subseteq [s]$.
They jointly create a stream~$S$ with $s$~or~fewer distinct elements such that all of the frequencies are $1$, $y$, or~$y+1$, then they compute the approximation $\mathcal{A}(S)$ and compare the outcome to a threshold.
Computing $\mathcal{A}(S)$ requires them to transmit the memory $O(k)$ times.
The number of items in $S$ with frequency $1$ is $|A\cap B|$, so it can be arranged that when the intersection is empty $\mathcal{A}(S)$ is smaller than the threshold and otherwise it is larger.
The condition $g(f)\leq\epsilon^{-1}g(1)$ guarantees sufficient separation between the two cases.
We defer the complete proof to Appendix~\ref{app: lower bound proof}.
\end{proofsketch}
\newenvironment{lowerboundproof}{\paragraph{Proof of Theorem~\ref{thm: decreasing lower bound}.}}{}
\begin{lowerboundproof}
Let $f\in\mathcal{F}$ be a maximizer of \eqref{eq: ps definition} and apply Lemma~\ref{lem: decreasing lower bound booster} to the positive elements of $f$.
From this we find that there exists $y$ such that $ys'\leq \|f\|_1$ and $g(1)\geq\epsilon s'g(y)$, for $s' = \sigma/2$.
Therefore, $f'=(y,y,\ldots,y,0,\ldots,0)\in\mathcal{F}$ with $\lfloor s'\rfloor$ coordinates equal to $y$.
Applying Lemma~\ref{lem: decreasing lower bound} to $f'$ implies the desired bound.
\end{lowerboundproof}
With Lemma~\ref{lem: decreasing lower bound booster} in mind, one may ask: why not restrict the maximization problem in \eqref{eq: ps definition}, the definition of $\sigma$, to streams that have all frequencies equal and still get the same order lower bound?
This is a valid alternative definition.
In fact, doing so appreciably reduces the effort needed to compute $\sigma$; it is one of the main steps used by our algorithm to approximate $\sigma$ in Section~\ref{sec: computing ps}.
However, it makes reasoning about $\sigma$ a bit messier.
For example, in Section~\ref{sec: our results} we comment that if the frequency vector $f$ contains an $\epsilon$-heavy element then $|\supp(f)|\leq\sigma$.
This comes directly from the fact that $\{f'\in\mathcal{F} : g(f')\leq\epsilon^{-1}g(1)\}$ is the feasible set for \eqref{eq: ps definition}.
If we restrict the feasible set, then we cannot so directly draw the conclusion.
Rather, we must compare $g(f)$ to points in the restricted feasible set by again invoking Lemma~\ref{lem: decreasing lower bound booster}.
\section{Correctness of the algorithm}\label{sec: upper bounds}
This section presents the proof that our approximation algorithm is correct.
Algorithm~\ref{algo: approximate} describes the basic procedure, and Appendix~\ref{app: details} describes how it can be implemented in the streaming setting.
The correctness relies on our ability to perform the sampling and the following simple proposition.
\begin{proposition}\label{prop: sampling archetype}
Let $g$ be a nonnegative function and let $X_d\sim\text{Bernoulli}(p_d)$ be pairwise independent random variables with $p_d\geq \min\left\{1,\frac{9g(f_d)}{\epsilon^2 g(f)}\right\}$, for all $d\in[n]$.
Let $\hat{G} = \sum_{d=1}^np_d^{-1}X_dg(f_d)$, then $P(|\hat{G}-g(f)|\leq \epsilon g(f))\geq \frac{8}{9}$.
\end{proposition}
\begin{proof}
We have $E\hat{G}=g(f)$ and $Var(\hat{G})\leq\sum_d p_{d}^{-1}g(f_d)^2 \leq \frac{1}{9}(\epsilon g(f))^2$, by pairwise independence.
The proposition now follows from Chebyshev's inequality.
\end{proof}
The algorithm samples each element of $\supp(f)$ with probability approximately $\sigma/(\epsilon|\supp(f)|)$.
In order to show that this sampling probability is large enough for Proposition~\ref{prop: sampling archetype} we will need one lemma; its proof is given in Appendix~\ref{app: details}.
It gives us some control on $\sigma(\epsilon,g,m,n)$ as $\epsilon$ varies.
\begin{lemma}\label{lem: ps relations}
If $\alpha<\epsilon$, then $\epsilon (1+\sigma(\epsilon,g,m,n))\geq\alpha\sigma(\alpha,g,m,n) $.
\end{lemma}
For brevity, we only state here the correctness of the streaming model sampling algorithm, which uses standard techniques.
The details of the algorithm are given in the Appendix~\ref{app: details}.
\begin{lemma}\label{lem: sampling algorithm}
Given $s\leq n$, there is an algorithm using $O(s\log^2{n}\log{M})$ bits of space in the turnstile model and $O(s\log{M}+\log^2{n})$ bits in the insertion-only model that samples each item of $\supp(f)$ pairwise-independently with probability at least $\min\{1,s/|\supp(f)|\}$ and, with probability at least $7/9$, correctly reports the frequency of every sampled item and the sampling probability.
\end{lemma}
Finally, we prove the correctness of our approximation algorithm.
Here is where we will again use the optimality of $\sigma$ in its definition \eqref{eq: ps definition}.
In regards to the lower bound of Theorem~\ref{thm: decreasing lower bound}, this upper bound leaves gaps of $O(\epsilon^{-1}\log^2{n}\log{M})$ and $O(\epsilon^{-1}\log{M})$ in the turnstile and insertion-only models, respectively.
\newenvironment{upperboundproof}{\paragraph{Proof of Theorem~\ref{thm: decreasing sketches}.}}{}
\begin{upperboundproof}
We use the algorithm of Lemma~\ref{lem: sampling algorithm} to sample with probability at least $q= \min\{1,9(\sigma+1)/\epsilon|\supp(f)|\}$.
Let us first assume that $q\geq \min\{1, 9g(f_d)/\epsilon^2 g(f)\}$, for all $d$, so that the hypothesis for Proposition~\ref{prop: sampling archetype} is satisfied.
The algorithm creates samples $W_i$, for $i=0,1,\ldots,O(\log{n})$, where each item is sampled in $W_i$ with probability $q_i=2^{-i}$.
For each $i$ such that $q_i\geq q$, Proposition~\ref{prop: sampling archetype} guarantees that $\hat{G}_i=q_i^{-1}\sum_{d\in W_i}g(f_d)$ is a $(1\pm\epsilon)$-approximation with probability at least $8/9$.
With probability at least $7/9$, the algorithm returns one of these samples correctly, and then the approximation guarantee holds.
Thus, the approximation guarantee holds with probability at least $2/3$.
It remains to show that $q\geq \min\{1,9g(f_d)/\epsilon^2 g(f)\}$, for all $d\in[n]$.
Let $\alpha = g(1)/g(f)$ then define $\sigma_\epsilon = \sigma(\epsilon,g,m,n)$ and $\sigma_\alpha=\sigma(\alpha,g,m,n)$.
By definition $|\supp(f)|\leq\sigma_\alpha$, thus if $\alpha \geq\epsilon$ then $|\supp(f)|\leq \sigma_\alpha\leq \sigma_\epsilon$, so the sampling probability is 1 and the claim holds.
Suppose that $\alpha<\epsilon$.
For all $d\in[n]$, we have
\begin{align*}
\frac{g(f_d)}{g(f)}\leq\frac{g(1)}{g(f)} = \alpha \leq \frac{\epsilon(1+\sigma_\epsilon)}{\sigma_\alpha} \leq \frac{\epsilon(1+\sigma_\epsilon)}{|\supp(f)|},
\end{align*}
where the second inequality comes from Lemma~\ref{lem: ps relations} and the third from the definition of $\sigma_\alpha$ as a maximum.
In particular, this implies that
\[\frac{9\epsilon^{-1}(\sigma+1)}{|\supp(f)|}\geq \frac{9g(f_d)}{\epsilon^2g(f)},\]
which completes the proof.
\end{upperboundproof}
\section{Computing $\sigma$}\label{sec: computing ps}
The value $\sigma$ is a parameter that is needed for Algorithm~\ref{algo: approximate}.
That means we need a way to compute it for any decreasing function.
As we mentioned before, the only penalty for overestimating $\sigma$ is inflation of the storage used by the algorithm so to over-estimate $\sigma$ by a constant factor is acceptable.
This section shows that one can find $\sigma'$ such that $\sigma\leq\sigma'\leq 4\sigma$ quickly and by evaluating $g$ at just $O(\log{m})$ points.
Because $g$ is decreasing, the maximum of~\eqref{eq: ps definition} will be achieved by a vector $f$ of length $m$.
Lemma~\ref{lem: decreasing lower bound booster} says that we might as well take all of the other frequencies to be equal, so we can find a near maximizer by enumerating the single value of those frequencies.
Specifically,
\[s(y) = \min\left\{\frac{m}{y},\frac{g(1)}{\epsilon g(y)}\right\}\]
is the maximum bound we can achieve using $y$ as the single frequency.
The value of $\sigma$ is at most twice $\max\{s(y):(m/n)\leq y\leq m\}$, by Lemma~\ref{lem: decreasing lower bound booster}.
But we do not need to check every $y=1,2,\ldots,m$ to get a pretty good maximizer.
It suffices to check only values where $y$ is a power of two.
Indeed, suppose that $y^*$ maximizes $s(y)$ and let $y^*\leq y'\leq 2y^*$.
We will show that $s(y')\geq s(y^*)/2$, and since there is a power of two between $y^*$ and $2y^*$ this implies that its $s$ value is at least $s(y^*)/2\geq \sigma/4$.
Since $y^*$ is a maximizer we have $s(y')\leq s(y^*)$, and because $y'\geq y^*$ and $g$ is decreasing we have $g(y')\leq g(y^*)$.
This gives us
\[\frac{g(1)}{\epsilon g(y')}\geq \frac{g(1)}{\epsilon g(y^*)}\geq s(y^*).\]
We also have
\[\frac{m}{y'}\geq \frac{m}{2y^*}\geq \frac{1}{2}s(y^*).\]
Combining these two we have $s(y')\geq s(y^*)/2$.
Thus, one can get by with enumerating at most $\lg m$ values to approximate the value of the parameter $\sigma$.
Take the largest of the $\lg m$ values tried and quadruple it to get the approximation to $\sigma$.
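The resulting procedure is summarized by the following Python sketch (function names ours; $g$ is supplied as an oracle, in line with our model of computation).
\begin{verbatim}
def approx_sigma(g, eps, m, n):
    # Enumerate the common frequency y over powers of two up to m and
    # evaluate s(y) = min(m/y, g(1)/(eps*g(y))).
    best, y = 0.0, 1
    while y <= m:
        best = max(best, min(m / y, g(1) / (eps * g(y))))
        y *= 2
    # sigma' = 4*max s(y) satisfies sigma <= sigma' <= 4*sigma,
    # and sigma <= n always holds.
    return min(n, 4 * best)

# e.g., for the frequency negative moment F_{-1}: g(y) = 1/y
print(approx_sigma(lambda y: 1.0 / y, eps=0.01, m=10**6, n=10**5))
\end{verbatim}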
\section{Introduction\label{Intro}}
Classical methods of forecasting include exponential smoothing and moving average models, ARIMA, Bayesian non parametric models including Gaussian Processes and other machine learning schemes such as Forward and Recurrent Neural Networks, Deep Learning, LSTMs, Reinforcement Learning, Reservoir Computing and Fuzzy Systems (\cite{Granger1969,de200625,lukovsevivcius2009reservoir,brockwell2016introduction,greff2016lstm,pathak2018model,vlachas2020backpropagation,wyffels2010comparative}).
An inherent problem of using such data-driven modelling/forecasting approaches, when dealing with temporal measurements in a high-dimensional feature space, is the ``curse of dimensionality'' (\cite{bellmann1957dynamic}). In such cases, training, i.e.~the estimation of the values of the model parameters, requires a large number of snapshots, which increases exponentially with the dimension of the feature space. Thus, a fundamental task in modelling and forecasting is that of dimensionality reduction. Out of the many features that can be measured in time, one has first to identify the intrinsic dimension of the possible low-dimensional subspace (manifold) and the corresponding vectors that span it, which actually govern the emergent / macroscopically observed dynamics of the system under study. Assuming that the emergent dynamics evolve on a smooth enough low-dimensional manifold, an arsenal of linear, such as Singular value Decomposition (SVD) (\cite{golub1971singular}) and Dynamic Mode Decomposition (DMD) (\cite{schmid2010dynamic,mann2016dynamic,kutz2016dynamic}), and nonlinear such as kernel-PCA (\cite{scholkopf1998nonlinear}), Locally Linear Embedding (LLE) (\cite{roweis2000nonlinear}), ISOMAP (\cite{tenenbaum2000global,balasubramanian2002isomap}), Laplacian Eigenmaps (\cite{belkin2003laplacian}), Diffusion Maps (DMs) (\cite{coifman2005geometric,coifman2006diffusion,coifman2006geometric,nadler2006diffusion,coifman2008diffusion}), and Koopman operator (\cite{mezic2013analysis,williams2015data,dietrich2020koopman}) related manifold learning algorithms can be exploited towards this direction.
For the two-fold task of dimensionality reduction and modelling in the low-dimensional subspace, \cite{chiavazzo2014reduced} constructed reduced kinetics models, by extracting the slow dynamics on a manifold globally parametrized by a truncated DM. A comparison of the reconstructed and the original high-dimensional dynamics was also reported. The reconstruction was achieved using Radial Basis Functions (RBFs) interpolation and Geometric Harmonics (GHs). \cite{liu2014coarse} used DMs to identify coarse variables that govern the emergent dynamics of a particle-based model of animal swarming and, based on these, they constructed a reduced stochastic differential equation. \cite{dsilva2015data} used DMs with a Mahalanobis distance as a metric to parametrize the slow manifold of multiscale dynamical systems. \cite{williams2015data} addressed an extension of DMD to compute approximations of the Koopman eigenvalues, thus showing that for large data sets the procedure converges to the numerical approximation one would obtain from a Galerkin method. The performance of the method was tested via the unforced Duffing equation and a stochastic differential equation with a double-well potential that
converges to a Fokker-Planck equation. \cite{brunton2016discovering} used SVD to embed the dynamics of nonlinear systems in a linear manifold spanned by a few SVD modes, and constructed state-space linear models on the manifold to approximate the full dynamics. The so-called Sparse Identification of the Nonlinear Dynamics (SINDy) method was demonstrated using the Lorenz equations and the 2D Navier–Stokes equations. \cite{bhattacharjee2016nonlinear} used ISOMAP to construct reduced-order models of heterogeneous hyperelastic materials in the 3D space, whose solutions are obtained by finite elements, while for the construction of the inverse map used RBF interpolation. \cite{wan2017reduced} proposed a methodology for forecasting and quantifying
uncertainty in a reduced-dimensionality space, based on Gaussian process regression. The efficiency of the proposed approach was validated using data series from the Lorenz 96 model, the
Kuramoto-Sivashinsky, as well as a barotropic climate model in the form of a PDE. \cite{nathan2018applied} proposed a scheme based on the Koopman-operator theory to embed spatio-temporal dynamics of PDEs with the aid of DMD, for approximating the evolution of the high-dimensional dynamics on the reduced manifold; the reconstruction of the states of the original system was achieved by a linear transformation. \cite{chen2018molecular} have proposed a molecular enhanced sampling method based on the concept of autoencoders ~\cite{kramer1991nonlinear} to discover the low-dimensional projection of Molecular Dynamics and then reconstruct back the atomic coordinates.
\cite{vlachas2018data} proposed an LSTM-based method to predict the high-dimensional dynamics from the information acquired in a low-dimensional space. The embedding space is constructed based on the Takens embedding theorem (\cite{takens1981detecting}). The method is demonstrated through time-series obtained from the Lorenz 96 equations, the Kuramoto–Sivashinsky equation and a barotropic climate model as in \cite{wan2017reduced}.~\cite{lee2020coarse} used DMs and Automatic Relevance Determination to construct embedded PDEs by the aid of Artificial Neural Networks (ANNs) and Gaussian Processes. The methodology was illustrated with Lattice-Boltzman simulations of the 1D Fitzhugh-Nagumo model. \cite{koronaki2020data} used DMs to embed the dynamics of a one-dimensional tubular reactor as modelled by a system of two PDEs on a 2D manifold, and then constructed a feedforward ANN to learn the dynamics on the 2D manifold. The efficiency of the scheme was compared with the original high-dimensional PDE dynamics by lifting the reduced dynamics back to the original space using GHs and RBFs. Recently, \cite{lin2021data} used the Koopman and the Mori-Zwanzig projection operator formalism to construct reduced-order models for chaotic and randomly forced dynamical systems, and then based on the Wiener projection they derived Nonlinear Auto-Regressive Moving Average with exogenous input models. The approach was demonstrated through the Kuramoto-Sivashinsky PDE and the viscous Burgers equation with stochastic forcing.
The performance of most of the above schemes was assessed in terms of interpolation, namely, the data are produced using dynamical models (ODEs, PDEs, SDEs or agent-based models) that were simulated in some specific interval of the high-dimensional domain, then reduced order models were constructed and trained based on the generated data, and finally a reconstruction of the high-dimensional dynamics was implemented within the original interval of the high-dimensional domain.
\subsection{Our contribution\label{Contribution}}
We assess the performance of a manifold learning numerical scheme within a different context: forecasting, i.e. out-of-sample extrapolation. This is a conceptually different task with respect to the interpolation problem of the model reduction of well-defined dynamical models in the form of ODEs or PDEs, as attempted in the above-mentioned works (see also the discussion in the conclusion section).
We address a three-tier scheme to perform forecasting in the high-dimensional space, by
\begin{description}
\item[(i)] reducing the high-dimensional time series into a low-dimensional manifold using manifold learning;
\item[(ii)] constructing and training reduced-order models on the manifold, and, based on these, making predictions;
\item[(iii)] lifting the predictions in the high-dimensional space.
\end{description}
The performance of this ``embed-forecast-lift'' scheme is assessed through four sets of time series: three of them are synthetic time series resembling EEG recordings generated by linear and nonlinear discrete stochastic models with different model orders, while the fourth set is a real-world data set containing the time series of 10 key pairs of foreign exchange prices retrieved from the \url{www.investing.com} open API, spanning the period
03/09/2001-29/10/2020.
For our computations, we used two well-established and widely applied nonlinear manifold learning algorithms, namely the LLE algorithm and DMs, two types of models, namely Multivariate Autoregressive (MVAR) and Gaussian Process Regression (GPR) models, and two operators for lifting, namely RBFs and GHs. We considered various combinations of the previous methods and compared them against the naive random walk and MVAR and GPR models implemented directly in the original space. A comparison with the results obtained with the leading PCA components for the case of the FOREX forecasting problem is also provided. To the best of our knowledge, this is the first work that addresses the problem of forecasting based on manifold learning, providing a comparison of various embedding, modelling and reconstruction approaches, and assessing their performance on both synthetic time series and a real-world FOREX data set.
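To make the three steps concrete, the following Python sketch (using scikit-learn and SciPy; a minimal illustration with default hyperparameters, not the exact configuration used in our experiments) chains an LLE embedding, a first-order GPR predictor on the manifold, and an RBF-interpolation lifting.
\begin{verbatim}
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.gaussian_process import GaussianProcessRegressor
from scipy.interpolate import RBFInterpolator

def embed_forecast_lift(X, d=3, k=12):
    # X: (N, D) array of snapshots x_1, ..., x_N in the ambient space
    lle = LocallyLinearEmbedding(n_neighbors=k, n_components=d)
    Y = lle.fit_transform(X)             # (i)  embed on the manifold

    gpr = GaussianProcessRegressor()     # (ii) one-step model y_t -> y_{t+1}
    gpr.fit(Y[:-1], Y[1:])
    y_next = gpr.predict(Y[-1:])         # out-of-sample manifold point

    lift = RBFInterpolator(Y, X)         # (iii) lift back to R^D
    return lift(y_next)[0]               # forecast of x_{N+1}
\end{verbatim}
The MVAR and GPR models described in section~\ref{ForecastingModels} include longer time lags, and GHs provide an alternative to the RBF lifting; the skeleton, however, is the same.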
\subsection{Structure of the article\label{Structure}}
The structure of this paper is as follows. In section~\ref{MethodologyPrelimin}, we introduce our methodology and give some preliminaries on manifolds, tangent spaces, which provide the keystones for the development and implementation of the nonlinear manifold learning algorithms. For the completeness of the presentation, in section~\ref{ManifoldLearning} we describe in a concise way LLE and DMs, and in section~\ref{ForecastingModels} we briefly present the problem of forecasting with regression models, in particular with the use of MVAR and GPR. In section~\ref{LiftingOperators}, we discuss how the reconstruction of the high-dimensional space can be achieved using RBFs and GHs. In section~\ref{Forecasting}, we present the problems that we used to assess the forecasting performance of the proposed ``embed-train-lift'' framework, and in section~\ref{NumericalResults} we present the numerical results quantifying this performance, providing a comparison among the different manifold learning algorithms, regression models and reconstruction methods. Finally, we give some conclusions in section~\ref{Conclusion}, discussing also future directions.
\section{Methodology and preliminaries\label{MethodologyPrelimin}}
Let us denote by $\boldsymbol{x}_i \in \mathbb{R}^D$ the vector containing the features at the $i$-th snapshot of the time series, and by $\boldsymbol{X} \in \mathbb{R}^{D \times N}$ the matrix having as columns the vectors spanning a time window of $N$ observations. The assumption is that the data lie on a ``smooth enough'' low-dimensional (say of dimension $d$) manifold that is embedded in the high-dimensional $\mathbb{R}^D$ space. Our aim is to exploit manifold learning algorithms to forecast the time series in the high-dimensional feature space. Thus, our purpose is to bypass the ``curse of dimensionality'' associated with both the dimension of the original data and, importantly, with the limited size of their snapshots in time that usually characterize real-world data (such as financial data). Towards this aim, we propose a numerical scheme that consists of three steps.
At the first step, we employ embedding algorithms (LLE and DMs) to construct nonlinear maps from the high-dimensional to a low-dimensional subspace that preserve as much as possible the intrinsic geometry of the manifold (i.e.~maps assuring that neighborhoods of points in the high-dimensional space are mapped to neighborhoods of the same points in the low-dimensional manifold, and the notion of distance between the points is maintained as much as possible). In general it is expected that the manifold learning algorithms will provide an approximation to the intrinsic embedding dimension, which will differ from the ``true'' one. This depends on the threshold that one puts for the selection of the number of embedded vectors that span the low-dimensional manifold.
A central concept here is the Riemannian metric, which defines the properties of this map. At the second step, based on the resulting embedding features that span the low-dimensional subspace, we train a class of regression models (MVAR and GPR) to predict the evolution of the embedded time-series on the manifold based on their past values. The final step implements the lifting map. The aim here is to form the inverse map from the out-of-sample forecasted embedded points to the reconstruction of the features in the high-dimensional space. On one hand, the existence of such an inverse map is theoretically guaranteed by the assumption that a low-dimensional ``sufficiently smooth'' manifold exists. On the other hand, the data-driven derivation of the corresponding inverse map is neither unique (\cite{lee2013smooth}) nor trivial as for example when using PCA. In general, one has to solve a nonlinear least squares problem requiring the minimization of an objective function that can be formed using different criteria for the properties of the neighborhood of points; this results in different inverse maps whose performance has to be validated.
Next, for the completeness and clarity of the presentation, we present basic concepts of the manifold theory, useful to understand the steps of the proposed numerical methodology.
Manifold learning techniques can be viewed as unsupervised learning algorithms in the sense that they ``learn'' from the available data a low-dimensional representation of the original high-dimensional space, thus providing an ``optimal'' (under certain assumptions) embedded subspace where the information available in the high-dimensional space is preserved
as much as possible. Here, we briefly present some basic elements of the theory of manifolds and manifold learning (see e.g. \cite{berger2012differential,kuhnel2015differential,lee2013smooth,wang2012geometric}).
Let us start with the definition of a $d$-dimensional manifold.
\begin{definition}[Manifold]
\textit{A set $M \subset \mathbb{R}^{n}$ is called a $d$-dimensional manifold (of class $C^{\infty}$) if for each point $\boldsymbol{p} \in M$, there is an open set $W \subset M$ such that there exists a $C^{\infty}$ bijection $\boldsymbol{f}: U \rightarrow W$, $U \subset \mathbb{R}^d$ open, with $C^{\infty}$ inverse $\boldsymbol{f}^{-1}: W \rightarrow U$ (i.e. $W$ is $C^{\infty}$-diffeomorphic to $U$). The bijection $\boldsymbol{f}$ is called a local parametrization of $M$, while $\boldsymbol{f}^{-1}$ is called a local coordinate mapping, and the pair $(W,\boldsymbol{f}^{-1})$ is called a chart (or neighborhood) of $\boldsymbol{p}$ on $M$.}
\end{definition}
\noindent
Thus, in a coordinate system $(W,\boldsymbol{f}^{-1})$, a point $\boldsymbol{p} \in W$ can be expressed by the coordinates $(f^{-1}_1,f^{-1}_2,\dots, f^{-1}_n)$, where $f^{-1}_i$ is the $i$-th element of $\boldsymbol{f}^{-1}$. A chart $(W,\boldsymbol{f}^{-1})$ is centered at $\boldsymbol{p}$ if and only if $\boldsymbol{f}^{-1}(\boldsymbol{p}) = \boldsymbol{0}$.
In addition, we always assume that the manifold satisfies the $T_2$ Hausdorff separation axiom, stating that every pair of distinct points on $M$ have disjoint open neighborhoods. Summarizing, a manifold is a Hausdorff (separated/$T_2$) space in which every point has a neighborhood that is diffeomorphic to an open subset of $\mathbb{R}^d$.
\begin{definition}[Tangent space and tangent bundle to a manifold]
\textit{Let $\boldsymbol{p}\in M \subset \mathbb{R}^n$. The tangent space to $M$ at $\boldsymbol{p}$ is the vector subspace $T_{\boldsymbol{p}}M$ formed by the tangent vectors at $\boldsymbol{p}$ defined by
$$ T_{\boldsymbol{p}}M = D\boldsymbol{f}_{\boldsymbol{u}}(T_{\boldsymbol{u}}\mathbb{R}^{d}), \quad \boldsymbol{f}(\boldsymbol{u})=\boldsymbol{p}, $$
where $T_{\boldsymbol{u}}\mathbb{R}^{d}$
is the tangent space at $\boldsymbol{u}$ and
\begin{equation*}
D\boldsymbol{f}_{\boldsymbol{u}}(T_{\boldsymbol{u}}\mathbb{R}^{d})=\left[\frac{\partial f_i(\boldsymbol{u})}{\partial u_j} \right]=
\begin{bmatrix} \frac{\partial f_1}{\partial u_1} & \frac{\partial f_1}{\partial u_2} & \dots & \frac{\partial f_1}{\partial u_d}\\
\frac{\partial f_2}{\partial u_1} & \frac{\partial f_2}{\partial u_2} & \dots & \frac{\partial f_2}{\partial u_d}\\
\vdots & \vdots & \ddots & \vdots\\
\frac{\partial f_n}{\partial u_1} & \frac{\partial f_n}{\partial u_2} & \dots & \frac{\partial f_n}{\partial u_d}
\end{bmatrix},
\end{equation*}
with $\mathrm{rank}(D\boldsymbol{f})=d$. Thus, $S= \left\{ \frac{\partial \boldsymbol{f}}{\partial u_1}, \frac{\partial \boldsymbol{f}}{\partial u_2}, \dots, \frac{\partial\boldsymbol{f}}{\partial u_d} \right\}$ is a basis for $T_{\boldsymbol{p}}M$.
The union of tangent spaces
$$TM = \cup_{\boldsymbol{p} \in M} T_{\boldsymbol{p}}M$$
is called the tangent bundle of $M$.}
\end{definition}
\begin{definition}[Riemannian manifold and metric]
\textit{A Riemannian manifold is a manifold endowed with a positive definite inner product $g_{\boldsymbol{p}}$ defined on the tangent space $T_{\boldsymbol{p}}M$ at each point $\boldsymbol{p}$. The family of inner products $g_{\boldsymbol{p}}$ is called a Riemannian metric and the Riemannian manifold is denoted $(M,g)$.}
\end{definition}
The above definitions refer to the continuum limit, i.e. an infinite number of points. However, for real data with a limited number of observations, one can only approximate the manifold; the analytic charts and the Riemannian metric that actually define the geometry of the manifold are simply not available. Thus, a consistent manifold learning algorithm numerically constructs a map that approximates, in a probabilistic sense, the continuous one as the sample size $N \to \infty$. A fundamental preprocessing step of a class of manifold learning algorithms, such as Laplacian Eigenmaps, LLE, ISOMAP and DMs, is therefore the construction of a weighted graph, say $\mathcal{G}(\boldsymbol{X},E)$, where $E$ denotes the set of weights between points. This construction is based on a predefined affinity measure (such as a Gaussian kernel) and/or the $k$-NN algorithm, which is used to appropriately connect each point $\boldsymbol{x}\in\mathbb{R}^D$ with the others, thus defining a weighted graph of neighborhoods of points. Theoretically, with an appropriate choice of the affinity measure and its parameters, the graph is guaranteed to be connected, i.e.~there is a path between every pair of points in $\boldsymbol{X}$ (\cite{lee2013smooth}).
For the completeness of the presentation, in the following sections we briefly discuss the manifold learning algorithms, the regression models and the lifting techniques used in this work.
\section{Manifold learning algorithms\label{ManifoldLearning}}
\subsection{Locally Linear Embedding (LLE)\label{LocalLinearEmbeddingFramework}}
The LLE algorithm (\cite{Saul2003}) is a nonlinear manifold learning algorithm which constructs $\mathcal{G}(\boldsymbol{X},E)$ based on the $k$-NN algorithm. It is assumed that the neighborhood forms a basis for the reconstruction of any point in the neighborhood itself. Thus, every point is written as a linear combination of its neighbors. The weights of all pairs are then estimated with least squares, minimizing the $L_2$ norm of the difference between the points and their neighborhood-based reconstruction. The same rationale is assumed for the low-dimensional subspace, keeping the same estimates of the weights in the high-dimensional space. In the low-dimensional subspace, one now seeks the coordinates of the points that minimize the $L_2$ norm of the difference between the coordinates of the points on a $d$-dimensional manifold and the weighted neighborhood reconstruction. The minimization problem is represented by an eigenvector problem, whose $d$ bottom non-zero eigenvectors provide an ordered set of orthogonal coordinates that span the manifold. It should be noted that, in order to obtain a unique solution to the minimization problem, the number $k$ of nearest neighbors should not exceed the dimension of the input space (\cite{Saul2003}).
Thus, the LLE algorithm can be summarized in the following three basic steps (\cite{Saul2003}).
\begin{enumerate}
\item Identify the $k$ nearest neighbors for all $\boldsymbol{x}_{i} \in \mathbb{R}^D , \; i=1,2,\dots,N$ (with $k\ge d + 1$).
For each $\boldsymbol{{x}}_{i}$ this forms a set $K\{\boldsymbol{{x}}_{i}\} \subset \mathbb{R}^{k \times D}$ containing the $k$ nearest neighbors of $\boldsymbol{{x}}_{i}$.
\item Write each $\boldsymbol{{x}}_{i}$ in terms of $K\{\boldsymbol{{x}}_{i}\}$ as
\begin{equation}
\boldsymbol{{x}}_{i} = \sum_{j \in K\{\boldsymbol{{x}}_{i}\}} w_{ij}\boldsymbol{{x}}_{j},
\label{eq1LLE}
\end{equation}
and find the matrix $\boldsymbol{W} = [w_{ij}] \in \mathbb{R}^{N \times N}$ of the unknown weights by minimizing the objective function
\begin{equation}
\mathcal{L}(\boldsymbol{W}) = \sum_{i=1}^{N} \left\lVert \boldsymbol{{x}}_{i} - \sum_{j=1, j\neq i}^{N} w_{ij}\boldsymbol{x}_{j} \right\rVert^{2}_{L_2}, \label{eq2LLE}
\end{equation}
with the constraint
\begin{equation}
\sum_{j=1}^{N} w_{ij} = 1.
\end{equation}
\item Embed the points $\boldsymbol{{x}}_{i} \in \mathbb{R}^D$, $i=1,2,\dots, N$, into a low-dimensional space with coordinates $\boldsymbol{y}_{i} \in \mathbb{R}^d$, $i=1,2,\dots, N$, $d \ll D$. This step in LLE is accomplished by computing the vectors $\boldsymbol{y}_{i} \in \mathbb{R}^d$ that minimize the objective function:
\begin{equation}
\phi(\boldsymbol{Y})=\sum_{i=1}^{N} \left\lVert \boldsymbol{y}_i - \sum_{j=1, j\neq i}^{N} w_{ij}\boldsymbol{y}_{j} \right\rVert ^{2}_{L_2}
\label{eq3LLE},
\end{equation}
where the weights $w_{ij}$ are fixed at the values found by solving the minimization problem \eqref{eq2LLE}. To obtain a well-posed problem, the embedding vectors $\boldsymbol{y}_i$ are constrained to be centered at the origin with an identity covariance matrix (\cite{Saul2003}).
\end{enumerate}
The cost function \eqref{eq3LLE} is quadratic and can be stated as
\begin{equation}
\phi(\boldsymbol{Y})=\sum_{i=1}^{N} \sum_{j=1}^{N} Q_{ij}\langle \boldsymbol{y}_{i}, \boldsymbol{y}_{j} \rangle,
\end{equation}
involving the inner products of the embedding vectors and a square matrix $\boldsymbol{Q}$ with elements
\begin{equation}
Q_{ij} = \delta_{ij} - w_{ij} - w_{ji} + \sum_k w_{ki} w_{kj},
\end{equation}
$\delta_{ij}=1$ if $i=j$, and $0$ otherwise. In matrix form, $\boldsymbol{Q}$ can be written as an $N \times N$ \emph{sparse} symmetric matrix:
\begin{equation}
\boldsymbol{Q} = (\boldsymbol{I}-\boldsymbol{W})^T (\boldsymbol{I}-\boldsymbol{W}).
\end{equation}
In practice, this sparse representation of $\boldsymbol{Q}$ gives rise to a significant computational reduction, especially when $N$ is large, since left multiplication by $\boldsymbol{Q}$ is given by
\begin{equation}
\boldsymbol{Q}\boldsymbol{v} = (\boldsymbol{v}-\boldsymbol{W}\boldsymbol{v}) - \boldsymbol{W}^T(\boldsymbol{v}-\boldsymbol{W}\boldsymbol{v}), \quad \boldsymbol{v} \in \mathbb{R}^N,
\end{equation}
requiring just two multiplications by the sparse matrices $\boldsymbol{W}$ and $\boldsymbol{W}^T$.
Minimizing the cost function \eqref{eq3LLE} can be computed via the eigenvalue decomposition of $\boldsymbol{Q}$. Specifically, the eigenvector associated with the smallest (zero) eigenvalue is the vector with all 1's and it trivially minimizes \eqref{eq3LLE} (\cite{Saul2003}); it is disregarded because it leads to a constant (degenerate) embedding. The optimal embedding is therefore obtained by the $d$ eigenvectors of $\boldsymbol{Q}$, denoted as $\boldsymbol{q}_k \in \mathbb{R}^N$, $k=1,2,\dots,d$, corresponding to the next $d$ smallest eigenvalues. Thus, the coordinates of $\boldsymbol{x}_i$, $i=1,2,\dots,N$, in the embedded space are given by the vector
\begin{equation}
\boldsymbol{\mathcal{R}}(\boldsymbol{x}_i)=\boldsymbol{y}_i=[q_{1i},q_{2i},\dots,q_{di}]^T,
\end{equation}
where $q_{ji}$ denotes the $i$-th element of eigenvector $\boldsymbol{q}_{j}$.
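To make the above steps concrete, the following is a minimal Python sketch of the LLE embedding using \texttt{scikit-learn}, whose \texttt{LocallyLinearEmbedding} class implements the three steps above; the data matrix and the values of $k$ and $d$ are illustrative placeholders rather than the settings used in our experiments.
\begin{verbatim}
# Minimal sketch of the LLE embedding (illustrative data and parameters).
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
X = rng.standard_normal((1500, 5))      # N=1500 points in D=5 dimensions

d, k = 2, 20                            # embedding dimension, neighbors
lle = LocallyLinearEmbedding(n_neighbors=k, n_components=d,
                             method="standard")
Y = lle.fit_transform(X)                # rows y_i in R^d
print(Y.shape)                          # (1500, 2)
\end{verbatim}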
\subsection{Diffusion Maps (DMs)\label{DiffusionMaps}}
Here, the construction of the affinity matrix is based on the computation of a random walk on the graph $\mathcal{G}(\boldsymbol{X},E)$. The first step is to construct a graph using a kernel function,
say $k(\boldsymbol{x}_i,\boldsymbol{x}_j)$.
The kernel function is chosen to be symmetric and positive definite, so that it can play the role of a Riemannian metric on the data.
Standard kernels, such as the Gaussian kernel, typically define a neighborhood of each point $\boldsymbol{x}_i$, i.e.~a set of points $\boldsymbol{x}_j$ having strong connections with $\boldsymbol{x}_i$, namely, large values of $k(\boldsymbol{x}_i,\boldsymbol{x}_j)$.
At the next step, one constructs a Markovian transition matrix, say $\boldsymbol{P}$, whose elements correspond to the probability of jumping from one point to another in the high-dimensional space.
This transition matrix defines a Markovian (i.e.~memoryless) random walk $X_t$ by setting
$$ p_{ij} = p(\boldsymbol{x}_i,\boldsymbol{x}_j) = \text{Prob}(X_{t+1} = \boldsymbol{x}_j|X_t = \boldsymbol{x}_i).$$
For a graph constructed from a sample of finite size $N$, the weighted degree of a point (node) is defined by
\begin{equation}
deg(\boldsymbol{x}_i) = \sum_{j=1}^N k(\boldsymbol{x}_i,\boldsymbol{x}_j),
\end{equation}
and the volume of the graph is given by
$$ vol(\mathcal{G}) = \sum_{i=1}^N deg(\boldsymbol{x}_i).$$
Then, the random walk on such a weighted graph can be defined by the transition probabilities
\begin{equation}
p_{ij} = p(\boldsymbol{x}_i,\boldsymbol{x}_j) = \frac{k(\boldsymbol{x}_i,\boldsymbol{x}_j)}{deg(\boldsymbol{x}_i)}.
\label{transprob}
\end{equation}
Clearly, from the above definition, we have that $\sum_{j=1}^{N} p(\boldsymbol{x}_i,\boldsymbol{x}_j)=1$.
We note that in the continuum, the above can be described as a continuous Markov process on a probability space $(\Omega, \mathcal{H}, \mathcal{P})$, where $\Omega$ is the sample space, $\mathcal{H}$ is a $\sigma$-algebra of events in $\Omega$, and $\mathcal{P}$ is a probability measure.
Let $\mu$ be the density function of the points in the sample space $\Omega$ induced from the probability measure, $\mu: \mathcal{H} \rightarrow \mathbb{R}$ with $\mu(\Omega)=1$.
Then, using the kernel function $k$, the transition probability from a point $\boldsymbol{x}\in \Omega$ to another point $\boldsymbol{y} \in \Omega$ is given by
\begin{equation}
p(\boldsymbol{x},\boldsymbol{y})=\frac{k(\boldsymbol{x},\boldsymbol{y})}{
\int_{\boldsymbol{\Omega}} k(\boldsymbol{x},\boldsymbol{y}) d\mu(\boldsymbol{\Omega})},
\end{equation}
which is the continuous-space counterpart of \eqref{transprob}.
The above procedure defines a row-stochastic transition matrix, $\boldsymbol{P} = [p_{ij}]$, which encapsulates the information about the neighborhoods of the points.
Note that by raising $\boldsymbol{P}$ to the power of $t=1,2,\dots $, we get the jumping
probabilities in $t$ steps. This way, the underlying geometry is revealed through high or low transition probabilities between the points, i.e. paths that follow the underlying geometry have a high probability of occurrence, while paths away from the ``true'' embedded structure have a low probability. Note that the $t$-step transition probabilities, say $p_t(\boldsymbol{x}_i,\boldsymbol{x}_j)$, satisfy the Chapman-Kolmogorov equation:
\begin{equation}
p_{t_1+t_2}(\boldsymbol{x}_i,\boldsymbol{x}_j)=\sum_{\boldsymbol{x}_k\in \boldsymbol{X}}p_{t_1}(\boldsymbol{x}_i,\boldsymbol{x}_k)p_{t_2}(\boldsymbol{x}_k,\boldsymbol{x}_j).
\end{equation}
The Markov process defined by the probability matrix $\boldsymbol{P}$ has a stationary distribution given by
\begin{equation}
\pi(\boldsymbol{x}_i)=\frac{deg(\boldsymbol{x}_i)}{\sum_{\boldsymbol{x}_j\in \boldsymbol{X}}deg(\boldsymbol{x}_j)},
\end{equation}
and it is reversible, i.e.
\begin{equation}
\pi(\boldsymbol{x}_i)p(\boldsymbol{x}_i,\boldsymbol{x}_j)= \pi(\boldsymbol{x}_j)p(\boldsymbol{x}_j,\boldsymbol{x}_i), \quad \forall \boldsymbol{x}_i, \boldsymbol{x}_j \in \boldsymbol{X}.
\end{equation}
Furthermore, if the kernel is appropriately chosen so that the graph is connected, then the Markov chain is ergodic and irreducible. The Brouwer Fixed Point Theorem (\cite{kellogg1976constructive}) implies that the transition matrix of an ergodic process has a stationary vector $\boldsymbol{\pi}$ such that:
\begin{equation}
\boldsymbol{P}^T\boldsymbol{\pi}=\boldsymbol{\pi},
\end{equation}
and hence the transition matrix of an ergodic Markov chain always has an eigenvalue 1 corresponding to the stationary state. From the Perron-Frobenius Theorem, we know that its geometric multiplicity is 1. It can be shown that all other eigenvalues have a magnitude smaller than 1 (\cite{coifman2006diffusion}).
From \eqref{transprob} we get
\begin{equation}
\boldsymbol{P}=\boldsymbol{D}^{-1}\boldsymbol{K}, \quad \boldsymbol{D}=\text{diag} \left( \sum_{j=1}^{N}k_{ij} \right),
\label{diagonP}
\end{equation}
where $k_{ij} = k(\boldsymbol{x}_i,\boldsymbol{x}_j)$.
We note that the transition matrix $\boldsymbol{P}$ is similar to the symmetric and positive definite matrix $\boldsymbol{S}=\boldsymbol{D}^{-1/2}\boldsymbol{K}\boldsymbol{D}^{-1/2}$. Thus, the transition matrix $\boldsymbol{P}$ has a decomposition given by
\begin{equation}
\boldsymbol{P}=\sum_{i=1}^N \lambda_i \boldsymbol{u}_i \boldsymbol{v}_{i}^{T},
\end{equation}
where $\lambda_i \in \mathbb{R}$ are the (positive) eigenvalues of $\boldsymbol{P}$, $\boldsymbol{u}_i \in \mathbb{R}^N$ are the left eigenvectors, and $\boldsymbol{v}_i \in \mathbb{R}^N$ are the right eigenvectors, such that $\langle \boldsymbol{u}_i,\boldsymbol{v}_{j} \rangle =
\delta_{ij}$.
The set of right eigenvectors $\boldsymbol{v}_i$ establishes an orthonormal basis for the subspace $R(\boldsymbol{P}^T)$ of $\mathbb{R}^N$ spanned by the rows of $\boldsymbol{P}$. Row $i$ represents the transition probabilities from point $\boldsymbol{x}_i$ to all the other points of the graph. According to the Eckart-Young-Mirsky Theorem (\cite{eckart1936approximation,mirsky1960symmetric}), the best $d$-dimensional low-rank approximation of the row space of $\boldsymbol{P}$ in the Euclidean space $\mathbb{R}^d$ is provided by its $d$ right eigenvectors corresponding to the $d$ largest eigenvalues.
It has been shown that asymptotically, when the number of data points uniformly sampled from a low-dimensional manifold goes to infinity, the matrix $\frac{1}{\sigma}(\boldsymbol{I}-\boldsymbol{P})$ (where $\sigma$ is the kernel function scale) approaches the Laplace-Beltrami operator of the underlying Riemannian manifold (\cite{coifman2005geometric,nadler2006diffusion}). This allows one to consider the eigenvectors corresponding to the few largest eigenvalues of $\boldsymbol{P}$ as discrete approximations of the principal eigenfunctions of the Laplace-Beltrami operator. Since the principal eigenfunctions of the Laplace-Beltrami operator establish an accurate embedding of the manifold (\cite{jones2008manifold}), this result promotes the usage of the eigenvectors of $\boldsymbol{P}$ for practical (discrete) data embedding.
At this point, the so-called diffusion distance, which is an affinity distance related to the reachability between two points $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$, is given by
\begin{equation}
D_{t}^2(\boldsymbol{x}_i,\boldsymbol{x}_j) = ||p_t(\boldsymbol{x}_i,\cdot)-p_t(\boldsymbol{x}_j,\cdot)||_{L_2,1/deg}^2 = \sum_{k=1}^{N} \frac{(p_t(\boldsymbol{x}_i,\boldsymbol{x}_k)-p_t(\boldsymbol{x}_j,\boldsymbol{x}_k))^2} {deg(\boldsymbol{x}_k)},
\label{diffdist}
\end{equation}
where $p_t(\boldsymbol{x}_i,\cdot)$ is the $i$-th row of $\boldsymbol{P}^t$.
The embedding of the $t$-step transition probabilities is achieved by forming a family of maps (DMs) of the $N$ points $\boldsymbol{x}_i \in \boldsymbol{X}$ in the Euclidean subspace of $\mathbb{R}^d$ defined by
\begin{equation}
\boldsymbol{\mathcal{R}}_t(\boldsymbol{x}_i)=\boldsymbol{y}_i=\left[\lambda_{1}^t\boldsymbol{v}_1(i),\lambda_{2}^t \boldsymbol{v}_2(i),\dots, \lambda_{d}^t \boldsymbol{v}_d(i)\right]^T, i=1,2,\dots N.
\end{equation}
A useful property of DMs is that the Euclidean distance in the embedded space $||\boldsymbol{\mathcal{R}}_t(\boldsymbol{x}_i)-\boldsymbol{\mathcal{R}}_t(\boldsymbol{x}_j)||_{L_2}$ is the best $d$-dimensional approximation of the diffusion distance given by \eqref{diffdist}, with equality holding for $d=N$.
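As an illustration of the above construction, the following Python sketch assembles the Gaussian kernel matrix, the row-stochastic matrix $\boldsymbol{P}=\boldsymbol{D}^{-1}\boldsymbol{K}$ (through its symmetric conjugate $\boldsymbol{S}$) and the DM coordinates; the data set and the scale $\sigma$ are illustrative placeholders.
\begin{verbatim}
# Minimal sketch of the DM embedding (illustrative data and scale).
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_maps(X, d=2, sigma=1.0, t=1):
    K = np.exp(-cdist(X, X, "sqeuclidean") / sigma)    # Gaussian kernel
    deg = K.sum(axis=1)                                # weighted degrees
    dis = 1.0 / np.sqrt(deg)                           # D^{-1/2} diagonal
    S = dis[:, None] * K * dis[None, :]                # D^{-1/2} K D^{-1/2}
    lam, U = np.linalg.eigh(S)                         # ascending order
    lam, U = lam[::-1], U[:, ::-1]                     # sort descending
    V = dis[:, None] * U                               # right eigenvectors of P
    return (lam[1:d + 1] ** t) * V[:, 1:d + 1]         # drop trivial mode

rng = np.random.default_rng(0)
Y = diffusion_maps(rng.standard_normal((500, 5)), d=2, sigma=2.0)
print(Y.shape)                                         # (500, 2)
\end{verbatim}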
\section{Forecasting with Regression Models\label{ForecastingModels}}
Let us assume that the time series are being generated by a (nonlinear) law of the form
\begin{equation}
\boldsymbol{y}_{t}=\boldsymbol{\phi}(\boldsymbol{z},\boldsymbol{\mu})+\boldsymbol{e}_t,
\label{regressmodel}
\end{equation}
where $\boldsymbol{y}_{t} \in \mathbb{R}^d$ denotes the vector containing the measured values of the response variables at time $t$, $\boldsymbol{\phi}: \mathbb{R}^p \times \mathbb{R}^q \rightarrow \mathbb{R}^d $ is a smooth function encapsulating the law that governs the system dynamics and depends on the parameter vector $\boldsymbol{\mu} \in \mathbb{R}^q$, $\boldsymbol{z} \in \mathbb{R}^p$ is the vector containing the explanatory input variables (predictors), which may include past values of the response variables at certain times, say $t-1, t-2, \dots, t-m$, as well as other exogenous variables, and $\boldsymbol{e}_t$ is the vector of the unobserved noise at time $t$.
In general, the forecasting problem (here in the low-dimensional space) using regression models can be written as a minimization problem of the following form:
\begin{equation}
\argmin_{\boldsymbol{g} \in G, \; \boldsymbol{\theta} \in \mathbb{R}^{l}}{\mathcal{L}(\boldsymbol{y}_{t+k}-\boldsymbol{g}(\boldsymbol{z},\boldsymbol{\theta}))},
\end{equation}
where $k$ is the prediction horizon, $\boldsymbol{g}: \mathbb{R}^p \times \mathbb{R}^l \rightarrow \mathbb{R}^d$ is the regression model
\begin{equation}
\boldsymbol{\hat{y}}_{t+k}=\boldsymbol{g}(\boldsymbol{z},\boldsymbol{\theta}),
\end{equation}
with parameters
$\boldsymbol{\theta} \in \mathbb{R}^l$, $G$ is the space of available models $\boldsymbol{g}$, $\hat{\boldsymbol{y}}_{t}$ is the output of the model at time $t$, and $\mathcal{L}$ is the loss function that determines the optimal forecast.
We note that the forecasting problem can be posed in two different modes (\cite{marcellino2006comparison}): the iterative and the direct one. In the iterative mode, one trains one-step-ahead models based on the available time series and simulates/iterates the models to predict future values. In the direct mode, forecasting is performed in a sliding-window framework, where the model is retrained with the data contained within the sliding window to provide a multiperiod-ahead value of the dependent variables. Here, we aim at testing the performance of the proposed scheme for one-period-ahead predictions (i.e. with $k=1$) in both the iterative and direct modes. For our illustrations, we used two forecasting models, namely MVAR and GPR. A brief description of the models follows.
\subsection{Multivariate Autoregressive (MVAR) model\label{MVARsec}}
An MVAR model of order $m$ can be written as
\begin{equation}
\boldsymbol{y}_t=\boldsymbol{\theta_0}+\sum_{j=1}^{m}\boldsymbol{y}_{t-j}\boldsymbol{\Theta_{j}}+\boldsymbol{e}_t,
\label{MVAR}
\end{equation}
where $\boldsymbol{y}_t = [y_{t1}, y_{t2}, \dots, y_{td}]^T \in \mathbb{R}^d$ is the vector of the response time series at time $t$, $m$ is the model order, i.e. the maximum time lag, $\boldsymbol{\theta_0} \in \mathbb{R}^d$ is the regression intercept, $\boldsymbol{\Theta}_{j} \in \mathbb{R}^{d \times d}$ is the matrix containing the regression coefficients of the MVAR model, and $\boldsymbol{e}_t=[e_{t}^{(1)}, e_{t}^{(2)}, \dots, e_{t}^{(d)}]^T$ is the vector of the unobserved errors at time $t$, which are assumed to be uncorrelated random variables with zero mean and constant variance $\sigma^2$.
In a more general form, in view of \eqref{regressmodel}, the MVAR model can be written as
\begin{equation}
y_{ik}=\theta_{0k}+\sum_{j=1}^{m}\theta_{jk}z_{ij}+e_{ik}, \quad i=1,2,\dots d, \quad k=1,2,\dots N,
\label{mvarcovfullindex}
\end{equation}
where $y_{ik}$ is the model output for the $i$-th variable at the $k$-th time instant, $\theta_{0k} \in \mathbb{R}$ is the corresponding regression intercept and $\theta_{jk}$ the corresponding $j$-th regression coefficient, $z_{ij}$ is the $j$-th predictor of the $i$-th response variable (e.g. the time-delayed time series).
According to the Gauss-Markov theorem, the best unbiased linear estimator of the regression coefficients is the one that results from the solution of the least-squares (LS) problem
\begin{equation*}
\arg\min_{\theta_{0k}, \theta_{jk}} \sum_{i=1}^{d} \sum_{k=1}^{N}
\left( y_{ik} - \theta_{0k} - \sum_{j=1}^{m} \theta_{jk}z_{ij} \right)^2 ,
\end{equation*}
given by
\begin{equation}
\boldsymbol{\hat{\Theta}}=(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{Z}^T\boldsymbol{Y},
\end{equation}
where $\boldsymbol{Y} = [\boldsymbol{y}_1,\boldsymbol{y}_2,\dots \boldsymbol{y}_d] \in \mathbb{R}^{N \times d}$ and $\boldsymbol{Z}=[\boldsymbol{1}_N, \boldsymbol{z}_1,\boldsymbol{z}_2,\dots,\boldsymbol{z}_m] \in \mathbb{R}^{N\times (m+1)}$.
Assuming that the unobserved errors are i.i.d. normal random variables, one can also estimate, via the maximum likelihood estimate of the error covariance, forecasting intervals for a new observation.
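For illustration, a minimal Python sketch of the above OLS estimation follows; in the experiments we use the \texttt{statsmodels} VAR class, which performs an equivalent computation, and the array names below are illustrative.
\begin{verbatim}
# Minimal sketch of fitting an MVAR(m) model by least squares.
import numpy as np

def fit_mvar(Y, m=1):
    N, d = Y.shape
    # design matrix: intercept plus m lagged copies of the series
    Z = np.hstack([np.ones((N - m, 1))] +
                  [Y[m - j - 1: N - j - 1] for j in range(m)])
    T = Y[m:]                                       # targets y_t
    Theta, *_ = np.linalg.lstsq(Z, T, rcond=None)   # (1 + m*d, d)
    return Theta

def mvar_predict(Theta, history, m=1):
    z = np.concatenate([[1.0]] + [history[-j - 1] for j in range(m)])
    return z @ Theta                                # one-step-ahead forecast

rng = np.random.default_rng(0)
Y = rng.standard_normal((1500, 2))                  # illustrative series
Theta = fit_mvar(Y, m=1)
print(mvar_predict(Theta, Y, m=1).shape)            # (2,)
\end{verbatim}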
\subsection{Gaussian Process Regression (GPR)\label{GPR}}
For the implementation of GPR it is assumed that the unknown function $\boldsymbol{\phi}$ in \eqref{regressmodel} can be modelled by $d$ single-output Gaussian distributions given by
\begin{equation}
P(\boldsymbol{\phi}_i|\boldsymbol{z})=\mathcal{N}(\boldsymbol{\phi}_i|\boldsymbol{\mu},\boldsymbol{K}(\boldsymbol{z}, \boldsymbol{z} |\boldsymbol{\theta})),
\end{equation}
where $\boldsymbol{\phi}_i=[\phi_i(\boldsymbol{z}_1), \phi_i(\boldsymbol{z}_2), \dots, \phi_i(\boldsymbol{z}_N)]$, and $\phi_i$ is the $i$-th component of $\boldsymbol{\phi}$, $\boldsymbol{\mu}(\boldsymbol{z})$ is the vector with the expected values of the function, and $\boldsymbol{K}(\boldsymbol{z}, \boldsymbol{z}|\boldsymbol{\theta})$ is an $N\times N$ covariance matrix formed by a kernel. The prior mean function is often set to $\boldsymbol{\mu}(\boldsymbol{z}) = \boldsymbol{0}$ with appropriate normalization of the data.
Predictions at a new point, say $\boldsymbol{z}_{*}$, are made by drawing $\boldsymbol{\phi}_{i*}$ from the posterior distribution $P(\boldsymbol{\phi}_i |(\boldsymbol{Z},\boldsymbol{y}_i))$, where $\boldsymbol{Z}=[\boldsymbol{z}_1,\boldsymbol{z}_2,\dots,\boldsymbol{z}_N]^T \in \mathbb{R}^{N \times p}$. Assuming Gaussian-distributed noise and appropriately normalized data points, the joint distribution of the observed outputs and the function value at $\boldsymbol{z}_*$ is the multivariate normal:
\begin{equation}
\begin{bmatrix}
\boldsymbol{y}_i\\
\phi_i(\boldsymbol{z}_*)
\end{bmatrix}\sim \mathcal{N}
\left(
\begin{bmatrix}
\boldsymbol{0}\\
0
\end{bmatrix},
\begin{bmatrix}
\boldsymbol{K}(\boldsymbol{Z},\boldsymbol{Z}|\boldsymbol{\theta})+\sigma^2\boldsymbol{I} & \boldsymbol{k}(\boldsymbol{Z},\boldsymbol{z}_*|\boldsymbol{\theta}) \\
\boldsymbol{k}(\boldsymbol{z}_*,\boldsymbol{Z}|\boldsymbol{\theta}) & k(\boldsymbol{z}_*,\boldsymbol{z}_*|\boldsymbol{\theta})
\end{bmatrix}
\right), \quad i=1,2,\dots,d.
\end{equation}
It can be shown that the posterior conditional distribution
\begin{equation}
P(\phi_i(\boldsymbol{z}_*)|\boldsymbol{y}_i,\boldsymbol{Z},\boldsymbol{z}_*)
\end{equation}
can be analytically derived and the expected value and covariance of the estimation are given by:
\begin{equation}
{\boldsymbol{\bar{\phi}}_i}(\boldsymbol{z}_*)=\boldsymbol{k}(\boldsymbol{z}_*,\boldsymbol{Z}|\boldsymbol{\theta})
\left( \boldsymbol{K}(\boldsymbol{Z},\boldsymbol{Z}|\boldsymbol{\theta})+\sigma^2\boldsymbol{I} \right)^{-1} \boldsymbol{y}_i, \quad i=1,2,\dots, d
\label{meangp}
\end{equation}
\begin{equation}
\sigma_{*}^2={k}(\boldsymbol{z}_*,\boldsymbol{z}_*|\boldsymbol{\theta})-\boldsymbol{k}(\boldsymbol{z}_*,\boldsymbol{Z}|\boldsymbol{\theta})[ \boldsymbol{K}(\boldsymbol{Z},\boldsymbol{Z}|\boldsymbol{\theta})+\sigma^2\boldsymbol{I}]^{-1}\boldsymbol{k}(\boldsymbol{Z},\boldsymbol{z}_*|\boldsymbol{\theta}).
\label{covgp}
\end{equation}
The hyperparameters in the above equations are estimated by minimizing the negative log marginal likelihood, given by
\begin{align*}
\ell & = -\log p(\boldsymbol{y}_i|\boldsymbol{Z},\boldsymbol{\theta}) \\
& = \frac{1}{2}\boldsymbol{y}_{i}^T[ \boldsymbol{K}(\boldsymbol{Z},\boldsymbol{Z}|\boldsymbol{\theta})+\sigma^2\boldsymbol{I}]^{-1}\boldsymbol{y}_{i}+\frac{1}{2}\log\det[\boldsymbol{K}(\boldsymbol{Z},\boldsymbol{Z}|\boldsymbol{\theta})+\sigma^2\boldsymbol{I}]+\frac{N}{2}\log 2\pi .
\end{align*}
Here, for our computations, we have used a mixed kernel composed by adding fundamental components commonly used for forecasting with GPR (see e.g. \cite{corani2021time}), namely a radial basis function kernel:
\begin{equation}
k(\boldsymbol{z}_i,\boldsymbol{z}_j)=\theta_{1}^2 \exp \left( -\frac{||\boldsymbol{z}_i-\boldsymbol{z}_j||^{2}_{L_2}}{2\theta_{2}^2} \right),
\end{equation}
a linear kernel:
\begin{equation}
k(\boldsymbol{z}_i,\boldsymbol{z}_j)=\theta_{3}^2+\theta_{4}^2\langle \boldsymbol{z}_i,\boldsymbol{z}_j\rangle,
\end{equation}
a periodic kernel:
\begin{equation}
k(\boldsymbol{z}_i,\boldsymbol{z}_j) = \theta_{5}^2 \exp \left( -\frac{2\sin^2{(\pi||\boldsymbol{z}_i-\boldsymbol{z}_j||_{L_2}/\theta_6)}}{\theta_{7}^2} \right),
\end{equation}
and a white noise kernel:
\begin{equation}
k(\boldsymbol{z}_i,\boldsymbol{z}_j)=\theta_{8}^2\delta_{ij} .
\end{equation}
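A minimal sketch of assembling the above mixed kernel with \texttt{scikit-learn}'s kernel algebra follows; the initial hyperparameter values are illustrative starting points, which the optimizer subsequently refines by minimizing the negative log marginal likelihood.
\begin{verbatim}
# Minimal sketch of the mixed GPR kernel (illustrative hyperparameters).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (ConstantKernel, RBF,
    DotProduct, ExpSineSquared, WhiteKernel)

kernel = (ConstantKernel(1.0) * RBF(length_scale=1.0)        # radial basis
          + ConstantKernel(1.0) * DotProduct(sigma_0=1.0)    # linear
          + ConstantKernel(1.0) * ExpSineSquared(
                length_scale=1.0, periodicity=1.0)           # periodic
          + WhiteKernel(noise_level=1.0))                    # white noise

gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

rng = np.random.default_rng(0)
Z, y = rng.standard_normal((200, 3)), rng.standard_normal(200)
gpr.fit(Z, y)                                     # L-BFGS-B optimization
mean, std = gpr.predict(Z[:5], return_std=True)   # posterior mean and std
\end{verbatim}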
\section{Reconstruction of the high-dimensional space: solving the pre-image problem\label{LiftingOperators}}
The final step is to reconstruct the high-dimensional space from measurements on the manifold, i.e., to ``lift'' the predictions made by the reduced-order models on the manifold back to the original high-dimensional space. In the case of PCA, this task is trivial, as the solution to the reconstruction problem, given by
\begin{equation}
\argmin_{\boldsymbol{U}_d\in \mathbb{R}^{D\times d}}\sum_{i=1}^N||\boldsymbol{x}_i-\boldsymbol{U}_d\boldsymbol{U}_{d}^{T}\boldsymbol{x}_i||_{L_2}^2, \quad \boldsymbol{U}_{d}^{T}\boldsymbol{U}_d=I,
\end{equation}
is just a linear transformation which maximizes the variance of the data on the linear manifold, and is given by the first $d$ principal eigenvectors of the covariance matrix.
In the case of nonlinear manifold learning algorithms, such as LLE and DMs, we want to learn the inverse map (lifting operator):
\begin{equation}
\boldsymbol{\mathcal{L}}
\equiv \boldsymbol{\mathcal{R}}^{-1}: \boldsymbol{\mathcal{R}}(\boldsymbol{X}) \rightarrow \boldsymbol{X},
\end{equation}
for new samples on the manifold $\boldsymbol{y}_* \not\in \boldsymbol{\mathcal{R}}(\boldsymbol{X})$. This inverse problem is referred to as the ``out-of-sample extension pre-image'' problem. The ``out-of-sample extension'' problem usually refers to the direct problem, i.e. that of learning the direct embedding map (i.e. the restriction to the manifold) $\boldsymbol{\mathcal{R}}: \boldsymbol{X} \rightarrow \boldsymbol{\mathcal{R}}(\boldsymbol{X})$ for new samples in the input space $\boldsymbol{x}_* \not\in \boldsymbol{X}$. Towards this aim, a well-established methodology is the Nystr\"{o}m extension (\cite{nystrom1929praktische}).
In general, the solution of the pre-image problem can be posed as:
\begin{equation}
\arg\min_{\boldsymbol{c}} ||\boldsymbol{y}_*-\boldsymbol{\mathcal{R}}(\boldsymbol{\mathcal{L}}(\boldsymbol{y}_*|\boldsymbol{c}))|| ,
\label{preimage}
\end{equation}
subject to a constraint, where $\boldsymbol{\mathcal{L}}(\cdot|\boldsymbol{c})$ is the lifting operator depending on some parameters $\boldsymbol{c}$.\par
Below, we describe the reconstruction methods that we used in this work, namely, RBFs interpolation and Geometric Harmonics (GH), which provide a direct solution of the inverse problem, thus giving some insight about their implementation and pros and cons. For a review and a comparison of such methods in the framework of chemical kinetics see~\cite{chiavazzo2014reduced}.
\subsection{Radial Basis Function (RBF) Interpolation\label{RBF}}
The lifting operator is constructed with interpolation through RBFs among the corresponding set of neighbors of the new state $\boldsymbol{y}_*$ in the ambient space. The lifting operator is defined by (\cite{chiavazzo2014reduced}):
\begin{equation}
[\boldsymbol{\mathcal{L}}(\boldsymbol{y}_*)]_i = x_{i*} = \sum_{j=1}^k c_{ji} \psi(||\boldsymbol{y}_*-\boldsymbol{y}_j||), \quad i=1,2,\dots, D ,
\label{liftrbf}
\end{equation}
where $x_{i*}$ is the $i$-th coordinate of $\boldsymbol{x}_*$, the $\boldsymbol{y}_j$'s are the neighbors of the unseen sample $\boldsymbol{y}_*$ on the manifold, and $\psi$ is the radial basis function.
Similarly, the restriction operator can be written as:
\begin{equation}
[\boldsymbol{\mathcal{R}}(\boldsymbol{x}_*)]_i = y_{i*} = \sum_{j=1}^k c_{ji}\psi(||\boldsymbol{\mathcal{L}}(\boldsymbol{y}_*)-\boldsymbol{\mathcal{L}}(\boldsymbol{y}_j)||), \quad i=1,2,\dots,d,
\label{restrictrbf}
\end{equation}
where $\boldsymbol{\mathcal{L}}(\boldsymbol{y}_j)=\boldsymbol{x}_j$ is the known image of $\boldsymbol{y}_j$ (the neighbors of $\boldsymbol{y}_*$ in the ambient space).
The unknown coefficients of the lifting operator can be computed by setting at the left hand side of \eqref{liftrbf} the coordinates of the one-to-one corresponding neighbors of $\boldsymbol{y}_*$ in the ambient space. Here, we consider a special class of RBFs known as radial powers, given by $$\psi(||\boldsymbol{\mathcal{L}}(\boldsymbol{y}_*)-\boldsymbol{\mathcal{L}}(\boldsymbol{y}_j)||)=||\boldsymbol{\mathcal{L}}(\boldsymbol{y}_*)-\boldsymbol{\mathcal{L}}(\boldsymbol{y}_j)||^{p}_{L_2},$$
where $p$ is an odd integer. By doing so, the unknown coefficients $c_{ji}$ of the lifting operator are given by the solution of the following linear system:
\begin{equation}
\boldsymbol{A}
\begin{bmatrix}
c_{1i}\\
c_{2i}\\
\vdots\\
c_{ki}\end{bmatrix}=
\begin{bmatrix}
x_{1i}\\
x_{2i}\\
\vdots\\
x_{ki}
\end{bmatrix}, \quad i=1,2,\dots,D,
\label{linsys_rbf}
\end{equation}
where
\begin{equation}
\boldsymbol{A} = \begin{bmatrix}
0& ||\boldsymbol{y}_1-\boldsymbol{y}_2||^{p}_{L_2}&\dots & ||\boldsymbol{y}_1-\boldsymbol{y}_{k}||^{p}_{L_2}\\
||\boldsymbol{y}_2-\boldsymbol{y}_1||^{p}_{L_2}& 0&\dots & ||\boldsymbol{y}_2-\boldsymbol{y}_{k}||^{p}_{L_2}\\
\vdots & \vdots & \ddots & \vdots\\
||\boldsymbol{y}_{k}-\boldsymbol{y}_1||^{p}_{L_2}& ||\boldsymbol{y}_{k}-\boldsymbol{y}_2||^{p}_{L_2}&\dots & 0 \end{bmatrix},
\label{solverbf}
\end{equation}
where $x_{ji}$ is the $i$-th coordinate of the $j$-th point in the ambient space, whose restriction on the manifold is the $j$-th nearest neighbor of $\boldsymbol{y}_*$. For our computations, we used $p=1$.
Then, \eqref{liftrbf} can be used to find the coordinates of $\boldsymbol{x}_*$ in the ambient space.
We should note that radial powers are better for the task of interpolation compared to Gaussian kernels, as they do not suffer from ill conditioning. The matrix $\boldsymbol{A}$ is invertible with the assumption that the centers $\boldsymbol{y}_j$ are distinct (see e.g. \cite{monnig2014inverting,amorim2015facing}). However, $\boldsymbol{A}$ can be rank deficient from a numerical point of view, e.g. because the points may be very close to each other. Thus, the solution of \eqref{linsys_rbf} using a method such as the LU factorization of $\boldsymbol{A}$ may result in numerical instabilities.
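The following Python sketch illustrates the lifting step with radial powers ($p=1$): it assembles the matrix $\boldsymbol{A}$, solves \eqref{linsys_rbf} for the coefficients and evaluates \eqref{liftrbf} at a new embedded point; all variable names are illustrative.
\begin{verbatim}
# Minimal sketch of RBF lifting with radial powers (illustrative data).
import numpy as np
from scipy.spatial.distance import cdist

def rbf_lift(y_star, Y_nb, X_nb, p=1):
    # Y_nb: (k, d) embedded neighbors of y_star; X_nb: (k, D) their images
    A = cdist(Y_nb, Y_nb) ** p                  # radial powers, zero diagonal
    C = np.linalg.solve(A, X_nb)                # coefficients c_{ji}
    psi = cdist(y_star[None, :], Y_nb).ravel() ** p
    return psi @ C                              # reconstructed x_* in R^D

rng = np.random.default_rng(0)
Y_nb, X_nb = rng.standard_normal((50, 2)), rng.standard_normal((50, 5))
x_star = rbf_lift(rng.standard_normal(2), Y_nb, X_nb)
print(x_star.shape)                             # (5,)
\end{verbatim}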
\subsection{Geometric Harmonics (GHs)\label{GH}}
GHs are a set of functions that allow the extension of the embedding of new unseen points on the manifold $\boldsymbol{x}_* \not\in \boldsymbol{X}$, which are not given in the set of points used for building the embedding (\cite{coifman2006geometric}). Their derivation is based on the Nystr\"{o}m (or quadrature) extension method (\cite{nystrom1929praktische}), which has been used for the
numerical solutions of integral equations (\cite{delves1988computational,press1990fredholm}), and in particular the Fredholm equation of second kind, reading:
\begin{equation}
f(t)=g(t)+\mu \int_{a}^{b} k(t,s)f(s) ds,
\label{fredholm}
\end{equation}
where $f(t)$ is the unknown function, while $k(t,s)$ and $g(t)$ are known. The Nystr\"{o}m method starts with the approximation of the integral, i.e.
\begin{equation}
\int_{a}^{b}y(s)ds \approx \sum_{j=1}^N w_j y(s_j),
\label{quadint}
\end{equation}
where $s_j$ are $N$ appropriately chosen collocation points and $w_j$ are the corresponding weights, which are determined, e.g. by the Gauss-Jacobi quadrature rule. Then, by using \eqref{quadint} in \eqref{fredholm} and evaluating $f$ and $g$ at the $N$ collocation points, we get the following approximation:
\begin{equation}
(\boldsymbol{I}-\mu \boldsymbol{\tilde{K}})\boldsymbol{\hat{f}}=\boldsymbol{g},
\end{equation}
where the matrix $\boldsymbol{\tilde{K}}$ has elements $\tilde{k}_{ij} = k(s_i,s_j) w_j$.
Based on the above, the solution of the homogenous Fredholm problem ($\boldsymbol{g}=0$) is given by the solution of the eigenvalue problem
\begin{equation}
\boldsymbol{\tilde{K}}\boldsymbol{\hat{f}}=\frac{1}{\mu}\boldsymbol{\hat{f}},
\end{equation}
i.e.
\begin{equation}
\sum_{j=1}^N w_j k(s_i,s_j) \hat{f}_j = \frac{1}{\mu}\hat{f}_i, \quad i=1,2,\dots, N,
\end{equation}
where $\hat{f}_i = \hat{f}(s_i)$ is the $i$-th component of $\boldsymbol{\hat{f}}$. The Nystr\"{o}m extension of $f(t)$, using a set of $N$ sample (collocation) points,
at an arbitrary point $x$ in the full domain is given by:
\begin{equation}
\boldsymbol{\mathcal{E}}({f}(x)) = \hat{f}(x) = \mu \sum_{j=1}^N w_j k(x,s_j) \hat{f}_j.
\end{equation}
Within the framework of DMs, we seek the out-of-sample (filtered) extension of a real-valued function $f$ defined at the $N$ sample points $\boldsymbol{x}_i \in \boldsymbol{X}$ to one or more unseen points $\boldsymbol{x}_* \notin \boldsymbol{X}$. The function $f$ can be, for example, a DM coordinate $\lambda_{j}^t \boldsymbol{v}_j(\boldsymbol{x}_i)$, $j=1,2,\dots d$, or another function representing the output of a regression model (see also the discussion in \cite{thiem2020emergent,rabin2016earthquake}).
Recalling that the eigenvectors $\boldsymbol{v}_j$ form a basis, the extension is implemented (see \cite{coifman2006geometric}) by first expanding $f(\boldsymbol{x}_i)$ in the first $d$ parsimonious eigenvectors $\boldsymbol{v}_l$ of the Markovian matrix $\boldsymbol{P}^t$:
\begin{equation*}
\hat{f}(\boldsymbol{x}_i) = \sum_{l=1}^d a_l \boldsymbol{v}_l(\boldsymbol{x}_i), \quad i=1,2,\dots N,
\end{equation*}
where $a_l=\langle \boldsymbol{v}_l, \boldsymbol{f}\rangle$ are the projection coefficients of the function on the first $d$ parsimonious eigenvectors, and $\boldsymbol{f} \in \mathbb{R}^N$ is the vector containing the values of the function $f$ at the $N$ points $\boldsymbol{x}_i$.
Then, one computes the Nystr\"{o}m extension of $\hat{f}$ at $\boldsymbol{x}_*$ using the same projection coefficients as
\begin{equation}
\boldsymbol{\mathcal{E}}(\hat{f}(\boldsymbol{x}_*)) = \sum_{l=1}^d a_l \boldsymbol{\hat{v}}_l(\boldsymbol{x}_*),
\end{equation}
where
\begin{equation}
\boldsymbol{\hat{v}}_l(\boldsymbol{x}_*)=\frac{1}{\lambda_{l}^t}\sum_{j=1}^N k(\boldsymbol{x}_j,\boldsymbol{x}_*) \boldsymbol{v}_{l}(\boldsymbol{x}_j), \quad l=1,2,\dots, d,
\end{equation}
are the corresponding GHs.
Scaling up the (filtered) extension of the function $f$ to a set of say $L$ new points can be computed using the following matrix product (\cite{coifman2006geometric,thiem2020emergent}):
\begin{equation}
\boldsymbol{\mathcal{E}}(\hat{\boldsymbol{f}})=\boldsymbol{K}_{L\times N}\boldsymbol{V}_{N\times d}{\boldsymbol{\Lambda}_{d\times d}^{-1}}\boldsymbol{V}_{N\times d}^T\boldsymbol{f}_{N\times 1},
\end{equation}
where $\boldsymbol{K}_{L\times N}$ is the corresponding kernel matrix, $\boldsymbol{V}_{N\times d}$ is the matrix with columns the $d$ parsimonious eigenvectors $\boldsymbol{v}_l$, $\boldsymbol{\Lambda}_{d\times d}$ is the diagonal matrix with elements $\lambda_{l}^t$, and $\boldsymbol{f}_{N\times 1}$ is the vector of the values of $f$ at the $N$ training points.
The above direct approach provides a map from the ambient space to the reduced-order space (restriction) and vice versa (lifting).
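For illustration, the matrix product above reduces to a few lines of Python; the kernel matrix, eigenvectors and eigenvalues below are illustrative stand-ins for the quantities computed by DMs on the training set.
\begin{verbatim}
# Minimal sketch of the GH extension E(f) = K V Lambda^{-1} V^T f.
import numpy as np

def gh_extend(f, K_new, V, lam):
    # f: (N,) values; K_new: (L, N) kernel; V: (N, d); lam: (d,) lambda_l^t
    a = V.T @ f                     # projection coefficients a_l = <v_l, f>
    return K_new @ (V / lam) @ a    # extension at the L new points

rng = np.random.default_rng(0)
N, L, d = 200, 10, 3
V, _ = np.linalg.qr(rng.standard_normal((N, d)))  # stand-in eigenvectors
lam = np.array([0.9, 0.5, 0.3])                   # stand-in eigenvalues
f_ext = gh_extend(rng.standard_normal(N), rng.random((L, N)), V, lam)
print(f_ext.shape)                                # (10,)
\end{verbatim}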
\section{The forecasting problems\label{Forecasting}}
For demonstrating the performance of the proposed numerical framework and comparing the various embedding, modelling and reconstruction approaches, we used three synthetic time series resembling EEG recordings produced by linear and nonlinear stochastic discrete models, and a real-world FOREX pair data set spanning the period from 03/09/2001 until 29/10/2020. Below, we describe the models and the FOREX data set.
\subsection{The synthetic time series\label{Thesynthetictimeseries}}
Our synthetic stochastic signals resemble EEG time series (see e.g. \cite{baccala2001partial,nicolaou2016nonlinear}) produced by linear and nonlinear five-dimensional discrete stochastic models with white noise.
The linear five-dimensional stochastic discrete model is given by the following equations:
\begin{eqnarray}\label{linearmodel}
\nonumber
y^{(1)}_t=0.2y^{(1)}_{t-1}-0.4y^{(2)}_{t-1}+w^{(1)}_t ,
\\ \nonumber
y^{(2)}_t=-0.5y^{(1)}_{t-1}+0.15y^{(2)}_{t-1}+w^{(2)}_t ,
\\
y^{(3)}_t=-0.14y^{(2)}_{t-1}+w^{(3)}_t ,
\\ \nonumber
y^{(4)}_t=0.5y^{(1)}_{t-1}-0.25y^{(2)}_{t-1}+w^{(4)}_t ,
\\ \nonumber
y^{(5)}_t=0.15y^{(1)}_{t-1}+w^{(5)}_t,
\end{eqnarray}
where for the training process the model order is assumed to be known (here equal to 1).
A second (nonlinear) model is given by the following equations (see e.g. \cite{nicolaou2016nonlinear}):
\begin{eqnarray}\label{nonlinearmodel}
\nonumber
y^{(1)}_t=3.4y^{(1)}_{t-1}(1-{y^{(1)}_{t-1}}^2)\exp{(-{y^{(1)}_{t-1}}^2)}+w^{(1)}_t ,
\\ \nonumber
y^{(2)}_t=3.4y^{(2)}_{t-1}(1-{y^{(2)}_{t-1}}^2)\exp{(-{y^{(2)}_{t-1}}^2)}+0.5y^{(1)}_{t-1}y^{(2)}_{t-1}+w^{(2)}_t ,
\\
y^{(3)}_t=3.4y^{(3)}_{t-1}(1-{y^{(3)}_{t-1}}^2)\exp{(-{y^{(3)}_{t-1}}^2)}+0.3y^{(2)}_{t-1}+0.5{y^{(1)}_{t-1}}^2+w^{(3)}_t ,
\\ \nonumber
y^{(4)}_t=0.5y^{(1)}_{t-1}-0.25y^{(2)}_{t-1}+w^{(4)}_t ,
\\ \nonumber
y^{(5)}_t=0.15y^{(1)}_{t-1}+w^{(5)}_t.
\end{eqnarray}
Finally, the proposed scheme was also validated through the time series produced by a linear stochastic model with a model order greater than 1 given by the following equations:
\begin{eqnarray}\label{linearmodel3}
\nonumber
y^{(1)}_t=0.1y^{(1)}_{t-1}-0.6y^{(2)}_{t-3}+w^{(1)}_t ,
\\ \nonumber
y^{(2)}_t=-0.15y^{(1)}_{t-3}+0.8y^{(2)}_{t-3}+w^{(2)}_t ,
\\
y^{(3)}_t=-0.45y^{(2)}_{t-3}+w^{(3)}_t ,
\\ \nonumber
y^{(4)}_t=0.45y^{(1)}_{t-3}-0.85y^{(2)}_{t-3}+w^{(4)}_t ,
\\ \nonumber
y^{(5)}_t=0.95y^{(1)}_{t-2}+w^{(5)}_t.
\end{eqnarray}
\noindent
In all the above models, the time series $w^{(i)}_t$, $i=1,2,3,4,5$, are uncorrelated normally-distributed noise with zero mean and unit standard deviation.
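For reference, simulating, e.g., the linear model \eqref{linearmodel} amounts to iterating a first-order vector recursion; the sketch below copies the coefficients of \eqref{linearmodel} verbatim and generates the 2000 points used in the experiments.
\begin{verbatim}
# Minimal sketch of simulating the linear stochastic model (coefficients
# taken from the equations above; w_t ~ N(0, I)).
import numpy as np

Theta = np.array([[ 0.20, -0.40, 0.00, 0.00, 0.00],
                  [-0.50,  0.15, 0.00, 0.00, 0.00],
                  [ 0.00, -0.14, 0.00, 0.00, 0.00],
                  [ 0.50, -0.25, 0.00, 0.00, 0.00],
                  [ 0.15,  0.00, 0.00, 0.00, 0.00]])

rng = np.random.default_rng(0)
N = 2000
y = np.zeros((N, 5))
for t in range(1, N):
    y[t] = Theta @ y[t - 1] + rng.standard_normal(5)
\end{verbatim}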
\subsection{The FOREX forecasting problem\label{Headings}}
We used the \texttt{investpy 0.9.14} module of Python (\cite{Bartolome2020}) to download 10 USD-based FOREX pairs daily spot closing prices from the \url{www.investing.com} open API, spanning the period from 03/09/2001 until 29/10/2020:
EUR/USD, GBP/USD, AUD/USD, NZD/USD, JPY/USD, CAD/USD, CHF/USD, SEK/USD, NOK/USD, DKK/USD.
The time period was selected to contain crucial market crashes like the 2008 Lehman related stock market crash, the 2010 sovereign debt crisis in the Eurozone and even the most recent Covid-19 related market crash of 2020.
For our analysis, we computed the compounded daily returns of the FOREX pairs as
\begin{equation}
{r}_{i,t} = \log \left( \frac{S_{i,t}}{S_{i,t-1}} \right),
\label{eq:FXRawReturns}
\end{equation}
where $S_{i,t}$ is the spot price of the $i$-th FOREX pair in the data set above, and $r_{i,t}$ is the corresponding compounded return at date $t$.
However, trading the FOREX markets is not only associated with the spot price movements but also with the so-called ``currency carry trading''. The currency carry-trading involves first the identification of high interest-paying investment assets (e.g. bonds or short-term bank deposits) denominated in a country currency (e.g. Japanese Yen). Then, the investment is carried on by borrowing money in another currency, for which the paying interest rate is lower. Such a trading requires the approximation of the so called interest-rate-differential (IRD) excess returns, which should be incorporated in the FOREX price movements (see e.g. \cite{menkhoff2012carry}). Here, we used the short term interest rates data retrieved from the OECD database as the proxy of these IRD excess returns. Since the short term interest rates data are reported on a monthly basis and on an annual percentage format, we constructed daily approximations, by linearly interpolating through the available downloaded records (using the \texttt{Pandas} module of Python (\cite{McKinney2010})) and normalizing on the basis of an annual calendar period.
After this pre-processing step, we denote as IR$_{\text{USA},t}$ the time series of the daily approximations of the USA short term interest rates,
and IR$_{i,t}$ is the corresponding time series of each of the other $i=1,\dots, 10$ countries. Then, the interest rates differentials (IRD) time series are given by:
\begin{equation}
\text{IRD}_{i,t} = \text{IR}_{i,t} - \text{IR}_{\text{USA},t}.
\label{eq:IRD}
\end{equation}
Finally, the so called ``carry adjusted returns'' of the FOREX are given by (see also e.g. \cite{menkhoff2012carry}):
\begin{equation}
x_{i,t} = r_{i,t} + \text{IRD}_{i,t},
\label{eq:FXCarryAdjustedReturns}
\end{equation}
where $x_{i,t}$ is the $i$-th FX pair ``carry'' adjusted return
corresponding to its raw ``unadjusted'' market price return, $r_{i,t}$ is defined in \eqref{eq:FXRawReturns}, and IRD$_{i,t}$ is given by \eqref{eq:IRD}.
For quantifying the potential excess returns (profits), we constructed a trading strategy based on the so-called risk parity rationale (see e.g. \cite{Braga2015}), where each asset is allocated a portfolio weight proportional to its inverse risk. Thus, one assigns higher portfolio weights to less volatile assets, and smaller portfolio weights to more risky assets. The risk is quantified using the volatility $\sigma_{i, t}$ (measured by the standard deviation of logarithmic returns over a specific time period up to the $t$-th trading day) of each asset $i$ of the portfolio plus the total portfolio volatility. In our FOREX problem, the risk parity portfolio allocation practically means investing $1/\sigma_{i, t}$ in each of the $i=1,2,\dots,10$ FOREX pairs with corresponding carry-adjusted returns $x_{i,t}$. By performing one-step-ahead predictions for each of the carry-adjusted returns of the 10 pairs, denoted as $\hat{x}_{i,t+1}$, we create a ``binary'', or otherwise called ``directional'', trading signal of ``buy'' or ``sell'' for each one of the FOREX pairs. Thus, the trading strategy reads as follows:
\begin{equation}
u_{i,t} = \operatorname{sign}(\hat{x}_{i,t+1}) =
\begin{cases}
\phantom{-}1, \; \text{``buy''} \\
-1, \; \text{``sell''.} \\
\end{cases}
\end{equation}
Based on the above, the profit or loss at the next day ($t+1$) is given by:
\begin{equation}
\Pi_{t+1} = \sum_{i=1}^D \frac{u_{i,t} x_{i,t+1}}{{\sigma}_{i,t}},
\label{eq:RiskParityPortfolio}
\end{equation}
where $x_{i,t+1}$ is the real $i$-th FOREX pair return at time $t+1$, and $\Pi_{t+1}$ is the risk parity portfolio return at time $t+1$.
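A minimal sketch of the daily trading step described above follows; the forecasted and realized carry-adjusted returns and the volatility estimates are illustrative placeholders for the model outputs.
\begin{verbatim}
# Minimal sketch of the directional risk-parity P&L (illustrative inputs).
import numpy as np

def portfolio_return(x_hat_next, x_next, sigma):
    # x_hat_next, x_next, sigma: (D,) arrays for the D=10 FOREX pairs
    u = np.sign(x_hat_next)             # +1 "buy", -1 "sell"
    return np.sum(u * x_next / sigma)   # risk-parity P&L at t+1

rng = np.random.default_rng(0)
print(portfolio_return(rng.standard_normal(10),
                       rng.standard_normal(10),
                       np.full(10, 0.01)))
\end{verbatim}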
\section{Numerical Results}
\label{NumericalResults}
For the implementation of the numerical algorithms, we used the \texttt{datafold}, \texttt{sklearn} and \texttt{statsmodels} packages of Python (\cite{Lehmberg2020,scikit-learn,seabold2010statsmodels}).
The selection of the eigensolver was based on the sparsity and size of the input matrix: ARPACK was used if the size of the input matrix was greater than 200 and $n+1<10$ (where $n$ is the number of requested eigenvalues), otherwise, the ``dense'' option was used. The ARPACK package (\cite{lehoucq1998arpack}) implements implicitly restarted Arnoldi methods, using a random default starting vector for each iteration with a tolerance of $tol=10^{-6}$ and a maximum number of 100 iterations. This approach is implemented by the function \texttt{scipy.sparse.linalg.eigsh} of the \texttt{scipy} module (upon which the \texttt{sklearn} one depends) (\cite{Virtanen2020}). The ``dense'' eigensolver is implemented by the function \texttt{eigh} of the \texttt{scipy} module and returns the eigenvalues and eigenvectors computed using the LAPACK routine \texttt{syevd} using the divide and conquer algorithm (\cite{Cuppen1980,lapack99}). We used the default value of the tolerance of the Newton-Raphson iterations, which is of the order of the floating-point precision, and the maximum number of iterations was set to $30N$ iterations, where $N$ is the size of the matrix.
The DM embedding on the training set of all data sets was performed using $d$ parsimonious eigenvectors (\cite{dsilva2018parsimonious}). Here, for the construction of the graph at the first step, we used the standard Gaussian kernel, defined by
\begin{equation}
k(\boldsymbol{x}_i, \boldsymbol{x}_j) = \exp{\left( -\frac{\| \boldsymbol{x}_i - \boldsymbol{x}_j \|_{L_2}^2}{\sigma} \right)},
\end{equation}
where $\| \boldsymbol{x}_i - \boldsymbol{x}_j \|_{L_2}$ is the Euclidean distance between $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$, and $\sigma$ is a scaling parameter that controls the size of the neighborhood (or the connectivity of the graph). For its derivation, we follow the systematic approach provided by \cite{singer2009detecting}. The full kernel is used for the calculations.
The VAR models were trained using the VAR class of the \texttt{statsmodels.tsa.} \texttt{vector\_ar.var\_model} routine, using the OLS default method for the parameters estimation. The corresponding LAPACK function used to solve the least-squares problem is the default \texttt{gelsd}, which is based on the singular value decomposition of the input matrix.
The hyperparameters of the GPR model were optimized using the (default) L-BFGS-B algorithm of the \texttt{scipy.optimize.minimize} method (\cite{Byrd1995,Zhu1997}). The gradient vector was estimated using forward finite differences with the numerical approximations of the Jacobian being performed with a default step size $eps=10^{-8}$. The iterations were stopped either when
$$ \frac{(f^k - f^{k+1})}{\max\{|f^k|,|f^{k+1}|,1\}} < 10^{-9}, $$
$f^k$ being the value of the loss function at step $k$, or when
$$ \max_i | \text{proj}(g)_i | < 10^{-5}, $$
$\text{proj}(g)_i$ being the $i$-th component of the projected gradient at the current iterate.
We used the default values for the maximum number of function evaluations and the maximum number of iterations (15,000) as well as the default value for the maximum number of line searches per iteration (20).
For the lifting task, 50 nearest neighbors were considered for interpolation by all methods. The underlying $k$-NN algorithm is based on the algorithm proposed in \cite{Maneewongvatana2002}. Using different values of the number of nearest neighbors within the range 20-100 did not change the outcomes of the analysis. In the case of RBF interpolation, the underlying linear system of equations was solved by the LAPACK \texttt{dgesv} routine from \texttt{scipy.linalg.lapack.dgesv}, which implements the default method of the LU decomposition with partial pivoting. For the GH approach, we used the Gaussian kernel. In effect, we are performing ``double'' DM here -- computing diffusion maps on the leading retained diffusion map components for the reduced embedding. This procedure, suggested in ~\cite{chiavazzo2014reduced}, can actually be performed ``once and for all'' globally~\cite{Evangelou}.
The computations were performed using a system with an Intel Core i7-8750H CPU @2.20 GHz and 16GB of RAM.
\subsection{Synthetic time series}
For both the linear and nonlinear models, we produced 2000 points. We used 1500 points for learning the manifold and for training the various models, and 500 points to test the forecasting performance of the various schemes. The forecasting performance was tested using the iterative mode, i.e. by training the models for one-step ahead predictions and then simulating the trained model iteratively to predict future values. The performance was measured using the Root Mean Square Error (RMSE) of the residuals, reading:
\begin{equation}
\mbox{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^N (\hat{\boldsymbol{x}}_i - \boldsymbol{x}_i) ^ 2},
\label{MSE}
\end{equation}
where $\hat{\boldsymbol{x}_i}$ are the predictions and $\boldsymbol{x}_i$ the actual data. To quantify the forecasting intervals due to the stochasticity of the models, we performed 100 runs for each combination of manifold learning algorithms (DMs and LLE), models (MVAR and GPR) and lifting methods (RBFs and GHs), reporting the median and the 5-th and 95-th percentiles of the resulting RMSE. The RMSE values obtained with the naive random walk model, as well as with the MVAR and GPR models trained in the original space, are also given for comparison purposes.
In Table \ref{LinearManifoldTable}, we report the forecasting statistics of the time series produced with the linear model given by \eqref{linearmodel} as obtained over 100 runs. As it is shown, both the MVAR and GPR models trained in the original five-dimensional space outperform the naive random walk. The RMSEs of the MVAR and GPR models suggest a good match with the stochastic model, with the residuals being approximately within one standard deviation of the distribution of the noise level. In the same table, we provide the corresponding forecasting statistics as obtained with the proposed ``embed-forecast-lift'' scheme for the various combinations of manifold learning algorithms, regression models and lifting approaches.
\begin{table}[ht]
\caption{Linear Stochastic Model~\eqref{linearmodel}. RMSE statistics (median, 5th and 95th percentiles over 100 runs) for each of the five variables as obtained by: (a) training MVAR and GPR models with model order one in the original 5D space (MVAR(OS), GPR(OS)), and (b) the proposed ``embed-forecast-lift'' scheme for all combinations of the manifold learning algorithms (DMs and LLE) with two coordinates, models (MVAR and GPR) and lifting approaches (RBFs and GHs). For comparison purposes the RMSEs obtained with the naive random walk is also reported.}
\centering
\addtolength{\tabcolsep}{-5pt}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\rowcolor{Gray}
Model/Variable & $y^{(1)}_t$ & $y^{(2)}_t$ & $y^{(3)}_t$ & $y^{(4)}_t$ & $y^{(5)}_t$\\
\hline
\hline
\rowcolor{LightCyan}
Random Walk & 1.374 & 1.446 & 1.427 & 1.551 & 1.425 \\
\rowcolor{LightCyan}
& (1.287,1.466) & (1.333,1.528) & (1.339,1.497) & (1.455,1.654) & (1.335,1.512)\\
\hline
MVAR(OS) & 1.151 & 1.180 & 1.010 & 1.224 & 1.011 \\
& (1.077,1.218) & (1.113,1.269) & (0.966,1.062) & (1.156,1.281) & (0.960,1.063) \\
\hline
\rowcolor{LightCyan}
GPR (OS) & 1.151 & 1.179 & 1.010 & 1.224 & 1.011 \\
\rowcolor{LightCyan}
& (1.077,1.218) & (1.113,1.272) & (0.966,1.061) & (1.154,1.284) & (0.959,1.064) \\
\hline
\hline
DM-GPR-GH & 1.153 & 1.184 & 1.012 & 1.228 & 1.013 \\
& (1.082,1.220) & (1.117,1.273) & (0.966,1.061) & (1.160,1.287) & (0.959,1.065) \\
\hline
\rowcolor{LightCyan}
LLE-GPR-GH & 1.155 & 1.182 & 1.011 & 1.230 & 1.014 \\
\rowcolor{LightCyan}
& (1.077,1.218) & (1.115,1.272) & (0.966,1.065) & (1.157,1.292) & (0.962,1.064) \\
\hline
\hline
DM-GPR-RBF & 1.260 & 1.390 & 1.163 & 1.365 & 1.166 \\
& (1.111,2.201) & (1.149,1.980) & (0.997,2.331) & (1.173,1.939) & (0.997,1.853)\\
\hline
\rowcolor{LightCyan}
LLE-GPR-RBF & 1.197 & 1.234 & 1.076 & 1.270 & 1.103 \\
\rowcolor{LightCyan}
& (1.103,1.452) & (1.131,1.461) & (0.993,1.582) & (1.175,1.468) & (0.990,1.553) \\
\hline
\hline
DM-MVAR-GH & 1.151 & 1.180 & 1.011 & 1.225 & 1.011 \\
& (1.078,1.220) & (1.113,1.269) & (0.966,1.063) & (1.155,1.288) & (0.959,1.063) \\
\hline
\rowcolor{LightCyan}
LLE-MVAR-GH & 1.155 & 1.182 & 1.011 & 1.229 & 1.014 \\
\rowcolor{LightCyan}
& (1.077,1.218) & (1.115,1.271) & (0.966,1.065) & (1.158,1.293) & (0.962,1.064) \\
\hline
\hline
DM-MVAR-RBF & 1.289 & 1.312 & 1.135 & 1.341 & 1.106 \\
& (1.124,2.132) & (1.146,2.018) & (0.996,1.635) & (1.186,1.933) & (0.99,1.601) \\
\hline
\rowcolor{LightCyan}
LLE-MVAR-RBF & 1.187 & 1.230 & 1.080 & 1.264 & 1.096 \\
\rowcolor{LightCyan}
& (1.106,1.477) & (1.14,1.477) & (0.995,1.602) & (1.177,1.479) & (0.989,1.558) \\
\hline
\end{tabular}
\label{LinearManifoldTable}
\end{table}
For our illustrations, we have chosen the first two parsimonious DM coordinates, and the corresponding two LLE eigenvectors. As shown, the best performance is obtained with the GH lifting operator. Using GHs for lifting, any combination of the selected manifold learning algorithms and models outperforms all other combinations, resulting in practically the same RMSE values as the predictions made in the original space. This suggests that the proposed ``embed-forecast-lift'' scheme applied in the 5D feature space provides a very good reconstruction of the predictions made in the original space. On the other hand, lifting with RBF interpolation with LU decomposition generally resulted in poor reconstructions of the high-dimensional space, thus, in many cases, giving wide forecasting intervals that contained the median value of the naive random walk RMSE.
Next, in Table \ref{NonLinearManifoldTable}, we report the forecasting statistics of the time series for the nonlinear stochastic model \eqref{nonlinearmodel} as obtained over 100 runs. As in the case of the linear stochastic model, both MVAR and GPR trained in the original 5D space outperform the naive random walk. The resulting RMSE values suggest a good match with the nonlinear stochastic model. Yet, the match is poorer than the one obtained for the linear model.
\begin{table}[ht]
\caption{Nonlinear Stochastic Model \eqref{nonlinearmodel}. RMSE statistics (median, 5th and 95th percentiles over 100 runs) for each of the five variables as obtained by: (a) training MVAR and GPR in the original 5D space (MVAR(OS), GPR(OS)) with model order one, (b) using the proposed ``embed-forecast-lift'' scheme for all the combinations of the manifold learning algorithms (DMs and LLE), models (MVAR and GPR), and lifting approaches (RBFs and GHs). For the embedding, three coordinates were used.}
\centering
\addtolength{\tabcolsep}{-5pt}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\rowcolor{Gray}
Model/Variable & $y^{(1)}_t$ & $y^{(2)}_t$ & $y^{(3)}_t$ & $y^{(4)}_t$ & $y^{(5)}_t$\\
\hline
\hline
\rowcolor{LightCyan}
Random Walk & 1.711 & 2.085 & 2.292 & 1.731 & 1.435 \\
\rowcolor{LightCyan}
& (1.597,1.815) & (1.939,2.258) & (2.147,2.437) & (1.62,1.819) & (1.354,1.527) \\
\hline
MVAR(OS) & 1.181 & 1.438 & 1.565 & 1.212 & 1.021 \\
& (1.123,1.234) & (1.359,1.555) & (1.463,1.648)& (1.151,1.272) & (0.967,1.076) \\
\hline
\rowcolor{LightCyan}
GPR(OS) & 1.182 & 1.437 & 1.684 & 1.213 & 1.020 \\
\rowcolor{LightCyan}
& (1.123,1.233) & (1.360,1.555) & (1.540,1.799) & (1.151,1.272) & (0.966,1.076) \\
\hline
\hline
DM-GPR-GH & 1.181 & 1.440 & 1.573 & 1.214 & 1.022 \\
& (1.123,1.234) & (1.363,1.56) & (1.477,1.654) & (1.151,1.276) & (0.968,1.077)\\
\hline
\rowcolor{LightCyan}
LLE-GPR-GH & 1.182 & 1.451 & 1.579 & 1.214 & 1.023 \\
\rowcolor{LightCyan}
& (1.126,1.236) & (1.363,1.563) & (1.473,1.675) & (1.151,1.273) & (0.97,1.076) \\
\hline
\hline
DM-GPR-RBF & 1.488 & 1.618 & 2.060 & 1.710 & 1.199 \\
& (1.172,8.88) & (1.392,9.95) & (1.642,6.102) & (1.203,16.868) & (1.012,9.108)\\
\hline
\rowcolor{LightCyan}
LLE-GPR-RBF & 1.214 & 1.449 & 1.587 & 1.254 & 1.088 \\
\rowcolor{LightCyan}
& (1.133,1.557) & (1.365,1.561) & (1.502,1.692) & (1.159,1.446) & (0.989,1.5) \\
\hline
\hline
DM-MVAR-GH & 1.181 & 1.438 & 1.571 & 1.213 & 1.022 \\
& (1.123,1.234) & (1.365,1.555) & (1.472,1.649) & (1.151,1.274) & (0.968,1.076) \\
\hline
\rowcolor{LightCyan}
LLE-MVAR-GH & 1.182 & 1.452 & 1.586 & 1.214 & 1.023 \\
\rowcolor{LightCyan}
& (1.125,1.235) & (1.364,1.563) & (1.468,1.669) & (1.152,1.275) & (0.97,1.076) \\
\hline
\hline
DM-MVAR-RBF & 1.387 & 1.534 & 1.902 & 1.560 & 1.131 \\
& (1.169,6.743) & (1.368,4.634) & (1.609,4.5) & (1.199,9.294) & (0.99,4.655) \\
\hline
\rowcolor{LightCyan}
LLE-MVAR-RBF & 1.213 & 1.445 & 1.570 & 1.253 & 1.091 \\
\rowcolor{LightCyan}
& (1.135,1.507) & (1.366,1.561) & (1.484,1.657) & (1.159,1.621) & (0.998,1.674) \\
\hline
\end{tabular}
\label{NonLinearManifoldTable}
\end{table}
We also provide the corresponding forecasting statistics as obtained with the proposed ``embed-forecast-lift'' scheme. For embedding in the reduced space, we have taken three (parsimonious) coordinates.
Again, the best performance is obtained with the GHs lifting operator for any combination of manifold learning algorithms and models. Importantly, the reconstruction errors between the forecasts with the ``full'' MVAR and GPR models trained directly in the original 5D space and the ones obtained with the proposed ``embed-forecast-lift'' scheme are negligible up to a three-digit accuracy for all the five variables. As with the previous case, lifting with RBFs resulted in a poor reconstruction of the high-dimensional space, with forecasting intervals containing the median RMSE value of the naive random walk.
Finally, in Table \ref{modelorder3}, we report the RMSE statistics for the time series produced with the linear model with a model order three (see \eqref{linearmodel3}) as obtained over 100 runs from (a) the naive random walk model applied to the original 5D space, (b) the MVAR models trained in the original 5D data set with model orders one (MVAR(1)) and three (MVAR(3)), and (c) the proposed ``embed-forecast-lift'' method with the embedding applied to the original 5D data set using DM and LLE for embedding with three coordinates. In the reduced-order space we have trained MVAR models with model orders 1 and 3 and used GHs for lifting.
\begin{table}[ht]
\caption{Linear Stochastic Model with model order three (see \eqref{linearmodel3}). RMSE statistics (median, 5th and 95th percentiles over 100 runs) for 100 simulations, for each of the 5 variables as obtained by: (a) training MVAR(1) and MVAR(3) models in the original 5D feature space, and (b) the proposed scheme with DMs and LLE, MVAR(1) and MVAR(3) models and GHs for lifting. The embedding with DMs and LLE was implemented using three coordinates.}
\centering
\addtolength{\tabcolsep}{-5pt}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\rowcolor{Gray}
Model/Variable & $y^{(1)}_t$ & $y^{(2)}_t$ & $y^{(3)}_t$ & $y^{(4)}_t$ & $y^{(5)}_t$ \\
\hline
\hline
\rowcolor{LightCyan}
Random Walk & 2.138 & 2.910 & 1.930 & 3.499 & 2.477 \\
\rowcolor{LightCyan}
&(1.898,2.461) & (2.412,3.551) & (1.755,2.152) & (2.978,4.195) & (2.253,2.739) \\
\hline
MVAR(1) & 1.629 & 2.121 & 1.382 & 2.567 & 1.840 \\
& (1.466,1.851) & (1.817,2.515) & (1.275,1.517) & (2.244,2.997) & (1.699,2.031) \\
\hline
\rowcolor{LightCyan}
MVAR(3) & 1.611 & 2.092 & 1.371 & 2.531 & 1.824 \\
\rowcolor{LightCyan}
& (1.449,1.833) & (1.787,2.483) & (1.265,1.501) & (2.206,2.963) & (1.689,2.019) \\
\hline
\hline
DM-MVAR(1)-GH & 1.630 & 2.121 & 1.383 & 2.569 & 1.843 \\
& (1.467,1.854) & (1.825,2.515) & (1.273,1.516) & (2.244,2.998) & (1.697,2.037) \\
\hline
\rowcolor{LightCyan}
LLE-MVAR(1)-GH & 1.651 & 2.159 & 1.397 & 2.619 & 1.866 \\
\rowcolor{LightCyan}
& (1.472,1.905) & (1.832,2.567) & (1.281,1.545) & (2.269,3.080) & (1.705,2.084) \\
\hline
\hline
DM-MVAR(3)-GH & 1.621 & 2.103 & 1.376 & 2.546 & 1.835 \\
& (1.455,1.839) &(1.791,2.484) & (1.267,1.509) & (2.217,2.972) & (1.694,2.028) \\
\hline
\rowcolor{LightCyan}
LLE-MVAR(3)-GH & 1.649 & 2.149 & 1.393 & 2.608 & 1.862 \\
\rowcolor{LightCyan}
& (1.473,1.902) & (1.820,2.572) & (1.277,1.550) & (2.252,3.065) & (1.703,2.094) \\
\hline
\end{tabular}
\label{modelorder3}
\end{table}
The best results were obtained when using the proposed scheme with DMs for embedding and a model order three for the training of the MVAR model in the corresponding manifold. Importantly, the implementation of DM-MVAR(3)-GH succeeds in reproducing quite well the results obtained by training MVAR with a maximum delay of three in the original space.
\subsection{FOREX Trading}
Here, we assess the performance of the proposed forecasting framework in the FOREX trading application described earlier, in terms of the annualized Sharpe (SH) ratio (\cite{Sharpe1994}) of the constructed risk parity portfolio. The returns are computed by Eq. (\ref{eq:RiskParityPortfolio}). The basic formula for calculating the SH ratio is given by:
\begin{equation}
\text{SH} = \frac{\mu_{\Pi} - R_f}{\sigma_{\Pi}},
\end{equation}
where $\mu_{\Pi}$ is the sample average return of the risk parity portfolio and $\sigma_{\Pi}$ the corresponding volatility, while $R_f$ is the risk-free rate, usually set equal to the annual yield of US Treasury Bonds. Here, for our analysis, we have set $R_f=0$, which is a fair approximation of reality.
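
For concreteness, the following Python sketch computes the annualized SH ratio from a series of daily portfolio returns (the annualization over 250 trading days and the variable names are our illustrative choices):
\begin{verbatim}
import numpy as np

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=250):
    """Annualized Sharpe ratio of per-period portfolio returns."""
    excess = returns - risk_free / periods_per_year
    # annualize the mean and the volatility before forming the ratio
    mu = np.mean(excess) * periods_per_year
    sigma = np.std(excess, ddof=1) * np.sqrt(periods_per_year)
    return mu / sigma
\end{verbatim}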
The underlying dynamics of the FOREX market is in general non-autonomous, at least over a long time period. In principle, there are time-varying exogenous factors, including macroscopic economic indices, social interaction and mimesis (see e.g. \cite{Papaioannou2013}, where machine learning has been used to forecast FOREX taking into account Twitter messages), but also seasonal factors and rare events (such as the recent COVID-19 pandemic), which influence FOREX over time. This comes in contrast to the synthetic time series that we have examined here, and also to other autonomous models that have been considered in other studies serving as benchmarks to assess the performance of the various schemes (see also the discussion in the conclusions, Section~\ref{Conclusion}). Hence, to cope with such changes of the FOREX market and, in general, of financial assets, one would set up a sliding/rolling window, train models within the rolling window, and perform (usually) one-day-ahead forecasts for trading purposes.
For our illustrations, we assessed the performance of the proposed scheme based on the so-called risk parity trading strategy, using a 1-year (250 trading days) embedding rolling window and the last 50 or 100 days of the 250-day rolling window for training; a code sketch of this protocol is given after this paragraph. The forecasting performance of the proposed scheme using DMs and LLE for embedding with 3 coordinates, MVAR and GPR for prediction, and GHs for lifting was compared against the naive random walk and the full MVAR and GPR models trained and implemented directly in the original space. A comparison against the linear embedding provided by PCA with the same number of principal components was also performed. Figure~\ref{fig:SharpeBarPlot_3_AllinOne} depicts the SH ratios obtained with the various methods. As shown, the proposed schemes using the DM and LLE algorithms for embedding outperform all the other schemes, when considering the same size of the training window. In particular, the highest SH ratios ($\sim 0.83$) are obtained with the combination of LLE and MVAR within the 100-day training window, followed by the combination of DMs and GPR in the 100-day training window, resulting in an SH ratio $\sim 0.76$. The third ($\sim 0.73$) and fourth ($\sim 0.72$) largest SH ratios result again from the implementation of the DMs and MVAR within the 50-day and 100-day training windows, respectively. On the other hand, the naive random walk (black bar) resulted in a negative SH ratio ($\sim \; -0.42$). Negative SH ratios (of the order of $-0.35$) resulted also from the implementation of PCA with GPR (yellow bars) for both sizes of the training window, while for the 50-day training window, the combination of PCA with MVAR resulted in an almost zero (but still negative) SH ratio. With PCA, a (small) positive SH value ($\sim 0.23$) was obtained with MVAR within the 100-day training window. The full MVAR and GPR models, trained and implemented directly in the original space, produced positive SH ratios, but still smaller than those of the DMs and LLE schemes. In particular, within the 50-day training window the full MVAR (GPR) model resulted in an SH ratio of $\sim 0.39$ ($\sim 0.04$), while within the 100-day training window the full MVAR (GPR) model resulted in an SH ratio of $\sim 0.68$ ($\sim 0.3$). Finally, we also tested the forecasting performance of the full MVAR and GPR models using the 250-day training window. Within this configuration, the MVAR model yielded an SH ratio of $\sim 0.49$, and the GPR model an SH ratio of $\sim 0.2$.
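
To make the rolling-window protocol explicit, the following Python sketch outlines one-day-ahead forecasting with LLE for embedding and an MVAR model trained on the last part of the window; the nearest-neighbour lift at the end is a crude stand-in for the GH lifting used in our experiments, and all library and parameter choices are illustrative:
\begin{verbatim}
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from statsmodels.tsa.api import VAR

def rolling_forecast(Y, window=250, train_len=100, n_coords=3):
    """One-day-ahead embed-forecast-lift within a rolling window."""
    preds = []
    for t in range(window, len(Y)):
        block = Y[t - window:t]
        # (i) embed the window (re-fitted daily; slow but simple)
        emb = LocallyLinearEmbedding(
            n_components=n_coords).fit_transform(block)
        # (ii) forecast one step ahead on the manifold
        model = VAR(emb[-train_len:]).fit(maxlags=1)
        z_next = model.forecast(emb[-1:], steps=1)[0]
        # (iii) lift: here a crude nearest-neighbour lookup, not GHs
        nearest = np.argmin(np.sum((emb - z_next) ** 2, axis=1))
        preds.append(block[nearest])
    return np.array(preds)
\end{verbatim}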
\begin{figure}[ht!]
\centering
\hspace*{-1.9cm}
\includegraphics[scale=0.4]{SharpeBarPlot_3_AllinOne.png}
\caption{FOREX trading. One-day-ahead predictions. Sharpe Ratios obtained with the proposed framework (using DMs and LLE for embedding, MVAR and GPR for prediction at the embedded space, and GHs for lifting), as well as with PCA, random walk, and MVAR and GPR models implemented directly in the original space. A rolling window of 250 trading days and 3 vectors were used for the embedding, while the MVAR and GPR models in the embedded space were trained using the last 50 or 100 points of the rolling window.}
\label{fig:SharpeBarPlot_3_AllinOne}
\end{figure}
\section{Conclusions and discussion\label{Conclusion}}
We proposed a numerical framework based on manifold learning for forecasting time series, which is composed of three basic steps: (i) embedding of the original high-dimensional data in a low-dimensional manifold, (ii) construction of reduced-order models and forecasting on the manifold, and (iii) reconstruction of the predictions in the high-dimensional space.
As mentioned in the introduction, the task of forecasting is different from that attempted for the reconstruction of high-dimensional models of dynamical systems in the form of ODEs or PDEs, in four main aspects. First, for real-world data sets, the existence of a relatively smooth low-dimensional manifold is not guaranteed as in the case of well-defined dynamical models (see e.g. the discussion in \cite{gajamannage2019nonlinear}). Second, non-stationary dynamics, which in general are not an issue when dealing with the approximation of dynamical systems, pose a major challenge for reliable/consistent forecasting. It should be noted that the stationarity assumption may be required to hold true even for interpolation problems. For example, the implementation of MVAR models, but also of Gaussian Processes with a Gaussian kernel, requires the stationary covariance function assumption to be satisfied (see e.g. the discussion in \cite{rasmussen2003gaussian} and \cite{cheng2015time}). Third, when dealing with real-world time series, such as financial time series, the number of available snapshots (even at the intraday frequency of trading) is limited, in contrast to the size of temporal data that one can produce by model simulations. In such cases, the quest for beating the ``curse of dimensionality'' using the correct (parsimonious) embedding and modelling is stronger. Finally, in the case of smooth dynamical systems, as the dynamics emerges from the same physical laws as expressed by the underlying ODEs, PDEs or SDEs, what is usually sought is a single global manifold (a single geometry). However, in many complex problems such as those in finance, the parametrization of the manifold (if it exists) may change over time. Thus, one would seek a family of (sequential-in-time) submanifolds, which can be ``identified'' within a rolling window approach.
Here, we assessed the performance of the proposed numerical framework by implementing and comparing different combinations of manifold learning algorithms (LLE and DMs), regression models (MVAR and GPR) and reconstruction operators (RBFs and GHs) on various problems, including synthetic stochastic data series and a real-world data set of FOREX time series. By doing so, we first showed that for the synthetic time series, the proposed ``embed-forecast-lift'' framework, and in particular the one implementing DMs for embedding and GHs for lifting, reconstructs almost perfectly the predictions obtained in the high-dimensional space with the full regression models. We also showed that using a method such as the LU decomposition for the solution of the RBF interpolation problem should in general be avoided for lifting, as it might result in an ill-posed problem. In this case, however, one could compute the least-norm solution using e.g. the Moore--Penrose pseudoinverse. Moreover, for large-scale, almost singular linear systems, one can use GMRES with a singular preconditioner as proposed by \cite{elden2012solving}.
We intend to study the performance of such an approach in a future work.
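
As an indication of this alternative, the following minimal Python sketch computes the least-norm RBF lifting weights via the Moore--Penrose pseudoinverse (a Gaussian kernel and the shape parameter \texttt{eps} are our illustrative choices):
\begin{verbatim}
import numpy as np

def rbf_lift_weights(Z, Y, eps=1.0):
    """Least-norm RBF interpolation weights for lifting.
    Z: (m, d) embedded points; Y: (m, D) high-dimensional targets."""
    r2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-eps * r2)           # Gaussian RBF kernel matrix
    return np.linalg.pinv(Phi) @ Y    # least-norm solution if Phi is singular
\end{verbatim}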
For the FOREX forecasting problem, we used the standard Sharpe Ratio metric and the risk parity portfolio to assess the forecasting performance of the proposed numerical scheme within a rolling window framework. The proposed scheme with DMs and LLE, MVAR and GPR models and GHs for lifting outperformed all other ``conventional schemes'', i.e. the full MVAR and GPR models implemented directly in the original space as well as the scheme that used PCA for the task of embedding. At this point, we should note that one could use different schemes such as autoencoders (\cite{kramer1991nonlinear,chen2018molecular}), reservoir computing (\cite{lukovsevivcius2009reservoir,pathak2018model}) and LSTMs (\cite{greff2016lstm,vlachas2018data}). We aim to perform an extensive comparison of the above methodologies in a future work.
Finally, we note that in order to cope with the generalization property and the topological instability issues (see e.g. \cite{balasubramanian2002isomap}) that arise in implementing kernel-based manifold learning algorithms when the data set is not dense on the manifold, and/or in the presence of ``strong'' stochasticity, one can resort to techniques such as the constant-shift one to appropriately adjust the graph metric, and techniques for the removal of outliers from the data set and the construction of smooth geodesics (\cite{choi2007robust,wang2012geometric,gajamannage2019nonlinear}). We intend to explore the efficiency of such approaches in a future work.
\section*{Acknowledgements}
This work was partially supported by Gruppo Nazionale per il Calcolo Scientifico - Istituto Nazionale di Alta Matematica (GNCS-INdAM), Italy, and by the Italian program Fondo Integrativo Speciale per la Ricerca (FISR) - B55F20002320001. The work of IGK was partially supported by the US Department of Energy.
\section{Introduction}
\label{sec:intro}
\begin{bullets}%
[localization in cellular networks]
\begin{bullets}%
[its applications]Localization services play a central role
in countless applications such as navigation, augmented reality,
autonomous driving, wireless communications and emergency
response to name a few.
\nextv{ \begin{bullets}
[wireless]For example, location information can be used in
mobile networks to improve accuracy in beam
alignment and channel estimation \cite{xiao2022integrated}
[emergency response]or in natural and man-made disasters to
monitor the environment or find
survivors~\cite{savvides2001dynamic}.
\end{bullets}
\end{bullets}%
[existing localization techniques]%
\begin{bullets}%
[pilots]Most localization systems rely on algorithms that
provide location estimates based on pilot signals that are
received from satellites or terrestrial transmitters.
%
[LOS$\rightarrow$~ model-based]In case of line-of-sight (LOS) reception,
\emph{model-based} approaches are typically pursued, where
geometric principles are applied to estimate locations from
distance and/or angle estimates obtained from channel features
such as time of arrival, time difference of arrival, or angle of
arrival.
%
  [NLOS$\rightarrow$~ data driven $\rightarrow$~ fingerprinting]In turn, when there
      is no LOS to a sufficient number of transmitters, as occurs
indoors or in urban scenarios, data-driven approaches are
preferred since the aforementioned distance or angle estimates
become too inaccurate.
\begin{bullets}%
[description]The most prominent example of this class of
algorithms is \emph{fingerprinting}, which involves recording a
set of channel state information (CSI) vectors measured at known
locations; see~\cite{sobehy2020nearest} and references therein.
\begin{bullets}
[k-nn]Location estimates can be obtained, for instance, by
comparing the CSI observed by the node to be located with the
entries of this data set and applying K-nearest neighbors.
[DNN]More sophisticated alternatives rely on deep
neural networks (DNNs) to learn a mapping from
CSI~\cite{arnold2018deep, arnold2019novel} or from
preprocessed
CSI~\cite{niitsoo2018convolutional,li2019massive,ferrand2020feature}
into location estimates.
\end{bullets}
%
[limitation]The main limitation of fingerprinting approaches
stems from the need for large data sets, which are costly to
acquire since each entry involves obtaining the position of a
sensor either manually or by means of auxiliary localization
systems, e.g. by using a robot.
\end{bullets}
\end{bullets}%
[ss channel charting]To alleviate the cost of data collection,
\emph{channel charting}~\cite{studer2018charting} has been recently
proposed.
\begin{bullets}%
[general principle of CC]The idea is to establish a connection
between the geometry in the \emph{radio space} where (features of)
the CSI vectors reside and the geographical geometry of the
\emph{physical space} where the nodes to be located lie. The key
assumption is that CSI vectors acquired at spatially near
locations are similar to each other.
[approaches]
\begin{bullets}
[no DNN]Fig.~\ref{fig:ecc_workflow}a depicts the main steps
in channel charting.
\begin{bullets}%
[description]%
\begin{bullets}%
[Workflow]There, a dimensionality reduction algorithm
assigns a point in 2D or 3D space to each input CSI vector
in such a way that the distance between each pair of points
is similar in some sense to the dissimilarity between the
feature representations of the CSI vectors acquired at those
points; see Sec.~\ref{sec:preliminary}. This mapping is
referred to as a \emph{channel chart}.
%
[anchors $\rightarrow$~ semi-supervised]The relative positions of
the points it returns approximately correspond to the
relative positions of the nodes in the physical space. If
in addition there are enough anchor nodes, i.e. nodes whose positions
are known, \emph{semi-supervised}
extensions~\cite{huang2019improving} can
provide absolute position estimates.
\end{bullets}
[limitations] In early works on channel charting,
feature extraction and dissimilarity metrics are manually
engineered by relying on physical principles and heuristic
considerations.
\end{bullets}%
%
[neural network-based channel charting]%
\begin{bullets
[description]To reduce the
inaccuracies arising from these approaches, DNN-based
alternatives learn one of these steps from data.
\begin{bullets}%
[learning dissimilarity, fixed features]For instance,
\cite{studer2018charting} and \cite{ferrand2020triplet} fix
the feature extraction step and learn the dissimilarity
metric or correspondence between the CSI vectors and the
channel chart.
%
[learning features, dissimilarity fixed]Conversely,
\cite{bromley1993signature} learns the mapping from CSI to
features while fixing the dissimilarity metric to be the
Euclidean distance.
%
\end{bullets}%
%
[limitations]
\begin{bullets}%
[arbitrary]In short, both approaches learn only part of
the workflow.
[explicit]Besides, the explicit construction of a
channel chart is convenient in those applications where
only relative positions are required, but bypassing such a
step is naturally expected to result in improved
localization performance when absolute positions are
needed.
\end{bullets}
\end{bullets}
\end{bullets}
\end{bullets}%
[our work]
\begin{bullets}%
[main contribution: implicit CC loc]Building upon these two
observations, the present work proposes
\begin{bullets}%
[description]\textit{implicit channel charting-based
localization} (ICCL), where the radio geometry is learned from
data without explicitly constructing a channel chart.
\begin{bullets}%
[step 1: NN]In the first step, a DNN is used to predict
the physical (or geographical) distances between
nodes given the CSI that they measure.
%
[step 2: multilateration]In the second step, these
distances are utilized in combination with the locations of
anchor nodes to estimate the absolute positions of the nodes.
\end{bullets}
%
[relative to other schemes]
\begin{bullets}%
[CC]Thus, unlike most channel charting schemes, ICCL is
supervised and provides absolute location estimates.
%
[model-based loc]Relative to model-based localization
algorithms, the proposed scheme inherits the robustness of
fingerprinting to non-LOS (NLOS) propagation.
%
[FP]%
\begin{bullets}%
[standard FP $\rightarrow$~ learn geom. ]As compared to conventional
fingerprinting, the proposed algorithm learns the radio
geometry from data,
%
[neural FP]whereas relative to DNN-based fingerprinting,
learning is heavily improved since the fact that distances
are learned instead of absolute positions gives rise to a
natural data augmentation effect, where the number of
training examples is quadratic in the number of entries of
the data set; cf. Sec.~\ref{sec:training}.
%
[more measurements]Finally, the proposed scheme
leverages CSI acquired by multiple nodes rather than from
only one, which is expected to increase robustness to noise
and reduce the size of the required data set.
\end{bullets}
\end{bullets}%
\end{bullets}
[side contribution: loc. with UAV]
\begin{bullets}
[no infrastructure]Although the proposed ICCL approach could
be used with arbitrary forms of CSI, this paper focuses on a
scenario where an unmanned aerial vehicle (UAV) is used to
locate nodes on the ground. This is well motivated when no
terrestrial infrastructure is operational because of a natural disaster or a
military attack and when no global navigation satellite systems
(GNSSs) can be used, e.g. because nodes lack the appropriate
sensors or because the propagation environment precludes LOS
propagation from the satellites.
%
[relation to other works]This constitutes another
contribution of the paper since, to the best of our knowledge,
\begin{bullets}%
[loc. with UAVs $\rightarrow$~ robust to channel impairments](i)
existing schemes for localization with UAVs rely on
model-based algorithms and therefore are sensitive to NLOS
conditions and other channel impairments, and
[1st CC+UAVs](ii) no previous work has considered channel
charting in setups involving UAVs.
\end{bullets}
%
\end{bullets}%
\end{bullets}
%
[structure of the paper] This paper is organized as follows.
\begin{bullets}%
[preliminary]After reviewing some relevant background in Sec.~\ref{sec:preliminary},
%
[system model] Sec.~\ref{sec:problem_formualation}
formulates the problem.
%
[localization] ICCL is proposed next in Sec.~\ref{sec:ICCL}
%
[simulation results]and its performance is empirically
assessed in Sec.~\ref{sec:simulation}.
%
[conclusion] Finally, Sec.~\ref{sec:conclusion} concludes
the paper.
\end{bullets}%
%
[notation]\emph{Notation.}
\begin{bullets}%
[scalar, vector, matrix] Lower and uppercase boldface
letters denote column vectors and matrices, respectively.
%
[identity matrix]$\bm{I}$ denotes the identity matrix of
appropriate size.
%
[conjugate transpose]The conjugate
transpose operator is $(.)^H$.
%
[complex Gaussian distribution] A circularly-symmetric
complex Gaussian distribution with mean $\mu$ and variance
$\sigma^2$ is represented as
$\mathcal{CN}\left(\mu,\sigma^2\right)$.
%
[Frobenius norm]Finally, $\|.\|$ denotes the Euclidean norm.
\end{bullets}%
\end{bullets}
\section{Channel Charting}
\label{sec:preliminary}
\begin{bullets}%
[overview]Channel charting was proposed in
\cite{studer2018charting} as an \emph{unsupervised} alternative to
algorithms such as fingerprinting, which suffer from high data
acquisition costs. In this context, \emph{supervised} means that
each entry of the data set is a pair of a CSI vector and the
location at which it was acquired, whereas \emph{unsupervised} means
that each entry of the data set contains just a CSI vector. The
price to be paid is that plain channel charting just provides coarse
information of the relative locations of the nodes. In some
applications, this kind of information suffices to enhance network
functionalities such as handover management, predictive radio
resource allocation, and user tracking or
pairing~\cite{ferrand2020triplet}.
[key idea]As indicated earlier, the core idea behind channel
charting is that spatially close sensors are expected to measure
similar CSI from the relevant transmitters.
%
[standard CC] To apply this principle, the key steps of channel
charting are described next and summarized in
Fig.~\ref{fig:ecc_workflow}a. Consider $M$ nodes located at positions
$\{\bm{p}_{\hc{i}}\}_{{\hc{i}}=1}^M\subset\mathbb{R}^{D}$, where $D$ equals $2$ or
$3$.
\begin{bullets}%
[steps]
\begin{bullets}%
[feature extraction]First,
\begin{bullets}%
[input]the CSI vector $\tbm{g}_{\hc{i}}\in\mathbb{C}^L$ acquired by
the ${\hc{i}}$-th node is mapped
[output] into a feature vector $\bm{f}_{\hc{i}}=\bm \phi(\tbm{g}_{\hc{i}})\in\mathbb{C}^{L^\prime}$.
[algorithms]For example, such a transformation may involve
computing second-order moments, scaling, and transforming
the result into the angular domain~\cite{studer2018charting}.
\end{bullets}%
[dissimilarity metric]For each pair of nodes, say $({\hc{i}},{\hc{j}})$, a
dissimilarity metric
$d_{{\hc{i}},{\hc{j}}}=\delta(\bm \phi(\tbm{g}_{\hc{i}}),\bm \phi(\tbm{g}_{\hc{j}}))$ is
subsequently computed. Ideally, function $\delta$ should be chosen
so that its returned value resembles
the physical distance between the locations of
these nodes as much as possible. However, this is not generally
doable and, for example,~\cite{agostini2020channel} uses the
so-called \emph{correlation matrix distance}
whereas~\cite{studer2018charting} uses Euclidean distance.
[dimensionality reduction]In the next stage, a dimensionality
reduction algorithm is applied to find $M$ points
$\{\bm{z}_{\hc{i}}\}_{{\hc{i}}=1}^M\subset\mathbb{R}^D$ in such a way that the
distance between the ${\hc{i}}$-th and the ${\hc{j}}$-th point is ideally
$d_{{\hc{i}},{\hc{j}}}$ for all ${\hc{i}},{\hc{j}}$. Sammon's mapping \cite{sammon1969mapping}
can be used to this end, but other methods such as principal
component analysis (PCA)
\cite{pearson1901lii
} and autoencoders have also been considered~\cite{studer2018charting}.
\end{bullets}%
[channel chart]The mapping from $\tbm{g}_{\hc{i}}$ to $\bm{z}_{\hc{i}}$ constitutes
the channel chart. The vectors $\bm{z}_{\hc{i}}$ are named
\emph{pseudopositions} because they approximately preserve the
\emph{relative} positions of the vectors $\bm{p}_{\hc{i}}$. For this reason,
the quality of a channel chart is typically quantified by ad-hoc
metrics such as the \emph{trustworthiness} and \emph{continuity}
\cite{venna2001neighborhood,kaski2003trustworthiness,vathy2013graph}.
\nextv{\acom{expand?}}%
\end{bullets}%
[semi-supervised]However, it is also possible to obtain absolute
location estimates with channel charting by relying on semi-supervised
learning~\cite{huang2019improving}.
\begin{figure}
\centering
\includegraphics[width=9cm]{figs/Fig.ecc_icc.pdf}
  \caption{(a): Conventional (explicit) channel charting. (b):
    Implicit channel charting (proposed).}
\label{fig:ecc_workflow}
\end{figure}
\end{bullets}%
\section{Problem Formulation}
\label{sec:problem_formualation}
\begin{bullets}
[scenario description] Consider $M$ nodes located at positions
$\{\bm{p}_{\hc{i}}\}_{{\hc{i}}=1}^M\subset\mathbb{R}^{D}$, where $D$ equals $2$ or
$3$.
\begin{bullets}%
[nodes]
\begin{bullets}%
[define anchors, unknowns]The positions
$\mathcal{P}_{a}=\{\bm{p}_{1},\bm{p}_{2},\ldots,\bm{p}_{M_a}\}$ of the
first $M_a\geq 3$ nodes are known and, therefore, these nodes are
referred to as \emph{anchors}. The locations
$\mathcal{P}_{u}=\{\bm{p}_{M_a+1},\bm{p}_{M_a+2},\ldots,\bm{p}_{M}\}$ of
the rest of the nodes are unknown and, consequently, these nodes
will be referred to as \emph{unknowns}.
[assumption of no localization technique]The unknowns are
not able to localize themselves via GNSS, which occurs for
example when (i) high buildings obstruct the LOS to satellites,
(ii) the nodes are indoors, or (iii) the nodes are covered by
debris, as occurs in applications where survivors from an
earthquake must be located\nextv{\acom{cite?
atif2021localization}}. The unknowns cannot localize
themselves using the terrestrial infrastructure either, which is
relevant when the latter is not operational due to a natural
disaster, a military attack, or a long
blackout. \nextv{\acom{also distances between
sensors?}}
\end{bullets}%
[uav]To localize the unknowns, a UAV flies over the area and
transmits pilot signals at $N$ waypoints
$\{\bm{u}_{{\hc{n}}}\}_{{\hc{n}}=1}^N\subset\mathbb{R}^{3}$ along its trajectory.
Although this paper considers a single UAV, it is straightforward
to accommodate
multiple UAVs.
%
[CSI]For each of these $N$ waypoints, each node measures the
CSI as described next. If the application at hand demands that the
UAV locates the nodes, then all nodes report their measured CSI to
the UAV. If, instead, each unknown must localize itself, the anchors
send their measured CSI vectors to the UAV and the latter
broadcasts them to all unknowns.
[fig]The entire setup is illustrated in
Fig.~\ref{fig:city_map}.
\begin{figure}
\centering
\includegraphics[width=7cm]{figs/Fig.city_map.pdf}
\caption{Illustration of a localization problem in an urban
scenario using a UAV. Green circles denote nodes with known
locations. Orange crosses represent nodes with unknown
locations. Blue blocks denote buildings.}
\label{fig:city_map}
\end{figure}
\end{bullets}%
[signal model]
\begin{bullets}%
[pilot sequence]At the ${\hc{n}}$-th waypoint, the UAV transmits a
pilot sequence consisting of $N_p$ symbols denoted as
$ \bm{x}_{{\hc{n}}}=\left[x_{{\hc{n}}}[1],x_{{\hc{n}}}[2],\ldots,x_{{\hc{n}}}[N_p]\right]^\top$.
%
[channel coefficients]For simplicity, assume that both the UAV
and the nodes have a single antenna and that the channel is
neither frequency nor time selective. Therefore, the channel
between the $n$-th waypoint and the ${\hc{i}}$-th node can be represented
by a single coefficient $h_{{\hc{i}},{\hc{n}}}\in \mathbb{C}$. The signal received
at node ${\hc{i}}$ is given by
\begin{equation}
\label{eq:rxsignal}
\bm{y}_{{\hc{i}},{\hc{n}}} = h_{{\hc{i}},{\hc{n}}}\bm{x}_{{\hc{n}}} + \bm{w}_{{\hc{i}},{\hc{n}}},
\end{equation}
where
$\bm{w}_{{\hc{i}},{\hc{n}}}=\left[w_{{\hc{i}},{\hc{n}}}[1],\ldots,w_{{\hc{i}},{\hc{n}}}[N_p]\right]^\top$
models noise.
\end{bullets}%
[problem formulation]
\begin{bullets}%
[given]Given
\begin{bullets}%
[anchors]the anchor positions $\mathcal{P}_{a}$,
%
[transmitted + received signal]the pilot sequences
$\{\bm{x}_{\hc{n}}\}_{\hc{n}}$, and the received signals at all nodes
$\{\bm{y}_{{\hc{i}},{\hc{n}}}\}_{{\hc{i}},{\hc{n}}}$,
\end{bullets}%
%
[estimate]the problem is to estimate the positions
$\mathcal{P}_{u}$ of the unknowns.
\end{bullets}%
\end{bullets}
\section{Implicit Channel Charting-based Localization}
\label{sec:ICCL}
\begin{bullets}
[approach] This section proposes ICCL to solve the problem formulated in
Sec.~\ref{sec:problem_formualation}. The algorithm consists of three
phases.
\begin{bullets}
[CSI extraction] First, CSI needs to be extracted from the received signals.
%
[estimate distances] Given the extracted CSI, a DNN
predicts geographical distances between each pair of nodes.
%
[localization]Finally, the multilateration algorithm~\cite{savvides2001dynamic} is used to recover
the absolute positions of the unknowns given the aforementioned
distances and the anchor locations.
\end{bullets}
The key steps in the proposed algorithm are shown in
Fig.~\ref{fig:ecc_workflow}b. Details of each phase will be provided in
the following subsections.
\end{bullets}
\subsection{CSI Extraction}
\label{sec:csiextraction}
\begin{bullets}%
[channel estimation]Although ICCL can be applied, in principle,
to arbitrary forms of CSI, for concreteness and simplicity, CSI in
this paper refers to the power gain.
\begin{bullets}%
%
[channel estimate]In view of the model in \eqref{eq:rxsignal},
the least-squares estimator of $h_{{\hc{i}},{\hc{n}}}$ given $\bm{y}_{{\hc{i}},{\hc{n}}}$ and
$\bm{x}_{{\hc{n}}}$ is given by
\begin{equation}
\label{eq:hestimator}
\tilde{h}_{{\hc{i}},{\hc{n}}} = {\bm{x}_{{\hc{n}}}^H\bm{y}_{{\hc{i}},{\hc{n}}}}/({\bm{x}_{{\hc{n}}}^H\bm{x}_{{\hc{n}}}}).
\end{equation}
\nextv{ If $\bm{w}_{{\hc{i}},{\hc{n}}}\sim\mathcal{CN}\left(\bm 0,
\sigma^2\bm{I}\right)$, then $\tilde{h}_{{\hc{i}},{\hc{n}}}$ is also the
\emph{minimum variance unbiased estimator}, the \emph{best
linear unbiased estimator}, and the \emph{maximum
likelihood estimator} of $h_{{\hc{i}},{\hc{n}}}$~\cite{kay1}. In this case, observe
that $\bm{y}_{{\hc{i}},{\hc{n}}}\sim\mathcal{CN}\left(\bm{x}_{{\hc{n}}}h_{{\hc{i}},{\hc{n}}},
\sigma^2\bm{I}\right)$ and, as a result,}
An estimate of the power gain can therefore be obtained as
$\tilde{g}_{{\hc{i}},{\hc{n}}}=|\tilde{h}_{{\hc{i}},{\hc{n}}}|^2$. The CSI
vector of the ${\hc{i}}$-th node can then be defined as
$\tilde{\bm{g}}_{\hc{i}}=\left[\tilde{g}_{{\hc{i}},1},\tilde{g}_{{\hc{i}},2},\ldots,\tilde{g}_{{\hc{i}},N}\right]^\top\in\mathbb{R}^{N}$.
[generation] For pre-training purposes, as discussed
later, it is convenient to be able to generate samples of
$\tilde{g}_{{\hc{i}},{\hc{n}}}$ without simulating the propagation of the
pilot signals through the channel as per
\eqref{eq:rxsignal}. To this end, one can set
$h_{{\hc{i}},{\hc{n}}}=\sqrt{g_{{\hc{i}},{\hc{n}}}}e^{j\varphi_{{\hc{i}},{\hc{n}}}}$, where
$g_{{\hc{i}},{\hc{n}}}\in\mathbb{R}_+$ is the true power gain provided by
some model, and
$\varphi_{{\hc{i}},{\hc{n}}}\sim\mathcal{U}\left(-\pi,\pi\right)$.
Observe that if
$\bm{w}_{{\hc{i}},{\hc{n}}}\sim\mathcal{CN}\left(\bm 0, \sigma^2\bm{I}\right)$,
then
$\bm{y}_{{\hc{i}},{\hc{n}}}\sim\mathcal{CN}\left(\bm{x}_{{\hc{n}}}h_{{\hc{i}},{\hc{n}}},
\sigma^2\bm{I}\right)$ and, as a result,
$ \tilde{h}_{{\hc{i}},{\hc{n}}}~\sim~\nextv{&\mathcal{CN}\left(\frac{\bm{x}_{{\hc{n}}}^H\bm{x}_{{\hc{n}}}
h_{{\hc{i}},{\hc{n}}}}{\bm{x}_{{\hc{n}}}^H\bm{x}_{{\hc{n}}}},\frac{\sigma^2}{\|\bm{x}_{{\hc{n}}}\|^2}\right)
\\ &=}
\mathcal{CN}\left(h_{{\hc{i}},{\hc{n}}},{\sigma^2}/{\|\bm{x}_{{\hc{n}}}\|^2}\right)
$.
Then, one could equivalently write
$\tilde{h}_{{\hc{i}},{\hc{n}}}$ as
$\tilde{h}_{{\hc{i}},{\hc{n}}} =
\sqrt{g_{{\hc{i}},{\hc{n}}}}e^{j\varphi_{{\hc{i}},{\hc{n}}}}
+ z_{{\hc{i}},{\hc{n}}}$, where
$z_{{\hc{i}},{\hc{n}}}\sim\mathcal{CN}\left(0,{\sigma^2}/{\|\bm{x}_{{\hc{n}}}\|^2}\right)$
models \emph{measurement error}. Since the noise is
circularly symmetric, one can set
$\varphi_{{\hc{i}},{\hc{n}}}=0$ without loss of
generality, which yields
\begin{equation}
\label{eq:ggen}
\tilde{g}_{{\hc{i}},{\hc{n}}}=\left|\tilde{h}_{{\hc{i}},{\hc{n}}}\right|^2 = \left|\sqrt{g_{{\hc{i}},{\hc{n}}}} + z_{{\hc{i}},{\hc{n}}}\right|^2.
\end{equation}
Thus, samples of $\tilde{g}_{{\hc{i}},{\hc{n}}}$ generated according to
\eqref{eq:ggen} are distributed as if the transmission of the
pilot signals is simulated through \eqref{eq:rxsignal} and
\eqref{eq:hestimator} is evaluated.
\end{bullets}%
%
\end{bullets}%
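
For illustration, samples of $\tilde{g}_{{\hc{i}},{\hc{n}}}$ can be generated per \eqref{eq:ggen} with a few lines of Python (a minimal sketch; the function and variable names are ours):
\begin{verbatim}
import numpy as np

def generate_csi(g_true, sigma2, pilot_energy,
                 rng=np.random.default_rng()):
    """Noisy power gains per (eq:ggen): |sqrt(g) + z|^2 with
    z ~ CN(0, sigma^2 / ||x||^2)."""
    nv = sigma2 / pilot_energy   # variance of the complex noise z
    z = (rng.normal(scale=np.sqrt(nv / 2), size=g_true.shape)
         + 1j * rng.normal(scale=np.sqrt(nv / 2), size=g_true.shape))
    return np.abs(np.sqrt(g_true) + z) ** 2
\end{verbatim}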
\subsection{From CSI to Distances}
\begin{bullets}
[overview]This subsection presents the process of predicting
distances between nodes from their CSI vectors $\{\tilde{\bm{g}}_{\hc{i}}\}_{\hc{i}}$. A DNN is trained to
this end and, therefore, it will be forced to implicitly learn the
geometry in the CSI space.
\subsubsection{Architecture}
\label{sec:architecture}
\begin{figure}
\centering
\includegraphics[width=7cm]{figs/Fig.high_level.pdf}
\caption{Architecture of the proposed network.\nextv{: (a) high level; (b) low level.}}
\label{fig:architecture}
\end{figure}
\begin{bullets}
    [input-output]Given the CSI vectors $\tilde{\bm{g}}_{{\hc{i}}}$ and
    $\tilde{\bm{g}}_{{\hc{j}}}$, the DNN obtains
$\Delta_{\bm{\theta}}(\tilde{\bm{g}}_{\hc{i}},\tilde{\bm{g}}_{\hc{j}})$, where $\bm{\theta}$ is
a vector collecting all its trainable parameters. This function will be
fitted to the distances $\|\bm{p}_{\hc{i}}-\bm{p}_{\hc{j}}\|$.
%
[symmetric high level] Since
$\|\bm{p}_{\hc{i}}-\bm{p}_{\hc{j}}\| = \|\bm{p}_{\hc{j}}-\bm{p}_{\hc{i}}\| $, i.e., the distance from node
${\hc{i}}$ to node ${\hc{j}}$ equals the distance from node ${\hc{j}}$ to node ${\hc{i}}$, the
learned function must be invariant to permutations of its inputs,
i.e.
$\Delta_{\bm{\theta}}(\tilde{\bm{g}}_{\hc{i}},\tilde{\bm{g}}_{\hc{j}})=\Delta_{\bm{\theta}}(\tilde{\bm{g}}_{\hc{j}},\tilde{\bm{g}}_{\hc{i}})$.
\begin{bullets}%
[training] This could be approximately achieved while
training by providing the network with each pair of nodes in
both orders, i.e., with the examples
$((\tilde{\bm{g}}_{\hc{i}},\tilde{\bm{g}}_{\hc{j}}), \|\bm{p}_{\hc{i}}-\bm{p}_{\hc{j}}\|)$ and
$((\tilde{\bm{g}}_{\hc{j}},\tilde{\bm{g}}_{\hc{i}}), \|\bm{p}_{\hc{i}}-\bm{p}_{\hc{j}}\|)$ for all ${\hc{i}},{\hc{j}}$.
%
[symmetric architecture]However, a more accurate and
efficient approach is to impose invariance by means of the
network architecture. To this end, one can let
\begin{equation}
\Delta_{\bm{\theta}}(\tilde{\bm{g}}_{\hc{i}},\tilde{\bm{g}}_{\hc{j}}) = \frac{1}{2}\left(f_{\bm{\theta}}\left(\tilde{\bm{g}}_{\hc{i}},\tilde{\bm{g}}_{\hc{j}}\right) + f_{\bm{\theta}}\left(\tilde{\bm{g}}_{\hc{j}},\tilde{\bm{g}}_{\hc{i}}\right)\right),
\end{equation}
where $f_{\bm{\theta}}$ is a subnetwork. This is shown in
Fig. \ref{fig:architecture}. Observe that, with this
architecture, only the pairs of nodes with ${\hc{i}}<{\hc{j}}$ need to be
provided at training time.
\end{bullets}%
[detailed]\nextv{Fig. \ref{fig:low_level} provides the details
of} For example, the subnetwork $f_{\bm{\theta}}$ used in
Sec.~\ref{sec:simulation} comprises the following layers:
convolutional 2D, max pooling, convolutional 2D, max pooling,
convolutional 2D, fully connected, and fully connected.
\begin{bullets}%
[conv2D]Each 2D convolutional layer has 64 filters and
$3\times 2$ kernels, except the last one, which
has a $3\times 1$ kernel.
%
[maxpooling2d]The pool size of the 2D max-pooling layers is
$2\times 1$.
%
[dense]The fully connected layers have 64 and 1 units,
respectively.
\end{bullets}%
\end{bullets}%
\end{bullets}%
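
For illustration, the following Keras-style sketch implements the symmetric wrapper and a subnetwork with the layer sizes listed above; stacking the two CSI vectors into an $N\times 2\times 1$ input tensor, the ReLU activations, and the padding are our assumptions, not prescribed by the text:
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, Model

def make_subnetwork(N):
    # f_theta: maps a stacked CSI pair of shape (N, 2, 1) to a scalar
    inp = layers.Input(shape=(N, 2, 1))
    x = layers.Conv2D(64, (3, 2), padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(pool_size=(2, 1))(x)
    x = layers.Conv2D(64, (3, 2), padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=(2, 1))(x)
    x = layers.Conv2D(64, (3, 1), padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)
    return Model(inp, layers.Dense(1)(x))

def symmetric_distance(f, g_i, g_j):
    # Delta(g_i, g_j) = (f(g_i, g_j) + f(g_j, g_i)) / 2
    pair = lambda a, b: tf.expand_dims(tf.stack([a, b], axis=-1), -1)
    return 0.5 * (f(pair(g_i, g_j)) + f(pair(g_j, g_i)))
\end{verbatim}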
\subsubsection{Training Process}
\label{sec:training}
\begin{bullets}%
[data collection]Training data can be collected in the same way
as for fingerprinting. In the specific setup considered here, the
UAV may start operating and sensors equipped with GNSS or other
localization systems (e.g. as in LTE or 5G) can be sequentially
placed at different positions where they measure the CSI. In case of
emergency response applications, this measurement campaign is
performed before the natural disaster or military attack.
[objective function]Once data is acquired, supervised learning
is used to train the DNN. The cost function is the mean square error:
\begin{equation*}
C(\bm{\theta}) \propto \nextv{=\frac{2}{M_0(M_0-1)}}\sum_{{\hc{i}}=1}^{M_0-1}\sum_{{\hc{j}}={\hc{i}}+1}^{M_0}\left[\Delta_{{\bm{\theta}}}\left(\tilde{\bm{g}}_{\hc{i}},\tilde{\bm{g}}_{\hc{j}}\right) - \|\bm{p}_{\hc{i}}-\bm{p}_{\hc{j}}\|\right]^{2},
\end{equation*}
where $M_0$ denotes the number of measurement locations in the
data set. Observe that the number of training examples is
$M_0(M_0-1)/2$, whereas for DNN-based fingerprinting
(cf. Sec.~\ref{sec:intro}) it would be just $M_0$. Thus, the DNN of
ICCL is expected to be better trained than the DNN of DNN-based
fingerprinting and, as a consequence, the former is expected to
outperform the latter.
%
[pretraining]Nonetheless, DNNs are known to be
``data-hungry''. Even with $M_0$ in the order of hundreds, $\bm
\theta$ may not be learned properly if the network weights are
initialized at random. Thus, it is convenient to pre-train the
network using another data set, e.g. synthetically generated or
measured in a different environment.
\end{bullets}%
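
The quadratic growth in the number of training examples can be made explicit with a small data-preparation sketch (variable names are illustrative):
\begin{verbatim}
import itertools
import numpy as np

def make_pairwise_dataset(G, P):
    """Build (CSI pair, distance) examples from M0 measurements.
    G: (M0, N) CSI vectors; P: (M0, D) measurement locations."""
    X1, X2, d = [], [], []
    for i, j in itertools.combinations(range(len(G)), 2):  # i < j only
        X1.append(G[i]); X2.append(G[j])
        d.append(np.linalg.norm(P[i] - P[j]))
    return np.array(X1), np.array(X2), np.array(d)  # M0*(M0-1)/2 rows
\end{verbatim}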
\subsection{From Distances to Locations}
Given the distance estimates
$\hat d_{{\hc{i}},{\hc{j}}} =
\Delta_{{\bm{\theta}}}\left(\tilde{\bm{g}}_{\hc{i}},\tilde{\bm{g}}_{\hc{j}}\right)$ provided
by the DNN as well as the anchor locations, ICCL estimates the
absolute positions of the unknowns via (possibly iterative)
multilateration \cite{savvides2001dynamic}. The possibility to use
this algorithm is a benefit of working directly with physical
distances rather than dissimilarity metrics in the radio geometry, as
in most channel charting algorithms.
\nextv{
\begin{bullets}
[algorithm]This paper uses multilateration
\cite{savvides2001dynamic} to locate unknowns based on their
distances to anchors.
%
[Given]Let $d_{i,j}\in\mathbb{R}_+$ be the predicted distance between anchor $i$ and unknown $j$. $\mathcal{D}=\{d_{i,j}\}$ is the set of all predicted distances between anchors and unknowns $i=1,\ldots,M_a;j=1,\ldots,M_u$.
%
[Estimating the $(n+1)$th location]
\begin{bullets}
[localizing] The following system of $n$ equations describes the geographical relationship between unknown $j$ and the anchors.
\begin{equation}
\left\{\begin{matrix}
\|\bm{p}_{j}-\bm{p}_{1}\| = d_{j,1}\\
\|\bm{p}_{j}-\bm{p}_{2}\| = d_{j,2}\\
\vdots\\
\|\bm{p}_{j}-\bm{p}_{M_a}\| = d_{j,M_a}
\end{matrix}\right.\\
\end{equation}
%
After squaring and subtracting the last equation from the rest, the system of equation becomes the following least squares problem.
\begin{equation}
\label{eq:ls_problem}
\min_{\bm{p}_{j}} \|\bm{A}_{j}\bm{p}_{j} - \bm{b}_{j}\|,
\end{equation}
where
\begin{align}
\label{eq:mA}
\bm{A}_{j} & = 2\begin{bmatrix}
\bm{p}_{M_a} - \bm{p}_{1}, & \bm{p}_{M_a} - \bm{p}_{2}, & \ldots, & \bm{p}_{M_a} - \bm{p}_{M_a-1}
\end{bmatrix}^\top\in\mathbb{R}^{(M_a-1)\times D};\\
\label{eq:vb}
\bm{b}_{j} & = \begin{bmatrix}
\|\bm{p}_{M_a}\|^2 - \|\bm{p}_{1}\|^2 - d_{M_a,j}^2 + d_{1,j}^2\\
\|\bm{p}_{M_a}\|^2 - \|\bm{p}_{2}\|^2 - d_{M_a,j}^2 + d_{2,j}^2\\
\vdots\\
\|\bm{p}_{M_a}\|^2 - \|\bm{p}_{(M_a-1)}\|^2 - d_{M_a,j}^2 + d_{(M_a-1),j}^2
\end{bmatrix}\in\mathbb{R}^{M_a-1}.
\end{align}
%
[solution is position] Estimated location of unknown $j$ is the solution of LS problem \eqref{eq:ls_problem}, which is given by $\hat{\bm{p}}_{j}=\left(\bm{A}_{j}^\top\bm{A}_{j}\right)^{-1}\bm{A}_{j}^\top\bm{b}_{j}$.
\end{bullets}
\end{bullets}
}
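
For concreteness, the following minimal NumPy sketch performs the (non-iterative) least-squares multilateration step, obtained by squaring the anchor-distance equations and subtracting the one for the last anchor to linearize:
\begin{verbatim}
import numpy as np

def multilaterate(anchors, dists):
    """Least-squares position from anchor locations (M_a, D) and
    predicted anchor-to-node distances (M_a,); needs M_a >= D + 1."""
    p_last, d_last = anchors[-1], dists[-1]
    A = 2.0 * (p_last - anchors[:-1])                     # (M_a-1, D)
    b = (np.sum(p_last ** 2) - np.sum(anchors[:-1] ** 2, axis=1)
         - d_last ** 2 + dists[:-1] ** 2)                 # (M_a-1,)
    return np.linalg.lstsq(A, b, rcond=None)[0]
\end{verbatim}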
\section{Experiments}
\label{sec:simulation}
\begin{bullets}%
[setup]
\begin{bullets}%
[area] The simulation takes place in an urban area of size
$100 \times 80$ m.
%
[CSI size] The UAV trajectory is a horizontal circle with
center at (40, 45, 40)~m and radius 20~m.
%
[num CSI samples]At $N = 128$ waypoints, the UAV transmits a
pilot signal with transmit power
${\|\bm{x}_{\hc{n}}\|^2}/{N_p}= 30~ \rm{dBm}$. However, the pilot signals
are not explicitly generated; cf. Sec.~\ref{sec:csiextraction}.
%
[data generation]
\begin{bullets}%
[number + size of training samples]The CSI is then
measured at $M_0=200$ positions drawn uniformly at random
on the ground $(D=2)$.
%
[3D radio map]The true power gains $g_{{\hc{i}},{\hc{n}}}$ are
generated from the 3D city map depicted in
Fig.~\ref{fig:city_map} using a tomographic
model~\cite{patwari2008nesh} as in
\cite{romero2022aerial}.
%
[true CSI]To focus on impairments in the testing
phase, the noise power is set to 0 in the training data
but it is greater than 0 for testing data.
\end{bullets}%
\end{bullets}%
%
%
[compared algorithms] The proposed ICCL algorithm is compared
with two algorithms.
\begin{bullets}%
[distance FP]One is the classical \emph{distance-based
fingerprinting localization} (DFPL) algorithm;
cf. Sec.~\ref{sec:intro}.
\begin{bullets}%
[training data]This algorithm stores the training data.
%
[how it works]At testing time, given an input CSI
vector, this method searches over the stored data and
outputs the position that corresponds to the CSI vector that
has lowest Euclidean distance to the input.
\end{bullets}%
%
[neural FP]
\begin{bullets}%
[network architecture]The second algorithm, termed
\emph{neural-based fingerprinting localization} (NFPL), is
similar in spirit to those in \cite{arnold2018deep,
arnold2019novel,niitsoo2018convolutional,li2019massive,ferrand2020feature}
but it is applied to the plain CSI vectors introduced in
Sec.~\ref{sec:csiextraction}. To obtain absolute position
estimates, it trains a DNN with the same architecture as the
subnetwork of ICCL (cf. Sec.~\ref{sec:architecture}) except
for minor modifications to accommodate the different input
and output size. Specifically, the kernels of the
convolutional layers have size of $3\times 1$ instead of
$3\times 2$ and the output layer has 2 neurons.
%
\end{bullets}%
%
\end{bullets}%
[pre-training]
\begin{bullets}
[pretraining]Both NFPL and ICCL are pretrained with a data
set that comprises $M_0=1000$ CSI vectors generated in a
different environment, where the buildings have different
dimensions.
%
\end{bullets}
%
[metric]To quantify the error between the true and estimated
locations, the root mean square error
\nextv{ \begin{equation}
\textrm{RMSE} = \sqrt{\frac{1}{M-M_a} \sum_{j=M_a+1}^{M}\mathbb{E}\left[\|\hat{\bm{p}}_j-\bm{p}_j\|^2\right]}
\end{equation}
}
$ \textrm{RMSE} = [\frac{1}{M-M_a}
\sum_{{\hc{j}}=M_a+1}^{M}\mathbb{E}\left[\|\hat{\bm{p}}_{\hc{j}}-\bm{p}_{\hc{j}}\|^2\right]]^{1/2}$
is used, where the expectation runs over realizations of the node
locations and measurement noise.
%
[experiments]
\begin{bullets}%
[rmse vs num. anchors]Fig. \ref{fig:rmse_anchors} shows the
RMSE of ICCL vs. the number of anchors for different noise
levels by
\begin{figure}
\centering
\includegraphics[width=8cm]{figs/Fig.rmse_vs_anchors.pdf}
\caption{RMSE of the proposed ICCL algorithm.}
\label{fig:rmse_anchors}
\end{figure}
\begin{bullets}%
[descriptions] averaging over 100 Monte Carlo realizations with
$M=100$ nodes.
%
[comments]
\begin{bullets}%
[more anchors, more precise] As expected, the
more anchors, the more precise the estimated location.
%
        [less than 10 meters] With only 7 anchors, the
          proposed algorithm can locate unknowns with less than
          10 m average error provided that the noise power is
          sufficiently low.
\end{bullets}%
\end{bullets}%
%
[rmse vs noise]Fig.~\ref{fig:noise} depicts the RMSE of
the compared algorithms vs. the noise level
\begin{figure}
\centering
\includegraphics[width=8cm]{figs/Fig.rmse_vs_noise.pdf}
\caption{RMSE of the compared algorithms.}
\label{fig:noise}
\end{figure}
\begin{bullets}%
[parameters]by averaging over 100 Monte Carlo
realizations with $M_a= 20$ anchors and $M-M_a=80$
nodes.
[comments]For a sufficiently small noise level, ICCL
outperforms both DFPL and NFPL, which corroborates its
ability to learn the radio geometry. However, at large
noise power, the accuracy of the ICCL distance estimates
degrades and DFPL works better. This is expected to
improve if the training data is augmented by adding noise.
[FP]An apparently counterintuitive fact is that DFPL
is seen to outperform NFPL. This phenomenon has already
been observed in~\cite{sobehy2020nearest} and may be
caused by the fact that DNNs require a large amount of
training data. ICCL is less sensitive to this issue, as
described in Sec.~\ref{sec:training}. In contrast, in
\cite{ferrand2020feature}, NFPL offers a better
performance than DFPL, but the reason may be that the
latter applies a pre-processing step to the CSI
vectors. Other works proposing NFPL schemes, such
as~\cite{arnold2018deep,
arnold2019novel,niitsoo2018convolutional,li2019massive},
do not compare with DFPL.
\end{bullets}%
\end{bullets}%
\end{bullets}%
\section{Conclusions}
\label{sec:conclusion}
This paper proposes implicit channel charting-based localization
(ICCL) as a localization approach that implicitly learns the radio
geometry of a collection of CSI vectors from a data set. The idea is
inspired by channel charting and builds upon the well-known
fingerprinting localization method. Simulation results corroborate the
merits of the proposed approach.
\balance
\printmybibliography
\end{document}
\section{Introduction}
Bayesian $l_0$ regularization is an attractive solution for high dimensional variable selection as it directly penalizes the number of predictors. The caveat is the need to search over all possible model combinations, as a full solution requires enumeration over all possible models which is NP-hard. The gold standard for Bayesian variable selection are spike-and-slab priors, or Bernoulli-Gaussian mixtures \citep{mitchell1988, george1993, scott2014}. Whilst spike-and-slab priors provide full model uncertainty quantification, they can be hard to scale to very high dimensional problems, and can have poor sparsity properties \citep{amini2012}. On the other hand, techniques like proximal algorithms \citep{polson2015proximal, polson2017proximal} can solve non-convex optimization problems which are fast and scalable, but they generally don't provide a full assessment of model uncertainty \citep{jeffreys1961,hans2009,scott2010,li2010bayesian,marjanovic2013}.
Our goal is to build on the single best replacement (SBR) algorithm of \cite{soussen2011} as one approach to Bayesian $l_0$ regularization, in contrast with other methods based on proximal algorithms \citep{parikh2014proximal,polson2015proximal,polson2017proximal}. Our approach also builds on other Bayesian regularization methods including, for example, the Bayesian bridge \citep{polson2014}, horseshoe regularization \citep{carvalho2010,bhadra2016}, SVMs \citep{polson2011}, Bayesian lasso \citep{hans2009,park2008,carlin1991}, Bayesian elastic net \citep{li2010bayesian,hans2011}, spike-and-slab lasso \citep{rockova2016}, and global-local shrinkage priors \citep{bhadra2016,griffin2010}.
To fix notation, statistical regularization requires the specification of a measure of fit, denoted by $l\left(\beta\right)$ and a penalty function, denoted by $\text{pen}_\lambda\left(\beta\right)$, where $\lambda$ is a global regularization parameter. From a Bayesian perspective, $l\left(\beta\right)$ and $\text{pen}_\lambda\left(\beta\right)$ correspond to the negative logarithms of the likelihood and prior distribution, respectively. Regularization leads to an optimization problem of the form
\begin{equation}
\label{eqn:reg}
\begin{aligned}
& \underset{\beta \in \mathbb{R}^p}{\text{minimize}}
& & l\left(\beta\right) + \text{pen}_\lambda\left(\beta\right) \; .
\end{aligned}
\end{equation}
Taking a probabilistic approach leads to a Bayesian hierarchical model
$$
p(y \mid \beta) \propto \exp\{-l(\beta)\} \; , \quad p(\beta) \propto \exp\{ -\text{pen}_\lambda\left(\beta\right) \} \ .
$$
The solution to the minimization problem estimated by regularization corresponds to the posterior mode,
$ \hat{\beta} = {\rm arg \; max}_\beta \; p( \beta|y) $, where $ p(\beta|y)$ denotes the posterior distribution.
For example, regression with a least squares log-likelihood may be subject to a penalty such as an $l_2$-norm (ridge), corresponding to a Gaussian probability model, or an $l_1$-norm (lasso), corresponding to a double exponential probability model.
The rest of the paper is outlined as follows. Section \ref{s&s} defines the Bayesian $l_0$ regularization problem and explores the connections with spike-and-slab priors. Section \ref{phi} introduces a novel connection between regularization and Bayesian inference. Section \ref{survey} surveys recent developments on $l_0$ regularization and best subset selection problems. Section \ref{sbr} describes the SBR algorithm for $l_0$ regularization. Section \ref{eg} provides a comparison between SBR and other variable selection methods and models including Lasso and elastic net \citep{tibshirani1996,zou2005}, Bayesian bridge \citep{polson2014}, and spike-and-slab \citep{george1993}. Finally, Section \ref{dis} concludes with directions for future research.
\section{Bayesian $l_0$ regularization \label{s&s}}
Consider a standard Gaussian linear regression model, where $X = [X_1, \ldots, X_p]\in \mathbb{R}^{n \times p}$ is a design matrix, $\beta = (\beta_1, \ldots, \beta_p)^T\in\mathbb{R}^p$ is a $p$-dimensional coefficient vector, and $e$ is an $n$-dimensional independent Gaussian noise. After centralizing $y$ and all columns of $X$, we ignore the intercept term in the design matrix $X$ as well as $\beta$, and we can write
\begin{equation}
\label{eqn:linreg}
y = X\beta + e \ , \ \ \text{where } e \sim N(0, \sigma_e^2I_n) \ .
\end{equation}
To specify a prior distribution $p\left(\beta\right)$, we impose a sparsity assumption on $\beta$, where only a small portion of all $\beta_i$'s are non-zero. In other words, $\|\beta\|_0 = k \ll p$, where $\|\beta\|_0 \mathrel{\mathop:}= \#\{i : \beta_i\neq0\}$, the cardinality of the support of $\beta$, also known as the $l_0$ pseudo-norm of $\beta$. A multivariate Gaussian prior ($l_2$ norm) leads to poor sparsity properties in this situation \citep[see, e.g.,][]{polson2010shrink}.
Sparsity-inducing prior distributions for $\beta$ can be constructed to impose sparsity. The gold standard is a spike-and-slab priors \citep{jeffreys1961,mitchell1988,george1993}. Under these assumptions, each $\beta_i$ exchangeably follows a mixture prior consisting of $\delta_0$, a point mass at $0$, and a Gaussian distribution centered at zero. Hence we write,
\begin{equation}
\label{eqn:ss}
\beta_i | \theta, \sigma_\beta^2 \sim (1-\theta)\delta_0 + \theta N\left(0, \sigma_\beta^2\right) \ .
\end{equation}
Here $\theta\in \left(0, 1\right)$ controls the overall sparsity in $\beta$ and $\sigma_\beta^2$ accommodates non-zero signals. This family is termed the Bernoulli-Gaussian mixture model in the signal processing community.
In a useful re-parameterization, the parameter vector $\beta$ is given by two independent random vectors $\gamma = \left(\gamma_1, \ldots, \gamma_p\right)'$ and $\alpha = \left(\alpha_1, \ldots, \alpha_p\right)'$ such that $\beta_i = \gamma_i\alpha_i$, with probabilistic structure
\begin{equation}
\label{eq:bg}
\begin{array}{rcl}
\gamma_i|\theta & \sim & \text{Bernoulli}(\theta) \ ;
\\
\alpha_i | \sigma_\beta^2 &\sim & N\left(0, \sigma_\beta^2\right) \ .
\\
\end{array}
\end{equation}
Since $\gamma_i$ and $\alpha_i$ are independent, the joint prior density becomes
$$
p\left(\gamma_i, \alpha_i \mid \theta, \sigma_\beta^2\right) =
\theta^{\gamma_i}\left(1-\theta\right)^{1-\gamma_i}\frac{1}{\sqrt{2\pi}\sigma_\beta}\exp\left\{-\frac{\alpha_i^2}{2\sigma_\beta^2}\right\}
\ , \ \ \ \text{for } 1\leq i\leq p \ .
$$
The indicator $\gamma_i\in \{0, 1\}$ can be viewed as a dummy variable indicating whether $\beta_i$ is included in the model \citep{soussen2011}.
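
For intuition, a draw of $\beta$ from this Bernoulli-Gaussian prior can be simulated with a few lines of Python (a minimal sketch):
\begin{verbatim}
import numpy as np

def sample_bernoulli_gaussian(p, theta, sigma_beta,
                              rng=np.random.default_rng()):
    """beta_i = gamma_i * alpha_i with gamma_i ~ Bern(theta) and
    alpha_i ~ N(0, sigma_beta^2), drawn independently."""
    gamma = rng.binomial(1, theta, size=p)
    alpha = rng.normal(0.0, sigma_beta, size=p)
    return gamma * alpha
\end{verbatim}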
Let $S = \{i: \gamma_i = 1\} \subseteq \{1, \ldots, p\}$ be the ``active set" of $\gamma$, and $\|\gamma\|_0 = \sum\limits_{i = 1}^p\gamma_i$ be its cardinality. The joint prior on the vector $\{\gamma, \alpha\}$ then factorizes as
$$
\begin{array}{rcl}
p\left(\gamma, \alpha \mid \theta, \sigma_\beta^2\right) & = & \prod\limits_{i = 1}^p p\left(\alpha_i, \gamma_i \mid \theta, \sigma_\beta^2\right) \\
& = &
\theta^{\|\gamma\|_0}
\left(1-\theta\right)^{p - \|\gamma\|_0}
\left(2\pi\sigma_\beta^2\right)^{-\frac p2}\exp\left\{-\frac1{2\sigma_\beta^2}\sum\limits_{i = 1}^p\alpha_i^2\right\} \ .
\end{array}
$$
Let $X_\gamma \mathrel{\mathop:}= \left[X_i\right]_{i \in S}$ be the set of ``active explanatory variables" and $\alpha_\gamma \mathrel{\mathop:}= \left(\alpha_i\right)'_{i \in S}$ be their corresponding coefficients. We can write $X\beta = X_\gamma \alpha_\gamma$. The likelihood can be expressed in terms of $\gamma$, $\alpha$ as
$$
p\left(y \mid \gamma, \alpha, \theta, \sigma_e^2\right)
=
\left(2\pi\sigma_e^2\right)^{-\frac n2}
\exp\left\{
-\frac1{2\sigma_e^2}\left\|y - X_\gamma \alpha_\gamma\right\|_2^2
\right\} \ .
$$
Under this re-parameterization by $\left\{\gamma, \alpha\right\}$, the posterior is given by
$$
\begin{array}{rcl}
p\left(\gamma, \alpha \mid \theta, \sigma_\beta^2, \sigma_e^2, y\right) & \propto &
p\left(\gamma, \alpha \mid \theta, \sigma_\beta^2\right)
p\left(y \mid \gamma, \alpha, \theta, \sigma_e^2\right)\\
& \propto &
\exp\left\{-\frac1{2\sigma_e^2}\left\|y - X_\gamma \alpha_\gamma\right\|_2^2
-\frac1{2\sigma_\beta^2}\left\|\alpha\right\|_2^2
-\log\left(\frac{1-\theta}{\theta}\right)
\left\|\gamma\right\|_0
\right\} \ .
\end{array}
$$
Our goal then is to find the regularized maximum a posteriori (MAP) estimator $$\arg\max\limits_{\gamma, \alpha}p\left(\gamma, \alpha \mid \theta, \sigma_\beta^2, \sigma_e^2, y \right) \ .$$ By construction, $\gamma \in\left\{0, 1\right\}^p$ directly performs variable selection. Spike-and-slab priors, on the other hand, sample the full posterior and calculate the posterior probability of variable inclusion.
Finding the MAP estimator is equivalent to minimizing over $\left\{\gamma, \alpha\right\}$ the regularized least squares objective function \citep{soussen2011}.
\begin{equation}
\label{obj:map}
\min\limits_{\gamma, \alpha}\left\|y - X_\gamma \alpha_\gamma\right\|_2^2
+ \frac{\sigma_e^2}{\sigma_\beta^2}\left\|\alpha\right\|_2^2
+ 2\sigma_e^2\log\left(\frac{1-\theta}{\theta}\right)
\left\|\gamma\right\|_0 \ .
\end{equation}
This objective possesses several interesting properties:
\begin{enumerate}
\item The first term is essentially the least squares loss function.
\item The second term looks like a ridge regression penalty and is connected with the signal-to-noise ratio (SNR) $\sigma_\beta^2/\sigma_e^2$. A smaller SNR is more likely to shrink the estimate of $\alpha$ towards $0$. If $\sigma_\beta^2 \gg \sigma_e^2$, that is, the prior uncertainty on the size of the non-zero coefficients is much larger than the noise level and the SNR is sufficiently large, this term can be ignored. This is a common assumption in the spike-and-slab framework, where one typically wants $\sigma_\beta \to \infty$, or at least ``sufficiently large'', in order to avoid imposing harsh shrinkage on non-zero signals.
\item If we further assume that $\theta < \frac12$, meaning that the coefficients are known to be sparse \textit{a priori}, then $\log\left(\left(1-\theta\right) / \theta\right) > 0$, and the third term can be seen as an $l_0$ regularization.
\end{enumerate}
Therefore, Bayesian MAP inference under the spike-and-slab prior is connected to $l_0$-regularized least squares, which we summarize in the following proposition.
\begin{proposition} (Spike-and-slab MAP \& $l_0$ regularization)
Assuming $\theta < \frac12$ and $\sigma_\beta^2 \gg \sigma_e^2$, the Bayesian MAP estimate defined by (\ref{obj:map}) is equivalent to the $l_0$-regularized least squares objective, for some $\lambda > 0$,
\begin{equation}
\label{obj:l0}
\min\limits_{\beta}
\frac12\left\|y - X\beta\right\|_2^2
+ \lambda
\left\|\beta\right\|_0 \ .
\end{equation}
\end{proposition}
\begin{proof} First, assume that
$$
\theta < \frac12, \ \ \ \sigma_\beta^2 \gg \sigma_e^2, \ \ \ \frac{\sigma_e^2}{\sigma_\beta^2}\left\|\alpha\right\|_2^2 \to 0 \ .
$$
Then, after dividing the objective (\ref{obj:map}) by $2$, we obtain an objective function of the form
\begin{equation}
\min\limits_{\gamma, \alpha}
\label{obj:vs}
\frac12 \left\|y - X_\gamma \alpha_\gamma\right\|_2^2
+ \lambda
\left\|\gamma\right\|_0, \ \ \ \ \text{where } \lambda \mathrel{\mathop:}= \sigma_e^2\log\left(\left(1-\theta\right) / \theta\right) > 0 \ .
\end{equation}
Equation (\ref{obj:vs}) can be seen as a variable selection version of equation (\ref{obj:l0}). The interesting fact is that (\ref{obj:l0}) and (\ref{obj:vs}) are equivalent. To show this, we only need to check that an optimal solution to (\ref{obj:l0}) corresponds to a feasible solution to (\ref{obj:vs}) with the same objective value, and vice versa. This is explained as follows.
On the one hand, assuming $\hat\beta$ is an optimal solution to (\ref{obj:l0}), we can correspondingly define $\hat\gamma_i \mathrel{\mathop:}= I\left\{\hat\beta_i \neq 0\right\}$, $\hat\alpha_i \mathrel{\mathop:}= \hat\beta_i$, such that $\left\{\hat\gamma, \hat\alpha\right\}$ is feasible for (\ref{obj:vs}) and gives the same objective value as $\hat\beta$ gives in (\ref{obj:l0}).
On the other hand, if $\left\{\hat\gamma, \hat\alpha\right\}$ is optimal for (\ref{obj:vs}), then all of the elements of $\hat\alpha_\gamma$ must be non-zero; otherwise a new $\tilde\gamma_i \mathrel{\mathop:}= I\left\{\hat\alpha_i \neq 0\right\}$ would give a lower objective value in (\ref{obj:vs}). As a result, if we define $\hat\beta_i \mathrel{\mathop:}= \hat\gamma_i\hat\alpha_i$, then $\hat\beta$ is feasible for (\ref{obj:l0}) and gives the same objective value as $\left\{\hat\gamma, \hat\alpha\right\}$ gives in (\ref{obj:vs}).
Combining both arguments shows that the two problems (\ref{obj:l0}) and (\ref{obj:vs}) are equivalent. Hence we can use results from the non-convex optimization literature to find Bayes MAP estimators.
\end{proof}
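As a concrete illustration of this equivalence (our own sketch), the global minimizer of (\ref{obj:l0}) can be computed by brute force for small $p$: by the argument above it suffices to enumerate supports and refit least squares on each, which is exactly the search over $\gamma$ in (\ref{obj:vs}).
\begin{verbatim}
import numpy as np
from itertools import combinations

def l0_exact(X, y, lam):
    # Brute-force global minimizer of the l0-regularized least squares:
    # min over supports S of 0.5*||y - X_S b_S||^2 + lam*|S|.
    # Feasible only for small p, since there are 2^p supports.
    n, p = X.shape
    best_val, best_S = 0.5 * y @ y, ()          # empty support
    for k in range(1, p + 1):
        for S in combinations(range(p), k):
            B = X[:, list(S)]
            coef = np.linalg.lstsq(B, y, rcond=None)[0]
            r = y - B @ coef
            val = 0.5 * r @ r + lam * k
            if val < best_val:
                best_val, best_S = val, S
    return best_val, best_S
\end{verbatim}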
\section{Bayesian regularization and Proximal Updating \label{phi}}
Section \ref{s&s} provides a connection between spike-and-slab priors and $l_0$ regularization, in the sense that $l_0$ regularization can be viewed as the MAP estimator of Bayesian spike-and-slab regression. From a Bayesian perspective, the posterior mean is preferred to the MAP estimator due to its superior risk minimization properties under mean squared error. \cite{starck2013} also discusses the inadequacy of interpreting sparse regularization as a Bayesian MAP estimator. We now show that posterior mean estimation is also connected to regularization, and we develop this connection with the help of proximal operators and Tweedie's formula \citep{efron2011}.
\cite{gribonval2011} considers the relationship between the MAP estimator and the posterior mean, and \cite{polson2016} considers the relationship between posterior modes and envelopes. These approaches try to uncover the implicit prior under which a given posterior mean arises as a mode. As the posterior mean minimizes the mean squared error and the Bayes risk, it is the natural estimator to target.
\subsection{Posterior mean optimality}
Consider a normal mean problem in Bayesian setting,
\begin{equation}
\label{normmean}
\begin{array}{rcl}
y | \beta & \sim & N(\beta, \sigma_e^2) \ ;\\
\beta & \sim & p\left(\beta\right) \ .
\end{array}
\end{equation}
Then the optimal estimator of $\beta$ with respect to the quadratic loss is the posterior mean. To calculate this posterior mean, $\hat\beta$, \cite{efron2011} introduced Tweedie's formula which, for the normal mean problem, gives
\begin{equation}
\label{eq:tweedie}
\hat{\beta} = E\left[ \beta \mid y \right] = y + \sigma_e^2\frac{d}{dy} \log m(y) \; ,
\end{equation}
where the marginal density of $y$ is
$$
m(y) = \int f\left( y \mid \beta\right) p( \beta ) d \beta \ .
$$
Here $f\left(y\mid\beta\right) = \frac{1}{\sqrt{2\pi}\sigma_e}\exp\left\{-\frac{(y - \beta)^2}{2\sigma_e^2}\right\}$ is the probability density function (pdf) of $N(\beta, \sigma_e^2)$. The interpretation of the Bayesian correction term,
$$
\sigma_e^2\frac{d}{dy} \log m(y) \ ,
$$
is to ``regularize" the unbiased maximum likelihood estimator $y$, which provides the optimal bias-variance trade-off for prediction; see \cite{pericchi1992} for further discussion. When both $m$ and $\sigma_e^2$ are unknown, \cite{donoho2013} proposes a procedure to estimate each term, which achieves the optimal Bayes risk, resulting in the plug-in estimator
$$
\hat\beta = y + \hat\sigma_e^2\frac{d}{dy} \log \hat m(y) \ .
$$
Another useful result applies Stein's risk function to Tweedie's formula, from which we can derive the optimal Bayes risk \citep{robbins1956},
$$
R(\hat\beta) = \sigma_e^2\left(1 - \sigma_e^2 I\left(m\right)\right) \; ,
$$
where $I(m) = E_y\left[\left(\frac{d}{dy}\log m(y)\right)^2\right]$ is the Fisher Information for $m$. This risk is optimal given the use of the posterior mean estimator.
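As a minimal numerical check of Tweedie's formula (\ref{eq:tweedie}) (our own illustration), consider the conjugate case $\beta \sim N(0, A)$, where the posterior mean is known in closed form to be $\frac{A}{A + \sigma_e^2}\,y$; Tweedie's formula recovers it from the marginal score alone.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

A, s2, y = 4.0, 1.0, 1.7              # prior var, noise var, observation
m = lambda t: norm.pdf(t, scale=np.sqrt(A + s2))   # marginal: y ~ N(0, A + s2)
h = 1e-5
score = (np.log(m(y + h)) - np.log(m(y - h))) / (2 * h)   # d/dy log m(y)
print(y + s2 * score)        # Tweedie's formula:       ~ 1.36
print(A / (A + s2) * y)      # conjugate closed form:   0.8 * 1.7 = 1.36
\end{verbatim}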
Tweedie's formula can be generalized to Gaussian linear regression in the following theorem.
\begin{thm} Suppose we have the Gaussian linear regression model
$$
y = X\beta + e, \ \ \ \text{where } e\sim N\left(0, \Sigma\right) \ .
$$
Let $p\left(\beta\right)$ denote the prior density of $\beta$, and $m\left(y\right) = \int_\beta p\left(y \mid \beta\right)p\left(\beta\right)d\beta$ the marginal (prior predictive) density of $y$. Then, assuming $\left(X^T\Sigma^{-1}X\right)^{-1}$ exists, the posterior mean of $\beta$ given $y$ is
\begin{equation}
\label{eq:lspmbeta}
E\left[\beta\mid y\right]
=
\left(X^T\Sigma^{-1}X\right)^{-1}X^T\left(\Sigma^{-1}y + \nabla_y\log m\left(y\right)\right) \ .
\end{equation}
\end{thm}
\begin{proof}
Let $N\left(y; X\beta, \Sigma\right)$ denote the multivariate normal density of $y |\beta\sim N\left(X\beta, \Sigma\right)$, then the posterior density of $\beta$ given $y$,
$$
p\left(\beta \mid y\right) = \displaystyle\frac{p\left(y\mid \beta\right)p\left(\beta\right)}{m\left(y\right)} = \frac{1}{m\left(y\right)}N\left(y; X\beta, \Sigma\right)p\left(\beta\right) \ .
$$
Therefore, the posterior mean of the quantity $\Sigma^{-1}\left(y - X\beta\right)$,
\begin{equation}
\label{eq:lspm}
\begin{array}{rcl}
E\left[\Sigma^{-1}\left(y - X\beta\right)\mid y\right]
&=&\int_\beta
\Sigma^{-1}\left(y - X\beta\right)
p\left(\beta\mid y\right)d\beta\\
&=&
\frac{1}{m\left(y\right)}
\int_\beta
\Sigma^{-1}\left(y - X\beta\right)
N\left(y; X\beta, \Sigma\right)p\left(\beta\right)
d\beta \ .
\end{array}
\end{equation}
Note that by the property of the multivariate normal density,
$$
\Sigma^{-1}\left(y - X\beta\right)
N\left(y; X\beta, \Sigma\right)
=
-\nabla_y N\left(y; X\beta, \Sigma\right) \ ,
$$
and so (\ref{eq:lspm}) becomes
$$
E\left[\Sigma^{-1}\left(y - X\beta\right)\mid y\right]
=
\frac{1}{m\left(y\right)}
\int_\beta
-\nabla_y N\left(y; X\beta, \Sigma\right)
p\left(\beta\right)
d\beta
=
-\nabla_y \log m\left(y\right) \ .
$$
Multiplying both sides by $X^T$ and assuming $\left(X^T\Sigma^{-1}X\right)^{-1}$ exists, the posterior mean of $\beta$ given $y$ becomes
\vspace{0.5pc}
\hfill
$\displaystyle E\left[\beta\mid y\right] = \left(X^T\Sigma^{-1}X\right)^{-1}X^T\left(\Sigma^{-1}y + \nabla_y\log m\left(y\right)\right) \ .$
\hfill
\end{proof}
It is easy to see that, similar to Tweedie's formula for the normal means problem, the posterior mean in Gaussian linear regression (\ref{eq:lspmbeta}) consists of two parts. One is the usual weighted least squares solution, and the other is a Bayesian correction by the gradient of the prior predictive score, $\nabla_y \log m\left(y\right)$. \cite{griffin2010} gives the equivalent result in the form where the least squares estimator $\hat\beta$, instead of $y$, is conditioned on. \cite{masreliez1975} discusses the posterior mean under a Gaussian prior but non-Gaussian likelihood; see also \cite{pericchi1992}.
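The theorem is straightforward to verify numerically for a conjugate Gaussian prior $\beta \sim N(0, \tau^2 I)$, where both sides have closed forms; the following sketch (our own illustration) checks (\ref{eq:lspmbeta}) against the standard conjugate posterior mean.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p, tau2 = 8, 3, 2.0
X = rng.standard_normal((n, p))
Sigma = 0.5 * np.eye(n)
y = rng.standard_normal(n)

# For beta ~ N(0, tau2*I): y ~ N(0, tau2*X X^T + Sigma), so the prior
# predictive score is grad_y log m(y) = -(tau2*X X^T + Sigma)^{-1} y.
score = -np.linalg.solve(tau2 * X @ X.T + Sigma, y)

Si = np.linalg.inv(Sigma)
lhs = np.linalg.solve(X.T @ Si @ X, X.T @ (Si @ y + score))           # theorem
rhs = np.linalg.solve(X.T @ Si @ X + np.eye(p) / tau2, X.T @ Si @ y)  # conjugate
print(np.allclose(lhs, rhs))   # True
\end{verbatim}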
\subsection{Regularized linear regression}
Proximal operators and Tweedie's formula also provide a way to connect the Bayesian posterior mean and the regularized least squares. Specifically, we want to find a $\phi$, such that the Bayesian posterior estimator $\hat{\beta} = E\left[ \beta \mid y \right]$ with the prior $p$ is the same as the solution to the $\phi$-regularized least squares. That is,
\begin{equation}
\label{obj:pmreg}
\hat{\beta} = E\left[ \beta \mid y \right] = \arg\min_\beta \left\{\frac1{2\sigma_e^2}(y -\beta)^2 + \phi(\beta)\right\}.
\end{equation}
We now use the theory of proximal mappings to re-write this estimator. First, we recall some definitions. The Moreau envelope $E_{\gamma f} (x)$ and proximal mapping $ \mathop{\mathrm{prox}} _{\gamma f} (x)$ of a convex function $f$ are defined as
\begin{equation}
\label{def:prox}
\begin{array}{rcl}
E_{\gamma f} (x) &=& \inf_{z } \left\{f(z) + \frac{1}{2\gamma} \enorm{z - x}^2 \right\} \leq f(x) \ ;\\
\mathop{\mathrm{prox}} _{\gamma f} (x) &=& \arg \min_{z } \left\{ f(z)+ \frac{1}{2\gamma} \enorm{z - x}^2 \right\} \, .
\end{array}
\end{equation}
The Moreau envelope is a regularized version of $f$ and approximates $f$ from below, and has the same set of minimizing values as $f$.
The proximal mapping returns the value that solves the minimization problem defined by the Moreau envelope. It balances two goals: minimizing $f$, and staying near $x$.
Now, observe that if $\hat{z}(x) = \mathop{\mathrm{prox}} _{\gamma f}(x)$ is the value that achieves the minimum,
$$
\nabla \left\{f(\hat z) + \frac{1}{2\gamma} \enorm{\hat z - x}^2 \right\}
=\nabla f(\hat z) + \frac1\gamma(\hat z - x) = 0 \, ,
$$
which leads to $\hat z = x - \gamma\nabla f(\hat z) \, .$ By construction of the envelope,
$$
\nabla E_{\gamma f}(x) = \nabla \inf_{z } \left\{f(z) + \frac{1}{2\gamma} \enorm{z - x}^2 \right\} = \frac{1}{\gamma}[x - \hat{z}(x)] \, .
$$
This leads to the fundamental proximal relation $\hat z = x - \gamma \nabla E_{\gamma f}(x) \, .$
Therefore, we may write
\begin{equation}
\label{eq:prox}
\mathop{\mathrm{prox}} _{\gamma f}(x) = x - \gamma\nabla f\left[ \mathop{\mathrm{prox}} _{\gamma f}(x)\right] = x - \gamma \nabla E_{\gamma f}(x) \, .
\end{equation}
Meanwhile, the definition of proximal mapping (\ref{def:prox}) and its property (\ref{eq:prox}) give us
$$
\hat{\beta} = \arg\min_\beta \left\{\frac1{2\sigma_e^2}(y -\beta)^2 + \phi(\beta)\right\} = \mathop{\mathrm{prox}} _{\sigma_e^2\phi}\left(y\right) = y - \sigma_e^2\nabla \phi\left(\hat\beta\right) \ .
$$
Combining this with Tweedie's formula (\ref{eq:tweedie}) gives
\begin{equation}
\label{eq:pmreg}
\nabla \phi\left(\hat\beta\right) = - \frac{d}{dy} \log m(y) \ .
\end{equation}
Hence, if we want to match a regularized least squares with a posterior mean, we can ``solve'' for the penalty $\phi$, given a marginal distribution $m(y)$, via the equation for the proximal mapping (\ref{eq:pmreg}). If $ \phi $ is non-differentiable at a point, we replace $ \nabla $ by $ \partial $.
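As a quick sanity check of the identity (\ref{eq:prox}) (our own illustration), take $f(z) = |z|$, whose proximal mapping is the soft-thresholding operator; computing the Moreau envelope by direct numerical minimization recovers $ \mathop{\mathrm{prox}} _{\gamma f}(x) = x - \gamma \nabla E_{\gamma f}(x)$.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

gamma = 0.7
soft = lambda x: np.sign(x) * max(abs(x) - gamma, 0.0)  # prox of gamma*|.|

def E(t):
    # Moreau envelope of f = |.|, computed by direct minimization
    return minimize_scalar(lambda z: abs(z) + (z - t)**2 / (2 * gamma)).fun

for x in (-2.0, 0.3, 1.5):
    h = 1e-5
    gradE = (E(x + h) - E(x - h)) / (2 * h)
    # the two columns agree up to small numerical error
    print(soft(x), x - gamma * gradE)
\end{verbatim}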
\cite{gribonval2011} provides the following answer. Given any $z$, find $\hat y$ such that $E\left[ \beta \mid\hat y\right] = z$. Then the penalty
\begin{equation}
\label{eq:phi}
\phi(z) = -\frac1{2\sigma_e^2}(\hat y - z)^2 - \log m\left( \hat y\right) + c \; ,
\end{equation}
with the constant $c$ chosen to ensure that $\phi(0) = 0$. To see why this construction makes sense, take derivatives with respect to $\hat y$ on both sides of (\ref{eq:phi}), treating $z$ as a function of $\hat y$, and get
$$
\begin{array}{rcl}
- \frac{d}{d\hat y} \log m(\hat y) & = & \nabla\phi(z)\frac{dz}{d\hat y} + \frac1{\sigma_e^2}\left(\hat y - z\right)\left(1 - \frac{dz}{d\hat y}\right) \\
\left[\text{ by (\ref{eq:tweedie}) }\right] & = & \nabla\phi(z)\frac{dz}{d\hat y} + \frac1{\sigma_e^2}\left(-\sigma_e^2\frac{d}{d\hat y} \log m(\hat y)\right)\left(1 - \frac{dz}{d\hat y}\right) \\
& = & - \frac{d}{d\hat y} \log m(\hat y) + \left(\nabla\phi(z) + \frac{d}{d\hat y} \log m(\hat y)\right)\frac{dz}{d\hat y} \\
\left[\text{ by (\ref{eq:pmreg}) }\right]& = & - \frac{d}{d\hat y} \log m(\hat y) \ .
\end{array}
$$
To summarize, the solution to the regularized least squares problem
$$
\min_\beta\frac1{2\sigma_e^2}\left(y - \beta\right)^2 + \phi\left(\beta\right)
$$
is the posterior mode under the prior $p\left(\beta\right) \propto \exp\left(-\phi\left(\beta\right)\right)$. Our discussion of proximal operators and Tweedie's formula shows that the regularized least squares solution can also be viewed as the posterior mean under an implied prior $p\left(\beta\right)$; see \cite{strawderman2013}.
To illustrate this result, when $p$ is sparsity-inducing, such as a spike-and-slab prior, we can construct the associated penalty $\phi$, which turns out to be non-convex in both the Gaussian and Laplace cases.
\subsection{Example: Spike-and-slab Gaussian \& Laplace prior}
For the normal mean problem (\ref{normmean}), assuming $p\left(\beta\right)$ is the aforementioned spike-and-slab (Bernoulli Gaussian) prior (\ref{eqn:ss}), the marginal distribution of $y$ is a mixture of two mean zero normals,
$$
\left.y\mid\theta\right. \sim \left(1 - \theta\right) N\left(0, \sigma_e^2\right) + \theta N\left(0, \sigma_e^2 + \sigma_\beta^2\right) \ .
$$
The posterior mean $E\left[\beta | y\right]$ is given by
$$
\hat\beta^{BG} = w(y) y \ , \; \; {\rm where} \; \; w(y) = \frac{\sigma_\beta^2 }{\sigma_e^2 + \sigma_\beta^2}
\left(1 +
\frac{
\left(1-\theta\right) \frac{1}{\sqrt{2\pi}\sigma_e}\exp\left\{-\frac{y^2}{2\sigma_e^2}\right\}
}
{
\theta \frac{1}{\sqrt{2\pi\left(\sigma_e^2 + \sigma_\beta^2\right)}}\exp\left\{-\frac{y^2}{2\left(\sigma_e^2 + \sigma_\beta^2\right)}\right\}
}
\right)^{-1} \ .
$$
Thus, for every $z\in \mathbb{R}$, we can find $\hat y$ such that $w\left(\hat y\right)\hat y = z$ (the posterior mean is strictly increasing in $y$, so this inversion is well defined); then the penalty $\phi^{BG}$ associated with the Bayesian posterior mean under the Bernoulli-Gaussian prior can be obtained from (\ref{eq:phi}). $\phi^{BG}$ does not have an analytical form, but it can be computed numerically.
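A sketch of this computation (our own illustration, with arbitrary hyperparameter values) follows; the inversion relies on the monotonicity of the posterior mean noted above, and for very large inputs the mixture weights should be computed in log-space to avoid numerical underflow.
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

theta, se, sb = 0.3, 1.0, 2.0      # illustrative hyperparameters

def post_mean(y):                  # hat{beta}^{BG}(y) = w(y) * y
    spike = (1 - theta) * norm.pdf(y, scale=se)
    slab = theta * norm.pdf(y, scale=np.sqrt(se**2 + sb**2))
    return sb**2 / (se**2 + sb**2) * y / (1 + spike / slab)

def log_m(y):                      # log marginal density of y
    return np.log((1 - theta) * norm.pdf(y, scale=se)
                  + theta * norm.pdf(y, scale=np.sqrt(se**2 + sb**2)))

def phi_bg(z):                     # penalty via the construction of phi above
    z = abs(z)                     # phi is even; the posterior mean is odd
    if z < 1e-12:
        return 0.0
    hi = z * (se**2 + sb**2) / sb**2 + 50.0   # safe upper bracket for yhat
    yhat = brentq(lambda t: post_mean(t) - z, 0.0, hi)
    raw = lambda zz, yy: -(yy - zz)**2 / (2 * se**2) - log_m(yy)
    return raw(z, yhat) - raw(0.0, 0.0)       # subtract c so that phi(0) = 0

for z in (0.5, 1.0, 2.0, 4.0):
    print(z, phi_bg(z))
\end{verbatim}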
\cite{amini2012} argued that Bernoulli-Gaussian priors are usually not applicable to real-world signals, and proposed Bernoulli-Laplace priors, which are infinitely divisible and more appropriate for sparse signal processing. The Bernoulli-Laplace priors are very similar to the Bernoulli-Gaussian ones, their only difference being that the ``slab" parts are replaced by Laplace distributions, giving the prior
\begin{equation}
\label{prior:bl}
p\left(\beta\mid\sigma_\beta\right) = \left(1-\theta\right)\delta_0 + \theta \frac{1}{\sqrt{2}\sigma_\beta}\exp\left(-\frac{\sqrt{2}}{\sigma_\beta}\left|\beta\right|\right) \ .
\end{equation}
\cite{mitchell1994} and \cite{hans2009} studied the marginal and posterior distribution with the Laplace prior. With their results, the marginal density of $y$ with the Bernoulli-Laplace prior (\ref{prior:bl}) is given by
$$
m\left(y\right) = \left(1-\theta\right) \frac{1}{\sqrt{2\pi}\sigma_e}\exp\left\{-\frac{y^2}{2\sigma_e^2}\right\} + \theta
\frac{1}{\sqrt{2}\sigma_\beta}
\exp\left\{\frac{\sigma_e^2}{\sigma_\beta^2}\right\}
\left(F_{\sigma_\beta}\left(y\right) +
F_{\sigma_\beta}\left(-y\right)\right) \ ,
$$
where
$$
F_{\sigma_\beta}\left(y\right) =
\exp\left\{\frac{\sqrt{2}y} {\sigma_\beta}\right\}
\Phi\left(
-\frac{y}{\sigma_e}
-\frac{\sqrt{2}\sigma_e}{\sigma_\beta}
\right) \ .
$$
Here $\Phi$ is the cumulative distribution function (cdf) of the standard normal. The posterior mean $E\left[\beta | y\right]$ is then given by
$$
\hat\beta^{BL} =
\left(y - \left[\frac{F_{\sigma_\beta}(-y) - F_{\sigma_\beta}(y)}{F_{\sigma_\beta}(-y) + F_{\sigma_\beta}(y)}\right]\frac{\sqrt{2}\sigma_e^2}{\sigma_\beta} \right)
\left(1 +
\frac{
\left(1-\theta\right)
\frac{1}{\sqrt{2\pi}\sigma_e}\exp\left\{-\frac{y^2}{2\sigma_e^2}\right\}
}
{
\theta
\frac{1}{\sqrt{2}\sigma_\beta}
\exp\left\{\frac{\sigma_e^2}{\sigma_\beta^2}\right\}
\left(F_{\sigma_\beta}(y) + F_{\sigma_\beta}(-y)\right)
}
\right)^{-1} \ .
$$
Similarly, we can also find the penalty $\phi^{BL}$ associated with this Bernoulli-Laplace prior numerically via (\ref{eq:phi}). It is worth noting that this prior is a special case of the spike-and-slab Lasso prior proposed by \cite{rockova2016}. In that paper the authors use a mixture of two Laplace distributions, one of which approaches $\delta_0$ as its variance goes to zero. Both priors are capable of striking a balance between hard-thresholding and soft-thresholding.
For comparison, $\hat\beta^{BG}$ and $\hat\beta^{BL}$ are plotted in Figure \ref{fig:pm}; $\phi^{BG}$ and $\phi^{BL}$ are plotted in Figure \ref{fig:phi}. Both priors shrink small observations towards zero. For large observations, when $\sigma_\beta$ is small, Bernoulli-Gaussian, like ridge regression, unnecessarily penalizes large observations too much, whereas Bernoulli-Laplace is more like Lasso. As $\sigma_\beta$ gets larger, both priors get closer to hard-thresholding, and their associated penalties $\phi$ closer to SCAD-like non-convex penalties \citep{fan2001}.
\begin{figure}[!htb]
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth]{pm_bg.eps}
\end{minipage}
\begin {minipage}{0.49\textwidth}
\includegraphics[width=\linewidth]{pm_bl.eps}
\end{minipage}
\caption{Posterior mean $\hat\beta = E\left[\beta \mid y\right]$ with Bernoulli-Gaussian (left) and Bernoulli-Laplace (right) priors. \textit{Both priors shrink small observations towards zero. When $\sigma_\beta$ is small, Bernoulli-Gaussian priors shrink large observations more heavily than Bernoulli-Laplace priors, which are more like soft-thresholding. As $\sigma_\beta$ gets larger, both get closer to hard-thresholding.}}
\label{fig:pm}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth]{phi_bg.eps}
\end{minipage}
\begin {minipage}{0.49\textwidth}
\includegraphics[width=\linewidth]{phi_bl.eps}
\end{minipage}
\caption{Penalty $\phi$ associated with the posterior mean of Bernoulli-Gaussian (left) and Bernoulli-Laplace (right) priors. \textit{Both $\phi^{BG}$ and $\phi^{BL}$ look ``spiky" around zero, seemingly to induce sparsity for small observations, although they are actually differentiable everywhere. When $\sigma_\beta$ is small, the penalties associated with Bernoulli-Gaussian priors behave like ridge regression for large observations, whereas those associated with Bernoulli-Laplace priors appear to have a Lasso flavor. As $\sigma_\beta$ gets larger, both get closer to non-convex penalties such as SCAD.}}
\label{fig:phi}
\end{figure}
\section{Computing the $l_0$-regularized regression solution \label{survey}}
We now turn to the problem of computation. The $l_0$-regularized least squares (\ref{obj:l0}) is closely related to best subset selection in linear regression:
\begin{equation}
\label{obj:subset}
\begin{array}{rl}
\min\limits_{\beta} & \frac12\|y - X\beta\|_2^2\\
\text{s.t.} & \|\beta\|_0 \leq k \ .
\end{array}
\end{equation}
The $l_0$-regularized least squares (\ref{obj:l0}) can be seen as the Lagrangian form of (\ref{obj:subset}). However, due to the high non-convexity of the $l_0$-norm, (\ref{obj:l0}) and (\ref{obj:subset}) are connected but not equivalent. In particular, for any given $\lambda \geq 0$, there exists an integer $k \geq 0$ such that (\ref{obj:l0}) and (\ref{obj:subset}) have the same global minimizer $\hat\beta$. However, the converse is not true. It is possible, even common, that for a given $k$ we cannot find a $\lambda \geq 0$ such that the solutions to (\ref{obj:subset}) and (\ref{obj:l0}) coincide.
Indeed, for $k \in \left\{1, 2, \ldots, p\right\}$, let $\hat\beta_k$ be the respective optimal solutions to (\ref{obj:subset}) and $f_k$ the respective optimal objective values, so that $f_1 \geq f_2
\geq \cdots \geq f_p$. If we want a solution $\hat\beta_\lambda$ to (\ref{obj:l0}) to have $\left\|\hat\beta_\lambda\right\|_0 = k$, we need to find a $\lambda$ such that
$$
\max\limits_{i > k}\left\{\frac{f_k - f_i}{i - k}\right\} \leq \lambda \leq \min\limits_{j < k}\left\{\frac{f_j - f_k}{k - j}\right\} \ ,
$$
with the caveat that such a $\lambda$ need not exist.
Both problems involve discrete optimization and have thus been seen as intractable for large-scale data sets. As a result, in the past, the $l_0$ norm was usually replaced by its convex relaxation, the $l_1$ norm, to facilitate computation. However, it is well documented that the solutions of $l_0$-norm problems provide superior variable selection and prediction performance compared with their $l_1$ convex relaxations such as Lasso. \cite{zhang2014lower} studies the statistical properties of the theoretical solution to (\ref{obj:l0}), and points out that the solution to the $l_0$-regularized least squares should be better than Lasso in terms of variable selection, especially when the design matrix $X$ has high collinearity among its columns.
\cite{bertsimas2016} introduced a first-order algorithm to provide a stationary solution $\beta^*$ to a class of generalized $l_0$-constrained optimization problems with convex $g$,
\begin{equation}
\label{eq:gen}
\begin{array}{rl}
\min\limits_{\beta} & g(\beta)\\
\text{s.t.} & \|\beta\|_0 \leq k \ .
\end{array}
\end{equation}
Let $L$ be a Lipschitz constant for $\nabla g$, that is, $\|\nabla g(\beta_1) - \nabla g(\beta_2)\| \leq L \|\beta_1 - \beta_2\|$ for all $\beta_1, \beta_2$. Their ``Algorithm 1" is as follows.
\begin{enumerate}
\item Initialize $\beta^0$ such that $\left\|\beta^0\right\|_0 \leq k$.
\item For $t \geq 0$, obtain $\beta^{t + 1}$ as
\begin{equation}
\label{subset:algo1}
\beta^{t + 1} = H_k\left(\beta^t - \frac1L\nabla g\left(\beta^t\right)\right) \ ,
\end{equation}
until convergence to $\beta^*$.
\end{enumerate}
where the operator $H_k\left(\cdot\right)$ keeps the $k$ largest elements of a vector in absolute value unchanged and sets all others to zero; it can be viewed as hard thresholding at the $k^\text{th}$ largest element in magnitude. In the least squares setting, when $g(\beta) = \frac12\|y - X\beta\|_2^2$, $\nabla g$ and $L$ are easy to compute. \cite{bertsimas2016} then uses the stationary solution $\beta^*$ obtained by the aforementioned algorithm (\ref{subset:algo1}) as a warm start for their mixed integer optimization (MIO) scheme to produce a ``provably optimal solution" to the best subset selection problem (\ref{obj:subset}).
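A minimal sketch of this iteration for the least squares case is given below (our own illustration; the implementation in \cite{bertsimas2016} adds step-size refinements and the MIO stage).
\begin{verbatim}
import numpy as np

def hard_threshold_k(v, k):
    # H_k: keep the k largest-magnitude entries, set all others to zero
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def iht(X, y, k, iters=500):
    # Algorithm 1 for g(beta) = 0.5*||y - X beta||^2; the largest
    # eigenvalue of X^T X is a valid Lipschitz constant L for grad g.
    L = np.linalg.eigvalsh(X.T @ X).max()
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ beta - y)
        beta = hard_threshold_k(beta - grad / L, k)
    return beta
\end{verbatim}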
It is worth pointing out that the key iteration step (\ref{subset:algo1}) is connected to the proximal gradient descent (PGD) algorithm that many have used to solve the $l_0$-regularized least squares (\ref{obj:l0}), as well as other non-convex regularization problems. PGD methods solve a general class of problems of the form
\begin{equation}
\label{obj:pgd}
\begin{array}{rl}
\min\limits_{\beta} & g(\beta) + \lambda\phi(\beta) \ ,
\end{array}
\end{equation}
where $g$ is the same as in (\ref{eq:gen}), and $\phi$, usually non-convex, is a regularization term. In this framework, in order to obtain a stationary solution $\beta^*$, the key iteration step is
\begin{equation}
\label{pgd:algo}
\beta^{t + 1} = \mathop{\mathrm{prox}} _{\lambda\phi}\left(\beta^t - \frac1L\nabla g\left(\beta^t\right)\right) \ ,
\end{equation}
where $\beta^t - \frac1L\nabla g(\beta^t)$ can be seen as a gradient descent step for $g$ and $ \mathop{\mathrm{prox}} _{\lambda\phi}$ is the proximal operator for $\lambda\phi$. In $l_0$-regularized least squares, $\lambda\phi\left(\cdot\right) = \lambda\left\|\cdot\right\|_0$, and its proximal operator $ \mathop{\mathrm{prox}} _{\lambda\|\cdot\|_0}$ is hard thresholding at $\sqrt{2\lambda}$: it keeps unchanged all elements whose absolute value is at least $\sqrt{2\lambda}$, and sets all others to zero. As a result, the similarity between (\ref{subset:algo1}) and (\ref{pgd:algo}) is quite apparent.
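A hedged sketch of this proximal operator (our own illustration; the threshold follows from the coordinate-wise trade-off between paying $\lambda$ to keep an entry and paying $v_i^2/2$ to zero it out):
\begin{verbatim}
import numpy as np

def prox_l0(v, lam):
    # prox of lam*||.||_0: zero out entries with v_i^2/2 <= lam,
    # i.e., hard-threshold at sqrt(2*lam)
    out = v.copy()
    out[v**2 <= 2 * lam] = 0.0
    return out
\end{verbatim}
Replacing the operator $H_k$ in (\ref{subset:algo1}) by this \texttt{prox\_l0} turns the constrained iteration into the penalized iteration (\ref{pgd:algo}).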
In a recent work, \cite{jewell2017} proposes an exact algorithm to obtain the global minimum of $l_0$-regularized optimization in a computational neuroscience context. Consider the optimization problem
$$
\min\limits_{c_1,\ldots,c_n}\left\{\frac12\sum\limits_{i = 1}^n\left(y_i - c_i\right)^2 + \lambda\sum\limits_{i = 2}^{n}\mathbb{I}\left(c_i \neq \gamma c_{i-1}\right)\right\} \ .
$$
By exploiting the sequential time series nature of the problem, one can recast it as a changepoint detection problem and use the available results in that literature to design a dynamic programming algorithm. Instead of trying to simultaneously find all $i$'s where $c_i - \gamma c_{i - 1} \neq 0$, the algorithm finds them sequentially, from $i = 1$ to $i = n$. In this sense, it can reach a global minimum in $\mathcal{O}\left(n^2\right)$ time. The authors further speed up the algorithm by pruning the set of possible changepoints at each step of the sequential search, reducing the expected time cost to $\mathcal{O}\left(n\right)$.
\section{Single best replacement (SBR) algorithm \label{sbr}}
The single best replacement (SBR) algorithm, originally developed by \cite{soussen2011}, provides a solution to the variable selection problem (\ref{obj:vs}). Since (\ref{obj:vs}) and the $l_0$-regularized least squares (\ref{obj:l0}) are equivalent, SBR also provides a practical way to obtain a sufficiently good local solution to the NP-hard $l_0$ regularization.
Consider the objective (\ref{obj:vs}). For any given variable selection indicator $\gamma$, we have an active set $S = \left\{i: \gamma_i = 1\right\}$, given which the minimizer $\hat\alpha_\gamma$ of (\ref{obj:vs}) has a closed form: it sets every coefficient outside $S$ to zero and regresses $y$ on $X_\gamma$, the variables inside $S$. Therefore, the minimization of the objective function is determined by $\gamma$, or equivalently by $S$, alone. Accordingly, the objective function (\ref{obj:vs}) can be rewritten as follows.
\begin{equation}
\label{obj:sbr}
\min\limits_{S}
f_{SBR}(S) =
\frac12 \left\|y - X_S \hat\beta_S \right\|_2^2
+ \lambda
\left|S\right| \ .
\end{equation}
where $\hat\beta_S$ denotes the least squares coefficients of $y$ regressed on $X_S$. The SBR algorithm thus tries to minimize $f_{SBR}(S)$ by choosing an optimal $\hat S$.
The algorithm works as follows. Suppose we start with an initial $S$, usually the empty set. At each iteration, SBR aims to find a ``single change of $S$", that is, the removal from or addition to $S$ of one element, such that this single change decreases $f_{SBR}(S)$ the most. SBR stops when no such change is available; in other words, when any single change of $\gamma$ or $S$ gives the same or a larger objective value. Therefore, intuitively, SBR stops at a local optimum of $f_{SBR}(S)$.
SBR is essentially a stepwise greedy variable selection algorithm. At each iteration, both additions and removals are allowed, so the algorithm is an example of the ``forward-backward" stepwise procedures. It is provable that with this feature the algorithm ``can escape from some [undesirable] local minimizers" of $f_{SBR}(S)$ \citep{soussen2015}. Therefore, SBR solves the $l_0$-regularized least squares in a sub-optimal way, providing a satisfactory balance between efficiency and accuracy.
We now write out the algorithm more formally. For any currently chosen active set $S$, define a single replacement $S\cdot i$, $i\in\left\{1, \ldots, p\right\}$, as $S$ with a single element $i$ added or removed,
$$
S\cdot i \mathrel{\mathop:}=
\begin{cases}
S\cup\{i\}, & i\notin S \\
S\backslash \{i\}, & i\in S
\end{cases} \ .
$$
Then we compare the objective value at current $S$ with all of its single replacements $S\cdot i$, and choose the best one. SBR proceeds as follows.
\begin{description}
\item Step 0: Initialize $S_0$. Usually, $S_0 = \emptyset$. Compute $f_{SBR}(S_0)$. Set $k = 1$.
\item Step $k$: For every $i \in \{1, \ldots, p\}$, compute $f_{SBR}(S_{k-1} \cdot i)$. Obtain the single best replacement $j \mathrel{\mathop:}= \arg\min\limits_{i}f_{SBR}(S_{k-1} \cdot i)$.
\begin{enumerate}
\item If $f_{SBR}(S_{k-1} \cdot j) \geq f_{SBR}(S_{k-1})$, stop. Report $\hat S = S_{k - 1}$ as the solution.
\item Otherwise, set $S_{k} = S_{k-1} \cdot j$, $k = k+1$, and repeat step $k$.
\end{enumerate}
\end{description}
\cite{soussen2011} shows that SBR always stops within finitely many steps. With the output $\hat S$, the locally optimal solution $\hat\beta$ to the $l_0$-regularized least squares is given by the least squares coefficients of $y$ regressed on $X_{\hat S}$, with zeros elsewhere.
In order to include both forward and backward steps in the variable selection process, the algorithm needs to compute $f_{SBR}(S_{k-1} \cdot i)$ for every $i$ at every step. Because each candidate move involves a one-column update of the current design matrix $X_{S_{k - 1}}$, this computation can be made very efficient by using Cholesky decomposition updates, without explicitly calculating $p$ linear regressions at each step \citep{soussen2011}. An \texttt{R} package implementation of the algorithm is available upon request.
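Independently of that package, a naive Python sketch of SBR (our own illustration) is given below; it refits least squares for every candidate move, whereas the Cholesky updates mentioned above make each sweep far cheaper.
\begin{verbatim}
import numpy as np

def sbr(X, y, lam):
    # Single Best Replacement: greedy forward-backward search over S
    n, p = X.shape
    def obj(S):
        if not S:
            return 0.5 * y @ y
        B = X[:, sorted(S)]
        coef = np.linalg.lstsq(B, y, rcond=None)[0]
        r = y - B @ coef
        return 0.5 * r @ r + lam * len(S)
    S, f = set(), 0.5 * y @ y
    while True:
        f_new, j = min((obj(S ^ {i}), i) for i in range(p))  # S ^ {i}: toggle i
        if f_new >= f:
            break                  # no single change improves: local optimum
        S, f = S ^ {j}, f_new
    beta = np.zeros(p)
    if S:
        idx = sorted(S)
        beta[idx] = np.linalg.lstsq(X[:, idx], y, rcond=None)[0]
    return beta, S
\end{verbatim}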
\section{Applications \label{eg}}
\subsection{Statistical properties of SBR and $l_0$ regularization \label{perform}}
The design matrix $X$ in this experiment has $n = 120$ rows and $p = 100$ columns. In order to impose high collinearity on the columns of $X$, we construct it in the following way.
\begin{enumerate}
\item Construct a $p \times d$ matrix $L$ consisting of $N\left(0, 1\right)$ random samples and obtain $\Sigma_X = LL^T + I_p$. If $d \ll p$, $\Sigma_X$ will have a low-rank structure. Here we use $d = 5$.
\item Sample each of the $n$ rows of $X$ from the multivariate normal distribution $N_p\left(0, \Sigma_X\right)$.
\item Center and normalize the columns of $X$ such that each column sums to zero and has unit $l_2$ norm.
\end{enumerate}
A design matrix $X$ constructed this way has highly collinear columns. $\beta$ is a highly sparse coefficient vector with $100$ elements, $90$ of which are zero, with the remaining $10$, randomly chosen, set to $\left\{-5, -4, -3, -2, -1, 1, 2, 3, 4, 5\right\}$. The noise vector $e$ is sampled from $N\left(0, \sigma_e^2\right)$. In this setting, the signal-to-noise ratio is defined as
$$
\text{SNR}=10\log_{10}\left[\frac{\sigma^2(X\beta)}{\sigma_e^2}\right] \ ,
$$
and $\sigma_e$ is determined such that $\text{SNR}=20\text{ dB}$. Finally, let $y = X\beta + e$ be the observation.
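For reproducibility, the data-generating process just described can be sketched as follows (our own code; the random seed is arbitrary).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
n, p, d = 120, 100, 5

L = rng.standard_normal((p, d))
Sigma_X = L @ L.T + np.eye(p)        # low-rank plus identity covariance
X = rng.multivariate_normal(np.zeros(p), Sigma_X, size=n)
X -= X.mean(axis=0)                  # columns: zero mean ...
X /= np.linalg.norm(X, axis=0)       # ... and unit l2 norm

beta = np.zeros(p)
support = rng.choice(p, size=10, replace=False)
beta[support] = [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5]

snr_db = 20.0
signal = X @ beta
sigma_e = np.sqrt(signal.var() / 10 ** (snr_db / 10))  # SNR = 20 dB
y = signal + sigma_e * rng.standard_normal(n)
\end{verbatim}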
Essentially all regularization methods, including ridge regression, Lasso, and $l_0$ regularization, share a common difficulty: finding a suitable regularization parameter $\lambda$. A thorough theoretical treatment of the optimal $\lambda$ has not been established, and in practice cross validation is often used.
Meanwhile, one advantage of $l_0$ regularization is that, compared with its $l_1$ counterpart, the coefficient estimates are relatively insensitive to $\lambda$ once it is large enough, as shown in Figure \ref{fig:solutionpath}. Therefore, we do not have to worry too much about choosing a particularly optimal $\lambda$.
\begin{figure}[!htb]
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth]{solutionpath_l0.eps}
\end{minipage}
\begin {minipage}{0.49\textwidth}
\includegraphics[width=\linewidth]{solutionpath_l1.eps}
\end{minipage}
\caption{Solution paths of $l_0$ regularization by SBR (left) and its $l_1$ counterpart Lasso by \texttt{glmnet} \citep{glmnet}. \textit{The horizontal dotted lines indicate the true values of $\beta$, and the vertical dashed line indicates a $\hat\lambda^{CV}$ chosen by cross validation in $\log$-scale. The range of $\lambda$ in both plots is from $\frac12\hat\lambda^{CV}$ to $4\hat\lambda^{CV}$. Once $\lambda$ passes a certain value, the coefficient estimates of $l_0$ regularization given by SBR are much more accurate and less sensitive to $\lambda$ than those of Lasso given by \texttt{glmnet}.}}
\label{fig:solutionpath}
\end{figure}
In a large scale numerical experiment, we conducted $1000$ simulation trials with random $X, y$ generated as above. We compare SBR with the ordinary least squares (OLS), two sparsity regularization methods, Lasso and elastic net with the tuning parameter $\alpha = 0.5$, and two Bayesian MCMC methods, Bayesian bridge \citep{polson2014} and Bayesian spike-and-slab shrinkage \citep{scott2014}, each with their state-of-the-art implementations. The regularization parameter $\lambda$ in SBR is determined by a 10-fold cross validation. For Lasso and elastic net, the regularization parameter $\lambda$ is chosen by the built-in cross validation in \texttt{glmnet} \citep{glmnet}. For Bayesian bridge, we use the default prior in the package \texttt{BayesBridge} \citep{polson2012bridge}. For spike-and-slab, the prior inclusion probability is set to be $0.5$, and other hyperparameters are the same as in the default setting in the package \texttt{BoomSpikeSlab} \citep{scott2016}.
Figure \ref{fig:mse} shows the accuracy in estimating $\beta$ for the six methods. For OLS, Lasso, elastic net, and SBR, $\hat\beta$ are the minimizers of the corresponding optimization problems, and for Bayesian bridge and spike-and-slab, $\hat\beta$ are the posterior means. SBR performs as well as the gold standard spike-and-slab and better than all the other methods, including the two widely used convex regularization methods.
\begin{figure}[!htb]
\centering
\includegraphics[scale = 0.4]{comparison_mse.eps}
\caption{Comparison in estimation accuracy. \textit{The boxplot depicts empirical mean squared errors over $1000$ simulation trials. The $l_0$ regularization by SBR and the Bayesian posterior mean estimators under the gold standard spike-and-slab priors outperform the convex regularization estimators in this regard.}}
\label{fig:mse}
\end{figure}
In terms of variable selection, we compare SBR with the other $3$ sparsity-inducing methods: Lasso, elastic net, and Bayesian spike-and-slab. For spike-and-slab, a coefficient is selected when its posterior inclusion probability is greater than $0.5$. Figure \ref{fig:varsel} shows that all methods are able to select all of the true non-zero coefficients almost all the time. However, in terms of preventing false selection, SBR and spike-and-slab are the best, whereas Lasso and elastic net tend to drastically over-select.
\begin{figure}[!htb]
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth]{comparison_truesel.eps}
\end{minipage}
\begin {minipage}{0.49\textwidth}
\includegraphics[width=\linewidth]{comparison_falsesel.eps}
\end{minipage}
\caption{Comparison in variable selection accuracy. \textit{All $4$ methods generally select all of the $10$ (horizontal dotted line) true non-zero coefficients almost all the time. SBR and spike-and-slab are on par in successfully preventing over-selection, but Lasso and elastic net tend to produce a lot of false selections.}}
\label{fig:varsel}
\end{figure}
Recently, \cite{hastie2017} compared Lasso with best subset selection \citep{bertsimas2016} and forward stepwise selection, and found a similar phenomenon. Namely, the performance of best subset selection and forward stepwise selection is overall similar, and both tend to outperform Lasso in the high signal-to-noise setting. Since SBR is a forward-backward stepwise selection algorithm, it is no surprise that SBR gives better results than Lasso in such settings.
\subsection{Computational efficiency and scalability of SBR}
In Section \ref{perform} we have shown that $l_0$ regularization has superior statistical properties in terms of both minimizing the estimation risk and selecting the correct variables, in particular improving on convex regularization methods such as Lasso and elastic net.
Full Bayesian spike-and-slab performs very well statistically. However, to achieve this good performance, spike-and-slab needs to run a complete MCMC sampler, and this task can take a significant amount of time, especially in high-dimensional settings, whereas regularization methods are usually able to handle large-scale computation efficiently.
In order to compare the computational efficiency of the different methods, two sets of experiments, one with $n = 120, p = 100$, the other with $n = 300, p = 200$, are run, and the results are plotted in Figure \ref{fig:time}. SBR, like Lasso and elastic net, is almost as efficient as OLS, and its cost only grows proportionally as the size of the problem increases. In contrast, the two full Bayesian methods, Bayesian bridge and especially spike-and-slab, are costly and scale badly with the problem size. Indeed, when $n = 300, p = 200$, it can take as much as $40$ minutes to run even one spike-and-slab MCMC, whereas SBR finishes all $200$ simulation trials in under $10$ seconds. When $n$ and $p$ are in the thousands, spike-and-slab is computationally intractable.
\begin{figure}[!htb]
\centering
\includegraphics[scale = 0.55]{comparison_time.eps}
\caption{Time cost by different methods for different problem sizes. \textit{SBR is as efficient as convex regularization methods Lasso and elastic net, whereas full Bayesian MCMC methods Bayesian bridge and spike-and-slab take significantly longer time. When the problem size increases from $n = 120, p = 100$ to $n = 300, p = 200$, averaging over $200$ simulation trials, the time cost of Lasso changes from $0.008$ to $0.024$ seconds, SBR $0.011$ to $0.046$ seconds, yet that of spike-and-slab MCMC surges from $10$ to more than $170$ seconds.}}
\label{fig:time}
\end{figure}
\subsection{Diabetes data \label{diabetes}}
Now we examine the performance of SBR on the classic diabetes data, available in the \texttt{R} package \texttt{lars} \citep{efron2004}. The design matrix $X$ has $64$ columns, including all $10$ biochemical attributes and certain interactions. Each column of $X$ has been normalized to have zero mean and unit $l_2$ norm. The response $y$ is centered to have zero mean. We compare SBR with sparsity-inducing methods including Lasso, elastic net, and spike-and-slab priors, with the same settings as in Section \ref{perform} and $\lambda$ determined by cross validation. The results shown in Table \ref{table:sel} indicate that SBR's variable selection performance is in line with popular sparse linear regression alternatives.
\begin{table}[h!]
\centering
\begin{tabular}{| C || C | C | C | C |}
\hline
Variable & Lasso & Elastic Net & Spike \& Slab & SBR\\
\hline\hline
\texttt{sex} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\
\texttt{bmi} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\
\texttt{map} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\
\texttt{hdl} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\
\texttt{ltg} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\
\texttt{glu} & $\checkmark$ & $\checkmark$ & --- & --- \\
$\texttt{age}^2$ & $\checkmark$ & $\checkmark$ & --- & --- \\
$\texttt{bmi}^2$ & $\checkmark$ & $\checkmark$ & --- & --- \\
$\texttt{glu}^2$ & $\checkmark$ & $\checkmark$ & --- & $\checkmark$ \\
$\texttt{age}\cdot\texttt{sex}$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\
$\texttt{age}\cdot\texttt{map}$ & $\checkmark$ & $\checkmark$ & --- & --- \\
$\texttt{age}\cdot\texttt{ltg}$ & $\checkmark$ & $\checkmark$ & --- & --- \\
$\texttt{age}\cdot\texttt{glu}$ & $\checkmark$ & $\checkmark$ & --- & --- \\
$\texttt{sex}\cdot\texttt{map}$ & --- & $\checkmark$ & --- & --- \\
$\texttt{bmi}\cdot\texttt{map}$ & $\checkmark$ & $\checkmark$ & --- & $\checkmark$ \\
\hline
\end{tabular}
\caption{Variable selection for the diabetes data. \textit{Out of all $64$ variables, only those selected by at least one method are shown; chosen variables are marked by $\checkmark$, and --- otherwise. Lasso and elastic net select the most variables, whereas spike-and-slab selects the fewest. Compared with these two extremes, SBR selects all variables chosen by spike-and-slab, and no variables not chosen by Lasso and elastic net. The result indicates that SBR performs reasonable variable selection.}}\label{table:sel}
\end{table}
\section{Discussion \label{dis}}
Bayesian $l_0$ regularization can be solved using a fast and scalable single best replacement (SBR) algorithm. In variable selection, this estimator possesses many of the statistical properties of spike-and-slab priors. We provide theoretical links between the spike-and-slab MAP estimator and $l_0$ regularization.
We also explore the connection between regularized MAP estimators and posterior means \citep{strawderman2013}. The Tweedie--Masreliez construction of the posterior mean is re-interpreted as a proximal update rule. This proximal update identity shows how a sparse posterior mode can be viewed as a posterior mean under a suitably defined prior. Bernoulli-Gaussian (BG) and Bernoulli-Laplace (BL) priors are used for illustration. Our approach demonstrates how regularized estimators can have good out-of-sample mean squared error.
In simulated and real data applications, SBR performs favorably compared with popular convex regularization methods such as Lasso and elastic net, as well as full Bayesian sampling methods including Bayesian bridge and spike-and-slab priors.
Recently, non-convex feature selection methods for sparse signal estimation have gained increasing attention in the statistical learning community, including the classic SCAD penalty \citep{fan2001}, the $l_q$ penalty \citep{marjanovic2013}, and horseshoe regularization \citep{bhadra2017horseshoe}. There are a number of future directions for research, such as regularized logistic regression \citep{gramacy2012} and structural sparsity learning \citep{polson2017proximal}. A comprehensive theoretical treatment and empirical comparison of different non-convex regularizations, and of the trade-off between statistical accuracy and computational efficiency, remains open.
A complete solution to the backward Ricci flow $(\mathcal{M}^{n},g(\tau))$,
$\tau\in(0,\infty)$, is a \textbf{Type I }$\kappa$\textbf{-solution} if
$\left\vert \operatorname{Rm}\right\vert (\tau)\leq\frac{C}{\tau}$ for some
constant $C$ and each $g(\tau)$ is $\kappa$-noncollapsed below all scales. In
this definition we do not assume nonnegativity of the curvatures.
As a special case of Proposition 0.1 in \cite{NiClosedTypeI}, Lei
Ni has classified $3$-dimensional closed Type I $\kappa$-solutions. In all
dimensions Ni first showed that if $(\mathcal{M}^{n},g(\tau))$, $\tau
\in(0,\infty)$, is a solution to the backward Ricci flow on a compact manifold
and satisfies $\left\vert \operatorname{Rm}\right\vert (\tau)\leq\frac{A}%
{\tau}$, then there exists a constant $C(n,A)$ such that%
\begin{equation}
\operatorname{diam}\left( g(\tau)\right) \leq\max\{\operatorname{diam}%
\left( g(1)\right) ,C(n,A)\}\sqrt{\tau}\quad\text{for }\tau\in
(1,\infty).\label{LeiNi1}%
\end{equation}
In particular, by Lemma 8.3(b) in Perelman \cite{Perelman1}, there exists
$C=C(n,A)$ such that $\frac{\partial}{\partial\tau}d_{\tau}(x_{1},x_{2}%
)\leq\frac{C(n,A)}{2\sqrt{\tau}}$ for any $x_{1},x_{2}\in\mathcal{M}$ with
$d_{\tau}(x_{1},x_{2})\geq C(n,A)\sqrt{\tau}$. Then (\ref{LeiNi1}) follows
from the consequence that for $x_{1},x_{2}\in\mathcal{M}$ and $\tau\geq1$ we
have%
\[
\frac{d_{\tau}(x_{1},x_{2})}{\sqrt{\tau}}<C(n,A)\quad\text{or}\quad
\frac{\partial}{\partial\tau}\frac{d_{\tau}(x_{1},x_{2})}{\sqrt{\tau}}\leq0.
\]
Using this, Ni proved that if $\left( \mathcal{M}^{n},g\left( \tau\right)
\right) $, $\tau\in\left( 0,\infty\right) $, is a closed Type I $\kappa
$-solution with positive curvature operator (PCO), then $\left(
\mathcal{M},g(\tau)\right) $ is isometric to a shrinking spherical space
form. In particular, since $g\left( \tau\right) $ has PCO and $\mathcal{M}$
is closed, by Hamilton \cite{Hamilton1982}, \cite{Hamilton1986} when $n=3,4$
and B\"{o}hm and Wilking \cite{BohmWilking} when $n\geq5$, $g\left(
\tau\right) $ converges to a constant positive sectional curvature (CPSC)
metric $g_{0}$ as $\tau\rightarrow0$. By 11.2 in \cite{Perelman1}, fixing $p$,
there exist $q_{i}$ such that $\ell_{(p,0)}^{g}\left( q_{i},i\right)
\leq\frac{n}{2}$ and $\left( \mathcal{M}^{n},i^{-1}g\left( i\tau\right)
,q_{i}\right) $ subconverges in the Cheeger--Gromov sense to a complete
nonflat shrinking gradient Ricci soliton (GRS) $\left( \mathcal{M}_{\infty}^{n},g_{\infty}\left(
\tau\right) ,q_{\infty}\right) $. By (\ref{LeiNi1}), we have%
\[
\operatorname{diam}\left( \frac{1}{i}g\left( i\tau\right) \right) \leq
\max\{\operatorname{diam}\left( g(1)\right) ,C(n,A)\}\sqrt{\tau}%
\quad\text{for }\tau\in(\frac{1}{i},\infty).
\]
Thus $\mathcal{M}_{\infty}$ is compact and diffeomorphic to $\mathcal{M}$.
Since $(\mathcal{M},g_{\infty}\left( \tau\right) )$ is irreducible with
nonnegative curvature operator on a topological spherical space form,
$g_{\infty}\left( \tau\right) $ must be a CPSC metric. By all of the above,
after rescaling, $g\left( \tau\right) $ converges to a metric which is
isometric to a constant multiple of $g_{\infty}\left( 1\right) $ as either
$\tau\rightarrow0$ or $\tau\rightarrow\infty$. This implies that Perelman's
invariant $\nu(g(\tau))$ must be constant, which implies that $g(\tau)$ is a
shrinking GRS and hence a CPSC metric.
As a corollary, any $3$-dimensional closed Type I $\kappa$-solution must be
isometric to a shrinking spherical space form. The reason is as follows. By
B.-L. Chen \cite{ChenStrongUniqueness}, $\operatorname{Rm}\geq0$. If
$\operatorname{Rm}>0$, then $g(\tau)$ is a CPSC metric by Ni's theorem. On the
other hand, if the sectional curvatures are not positive, then $\mathcal{M}%
^{3}$ is covered by $\mathcal{S}^{2}\times\mathbb{R}$. Since any closed such
solution is $\kappa$-collapsed, we are done.
Observe that, by Brendle and Schoen \cite{BrendleSchoen} and Brendle
\cite{BrendleHarnack} (the latter enabling Perelman's $\kappa$-solution theory
to extend), Ni's theorem holds under Brendle--Schoen positivity of curvature.
In this note we observe that the combined results of Perelman \cite{Perelman1}%
, Naber \cite{Naber}, Enders, M\"{u}ller and Topping \cite{EMT}, and Zhang and
the first author \cite{CaoZhang} yield the following special case of the
assertion by Perelman (private communication to Ni) that any $3$-dimensional
Type I $\kappa$-solution with PCO must be a shrinking CPSC metric. As we
mentioned in the abstract, this result is implied by the earlier work of
Ding \cite{Ding} and is generalized in the recent work of the third
author \cite{Zhang}, where the condition of being Type I forward in time
is removed.
\begin{proposition} \label{proposition}
Suppose that $(\mathcal{M}^{3},g(\tau))$, $\tau\in(0,\infty)$, is a $\kappa
$-solution to the backward Ricci flow with PCO forming a singularity at
$\tau=0$ and satisfying $\left\vert \operatorname{Rm}\right\vert (\tau
)\leq\frac{A}{\tau}$, then $\mathcal{M}$ is closed and $g(\tau)$ is a
shrinking CPSC metric.
\end{proposition}
Note that we have assumed that the solution is Type I both forward and
backward in time. Applications of this result to the study of shrinking
gradient Ricci soliton (GRS) singularity models follow from Naber
\cite[\S 5]{Naber}, Lu and the second author \cite[Theorem 3]{ChowLuSplit},
and Munteanu and Wang \cite{MunteanuWangAS}.
\section{Proof of the proposition}
Before we proceed to prove Proposition \ref{proposition}, we prove the following lemma that asserts the existence of a singular point at the forward singular time on a $3$-dimensional $\kappa$-solution. This is crucial in proving that the blow-up limit is nonflat. The existence of such a point is an issue because of the noncompactness of $\mathcal{M}$; see Remark 1.1 in \cite{EMT}. We actually prove that \emph{every} point of $\mathcal{M}$ is a singular point.
\begin{lemma}\label{lemma}
Let $(\mathcal{M}^{3},g(\tau))$, where $\tau\in(0,\infty)$, be a $\kappa$-solution that forms a singularity at $\tau=0$ in the sense that $\displaystyle \lim_{\tau\rightarrow0^+}\sup_{x\in \mathcal{M}}R(x,\tau)=\infty$, where $R$ denotes the scalar curvature. Then every $p\in \mathcal{M}$ is a singular point in the sense that $\displaystyle \limsup_{\tau\rightarrow 0^+}R(p,\tau)=\infty$.
\end{lemma}
\begin{proof}
Since $0$ is a singular time, by definition we may find a sequence $\{(x_i,\tau_i)\}_{i=1}^\infty$, such that $\tau_i\searrow 0$ and $R(x_i,\tau_i)\rightarrow\infty$. Suppose $p\in \mathcal{M}$ is not a singular point. Then there exists $C<\infty$ such that $R(p,\tau_i)\leq C$ for every $i\in\mathbb{N}$. By Hamilton's trace Harnack estimate \cite{Hamilton1993}, we have $\displaystyle \frac{\partial R}{\partial \tau}\leq 0$. Hence $R(p,\tau_i)\in[c,C]$, for all $i\in\mathbb{N}$, where we denote $c=R(p,\tau_1)>0$. Define $g_i(\tau)=g(\tau+\tau_i)$. Then we can use Perelman's $\kappa$-compactness theorem \cite{Perelman1} to extract a (not relabelled) subsequence from $\displaystyle \{(\mathcal{M},g_i(\tau),(p,0))_{\tau\in[0,\infty)}\}_{i=1}^\infty$, which converges to a $\kappa$-solution $(\mathcal{M}_\infty,g_\infty(\tau),(p_\infty,0))_{\tau\in[0,\infty)}$. In particular, $(\mathcal{M}_\infty,g_\infty(0))$ has bounded curvature. Let $A<\infty$ be the curvature bound of $(\mathcal{M}_\infty,g_\infty(0))$. By the definition of pointed smooth Cheeger--Gromov convergence and by passing to a suitable subsequence, there exists a sequence of open precompact sets $\{U_i\}_{i=1}^\infty$ exhausting $(\mathcal{M}_\infty,g_\infty(0))$, where each $U_i$ contains $p_\infty$, and there exists a sequence of diffeomorphisms
\begin{eqnarray*}
\psi_i:U_i&\rightarrow& V_i\subset(\mathcal{M},g_i(0)),
\\
\psi_i(p_\infty)&=&p,
\end{eqnarray*}
with the following properties. We have $\overline{B_{g_i(0)}(p,i)}\subset V_i$ and that $\psi_i^*g_i(0)$ is $i^{-1}$-close to $g_\infty(0)$ on $U_i$ with respect to the $C^i$-topology. Notice here that we actually have Cheeger--Gromov convergence of the solutions of the backward Ricci flow on the whole time interval $[0,\infty)$, but we need only to use the convergence on the time zero slice. Let $i_1\in\mathbb{N}$ be large enough so that $R(x_{i_1},\tau_{i_1})>100A$, where the existence of $i_1$ is guaranteed by the assumption that $R(x_i,\tau_i)\rightarrow\infty$. Then we select $i_2>i_1$ such that $dist_{g_{i_1}(0)}(p,x_{i_1})=dist_{g(\tau_{i_1})}(p,x_{i_1})<100^{-1}i_2$. Since the Ricci flow with nonnegative curvature shrinks distances forward in time, it follows that $dist_{g(\tau_{i_2})}(p,x_{i_1})<100^{-1}i_2$ and hence that $x_{i_1}\in\overline{B_{g_{i_2}(0)}(p,i_2)}\subset V_{i_2}$. Moreover, by Hamilton's trace Harnack estimate \cite{Hamilton1993} we have $R(g_{i_2}(0))(x_{i_1})=R(x_{i_1},\tau_{i_2})\geq R(x_{i_1},\tau_{i_1})>100A$, since $\tau_{i_2}<\tau_{i_1}$. This yields a contradiction when $i_2$ is large enough (say $i_2>10000$) since $\psi_{i_2}^{-1}(x_{i_1})$ is contained in the set $U_{i_2}$ on which $\psi_{i_2}^*g_{i_2}(0)$ is $i_2^{-1}$-close to $g_\infty(0)$ with respect to the $C^{i_2}$-topology, while the curvature of $g_\infty(0)$ is bounded by $A$.
\end{proof}
\bigskip
We now give the proof of our main result.
\bigskip
\begin{proof}[Proof of Proposition \ref{proposition}]
By Ni's theorem, we may suppose that $\mathcal{M}^{3}$ is noncompact, so that
$\mathcal{M}$ is diffeomorphic to $\mathbb{R}^{3}$. By the first part of
Theorem 3.1 in \cite{Naber}, for any $x\in\mathcal{M}$, $\tau_{i}%
^{-}\rightarrow0$, and $\tau_{i}^{+}\rightarrow\infty$, $(\mathcal{M}%
,(\tau_{i}^{\pm})^{-1}g(\tau_{i}^{\pm}\tau),(x,1))$ subconverges to a noncompact
shrinking GRS $(\mathcal{M}^{\pm},g^{\pm}(\tau),(x^{\pm},1))$ which does not contain any
embedded $\mathbb{R}P^{2}$. By Theorem 1.1 in \cite{EMT} and by Lemma \ref{lemma} above, $(\mathcal{M}%
^{-},g^{-}(\tau))$ is nonflat since every point is a singular point, whereas by Theorem 4.1 in \cite{CaoZhang} (see
also the statements in its proof), $(\mathcal{M}^{+},g^{+}(\tau))$ is nonflat,
since both of these results apply to noncompact manifolds. By Lemma 1.2 in
Perelman \cite{Perelman2}, $g^{\pm}(\tau)$ cannot have PCO. Thus the
$(\mathcal{M}^{\pm},g^{\pm}(\tau))$ are isometric to (shrinking) round cylinders
$\mathcal{S}^{2}\times\mathbb{R}$. By the second part of Theorem 3.1 in
\cite{Naber}, we conclude that the same is true for $(\mathcal{M}^{3}%
,g(\tau))$, which contradicts $g(\tau)$ having PCO.
\end{proof}
\begin{remark}
In \cite{Zhang} by the third author, it is shown that there do not exist $3$-dimensional noncompact
PCO $\kappa$-solutions under the assumption only that the solution is Type I backward in time. This confirms an assertion
that Grisha Perelman made to Lei Ni.
\end{remark}
\section{A criterion for ancient solutions to form forward singularities}
In this section we present an application of Lemma \ref{lemma}, which gives a necessary and sufficient condition for a $3$-dimensional $\kappa$-solution to form a forward singularity.
\begin{corollary}
A $3$-dimensional $\kappa$-solution forms a forward singularity if and only if at some time slice $\inf_{\mathcal{M}} R>0$.
\end{corollary}
\begin{proof}
Let $(\mathcal{M}^{3},g(\tau))$, where $\tau\in(0,\infty)$, be a $\kappa$-solution to the backward Ricci flow that forms a singularity at $\tau=0$. By Lemma \ref{lemma}, for every $p\in \mathcal{M}$, $R(p,\tau)$ increases to infinity as $\tau\searrow0$. By integrating Perelman's derivative estimate \cite{Perelman1}
\begin{eqnarray*}
\left|\frac{\partial R}{\partial\tau}\right|\leq\eta R^2,
\end{eqnarray*}
where $\eta$ depends only on $\kappa$, from $0$ to $\tau$, we have
\begin{eqnarray*}
R(p,\tau)\geq\frac{1}{\eta\tau}
\end{eqnarray*}
for every $p\in \mathcal{M}$ and $\tau\in(0,\infty)$. It follows immediately that $\displaystyle\inf_{p\in \mathcal{M}}R(p,\tau)>0$ for every $\tau\in(0,\infty)$.
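To make the integration step explicit (a routine computation, included for the reader's convenience): the estimate $\left|\partial R/\partial\tau\right|\leq\eta R^{2}$ gives $\partial R/\partial\tau\geq-\eta R^{2}$, hence $\frac{d}{d\tau}R^{-1}(p,\tau)=-R^{-2}\,\frac{\partial R}{\partial\tau}\leq\eta$, so that for $0<\sigma<\tau$,
\[
\frac{1}{R(p,\tau)}-\frac{1}{R(p,\sigma)}\leq\eta\left( \tau-\sigma\right) \ ;
\]
letting $\sigma\rightarrow0^{+}$, where $R(p,\sigma)\rightarrow\infty$ as noted above, yields the stated lower bound.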
\\
On the other hand, suppose $(\mathcal{M}^{3},g(\tau))$, where $\tau\in[0,\infty)$, is a $\kappa$-solution to the backward Ricci flow such that $\inf_{p\in \mathcal{M}}R(p,T)=c>0$ for some $T>0$. We use an idea of Perelman \cite{Perelman2} to show that the solution cannot be extended forward to time infinity. Up to scaling the solution by a constant factor, we can find a sequence $x_i\rightarrow\infty$, such that $\displaystyle \lim_{i\rightarrow\infty}R(x_i,T)=1$. Applying the $\kappa$-compactness theorem \cite{Perelman1}, we can extract a (not relabelled) subsequence of $\{(\mathcal{M},g(\tau+T),(x_i,0))\}_{i=1}^\infty$, converging to a $\kappa$-solution $(\mathcal{M}_\infty,g_\infty(\tau),(x_\infty,0))$, which must be the shrinking cylinder since we have splitting at infinity; see \cite{Perelman1}. Moreover, we have $R_\infty(x_\infty,0)=1$ and $(\mathcal{M}_\infty,g_\infty(\tau))$ has unbounded curvature as $\tau\rightarrow-1$. Then we can conclude that $(\mathcal{M},g(\tau))$ becomes singular as $\tau\rightarrow T-1$. For suppose this is not the case. Then there exists an $\varepsilon>0$ such that $R(g(\tau))$ is uniformly bounded for $\tau\in[T-1-\varepsilon,\infty)$. It then follows that the limit flow $(\mathcal{M}_\infty,g_\infty(\tau))$ exists and has bounded curvature for $\tau\in[-1-\varepsilon,\infty)$, which is a contradiction.
\end{proof}
\begin{acknowledgement}
We would like to thank Peng Lu, Ovidiu Munteanu, Lei Ni, and Jiaping Wang for
helpful discussions. X. Cao's research was partially supported by a grant from the Simons Foundation (\#280161).
\end{acknowledgement}
Dijkstra's algorithm is a widely-used algorithm for solving the Single Source Shortest Path problem: given a vertex in a graph with non-negative edge weights, compute the distances from this vertex to all other vertices. The algorithm employs a single data structure, a priority queue. While any priority queue implementation suffices for the correctness of the algorithm, different queue implementations obviously provide different running time complexity, both asymptotically and in practice. Algorithm textbooks mostly recommend using a Fibonacci Heap as the chosen queue implementation because of its fast asymptotic running time \cite{Cormen90}. Practitioners \cite{Boost}, as well as some textbooks \cite{Cormen90}, recommend using a $d$-ary heap as the queue implementation, because of its fast running time in practice.
The Dijkstra algorithm is an important building block in network science, where it is used for studying graph characteristics \cite{example-JJ,example-BIU}, large graph clustering \cite{BSWW14}, and many other graph-related problems. It is an important algorithm for the Internet, where it is
part of the shortest path calculation performed by the OSPF \cite{ospf} and IS-IS \cite{is-is} protocols. It is also used by many applications in diverse fields, such as image processing \cite{avidan2007seam}, hardware design \cite{example-Cidon}, and many more.
Leveraging the algorithm's properties, we observe that the queue implementation does not have to deal with a general sequence of queries and updates. Indeed, since edge weights are non-negative, once a pop\_min() operation on the queue has returned the value $x$, no value smaller than $x$ will ever be inserted (or be present) in the queue. We thus chose a queue implementation based on an array, where all vertices with current distance $d$ are stored in a linked list whose base is the $d$-th cell of the array (for the clarity of the introduction, consider only the case of integer weights - floating point weights can also be dealt with, as will be explained later). Using this implementation gives O(1) insert() and decrease\_key() time, while the total time for all pop\_min() operations combined is bounded by a constant independent of the graph size (and in practice takes at most a few seconds).
We tested our implementation against Boost, which is one of the most highly regarded and expertly designed C++ library projects in the world \cite{Boost-brag}, and show that ours outperforms Boost for both generated Erd\H{o}s-R\'{e}nyi networks and the real-world mainland USA road network.
Similar performance analysis for several methods of implementing the algorithms, including methods similar to those we suggest here, was performed by Cherkassky {\em et al.} \cite{cherkassky1996shortest}. However, the experiments in \cite{cherkassky1996shortest} were done some 20 years ago, on the limited hardware available at that time, and consequently, on graphs which could be processed on such hardware - such graphs are considered small nowadays. Just to give a sense of the differences in the orders of magnitude involved, \cite{cherkassky1996shortest} ran the experiments on a SUN Sparc-10 workstation with a 40 MHz processor and 160 MBytes of memory, whereas most of our experiments were done on a 1600 MHz machine with 15 gigabytes of memory. The number of vertices in the graphs we use in our experiments is usually in the millions, whereas \cite{cherkassky1996shortest} have performed only 4 experiments where the number of vertices is above 1 million (and in these experiments, it is only slightly larger than 1 million).
Furthermore, \cite{cherkassky1996shortest} have mainly used a 3-ary heap and a double-bucket queue as queue implementations for the algorithm. A double-bucket queue implementation is similar to the mechanism we term Swap Prevention, described later in this report. In most experiments, they were unable to employ our suggested queue implementation, as it required too much memory. Indeed, with the limited amounts of memory available at the time, this would be expected. However, our results indicate that, assuming enough memory is available, our chosen queue implementation outperforms the Swap Prevention mechanism, and considering it is significantly simpler to implement, it would be the natural choice for the queue implementation. \cite{cherkassky1996shortest} could not run our chosen queue implementation on a single graph where the number of vertices is at least 1 million, and in several instances, they claim it was in fact unemployable even on small graphs (by today's standards) where the number of nodes is in the several thousands (we are surprised by this claim - while we expect memory constraints to be a problem for experiments done at the time, a reasonable implementation of our queue mechanism should be able to run on such graphs using less than the amount of memory they had available). In short, while \cite{cherkassky1996shortest} is obviously an important and beneficial work in this area, it does not analyze the performance of our queue implementation for graphs which would be considered a reasonable benchmark nowadays, on modern hardware.
The rest of the paper is organized as follows: Section \ref{sec:Qimpl} elaborates on the workings of the chosen queue implementation. Section \ref{sec:meas} details our measurements of running time for our implementation and the Boost implementation. Section \ref{sec:extn} discusses a few additional potential improvements for our implementation (including the aforementioned solution for floating point weights). Section \ref{sec:cncld} concludes the paper.
\section{Proposed Queue Implementation} \label{sec:Qimpl}
We briefly remind the reader of Dijkstra's algorithm for calculating the shortest paths from one source vertex to all other vertices \cite{Cormen90}.
The algorithm maintains a queue of vertices, sorted by distance from the starting vertex. The queue is initialized to contain the starting vertex, with its distance of 0 (and formally, all other vertices with distance infinity). In each iteration, a pop\_min() operation is performed on the queue, popping out the vertex with the smallest distance present in the queue. All edges of this vertex are relaxed (in no particular order), while maintaining the distances of target vertices as represented in the queue: i.e., for each edge, if the new distance achieved by adding the edge's weight to the distance of the popped vertex is lower than the previous distance of the target vertex, a decrease\_key() operation is performed on the target vertex as present in the queue (formally distances are decreased from their initial value of infinity to a real value, in practice most implementations first insert vertices to the queue only when their distance is less than infinity). After all edges of the popped vertex are relaxed, the algorithm continues to the next iteration, where it performs another pop\_min() operation and so on.
The algorithm performs (up to) $V$ pop\_min() operations, and (up to) $E$ decrease\_key() operations. Choosing a Fibonacci Heap as the queue implementation gives constant amortized time for decrease\_key(), and O($\log V$) for pop\_min(), bringing the total complexity to O($E+V\log V$). However, as noted by \cite[Ch.\ 21]{Cormen90}, the constant factors hidden behind the Big-O notation for Fibonacci Heap make the running time very long in practice. Choosing a $d$-ary heap for the queue implementation gives O($\log V$) for decrease\_key() and O($\log V$) for pop\_min(), bringing the total complexity to O($E\log V + V\log V$). This is the popular choice for implementations of the algorithm, as exemplified by Boost \cite{Boost}.
Before we describe the queue implementation, we need the following observation:
\begin{obsrv}
\label{non-decreasing}
For Dijkstra's algorithm, once a vertex with distance $x$ has been returned from the pop\_min() operation, no vertex with distance less than $x$ will ever be present in the queue.
\end{obsrv}
\begin{proof}
Indeed, at the moment the first vertex with distance $x$ was returned by pop\_min(), no vertex with distance less than $x$ was present in the queue (otherwise, that other vertex would have been the result of pop\_min()).
We now use a proof by induction: if at the start of iteration $i$ of the algorithm no vertex with distance less than $x$ was present in the queue, then at the end of the iteration no such vertex will be present in the queue either (we define ``iteration'' to mean the sequence of actions taken between two successive pop\_min() operations):
Indeed, let $v_i$ be the vertex popped out of the queue in iteration $i$. By our assumption, $v_i$'s distance when popped out, $d$, satisfies $x \le d$. During the iteration, decrease\_key() operations are performed, but the new distances are of the form $d + w$, where $w$ is the weight of some edge, which according to the algorithm's assumptions is non-negative. Hence, the new distances are also at least $x$.
\end{proof}
The performance of Dijkstra's algorithm obviously depends on the queue implementation. Thus, we now describe our new queue implementation. The queue consists of an array of size MAX\_INT (typically $2^{32}$), where cell $i$ in the array is the anchor of a linked list containing the vertices whose current distance is $i$. Most operations on the queue are somewhat trivial to implement (a code sketch follows the list):
\begin{enumerate}
\item init() simply zeroes all cells of the array.
\item insert(vertex $v$, distance $d$) simply inserts $v$ into the linked list in the $d$'th cell of the array.
Since the list is not ordered, $v$ can be placed at the head.
\item decrease\_key(vertex $v$, distance new\_distance) first removes $v$ from the linked list it is currently present in (we use a doubly-linked list for convenience's sake), then performs insert($v$, new\_distance).
\end{enumerate}
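To make the description concrete, the following is a minimal C++ sketch of the structure and of the operations above (illustrative only - our released code \cite{github} differs in details). Vertices are assumed to be identified by integers, which also index the intrusive list links; the two bookkeeping members are explained in the discussion of pop\_min() below.
\begin{verbatim}
#include <cstdint>
#include <vector>

// Illustrative sketch of the array-of-lists queue. head[d] anchors a
// doubly-linked list of the vertices whose current distance is d; the
// lists are intrusive, i.e. next[v] and prev[v] are indexed by vertex id.
struct BucketQueue {
    static const uint32_t NIL = UINT32_MAX;
    std::vector<uint32_t> head, next, prev, dist;
    uint32_t min_distance_candidate = 0;
    uint32_t max_distance_ever_seen = 0;

    BucketQueue(uint32_t max_distance, uint32_t num_vertices)
        : head(max_distance + 1, NIL), next(num_vertices, NIL),
          prev(num_vertices, NIL), dist(num_vertices, NIL) {}

    void insert(uint32_t v, uint32_t d) {        // O(1): push onto list head
        next[v] = head[d]; prev[v] = NIL;
        if (head[d] != NIL) prev[head[d]] = v;
        head[d] = v; dist[v] = d;
        if (d > max_distance_ever_seen) max_distance_ever_seen = d;
    }

    void erase(uint32_t v) {                     // O(1): unlink v from its list
        if (prev[v] != NIL) next[prev[v]] = next[v];
        else head[dist[v]] = next[v];
        if (next[v] != NIL) prev[next[v]] = prev[v];
    }

    void decrease_key(uint32_t v, uint32_t d) {  // O(1): relocate v
        erase(v); insert(v, d);
    }
};
\end{verbatim}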
The only non-trivial operation is pop\_min(), which we now describe:
The queue maintains as one of its internal members a lower bound on the minimal distance of the next pop\_min() operation, which we call min\_distance\_candidate. This value is initialized to zero, then increased as increasing values are popped out of the queue - indeed, recall that the series of popped distances is (non-strictly) increasing, as proven by Observation~\ref{non-decreasing}. Another way of understanding the role of min\_distance\_candidate is to think of it as pointing to the cell from which the last node was popped.
On pop\_min() start (see Figure~\ref{fig:popmin}), the code scans the cells of the array starting from min\_distance\_candidate, until it reaches a cell containing a non-empty list. The new value of min\_distance\_candidate is the index of this cell. The code then pops the first element from the list, and returns it.
\begin{figure*}[htb]
\begin{center}
\begin{verbatim}
pop_min(Queue queue)
{
while (queue.min_distance_candidate <= queue.max_distance_ever_seen) {
        cell = queue.array[queue.min_distance_candidate]
if not cell.list.empty() {
result = cell.list.pop_start()
return result
}
queue.min_distance_candidate++
}
return NULL
}
\end{verbatim}
\end{center}
\caption{A pseudocode for the pop\_min() operation} \label{fig:popmin}
\end{figure*}
This brings the total running time of the algorithm with our queue implementation to O($E + MAX\_INT$): recall that Dijkstra's algorithm performs (up to) $V$ pop\_min() operations, and (up to) $E$ decrease\_key() operations. Our running time for decrease\_key() is O(1), contributing the first term of the total running time. As for the pop\_min() operations, we observe that once min\_distance\_candidate points to a non-empty cell in the array, pop\_min() is O(1). The total number of operations, over the entire run of the algorithm, advancing min\_distance\_candidate from its initial value of 0 to its final value of MAX\_INT is clearly MAX\_INT.
Furthermore, min\_distance\_candidate does not need to reach its maximum theoretical value of MAX\_INT - once the queue is empty, pop\_min() can return NULL and the algorithm is done. This can be implemented by maintaining the current number of vertices present in the queue, then returning NULL once that number reaches 0. Thus, the final value of min\_distance\_candidate is the distance of the farthest vertex from the starting vertex, which we will designate by $U$. This brings the total complexity to O($E + U$).
In our current implementation, the queue instead maintains the largest distance that was ever present in the queue, termed max\_distance\_ever\_seen, then terminates when min\_distance\_candidate surpasses max\_distance\_ever\_seen. Maintaining this value is important for avoiding unnecessary initializations of array cells. We emphasize that the actual memory interactions with the array only occur with cells up to max\_distance\_ever\_seen, such that memory regions above max\_distance\_ever\_seen aren't even physically allocated.
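In the C++ sketch introduced above, pop\_min() then reads as follows (again purely illustrative; head plays the role of queue.array in Figure~\ref{fig:popmin}):
\begin{verbatim}
// Continuation of the BucketQueue sketch: pop_min() with the early-exit
// bound max_distance_ever_seen, mirroring the pseudocode above.
uint32_t pop_min(BucketQueue& queue) {
    while (queue.min_distance_candidate <= queue.max_distance_ever_seen) {
        uint32_t v = queue.head[queue.min_distance_candidate];
        if (v != BucketQueue::NIL) {
            queue.erase(v);              // pop the first element of the list
            return v;
        }
        queue.min_distance_candidate++;  // never decreases (observation above)
    }
    return BucketQueue::NIL;             // queue exhausted
}
\end{verbatim}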
Maintaining such a count of the number of vertices present in the queue is quite easy, but we have currently left it unimplemented, as we suspect it would not further reduce the running time.
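Putting the pieces together, the following is an equally hedged sketch of the algorithm's main loop on top of this queue (the Edge type and adjacency-list representation are hypothetical placeholders; vertices are inserted to the queue only once their distance becomes finite, as discussed above):
\begin{verbatim}
#include <cstdint>
#include <vector>

struct Edge { uint32_t to, weight; };  // hypothetical adjacency-list entry

// Illustrative main loop on top of the BucketQueue sketched above.
// Assumes all finite distances are at most max_distance.
std::vector<uint32_t> dijkstra(const std::vector<std::vector<Edge> >& adj,
                               uint32_t source, uint32_t max_distance) {
    std::vector<uint32_t> dist(adj.size(), UINT32_MAX);  // "infinity"
    BucketQueue queue(max_distance, adj.size());
    dist[source] = 0;
    queue.insert(source, 0);
    for (uint32_t u = pop_min(queue); u != BucketQueue::NIL;
         u = pop_min(queue)) {
        for (const Edge& e : adj[u]) {                   // relax u's edges
            uint32_t nd = dist[u] + e.weight;
            if (nd < dist[e.to]) {
                if (dist[e.to] == UINT32_MAX) queue.insert(e.to, nd);
                else queue.decrease_key(e.to, nd);
                dist[e.to] = nd;
            }
        }
    }
    return dist;
}
\end{verbatim}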
Even in cases where the possible value of $U$ is close to $2^{32}$ (meaning that 32-bit integers are only marginally sufficient for storing distances, and thus may not be an appropriate choice), we emphasize at this early point of the paper that going over an array of size $2^{32}$ may sound prohibitively expensive, but in practice it is not: on our development machine (a strong machine for personal use), doing so takes about 50 seconds, during which some memory needs to be swapped to the hard drive. We elaborate on the matter later.
\section{Measurements} \label{sec:meas}
We now present measurements of the running time of our implementation, compared to the running time of the Boost implementation. We stress that achieving comparable performance to Boost is quite a feat in and of itself, since we're comparing code that was developed in a few weeks to a highly regarded, well-polished library. Our code is publicly available at \cite{github}, and we encourage further experimentation with it. All time measurements were done on an Intel Core-i7 machine with 16 gigabytes of RAM (except for the protein network graph, see below). For Boost time measurements, we tested their implementation on each graph with 4 different heap implementations they recommend, then took the shortest time as the ``Boost time''.
We benchmarked the implementations against a set of graphs generated from the Erd\H{o}s-R\'{e}nyi Model \cite{Erdos}. Figure \ref{fig:ER-time} shows that our implementation outperforms Boost on all tested graph sizes and densities. The runtime speedup ranges from 1.47 to 8.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.45\textwidth,natwidth=1201,natheight=900]{ER.jpg}
\caption{Run time comparison between our implementation and Boost for generated Erd\H{o}s-R\'{e}nyi graphs. }
\label{fig:ER-time}
\end{center}
\end{figure}
Furthermore, we benchmarked both implementations on generated Barab\'{a}si--Albert graphs \cite{barabasi1999emergence}, with the parameter $m$, the number of new edges per new vertex, ranging between 2 and 10, and with 10 million vertices. The weights are uniformly selected between 1 and 1000. Figure \ref{fig:BA-time} shows that in practice, there is little difference in the running time for different values of $m$; our implementation typically runs in about one millisecond, and Boost typically runs in about fifteen milliseconds, giving a speedup of about 15. Figure \ref{fig:BA-size} compares both implementations' running time for $m = 2$, while the number of vertices, $n$, grows. Our implementation's running time is always lower than Boost's, but it is not necessarily increasing with $n$. Such low running times, typically less than 25 milliseconds, seem to indicate that these graphs don't yet exercise the asymptotic behavior of our implementation. Boost's running time, on the other hand, is increasing with $n$, which seems to indicate that as $n$ increases further, the running time will also increase. For the graphs described above, each mark point represents the average of at least 20 random experiments.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.45\textwidth,natwidth=1201,natheight=900]{BA.jpg}
\caption{Run time comparison between our implementation and Boost for generated Barab\'{a}si--Albert graphs. }
\label{fig:BA-time}
\end{center}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.45\textwidth,natwidth=1201,natheight=900]{BAsize.jpg}
\caption{Run time comparison between our implementation and Boost for generated Barab\'{a}si--Albert graphs, for $m=2$. }
\label{fig:BA-size}
\end{center}
\end{figure}
We also benchmarked the implementations against the graph of the entire mainland USA road network, obtained from \cite{challenge9}. Our implementation typically runs in 2--3 seconds, while the Boost implementation (again, taken as the shortest time among the 4 possible heap implementations - in our experience, the variance of runtime between the different heap implementations is quite small) typically runs in 6--7 seconds. Figure \ref{fig:usa} shows the performance over 1000 randomly selected starting vertices in that graph (note that the X axis designates a randomly chosen starting vertex for each point, not the 1000 vertices with the smallest indices) - clearly, our implementation runs faster than Boost on this graph regardless of the starting vertex.
Additionally, we attempted to benchmark our implementation against Boost on the protein network \cite{string-paper}, which can be found at \cite{string}, that translates to a graph with about five million nodes and 664 million edges.
Such a large graph strains the memory requirements on our machine. Nevertheless, our implementation's runtime ranges from 0.0019 seconds to 0.082 seconds on a sample of 13 starting vertices.
The Boost implementation initially hung the machine due to the large memory requirements for storing the graph and an initial data structure, which caused constant swapping and required quite an effort to get it to execute. We managed to get it to run for one case in 0.4 seconds, for an instance that took our implementation 0.02 seconds to run, a factor-of-20 speedup for our implementation.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.45\textwidth,natwidth=1201,natheight=900]{USA.jpg}
\caption{Run time comparison between our implementation and Boost for the full USA road network. }
\label{fig:usa}
\end{center}
\end{figure}
\begin{table}
\begin{tabular}{| r | c | l | l |l|}
\hline
Vertices & Density & Our time & Boost time & speedup \\
& & [Sec.] & [Sec.] & \\ \hline\hline
100,000 & 2.5 & 0.01 & ~0.08 & 8.00 \\ \hline
1,000,000 & 2.5 & 0.23 & ~0.53 & 2.30 \\ \hline
5,000,000 & 2.5 & 1.45 & ~4.14 & 2.86 \\ \hline
10,000,000 & 2.5 & 3.05 & ~9.7 & 3.18 \\ \hline
20,000,000 & 2.5 & 6.68 & 21.97 & 3.29 \\ \hline\hline
100,000 & 15 & 0.03 & ~0.11 & 3.67 \\ \hline
1,000,000 & 15 & 0.68 & ~1 & 1.47 \\ \hline
2,000,000 & 15 & 1.5 & ~2.51 & 1.67 \\ \hline
3,000,000 & 15 & 2.36 & ~4.12 & 1.75 \\ \hline
5,000,000 & 15 & 4.18 & ~7.59 & 1.82\\ \hline\hline
road USA & & & & \\
23,947,347 & 2.44 & 2.57 & 6.25 & 2.43\\ \hline
\end{tabular}
\caption{Run time and speedup comparison between our algorithm and Boost for generated E-R graphs, and USA road network. }
\label{tab:time}
\end{table}
\section{Improvements And Extensions} \label{sec:extn}
We will now discuss two potential extensions to our implementation: dealing with floating point weights, and swap-prevention. To the best of our knowledge, this is the first time the techniques presented in the previous sections are suggested for dealing with real numbers.
{\bf Dealing with floating point weights}: Naturally, as presented so far, our implementation lacks the capability to deal with floating point weights, as well as with weights larger than $2^{32}$. For the latter case, we suggest simply using a 32-bit floating point as the chosen weight representation, and accepting the resulting minimal loss of precision. We acknowledge that loss of precision might be unacceptable for some use cases of the algorithm, but postulate that this is quite rare. The same applies for 64-bit floating point representation: our method requires accepting the loss of precision resulting from switching to 32-bit floating point representation.
Focusing on 32-bit floating point weights, we observe that in essence, the queue depends on the weights being integers solely for the purpose of iterating over the weights in a monotonically increasing order. This can also be achieved for floating point values: a (positive) floating point value is in essence an ordered pair of a mantissa and an exponent, and comparing two values is simply comparing them lexicographically, exponent first and mantissa second. Thus, the floating point value corresponding to exponent $e$ and mantissa $m$, is larger than exactly $m + e \cdot M$ other floating point values, where $M$ is the number of possible mantissas. Therefore, the queue implementation can simply change to have cell $i$ contain a linked list of all vertices whose distance is the $i$-th smallest floating point value. It is easy to show that this preserves the correctness of the queue.
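As a minimal sketch of this observation (assuming IEEE-754 single precision and non-negative, finite weights; this helper is illustrative and not part of our released code), note that for such values the raw bit pattern, read as an unsigned integer, is ordered exactly like the value itself, so it can serve directly as the cell index:
\begin{verbatim}
#include <cstdint>
#include <cstring>

// For non-negative, finite IEEE-754 floats the bit pattern (sign bit 0,
// then exponent, then mantissa) is monotone in the value, so it can be
// used directly as the queue's cell index.
uint32_t distance_to_index(float d) {
    uint32_t bits;
    std::memcpy(&bits, &d, sizeof bits);  // well-defined type punning
    return bits;
}
\end{verbatim}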
It should further be noted that most use cases of the algorithm can likely use a floating point representation of 24 bits or less. For example, using 10 bits for the mantissa allows for a precision of 3 decimal digits past the decimal point, which probably suffices for the vast majority of cases, and using even just 6 bits for the exponent allows for orders of magnitude between $2^{-32}$ and $2^{32}$. We postulate that only rare use cases cannot accept a similar level of precision (this example fits in 16 bits; using 24 bits and allocating the remaining 8 bits according to the use case's needs allows for an even greater level of precision). Obviously, assuming such a level of precision is appropriate, our implementation's running time will be O($E + 2^{24}$), which in practice equals O($E$), since the O($E$) pointer-manipulation operations outweigh the time of reading the 64-megabyte ($2^{24}$-cell) array.
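For instance, the following is a hedged sketch of such a 16-bit packing (a hypothetical helper with a 6-bit biased exponent and a 10-bit mantissa, as in the example above; weights outside $[2^{-32}, 2^{32})$ are clamped, so strict monotonicity is only guaranteed inside that range):
\begin{verbatim}
#include <cmath>
#include <cstdint>
#include <algorithm>

// Order-preserving 16-bit packing of a positive weight: 6-bit biased
// exponent, 10-bit truncated mantissa. Quantization is lossy by design.
uint16_t pack16(double w) {
    if (w <= 0.0) return 0;
    int e;
    double m = std::frexp(w, &e);           // w = m * 2^e, m in [0.5, 1)
    e = std::min(std::max(e + 32, 0), 63);  // bias the exponent into 6 bits
    uint16_t mant = (uint16_t)((m - 0.5) * 2048.0);  // 10 bits: 0..1023
    return (uint16_t)((e << 10) | mant);
}
\end{verbatim}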
{\bf Swap-Prevention}: as briefly mentioned earlier, going over the full possible range of distances, 0 to $2^{32}$, is in fact not prohibitively expensive even on a (not extraordinarily strong) 16-gigabyte RAM machine, where some memory required for the queue representation will have to be swapped to the hard drive. Just to give a sense of proportions, the total amount of memory required for our implementation in this case is 19.3 gigabytes (the major memory requirements are 16 gigabytes for the queue and 3.2 gigabytes statically allocated for the graph), and about 3 gigabytes are required for the operating system and other software running on the machine, which means roughly 6 gigabytes will need to be swapped to the hard drive in the course of a single run of the program; this requires about 50 seconds. Swapping can obviously be avoided entirely by purchasing more RAM, which is quite inexpensive nowadays, bringing the total to 32 gigabytes.
While accepting the cost of swapping or purchasing more RAM may be acceptable for some cases, the problem can also be avoided by employing a mechanism we term Swap-Prevention: we can limit the memory required for the queue to small values (almost arbitrarily small ones - even smaller than the CPU cache size). Swap-Prevention works by dividing the array into equally sized pieces, which we term ``chunks''. We wish to guide the reader by example, making the mechanism much clearer (a code sketch follows the enumeration below). Suppose each chunk is ``16 bits'', or $2^{16}$ cells, long. Thus, the array is divided into $2^{16}$ chunks. The first chunk contains vertices with distances 0 to $2^{16} - 1$, the second one holds vertices with distances $2^{16}$ to $2^{17} - 1$, etc.
At any given moment, a single chunk is ``active'', meaning its $2^{16}$ cells occupy an accordingly sized array in memory. The active chunk is the chunk containing the min\_distance\_candidate cell. Non-active chunks are ``condensed'': each such chunk occupies a single linked list, containing all of the vertices present in the chunk.
\begin{enumerate}
\item Inserting (and similarly, decrease\_key) is quite easy: if the new distance of a vertex falls inside the active chunk, the vertex is inserted to the linked list of vertices with that exact distance. Otherwise, the vertex is inserted to the single linked list containing all the vertices of its new, non-active chunk (this indeed causes the temporary ``inconvenience'' of having vertices with different distances in the same linked list - an inconvenience which will be dealt with later).
\item pop\_min() is somewhat more complex: if min\_distance\_candidate points to a non-empty cell, obviously a vertex can be popped out from the linked list and returned. Otherwise, min\_distance\_candidate is advanced in the hope of finding a non-empty cell inside the current active chunk. If min\_distance\_candidate reached the end of the chunk, the next (non-empty) chunk needs to be "expanded" into the array: The queue goes over the vertices of the "condensed" single linked list, and inserts each vertex to the appropriate cell inside the array. The chunk then becomes active, min\_distance\_candidate is reset to point to the first cell of the array, and the simpler pop\_min() case can now be executed.
\end{enumerate}
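The following is a hedged C++ sketch of this mechanism (with hypothetical names - it is not our committed code \cite{Swap-Prevention-Commit}; erase() and decrease\_key() are omitted, and the sketch relies on the Dijkstra property that no vertex is ever inserted below the active chunk):
\begin{verbatim}
#include <cstdint>
#include <vector>

// Illustrative sketch of Swap-Prevention. A distance splits into a chunk
// index (high bits) and an offset (low bits). Only the active chunk is
// expanded into an array; all other chunks are condensed lists.
const uint32_t CHUNK_BITS = 16;
const uint32_t CHUNK_SIZE = 1u << CHUNK_BITS;

struct ChunkedQueue {
    static const uint32_t NIL = UINT32_MAX;
    std::vector<uint32_t> active;     // one cell per offset in active chunk
    std::vector<uint32_t> condensed;  // one unsorted list per chunk
    std::vector<uint32_t> next, prev, dist;
    uint32_t active_chunk = 0;
    uint32_t min_candidate = 0;       // offset within the active chunk

    ChunkedQueue(uint32_t num_chunks, uint32_t num_vertices)
        : active(CHUNK_SIZE, NIL), condensed(num_chunks, NIL),
          next(num_vertices, NIL), prev(num_vertices, NIL),
          dist(num_vertices, 0) {}

    void link(std::vector<uint32_t>& a, uint32_t slot, uint32_t v) {
        next[v] = a[slot]; prev[v] = NIL;
        if (a[slot] != NIL) prev[a[slot]] = v;
        a[slot] = v;
    }

    void insert(uint32_t v, uint32_t d) {
        dist[v] = d;
        if ((d >> CHUNK_BITS) == active_chunk)
            link(active, d & (CHUNK_SIZE - 1), v);  // exact cell
        else
            link(condensed, d >> CHUNK_BITS, v);    // condensed list
    }

    uint32_t pop_min() {
        for (;;) {
            while (min_candidate < CHUNK_SIZE) {    // scan active chunk
                uint32_t v = active[min_candidate];
                if (v != NIL) {
                    active[min_candidate] = next[v];
                    if (next[v] != NIL) prev[next[v]] = NIL;
                    return v;
                }
                min_candidate++;
            }
            do {                                    // next non-empty chunk
                if (++active_chunk >= condensed.size()) return NIL;
            } while (condensed[active_chunk] == NIL);
            for (uint32_t v = condensed[active_chunk]; v != NIL; ) {
                uint32_t nx = next[v];              // expand the chunk
                link(active, dist[v] & (CHUNK_SIZE - 1), v);
                v = nx;
            }
            condensed[active_chunk] = NIL;
            min_candidate = 0;
        }
    }
};
\end{verbatim}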
Clearly, the memory requirements when employing Swap-Prevention are an array whose size is the chunk size, and an additional array containing an anchor for each chunk. Thus, the memory requirements are $CHUNK\_SIZE + NUM\_OF\_CHUNKS$. Note that $CHUNK\_SIZE \cdot NUM\_OF\_CHUNKS = MAX\_INT$ must hold. Optimizing for memory consumption obviously gives the optimal chunk size as the square root of MAX\_INT, typically $2^{16}$. Note that there is no need to choose this particular value, any value for the chunk size will work (as long as there aren't divisibility considerations).
Swap-Prevention was implemented by us \cite{Swap-Prevention-Commit}, and, somewhat surprisingly, was found to actually impede performance by a factor of about 2. Our original aim was to fit the queue implementation inside the CPU cache, which is perfectly possible (our CPU, an Intel Core-i7, has 8 megabytes of cache, and choosing a $2^{16}$ chunk size keeps us under a single megabyte), but this neglects the fact that the graph representation typically occupies several gigabytes of memory, so cache misses will be abundant regardless. We welcome additional experimentation with the Swap-Prevention code. We also note that in cases where a 16-bit floating point (or even 16-bit integer) representation suffices, the queue can be made extremely small using Swap-Prevention, able to fit even the extremely tight memory budgets typically found in embedded systems.
\section{Conclusion} \label{sec:cncld}
We presented a novel queue implementation well-suited for Dijkstra's algorithm. Using this implementation, the algorithm's runtime is O($E + U$), where $U$ is the distance of the vertex farthest from the starting vertex. A prototype implementation compares favorably to Boost, a well-known, widely-used library. We released the code to make it available to the research community.
\section*{Acknowledgements}
This report was supported in part by a scholarship from the Ministry of Science and Technology of Israel.
\bibliographystyle{acm}
\section{Introduction}
Nilmanifolds, i.e. compact quotients of simply connected nilpotent Lie
groups, are known to be a rich source of exotic geometry. We are
particularly interested in pseudo-K\"ahler geometry and its deformation
theory on these spaces. We initially focus on the complex structures,
and will bring symplectic structures into the picture at the end.
It is a general principle that the deformation theories of complex and
symplectic structures are dictated by their associated differential
Gerstenhaber algebras \cite{GM} \cite{Mer2} \cite{Zhou}. The
associated cohomology theories are Dolbeault's cohomology with
coefficients in the holomorphic tangent bundle, and de Rham's
cohomology respectively. De Rham cohomology of nilmanifolds is known
to be given by invariant differential forms \cite{Nomizu} and there
are several results for Dolbeault cohomology on nilmanifolds pointing
in the same direction \cite{CF} \cite{CFP}. Therefore, in this paper
we focus on invariant objects, i.e. invariant complex structures and
invariant symplectic forms on nilpotent Lie algebras.
Analysis and classification of invariant complex structures and
pseudo-K\"ahler pairs on six-dimensional nilpotent algebras have been
in progress over the past ten years \cite{BDM} \cite{CF} \cite{CFGU}
\cite{Salamon} \cite{Ugarte}. In particular, it is known that a
complex structure can be part of a pseudo-K\"ahler pair only if it is
nilpotent \cite{CFU}.
After a preliminary presentation on the construction of differential
Gerstenhaber algebras for invariant complex and symplectic structures,
we give two key technical results, Proposition \ref{key technical} and
Proposition \ref{second technical}, describing the restrictive nature
of quasi-isomorphisms in our setting. We recall the definition of
nilpotent complex structure in Section \ref{sec:nil}. Numerical
invariants for these complex structures are identified, and used to
refine older classifications. This in particular allows one to identify
the real algebra underlying a set of complex structure equations by
evaluation of the invariants. The results of Section~\ref{sec:nil},
including the invariants of complex structure equations and the
associated underlying real algebras, are summarized in Table
\ref{tab:1}.
In Section \ref{sec:f}, we analyze the differential Gerstenhaber
algebra $\DGA({\lie g}, J)$ when a nilpotent complex structure $J$ on
a nilpotent Lie algebra $\lie g$ is given. The invariants of the
complex structure equations dictate the structure of $\DGA({\lie g},
J)$. Relying on the classification provided in Section \ref{sec:nil}
and Table \ref{tab:1}, and in terms of the same set of invariants, we
establish a relation between the Lie algebra structure of $\lie g$ and
that of $\DGA ({\lie g}, J)$. The total output of Section \ref{sec:f}
is provided in Theorem \ref{invariant theorem} and Table \ref{tab:f1}.
These results demonstrate the phenomenon of ``jumping'' of $\DGA(\lie
g, J)$ as $J$ varies through a family of nilpotent complex structures
on some fixed algebra $\lie g$. Results are given in Theorem
\ref{finding f} and Table \ref{tab:g-f1'}. With the aid of
Proposition \ref{second technical} and Theorem \ref{invariant theorem}, we
also show that each differential Gerstenhaber algebra $\DGA(\lie g,
J)$ is isomorphic to a differential Gerstenhaber algebra $\DGA(\lie h,
O)$ derived from a certain Lie algebra $\lie h$ and linear isomorphism
$O\colon\lie h\to\lie h^*$. The result is stated in Theorem \ref{iso
dga}. However, the map $O$ is not necessarily induced by a
contraction with any symplectic form. A priori, it may not even be
skew-symmetric.
Finally in Section \ref{sec:appl}, we consider the differential
Gerstenhaber algebra $\DGA({\lie h}, \Omega)$ associated to an
invariant symplectic structure $\Omega$ on a nilpotent algebra $\lie
h$. We shall explain in Section \ref{sec:dga} that $\DGA({\lie h},
\Omega)$ is essentially generated by the Lie algebra structure on
${\lie h}$. This elementary observation, along with the results
established in Section \ref{sec:f} and Table \ref{tab:f1}, allows us
to answer the following question: Which six-dimensional nilpotent
algebra $\lie g$ admits a pseudo-K\"ahler structure $(J, \Omega)$ such
that there is a quasi-isomorphism
\[
\DGA ({\lie g}, J) \longrightarrow \DGA({\lie g}, \Omega) \ ?
\]
The construction of DGAs for complex structures and symplectic
structures is well known (e.g. \cite{Zhou}). It is a key ingredient
in homological mirror symmetry. Extending the concept of mirror
symmetry, Merkulov considers the notion of \it weak \rm mirror
symmetry \cite{Mer2} \cite{Mer}. In this paper, we call a Lie
algebra ${\lie g}$ with a complex structure $J$ and a Lie algebra
${\lie h}$ with a symplectic structure $\Omega$ a ``\it weak \rm
mirror pair'' if there is a quasi-isomorphism between $\DGA ({\lie
g}, J)$ and $\DGA ({\lie h}, \Omega)$. The aforementioned question
stems from a consideration of when ``self mirror'' occurs. For
four-dimensional nilpotent algebras, the answer can be derived
from results in \cite{Poon}. For six-dimensional nilpotent algebras,
our answer is in Theorem \ref{main}.
\section{Differential Gerstenhaber Algebras}\label{sec:dga}
\subsection{Preliminaries}
\begin{definition} {\rm {\cite{Ger} \cite[Definition 7.5.1]{Mac}}} Let
$R$ be a ring with unit and let $C $ be an $R$-algebra. Let
$\mathfrak{a}=\oplus_{n\in \mathbb{Z}}\mathfrak{a}^n $ be a graded
algebra over $C$. $\mathfrak{a}$ is a \emph{Gerstenhaber algebra}
if there is an associative product $\wedge$ and a graded commutative
product $[ {-} \bullet {-} ]$ satisfying the following axioms. When
$a\in \mathfrak{a}^n$, let $|a|$ denote its degree $n$. For $a\in
\mathfrak{a}^{|a|}$, $b\in \mathfrak{a}^{|b|}$, $c\in
\mathfrak{a}^{|c|}$,
\begin{eqnarray}
a\wedge b\in \mathfrak{a}^{|a|+|b|}, && b\wedge
a=(-1)^{|a||b|}a\wedge b. \\
{[ {a} \bullet {b} ]} \in \mathfrak{a}^{|a|+|b|-1},
&& [ {a} \bullet {b} ]=-(-1)^{(|a|+1)(|b|+1)}[ {b} \bullet {a} ].
\label{commutative}
\end{eqnarray}
\begin{equation}\label{jacobi} (-1)^{(|a|+1)(|c|+1)}[ {[ {a}
\bullet {b} ]} \bullet {c} ] +(-1)^{(|b|+1)(|a|+1)}[ {[ {b} \bullet
{c} ]} \bullet {a} ] +(-1)^{(|c|+1)(|b|+1)}[ {[ {c} \bullet {a} ]}
\bullet {b} ]=0.
\end{equation}
\begin{equation} [ {a} \bullet {b\wedge c} ]=[ {a} \bullet {b} ]\wedge
c+(-1)^{(|a|+1)|b|}b\wedge [ {a} \bullet {c} ]. \label{distributive}
\end{equation}
\end{definition}
On the other hand, we have the following construction.
\begin{definition} A differential graded algebra is a graded algebra $%
\mathfrak{a}=\oplus_{n\in \mathbb{Z}}\mathfrak{a}^n$ with a graded
commutative product $\wedge$ and a differential $d$ of degree $+1$,
i.e. a map $d:\mathfrak{a}\to \mathfrak{a}$ such that
\begin{equation}
d(\mathfrak{a}^n)\subseteq \mathfrak{a}^{n+1}, \quad d\circ d=0,
\quad
d(a\wedge b)=da\wedge b+(-1)^{|a|}a\wedge db. \label{compatible
with wedge}
\end{equation}
\end{definition}
\begin{definition}
Let $\mathfrak{a}=\oplus_{n\in \mathbb{Z}}\mathfrak{a}^n$ be a
graded algebra over $C$ such that $(\mathfrak{a}, [ {-} \bullet {-}
], \wedge)$ form a Gerstenhaber algebra and $(\mathfrak{a}, \wedge,
d)$ form a differential graded algebra. If in addition
\begin{equation}
d[ {a} \bullet {b} ]=[ {da} \bullet {b} ]+(-1)^{|a|+1}[ {a} \bullet
{db} ], \label{compatible with br}
\end{equation}
for all $a$ and $b$ in $\mathfrak{a}$, then $(\mathfrak{a}, [ {-}
\bullet {-} ], \wedge, d)$ is a differential Gerstenhaber algebra
{\rm (DGA)}.
\end{definition}
For any Gerstenhaber algebra, $\mathfrak{a}^1$ with the induced
bracket is a Lie algebra. Conversely, suppose $\mathfrak{a}^1$ is a
finite dimensional Lie algebra over the complex or real numbers,
equipped with a differential compatible with the Lie bracket. Then a
straightforward induction allows one to construct a DGA structure on
the exterior algebra of $\mathfrak{a}^1$.
\begin{lemma}\label{natural extension} Let $\mathfrak{a}^1$ be a finite
dimensional Lie algebra with bracket $[ {-} \bullet {-} ]$. Let
$\mathfrak{a} $ be the exterior algebra generated by
$\mathfrak{a}^1$. Then the Lie bracket on $\mathfrak{a}^1$ uniquely
extends to a bracket on $\mathfrak{a}$ so that $(\mathfrak{a}, [ {-}
\bullet {-} ], \wedge)$ is a Gerstenhaber algebra.
If, furthermore, an operator $d:\mathfrak{a}^1\to\mathfrak{a}^2$ is
extended as in {\rm (\ref{compatible with wedge})}, then
$(\mathfrak{a}, [ {-} \bullet {-} ], \wedge, d)$ is a differential
Gerstenhaber algebra if and only if
\begin{equation}
d[ {a} \bullet {b} ]=[ {da} \bullet {b} ]+[ {a} \bullet {db} ],
\label{compatible with br one degree 1}
\end{equation}
for all $a$ and $b$ in $\mathfrak{a}^1$.
\end{lemma}
\begin{definition}
A homomorphism of differential graded Lie algebras is called a
quasi-isomorphism if the map induced on the associated cohomology
groups is a linear isomorphism.
A quasi-isomorphism of differential Gerstenhaber algebras is a
homomorphism of DGAs that descends to an isomorphism of cohomology
groups.
\end{definition}
Note that in the latter case the isomorphism is one of Gerstenhaber
algebras.
\subsection{DGA of complex structures}
Suppose $J$ is an \emph{integrable complex structure} on
$\mathfrak{g}$, i.e., $J$ is an endomorphism of $\mathfrak{g}$ such
that $J\circ J=-1$ and
\begin{equation} \label{integrable}
[ {x} \bullet {y} ]+J[ {Jx}
\bullet {y} ]+J[ {x} \bullet {Jy} ]-[ {Jx} \bullet {Jy} ]=0.
\end{equation}
Then the $\pm i$ eigenspaces $\mathfrak{g}^{(1,0)}$ and
$\mathfrak{g}^{(0,1)} $ are complex Lie subalgebras of the
complexified algebra $\mathfrak{g}_\mathbb{C}$. Let $\mathfrak{f}$ be the
exterior algebra generated by $\mathfrak{g}
^{(1,0)}\oplus\mathfrak{g}^{*(0,1)}$, i.e.
\begin{equation}
\mathfrak{f}^n:=\wedge^n(\mathfrak{g}^{(1,0)}\oplus\mathfrak{g}^{*(0,1)}),
\quad \mbox{ and } \quad \mathfrak{f}=\oplus_n\mathfrak{f}^n.
\end{equation}
The integrability condition in (\ref{integrable}) implies that
$\mathfrak{f}^1$ is closed under the \emph{Courant bracket}
\begin{equation}
[ x+\alpha \bullet y+\beta ]:=[x, y]+\iota_xd\beta-\iota_yd\alpha.
\end{equation}
A similar construction holds for the conjugate
${\overline{\mathfrak{f}}}$, generated by $
\mathfrak{g}^{(0,1)}\oplus\mathfrak{g}^{*(1,0)}. $
Recall that if $(\mathfrak{g}, [ {-} \bullet {-} ])$ is a Lie algebra,
the Chevalley-Eilenberg (C-E) differential $d$ is defined on the dual
vector space $\mathfrak{g}^*$ by the relation
\begin{equation} \label{C-E}
d\alpha (x,y):=-\alpha ([ {x} \bullet {y} ]),
\end{equation}
for $\alpha \in \mathfrak{g}^*$ and $x,y\in \mathfrak{g}$. This
operator is extended to the exterior algebra $\wedge\mathfrak{g}^*$ by
derivation. The identity $d\circ d=0$ is equivalent to the Jacobi
identity for the Lie bracket $[ {-} \bullet {-} ]$ on $\mathfrak{g}$.
It follows that $(\wedge \mathfrak{g}^*, d)$ is a differential graded
algebra.
The natural pairing on $(\lie g\oplus\lie g^*)\otimes \mathbb C$
induces a complex linear isomorphism $ (\mathfrak{f}^1)^*\cong
{\overline{\mathfrak{f}}}^1$. Therefore, the C-E differential of the
Lie algebra ${\overline{\mathfrak{f}}}^1$ is a map from
$\mathfrak{f}^1$ to $\mathfrak{f}^2$. Denote this operator by
$\overline\partial$. Similarly, we denote the C-E differential of $
\mathfrak{f}^1$ by $\partial$. It is well known that the maps
\begin{equation}
\overline\partial: \mathfrak{g}^{*(0,1)}\to
\wedge^2\mathfrak{g}^{*(0,1)}, \quad \mbox{ and } \quad
\overline\partial: \mathfrak{g}^{(1,0)}\to
\mathfrak{g}^{(1,0)}\otimes \mathfrak{g}^{*(0,1)}
\end{equation}
are respectively given by
\begin{equation}
\overline\partial\overline\omega=(d\overline\omega)^{0,2}, \quad
(\overline\partial T){\overline W}= [ {{\overline W}} \bullet
{T}]^{1,0}
\end{equation}
for any ${\overline\omega}$ in $\mathfrak{g}^{*(0,1)}$, $T\in
\mathfrak{g}^{1,0}$ and ${\overline W}\in \mathfrak{g}^{0,1}$.
If $\{T_\ell: 1\leq\ell\leq n\}$ forms a basis for $
\mathfrak{g}^{1,0}$ and $\{\omega^\ell:1\leq \ell\leq n\}$ the
dual basis in $\mathfrak{g}^{*(1,0)}$, then we have
\begin{equation}
\overline\partial{\overline\omega}^\ell=(d{\overline\omega}^\ell)^{0,2},
\quad \overline\partial T_j=\sum_\ell\bom{\ell}\wedge [ {{\overline
T}_\ell} \bullet T_j]^{1,0}.
\end{equation}
Based on Lemma \ref{natural extension}, it is an elementary
computation to verify that the quadruples $(\mathfrak{f}, [ {-}
\bullet {-} ], \wedge, \overline\partial)$ and
$(\overline{\mathfrak{f}}, [ {-} \bullet {-} ], \wedge, \partial)$
are differential Gerstenhaber algebras.
For a given Lie algebra $\lie g$ and a choice of invariant complex
structure $J$, we denote the differential Gerstenhaber algebra
$(\mathfrak{f}, [ {-} \bullet {-} ], \wedge, \overline\partial)$ by
$\DGA ({\lie g}, J)$.
The following observation, relying on the nature of
$\overline\partial$ and $\partial$, will be helpful, although
apparently obvious.
\begin{lemma}\label{duality} Given a complex linear identification
${\overline{\lie f}}^1\cong ({\lie f}^1)^*$, the Lie algebras
$({\lie f}^1, [-\bullet -])$, $({\overline{\lie f}}^1, [-\bullet
-])$ and the graded differential algebras $({\lie f}, \overline\partial)$,
$({\overline{\lie f}},
\partial)$ determine each other.
\end{lemma}
\subsection{DGA of symplectic structures}\label{symp}
Let $\mathfrak{h}$ be a Lie algebra over $\mathbb{R}$. The exterior algebra
of the dual $\mathfrak{h}^*$ with the C-E differential $d$ is a
differential graded algebra.
Suppose that $O:\mathfrak{h}\to\mathfrak{h}^*$ is a real linear isomorphism.
Define a bracket $[ {-} \bullet {-} ]_O$ on $\mathfrak{h}^*$ by
\begin{equation}
[ {\alpha} \bullet {\beta} ]_O:=O[ {O^{-1}\alpha} \bullet
{O^{-1}\beta} ].
\end{equation}
It is a tautology that $(\mathfrak{h}^*, [ {-} \bullet {-} ]_O)$
becomes a Lie algebra, with the map $O$ understood as a Lie algebra
homomorphism.
\begin{definition}
A linear map $O:\mathfrak{h}\to\mathfrak{h}^*$ from a Lie algebra to
its dual is said to be compatible with the C-E differential if for
any $\alpha, \beta$ in $\mathfrak{h}^*$,
\begin{equation}
d[ {\alpha} \bullet {\beta} ]_O=[ {d\alpha} \bullet {\beta} ]_O+[
{\alpha} \bullet {d\beta} ]_O.
\end{equation}
\end{definition}
Due to Lemma \ref{natural extension}, the next observation is a
matter of definitions.
\begin{lemma}\label{construction}
Suppose $\mathfrak{h}$ is a Lie algebra, and take an invertible element $O$ in
$\Hom(\mathfrak{h}, \mathfrak{h}^*)$ compatible with the C-E
differential. Then $(\wedge^\bullet\mathfrak{h}^*, [ {-} \bullet {-}
]_O, \wedge, d)$ is a differential Gerstenhaber algebra.
\end{lemma}
When the algebra $\lie h$ has a symplectic form $\Omega$, the
contraction with $\Omega$ defines an $O$ as in the above lemma. In
such case, the differential Gerstenhaber algebra $(\wedge^\bullet{\lie
h}^*, [-\bullet -]_\Omega, \wedge, d)$ {\it after complexification}
is denoted by $\DGA ({\lie h}, \Omega)$.
\section{Complex Structures on Nilpotent Algebras}\label{sec:nil}
\subsection{General Theory}
Let $\lie g$ be a Lie algebra over $\mathbb R$ or $\mathbb C$. The
\emph{lower central series} of $\lie g$ is the sequence of
subalgebras $\lie g_{p+1} \subset \lie g_p\subset\lie g$ given by
\[
\lie g_0=\lie g, \quad \lie g_p = [\lie g_{p-1}\bullet\lie g]. \] A Lie
algebra $\lie g$ is $s$-step nilpotent if $s$ is the smallest
integer such that $\lie g_s=\{0\}$. Defining $V_p$ to be the
annihilator of $\lie g_p$ one has \emph{the dual sequence} $\lie
g^*\supset V_{p}\supset V_{p+1}$. The dual sequence may also be
defined recursively as
\[ V_0 = \{0\}, \quad V_p = \{
\alpha\in\lie{g}^*:d\alpha\in\Lambda^2 V_{p-1}\}. \]
We note that if the subscript $\mathbb{C}$ denotes complexification of vector
spaces and Lie algebras, then $V_p(\lie g)_\mathbb{C} = V_p(\lie g_\mathbb{C})$.
Write $n_p=\dim V_p$. A \emph{Malcev} basis for $\lie g^*$ is a basis
chosen such that $e^1,\dots,e^{n_1}$ is a basis for $V_1$,
supplemented with $e^{n_1+1},\dots,e^{n_2}$ to form a basis for $V_2$,
et cetera. For such a basis one has $de^p\in\Lambda^2\langle
e^1,\dots, e^{p-1}\rangle$. The short-hand notation
$12:=e^{12}:=e^1\wedge e^2$ is convenient. Using this one may
identify a Lie algebra by listing its structure equations with respect
to a Malcev basis as $(de^1,\dots,de^n)$. For instance we may write
$\lie g=(0,0,a{12})$ to mean the Lie algebra $\lie g$ generated by the
relations $de^1=0=de^2$, $de^3=a e^1\wedge e^2$. This has the single
non-trivial bracket $[e_1\bullet e_2]=-ae_3$.
\begin{lemma}\label{quasi}
Suppose $\lie h$ and $\lie k$ are Lie algebras, $\lie h$ is
nilpotent and $\phi\colon(\wedge \lie h^*,d)\to(\wedge\lie k^*,d)$ is a
quasi-isomorphism of the associated differential graded algebras.
Then $\phi$ is an isomorphism.
\end{lemma}
\noindent{\it Proof: }
If $\phi\colon\lie h^*\to\lie k^*$ is a homomorphism of the
associated differential graded algebras then $\phi(V_p(\lie
h))\subset V_p(\lie k)$. When $\phi$ is furthermore a
quasi-isomorphism, the restriction $V_1(\lie h)\to V_1(\lie k)$
is an isomorphism of vector spaces. Suppose that $\phi$ restricted
to $V_{p-1}(\lie h)$ is an isomorphism onto $V_{p-1}(\lie k)$. Then
clearly the induced map $\Lambda^2V_{p-1}(\lie
h)\to\Lambda^2V_{p-1}(\lie k)$ is also an isomorphism. Suppose that
$a\in V_p(\lie h)$ satisfies $\phi(a)=0$. Then
$da\in\Lambda^2V_{p-1}(\lie h)$ satisfies $\phi(da)=0$. But then
$da=0$, so $a\in V_1(\lie h)$, and $\phi(a)=0$ actually implies $a=0$.
By induction, $\phi$ is therefore injective on every $V_p(\lie h)$,
and hence, since $\lie h$ is nilpotent, on all of $\lie h^*$.
\ q.~e.~d. \vspace{0.2in}
\begin{proposition}\label{key technical}
Suppose that $\lie g$ and $\lie h$ are finite dimensional nilpotent
Lie algebras, $J$ is an integrable complex structure on $\lie g$ and
$O\colon\lie h\to\lie h^*$ is a linear map compatible with the C-E
differential on $\lie h$. Then a homomorphism $\phi$ from $\DGA
({\lie g}, J)$ to $\DGA ({\lie h}, O)$ is a quasi-isomorphism if and
only if it is an isomorphism.
\end{proposition}
\noindent{\it Proof: } As a quasi-isomorphism of DGAs, $\phi$ is a
quasi-isomorphism of the underlying exterior differential algebras:
\[ \phi: (\wedge^*{\lie f}^1, \wedge, \overline\partial)\to (\wedge^*{\lie h}_\mathbb{C}^*,
\wedge, d).
\]
The last lemma shows that it has to be an isomorphism. \ q.~e.~d. \vspace{0.2in}
Two special types of complex DGAs were introduced above: Those coming
from an integrable complex structure $J$ on a real algebra $\lie g$,
denoted $\DGA(\lie g,J)$ and those derived from a linear
identification $O\colon\lie h\to\lie h^*$ compatible with the
differential. For the latter we write $\DGA(\lie h,O)$. A problem
related to ``weak mirror symmetry'' is: given an algebra $\lie g$ and
a complex structure $J$, when does an $\lie h$ and $O$ exist so that
$\DGA(\lie g,J)$ is quasi-isomorphic to $\DGA(\lie h,O)$? For
nilpotent algebras we shall see below that this always is the case.
Given $J$ on $\lie g$ write $\lie h$ for the Lie algebra ${\lie
f}^1$ consisting of degree one elements in $\DGA(\lie g,J)$. As
$\lie h$ is complex we may speak of the complex conjugate algebra
consisting of the conjugate vector space $\bar{\lie h}$. Let
$c\colon\lie h\to\bar{\lie h}$ be the canonical map such that
$c(ax)=\bar a x$ for complex $a$ and $x$ in $\lie h$. Then
$[x,y]_c:=c[c(x),c(y)]$ equips $\bar{\lie h}$ with a Lie bracket. We
say that $\lie h$ is \emph{self-conjugate} if a complex linear
isomorphism $\lie h\to \bar{\lie h}$ exists.
\begin{proposition}\label{second technical}
Let $\lie g$ be a Lie algebra with complex structure $J$. Let
$\DGA(\lie g, J)$ be its differential Gerstenhaber algebra. Write
$\lie h$ for the Lie algebra $\lie f^1$ and suppose that $\lie h$ is
self-conjugate. Then there exists a complex linear isomorphism
$O\colon\lie h\to{\lie h}^*$ compatible with the C-E
differential $d$ on $\lie h$ so that $\DGA(\lie h,O)$ is isomorphic
to $\DGA(\lie g, J)$.
\end{proposition}
\noindent{\it Proof: } We construct the map $O$. Let $\phi \colon \lie h \to \lie
f^1$ be the identification of $\lie h$ as the Lie algebra given by
$\lie f^1$. Composing on both sides with complex conjugation gives
the isomorphism $\bar\phi := c\circ\phi\circ c \colon {\bar{\lie h}}
\to {\bar{\lie f}^1}$ of Lie algebras. Taking the identifications
${\bar{\lie f}^1}\cong(\lie f^1)^*$ and $\bar{\lie h}\cong\lie h$ into
account gives the isomorphism
\begin{equation}
\psi: \lie h\cong{\bar{\lie
h}} \stackrel{{\bar\phi}}{\rightarrow} {\bar{\lie
f}^1}\cong(\lie f^1)^*
\end{equation}
of Lie algebras. The dual map $\psi^*$ is now an isomorphism of
exterior differential algebras
\begin{equation}\label{natural1}
\psi^*: \wedge^*{\lie f}^1 \to \wedge^*{\lie h}^*, \quad \psi^*\circ \overline\partial
=d \circ \psi^*.
\end{equation}
We claim that the following composition
\begin{equation}
O: \lie h \stackrel{\phi}{\rightarrow} {\lie f}^1
\stackrel{{\psi}^*}{\rightarrow} \lie h^*
\end{equation}
is compatible with $d$. By Lemma~\ref{natural extension} this is the
case if equation \eqref{compatible with br one degree 1} holds for the
bracket $\sbr{\alpha}{\beta}_O:=O\sbr{O^{-1}\alpha}{O^{-1}\beta}$. But
\begin{equation}\label{natural2}
\sbr{\alpha}{\beta}_O= \psi^*\circ \phi
\sbr{\phi^{-1}\circ
(\psi^*)^{-1}\alpha}{\phi^{-1}\circ
(\psi^*)^{-1} \beta}= \psi^*
\sbr{ (\psi^*)^{-1}\alpha}{ (\psi^*)^{-1} \beta}.
\end{equation}
Hence,
\begin{eqnarray*}
d\sbr{\alpha}{\beta}_O&=&d(\psi^*
\sbr{ (\psi^*)^{-1}\alpha}{ (\psi^*)^{-1} \beta} ) \\
&=&\psi^* (\overline\partial \sbr{ (\psi^*)^{-1}\alpha}{ (\psi^*)^{-1} \beta} )\\
&=&\psi^* (\sbr{ \overline\partial (\psi^*)^{-1}\alpha}{ (\psi^*)^{-1} \beta} )+\psi^* ( \sbr{ (\psi^*)^{-1}\alpha}{ \overline\partial (\psi^*)^{-1} \beta} )\\
&=&\psi^* (\sbr{ (\psi^*)^{-1}d\alpha}{ (\psi^*)^{-1} \beta} )+\psi^* ( \sbr{ (\psi^*)^{-1}\alpha}{ (\psi^*)^{-1} d\beta} )\\
&=& \sbr{ d\alpha}{ \beta}_O+\sbr{\alpha}{d \beta}_O.
\end{eqnarray*}
By Lemma \ref{construction}, $\DGA(\lie h, O):=(\wedge^*{\lie h}^*,
\sbr{-}{-}_O, \wedge, d)$ forms a differential Gerstenhaber algebra.
It is clear from (\ref{natural1}) and (\ref{natural2}) that the map
$\psi^*$ yields an isomorphism from $\DGA(\lie g, J)$ to $\DGA(\lie h,
O)$. \ q.~e.~d. \vspace{0.2in}
It should be noted that the map $O$ is not necessarily skew-symmetric,
nor is it automatically closed when it is skew. In particular the DGA
structure obtained above does not necessarily arise from contraction
with a symplectic structure. Also note that the condition ${\bar{\lie
k}}\cong\lie k$ is satisfied precisely when $\lie k$ is the
complexification of some real algebra. Whilst in the context of
six-dimensional nilpotent algebras this is always the case, there
exist non-isomorphic real algebras having the same complexification.
\subsection{Nilpotent complex structures}
An almost complex structure $J$ on $\lie g$ may be
given by a choice of basis $\omega=\{ \omega^{k}, 1\leq k \leq m
\}$, $2m=\dim_\mathbb{R}\lie g$, of the space of $(1,0)$-forms in the
complexified dual $\lie g_\mathbb{C}^*$. Such a basis may equivalently be
given as a basis $e=(e^1,\dots,e^{2m})$ of $\lie g^*$ so that
$e^2=Je^1$, or $\omega^1=e^1+ie^2$, and so on. When $e$ and $\omega$
are related in this way we will write $e=e(\omega)$ or
$\omega=\omega(e)$. The almost complex structure is then
\emph{integrable} or simply a \emph{complex structure} if the ideal
in $\Lambda^*\lie g^*_\mathbb{C}$ generated by the $(1,0)$-forms is closed
under exterior differentiation. For a nilpotent Lie algebra, an
almost complex structure is integrable if there exists a basis
$(\om{j})$ of $(1,0)$-forms so that $d\om1=0$ and for $j>1$,
$d\om{j}$ lies in the ideal generated by $\om1,\dots,\om{j-1}$.
Equivalently,
\begin{equation}
0 = d(\om1\wedge\om2\wedge \dots\wedge
\om{p}),\quad p=1,\dots,m.
\end{equation}
Let the set of such bases be denoted $\Omega(\lie g,J)$.
On nilpotent Lie algebras certain complex structures are
distinguished. Among these are complex structures such that
$[X,JY]=J[X,Y]$. Equivalently,
$d\om{p}\in\Lambda^2\langle\om1,\dots,\om{p-1}\rangle$. These are the
complex structures for which $\lie g$ is the real algebra underlying a
complex Lie algebra. At the opposite end to these are the \emph{abelian
complex structures} which satisfy $[JX,JY]=[X,Y]$ \cite{BDM}.
Equivalently, the $+i$-eigenspace of $J$ in $\lie g_\mathbb{C}$ is an abelian
subalgebra of $\lie g_\mathbb{C}$. In particular abelian $J$s are always
integrable. In terms of $(1,0)$-forms a complex structure is abelian
if and only if there exists an $\omega$ in $\Omega(\lie g,J)$ such that
$d\om{j}$ is in the intersection of the two ideals generated by
$\om1,\dots,\om{j-1}$ and $\bom1,\dots,\bom{j-1}$, respectively.
The concept of abelian complex structures may be generalized to that
of \it nilpotent \rm complex structures \cite{CFGU}. A nilpotent
almost complex structure may be defined as an almost complex
structure with a basis of $(1,0)$-forms such that
\begin{equation}
d\om{p}\in\Lambda^2\langle\om1,\dots,\om{p-1},\bom1,\dots,\bom{p-1}\rangle.
\end{equation}
For a given algebra $\lie g$ and nilpotent almost complex structure
$J$ we write $P(\lie g,J)$ for the set of such bases. Nilpotent
almost complex structures are not necessarily integrable. If a
nilpotent $J$ is integrable, then $P(\lie g,J)\subset\Omega(\lie
g,J)$. A nilpotent complex structure is abelian if and only if
\begin{equation}
0 = d(\bom1\wedge\bom2\wedge \dots\wedge\bom{p-1}\wedge
\om{p}), \quad p=1,\dots,m.
\end{equation}
It is apparent that abelian complex structures are nilpotent.
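To illustrate these notions on a standard example, consider the algebra
$\lie h_5=(0,0,0,0,13+42,14+23)$, which underlies the complex Heisenberg
algebra. Setting $\om1=e^1+ie^2$, $\om2=e^3+ie^4$ and $\om3=e^5+ie^6$,
one computes
\begin{equation*}
d\om1=0, \qquad d\om2=0, \qquad d\om3=\om1\wedge\om2.
\end{equation*}
This complex structure is therefore nilpotent and satisfies
$d\om{p}\in\Lambda^2\langle\om1,\dots,\om{p-1}\rangle$, but it is not
abelian, since
$d(\bom1\wedge\bom2\wedge\om3)=\bom1\wedge\bom2\wedge\om1\wedge\om2\neq 0$.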
In the subsequent presentation, we suppress the wedge product sign.
\subsection{Six-dimensional algebras}
Some of the results of this section may be regarded as a re-organization of
past results in terms of invariants relevant to our further
analysis. Our key references are \cite{Salamon} and \cite{Ugarte}.
To name specific isomorphism classes of six-dimensional nilpotent
Lie algebras, we use the notation $\lie h_n$ as given in
\cite{CFGU}.
Suppose then $\dim_\mathbb{R}\lie g=6$. Let $J$ be a nilpotent almost complex
structure on $\lie g$. The \emph{structure equations} for an
\emph{integrable} element $\omega$ in $P(\lie g,J)$ are \cite{CFGU}
\begin{gather}\label{eq:6'}
\begin{cases}
d\om1 = 0,\\
d\om2 = \epsilon\om1\bom{1},\\
d\om3 = \rho\om1\om2 + A\om1\bom1 + B\om1\bom2 +
C\om2\bom1 + D\om{2}\bom2.
\end{cases}
\end{gather}
for complex numbers $\epsilon,\rho,A,B,C,D$. Note that $dd\om3=0$
forces $D\epsilon=0$. Moreover, if $\epsilon$ is not zero, $\om3$
may be replaced with $\epsilon\om3-A\om2$ so after re-scaling the
$\om{j}$ one obtains the \emph{reduced structure
equations} \cite{Ugarte}
\begin{gather}\label{eq:6}
\begin{cases}
d\om1 = 0,\\
d\om2 = \epsilon\om1\bom{1},\\
d\om3 = \rho\om1\om2 + (1-\epsilon)A\om1\bom1 + B\om1\bom2 +
C\om2\bom1 + (1-\epsilon)D\om{2}\bom2,
\end{cases}
\end{gather}
where $\epsilon$ and $\rho$ are either $0$ or $1$ and $A,B,C,D$ are
complex numbers.
To avoid ambiguity we rule out the case $\epsilon\not=0$, $d\om3=0$
for any form of the structure equations, as this is equivalent to
$\epsilon=0, ~d\om3=\om{1}\bom1$.
Given structure equations~\eqref{eq:6'} for a nilpotent complex
structure, we will calculate $\DGA(\lie g,J)$ in
Section~\ref{sec:isom-class-lie}. However, if we take~\eqref{eq:6'}
as a starting point, it is not obvious how to recognize the real algebra
$\lie g$ which underlies the complex structure. We shall first
provide a way to do this that will fit the purpose of this paper.
For this task, we identify invariants of $P({\lie g}, J)$.
The most immediate invariants are the dimensions of the vector spaces
in the dual sequence $V_0\subset V_1\subset\dots$ for $\lie g_\mathbb{C}$. As
the inclusions
\begin{equation}
\label{eq:34}
V_1\supset\langle\om1,\bom1,\om2+\bom2\rangle, \qquad
V_2\supset\langle\om1,\bom1,\om2,\bom2\rangle
\end{equation}
always hold, $V_3=\lie g_\mathbb{C}^*$ for any $6$-dimensional nilpotent
algebra with nilpotent complex structure. Define
\begin{equation}
n=(n_1,n_2)=(\dim V_1, \dim V_2). \end{equation} We now collect
several facts on these particular invariants.
\begin{lemma}\label{dependence} Given a nilpotent complex structure $J$ on a six-dimensional nilpotent algebra
$\lie g$, the following hold:
\begin{enumerate}[{\rm(a)}]
\item $3\leq n_1\leq 6$, $4\leq n_2\leq 6$ and $n_1\leq n_2$.
\item There exists $\omega$ in $P({\lie g}, J)$ such that $\epsilon=0$
or $\epsilon=1$.
\item If $\epsilon=1$, there exists $\omega$ in $P({\lie g}, J)$ such
that $A=D=0$.
\item If $\epsilon=0$, then $n_2=6$.\label{item:1}
\item $\rho=0$ if and only if $J$ is an abelian complex structure.
\item Let $d$ be the dimension of the complex linear span of $d\om3$
and $d\bom3$. Then $d\leq 1$ if and only if
\begin{equation}
\label{eq:32}
\rho=0,\qquad |B|^2=|C|^2,\qquad A\bar D=\bar A D,\qquad A\bar B=\bar A
C,\qquad D\bar B = \bar D C.
\end{equation}\label{item:14}
\end{enumerate}
\end{lemma}
Based on the above information, we re-organize some of the data
from \cite[Theorem 2.9]{Ugarte} and \cite[Table A.1]{Salamon}.
\begin{lemma}\label{lem:3}
Suppose a complex structure on $\lie
g$ is given with structure constants as in~\eqref{eq:6'} with
$\epsilon\in\{0,1\}$.
\begin{enumerate}[{\rm(a)}]
\item $n=(6,6)$ if and only if $\lie g\cong\lie h_1=(0,0,0,0,0,0)$.
\item $n=(5,6)$ if and only if $\epsilon=0$ and $d=1$. In this case,
\begin{gather*}
\lie g\cong
\begin{cases}
{\lie h_{8}}=(0,0,0,0,0,12),\\
\lie h_{3}=(0,0,0,0,0,12+34).
\end{cases}
\end{gather*}
\item If $n=(4,6)$, then $\epsilon=0$ and $d=2$. The Lie algebra is
\begin{gather*}
\lie g\cong
\begin{cases}
\lie h_{6}=(0,0,0,0,12,13),\\
\lie h_{2}=(0,0,0,0,12,34),\\
\lie h_{4}=(0,0,0,0,12,13+42),\\
\lie h_{5}=(0,0,0,0,13+42,14+23).
\end{cases}
\end{gather*} \label{item:d}
\item If $n=(3,6)$, then $\epsilon=1, \rho\neq 0$, and there exists an element $\sigma$ in $P({\lie g}, J)$
such that $d\te3 =
\te1(\te2+\bte2)$. Moreover, $\lie g\cong\lie
h_7=(0,0,0,12,13,23)$.
\label{item:2}
\item If $n=(4,5)$, there exists $\sigma$ in $P({\lie g}, J)$ such that $d\te3 =
\te1\bte2 + \te2\bte1$. The structure equations for $e(\sigma)$
are $(0,0,0,12,0,14-23)$ and so $\lie g\cong\lie h_{9}=(0,0,0,0,12,14+25)$. \label{item:3}
\item \label{item:4} If $n=(3,5)$, then there exists $\sigma$ in $P({\lie
g}, J)$ such that $d\te3 = (B-\bar C)\te1\te2+B\te1\bte2 +
C\te2\bte1$. For all such $\sigma$, $(B-\bar C)$ is non-zero.
Moreover,
\begin{gather*}
\lie g\cong
\begin{cases}
{\lie h_{10}}=(0,0,0,12,13,14),\\
{\lie h_{12}}=(0,0,0,12,13,24),\\
{\lie h_{11}}=(0,0,0,12,13,14+23).
\end{cases}
\end{gather*}
\item If $n=(3,4)$, then
\begin{gather*}
\lie g\cong
\begin{cases}
\lie h_{16}=(0,0,0,12,14,24),\\
\lie h_{13}=(0,0,0,12,13+14,24),\\
\lie h_{14}=(0,0,0,12,14,13+42),\\
\lie h_{15}=(0,0,0,12,13+42,14+23).
\end{cases}
\end{gather*}
\end{enumerate}
\end{lemma}
\noindent{\it Proof: }
The statements for $n=(6,6)$ and $(5,6)$
are elementary.
When $n=(4,6)$ the cases listed in (\ref{item:d}) are the only
possibilities given by the classification of \cite{Ugarte}.
When $n=(3,6)$, then $V_1=\langle\om1,\bom1,\om2+\bom2\rangle$ and
$d\om3\in\Lambda^2V_1$. It follows that $d\om3=\rho\om1(\om2+\bom2)$
and so we have~\eqref{item:2}.
If $n_2=5$, then $\epsilon=1$, and we may take $A=D=0$ as noted in
the previous lemma. By~\eqref{eq:34}, a complex number $u\not=0$
exists so that $ud\om3+\bar ud\bom3\in\Lambda^2 V_1$.
If in addition $n_1=3$, then $V_1=\langle \om1, \bom1,
\om2+\bom2\rangle$. Taking
$d\om3=\rho\om1\om2+B\om1\bom2+C\om2\bom1$ gives $u\rho=uB-\bar
u\bar C$. Now setting $\te1=\om1,~\te2=\om2$ and $\te3=u\om3$ puts
the structure equations in the form (\ref{item:4}). Note that if
$B=\bar C$ in (\ref{item:4}) then $d\te3$ and $d\bte3$ are linearly
dependent and so $n_1=4$.
If, on the other hand, $n_1=4$ then $V_1=\langle \om1, \bom1,
\om2+\bom2, \om3+\lambda\bom3\rangle$ for some $\lambda$. Then
$\rho=0$, $B=\lambda {\overline C}$ and $C=\lambda {\overline B}$,
since $d\om3+\lambda d\bom3=0$. In particular,
$d\om3=B\om1\bom2+\lambda {\overline B}\om2\bom1$. This yields case
(\ref{item:3}).
The remaining case is $n=(3,4)$. Since this is the minimum
possible combination for the invariant $n$, by exclusion all remaining
nilpotent complex structures found in \cite{Salamon} and
\cite{Ugarte} are covered in this case. \ q.~e.~d. \vspace{0.2in}
\begin{corollary}\label{cor:1}
Let $J$ be a nilpotent complex structure on a nilpotent Lie algebra
$\lie g$. Then the complex structure is abelian if
$n=(6,6), (5,6), (4,5)$. It is not abelian if $n_1=3$ and $n_2>4$.
\end{corollary}
\subsection{More invariants of nilpotent complex structures}
Let $\omega$ be an element of $P(\lie g,J)$ and suppose that its
structure equations are~\eqref{eq:6'}. Let $\sigma$ be another element in
$P(\lie g,J)$. Viewing $\sigma$ and $\omega$ as row vectors, then
$\sigma=(\te1,\te2, \te3)$ and $\omega=(\om1,\om2,\om3)$ are related
by a matrix: $\te{j}=\sigma^j_k\om{k}$. This must be of the form
\begin{gather}\label{eq:29}
\sigma(\omega): =
\begin{pmatrix}
\sigma^1_1&\sigma^2_1&\sigma^3_1\\
\sigma^1_2&\sigma^2_2&\sigma^3_2\\
0&0&\sigma^3_3
\end{pmatrix},
\end{gather}
with $\epsilon\sigma^1_2=0$. So when $\epsilon\not=0$ the matrix
$\sigma(\omega)$ is upper triangular. Write $\Delta(\sigma,\omega)$
for the determinant of the transformation $\sigma(\omega)$, so that
$\te1\te2\te3 = \Delta(\sigma,\omega)\om1\om2\om3$, and
$\Delta(\sigma,\omega)^{-1} = \Delta(\omega,\sigma)$. Define
$\Delta'(\sigma,\omega)$ by $\te1\te2 =
\Delta'(\sigma,\omega)\om1\om2$ so that $\Delta(\sigma,\omega) =
\sigma^3_3\Delta'(\sigma,\omega)$. The space $\cal A$ of matrices as
in~\eqref{eq:29} may be considered the automorphism group of the
nilpotent complex structure, and $P(\lie g,J)$ is the orbit of
$\omega$ under the multiplication of elements in $\cal A$.
Consider the two functions $\Delta_1\colon P(\lie g,J)\to\mathbb C$,
$\Delta_2\colon P(\lie g,J)\to\mathbb R$ defined respectively by
\begin{gather}
d\te3\wedge d\te3 = 2\Delta_1(\sigma)\te1\bte1\te2\bte2,\\
d\te3\wedge d\bte3 = 2\Delta_2(\sigma)\te1\bte1\te2\bte2.
\end{gather}
In terms of the structure constants for $\omega$,
\begin{align}
\Delta_1(\omega) &= AD - BC,\label{eq:35}\\
\Delta_2(\omega) &= \tfrac12\left[{|B|}^2+{|C|}^2-A\bar D-\bar A
D
-{|\rho |}^2\right]\label{eq:36}.
\end{align}
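For the reader's convenience we sketch how~\eqref{eq:35}
and~\eqref{eq:36} follow from the structure equations~\eqref{eq:6'};
this is a routine verification. In $d\om3\wedge d\om3$ the summand
$\rho\om1\om2$ wedges to zero against every other summand, and the
only products reaching $\om1\bom1\om2\bom2$ come from the pairs
$(A\om1\bom1,\,D\om2\bom2)$ and $(B\om1\bom2,\,C\om2\bom1)$, so that
\begin{equation*}
d\om3\wedge d\om3 = 2(AD-BC)\,\om1\bom1\om2\bom2.
\end{equation*}
Similarly, collecting the coefficient of $\om1\bom1\om2\bom2$ in
$d\om3\wedge d\bom3$ gives
\begin{equation*}
d\om3\wedge d\bom3 = \left({|B|}^2+{|C|}^2-A\bar D-\bar A
D-{|\rho|}^2\right)\om1\bom1\om2\bom2,
\end{equation*}
in agreement with the stated formulas.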
If $\sigma=\sigma(\omega)$ then
\begin{align*}
d\te3\wedge d\te3 &= (\sigma^3_3)^2\,d\om3\wedge d\om3 = 2(\sigma^3_3)^2\Delta_1(\omega)\om1\bom1\om2\bom2 \\
&= 2(\sigma^3_3)^2
\Delta_1(\omega){|\Delta'(\omega,\sigma)|}^2 \te1\bte1\te2\bte2.
\end{align*}
Therefore
\begin{gather}
\Delta_1(\sigma) =
\Delta_1(\omega){|\Delta'(\omega,\sigma)|}^2(\sigma^3_3)^2,
\intertext{and similarly}
\Delta_2(\sigma) =
\Delta_2(\omega) {|\Delta'(\omega,\sigma)|}^2 {|\sigma^3_3|}^2.
\end{gather}
By choosing $\sigma$ appropriately we may assume that $\Delta_1$ is
either $0$ or $1$. We observe that if $\Delta_1$ is non-zero in some
basis then it is non-zero in every basis. In this situation
$\Delta_2/{|\Delta_1|}$ is invariant under transformations of the
form~\eqref{eq:29}. Note that $\Delta^2_2-{|\Delta_1|}^2$ is scaled
by a positive constant by an automorphism, so the sign of $\Delta^2_2
- {|\Delta_1|}^2$ is another invariant. The significance of this can
be seen as follows. Pick $\omega\in P$ and let $e=e(\omega)$ be the
corresponding real basis. Then $d\om3\wedge d\om3 = -8\Delta_1
e^{1234}$ and $d\om3\wedge d\bom3 = -8\Delta_2 e^{1234}$, whence
\begin{gather*}
de^5\wedge de^5 =
-4\left(\Delta_2 + \re(\Delta_1) \right)e^{1234},\\
de^6\wedge de^6 =
- 4\left(\Delta_2 -\re(\Delta_1) \right)e^{1234},\\
de^5\wedge de^6 = -4\im(\Delta_1)e^{1234}.
\end{gather*}
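These identities follow by separating real and imaginary parts:
writing $\om3=e^5+ie^6$ (we assume here that the real basis
$e(\omega)$ is related to $\omega$ in this way, as in the
normalizations used later in the paper), one finds
\begin{gather*}
d\om3\wedge d\om3 = de^5\wedge de^5 - de^6\wedge de^6 +
2i\,de^5\wedge de^6,\\
d\om3\wedge d\bom3 = de^5\wedge de^5 + de^6\wedge de^6,
\end{gather*}
and comparing with $d\om3\wedge d\om3 = -8\Delta_1 e^{1234}$ and
$d\om3\wedge d\bom3 = -8\Delta_2 e^{1234}$ yields the three displayed
equations.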
The numbers $\Delta_2 \pm \re(\Delta_1)$ determine whether or not the
two-forms $de^5$ and $de^6$ are \emph{simple}. (A two-form
$\alpha$ is simple if and only if $\alpha\wedge\alpha=0$.) The equation
\begin{gather}
\label{eq:28}
(s de^5 - t de^6)\wedge(s de^5 - t de^6)=0
\end{gather}
is equivalent to the second order homogeneous equation
\[
(\Delta_2 +\re(\Delta_1))s^2
- 2\im(\Delta_1) st + (\Delta_2 -\re(\Delta_1)) t^2 = 0.
\]
As the discriminant of this equation is $ {|\Delta_1
|}^2-\Delta_2^2$, it has non-trivial real solutions if and only if
${|\Delta_1 |}^2-\Delta_2^2\ge 0$.
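Explicitly, when $\Delta_2+\re(\Delta_1)\neq0$ the solutions are
given by the quadratic formula,
\begin{equation*}
\frac{s}{t} =
\frac{\im(\Delta_1)\pm\sqrt{{|\Delta_1|}^2-\Delta_2^2}}{\Delta_2+\re(\Delta_1)},
\end{equation*}
while if $\Delta_2+\re(\Delta_1)=0$ the pair $(s,t)=(1,0)$ is always a
non-trivial solution; we record this only for later convenience.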
If $d\om3$ and $d\bom3$ are linearly independent, a non-trivial
solution $(s,t)$ to~\eqref{eq:28} exists precisely when
$s\,de^5-t\,de^6$ is simple.
When ${|\Delta_1 |}^2-\Delta_2^2=0$ there is precisely one such
non-trivial solution, when ${|\Delta_1|}^2-\Delta_2^2>0$ there are
two. When $d\om3$ and $d\bom3$ are linearly dependent it is easy to
see from equations~\eqref{eq:32}, \eqref{eq:35} and \eqref{eq:36}
that ${|\Delta_1 |}^2=\Delta_2^2$.
\subsection{Identification of underlying real algebras}
Given the invariants of the last section, we now have the means to filter the isomorphism classes of $\lie g$ for
a given set of structure constants $\epsilon,\rho,A,B,C,D$ of a
nilpotent complex structure. As we determine the underlying real
algebras, we also identify all the invariants in the complex
structure equations in the next few paragraphs.
\begin{lemma}\label{lem:4} The following
statements are equivalent.
\begin{enumerate}[$(1)$]
\item For every nilpotent complex structure $J$ on $\lie g$ and
every $\omega$ in $P(\lie g,J)$, the condition $\Delta_2(\omega) = 0 = \Delta_1(\omega)$
holds. \label{item:5}
\item There exists a nilpotent $J$ on $\lie g$ and some $\omega\in P(\lie
g,J)$ such that $\Delta_2(\omega) = 0 =
\Delta_1(\omega)$.\label{item:6}
\item The Lie algebra $\lie g$ is isomorphic to one of the following
\begin{gather*}
\lie h_1 = (0,0,0,0,0,0),\quad \lie h_8=(0,0,0,0,0,12), \quad
\lie h_6=(0,0,0,0,12,13), \\ \lie h_7=(0,0,0,12,13,23),\quad
\lie h_{10}=(0,0,0,12,13,14),\quad \lie h_{16}=(0,0,0,12,14,24).
\end{gather*}\label{item:7} \end{enumerate} \end{lemma}
\noindent{\it Proof: } It is clear that \eqref{item:5} implies \eqref{item:6}. Now
suppose \eqref{item:6} holds: pick $J$ and $\omega$ so that
$\Delta_2(\omega) = 0 = \Delta_1(\omega)$. Since
$d\om2,d\bom2,d\om3,d\bom3$ span $d\lie g^*_\mathbb{C}$ and $d\om2\wedge d\om3
= 0 = d\om2\wedge d\bom3$ by the nilpotency of $J$, any two elements
$\alpha_1,\alpha_2$ in $d\lie g^*_\mathbb{C}$ satisfy
$\alpha_1\wedge\alpha_2=0$. Since this is in particular also true for
the real elements, a basis of simple two-forms for $d\lie g^*$ exists
such that any two basis elements satisfy $\alpha_1\wedge\alpha_2=0$.
Now consult the classification of six dimensional nilpotent Lie
algebras with complex structures \cite[Theorem 2.9]{Ugarte}. This
gives \eqref{item:7}. If \eqref{item:7} holds then any $\omega$ in
$P(\lie g,J)$ for any complex structure $J$ on $\lie g$ has
$d\om{i}\wedge d\om{j}=0=d\om{i}\wedge d\bom{j}$ for all $i,j$. This
completes the proof. \ q.~e.~d. \vspace{0.2in}
\begin{corollary}\label{cor:2}
Suppose $\lie g$ is not one of the Lie algebras listed in Lemma{\rm
~\ref{lem:4}}. For any integrable nilpotent $J$ and any $\omega$
in $P(\lie g,J)$, one has $\Delta_2(\omega)^2+\abs{\Delta_1(\omega)}^2>0$.
\end{corollary}
\begin{lemma}\label{lem:5}
The following statements are equivalent.
\begin{enumerate}[$(1)$]
\item For every nilpotent complex structure $J$ on $\lie g$ and
every $\omega$ in $P(\lie g,J)$, the condition $\Delta_2(\omega)^2 <
\abs{\Delta_1(\omega)}^2$ holds. \label{item:8}
\item There exists a nilpotent $J$ on $\lie g$ and some $\omega$ in $P(\lie
g,J)$ such that the inequality $\Delta_2(\omega)^2<
\abs{\Delta_1(\omega)}^2$ is satisfied.\label{item:9}
\item The Lie algebra $\lie g$ is isomorphic to one of the following
\begin{gather*}
\lie h_2 = (0,0,0,0,12,34), \quad
\lie h_{12}=(0,0,0,12,13,24), \quad \lie
h_{13}=(0,0,0,12,13+14,24).
\end{gather*}\label{item:10}
\end{enumerate}
\end{lemma}
\noindent{\it Proof: } The implication \eqref{item:8}$\Rightarrow$\eqref{item:9} is
trivial. Suppose that $J$ and $\omega$ are given as in
\eqref{item:9}. Solving \eqref{eq:28}, we get two real, simple
two-forms in the span of $d\om3, d\bom3$. It follows that $d\lie g^*$
has a basis consisting only of simple two-forms. The classification
\cite[Theorem 2.9]{Ugarte}, Lemma~\ref{lem:4} and
Corollary~\ref{cor:2} give \eqref{item:10}.
Now suppose that \eqref{item:10} holds and let $J$ be a nilpotent
complex structure on $\lie g$. Pick any $\omega$ in $P(\lie g,J)$.
Represent $\lie h_2$ as $(0,0,0,0,13,24)$. For any of the three
algebras listed, any nilpotent complex structure and any $\omega$ in
$P(\lie g,J)$, there are constants $a,b,c,r$ such that
$d\om3=ae^{12} + b(e^{13}+re^{14})+ c e^{24}$ where $r=0$ or $1$. So
$d\om3\wedge d\om3 = -2bc e^{1234}$ and $d\om3\wedge d\bom3 = -(b\bar c
+\bar b c) e^{1234}$. By Corollary~\ref{cor:2}, $bc\not=0$ so
(after re-scaling of $\om1$ and $\om2$) we have $\abs{\Delta_1}^2 =
\abs{b\bar c}^2 \ge \re(b\bar c)^2 = \tfrac14\abs{b\bar c + \bar b
c}^2 = \Delta_2^2$. Equality occurs precisely if $b\bar c$ is real.
To see that this does not occur, note that by nilpotency of $J$,
$d\om2\wedge d\om2=0=d\om2\wedge d\om3$, whence $d\om2 = ue^{12}$ for some
complex number $u$. If $u=0$ then $\lie g=\lie h_2$ and $a=r=0$.
Otherwise, take $u=1$ and $\om3-\om2$ as a `new' $\om3$. This has
$a=0$. So for all three algebras and all $J$, we can take an $\omega$
in $P(\lie g,J)$ with $a=0$. Then $d\om3$ and $d\bom3$ are linearly
dependent precisely when $b\bar c$ is real. In this case $n_1$ is $4$
if $\epsilon\not=0$, and $5$ otherwise. The latter value is not
realized for the given algebras. Only $(0,0,0,0,13,24)$ has $n_1=4$
but clearly $d\om2\wedge d\om2=0=d\om2\wedge d\om3$ shows that for this
algebra $\epsilon=0$ for all $J$. Therefore $b\bar c$ is never real
and so $\abs{\Delta_1}^2>\Delta_2^2$. \ q.~e.~d. \vspace{0.2in}
\begin{lemma}\label{lem:6}
The following
statements are equivalent.
\begin{enumerate}[$(1)$]
\item For every nilpotent complex structure $J$ on $\lie g$ and
every $\omega\in P(\lie g,J)$, the condition $\Delta_2(\omega)^2 >
\abs{\Delta_1(\omega)}^2$ holds. \label{item:11}
\item There exists a nilpotent $J$ on $\lie g$ and some $\omega\in P(\lie
g,J)$ such that the inequality $\Delta_2(\omega)^2>
\abs{\Delta_1(\omega)}^2$ is satisfied.\label{item:12}
\item The Lie algebra $\lie g$ is isomorphic to one of the following
\begin{gather*}
\lie h_5 = (0,0,0,0,13+42,14+23), \quad
\lie h_{15}=(0,0,0,12,13+42,14+23).
\end{gather*}\label{item:13}
\end{enumerate}
\end{lemma}
\noindent{\it Proof: } The idea is as for the preceding Lemmas.
Suppose~\eqref{item:12}. There are then no simple elements in the
real span of $d\om3+d\bom3,~i(d\om3-d\bom3)$ as
equation~\eqref{eq:28} has no real solutions. This of course means
that for the real basis $e(\omega)$ all linear combinations of
$de^5$ and $de^6$ are non-simple. This also holds for all
elements in the span of $de^4,de^5,de^6$. In \cite[Theorem
2.9]{Ugarte} only two algebras have the property that all elements
in the span of $\{de^i\}$ are non-simple. These are listed in
(\ref{item:13}).
Building a nilpotent $J$ from $\lie
h_5$ or $\lie h_{15}$ gives $d\om3=a e^{12} +
b(e^{13}+e^{42})+c(e^{14}+e^{23})$. Then $\Delta_1 = b^2+c^2$ and
$\Delta_2 = \abs{b}^2+\abs{c}^2$, so $\Delta_2^2-\abs{\Delta_1}^2 =
2\bigl(\abs{b\bar c}^2-\re\bigl((b\bar c)^2\bigr)\bigr)= 4\im(b\bar c)^2\ge 0$, with equality if
and only if $b\bar
c$ is real. Arguing as in the proof of Lemma \ref{lem:5} one shows
that $b\bar c$ cannot be real. This proves the implication
\eqref{item:12}$\Rightarrow$\eqref{item:13}. The
implication \eqref{item:11}$\Rightarrow$\eqref{item:12} is obvious.
\ q.~e.~d. \vspace{0.2in}
Now one case is left, namely $\abs{\Delta_1}^2 = \Delta_2^2>0$. By
Lemmas~\ref{lem:4},~\ref{lem:5} and~\ref{lem:6} this condition must
characterize the remaining algebras in the classification of
Lemma~\ref{lem:3}. In one special case we may be more explicit.
\begin{lemma}\label{lem:7}
Suppose $\omega$ in $P(\lie g,J)$ is such that $\abs{\Delta_1}^2 =
\Delta_2^2>0$ and $d=1$. Then there exists $\sigma$ in $P(\lie
g,J)$
such that the real basis $e(\sigma)$ has the following structure
equations
\begin{itemize}
\item $\lie h_3=(0,0,0,0,0,12-\sign(\Delta_2) 34)$ if
$\epsilon=0,~\abs{\Delta_1}^2=\Delta_2^2>0$,
\item $\lie h_9=(0,0,0,12,0,14-23)$ if $\epsilon=1$ and
$\abs{\Delta_1}^2=\Delta_2^2>0$.
\end{itemize}
\end{lemma}
\noindent{\it Proof: } Suppose that $\epsilon=0$ and $\abs{\Delta_1}^2=\Delta_2^2>0$.
Note that $\Delta_2=\abs{C}^2-\bar A D$ by~\eqref{eq:32}. Define
$\lambda>0$ by $\Delta_2 = \sign(\Delta_2)\lambda^2$. Then
\begin{equation*}
\bar A d\om3 = (A\om1 + C\om2)(\bar A\bom1+\bar C\bom2) -
\sign(\Delta_2)\lambda^2\om2\bom2,
\end{equation*}
which gives the first case if $A\not=0$. If $D\not=0$ we rewrite
similarly.
If $A=0=D$, $\Delta_2=\abs{B}^2=\abs{C}^2>0$. We
pick square roots of $B$ and $C$, and
set
\[
\te1=\frac{1}{\sqrt2}\left(\sqrt{\frac{B}{C}}\om1+\om2\right), \quad
\te2=\frac{1}{\sqrt2}\left(-\om1+\sqrt{\frac{C}{B}}\om2\right),
\quad \te3= -\frac{1}{2\sqrt{BC}}\om3.
\]
Then $d\te3 = -(1/2) (\te1\bte1 -
\te2\bte2)$, whence $de^6=e^{12}-e^{34}$.
If $\epsilon=1$ then $A=D=0$. If $\abs{\Delta_1}^2=\Delta_2^2>0$, we take
\[
\te1=2\sqrt{\frac{B}{C}}\om1, \quad \te2=-2\om2, \quad
\te3=\frac{2}{\sqrt{BC}}\om3
\]
to get $d\te1 = 0, ~d\te2 =
-(1/2)\te1\bte1,~d\te3 = -(1/2)(\te1\bte2 + \te2\bte1)$.
\ q.~e.~d. \vspace{0.2in}
\begin{lemma}
Suppose $\omega$ in $P(\lie g,J)$ is such that $\abs{\Delta_1}^2 =
\Delta_2^2>0$, $d\om3$ and $d\bom3$ are linearly independent. Then
$\lie g$ is one of $\lie h_4, {\lie h}_{11}, {\lie h}_{14}$, with
$n_2$ being an invariant to distinguish the different spaces.
\end{lemma}
\noindent{\it Proof: } The proof is similar to that of the last lemma. As this
is the last remaining case in the classification of all nilpotent
complex structures, one may also identify the concerned algebras
using \cite{Salamon} or \cite{Ugarte}. \ q.~e.~d. \vspace{0.2in}
Next, we tabulate the invariants for all nilpotent complex
structures according to their underlying nilpotent algebras.
\begin{theorem}
For a nilpotent complex structure on a six-dimensional nilpotent Lie
algebra $\lie g$, the isomorphism class of $\lie g$ determines, and is
determined by, the constraints on the invariants of the complex
structure equations indicated in Table \ref{tab:1} below.
\end{theorem}
\begin{table}[h]
\begin{center}
\begin{tabular}{|@{}|l|l|c|c|c|c|c|c|@{}|}
\hline \( n \) & \( \lie g \) & \( \abs{\Delta_1}^2-\Delta_2^2 \) &
\( \abs{\Delta_1} \) & \( \abs{\Delta_2} \) & \( {\epsilon} \)
& \( \abs{\rho} \) & \( d \) \\
\hline \( (6,6) \) & \( \lie h_1=(0,0,0,0,0,0) \)
& \( 0 \) & \( 0 \) & \( 0 \) & \( 0 \) & \( 0 \) & \( 0 \) \\
\( (5,6) \) & \( \lie h_8=(0,0,0,0,0,12) \)
& \( 0 \) & \( 0 \) & \( 0 \) & \( 0 \) & \( 0 \) & \( 1 \) \\
\( (5,6) \) & \( \lie h_3=(0,0,0,0,0,12+34) \)
& \( 0 \) & \( + \) & \( + \) & \( 0 \) & \( 0 \) & \( 1 \) \\
\( (4,6) \) & \( \lie h_6=(0,0,0,0,12,13) \)
& \( 0 \) & \( 0 \) & \( 0 \) & \( 0 \) & \( + \) & \( 2 \) \\
\( (4,6) \) & \( \lie h_4=(0,0,0,0,12,14+23) \)
& \( 0 \) & \( + \) & \( + \) & \( 0 \) & \( * \) & \( 2 \) \\
\( (4,6) \) & \( \lie h_2=(0,0,0,0,12,34) \)
& \( + \) & \( + \) & \( * \) & \( 0 \) & \( * \) & \( 2 \) \\
\( (4,6) \) & \( \lie h_5=(0,0,0,0,13+42,14+23) \)
& \( - \) & \( * \) & \( + \) & \( 0 \) & \( * \) & \( 2 \) \\
\hline \( (4,5) \) & \( \lie h_9=(0,0,0,0,12,14+25) \)
& \( 0 \) & \( + \) & \( + \) & \( 1 \) & \( 0 \) & \( 1 \) \\
\( (3,6) \) & \( \lie h_7=(0,0,0,12,13,23) \)
& \( 0 \) & \( 0 \) & \( 0 \) & \( 1 \) & \( + \) & \( 2 \) \\
\( (3,5) \) & \( \lie h_{10}=(0,0,0,12,13,14) \)
& \( 0 \) & \( 0 \) & \( 0 \) & \( 1 \) & \( + \) & \( 2 \) \\
\( (3,5) \) & \( \lie h_{11}=(0,0,0,12,13,14+23) \)
& \( 0 \) & \( + \) & \( + \) & \( 1 \) & \( + \) & \( 2 \) \\
\( (3,5) \) & \( \lie h_{12}=(0,0,0,12,13,24) \)
& \( + \) & \( + \) & \( * \) & \( 1 \) & \( + \) & \( 2 \) \\
\( (3,4) \) & \( \lie h_{16}=(0,0,0,12,14,24) \)
& \( 0 \) & \( 0 \) & \( 0 \) & \( 1 \) & \( + \) & \( 2 \) \\
\( (3,4) \) & \( \lie h_{13}=(0,0,0,12,13+14,24) \)
& \( + \) & \( + \) & \( * \) & \( 1 \) & \( + \) & \( 2 \) \\
\( (3,4) \) & \( \lie h_{14}=(0,0,0,12,14,13+42) \)
& \( 0 \) & \( + \) & \( + \) & \( 1 \) & \( + \) & \( 2 \) \\
\( (3,4) \) & \( \lie h_{15}=(0,0,0,12,13+42,14+23) \)
& \( - \) & \( * \) & \( + \) & \( 1 \) & \( * \) & \( 2 \) \\
\hline
\end{tabular}
\end{center}
\caption { $\lie g$ and parameters in the complex structure
equations. In the table, \lq$\ 0$', \lq$+$' and \lq$-$' indicate that the
value of the corresponding number is zero, positive or negative,
while \lq$*$' means that the value is constrained only by the data
to its left in the table. The number $d$ of the right-most column
is the dimension of the linear span of $d\om3$ and $d\bom3$.} \label{tab:1}
\end{table}
\begin{remark}\label{rem:7}\ \rm
It is known that each of the four algebras with a \lq$*$' in the
$\abs{\rho}$ column admits both abelian and non-abelian complex
structures~\cite{Ugarte}. For $\lie h_5$ and $\lie h_{15}$ this is
particularly easy to see as both may be represented with either
$d\om3=\om1\om2$ or $d\om3=\om1\bom2$. An abelian complex
structure on $\lie h_2$ is given by $d\om3=i\om1\bom1+\om2\bom2$
and one on $\lie h_4$ by $d\om3=\tfrac14\om1\bom1 + \om1\bom2+\om2\bom2$. A
non-abelian nilpotent complex structure on $\lie h_2$ and $\lie h_4$
may be obtained for instance by setting $d\om1=0=d\om2$ and $d\om3 =
\rho\om1\om2+ B\om1\bom2+B^{-1}\om2\bom1$ for some $B$ such that
$\abs{B}\not=1$ with $\abs{\rho}^2=(\abs{B}\pm\abs{B^{-1}})^2$ for
$\lie h_4$ and
$(\abs{B}-\abs{B^{-1}})^2<\abs{\rho}^2<(\abs{B}+\abs{B^{-1}})^2$ for
$\lie h_2$. We note that any other choice of $\rho$ gives a
non-abelian complex structure on $\lie h_5$ and one on $\lie h_{15}$
if we take $d\om2=\om1\bom1$ instead.
For the algebras with $*$'s in the other columns, i.e. those with
different values of $\abs{\Delta_1}^2,~\Delta_2^2$, it is also
always possible to find a complex structure such that the smaller of
the two is zero.
\end{remark}
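As a check on the first of these claims (a direct computation, using
the convention $\om{k}=e^{2k-1}+ie^{2k}$ for the real basis), the
structure $d\om3=i\om1\bom1+\om2\bom2$ gives $\om1\bom1=-2ie^{12}$ and
$\om2\bom2=-2ie^{34}$, whence
\begin{equation*}
de^5 = 2e^{12},\qquad de^6 = -2e^{34},
\end{equation*}
a rescaled copy of $\lie h_2=(0,0,0,0,12,34)$. Moreover $\Delta_1=i$
and $\Delta_2=0$ here, so $\abs{\Delta_1}^2-\Delta_2^2>0$, in
accordance with Lemma~\ref{lem:5}.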
\begin{lemma} If $d=2$,
$\Delta_1(\omega)=0=\Delta_2(\omega)$, then the complex
structure is non-abelian.
Furthermore,
$\epsilon=0$ if and only if there exists $\sigma$ in $ P(\lie g,J)$
such that
$e(\sigma)$ has structure equations $\lie h_6=(0,0,0,0,13,14)$.
When $\epsilon=1$, one of the following three cases occurs.
\begin{itemize}
\item If there exists an $\omega$ in $P(\lie g,J)$ such that $B=0$,
there exists a $\sigma$ such that the structure equations of
$e(\sigma)$ are ${\lie h}_{10}=(0,0,0,12,13,14)$.
\item If there exists an $\omega$ in $P(\lie g,J)$ such that
$B/\rho>0$, a $\sigma$ may be chosen such that the structure
equations of $e(\sigma)$ are $\lie h_7=(0,0,0,12,13,23)$.
\item Otherwise $\sigma$ may be chosen such that the structure
equations of $e(\sigma)$ are $\lie
h_{16}\cong(0,0,-t(12),s(12),13,23)$ with $(s+it)^2=\rho/B$.
\end{itemize}
\end{lemma}
\noindent{\it Proof: } A simple exercise in algebra using the expressions
\eqref{eq:35} and \eqref{eq:36}, Lemma \ref{lem:3} and Lemma
\ref{dependence}(\ref{item:14}) shows that if
$\Delta_1(\omega)=0=\Delta_2(\omega)$ and $\rho=0$ then $d\om3$ and
$d\bom3$ are linearly dependent. This gives the first statement.
Suppose that $\epsilon=0$. If in addition $A=0$, then
$\Delta_1=-BC=0$, so $B=0$ or $C=0$. When $B=0$, the condition
$\Delta_2=0$ forces $\abs{C}^2=\abs{\rho}^2$, and we may rearrange to get
\begin{align*}
d\om3=(\rho(\om1+(\bar D/\bar C)\om2) - C(\bom1+(D/C)\bom2))\om2.
\end{align*}
Choosing $r,c$ such that $r^2=\rho$ and $c^2=-C$, and setting
$\te1=(r/c)(\om1+(\bar D/\bar C)\om2)$, $\te2=2\om2$ and
$\te3=\om3/(cr)$, we get $d\te3=\tfrac12(\te1+\bte1)\te2$.
If $C=0$, we note that
\begin{align*}
d\om3=(\om1+D/B\om2)(\rho\om2 + B\bom2).
\end{align*}
Take $r,b$ such that $r^2=\rho$ and $b^2=B$ and set
$\te1=-(r/b)\om2,~\te2 = 2(\om1+(D/B)\om2),~\te3 =\om3/(br)$. Then
$d\te3=\tfrac12(\te1+\bte1)\te2$, again.
If $D=0$ instead of $A=0$, we interchange $\om1$ and $\om2$ and proceed with an argument as above.
Finally, if $AD=BC\not=0$, we may write
\[
d\om3 = ((\abs{C}^2 - \bar A D)(\om1+(\bar D/\bar C)\om2) +
C(\bom1+(D/C)\bom2))((A/C)\om1+\om2).
\]
Since $0<\abs{\rho}^2=\abs{\abs{C}^2- \bar A D}^2/\abs{C}^2$ this is
equivalent to $d\te3=\tfrac12(\te1+\bte1)\te2$.
When $\epsilon=1$, we have $A=0=D$. As $\Delta_1=0$, by definition
(\ref{eq:35}) $BC=0$. If $B=0$, then $d\om3=(\rho\om1-C\bom1)\om2$,
which we may treat precisely as above to get $d\te1=0,
~d\te2=-\tfrac12\te1\bte1, ~d\te3=\tfrac12(\te1+\bte1)\te2$. If
$C=0$, pick square roots: $r^2=\rho,~b^2=B$ and set
$\te1=\om1,~\te2=-\tfrac12(r/b)\om2, ~\te3=-\om3/(2rb)$. Then
$d\te3 = \te1(\te2+\bte2)$ but $d\te2=-\tfrac12(r/b)\te1\bte1$. Writing
$r/b= s+it$ we get
\[
de^1 =0, \quad de^2=0,\quad
de^3=-t e^{12}, \quad
de^4=se^{12},\quad
de^5= e^{13},\quad
de^6= e^{23}.
\]
When $r/b$ is real (which happens if and only if $\rho/B>0$) this is precisely
$(0,0,0,12,13,23)$. When $r/b$ is purely imaginary, we get
$(0,0,12,0,13,23)\cong\lie h_{16}$. Otherwise, replace $e^4$ by $se^3+te^4$ and
divide $e^3$, $e^5$ and $e^6$ by $-t$ to get $(0,0,12,0,13,23)$
again.
\ q.~e.~d. \vspace{0.2in}
\begin{corollary}\label{cor:22}
There are no abelian complex structures on $\lie h_p$ for
$p=6,7,10,11,12,13,14,16$.
Moreover, suppose that $\omega$ in $P(\lie
g,J)$ has structure constants $\epsilon,\rho,A,B,C,D$. If
$\epsilon=0=\rho$ and $\Delta_1=0$, then $\Delta_2\ge 0$ with $\Delta_2=0$
if and only if $d\om3$ and $d\bom3$ are linearly dependent. If
$\epsilon=1$ and $\rho=0$ then $\Delta_2^2-\abs{\Delta_1}^2\ge 0$
with equality if and only if $d\om3$ and $d\bom3$ are linearly
dependent.
\end{corollary}
\noindent{\it Proof: }
For $p=7,10,11$ and $12$ this was established by Lemma~\ref{lem:3}.
For $p=6$ and $16$, any complex structure on $\lie h_p$ has
$\Delta_1=0=\Delta_2$ by Lemma~\ref{lem:4}. However, $\Delta_1=0$
with $\rho=0=\epsilon$ implies $\Delta_2\ge0$ with equality if and
only if $d\om3$ and $d\bom3$ are linearly dependent. For $p=13$ and
$14$, $\epsilon=1$ and $\abs{\Delta_1}^2\ge\Delta_2^2$. The first
statement may then be seen to follow from the second and third.
If $\epsilon = \rho = \Delta_1 = 0$ then clearly
\begin{equation*}
2\Delta_2 =
\begin{cases}
\abs{B}^2+\abs{C}^2,&\text{if $A=0$},\\
\abs{\bar A B- A\bar C}^2/\abs{A}^2,&\text{if $A\not=0$}.
\end{cases}
\end{equation*}
In either case $\Delta_2\ge0$. If $\Delta_2=0$, $d\om3=D\om2\bom2$
in the first case, and $\bar A B = A\bar C$ in the second. It is
now easy to see that the equations of Lemma \ref{dependence}(\ref{item:14}) are
satisfied in either case.
If $\epsilon=1$ and $\rho=0$
\begin{equation*}
\Delta_2^2 - \abs{\Delta_1}^2 = \tfrac14\left(\abs{B}^2-\abs{C}^2\right)^2\ge 0
\end{equation*}
so equality implies $\abs{B}=\abs{C}$. Since we may assume that
$A=0$ when $\epsilon=1$ this shows that $d\om3$ and $d\bom3$ are
linearly dependent via Lemma \ref{dependence}(\ref{item:14}). \ q.~e.~d. \vspace{0.2in}
\section{Classification of $\DGA ({\lie g}, J)$}\label{sec:f}
\label{sec:isom-class-lie} In this section we calculate the
isomorphism class of the six-dimensional complex Lie algebras $\lie
f^1=\lie f^1(\lie g,J)$ obtained from a nilpotent algebra $\lie g$
equipped with a complex structure $J$. Our aim is to identify the
complex Lie algebra structure of $\lie f^1$ for a given $\lie g$ and
$J$. The result will identify $\lie f^1$ as the complexification of
one of the real nilpotent algebras ${\lie h}_n$.
When a complex structure $J$ is given, recall that the Lie algebra
structure on ${\overline{\lie f}}^1$ is defined by
$\overline\partial{}\colon\lie{g}^{(1,0)}\oplus\lie g^{*(0,1)}\to \Lambda^2(\lie
g^{(1,0)}\oplus\lie g^{*(0,1)})$. If $X\in\lie g^{(1,0)}, {\overline
Y}\in\lie g^{(0,1)},\omega\in\lie g^{*(0,1)}$, then $\overline\partial\omega$ is
the (2,0)-component of $d\omega$ and $(\overline\partial X)({\overline Y})$ is the
(1,0)-part of the vector $- [X,{\overline Y}]^{1,0}$. Let $T_1, T_2,
T_3$ be dual to $\omega^1, \omega^2, \omega^3$. Given the
equations~\eqref{eq:6'}, the differential $\overline\partial$ is determined by the
following structure equations.
\begin{gather}
\begin{cases}
\overline\partial\bom1 = 0,\quad \overline\partial\bom2 = 0,\quad \overline\partial\bom3 = \rho\bom{12},\\
\overline\partial T_1 = \epsilon\bom1 T_2 + (A\bom1 + B\bom2) T_3,\quad
\overline\partial T_2 = (C\bom1 + D\bom2) T_3,\quad
\overline\partial T_3 = 0.
\end{cases}
\end{gather}
The Schouten bracket is an extension of the following Lie bracket on
${\lie f}^1$.
\begin{gather}
\begin{cases}
[T_1\bullet T_2]=-\rho T_3, \\
[T_1\bullet\bom2]=-\epsilon\bom1, \quad
[T_1\bullet\bom3]=-{\overline A}\bom1-{\overline C}\bom2, \quad
[T_2\bullet\bom3]=-{\overline B}\bom1-{\overline D}\bom2.
\end{cases}
\end{gather}
In this section, we ignore at first the Lie algebra structure on $\lie
f^1$ and focus on the differential structure $\overline\partial$ of ${\lie f}^1$
seen as a differential graded algebra. Inspecting the differential
algebra structure, we identify the Lie algebra structure of $({\lie
f}^1)^*\cong {\overline {\lie f}}^1$ as the complexification of
$\lie h_n$ for some $n$. Taking complex conjugation, we recover the
Lie algebra structure on $\lie f^1$ as a complexification of the same
$\lie h_n$. The results are presented in Table \ref{tab:f1}.
In the presentation below, the subscript $\mathbb{C}$ in the identification
${\lie f}^1\cong ({\lie h}_n)_\mathbb{C}$ is suppressed.
Change basis by setting
\begin{equation}\label{eq:11}
(\et1,\et2,\et3,\et4,\et5,\et6):=(\bom1,T_3,\bom2, T_2,\bom3, T_1).
\end{equation}
This gives the following structure equations
\begin{equation}\label{structure}
\overline\partial\et1 =\overline\partial\et2=\overline\partial\et3= 0,\quad \quad
\overline\partial\et4 = C\et{12}+D\et{32},\quad
\overline\partial\et5 = \rho\et{13},\quad
\overline\partial\et6 = \epsilon\et{14} + A\et{12} + B\et{32},
\end{equation}
which clearly define a complex $6$-dimensional nilpotent Lie
algebra.
When the invariants $\epsilon, \rho, \Delta_1$ and $\Delta_2$ are
given, we shall use the complex structure equations (\ref{structure})
to identify the Lie algebra underlying ${\overline{\lie f}}^1$ and
hence $\lie f^1$. On the other hand, we use the invariants and the
classification in Table \ref{tab:1} to identify the originating Lie
algebra $\lie g$. These are listed in the right-most column of Table
\ref{tab:f1}.
\subsection{The cases when $\epsilon=0$.} By Lemma~\ref{dependence}(\ref{item:1}), $n_2=6$.
Then the potentially non-zero structure
equations are
\begin{equation}
\overline\partial\et4 = C\et{12}+D\et{32},\quad
\overline\partial\et5 = \rho\et{13},\quad
\overline\partial\et6 = A\et{12} + B\et{32}.
\end{equation}
There are six possibilities depending on the rank of
$X:=\begin{spmatrix}
A&B\\C&D
\end{spmatrix}$ and $\rho$.
\subsubsection{When $\rho=0$.}
\begin{enumerate}[$(1)$]
\item If $\rank X=0$ then $\Delta_1=0$ and $\Delta_2=0$. It follows
that $\lie f^1\cong\lie h_1$ and $\lie g=\lie h_1$.
\item If the rank of $X$ is one then $\Delta_1=0$, $\Delta_2\ge 0$ and
$\lie f^1\cong\lie h_8$ (a worked instance is given after this list). By Corollary \ref{cor:22}, $d\om3$ and
$d\bom3$ are linearly dependent if and only if $\Delta_2=0$.
Therefore, by Table \ref{tab:1} $\lie g=\lie h_8$ when $\Delta_2=0$,
and $\lie g=\lie h_5$ when $\Delta_2\neq 0$.
\item If $\rank X=2$ and $\rho=0$ then $\Delta_1\not=0$, $\Delta_2$ is
unconstrained and $\lie f^1\cong\lie h_6$. By Table \ref{tab:1},
$\lie g=\lie h_2, \lie h_3, \lie h_4$ or $\lie h_5$.
\end{enumerate}
This case accounts for the first four items in Table \ref{tab:f1}.
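To illustrate the second item (as announced there), suppose that
$\rho=0$, $\rank X=1$ and, say, $(C,D)=\lambda(A,B)$ with
$(A,B)\neq(0,0)$; this is only a sample computation, the remaining
rank-one configurations being entirely similar. Then
\begin{equation*}
\overline\partial(\et4-\lambda\et6)=0,\qquad
\overline\partial\et6 = (A\et1+B\et3)\wedge\et2,
\end{equation*}
so after replacing $\et4$ by $\et4-\lambda\et6$ and taking
$A\et1+B\et3$ as a new basis element, a single structure constant
survives and the image of $\overline\partial$ is spanned by one
decomposable two-form. The resulting algebra is the complexification
of $\lie h_8=(0,0,0,0,0,12)$, as claimed.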
\subsubsection{When $\rho\not=0$.}
\begin{enumerate}[$(1)$]
\item If $\rank X=0$ then $\Delta_1=0$ and $\Delta_2>0$. It follows that $\lie
f^1\cong\lie h_8$ and $\lie g=\lie h_5$.
\item If $\rank X=1$ then $\Delta_1=0$ and $\Delta_2$ is
unconstrained. Then $\lie f^1\cong \lie h_6$. However, when the
value of $\Delta_2$ varies from zero to non-zero, the algebra $\lie
g$ changes from $\lie h_6$ to $\lie h_5$.
\item If $\rank X=2$ then $\Delta_1\not=0$, $\Delta_2$ is
unconstrained and $\lie f^1\cong\lie h_7$. The invariants
$|\Delta_2 |$ and $|\Delta_1|^2-\Delta_2^2$ help to identify the
three possibilities $\lie h_2, \lie h_4, \lie h_5$ for the algebra
$\lie g$.
\end{enumerate}
\subsection{The cases when $\epsilon\neq 0$.} We assume that
$\epsilon=1$, $A=D=0$. Then the potentially non-zero structure
equations are
\begin{equation}
\overline\partial\et4 = C\et{12},\quad
\overline\partial\et5 = \rho\et{13},\quad
\overline\partial\et6 = \et{14} + B\et{32}.
\end{equation}
\subsubsection{When $\rho=0$.} There are three cases (discarding
$B=0=C$).
\begin{enumerate}[$(1)$]
\item If $C=0$ then $\Delta_1=0$, $\Delta_2>0$. It follows that $\lie f^1\cong\lie h_3$ and $\lie g=\lie h_{15}$; an explicit instance is worked out after this list.
\item If $B=0$ then $\Delta_1=0$ and $\Delta_2>0$. Then $\lie f^1\cong\lie
h_{17}=(0,0,0,0,12,15)$ and $\lie g=\lie h_{15}$.
\item If $BC\not=0$ then $\lie f^1\cong\lie h_9$. As
$\Delta_1\not=0$, by Corollary \ref{cor:22},
$\Delta^2_2-\abs{\Delta_1}^2\ge 0$ with equality if and only if $d\om3$ and
$d\bom3$ are linearly dependent. It yields two algebras for $\lie
g$, namely $\lie h_9$ and $\lie h_{15}$.
\end{enumerate}
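As an illustration of case $(1)$ (a sample computation only): with
$\rho=C=0$ and $B\neq0$, the equations above reduce to
$\overline\partial\et4=\overline\partial\et5=0$ and
\begin{equation*}
\overline\partial\et6 = \et{14}+B\et{32},
\end{equation*}
and after rescaling $\et3$ to $B\et3$ the right-hand side becomes a
sum of two decomposable two-forms on disjoint pairs of indices. Hence
${\lie f}^1$ is the complexification of
$(0,0,0,0,0,14+32)\cong\lie h_3=(0,0,0,0,0,12+34)$.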
\subsubsection{When $\rho\not=0$.} There are four cases for $\lie f^1$:
\begin{enumerate}[$(1)$]
\item If $B=0=C$ then $\lie f^1\cong\lie h_6$. As $\Delta_1=0$ and $\Delta_2<0$,
we get $\lie g=\lie h_{15}$.
\item If $C=0, B\neq 0$ then $\lie f^1\cong \lie h_4$.
As $\Delta_1=0$ but $\Delta_2$ is unconstrained, by Table
\ref{tab:1}, $\lie g$ could be one of $\lie h_7, \lie h_{16}$ or
$\lie h_{15}$.
\item If $B=0, C\neq 0$, then $\lie f^1 \cong \lie h_{10}$.
As $\Delta_1=0$ while $\Delta_2$ is unconstrained, we get $\lie g=\lie
h_{10}$ if $\Delta_2=0$. Otherwise, we get $\lie g=\lie h_{15}$.
\item If $BC\not=0$ then $\lie f^1 \cong \lie h_{11}$.
$\Delta_1\not=0$, $\Delta_2$ is unconstrained. An inspection of
Table \ref{tab:1} yields the five different algebras $\lie h_{11},
\lie h_{12}, \lie h_{13}, \lie h_{14}$ and $\lie h_{15}$.
\end{enumerate}
To recap all the computations, we have used the invariants of the
complex structure equations to identify both the underlying real Lie
algebra and the structure of the Lie algebra ${\lie f}^1$. At the
cost of being repetitive, we recall in the following how the
invariants are defined.
\begin{theorem}\label{invariant theorem}
Suppose that $\lie g$ is a real six-dimensional nilpotent algebra
with a nilpotent complex structure $J$. Then there exists a basis
$\omega^1, \omega^2, \omega^3$ for $\lie g^{*(1,0)}$ such that
\begin{gather}
\begin{cases}
d\om1 = 0, \quad
d\om2 = \epsilon\om1\bom{1},\\
d\om3 = \rho\om1\om2 + A\om1\bom1 + B\om1\bom2 +
C\om2\bom1 + D\om{2}\bom2,
\end{cases}
\end{gather}
where $\epsilon, \rho\in \{0,1\}$. Moreover, let
\begin{eqnarray*}
&&\Delta_1=AD-BC; \quad \Delta_2=\frac12[|B|^2+|C|^2-A{\bar
D}-{\bar A}D-|\rho|^2];\\
&& d=\dim_\mathbb{C} \langle d\om3, d{\bar\omega}^3\rangle, \quad
X=\begin{spmatrix}
A&B\\C&D
\end{spmatrix}
\end{eqnarray*}
be the invariants associated to the structure equations. Given a real
algebra $\lie g$ in the right-most column, Table \ref{tab:f1} lists
constraints on the values of the invariants that can be realized by a
complex structure $J$ on $\lie g$, as well as the relevant isomorphism
class of the Lie algebra ${\lie f}^1$ in the left-most column. A
``\(\, * \)'' indicates an unconstrained invariant.
\end{theorem}
\begin{table}[h]
\begin{center}
\begin{tabular}{|@{}|l|c|c|c|c|c|c|c|c|c|c|@{}|}
\hline \( \lie f^1 \) & \( \epsilon \) & \( \abs{\rho} \) & \( \rank
X \) & \( \abs{\Delta_1} \) & \( \abs{\Delta_2} \) & \(
\abs{\Delta_1}^2-\Delta_2^2 \)
& \( d \) & \( \abs{B} \) & \( \abs{C} \) & \( \lie g \) \\
\hline \( \lie h_1 \) & \( 0 \) & \( 0 \) & \( 0 \) & \( 0 \)
& \( 0 \) & \( 0 \) & \( 0 \) & \( 0 \) & \( 0 \) & \( \lie h_1 \) \\
\( \lie h_8 \) & \( 0 \) & \( 0 \) & \( 1 \) & \( 0 \)
& \( 0 \) & \( 0 \) & \( 1 \) & \( * \) & \( * \) & \( \lie h_{8} \)\\
\( \lie h_8 \) & \( 0 \) & \( 0 \) & \( 1 \) & \( 0 \)
& \( + \) & \( - \) & \( 2 \) & \( * \) & \( * \) & \( \lie h_{5} \)\\
\( \lie h_6 \) & \( 0 \) & \( 0 \) & \( 2 \) & \( + \)
& \( * \) & \( - \) & \( 2 \) & \( * \) & \( * \) & \( \lie h_{2}, {\lie h}_3, {\lie h}_4,
{\lie h}_5 \)\\
\hline \( \lie h_8 \) & \( 0 \) & \( + \) & \( 0 \) & \( 0 \)
& \( + \) & \( - \) & \( 2 \) & \( 0 \) & \( 0 \) & \( \lie h_{5} \)\\
\( \lie h_6 \) & \( 0 \) & \( + \) & \( 1 \) & \( 0 \)
& \( 0 \) & \( 0 \) & \( 2 \) & \( * \) & \( * \) & \( \lie h_{6} \)\\
\( \lie h_6 \) & \( 0 \) & \( + \) & \( 1 \) & \( 0 \)
& \( + \) & \( - \) & \( 2 \) & \( * \) & \( * \) & \( \lie h_{5} \)\\
\( \lie h_7 \) & \( 0 \) & \( + \) & \( 2 \) & \( + \)
& \( + \) & \( + \) & \( 2 \) & \( * \) & \( * \) & \( \lie h_{2} \)\\
\( \lie h_7 \) & \( 0 \) & \( + \) & \( 2 \) & \( + \)
& \( + \) & \( 0 \) & \( 2 \) & \( * \) & \( * \) & \( \lie h_{4} \)\\
\( \lie h_7 \) & \( 0 \) & \( + \) & \( 2 \) & \( + \)
& \( * \) & \( - \) & \( 2 \) & \( * \) & \( * \) & \( \lie h_{5} \)\\
\hline \( \lie h_3
\) & \( + \) & \( 0 \) & \( 1 \)
& \( 0 \)
& \( + \) & \( - \) & \( 2 \) & \( + \) & \( 0 \) & \( \lie h_{15} \)\\
\( \lie h_{17}
\) & \( + \) & \( 0 \) & \( 1 \) & \(
0 \)
& \( + \) & \( - \) & \( 2 \) & \( 0 \) & \( + \) & \( \lie h_{15} \)\\
\( \lie h_{9} \) & \( + \) & \( 0 \) & \( 1 \) & \( + \)
& \( + \) & \( 0 \) & \( 1 \) & \( + \) & \( + \) & \( \lie h_{9} \)\\
\( \lie h_{9} \) & \( + \) & \( 0 \) & \( 2 \) & \( * \)
& \( 0 \) & \( - \) & \( 2 \) & \( + \) & \( + \) & \( \lie h_{15} \)\\
\hline \( \lie h_{6} \) & \( + \) & \( + \) & \( 0 \) & \( 0 \)
& \( + \) & \( - \) & \( 2 \) & \( 0 \) & \( 0 \) & \( \lie h_{15} \)\\
\( \lie h_{4} \) & \( + \) & \( + \) & \( 1 \) & \( 0 \) & \( 0 \) &
\( 0 \) & \( 2 \) & \( \abs{\rho} \) & \( 0 \) & \( \lie
h_{7},~\lie h_{16} \)\\
\( \lie h_{4} \) & \( + \) & \( + \) & \( 1 \) & \( 0 \) & \( + \) &
\( - \) & \( 2 \) & \( + \) & \( 0 \) & \( \lie
h_{15} \)\\
\( \lie h_{10} \) & \( + \) & \( + \) & \( 1 \) & \( 0 \) & \( 0 \)
& \( 0 \) & \( 2 \) & \( 0 \) & \( \abs{\rho} \) & \( \lie
h_{10} \)\\
\( \lie h_{10} \) & \( + \) & \( + \) & \( 1 \) & \( 0 \) & \( * \)
& \( - \) & \( 2 \) & \( 0 \) & \( + \) & \( \lie
h_{15} \)\\
\( \lie h_{11} \) & \( + \) & \( + \) & \( 2 \) & \( + \) & \( * \)
& \( + \) & \( 2 \) & \( + \) & \( + \) & \( \lie
h_{12},~\lie h_{13} \)\\
\( \lie h_{11} \) & \( + \) & \( + \) & \( 2 \) & \( + \) & \( + \)
& \( 0 \) & \( 2 \) & \( + \) & \( + \) & \( \lie
h_{11},~\lie h_{14} \)\\
\( \lie h_{11} \) & \( + \) & \( + \) & \( 2 \) & \( + \) & \( + \)
& \( - \) & \( 2 \) & \( + \) & \( + \) & \( \lie
h_{15} \)\\
\hline
\end{tabular}
\end{center}
\caption{$\lie f^1$ as a function of the parameters in the complex
structure equations.} \label{tab:f1}
\end{table}
Ignoring that the same algebra $\lie f^1$ occurs for distinct
complex structures or different algebras, we get
\begin{theorem}\label{finding f}
Given a six-dimensional nilpotent algebra $\lie g$, the associated
Lie algebra ${\lie f}^1(\lie g, J)$ for all possible nilpotent
complex structure $J$ are given in the rows of Table
\ref{tab:g-f1'}.
\end{theorem}
One observes for instance that for $\lie g=\lie h_{15}$ no fewer than
seven different isomorphism classes are realized for ${\lie f}^1(\lie
g, J)$ as $J$ runs through the space of complex structures on $\lie
g$. This is yet another manifestation of the ``jumping phenomenon''
frequently seen in complex structure deformation theory.
Note that the classification of nilpotent Lie algebras in dimension 6
(see \cite{Magnin,Morosov}) over $\mathbb C$ (or $\mathbb R$) has as a
consequence that the structure constants may be taken to be integers,
and in particular real. Thus any six-dimensional complex nilpotent
algebra is self-conjugate. Then Proposition \ref{second technical}
implies that the complex isomorphism of Lie algebras between ${\lie
f}^1$ and ${\lie h}_n$ generates a C-E compatible linear isomorphism
$O\colon\lie h_n\to\lie h_n^*$ such that $\DGA(\lie g, J)$ and
$\DGA(\lie h_n,O)$ are isomorphic as differential
Gerstenhaber algebras. In other words
\begin{theorem}\label{iso dga}
Given a six-dimensional nilpotent algebra $\lie g$ with a nilpotent
complex structure $J$, there exists a differential Gerstenhaber
algebra $\DGA(\lie h, O)$ quasi-isomorphic to $\DGA(\lie g, J)$ if
and only if the pair $(\lie g,\lie h)$ is checked in Table
\ref{tab:g-f1'}.
\end{theorem}
\begin{table}[h]
\begin{center}
\begin{tabular}{||l|@{}|c|c|c|c|c|c|c|c|c|c|@{} |}
\hline \( \lie g\backslash \lie f^1({\lie g}, J) \) & \( \lie h_1 \)
& \( \lie h_3
\) & \( \lie h_{4} \) & \( \lie h_6
\) & \(
\lie h_7 \) & \( \lie h_8 \) & \( \lie h_{9} \) &\( \lie h_{10} \) & \( \lie h_{11} \) & \(\lie h_{17}
\) \\
\hline
\( \lie h_1 \) & \( \checkmark \) & & & & & & & & & \\
\hline
\( \lie h_2 \) & & & & \( \checkmark \) & \( \checkmark \) & & & & & \\
\hline
\( \lie h_3 \) & & & & \( \checkmark \) & & & & & & \\
\hline
\( \lie h_4 \) & & & & \( \checkmark \) & \( \checkmark \) & & & & & \\
\hline
\( \lie h_5 \) & & & & \( \checkmark \) & \( \checkmark \) & \( \checkmark \) & & & & \\
\hline
\( \lie h_6 \) & & & & \( \checkmark \) & & & & & & \\
\hline
\( \lie h_7 \) & & & \( \checkmark \) & & & & & & & \\
\hline
\( \lie h_8 \) & & & & & & \( \checkmark \) & & & & \\
\hline
\( \lie h_9 \) & & & & & & & \( \checkmark \) & & & \\
\hline
\( \lie h_{10} \) & & & & & & & & \( \checkmark \) & & \\
\hline
\( \lie h_{11} \) & & & & & & & & & \( \checkmark \) & \\
\hline
\( \lie h_{12} \) & & & & & & & & & \( \checkmark \) & \\
\hline
\( \lie h_{13} \) & & & & & & & & & \( \checkmark \) & \\
\hline
\( \lie h_{14} \) & & & & & & & & & \( \checkmark \) & \\
\hline
\( \lie h_{15} \) & & \( \checkmark \) & \( \checkmark \) & \( \checkmark \) & & & \( \checkmark \) & \( \checkmark \) & \( \checkmark \) & \( \checkmark \) \\
\hline
\( \lie h_{16} \) & & & \( \checkmark \) & & & & & & & \\
\hline
\end{tabular}
\end{center}
\caption{Isomorphism class of $\lie f^1$ against underlying real
algebra $\lie g$.}
\label{tab:g-f1'}
\end{table}
The algebra $\lie h_{17}$ appears as a candidate for $\lie f^1$ in the
case $\lie g=\lie h_{15}$. However $\lie h_{17}$ admits no symplectic
structure. This demonstrates that the differential Gerstenhaber
algebra $\DGA(\lie h, O)$ does not necessarily arise from a symplectic
structure, as remarked at the end of the proof of Proposition
\ref{second technical}. The issue of whether or not $\DGA(\lie h, O)$
comes from a symplectic structure is deferred to future analysis.
\section{Application}\label{sec:appl}
Once we identify the Lie algebra structure for ${\lie f}^1({\lie g},
J)$, we have in effect identified the structure of $\DGA({\lie g},
J)$. Inspired by the concept of weak mirror symmetry \cite{Mer}, one
may well seek nilpotent algebras $\lie h$ with symplectic
structure $\Omega$ whose induced differential Gerstenhaber algebra
$\DGA({\lie h}, \Omega)$ is quasi-isomorphic to $\DGA({\lie g}, J)$.
We shall deal with such a general question in the future. At present,
we take advantage of the results in the preceding sections to address
a more focused question.
Supposing that $(J, \Omega)$ is a pseudo-K\"ahler structure on a
six-dimensional real nilpotent algebra $\lie g$, when will there be a
quasi-isomorphism
\begin{equation}\label{self mirror question}
\DGA({\lie g}, J) \rightleftharpoons \DGA({\lie g}, \Omega)\ ?
\end{equation}
Such pseudo-K\"ahler structures can be interpreted as \it weak
self-mirrors\rm, a manifestation of which - in dimension $4$ - was
studied in \cite{Poon}.
In view of Lemma \ref{quasi}, a quasi-isomorphism is in the present
situation equivalent to an isomorphism on the degree-one level:
\[
({\lie f}^1(\lie g, J), [-\bullet -])\cong ({\lie g}^*_\mathbb{C},
[-\bullet -]_\Omega) \cong ({\lie g}_\mathbb{C}, [-\bullet -]).
\]
Recall that a complex structure can be part of a pseudo-K\"ahler
structure on a nilpotent algebra only if it is a nilpotent complex
structure \cite{CFGU}. In view of Table \ref{tab:f1}, a solution
$({\lie g}, J, \Omega)$ for the question (\ref{self mirror question})
could possibly exist only if $\lie g$ is one of the following:
\begin{equation}
\lie h_{1}, \quad \lie h_{6}, \quad \lie h_{8}, \quad \lie h_{9},
\quad \lie h_{10}, \quad \lie h_{11}.
\end{equation}
Below we extract from Table \ref{tab:f1} the invariants for the
candidate complex structures $J$ for these algebras.
\begin{center}
\begin{tabular}{||l|c|c|c|c|c|c|c|c|c|c||}
\hline \( \lie f^1 \) & \( \epsilon \) & \( \abs{\rho} \) & \( \rank
X \) & \( \abs{\Delta_1} \) & \( \abs{\Delta_2} \) & \(
\abs{\Delta_1}^2-\Delta_2^2 \)
& \( d \) & \( \abs{B} \) & \( \abs{C} \) & \( \lie g \) \\
\hline
\( \lie h_1 \) & \( 0 \) & \( 0 \) & \( 0 \) & \( 0 \)
& \( 0 \) & \( 0 \) & \( 0 \) & \( 0 \) & \( 0 \) & \( \lie h_1 \) \\
\hline
\( \lie h_6 \) & \( 0 \) & \( + \) & \( 1 \) & \( 0 \)
& \( 0 \) & \( 0 \) & \( 2 \) & \( * \) & \( * \) & \( \lie h_{6}
\)\\
\hline
\( \lie h_8 \) & \( 0 \) & \( 0 \) & \( 1 \) & \( 0 \)
& \( 0 \) & \( 0 \) & \( 1 \) & \( * \) & \( * \) & \( \lie h_{8} \)\\
\hline
\( \lie h_{9} \) & \( + \) & \( 0 \) & \( 1 \) & \( + \)
& \( + \) & \( 0 \) & \( 1 \) & \( + \) & \( + \) & \( \lie h_{9} \)\\
\hline
\( \lie h_{10} \) & \( + \) & \( + \) & \( 1 \) & \( 0 \) &
\( 0 \) & \( 0 \) & \( 2 \) & \( 0 \) & \( \abs{\rho} \) & \( \lie
h_{10} \)\\
\hline
\( \lie h_{11} \) & \( + \) & \( + \) & \( 2 \) & \( + \) &
\( + \) & \( 0 \) & \( 2 \) & \( + \) & \( + \) & \( \lie
h_{11} \)\\
\hline
\end{tabular}
\end{center}
In the next few sections, we shall take the above complex structures,
and seek symplectic structures that realize the quasi-isomorphism
(\ref{self mirror question}). We shall analyze pseudo-K\"ahler
structures on $\lie h_6$, $\lie h_8$, and $\lie h_{11}$ in detail,
merely outline the discussion for $\lie h_{9}$ and $\lie h_{10}$, and
skip the trivial case ${\lie h}_1$ completely.
\subsection{${\lie h}_6$}
Given the invariants, the reduced structure equations (\ref{eq:6})
are
\begin{equation}
d\om1 = 0, \quad
d\om2 = 0, \quad
d\om3 = \om1\om2 + A\om1\bom1 + B\om1\bom2 +
C\om2\bom1 + D\om{2}\bom2.
\end{equation}
Since $\Delta_1=0$ there exists a constant $\lambda$ such that
either
\[
d\om3 = \om1\om2 +(\om1+\lambda\om2)(A\bom1+B\bom2) \quad
\mbox{ or }
\quad
d\om3 = \om1\om2 +(\lambda\om1+\om2)(C\bom1+D\bom2).
\]
The condition $\Delta_2=0$ implies that in either case, there exists a
change of complex basis so that the structure equations transform to
\begin{equation}\label{standard 6}
d\om1 = 0, \quad
d\om2 = 0, \quad
d\om3 = \om1\om2 + \om1\bom2.
\end{equation}
It follows that the structure equations for $({\lie f}^1, [-\bullet
-], \overline\partial)$ are
\begin{equation}
[T_1, T_2]=-T_3, \quad [T_2, \bom3]=-\bom1, \quad \overline\partial T_1=\bom2
\wedge T_3, \quad \overline\partial\bom3=\bom1\wedge\bom2.
\end{equation}
Due to \cite[Lemma 3.4]{CFU}, given the complex structure equations,
any $(1,1)$-form of a compatible symplectic structure is given by
\[
\Omega=a_1\om1\bom1+b_2\om2\bom2+{\overline
a}_2\bom1\om2+a_2\om1\bom2+a_3(\om1\bom3+\bom1\om3),
\]
where $a_1$ and $b_2$ are purely imaginary numbers and $a_3$ is a real
number. This $2$-form is non-degenerate if and only if $b_2\neq 0$ and
$a_3\neq 0$.
Setting $\om1=e^2+ie^3$, $\om2=-\tfrac12(e^1+ie^4)$ and
$\om3=e^5+ie^6$ reduces the complex structure equations to
\begin{equation}
de^5=e^{12}, \quad de^6=e^{13}.
\end{equation}
Set $a_1=\tfrac{i}2a$, $b_2=2ib$, $a_2=c+ik$ and $a_3=\ell/2$ with
$b\neq 0$ and $\ell\neq 0$. Then the symplectic structure is
\[
\Omega=ae^{23}+be^{14}+c(e^{12}-e^{34})-k(e^{13}+e^{24})+\ell(e^{25}+e^{36}).
\]
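As a quick consistency check of the non-degeneracy criterion (not
needed in the sequel): among the summands of $\Omega$, the only triple
whose index pairs partition $\{1,\dots,6\}$ is $be^{14}$, $\ell
e^{25}$, $\ell e^{36}$, so
\begin{equation*}
\Omega^3 = 6\,b\ell^2\, e^{14}\wedge e^{25}\wedge e^{36} =
-6\,b\ell^2\, e^{123456},
\end{equation*}
and $\Omega$ is non-degenerate exactly when $b\ell\neq0$, in agreement
with the conditions $b_2\neq0$ and $a_3\neq0$ above.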
Using the contraction with $\Omega$ as an isomorphism from $\lie h_6$ to
$\lie h_6^*$, we obtain a Lie bracket on $\lie h_6^*$ such that
\begin{equation}
b[e^4, e^5]_\Omega=e^2, \quad b[e^4, e^6]_\Omega=e^3, \quad
b\ell[e^5,e^6]_\Omega=ce^2-ke^3.
\end{equation}
It is now apparent that the linear map
\begin{equation}
T_1\mapsto e^5 + \frac{k}{\ell}e^4, \quad T_2\mapsto be^4, \quad T_3\mapsto e^2, \quad
\bom1\mapsto-e^3, \quad \bom2\mapsto e^1, \quad \bom3\mapsto e^6 + \frac{c}{\ell}e^4
\end{equation}
yields an isomorphism of differential Gerstenhaber algebras.
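For instance, on the generators one checks (a sample of the routine
verification) that
\begin{equation*}
[\Phi(T_1)\bullet\Phi(T_2)]_\Omega
= \Bigl[e^5+\tfrac{k}{\ell}e^4\bullet\, be^4\Bigr]_\Omega
= -b\,[e^4\bullet e^5]_\Omega = -e^2 = \Phi([T_1\bullet T_2]),
\end{equation*}
using the first of the brackets displayed above.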
Note that the isomorphism exists so long as the symplectic form
$\Omega$ and the designated complex structure $J$ together form a
pseudo-K\"ahler structure.
\begin{proposition}\label{prop h6}
Let $J$ be any integrable complex structure on $\lie h_6$. Let
$\Omega$ be any symplectic form on $\lie h_6$ of type $(1,1)$ with
respect to $J$. Then the differential Gerstenhaber algebras
$\DGA(\lie h_6, J)$ and $\DGA(\lie h_6,\Omega)$ are isomorphic.
\end{proposition}
\subsection{${\lie h}_8$}
In this case, the invariants yield the following structure equations.
\begin{equation}
d\om1 = 0, \quad
d\om2 = 0, \quad
d\om3 =A\om1\bom1 + B\om1\bom2 +
C\om2\bom1 + D\om{2}\bom2,
\end{equation}
where the arrays $(A, B)$ and $(C,D)$ are linearly dependent but are
not identically zero. After a change of complex coordinates, they
could be reduced to
\begin{equation}\label{standard 8}
d\om1 = 0, \quad
d\om2 = 0, \quad
d\om3 =\om1\bom1.
\end{equation}
The induced structure equations for $\lie f^1$ are
\begin{equation}\label{f1-8}
[T_1, \bom3]=-\bom1, \quad \overline\partial T_1=\bom1\wedge T_3.
\end{equation}
Choosing
\begin{equation}
\om1=e^1+ie^2, \quad \om2=e^3+ie^4, \quad \om3=-2(e^5+ie^6),
\end{equation}
we find that the real structure equation is indeed the standard one for
$\lie h_8$:
\begin{equation}\label{d6}
de^6=e^{1}\wedge e^2.
\end{equation}
Again, due to \cite[Lemma 3.4]{CFU} given the complex structure
equations, any symplectic $(1,1)$-form is given by
\[
\Omega=ae^{12}+be^{34}+x(e^{13}+e^{24})-y(e^{23}-e^{14})-u(e^{15}+e^{26})+v(e^{25}-e^{16}),
\]
where $a, b, x,y, u,v$ are real numbers. $\Omega$ is non-degenerate
when $b\neq 0$ and $u^2+v^2\neq 0$. Then the induced Lie bracket on
$\lie h_{8}^*$ is given by
\[
[(-ue^5-ve^6)\bullet(ve^5-ue^6)]_\Omega=-(ue^2+ve^1). \] Since
$u^2+v^2\neq 0$, it is an elementary exercise to find an isomorphism
from $\DGA({\lie h}_8, J)$ to $\DGA({\lie h}_8, \Omega)$. For
instance, when $v\neq 0$, one could construct
an isomorphism so
that
\begin{equation}
T_1\mapsto -ue^5-ve^6, \quad \bom3\mapsto ve^5-ue^6, \quad
\bom1\mapsto ue^2+ve^1.
\end{equation}
As in the last section, the computation demonstrates more than simply
the existence of a self-mirror pair of complex and symplectic
structures.
\begin{proposition}\label{prop h8} Let $J$ be any integrable complex structure on $\lie h_8$.
Let $\Omega$ be any symplectic form on $\lie h_8$ of type $(1,1)$
with respect to $J$. Then the differential Gerstenhaber algebras
$\DGA(\lie h_8, J)$ and $\DGA(\lie h_8,\Omega)$ are isomorphic.
\end{proposition}
\subsection{${\lie h}_9$}
The complex structure equations are given by
\[
d\om1 = 0,\quad
d\om2 = \om1\bom{1},\quad
d\om3 = B\om1\bom2 +
C\om2\bom1,
\]
where $B\neq 0$ and $C\neq 0$. Therefore, we can normalize to
\begin{equation}
d\om1 = 0,\quad
d\om2 = -\frac12\om1\bom{1},\quad
d\om3 = \frac12\om1\bom2 +\frac12
\om2\bom1.
\end{equation}
Choose
\begin{equation}
\om1=e^1+ie^2, \quad \om2=e^4+ie^5, \quad \om3=e^6+ie^3,
\end{equation}
to the effect that $\Omega=e^{13}-e^{26}-e^{45}$ is a pseudo-K\"ahler
form so that ${\lie f}^1(\lie h_9, J)$ is isomorphic to $(\lie h_9^*,
[-\bullet -]_\Omega)$.
\subsection{${\lie h}_{10}$}
The complex structure equations are given by
\begin{equation}
d\om1 = 0,\quad
d\om2 = \om1\bom{1},\quad
d\om3 = \om1\om2 +
\om2\bom1.
\end{equation}
In this case, when we choose
\[
\om1=e^1+ie^2, \quad \om2=e^3+ie^4, \quad \om3=e^5+ie^6,
\]
then $ \Omega=i(e^{16}-e^{25}-e^{34}) $ is a pseudo-K\"ahler form
such that
${\lie f}^1(\lie h_{10}, J)$ is isomorphic to $(\lie h_{10}^*,
[-\bullet -]_\Omega)$.
\subsection{${\lie h}_{11}$}
This case requires a careful analysis. We show that \emph{for every
pseudo-K\"ahler pair $(J,\Omega)$ on $\lie h_{11}$ the differential
Gerstenhaber algebras $\DGA(\lie h_{11},J)$ and $\DGA(\lie
h_{11},\Omega)$ are not quasi-isomorphic}. To this end we shall suppose
that $\Phi\colon\DGA(\lie h_{11},J) \to \DGA(\lie h_{11},\Omega)$ is a
quasi-isomorphism of differential Gerstenhaber algebras obtained from
a pseudo-K\"ahler pair $(J,\Omega)$ and establish a contradiction.
Note that $\lie h_{11}$ is distinguished by the data: $n=(3,5)$ and
$\abs{\Delta_1}^2=\Delta_2^2>0$ for any $J$, see Lemma~\ref{lem:3} and
Lemma~\ref{lem:7}. Furthermore, for any complex structure on $\lie
h_{11}$ we may always choose a basis of $(1,0)$-forms such that
\begin{equation}
\label{eq:3}
d\om1 = 0,\quad
d\om2 = \om1\bom{1},\quad
d\om3 =\om1\om2 + B\om1\bom2 +
C\om2\bom1.
\end{equation}
Choosing $\omega$ this way, the constraints $n_2=5$ and
$\abs{\Delta_1}^2=\Delta_2^2>0$ on the invariants are equivalent to
$B$ being real, $\abs{C}^2=(B-1)^2$ and $BC\not=0$. We shall use this
extensively below. Precisely these conditions on $B$ and $C$ give
\begin{equation}
\label{eq:2}
d((B-1)\om3+C\bom3)=((B-1)\om1+C\bom1)(\om2+\bom2),
\end{equation}
whence
\begin{equation}
V_1(\lie h_{11}) = \langle{\om1, \bom1,\om2 + \bom2}\rangle,\qquad
V_2(\lie h_{11}) = \langle{\om1, \bom1, \om2, \bom2, (B-1)\om3 +
C\bom3}\rangle.
\end{equation}
Solving the equations $d\Omega=0$ and $\Omega=\bar\Omega$ in the space
of $(1,1)$-forms gives
\begin{equation}
\label{eq:1}
\Omega = a_1\om1\bom{1} + a_3(B+1)\om2\bom{2} + a_2\om1\bom2 - \bar
a_2\om2\bom1 + a_3(\om1\bom3+\om3\bom1),
\end{equation}
where $a_1+\bar a_1=0=a_3+\bar a_3$ and $a_1a_3(B+1)\not=0$ if and
only if $\Omega$ is non-degenerate\footnote{This also means: nilpotent
complex structures on $\lie h_{11}$ with $B=-1$ have no compatible
symplectic forms.}. Therefore $\Omega(T_1) =
a_1\bom1+a_2\bom2+a_3\bom3,~\Omega(T_2) = -\bar a_2\bom1
+a_3(B+1)\bom2,~\Omega(T_3) = a_3\bom1$ and
\begin{gather*}
\om1=-\frac1{a_3}\Omega(\bar T_3),\qquad \om2 = -
\frac1{(B+1)a_3}\Omega\left(\bar T_2-\frac{a_2}{a_3}\bar
T_3\right),\\
\om3 = -\frac1{a_3}\Omega\left(\bar T_1 + \frac{\bar
a_2}{(B+1)a_3}\bar T_2 -
\frac{(B+1)a_1a_3+\abs{a_2}^2}{(B+1)a_3^2}\bar T_3\right).
\end{gather*}
Now the brackets are easily computed:
\begin{gather*}
[\om2\bullet\om3]_\Omega = - \frac1{(B+1)a_3}\om1,\qquad
[\om2\bullet\bom3]_\Omega
= - \frac1{(B+1)a_3}\left(\bar C\om1 + B\bom1\right),\\
[\om3\bullet\bom3]_\Omega = - \frac1{(B+1)a_3^2}\left((a_2+\bar a_2 \bar
C)\om1 - (\bar a_2+a_2 C)\bom1\right) -
\frac{B+1}{a_3}(\om2+\bom2),
\end{gather*}
and the lower central series for $(\lie
h_{11}^*,[\cdot\bullet\cdot]_\Omega)$ is
\begin{gather*}
(\lie h_{11}^*)_1 = \langle \om1, \bom1, \om2 + \bom2\rangle,\quad
(\lie h_{11}^*)_2 = \langle (B-1)\om1 + C\bom1\rangle,\quad (\lie h_{11}^*)_3=\{0\},
\end{gather*}
while the ascending series is
\begin{gather*}
D^1(\lie h_{11}^*) = \langle \om1,\bom1\rangle,\quad
D^2(\lie h_{11}^*) = \langle
\om1,\bom1,\om2,\bom2\rangle,\quad D^3(\lie h_{11}^*) =
\lie h^*_{11}.
\end{gather*}
On the other hand, the structure equations for $\DGA(\lie h_{11},J)$
given by~(\ref{eq:3}) are
\begin{gather*}
\overline\partial T_1=\bom1\wedge T_2+B\bom2\wedge T_3, \quad \overline\partial
T_2=C\bom1\wedge T_3, \quad \overline\partial\bom3=\bom1\wedge\bom2,\\
[T_1\bullet T_2]=-T_3,\quad[T_1\bullet\bom2]=-\bom1,\quad[T_1\bullet\bom3]=-\bar C\bom2,\quad[T_2\bullet\bom3]=-B\bom1.
\end{gather*}
Writing $\lie f^1$ for the space of degree one elements in $\DGA(\lie
h_{11},J)$ we have
\begin{gather*}
V_1(\lie f^1)=\langle T_3,\bom1,\bom2\rangle,\qquad V_2(\lie f^1) =
\langle T_2,T_3,\bom1,\bom2,\bom3 \rangle,\\
(\lie f^1)_1 = \langle T_3,\bom1,\bom2\rangle,\qquad (\lie f^1)_2 =
\langle \bom1\rangle,\qquad (\lie f^1)_3 = \{0\},\\
D^1(\lie f^1) = \langle \bom1,T_3\rangle,\quad D^2(\lie f^1) =
\langle \bom1,T_3,\bom2,T_2\rangle,\quad D^3(\lie f^1) = \lie f^1.
\end{gather*}
By Proposition~\ref{key technical}, any quasi-isomorphism
$\Phi\colon\DGA(\lie h_{11},J) \to \DGA(\lie h_{11},\Omega)$ must be
an isomorphism of DGAs and therefore maps $V_k(\lie f^1)$
isomorphically onto $V_k(\lie h_{11})$, $(\lie f^1)_j$ isomorphically
onto $(\lie h_{11}^*)_j$ and similarly for the ascending sequences.
It follows that complex constants $\phi^{m}_{n}$ exist such that
\begin{eqnarray*}
\Phi(\bom1)&=&\phi^1_1((B-1)\om1 + C\bom1),\label{phi bom1}\\
\Phi(T_3) &=& \phi^2_1\om1 +\phi^2_2\bom1, \label{phi t3}\\
\Phi(\bom2) &=& \phi^3_1\om1 +\phi^3_2\bom1 + \phi^3_3(\om2+\bom2), \label{phi bom2}\\
\Phi(T_2) &=& \phi^4_1\om1 +\phi^4_2\bom1 + \phi^4_3\om2 +
\phi^4_4\bom2, \label{phi t2}\\
\Phi(\bom3) &=& \phi^5_1\om1 +\phi^5_2\bom1 + \phi^5_3\om2 +
\phi^5_4\bom2 + \phi^5_5((B-1)\om3 + C\bom3), \label{phi bom3}\\
\Phi(T_1) &=& \phi^6_1\om1 +\phi^6_2\bom1 + \phi^6_3\om2 +
\phi^6_4\bom2 + \phi^6_5\om3 + \phi^6_6\bom3 \label{phi t1}.
\end{eqnarray*}
More detailed information is now obtained by applying $\Phi$ to the
structure equations. From $d(\Phi(T_2))=C\Phi(\bom1)\wedge\Phi(T_3)$ we
get
\begin{equation}
\label{eq:8}
(\phi^4_3-\phi^4_4)=C\phi^1_1((B-1)\phi^2_2-C\phi^2_1).
\end{equation}
The $((B-1)\om1+C\bom1)(\om2+\bom2)$-component of $d(\Phi(\bom3)) =
\Phi(\bom1)\wedge\Phi(\bom2)$ gives
\begin{equation}
\label{eq:9}
\phi^5_5=\phi^1_1\phi^3_3.
\end{equation}
Eliminating $\phi^6_5$ and $\phi^6_6$ in the equations derived from
$d(\Phi(T_1)) = \Phi(\bom1)\wedge\Phi(T_2) + B\Phi(\bom2)\wedge\Phi(T_3)$
leads to $C\phi^1_1(\phi^4_3-\phi^4_4) +
\phi^3_3((B-1)\phi^2_2-C\phi^2_1)=0$. The result of inserting
\eqref{eq:8} in this is
$((C\phi^1_1)^2+\phi^3_3)((B-1)\phi^2_2-C\phi^2_1)=0$. Since $\Phi$ is a
linear isomorphism $\Phi(\bom1)$ and $\Phi(T_3)$ are linearly
independent, and so
\begin{equation}
\label{eq:10}
\phi^3_3=-(C\phi^1_1)^2.
\end{equation}
The equation $ [\Phi(T_1)\bullet\Phi(\bom2)]_\Omega=-\Phi(\bom1)$ is
equivalent to
\begin{equation}
\label{eq:4}
C(B+1)a_3\phi^1_1 = \phi^3_3(C\phi^6_5 - (B-1)\phi^6_6)
\end{equation}
while the $\om2+\bom2$-component of $[\Phi(T_1) \bullet
\Phi(\bom3)]_\Omega = -\bar C\Phi(\bom2)$ gives
\begin{equation}
\label{eq:5}
a_3\bar C\phi^3_3 =
(B+1)\phi^5_5(C\phi^6_5 - (B-1)\phi^6_6).
\end{equation}
Substituting first \eqref{eq:9}, and then equations \eqref{eq:4} and
\eqref{eq:10} in \eqref{eq:5} yields
\begin{equation*}
a_3\abs{C}^2C (\phi^1_1)^2 = - a_3(B+1)^2C(\phi^1_1)^2.
\end{equation*}
Since $\abs{C}^2=(B-1)^2$, this implies $a_3C\phi^1_1=0$ and so
establishes our contradiction: if $a_3=0$ then $\Omega$ is degenerate,
$C=0$ cannot be realized on $\lie h_{11}$, and if $\phi^1_1=0$ then
$\Phi$ is not an isomorphism. \ q.~e.~d. \vspace{0.2in}
\subsection{Conclusion}
The computation in the past few paragraphs is summarized in the
following observation.
\begin{theorem}\label{main} A six-dimensional nilpotent algebra $\lie g$ admits a
pseudo-K\"ahler structure $(J, \Omega)$ such that $\DGA({\lie g},
J)$ is quasi-isomorphic to $\DGA({\lie g}, \Omega)$ if and only if
$\lie g$ is one of $\lie h_1$, $\lie h_6$, $\lie h_8$, $\lie h_9$
and $\lie h_{10}$.
\end{theorem}
\begin{remark}
\rm In this paper, we have dealt exclusively with Lie algebras.
However, it is possible to extend the whole discussion to
nilmanifolds $M=G/\Gamma$, i.e. quotients of simply connected
nilpotent Lie groups $G$ with respect to co-compact lattices
$\Gamma$. Indeed, the de Rham cohomology of $M$ is given by
invariant forms on $G$ \cite{Nomizu}. Therefore, when $M$
has an invariant symplectic structure, the invariant differential
Gerstenhaber algebra $\DGA(\lie g, \Omega)$ provides a minimal model
for the differential Gerstenhaber algebra over the space of sections
of the exterior differential forms on the nilmanifold $M$.
Similarly, for nilpotent complex structures on nilmanifolds there
are partial results proving that the space of invariant sections is
a minimal model of the Dolbeault cohomology with coefficients in the
holomorphic tangent sheaf \cite{CF} \cite{CFP} \cite{CFGU}. Given
such a result for a particular class of complex structures (e.g.
abelian complex structure \cite{CFP}), Theorem \ref{main} can be
paraphrased as a statement about quasi-isomorphisms of DGAs over
nilmanifolds with pseudo-K\"ahler structures.
\end{remark}
\begin{ack}
We are grateful to S. Chiossi for reading the manuscript and for his
extremely useful comments.
\end{ack}
\section{Introduction}\label{sec:intro}
The advent of ultra-intense laser facilities has led to exciting possibilities in the development of a new generation of compact laser-driven electron accelerators. Among the proposed laser acceleration schemes, the use of ultra-intense radially polarized laser beams in vacuum (termed \textit{direct acceleration}) is very promising, as it takes advantage of the strong longitudinal electric field at beam center to accelerate electrons along the optical axis~\cite{varin05_pre}. Numerical simulations have shown that collimated attosecond electron pulses could be produced by this acceleration scheme~\cite{varin06_pre,karmakar07_lpb}.
Recent studies on direct acceleration have shown that reducing the pulse duration and beam waist size generally increases the maximum energy gain available~\cite{wong10_optexpress,wong11_optlett}. However, these analyses were carried out under the paraxial and slowly varying envelope approximations. These approximations lose their validity as the beam waist size becomes comparable with the laser wavelength and the pulse duration approaches the single-cycle limit, conditions that are now often encountered in experiments. We propose a simple method to investigate direct acceleration in the nonparaxial and ultrashort pulse regime, and show that it offers the possibility of higher energy gains. We also highlight a peculiar feature of the acceleration dynamics under nonparaxial focusing conditions, namely the coexistence of forward and backward acceleration. This could offer a solution to the production of synchronized electron pulses required in some pump-probe experiments.
\section{Exact solution for a nonparaxial and ultrashort TM$_{01}$ pulsed beam\label{sec:TM01}}
Ultrashort and tightly focused pulsed beams must be modeled as exact solutions to Maxwell's equations. A simple and complete strategy to obtain exact closed-form solutions for the electromagnetic fields of such beams was recently presented by April~\cite{april10_intech}. For a TM$_{01}$ pulse, which corresponds to the lowest-order radially polarized laser beam, the field components are described by~\cite{april10_intech,marceau12_optlett}:
\begin{align}
&E_r (\mb{r},t) = \tr{Re}\, \bigg\{ \frac{3 E_0 \sin 2\tilde{\theta}}{2\tilde{R}} \bigg( \frac{G_-^{(0)}}{\tilde{R}^2} - \frac{G_+^{(1)}}{c\tilde{R}} + \frac{G_-^{(2)}}{3c^2}\bigg) \bigg\} \ , \label{eq:npTM01Er}\\
&E_z (\mb{r},t) = \tr{Re}\, \bigg\{ \frac{E_0}{\tilde{R}} \bigg[ \frac{(3\cos^2\tilde{\theta}-1)}{\tilde{R}} \bigg( \frac{G_-^{(0)}}{\tilde{R}} - \frac{G_+^{(1)}}{c} \bigg) - \frac{\sin^2\tilde{\theta}}{c^2} G_-^{(2)} \bigg] \bigg\} \ , \label{eq:npTM01Ez} \\
&H_\phi (\mb{r},t) = \tr{Re}\, \bigg\{ \frac{E_0 \sin \tilde{\theta}}{\eta_0 \tilde{R}} \bigg( \frac{G_-^{(1)}}{c\tilde{R}} - \frac{G_+^{(2)}}{c^2}\bigg) \bigg\} \ . \label{eq:npTM01Hphi}
\end{align}
Here $E_0$ is an amplitude parameter, $\tilde{R}=[r^2 + (z+ja)^2]^{1/2}$, $\cos \tilde{\theta} = (z+ja)/\tilde{R} $, $G^{(n)}_\pm = \partial^n_t [f(\tilde{t}_+)\pm f(\tilde{t}_-)]$, $f(t) = e^{-j\phi_0}\left( 1- j \omega_0 t/s \right)^{-(s+1)}$, and $\tilde{t}_\pm = t \pm \tilde{R}/c + ja/c$. The function $f(t)$ is the inverse Fourier transform of the Poisson-like frequency spectrum of the pulse, in which $\omega_0 = c k_0$ is the frequency of maximum amplitude and $\phi_0$ is a constant phase~\cite{caron99_jmodoptic}. The parameter $a$, called the confocal parameter, is monotonically related to the beam waist size and characterizes the beam's degree of paraxiality: $k_0 a \sim 1$ for tight focusing conditions, while $k_0 a \gg 1$ for paraxial beams. The pulse duration $T$, which may be defined as twice the root-mean-square width of $|E_z|^2$, increases monotonically with $s$. In the limit $k_0 a \gg 1$ and $s \gg 1$, Eqs.~\eqref{eq:npTM01Er}--\eqref{eq:npTM01Hphi} reduce to the familiar paraxial TM$_{01}$ Gaussian pulse~\cite{fortin10_jpb}.
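As a concrete illustration, the on-axis limit of Eq.~(\ref{eq:npTM01Ez}) is simple to evaluate numerically: at $r=0$ one has $\sin\tilde{\theta}=0$ and $\cos\tilde{\theta}=1$, so only the first bracketed term survives. The following Python sketch is our own illustration rather than code from the original study; the wavelength and the amplitude $E_0$ are placeholder values.
\begin{verbatim}
import numpy as np

# Illustrative parameters (not taken from the figures)
lam  = 800e-9              # central wavelength [m], assumed
c    = 299792458.0         # speed of light [m/s]
k0   = 2.0*np.pi/lam
w0   = c*k0                # frequency of maximum spectral amplitude
a    = 1.0/k0              # confocal parameter: k0*a = 1 (tight focusing)
s    = 10.0                # Poisson-spectrum parameter
phi0 = 0.0                 # constant phase
E0   = 1.0                 # amplitude parameter (arbitrary units here)

def f(t, n=0):
    """n-th time derivative of f(t) = exp(-i phi0) (1 - i w0 t/s)^-(s+1)."""
    coef = np.prod([s + 1.0 + m for m in range(n)])*(1j*w0/s)**n
    return np.exp(-1j*phi0)*coef*(1.0 - 1j*w0*t/s)**(-(s + 1.0 + n))

def Ez_on_axis(z, t):
    """E_z(r=0,z,t): on axis only the (3cos^2-1) term of E_z survives."""
    R   = z + 1j*a             # complex distance R~ at r = 0
    tp  = t + R/c + 1j*a/c     # t~_+
    tm  = t - R/c + 1j*a/c     # t~_-
    G0m = f(tp) - f(tm)        # G_-^(0)
    G1p = f(tp, 1) + f(tm, 1)  # G_+^(1)
    return np.real(2.0*E0*(G0m/R**3 - G1p/(c*R**2)))
\end{verbatim}
Plotting \texttt{Ez\_on\_axis} over $z$ at successive times should reproduce the qualitative behaviour of Fig.~\ref{fig:1}, with two counterpropagating pulse components emerging from the focal region.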
\begin{figure}[!t]
\centering
\includegraphics[width=0.32\textwidth]{./npTM01_Ez_z_ka1_s10a} \
\includegraphics[width=0.32\textwidth]{./npTM01_Ez_z_ka1_s10b} \
\includegraphics[width=0.32\textwidth]{./npTM01_Ez_z_ka1_s10c}
\caption{Longitudinal on-axis electric field of a TM$_{01}$ pulse with $k_0a = 1$ and $s=10$. \label{fig:1}}
\end{figure}
The TM$_{01}$ pulsed beam described above may be produced by focusing a collimated radially polarized input beam with a high aperture parabolic mirror. Its field distribution consists of two counterpropagating pulse components, as shown in Fig.~\ref{fig:1}~\cite{april10_optexpress}.
\section{On-axis acceleration in the nonparaxial and ultrashort pulse regime\label{sec:results}}
Direct acceleration is simulated by integrating the conventional Lorentz force equation for an electron initially at rest at position $z_0$ on the optical axis and outside the laser pulse. Since $E_r$ and $H_\phi$ vanish at $r=0$, the particle is accelerated by $E_z$ along the optical axis.
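A minimal integrator for this on-axis dynamics, reusing \texttt{Ez\_on\_axis} and the parameters of the previous sketch, might look as follows. This is our illustration: the initial position, time window, and tolerances are arbitrary choices (the results below optimize over $z_0$ and $\phi_0$), and $E_0$ must first be set to a physical amplitude in V/m consistent with the desired peak power.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.constants import m_e, e, c

def rhs(t, y):
    z, p = y                                     # position [m], momentum [kg m/s]
    gamma = np.sqrt(1.0 + (p/(m_e*c))**2)
    return [p/(gamma*m_e), -e*Ez_on_axis(z, t)]  # electron charge = -e

z0  = -5.0*lam                                   # illustrative initial position
sol = solve_ivp(rhs, (-200.0/w0, 2000.0/w0), [z0, 0.0],
                max_step=0.05/w0, rtol=1e-9, atol=1e-12)

p_f      = sol.y[1, -1]
gain_MeV = (np.sqrt(1.0 + (p_f/(m_e*c))**2) - 1.0)*m_e*c**2/e/1e6
\end{verbatim}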
\begin{figure}[!b]
\centering
\includegraphics[width=0.45\textwidth]{./npTM01_energy_power_cal} \ \ \
\includegraphics[width=0.45\textwidth]{./npTM01_energy_power_uncal}
\caption{Maximum (a) normalized and (b) absolute energy gain of an electron initially at rest versus the laser pulse peak power for different values of $k_0 a$ and $s$. The curve with $\{k_0a=124,s=155\}$ corresponds to the limit of the paraxial regime investigated in~\cite{wong10_optexpress}. Figure taken from~\cite{marceau12_optlett}. \label{fig:2}}
\end{figure}
Figure \ref{fig:2} illustrates the variation of the maximum energy gain available $\Delta W_\tr{max}$ (after optimizing for $z_0$ and $\phi_0$) with the laser peak power $P_\tr{peak}$ for different combinations of $k_0a$ and $s$. Figure \ref{fig:2}a, in which $\Delta W_\tr{max}$ is expressed as a fraction of the theoretical energy gain limit $\Delta W_\tr{lim}$~\cite{fortin10_jpb}, shows that for constant values of $s$, the threshold power above which significant acceleration occurs is greatly reduced as $k_0 a$ decreases, i.e., as the focus is made tighter. According to Fig.~\ref{fig:2}b, MeV energy gains may be reached under tight focusing conditions with laser peak powers as low as 15 gigawatts. In contrast, a peak power about $10^3$ times greater is required to reach the same energy with paraxial pulses. At high peak power, Fig.~\ref{fig:2}a shows that shorter pulses yield a more efficient acceleration, with a ratio $\Delta W_\tr{max}/\Delta W_\tr{lim}$ reaching 80\% for single-cycle ($s=1$) pulses. Additional details about those results can be found in~\cite{marceau12_optlett}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{./npTM01_energy_z0phi_ka1_s1_2PW_delim} \ \ \
\includegraphics[width=0.35\textwidth]{./npTM01_FBenergy_ka}
\caption{(a) Energy gain of an electron initially at rest versus $z_0$ and $\phi_0$ for a laser pulse with $k_0 a = 1$, $s=1$. The dashed curve delimits the regions of forward and backward acceleration. (b)--(c) Maximum energy gain for electrons accelerated forward and backward versus $k_0 a$. In all figures, $P_\tr{peak}=2\times10^{15}$ W. \label{fig:3}}
\end{figure}
In the highly nonparaxial regime ($k_0 a \sim 1$), a closer look at the dynamics in the $(z_0,\phi_0)$ parameter space reveals the existence of two different types of acceleration (see Fig.~\ref{fig:3}a). In the first type, the electron is accelerated in the positive $z$ direction (forward acceleration), and may reach a high energy gain if its motion is synchronized with a negative half-cycle of the forward-propagating component of the beam. In the second type, the electron is accelerated in the negative $z$ direction (backward acceleration), and may similarly experience subcycle acceleration from the backward-propagating component of the beam. The maximum energy gain available from forward and backward acceleration is illustrated in Figs.~\ref{fig:3}b--c. A significant backward acceleration is only observed under tight focusing conditions ($k_0 a < 10$), since the amplitude of the backward-propagating component of the laser beam rapidly decreases as $k_0 a$ increases.
\section{Conclusion \label{sec:conclu}}
We have highlighted the importance of going beyond the paraxial and slowly varying envelope approximations in the analysis of electron acceleration in vacuum by radially polarized laser beams. It was shown that the acceleration threshold power may be greatly reduced under tight focusing conditions, which demonstrates that direct acceleration is much more accessible to current laser technology than previously expected. Moreover, our results hint that high-aperture focusing optics could be used to generate synchronized counterpropagating electron bunches. The proposed acceleration scheme could therefore find applications in the context of pump-probe experiments.
This research was supported by the Natural Sciences and Engineering Research Council of Canada, Le Fonds de Recherche du Qu{\'e}bec, and the Canadian Institute for Photonic Innovations.
\section{INTRODUCTION}
\subsection{Motivation}
A wide range of quasi-one-dimensional materials undergo a
structural transition, known as the Peierls or charge-density-wave
(CDW) transition, as the temperature is lowered \cite{gru,con,gor,car}.
A periodic lattice distortion, with wave vector, $2k_F$, twice that of
the Fermi wavevector, develops along the chains.
Anomalies are seen in the electronic properties
due to the opening of an energy gap over the Fermi surface.
Over the past decade, due to the development of high-quality
samples and higher resolution experimental techniques,
new data has become available
which allows a quantitative comparison of experiment with theory.
The most widely studied material is the blue bronze, K$_{0.3}$MoO$_3$.
There is a well-defined three-dimensional transition at
$T_P=183$ K and careful measurements have been made of
thermodynamic anomalies \cite{bri} and CDW coherence lengths \cite{gir}
at the transition. The critical region, estimated from the
Ginzburg criterion \cite{gin} is only a few percent of
the transition temperature and so the transition should be described
by an anisotropic three-dimensional Ginzburg-Landau free energy
functional, except close to the transition temperature.
The challenge is to derive from a microscopic theory
the coefficients in the Ginzburg-Landau free energy
so a quantitative comparison
can be made between theory and experiment.
Inspiration is provided by the case of superconductivity.
The superconducting transition is well described by
Ginzburg-Landau theory and the coefficients can be calculated
from BCS theory \cite{schr} and depend on microscopic
parameters such as the normal state density of states,
Debye frequency, and the electron-phonon coupling.
This program is so successful that one can even consider
refinements to BCS theory, such as strong coupling effects,
in order to get better agreement between experiment and
theory \cite{carb}.
However, the problem of the CDW transition is more difficult
because of the large fluctuations due to the quasi-one-dimensionality.
\subsection{Ginzburg-Landau theory}
The Peierls transition is described by an order parameter
which is proportional to the $2k_F$
lattice distortion along the chains.
The order parameter is complex if the lattice distortion
is incommensurate with the lattice. For a commensurate
lattice distortion (e.g., a half-filled band) the order
parameter is real.
I recently considered the general problem of Ginzburg-Landau
theory for a three-dimensional phase transition,
described by a complex order parameter,
in a system of weakly coupled chains \cite{mck2}.
The key results of that study are now summarized,
partly to put this paper in a broader context.
The Ginzburg-Landau free energy functional $F_1[\phi]$
for a {\it single} chain with a complex order
parameter $\phi(z)$, where $z$ is the co-ordinate along
the chain, is
\begin{equation}
F_1[\phi]=\int dz \left[
a \mid\phi\mid^2 + \ b \mid\phi\mid^4 +
\ c \mid {\partial \phi\over \partial z}\mid ^2 \right].
\label{aa1}
\end{equation}
Near the single chain mean-field transition temperature
$T_0$ the second-order coefficient $a(T)$ can be written
\begin{equation}
a(T)= a^\prime \left( {T \over T_0} - 1 \right).
\label{aa10}
\end{equation}
Due to fluctuations in the order parameter
this one-dimensional system cannot develop long-range order at
finite temperature \cite{lan,sca}.
To describe a finite-temperature
phase transition, consider a set of weakly interacting
chains. If $\phi_i(z)$ is the order parameter on the $i$-th chain
the free energy functional for the system is
\begin{equation}
F[\phi_i(z)]=\sum_i F_1[\phi_i(z)] -
{J \over 4} \sum_{<i,j>} \int dz {\rm Re} [\phi_i(z)^* \phi_j(z)]
\label{ad1}
\end{equation}
where $J$ describes the interchain interactions
between nearest neighbours.
A mean-field treatment of this functional will only
give accurate results if the width of the three-dimensional
critical region is much smaller than $T_0$. This requires
that the width of the one-dimensional critical region $\Delta t_{1D}
\equiv (bT_0)^{2/3}/a^\prime c^{1/3}$
be sufficiently small that
\begin{equation}
\Delta t_{1D} \ll \left( { J \over a^\prime} \right)^{2/3}.
\label{ad10}
\end{equation}
If this is not the case one can integrate out
the one-dimensional fluctuations to derive a new Ginzburg-Landau
functional with renormalized coefficients,
\begin{equation}
\tilde F[\Phi(x,y,z)]= {1 \over a_x a_y}\int d^3 x \left[
A \mid \Phi \mid^2
+B \mid \Phi \mid^4
+ C_x \mid {\partial \Phi \over \partial x }\mid^2
+C_y \mid {\partial \Phi \over \partial y }\mid^2
+ C_z \mid {\partial \Phi \over \partial z }\mid^2
\right]
\label{bg1}
\end{equation}
where $a_x$ and $a_y$ are the lattice constants
perpendicular to the chains.
The new order parameter $\Phi(x,y,z)$, is proportional
to the average of
$\phi_i(z)$ over neighbouring chains.
The three-dimensional mean-field temperature $T_{3D}$ is defined
as the temperature at which the coefficient $A(T)$
changes sign.
Close to $T_{3D}$
\begin{equation}
A=A^\prime\left({T \over T_{3D}} -1 \right).
\label{abg1}
\end{equation}
The transition temperature $T_{3D}$ and the
coefficients $A^\prime$, $B$, $C_x$, $C_y$, and $C_z$
can be written in terms of the interchain interaction
$J$ and the coefficients
$a$, $b$, and $c$ of a single chain.
The coefficients in (\ref{bg1}) determine measurable quantities associated
with the transition such as the specific heat jump,
coherence lengths and width of the critical region.
Most of the physics is
determined by a {\it single} dimensionless parameter
\begin{equation}
\kappa \equiv { 2 (bT)^2 \over |a|^3 c}
\label{aat1}
\end{equation}
which is a measure of the fluctuations along a single chain.
It was assumed that the coefficients $a$, $b$, and $c$
were independent of temperature and the measurable
quantities at the transition were determined as a function
of the interchain coupling. The transition temperature
increases as the interchain coupling increases. The coherence
length and specific heat jump depends only on the
single chain coherence length, $\xi_0 \equiv (c /|a|)^{1/2}$,
and the interchain coupling.
The width of the critical region, estimated from the
Ginzburg criterion, was virtually parameter independent,
being about 5-8 per cent of the transition temperature for
a tetragonal crystal. Such a narrow critical region is
consistent with experiment, and shows that Ginzburg-Landau
theory should be valid over a broad temperature range.
This paper uses a simple model to demonstrate
some of the difficulties involved in deriving the coefficients
$a$, $b$, $c$, and $J$ from a realistic microscopic theory.
\subsection{Microscopic theory}
The basic physics of quasi-one-dimensional CDW
materials is believed to be described by a Hamiltonian due
to Fr\"ohlich \cite{fro} which describes electrons
with a linear coupling to phonons.
Even in one dimension this is a highly non-trivial
many-body system and must treated by some approximation scheme.
The simplest treatment \cite{fro,ric0,all} is a rigid-lattice one
in which the phonons associated with the lattice
distortion are treated in the mean-field approximation
and the zero-point and thermal lattice motions are neglected.
The resulting
theory is mathematically identical to BCS theory \cite{all}.
An energy gap opens
at the Fermi surface at a temperature
$T_{RL} \simeq 1.14E_F e^{-1/ \lambda}$ where $E_F$ is
the Fermi energy and $\lambda$ is the dimensionless
electron-phonon coupling.
$T_{RL}$ is related to the
zero-temperature energy gap $\Delta_{RL}(0)$ by
\begin{equation}
\Delta_{RL}(0) = 1.76 k_B T_{RL}.
\label{trl}
\end{equation}
In this approximation the coefficients in the single-chain
Ginzburg -- Landau free energy functional (\ref{aa1}) are \cite{all}
\begin{equation}
a_{RL}(T)= {1 \over \pi v_F} \ln \left({T \over T_{RL}}\right)
\label{mfa}
\end{equation}
\begin{equation}
b_{RL}(T)= {1 \over \pi v_F} {7\zeta(3)\over(4\pi T)^2}
\label{mfb}
\end{equation}
\begin{equation}
c_{RL}(T)={1 \over \pi v_F}{7\zeta(3)v_F^2\over(4\pi T)^2}
\label{mfc}
\end{equation}
where $v_F$ is the Fermi velocity and $\zeta(3)$ is
the Riemann zeta function.
If $4t_\perp$ is the electronic bandwidth perpendicular to the
chains (see (\ref{fk}))
then the interchain coupling is given by \cite{sch,hor}
\begin{equation}
J_{RL}= \left( {4 t_\perp \over v_F}\right)^2 c_{RL}(T).
\label{mfc2}
\end{equation}
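For the numerical comparisons made below it is convenient to have the rigid-lattice coefficients (\ref{mfa})--(\ref{mfc2}) in executable form. The following Python sketch (ours) uses units with $\hbar=k_B=1$ and measures energies in units of $T_{RL}$:
\begin{verbatim}
import numpy as np

ZETA3 = 1.2020569031595943             # Riemann zeta(3)

def rl_coefficients(T, T_RL=1.0, v_F=1.0):
    """Rigid-lattice Ginzburg-Landau coefficients a, b, c."""
    a = np.log(T/T_RL)/(np.pi*v_F)
    b = 7.0*ZETA3/((np.pi*v_F)*(4.0*np.pi*T)**2)
    c = b*v_F**2
    return a, b, c

def rl_interchain(T, t_perp, T_RL=1.0, v_F=1.0):
    """Interchain coupling J_RL = (4 t_perp/v_F)^2 c_RL(T)."""
    return (4.0*t_perp/v_F)**2 * rl_coefficients(T, T_RL, v_F)[2]
\end{verbatim}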
It might be hoped that the transition in real materials
can be described by the mean-field theory of the
functional (\ref{aa1}) with the coefficients (\ref{mfa}-\ref{mfc}).
However, this is not the case for several reasons.
(i) The width of the critical regime given by the one-dimensional
Ginzburg criterion \cite{ma} is very large:
$\Delta t_{1D} = 0.8 $ \cite{sch}, suggesting that
fluctuations are important because condition (\ref{ad10}) is
not satisfied.
(ii) A rigid-lattice treatment predicts a metallic density of
states at all temperatures above $T_{RL}$.
In contrast, magnetic susceptibility \cite{sco,joh,joh3},
optical conductivity \cite{deg,deg2,dre,dre2,bru,ber},
and photoemission \cite{dar,dar2,hwu} measurements suggest
that there is a gap
or pseudogap in the density of states for a broad temperature range
above $T_P$.
(iii) The transition temperature, specific heat jump, and coherence
lengths are inconsistent with rigid lattice predictions
(Table \ref{table1}).
This failure should not be surprising given that recent work has
shown that in the three-dimensionally ordered Peierls state
the zero-point and thermal lattice motions must
be taken into account to obtain a quantitative
description of the optical properties \cite{deg,deg2,mck,kim,lon}.
The next level of approximation is to use the coefficients
(\ref{mfa}-\ref{mfb}) and take into account the intrachain
order parameter fluctuations and the interchain coupling
and use results similar to those in References \cite{mck2}.
This is the approach that has been taken previously
\cite{sch,lee,die}.
There are two problems with this approach. First, if
the dimensionless parameter $\kappa$, given by (\ref{aat1}), is evaluated
using the expressions (\ref{mfa}-\ref{mfc})
the result is
\begin{equation}
\kappa_{RL}(T)= { 7 \zeta(3) \over 8 |\ln (T/T_{RL})|^3 }.
\label{at1}
\end{equation}
Hence, the temperature dependence is quite different
from the dependence
$\kappa \sim T^2$ that was assumed in References \cite{mck2,die,sca2}
and the analysis there needs to be modified.
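As a check, the short sketch below (ours) evaluates $\kappa_{RL}(T)$ from (\ref{at1}) and verifies that, with the rigid-lattice coefficients and $a^\prime=1/\pi v_F$, the one-dimensional Ginzburg width $\Delta t_{1D}$ collapses to a pure number, reproducing the value $0.8$ quoted above:
\begin{verbatim}
import numpy as np

ZETA3 = 1.2020569031595943

def kappa_RL(T, T_RL=1.0):
    """kappa with rigid-lattice coefficients; depends only on T/T_RL."""
    return 7.0*ZETA3/(8.0*np.abs(np.log(T/T_RL))**3)

# Delta t_1D = (b T_0)^{2/3}/(a' c^{1/3}) with a' = 1/(pi v_F): all
# microscopic parameters cancel, leaving a pure number.
print((np.sqrt(7.0*ZETA3)/4.0)**(2.0/3.0))   # ~0.81
\end{verbatim}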
The second and more serious problem is
one of self-consistency. The coefficients $a$, $b$, and $c$
are calculated neglecting fluctuations in the order
parameter which will modify the electronic properties
which in turn will modify the coefficients.
In this paper a simple model is used to demonstrate that
the fluctuations have a significant effect on the
single-chain coefficients.
An alternative microscopic theory, due to Schulz \cite{sch},
and which takes into account fluctuations in only the phase
of the order parameter
is briefly reviewed in Appendix \ref{appsch}.
\subsection{Overview}
Discrepancies between phonon rigid-lattice
theory and the observed properties of the Peierls state well
below the transition temperature $T_P$
were recently resolved \cite{mck,kim} by taking into account the
effect of the zero-point and thermal lattice motion on
the electronic properties. It was shown that the lattice fluctuations
have an effect similar to a Gaussian random potential. This mapping
breaks down near the transition temperature because of the phonon dispersion
due to the softening of the phonons near $2k_F$. In this paper
this dispersion is taken into account and the effect of the
large thermal lattice motion near the transition temperature
is studied.
The thermal lattice motion has the same effect on the electronic properties
as a static random potential with finite correlation length.
Close to the transition temperature, the problem reduces to
a simple model, corresponding to a single classical phonon,
which can be treated exactly (Section \ref{secham}).
This model was first studied by Sadovskii \cite{sad}.
It was recently used in the description of the destruction of
spin-density-wave states by high magnetic fields \cite{mck0}.
The one-electron Green's function is calculated in Section \ref{secgreen}.
There is a pseudogap in the density of states
(Fig \ref{figdos}).
The complexity of this simple model is indicated by two non-trivial
many-body effects: (i) Perturbation theory diverges
but is Borel summable. (ii) The traditional quasi-particle
picture breaks down
(Figure \ref{figspec}),
reminiscent of behaviour seen in Luttinger liquids \cite{voi}.
To illustrate that calculations based on perturbation
theory can be unreliable it is shown that a predicted scaling
relation between the specific heat and the temperature
derivative of the magnetic susceptibility \cite{cha} does not hold
if the {\it exact}, rather than approximate, density of
states is used in the calculation.
Using this model the coefficients $a$, $b$, and $c$
are calculated in Appendix \ref{seccoeff}.
The coefficients deviate significantly from the
rigid-lattice values if the pseudogap is
comparable to or larger than the transition temperature.
In Section \ref{secest} experimental data
is used to estimate the pseudogap in
K$_{0.3}$MoO$_3$.
\subsection{Previous work on fluctuations and the pseudogap}
To put this paper in context some important earlier work
is briefly reviewed.
Lee, Rice and Anderson \cite{lee}
considered how fluctuations in the order parameter
produce a pseudogap in the density of states.
It is important to be aware of the assumptions
made in their calculation. Although their results describe
much of the physics on a qualitative level, for the
reasons described below, their results cannot be expected to give
a quantitative description of the density of states near
the CDW transition.
The starting point of Lee, Rice, and Anderson
was the one-dimensional
Ginzburg-Landau functional (\ref{aa1}) with a {\it real}
order parameter and with the coefficients derived from
rigid-lattice theory (see equations (\ref{mfa}-\ref{mfc})).
Earlier, Scalapino, Sears, and Ferrell \cite{sca}
evaluated the correlation length $\xi_\parallel(T)$
for one-dimensional Ginzburg-Landau theory
with an exact treatment of the fluctuations in the
order parameter;
$\xi_\parallel(T)$ only diverges as $T \to 0$.
The results of this calculation were used by Lee, Rice, and Anderson as
input in a random potential
with correlations given by
\begin{equation}
\langle\Delta(z)\Delta(z')\rangle=\Delta_{RL}
(T)^{2} \exp(-\mid z-z'\mid /\xi_\parallel(T)) \label{af}
\end{equation}
where $\Delta_{RL}(T)$ is the rigid-lattice (BCS) order parameter
and the average is over the thermal fluctuations of the
order parameter.
The electronic
Green's function was calculated using equation (\ref{af}) and a
formula originally used for liquid
metals (essentially, second-order perturbation theory
for the random potential).
They found a gradual appearance of a gap as the
temperature decreased. For $T_P < 0.25 T_{RL},$ an absolute gap of
magnitude $\Delta_{RL}(0)$ appears.
Lee, Rice and Anderson
suggested that a three-dimensional transition
occurs for $T_P \simeq 0.25 T_{RL}$ based on the
temperature at which $\xi_\parallel(T)$ becomes extremely large.
There are several problems with trying to use these results
to give a quantitative description of the CDW transtion,
because of the following assumptions.
(i) {\it A real order parameter.}
Most CDW transitions are described by a complex order parameter,
for which quantitatively distinct behaviour occurs.
For example, the transition to very large correlation
lengths for $T_P \simeq 0.25 T_{RL}$ does not occur
for a complex order parameter. (See Figure 6 in Reference
\cite{sca}).
(ii) {\it Rigid-lattice coefficients.}
It is shown in this paper that the pseudogap due to the
thermal lattice motion causes the Ginzburg-Landau coefficients to deviate
significantly from their rigid-lattice values (Figure \ref{figcoeff}).
(iii) {\it Perturbation theory.}
It is demonstrated in this paper that this is unreliable.
In particular as $\xi_\parallel(T) \to \infty $ in (\ref{af})
only a pseudogap rather than an absolute gap
develops in the density of states (Figure \ref{figdos}).
Rice and Str\"assler \cite {ric} calculated the contribution of the phonon
fluctuations to the electronic self energy in the Migdal approximation,
i.e., second-order perturbation theory.
Interchain interactions were included through an anisotropic phonon
dispersion. They found
a pseudogap in the density of states
above the transition temperature. At $T_P$ there is an absolute gap
whose magnitude is determined by the electron-phonon coupling
and the interchain interactions.
They equated the observed transition temperature
with the single-chain mean-field transition temperature $T_0$
which they found to be significantly
reduced below the rigid-lattice value $T_{RL}$
and to vanish as the interchain coupling vanishes.
In the limit of weak interchain interactions the analytic
form of the density of states
is identical to that of Lee, Rice, and Anderson \cite{lee}.
However, it is not commonly appreciated that the
origin of the pseudogap in the two calculations is quite different.
The magnitude of the Rice and Str\"assler pseudogap
is proportional to the thermal lattice motion (compare Section \ref{sectherm}),
while the pseudogap studied by Lee, Rice, and Anderson is
by assumption equal to the rigid-lattice gap $\Delta_{RL}(T)$.
Calculations similar to that of Rice and Str\"assler have been
performed by Bjeli\~s and Bari\~si\~c \cite {bje}, Suzumura and Kurihara
\cite {suz}, Patton and Sham \cite {sha}, and Chandra \cite{cha}.
The main problem with these calculations is that they are
based on perturbation theory.
\section {MODEL HAMILTONIAN}
\label{secham}
The starting point for this paper is the
following one-dimensional model. The states in
an electron gas with Fermi velocity $v_F$
are described by spinors $\Psi(z)$. The upper and
lower components describe left and right moving electrons,
respectively.
The phonons are described by the field
\begin{equation}
\Delta(z) = g \sum_q \sqrt {\hbar \over 2M \omega_{2k_F+q}}
(b_{2k_F+q} + b_{-2k_F-q}^\dagger ) e^{iqz}
\end{equation}
where $b_s$ destroys a phonon of momentum $s$ and frequency $\omega_s$
and $g$ is the linear electron-phonon coupling.
The dimensionless electron-phonon coupling $\lambda$ is defined by
\begin{equation}
\lambda = 2 g^2 a_z/\pi v_F \omega_Q \label {bd}
\end{equation}
where $a_z$ is the lattice constant along the chains.
The electronic part of the Hamiltonian is \cite{bra}
\begin{equation}
H_{el} = \int dz \Psi^\dagger (z) \bigg[ - iv_F \sigma_3
{\partial \over \partial z} + {1 \over 2}(\Delta(z) \sigma_+ + \Delta(z)^*
\sigma_-)\bigg] \Psi(z)
\label{hamel}
\end{equation}
where $\sigma_3$ and
$ \sigma_{\pm} \equiv \sigma_1 \pm i \sigma_2$ are Pauli matrices.
This paper focuses on the following model where $\Delta(z)$
is replaced with a random potential with zero mean
and finite length correlations
\begin{equation}
\langle \Delta(z)\rangle = 0 \ \ \ \; \ \ \ \ \
\langle \Delta(z)\Delta(z')^* \rangle = \psi^2
\exp(-|z-z'|/\xi_\parallel).
\label{cor2}
\end{equation}
$\xi_\parallel$ is the CDW correlation length along the chains.
In most of this paper $\psi$ will be treated as a parameter.
It is central to this paper, being
a measure of the thermal lattice motion and
a measure of the pseudogap in the density of states.
This paper focuses on behaviour near $T_P$ and so the
limit $\xi_\parallel \psi/v_F \to \infty$ is taken.
A rough argument is now given to justify using this
model to describe thermal lattice motion near the phase transition.
\subsection {Thermal lattice motion}
\label{sectherm}
In rigid-lattice theory $\Delta(z)$ is replaced by its expectation value
$\langle \Delta(z) \rangle =\Delta_o$.
To go beyond this
the effect of the quantum and thermal
lattice fluctuations in the Peierls state was
recently modelled \cite{mck,kim,mck1} by treating $\Delta(z)$
as a static random potential with mean $\Delta_o$ and correlations
\begin{equation}
\langle \Delta(z)\Delta(z')^* \rangle =\Delta_o^2 + \gamma \delta(z-z')
\label{corprl}
\end{equation}
where
\begin{equation}
\gamma= {1 \over 2}\pi \lambda v_F \omega_{2k_F}
\coth\left({\omega_{2k_F} \over 2T}\right).
\label{gamma}
\end{equation}
This model is expected to be reliable except near the transition
temperature where there is significant dispersion in the phonons.
This dispersion is now taken into account.
Near the transition temperature the
phonons can be treated {\it classically} since in most materials the
frequencies of the phonons with wavevector near $2k_F$ are much smaller than
the transition temperature (Table \ref{table2}).
Following Rice and St\"assler \cite{ric} renormalized phonon frequencies
$\Omega(q,T)$ are used in
the expression for the correlations of the random potential
\begin{equation}
\langle \Delta(z)\Delta(z')^* \rangle =
\lambda\pi T{ v_F\over a_z}
\sum_q{\omega_Q^2\over\Omega(q,T)^2}
e^{iq(z-z')}.
\label{cor}
\end{equation}
At the level of the Gaussian approximation the
phonon dispersion relation can be written in the form
\begin{equation}
\Omega(q,T)^2=\Omega(T)^2 \bigl(1 + (q- 2k_F)^2
\xi_{\parallel}(T)^2 \bigr). \label{be1}
\end{equation}
Evaluating (\ref{cor}) then gives (\ref{cor2}) where
\begin{equation}
\psi^2=\lambda \pi T
\left({\omega_Q \over \Omega(T)}\right)^2 {v_F\over 2\xi_\parallel(T)}.
\label{ce}
\end{equation}
Note that this expression, together with (\ref{cor2}),
is quite different from (\ref{af})
used by Lee, Rice, and Anderson \cite{lee}.
In the limit $\xi_\parallel \to 0$ the phonons become
dispersionless, the sum in (\ref{cor}) becomes a delta function,
and one recovers (\ref{corprl}) (with $\Delta_o =0$) and (\ref{gamma}).
The rms fluctuations $\delta u$ in the
positions of the atoms due to
thermal lattice motion is related to the Debye-Waller factor
and given by
\begin{equation}
(\delta u)^2=kT \sum_q {1\over M\Omega(\vec{q},T)^2} \label {ca}
\end{equation}
The pseudogap parameter is related to $\delta u$ by $\psi = (2M \omega_Q)^{1/2} g \delta u$.
Hence $\psi$ {\it is proportional to the thermal lattice motion.}
If $\psi$ is defined by (\ref{ce}) it
diverges as $T \to T_P$ because
\begin{equation}
{\rm as}\ T \to T_P,\ \Omega(T) \to 0,\ \xi_\parallel(T) \to
\infty \ {\rm with}\ \Omega(T)\xi_\parallel(T) \
{\rm finite.} \label{bf}
\end{equation}
However, in a real crystal the phonons are three-dimensional
and the thermal lattice motion is finite.
Define
\begin{equation}
\psi^2=\lambda\pi T{ v_F\over a_z}
\sum_{\vec{q}}{\omega_Q^2\over\Omega(\vec{q},T)^2} \label{cd}
\end{equation}
and write the three-dimensional dispersion relation
(for a tetragonal crystal) in the form
\begin{equation}
\Omega(\vec q,T)^2=\Omega(T)^2 \bigl(1 + (q_{\parallel}- 2k_F)^2
\xi_{\parallel}(T)^2
+ (q_{\perp}- Q_{\perp})^2 \xi_{\perp}(T)^2 \bigr) \label{be}
\end{equation}
where $\vec Q =(Q_\perp,2k_F)$ is the nesting vector associated with
the three-dimensional CDW transition (see equation (\ref{bb})).
Due to the quasi-one-dimensionality of the crystal
the dispersion perpendicular to the chains is small and
$\xi_\perp \ll \xi_\parallel$.
Let $a_x$ denote the lattice constant perpendicular to the
chains.
Performing the integral over the wave vector in (\ref{cd}) gives
\cite{alternative}
\begin{equation}
\psi^2 = \lambda \pi T
\left({\omega_Q\over\Omega(T)}\right)^2
{a_x^2 v_F \over \pi^2 \xi_\parallel(T)\xi_\perp(T)^2}
\left[\sqrt{1+(\rho_c\xi_\perp(T))^2}-1\right]
\label{cf}
\end{equation}
where $\rho_c$ is a wavevector cutoff perpendicular to the chains.
If $\rho_c=\pi/a_x$ this expression reduces to (\ref{ce}) in the
one-dimensional limit $\xi_\perp \ll a_x$.
Near the transition, $\xi_\perp(T) \to \infty$, giving
\begin{equation}
\psi(T_P)^2 = \lambda \pi T
\left({\omega_Q\over\Omega(T)}\right)^2 {a_x v_F \over \pi
\xi_\parallel(T)\xi_\perp(T) }\label{cg}
\end{equation}
From (\ref{bf}) and the fact that $\xi_\parallel(T)/\xi_\perp(T)$
is finite it follows that $\psi$ is finite as $T \to T_P$.
Note that the magnitude of this quantity is dependent on the
choice of the momentum cutoff $\rho_c$.
The above treatment is quite similar to Schulz's discussion of
fluctuations in the order parameter in the Gaussian approximation
\cite{sch}.
Although the expressions (\ref{ce}), (\ref{cf}), and
(\ref{cg}) for $\psi$ in the different regimes look very
different $\psi$ is actually weakly temperature dependent and does not
vary much in magnitude. To see this (\ref{cf}) can be written as
\begin{equation}
\psi^2 = (\psi(T_P))^2
{1 \over \rho_c\xi_\perp(T)}\left[\sqrt{1+(\rho_c\xi_\perp(T))^2}-1\right]
\label{cg2}
\end{equation}
The factor multiplying $(\psi(T_P))^2$ is a slowly varying function of $\rho_c\xi_\perp(T)$.
Since well above $T_P$, $\rho_c\xi_\perp(T) \sim 1$
(e.g., for K$_{0.3}$MoO$_3$ $\xi_\perp(300 {\rm K}) \sim 4 \AA $ \cite{gir})
this factor does not vary by more than a factor of two
although $\rho_c\xi_\perp(T)$ varies by several orders of
magnitude.
Johnston et al. \cite{joh} used a crude method of estimating the
pseudogap and found it to be weakly temperature dependent above $T_P$
for K$_{0.3}$MoO$_3$.
\subsection{Solution of the model}
\label{secsol}
Sadovskii \cite{sad2} solved the one-dimensional model (\ref{hamel})
and (\ref{cor2}) exactly. He calculated the one-electron Green's function
in terms of a continued fraction by
finding a recursion relation satisfied by the self energy.
He found \cite{sad}
that the Green's function reduced to a simple analytic form
in the limit of large correlation lengths ($\xi_\parallel \gg v_F/\psi$).
This can be seen by the following rough argument.
In the limit $\xi_\parallel \to \infty$
the moments of the random potential $\Delta(z)$ are independent of
position:
\begin{equation}
\langle \Delta(z)\rangle = 0
\ \ \ \ \ \ \langle \Delta(z)\Delta(z')^* \rangle = \psi^2.
\label{cori}
\end{equation}
This means that
the random potential has only one non-zero Fourier component, i.e.,
the one with zero wavevector.
The potential can be written $\Delta(z)=v \psi$
where $v$ is a complex random variable with a Gaussian distribution.
Averages over the random potential can then be written
\begin{equation}
\langle A[\Delta(z)]\rangle=
\int{dvdv^*\over\pi} e^{-vv^*} A[v \psi].
\end{equation}
It is then a straightforward exercise
to evaluate the averages of different electronic
Green's functions.
\section {ONE-ELECTRON GREEN'S FUNCTION NEAR $T_P$}
\label{secgreen}
The matrix Matsubara Green's function, defined at the Matsubara energies
$\epsilon_n=(2n + 1) \pi T$, for the Hamiltonian (\ref{hamel})
with (\ref{cori}) is
\begin{equation}
\hat G\left(i\epsilon_n,k\right)=\int{dvdv^*\over\pi} e^{-vv^*}
\hat G \left(i\epsilon_n,k,v\right)\label{ea}
\end{equation}
where
\begin{equation}
\hat G\left(i\epsilon_n,k,v\right)={-(i\epsilon_n
-kv_F\sigma_3-\psi(v\sigma_++v^*\sigma_-))\over\epsilon_n^2
+(kv_F)^2+vv^*\psi^2}\label{ea2}
\end{equation}
is the matrix Green's function for the Hamiltonian (\ref{hamel})
with $\Delta(z)=v\psi$.
The off-diagonal
(anomalous) terms vanish when the integral over $v$ is performed
indicating there is no long-range order. The integral over
the phase of $v$ can be performed; changing variables to
$\varphi=vv^*$ then gives
\begin{equation}
\hat G(i\epsilon_n,k)=-(i\epsilon_n
-kv_F\sigma_3)\int_0^\infty d\varphi{e^{-\varphi}\over\epsilon_n^2
+(kv_F)^2+\varphi\psi^2}\label{eb}
\end{equation}
Sadovskii \cite{sad} obtained the same expression by
diagrammatic summation. For the case of a half-filled band $v$ is strictly
real and the resulting expressions are the same as those obtained by
Wonneberger and Lautenschl\"ager \cite{won}.
Expanding (\ref{eb}) in powers of $\psi$ gives
\begin{equation}
\hat G(i\epsilon_n,k)=\hat G_0(i\epsilon_n,k)\int_0^\infty d\varphi
e^{-\varphi}
\sum^\infty_{n=0}\left[{-\varphi\psi^2\over\epsilon_n^2+(kv_F)^2}
\right]^n \label{ec}
\end{equation}
where $\hat G_0=(i\epsilon_n -kv_F\sigma_3)^{-1}$
is the free-electron Green's function. Performing the
integral over $\varphi$ gives
\begin{equation}
\hat G(i\epsilon_n,k)=\hat G_0(i\epsilon_n,k)\sum^\infty_{n=0}n!
\left[{-\psi^2\over\epsilon_n^2+(kv_f)^2}\right]^n \label{ed}
\end{equation}
This is a {\it divergent} asymptotic series. However, it is
Borel summable \cite{bor}. This divergence suggests that
perturbation theory as used in References \cite {lee,cha,ric,bje,suz,sha}
may give unreliable results.
This can be seen in Figure \ref{figdos} and Section \ref{seccha}.
\subsection {Density of States}
The electronic density of states is calculated directly from the
imaginary part of the one-electron Green's function (\ref{eb}). The result is
\begin{equation}
\rho(E)=\rho_o \int_0^\infty d\varphi
e^{-\varphi}{E\over\left[E^2-\varphi\psi^2\right]^{1/2}}
\theta\left(\mid E\mid^2-\varphi\psi^2\right)
\end{equation}
\begin{equation}
=2\rho_o
\bigl|{E\over\psi}\bigl|\exp(-\left({E\over\psi}\right)^2)
{\rm erfi}\left({E\over\psi}\right)\label{rf}
\end{equation}
where $\rho_o=1/\pi v_F$ is the free-electron density of states
and erfi is the error function of imaginary argument
\cite{err}. Figure \ref{figdos}
shows the energy dependence of the density of states. It vanishes at
zero energy (the Fermi energy) and is suppressed over an energy
range of order $\psi$, i.e., there is a pseudogap.
It has the asymptotic behavior:
\begin{equation}
\rho(E) \simeq 2 \rho_o \left({E\over\psi}\right)^2 \quad {\rm for}
\ E\ll\psi \label{eg}
\end{equation}
\begin{equation}
\rho(E) \simeq \rho_o \quad {\rm for} \ E\gg\psi \label{eg2}
\end{equation}
Figure \ref{figdos}
shows that the exact result (\ref{rf}) (solid line)
deviates significantly from the result of
second-order perturbation theory in References \cite{lee,ric} (dashed line),
\begin{equation}
\rho(E)=\rho_o {E\over\left[E^2-\psi^2\right]^{1/2}}
\theta\left(E^2-\psi^2\right) \label{eh}
\end{equation}
This latter form has been assumed in much earlier
work \cite{hor,joh,sha,joh2}.
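For numerical work it is convenient to rewrite (\ref{rf}) in terms of the Dawson integral $D(x)=e^{-x^2}\int_0^x e^{t^2}dt$, which sidesteps the convention used for erfi: $\rho(E)=2\rho_o|E/\psi|\,D(E/\psi)$, a form that reproduces both limits (\ref{eg}) and (\ref{eg2}). A short sketch (ours) of the exact and perturbative densities of states:
\begin{verbatim}
import numpy as np
from scipy.special import dawsn   # D(x) = exp(-x^2) int_0^x exp(t^2) dt

def dos(E, psi, rho0=1.0):
    """Exact pseudogap density of states: rho(E) = 2 rho0 |E/psi| D(E/psi)."""
    x = np.abs(E)/psi
    return 2.0*rho0*x*dawsn(x)

def dos_perturbative(E, psi, rho0=1.0):
    """Second-order perturbation theory: square-root edge at |E| = psi."""
    E = np.abs(np.asarray(E, dtype=float))
    with np.errstate(invalid='ignore'):
        return np.where(E > psi, rho0*E/np.sqrt(E**2 - psi**2), 0.0)
\end{verbatim}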
The above expressions for the density of states
are all for infinite correlation length
($\xi_\parallel \psi/v_F \to \infty $),
i.e., very close to the three-dimensional transition temperature $T_P$.
What happens
{\it above} $T_P$ as the intrachain correlation length decreases?
This problem was considered in detail by Sadovskii \cite{sad2}.
(He calculated the density of states for
the random potential
(\ref{cor2}) with finite $\xi_\parallel$ exactly).
As the correlation length decreases the density of states
at the Fermi energy increases, i.e., the pseudogap fills in.
How quickly this happens depends on the dimensionless
ratio $v_F/(\psi \xi_\parallel).$
(See equation (\ref{suc}) below and Figures 5 and 6 in Reference \cite{sad2}).
Sadovskii showed that perturbation theory \cite{lee,cha,ric,bje,suz,sha}
only gives reliable results for $|E| < \psi$
when $\xi_\parallel < v_F/\psi$, i.e.,
well above $T_P$.
What happens
{\it below} $T_P$ as the intrachain correlation length decreases?
In Reference \cite{mck} it was shown that in the
three-dimensionally ordered Peierls state,
well below $T_P$,
there is an absolute gap with a subgap tail that
increases substantially
as the temperature becomes larger than the phonon frequency.
A smooth crossover to the pseudogap discussed here is expected.
It is an open problem to construct a single theory that
can describe the density of states over the complete
temperature range.
\subsection {Spectral Function}
The spectral function for right moving electrons
of momentum $k$ is given by
\begin{eqnarray}
A(k,E)&=&-{1\over\pi}{\rm Im}\ G_{11}(k,E+i\eta) \nonumber \\
&=&\int_0^\infty d\varphi e^{-\varphi}\left[
\delta\left(E-\sqrt{(kv_F)^2+\varphi\psi^2}\right)
+\delta\left(E+\sqrt{(kv_F)^2+\varphi\psi^2}\right)\right]
\nonumber \\
&=&{\mid E\mid\over\psi^2}\exp\left(
{(kv_F)^2-E^2\over\psi^2}\right)
\theta\left(E^2-(kv_F)^2\right) \label{ej}
\end{eqnarray}
where the momentum $k$ is relative to the Fermi momentum $k_F$.
Note that this form is very different from the Lorentzian form
associated with the quasi-particle picture and perturbation theory
\cite{rick}.
The spectral function
is asymmetrical, very broad, and has a significant high energy tail.
Figure \ref{figspec} shows how the quasi-particle weight is reduced
near the Fermi momenta, i.e., the quasi-particles are not well defined.
This was first pointed out by
Wonneberger and Lautenschl\"ager \cite{won} for the
corresponding model for a half-filled band.
This is strictly a non-perturbative effect. In perturbation theory
the quasi-particles are well defined.
This breakdown of the quasi-particle picture is similar to the
properties of a Luttinger liquid \cite{voi}.
The momentum distribution function $n(k)$ at $T=0$ for right moving
electrons is given by
\begin{equation}
n(k)\equiv \int_{-\infty}^0 dE A(k,E)
= {1\over 2} \left[ 1 - \sqrt{\pi}
\left({kv_F \over \psi} \right)
\exp \left( \left({kv_F \over \psi} \right)^2 \right)(1-
{\rm erf} \left({kv_F \over \psi} \right))
\right]
\label{ek}
\end{equation}
where ${\rm erf}$ is the error function.
The inset to Figure \ref{figspec}
shows how the momentum distribution $n(k)$
at $T=0$ is smeared over a momentum range $\delta k \sim \psi/v_F$.
The absence of a step at $k=k_F$ indicates that there
is no clearly defined Fermi surface.
However, this is {\it not} like in a Luttinger liquid,
but solely due to disorder. In fact, in an ordinary metal
with mean free path $\ell$ similar behaviour is seen;
disorder smears out $n(k)$ over a momentum range $\delta k \sim 1/\ell$.
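When evaluating (\ref{ek}) numerically, the product $e^{y^2}(1-{\rm erf}(y))$ overflows if coded literally; the scaled complementary error function handles it. In the sketch below (ours) the extension to $k<0$ via $n(-k)=1-n(k)$ is our assumption, consistent with the symmetric smearing shown in the inset:
\begin{verbatim}
import numpy as np
from scipy.special import erfcx   # erfcx(y) = exp(y^2) erfc(y), overflow-safe

def n_k(k, psi, v_F=1.0):
    """Zero-T momentum distribution for right movers; k measured from k_F."""
    k = np.asarray(k, dtype=float)
    y = np.abs(k)*v_F/psi
    n_pos = 0.5*(1.0 - np.sqrt(np.pi)*y*erfcx(y))
    return np.where(k >= 0, n_pos, 1.0 - n_pos)
\end{verbatim}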
\subsection{Electronic specific heat}
The electronic specific heat $C_e(T)$ is related to the density of
states $\rho(E)$ by
\begin{equation}
C_e(T) = - {4 \over T} \int_0^\infty dE E^2
\rho(E) {\partial f \over \partial E}
\label{spa}
\end{equation}
where $f(E)$ is the Fermi-Dirac distribution function.
In the absence of a pseudogap $C_e(T)={2 \pi^2 \over 3} \rho_0 T
\equiv C_0(T).$
If the expression (\ref{rf}) is used for the density of
states in the presence of a pseudogap then $C_e(T)/C_0(T)$
only depends on $\psi/T$ and is shown in Figure \ref{figparam}.
A similar result was recently used \cite{mck0} to explain the temperature
dependence of the electronic specific heat near a spin-density-wave
phase boundary of the organic conductor (TMTSF)$_2$ClO$_4$.
Note that when $\psi \sim T$, $C_e(T)$ can be slightly larger
than $C_0(T)$ because $E^2 {\partial f \over \partial E}$
has a maximum near $ E \sim T$ and for $ E \sim \psi$,
$\rho(E)$ is larger than $\rho_0$ (Figure \ref{figdos}).
\subsection{Pauli Spin Susceptibility}
\label{secsusc}
The Pauli spin susceptibility $\chi(T)$ is related to the density of
states $\rho(E)$ by
\begin{equation}
\chi(T) = -2\mu_B^2 \int_0^\infty dE \rho(E) {\partial f \over \partial E}
\label{sua}
\end{equation}
where $f(E)$ is the Fermi-Dirac distribution function
and $\mu_B$ is a Bohr magneton \cite{ash}.
In the absence of a pseudogap $\chi(T)=\mu_B^2 \rho_0 \equiv \chi_0$
which is independent of temperature.
If the expression (\ref{rf}) is used for the density of
states in the presence of a pseudogap then $\chi(T)/\chi_0$
only depends on $\psi/T$ and is shown in Figure \ref{figparam}.
This result will be used in Section \ref{secest} to provide an
estimate of the pseudogap in K$_{0.3}$MoO$_3$.
\subsection{Chandra's scaling relation}
\label{seccha}
The effect of thermal lattice fluctuations on the temperature
dependence of $\chi(T)$ was first considered
by Lee, Rice, and Anderson \cite{lee}. They argued that as the
temperature is lowered towards $T_P$ the intrachain
correlation length increases, more of a pseudogap opens in
the density of states and $\chi(T)$ decreases.
This problem was recently reconsidered by Chandra \cite{cha}
who derived a scaling relation between the derivative
$ d \chi / d T$ and the specific heat $C_P$ in the critical region.
I now repeat the essential features of her argument.
She calculated the electronic self energy in the Born
approximation, taking into account the interchain interactions
and the finite mean free path of the electrons. She assumed
that the pseudogap is much larger than the transition
temperature ($\psi \gg T_P$; it will be shown in Section \ref{secest}
that this is a poor approximation for K$_{0.3}$MoO$_3$) so that
$\chi(T) \simeq \mu_B^2\rho(0)$. Chandra also assumed that the temperature
dependence of the density of states at the Fermi energy
is determined solely by the temperature dependence
of $\xi_\parallel(T)$. Moreover, based on the Born approximation,
she found
\begin{equation}
\rho(0) \sim {1 \over \xi_\parallel(T)}.
\label{sua2}
\end{equation}
Defining $t \equiv |T-T_P|/T_P$, then gives the scaling relation
\begin{equation}
{d \chi(T) \over dT } \sim {d \over dT } {1 \over \xi_\parallel(T)}
\sim {d \over dT } t^{1/2} \sim C_P
\label{sub}
\end{equation}
where use has been made of the temperature dependence of $\xi_\parallel(T)$
and $C_P$ in the Gaussian approximation \cite{ma}.
This same scaling relation was suggested earlier
by Horn, Herman and Salamon \cite{horn}. They claimed to
have found the critical exponent for $d \chi / d T$ to be $-0.5$ for TTF-TCNQ.
Kwok, Gr\"uner, and Brown \cite{kwo2}
claim to have observed a scaling between
$d (T \chi) / d T$ and $C_P$ within a 30 K
region about $T_P=183$ K for K$_{0.3}$MoO$_3$.
However, Mozurkevich has argued that the
Gaussian approximation is not valid in this
temperature range \cite{moz}.
Chung {\it et al.} \cite{chu} found that $d \chi / d T$
was comparable to $C_P$ when a background contribution
was subtracted from the latter.
Brill {\it et al.} \cite{bri} found that $\chi$
was proportional to the entropy (evaluated from
integrating the specific heat) between 140 and 220 K.
(This is equivalent to
a scaling between $d \chi / d T$
and $C_P$). They show that this is
what is expected if $\chi$ and $C_P$ are derived from
a free energy functional in which the complete
magnetic field dependence is contained in the
field dependence of $T_P$.
Chandra's derivation of the scaling relationship (\ref{sub}) is
not valid. It depends on (\ref{sua2})
which is a direct result of the perturbative treatment
of the lattice fluctuations. The exact Green's function calculated
by Sadovskii \cite{sad2} gives different results. He found that for
$\xi_\parallel(T) \gg v_F/\psi$
\begin{equation}
{\rho(0) \over \rho_0} \simeq (0.54 \pm 0.01)
({v_F \over \psi \xi_\parallel(T)})^{1/2}
\label{suc}
\end{equation}
(see Figure 6 in Reference \cite{sad2})
rather than (\ref{sua2}).
This will give
\begin{equation}
{d \chi(T) \over dT } \sim t^{-3/4}
\label{sud}
\end{equation}
and so the scaling relation (\ref{sub}) does not hold.
It should be stressed that this result assumes $\psi \gg T_P$,
a condition that is poorly satisfied in most materials
(Section \ref{secest}).
\section{PROPERTIES OF THE GINZBURG-LANDAU COEFFICIENTS}
\label{secprop}
In Appendix \ref{seccoeff} the coefficients $a$, $b$, and $c$ in the
Ginzburg-Landau free energy (\ref{aa1}) describing the
Peierls transition are evaluated in the presence of the
random potential (\ref{cori}) which is used here to model the
thermal lattice motion. The calculation is based on a linked
cluster expansion similar to that used to derive
the Ginzburg-Landau functional for superconductors \cite{has}.
The results are:
\begin{equation}
a(T)= {1 \over \pi v_F} \left[
\ln\left({T\over T_{RL}}\right)+\pi
T\sum_{\epsilon_n}\left({1\over\left|\epsilon_n\right|}-\int_0^\infty
d\varphi
e^{-\varphi}{\epsilon_n^2\over\left(\epsilon_n^2+
\varphi \psi^2\right)^{3/2}}\right) \right]
\label{gla}
\end{equation}
\begin{equation}
b(T)={ T \over 4 v_F}\sum_{\epsilon_n}
\int_0^\infty d\varphi e^{-\varphi}
\left( {\epsilon_n^2 \over
\left(\epsilon_n^2+ \varphi\psi^2 \right)^{5/2}}
-{5 \varphi(\psi \ \epsilon_n)^2 \over
\left(\epsilon_n^2+ \varphi \psi^2 \right)^{7/2}}
\right)
\label{glb}
\end{equation}
\begin{equation}
c(T)={v_F T\over 4}
\sum_{\epsilon_n}\epsilon_n^2\int_0^\infty{d\varphi
e^{-\varphi} \over (\epsilon_n^2+\varphi\psi^2)^{5/2}}\label{ff}
\label{glc}
\end{equation}
The integrals over $\varphi$ in the above
expressions can be written in terms of error functions and incomplete
gamma functions \cite{err}. However, for both numerical and
analytical calculations it is actually more convenient to
use the expressions above.
As $\psi \to 0$ the above expressions
reduce to the rigid-lattice values (\ref{mfa}--\ref{mfc}).
{\it Single-chain mean-field transition temperature.}
$T_0$ is determined by the temperature
at which the second-order Ginzburg-Landau coefficient (\ref{gla}) vanishes:
\begin{equation}
a(T_0)=0. \label{fk2}
\end{equation}
This defines relations between $T_0/T_{RL}$ and
$\psi/T_0$, shown in Figure \ref{figcoeff}
(the inset shows $T_0/T_{RL}$ versus $\psi/T_{RL}$).
The pseudogap
suppresses the transition temperature. At a crude level, this is
because in the presence of a pseudogap
opening a gap due to a Peierls distortion causes a smaller
decrease in the electronic energy than in the absence of a pseudogap.
In most materials $T_P < 0.4 T_{RL}$ (Table \ref{table2})
and so the inset of Figure \ref{figcoeff} implies $\psi \sim T_{RL}$
which is comparable to the zero-temperature gap.
Rice and Str\"assler \cite{ric} found from second-order perturbation theory
that for $T_0 \ll T_{RL},$ $\psi \simeq 1.05 T_{RL}.$
Thus, the single-chain mean-field transition temperature
can be quite different from
$T_{RL}$, defined by (\ref{trl}), and often referred to
as the mean-field transition temperature,
and so no experimental signatures are expected at $T=T_{RL}$.
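Equations (\ref{gla})--(\ref{glc}) involve only a Matsubara sum and a one-dimensional integral, so the relation between $T_0$ and $\psi$ is easy to reproduce numerically. The sketch below is ours (units with $k_B=1$ and $T_{RL}=1$; the sum is truncated where its $1/\epsilon_n^3$ tail is negligible, and the bracket scan is needed because the location of the root depends on $\psi$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def a_coeff(T, psi, T_RL=1.0, v_F=1.0, n_max=400):
    """Second-order GL coefficient a(T) in the presence of the pseudogap."""
    tot = 0.0
    for n in range(n_max):
        eps = (2*n + 1)*np.pi*T
        integ, _ = quad(lambda p: np.exp(-p)*eps**2/(eps**2 + p*psi**2)**1.5,
                        0, np.inf)
        tot += 2.0*(1.0/eps - integ)     # factor 2: +/- Matsubara frequencies
    return (np.log(T/T_RL) + np.pi*T*tot)/(np.pi*v_F)

# Single-chain mean-field temperature T0 from a(T0) = 0 at fixed psi.
psi = 0.5
Ts  = np.linspace(0.05, 0.99, 20)
av  = np.array([a_coeff(T, psi) for T in Ts])
i   = np.flatnonzero(np.sign(av[:-1]) != np.sign(av[1:]))[0]
T0  = brentq(lambda T: a_coeff(T, psi), Ts[i], Ts[i+1])
\end{verbatim}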
{\it Fourth-order coefficient.}
The ratio of the fourth-order coefficient $b$ to its rigid-lattice value
as a function of the ratio of the pseudogap $\psi$ to the
temperature is shown in Figure \ref{figcoeff}.
Note that $b$ is negative for $\psi/T > 2.7$.
This will change the nature of the phase transition.
One must then include the sixth-order term in the free energy.
If it is positive (I have calculated it and found it to be positive
for this parameter range)
then the transition will be {\it first order.}
A complete discussion of such a situation is given by
Toledano and Toledano \cite{tol}.
Imry and Scalapino have discussed the effect of
one-dimensional fluctuations for this situation \cite{imr}.
At the mean-field level there is a co-existence of phases
for the temperature range defined by
\begin{equation}
0 < a(T) < { b(T)^2 \over 3 d(T)}
\label{fm0}
\end{equation}
where $d(T)$ is the sixth-order coefficient.
Hysteresis will be observed
in this temperature range.
I recently suggested that the first-order nature of the destruction
by high magnetic fields of spin-density-wave states
in organic conductors is due to similar effects \cite{mck0}.
If at low temperatures the electron-phonon coupling
$\lambda$ is varied then $\psi/T_{RL} \sim \lambda e^{1/\lambda}$.
According to the inset of Figure \ref{figcoeff} there will
be a critical coupling below which the CDW phase will
be destroyed. This transition will be first order.
It is interesting that Altshuler, Ioffe, and Millis \cite{alt}
recently obtained a similar result for a two-dimensional
Fermi liquid (with a quasi-one-dimensional
Fermi surface) using a very different approach.
However, it should be pointed out that when $b$ is small
corrections due to other effects such as a finite correlation
length and interchain coupling
will be important and could make $b$ positive.
It is unclear whether this unexpected behaviour is only a result
of the simplicity of the model or actually is relevant to
real materials. The
three-dimensional transition occurs when the parameter $\kappa$,
defined by (\ref{aat1}), becomes sufficiently small \cite{mck2}.
Generally this is assumed to be due to the temperature becoming sufficiently
low. However, I speculate that the transition
could alternatively be driven by $b$ becoming
sufficiently small.
The fact that $\psi \sim (2-3) T_P$ in K$_{0.3}$MoO$_3$
(Section \ref{secest}) is consistent with $b$ being small.
{\it The coefficient of the longitudinal gradient term}
is given by (\ref{glc}).
It can be shown that $c(T)/c^{RL}(T)$ is a universal
function of $\psi/T$ (see Figure \ref{figcoeff})
and that the pseudogap reduces the
value of $c$.
{\it Interchain coupling.}
Consider a crystal with tetragonal unit cell of dimensions
$a_x \times a_x \times a_z$,
where the z-axis is parallel to the chains. For a tight-binding model
the electronic band structure is given by the dispersion relation
\begin{equation}
E(k)=-2t_\perp(\cos(k_xa_x)+\cos(k_ya_x))-2t_\parallel \cos(k_z a_z).
\label{ba}
\end{equation}
Assume the band structure is highly anisotropic, i.e., $t_\parallel\gg
t_\perp$.
The Fermi velocity $v_F$ is defined by $v_F=2t_\parallel a_z \sin(k_F a_z)$.
Horovitz, Gutfreund, and Weger \cite {hor} have shown that
imperfect nesting of the Fermi surface (i.e., $E(k) \simeq - E(k+Q))$
occurs for the nesting vector
\begin{equation}
\vec{Q}=(\pi/a_x,\pi/a_x,2k_F). \label {bb}
\end{equation}
To calculate the interchain coupling $J$ in the
Ginzburg-Landau functional (\ref{ad1}) it is assumed that
the one-dimensional Green's function (\ref{ea2})
can simply be replaced
with the corresponding one with the anisotropic band structure, given by
equation (\ref{ba}).
The calculation
is then essentially identical to the rigid-lattice calculation of Horovitz,
Gutfreund, and Weger \cite{hor} and so only the result is given
(compare (\ref{mfc2})):
\begin{equation}
J = \left( { 4 t_\perp \over v_F}\right)^2 c(T).
\label{fk}
\end{equation}
Since the pseudogap reduces the value of the longitudinal
coefficient $c$ it will also reduce the interchain coupling.
\section{MEAN-FIELD THEORY OF A SINGLE CHAIN}
The single chain Ginzburg-Landau functional
with the coefficients discussed in the previous section
is now considered.
In particular it is shown that the one-dimensional fluctuations
can be much smaller than for the functional with the
rigid-lattice coefficients.
The first step is to consider the temperature dependence of
the second-order coefficient $a(T)$ near $T_0$,
the mean-field transition temperature.
This is difficult because to be realistic the
temperature dependence of the parameter $\psi$
must be included. This is done at a crude level
using the simple model based on the discussion of
thermal lattice motion in Section \ref{sectherm}.
This is then used to evaluate $a^\prime$, defined
by (\ref{aa10}), and needed to evaluate physical
quantities associated with the transition:
the specific heat jump, the coherence length,
and width of the critical region.
The jump in the specific heat at $T_0$ is
\begin{equation}
\Delta C_{1D}=
{(a^\prime)^2 \over 2 \ b \ T_0}.
\label{ac1}
\end{equation}
An important length scale is the coherence length $\xi_0$,
defined by
\begin{equation}
\xi_0=\left({c \over a^\prime}\right)^{1/2}
\label{acd1}
\end{equation}
The one-dimensional
Ginzburg criterion \cite{gin} provides an estimate of the
temperature range, $\Delta T_{1D}$, over which critical fluctuations
are important:
\begin{equation}
\Delta t_{1D} \equiv {\Delta T_{1D} \over T_0}
= \left({b \ T_0 \over a^{\prime 3/2} c^{1/2}}
\right)^{2/3}
= {1 \over (2 \xi_0 \Delta C_{1D})^{2/3}}
\label{acc1}
\end{equation}
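As a check on these relations, the following short numerical sketch
(with purely hypothetical parameter values in natural units; it is not
part of the original calculation) evaluates (\ref{ac1}), (\ref{acd1}),
and (\ref{acc1}) and verifies that the two forms of (\ref{acc1}) agree:
\begin{verbatim}
# A minimal sketch: the Ginzburg-Landau coefficients a', b, c and the
# mean-field temperature T0 determine the specific heat jump, the
# coherence length, and the 1D Ginzburg width. All input values are
# hypothetical placeholders.

def specific_heat_jump(a_prime, b, T0):
    return a_prime**2 / (2.0 * b * T0)               # Eq. (ac1)

def coherence_length(a_prime, c):
    return (c / a_prime) ** 0.5                      # Eq. (acd1)

def ginzburg_width(a_prime, b, c, T0):
    return (b * T0 / (a_prime**1.5 * c**0.5)) ** (2.0 / 3.0)  # Eq. (acc1)

a_prime, b, c, T0 = 1.0, 0.5, 2.0, 1.0               # hypothetical values
dC = specific_heat_jump(a_prime, b, T0)
xi0 = coherence_length(a_prime, c)
dt = ginzburg_width(a_prime, b, c, T0)
# The two forms of Eq. (acc1) agree:
assert abs(dt - 1.0 / (2.0 * xi0 * dC) ** (2.0 / 3.0)) < 1e-12
\end{verbatim}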
\subsection {Self-consistent determination of the pseudogap}
At the level of the Gaussian approximation the
phonon dispersion is related to the Ginzburg-Landau
coefficients by
\begin{equation}
\Omega(q,T)^2=\lambda \omega_Q^2 \bigl(a(T) +
c(T) (q- 2k_F)^2 + Ja_x^2 (q_\perp - Q_\perp)^2 \bigr). \label{bz1}
\end{equation}
Hence the phonon dispersion depends on the pseudogap
$\psi$.
However, it was shown in Section \ref{sectherm} that $\psi$ depends on the
dispersion. Hence, $\psi$ must be determined self-consistently.
Equation (\ref{cg}) gives the dependence of the pseudogap at $T_0$ on the
phonon dispersion. Equation (\ref{ff}) gives the dependence of the
coefficient $c(T)$ on the pseudogap.
These can be combined with (\ref{fk}) to give
\begin{equation}
1= t_\perp \psi^2 \
\sum_{\epsilon_n}
\epsilon_n^2\int_0^\infty {d\varphi e^{-\varphi}\over
(\epsilon_n^2+\varphi\psi^2)^{5/2}}.\label{fm}
\end{equation}
It follows that $\psi/T$ is a universal function of
$t_\perp /T$.
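For illustration, (\ref{fm}) can be solved numerically for $\psi$.
The sketch below assumes fermionic Matsubara frequencies
$\epsilon_n=(2n+1)\pi T$, truncates the frequency sum, and brackets the
root; all conventions and parameter values here are assumptions made
for the purpose of illustration, not part of the original analysis.
\begin{verbatim}
# A rough numerical sketch of the self-consistency condition (fm).
# Assumed conventions: fermionic Matsubara frequencies
# eps_n = (2n+1) pi T; the frequency sum is truncated at n_max terms.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def rhs(psi, T, t_perp, n_max=200):
    total = 0.0
    for n in range(n_max):
        eps = (2 * n + 1) * np.pi * T
        val, _ = quad(lambda phi: np.exp(-phi) /
                      (eps**2 + phi * psi**2) ** 2.5, 0.0, np.inf)
        total += 2.0 * eps**2 * val      # factor 2 for +/- frequencies
    return t_perp * psi**2 * total

def solve_pseudogap(T, t_perp, psi_lo=1e-3, psi_hi=100.0):
    # The bracket [psi_lo, psi_hi] must enclose the root rhs(psi) = 1;
    # it should be adjusted to the parameter regime of interest.
    return brentq(lambda psi: rhs(psi, T, t_perp) - 1.0, psi_lo, psi_hi)
\end{verbatim}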
{\it Dependence of $T_0$ on the interchain interactions.}
The self-consistent equation for the pseudogap (\ref{fm})
can be solved simultaneously with the equation for $T_0$
and with (\ref{fk})
to give the single-chain mean-field
transition temperature as a function of the interchain interactions.
The transition temperature is then a monotonic
increasing function of the interchain hopping.
A similar procedure was followed by Rice and Str\"assler
\cite{ric}.
The transition temperature tends to zero as the interchain
coupling tends to zero, consistent with the fact that there
are no finite temperature phase transitions in a strictly one-dimensional
system \cite{lan}.
\subsection {Evaluation of $a'$}
It is now assumed that the temperature dependence of
the pseudogap $\psi$ is
given implicitly by equation
(\ref{fm}).
Implicit differentiation then gives
\begin{equation}
{d \over dT} \left( {\psi \over T} \right)=
{\psi \over 2 T^2}
{X(T) \over Y(T)}
\label{yz}
\end{equation}
where
\begin{equation}
X(T)=\sum_{\epsilon_n}\epsilon_n^2\int_0^\infty{d\varphi
e^{-\varphi} \over (\epsilon_n^2+\varphi\psi^2)^{5/2}}
\end{equation}
\begin{equation}
Y(T)=\sum_{\epsilon_n}\epsilon_n^2\int_0^\infty{d\varphi
e^{-\varphi} \varphi \over (\epsilon_n^2+\varphi\psi^2)^{5/2}}
\end{equation}
Note that since the right-hand side of (\ref{yz})
is positive,
$\psi/T$ is always an increasing function of temperature.
A lengthy calculation gives
\begin{equation}
a'={1 \over \pi v_F} \left( 1 + {3 \over 2} \psi^2 \pi T
\sum_{\epsilon_n}\epsilon_n^2\int_0^\infty{d\varphi
e^{-\varphi} \over (\epsilon_n^2+\varphi\psi^2)^{5/2}}
\right).
\end{equation}
This is larger than the rigid-lattice value
$a'_{RL} \equiv 1/\pi v_F$. This enhancement
will enhance the specific heat jump (\ref{ac1}) and reduce the
coherence length (\ref{acd1}).
\subsection {Specific heat jump}
The specific heat jump $\Delta C$ at the transition temperature
is calculated from equation (\ref{ac1}). It is shown in Figure
\ref{figjump}.
Note that the jump is much larger than the
rigid-lattice value of $1.43\gamma T_P$.
The trend shown in Figure \ref{figjump} can be explained by a
rough argument correlating the sizes of $\Delta C/ \gamma T_P$
and $\Delta(0)/k_B T_P$. Simply put, if $\Delta(0)/k_B T_P$ is large
then $\Delta(T)^2$ will have a large slope at $T_P$.
It has previously been noted
experimentally \cite{sat} that the order parameter has a
BCS temperature dependence with $\Delta(0)$
and $T_P$ treated as independent parameters.
Some theoretical justification was recently provided
for such a temperature dependence well away from $T_P$ \cite{mck}.
Close to $T_P$ the BCS form gives
\begin{equation}
\Delta(T) \simeq 1.74 \Delta(0)\left(1-{T \over T_P}\right)^{1/2}.
\end{equation}
Within a BCS type of framework
the specific heat discontinuity is given by \cite{tin}
\begin{equation}
\Delta C \sim -\rho_o {d \Delta^2 \over dT} \Big|_{T_P}
= 3.03 \rho_o {\Delta(0) ^2 \over T_P}.
\end{equation}
Using $\Delta(0)=1.76k_B T_{RL}$ and $\gamma= 2 \pi^2 \rho_o/3$
gives
\begin{equation}
{\Delta C \over 1.43 \gamma T_P} \sim \left({ T_{RL} \over T_P}\right)^2.
\end{equation}
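Explicitly, the numerical prefactor implied by the above relations is
close to unity:
\begin{equation}
{3.03 \times (1.76)^2 \over 1.43 \times (2\pi^2/3)}
= {9.39 \over 9.41} \simeq 1.0.
\end{equation}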
This simple argument gives the correct trend that as the
fluctuations increase the enhancement of the specific heat jump
increases.
\subsection {Width of the one-dimensional critical region}
The width of the one-dimensional critical region
is calculated from equation (\ref{acc1})
with the Ginzburg-Landau coefficients in the
presence of the pseudogap. It is shown in Figure
\ref{figjump}, normalized to the rigid-lattice value
$\Delta t_{1D}=0.8$.
The large reduction is very important because it
means that even for weak interchain coupling,
it may be possible for condition (\ref{ad10}) to
be satisfied and for a mean-field treatment of
a single chain functional, such as that used in
this section, to be justified.
\section{Estimate of the pseudogap in K$_{0.3}{\rm MoO}_3$}
\label{secest}
Optical conductivity, magnetic susceptibility and
photoemission experiments all suggest that near $T_P = 183$ K
there is a pseudogap in the density of states.
{\it Optical conductivity}. Sadovsk\~i\~i has calculated the
optical conductivity $\sigma(\omega)$ for the model introduced
in Section \ref{secham}
\cite{sad}. For small frequencies $\sigma(\omega)$
is linear in $\omega$ and has a peak at about $\omega \simeq 3 \psi$.
The data in References \cite{deg,dre} then imply
$\psi \sim $ 40 meV
and $\psi/T_P \sim 2.5$.
On a less rigorous level $\psi$ can be estimated based on the
analysis contained in the inset of Figure \ref{figcoeff}. If
the single-chain mean-field transition temperature $T_0 < 0.4 T_{RL}$
then $\psi \sim T_{RL}$. Using the BCS relation (\ref{trl})
and the estimate $\Delta (0) \simeq 80 $ meV for the zero-temperature
gap from the optical conductivity \cite{deg} gives
$\psi \sim $ 45 meV and $\psi/T_P \sim 3$.
{\it Magnetic susceptibility.}
The data of References \cite{bri,joh} give $\chi(T_P)/\chi(300~{\rm K})
\simeq 0.5$.
Assuming that $\chi(300 {\rm K}) \simeq \chi_0$
and using Figure \ref{figparam} gives $\psi /T_P \sim 2.4$.
Note that all of the above three estimates for $\psi/T_P$
are consistent with one another and
are all in the regime where the fourth-order coefficient $b$
is small (Figure \ref{figcoeff}).
{\it Photoemission.}
Recent high-resolution photoemission measurements
\cite{dar,dar2,hwu}
on K$_{0.3}$MoO$_3$ and (TaSe$_4$)$_2$I
have several puzzling features:
(1) There is a suppression of spectral weight over a large energy range
(of the order of 200 meV for K$_{0.3}$MoO$_3$)
near the Fermi energy.
(2) The spectrum is very weakly temperature dependent. The suppression
occurs even for $T \sim 2 T_P$.
(3) At $T_P$ the spectrum does not just shift near $E_F$, due to the
opening of the Peierls gap, but also at energies of order 0.5 eV
from $E_F.$
These features {\it cannot} be explained using the model
presented in this paper.
The photoemission data suggests that the pseudogap is
about $\psi \sim $ 130 meV. Clearly this estimate is inconsistent
with the estimates ($\psi \sim $ 40-50 meV)
given above from the optical conductivity
and magnetic susceptibility.
Furthermore, in the model presented here
the pseudogap occurs only when $\xi_\parallel(T) \gg v_F/\psi$,
i.e., fairly close to $T_P$.
The temperature dependence of the Pauli spin susceptibility
and the optical conductivity \cite{deg,deg2} suggest that
the pseudogap disappears for $T \gtrsim 2 T_P$ (in contrast to
(2) above). Dardel et al. \cite{dar} speculate that the
anomalous behaviour that they observe may arise
because the photoemission intensity $I(E)$ might be related
to the density of states $\rho(E)$ by $I(E)=Z \rho(E)$
and the quasi-particle weight $Z$ vanishes
due to Luttinger liquid effects.
This suggestion has been examined critically by Voit \cite{voi}
who concludes
that the photoemission data is only quantitatively
consistent with a Luttinger
liquid picture if very strong long-range interactions are involved.
Kopietz, Meden, and Sch\"onhammer \cite{kop}
have recently considered such models.
\section{CONCLUSIONS}
In this paper a simple model has been used to illustrate
some of the difficulties involved in constructing from
microscopic theory a Ginzburg-Landau theory of the CDW transition.
The main results are:
(1) The large thermal lattice motion near the transition temperature
produces a pseudogap in the density of states.
(2) Perturbation theory diverges and gives unreliable results.
This is illustrated by showing that a predicted \cite{cha} scaling relation
between the specific heat and the temperature derivative
of the susceptibility does not hold.
(3) The pseudogap significantly alters the coefficients in the
Ginzburg-Landau free energy.
The result is that one-dimensional order parameter
fluctuations are less important,
making a mean-field treatment of the
single-chain Ginzburg-Landau functional more reasonable.
This work raises a number of questions and opportunities for
future work.
(a) The most important problem is that there is still no
microscopic theory that can make reliable quantitative
predictions about how dimensionless ratios such as
$\Delta (0)/k_B T_P$, $\Delta C/ \gamma T_P$, and $\xi_{0z}T_P/v_F$
depend on parameters such as $v_F$, the electron phonon coupling
$\lambda$, $T_P$ and the interchain coupling.
(b) Is the change of the sign of the fourth-order
coefficient $b$ of the single-chain Ginzburg-Landau functional
for $\psi > 2.7 T_P$
an important physical effect or merely a result of
the simplicity of the model considered here?
(c) Calculation of the contribution of the sliding CDW
to the optical conductivity in the presence of the short-range
order associated with the pseudogap \cite{dre,dre2}.
\acknowledgments
I have benefitted from numerous discussions with J. W. Wilkins.
This work was stimulated by discussions with J. W. Brill.
I am grateful to him for showing me his group's data
prior to publication.
I thank K. Bedell and K. Kim for helpful discussions.
Some of this work was performed at The Ohio State University
and supported by the U.S. Department of Energy,
Basic Energy Sciences, Division of Materials Science
and the OSU Center for Materials Research.
Work at UNSW was supported by the Australian Research Council.
\twocolumn
\narrowtext
\begin {references}
\bibitem[*]{email}electronic address: [email protected]
\bibitem{gru} G. Gr\"uner, {\it Density Waves in Solids},
(Addison-Wesley, Redwood City, 1994).
\bibitem{con} For a review, see {\it Highly Conducting Quasi-One-Dimensional
Organic Crystals},
edited by E. Conwell (Academic, San Diego, 1988).
\bibitem{gor} For a review, see {\it Charge Density Waves in Solids}, edited by
L. P. Gorkov and G. Gr\"uner (North-Holland, Amsterdam, 1989).
\bibitem{car} K. Carneiro, in {\it Electronic Properties of Inorganic
Quasi-One-Dimensional Compounds, Part 1}, edited by P. Monceau (Reidel,
Dordrecht, 1985), p.1.
\bibitem{bri} J. W. Brill, M. Chung, Y.-K. Kuo, X. Zhan,
E. Figueroa, and G. Mozurkewich, Phys. Rev. Lett.
{\bf 74}, 1182 (1995);
and references therein.
\bibitem{gir} S. Girault, A. H. Moudden, and J. P. Pouget,
Phys. Rev. B {\bf 39}, 4430 (1989).
\bibitem{gin} V. L. Ginzburg, Fiz. Tverd. Tela
{\bf 2}, 2031 (1960) [Sov. Phys. Solid State {\bf 2}, 1824
(1960)].
The width of the critical region, $\Delta T$, is defined by the
temperature at which the fluctuation contribution
to the specific heat below the transition temperature,
calculated in the Gaussian
approximation, equals the mean-field specific heat jump $\Delta C$.
It should be stressed that this gives only a very rough
estimate of the importance of fluctuations and that
there are several alternative definitions of the width
of the critical region.
Consequently, care should be taken when comparing estimates
from different references. This is particularly true
since different definitions can differ by
numerical factors as large as $32 \pi^2$!
\bibitem{schr} J. R. Schrieffer, {\it Theory of Superconductivity},
(Addison-Wesley, Redwood City, 1983) Revised edition, p. 248 ff.
\bibitem{carb} See e.g., J. P. Carbotte, Rev. Mod. Phys.
{\bf 62}, 1027 (1990).
\bibitem{mck2}R. H. McKenzie, Phys. Rev. B
{\bf 51}, 6249 (1995).
\bibitem{lan} L. D. Landau and E. M. Lifshitz, {\it Statistical
Physics}, 2nd. ed., (Pergamon, Oxford, 1969), p. 478.
\bibitem{sca} D. J. Scalapino, M. Sears, and R. A. Ferrell, Phys. Rev. B
{\bf 6}, 3409 (1972).
\bibitem{fro} H. Fr\"ohlich, Proc. R. Soc. London A {\bf 223}, 296
(1954).
\bibitem{ric0}M. J. Rice and S. Str\"assler,
Solid State Commun. {\bf 13}, 125 (1973).
\bibitem{all} D. Allender, J. W. Bray, and J. Bardeen, Phys. Rev. B
{\bf 9}, 119 (1974).
\bibitem{sch} H. J. Schulz, in {\it Low-Dimensional Conductors and
Superconductors}, edited by D. J\'erome and L.G. Caron (Plenum, New York,
1986), p. 95.
\bibitem {hor} B. Horovitz, H. Gutfreund, and M. Weger,
Phys. Rev. B {\bf 12}, 3174 (1975).
\bibitem{ma} S. K. Ma, {\it Modern Theory of Critical Phenomena},
(Benjamin/Cummings, Reading, 1976), p.94.
\bibitem{sco} J. C. Scott, S. Etemad, and E. M. Engler, Phys. Rev. B
{\bf 17}, 2269 (1978) [TSeF-TCNQ, TTF-TCNQ].
\bibitem{joh} D. C. Johnston, Phys. Rev. Lett. {\bf 52}, 2049
(1984) [K$_{0.3}$MoO$_3$].
\bibitem{joh3} D. C. Johnston, M. Maki, and G. Gr\"uner,
Solid State Commun. {\bf 53}, 5 (1985) [(TaSe$_4$)$_2$I].
\bibitem{deg} L. Degiorgi, G. Gr\"uner, K. Kim, R.H. McKenzie and P.
Wachter, Phys. Rev. B {\bf 49}, 14754 (1994) [K$_{0.3}$MoO$_3$].
\bibitem{deg2} L. Degiorgi, St. Thieme, B. Alavi,
G. Gr\"uner, R.H. McKenzie, K. Kim, and F. Levy,
Phys. Rev. B, to appear (1995)
[K$_{0.3}$MoO$_3$, (TaSe$_4$)$_2$I].
\bibitem{dre} B. P. Gorshunov, A. A. Volkov, G. V. Kozlov,
L. Degiorgi, A. Blank, T. Csiba,
M.Dressel, Y. Kim, A. Schwartz, and G. Gr\"uner,
Phys. Rev. Lett. {\bf 73}, 308 (1994) [K$_{0.3}$MoO$_3$].
\bibitem{dre2}
A. Schwartz, M.Dressel, B. Alavi, S. Dubois, G. Gr\"uner,
B. P. Gorshunov, A. A. Volkov, G. V. Kozlov,
S. Thieme, L. Degiorgi, and F. L\'evy,
Phys. Rev. B, to appear (1995) [K$_{0.3}$MoO$_3$].
\bibitem{bru} P. Br\"uesch, S. Str\"assler, and H. R. Zeller, Phys. Rev. B
{\bf 12}, 219 (1975) [K$_2$Pt(CN)$_4$Br$_{0.3}$].
\bibitem{ber} D. Berner, G. Scheiber, A. Gaymann, H. P. Geserich,
P. Monceau, and F. L\'evy, J. Phys. France IV,
{\bf 3}, 255 (1993) [(TaSe$_4$)$_2$I].
\bibitem{dar} B. Dardel, D. Malterre, M. Grioni, P. Weibel, and Y. Baer,
Phys. Rev. Lett. {\bf 61}, 3144 (1991)
[K$_{0.3}$MoO$_3$, (TaSe$_4$)$_2$I].
\bibitem {dar2} B. Dardel, D. Malterre, M. Grioni, P. Weibel, Y. Baer,
C. Schlenker, and Y. P\'etroff, Europhys. Lett. {\bf 19}, 525
(1992) [K$_{0.3}$MoO$_3$].
\bibitem{hwu} Y. Hwu, P. Alm\'eras, M. Marsi, H. Berger,
F. L\'evy, M. Grioni, D. Malterre, and G. Margaritondo,
Phys. Rev. B {\bf 46}, 13624 (1992).
\bibitem{mck}R. H. McKenzie and J. W. Wilkins,
Phys. Rev. Lett. {\bf 69}, 1085 (1992).
\bibitem{kim}K. Kim, R. H. McKenzie, and J. W. Wilkins,
Phys. Rev. Lett. {\bf 71}, 4015 (1993).
\bibitem{lon} F. H. Long, S. P. Love, B. I. Swanson, and R. H. McKenzie,
Phys. Rev. Lett. {\bf 71}, 762 (1993).
\bibitem{lee}P. A. Lee, T. M. Rice, and P. W. Anderson,
Phys. Rev. Lett. {\bf 31}, 462 (1973).
\bibitem{die} W. Dieterich, Adv. Phys. {\bf 25}, 615 (1976).
\bibitem{sca2} D. J. Scalapino, Y. Imry, and P. Pincus, Phys. Rev. B
{\bf 11}, 2042 (1975).
\bibitem{sad} M. V. Sadovsk\~i\~i, Zh. Eksp. Teor. Fiz. {\bf 66}, 1720
(1974) [Sov. Phys. JETP {\bf 39}, 845 (1974)]; Fiz. Tverd. Tela
{\bf 16}, 2504 (1974) [Sov. Phys. Solid State {\bf 16}, 1632
(1975)].
\bibitem{mck0} R. H. McKenzie,
Phys. Rev. Lett. {\bf 74}, 5140 (1995).
\bibitem{voi} J. Voit, J. Phys. Condens. Matter {\bf 5}, 8305 (1993)
and references therein.
\bibitem{cha}P. Chandra, J. Phys. Condens. Matter {\bf 1}, 10067 (1989).
\bibitem{ric}M. J. Rice and S. Str\"assler,
Solid State Commun. {\bf 13}, 1389 (1973).
\bibitem{bje} A. Bjeli\~s and S. Bari\~si\'c, J. Physique Lett.
{\bf 36}, L169 (1975).
\bibitem{suz} Y. Suzumura and Y. Kurihara, Prog. Theor. Phys.
{\bf 53}, 1233 (1975).
\bibitem{sha} L. J. Sham, in {\it
Highly conducting one-dimensional
solids}, edited by J. T. Devreese, R. P. Evrard, and V. E. Van Doren
(Plenum, New York, 1979), p. 277.
\bibitem{bra}S. A. Brazovskii and I. E. Dzyaloshinskii,
Zh. Eksp. Teor. Fiz. {\bf 71}, 2338 (1976)
[Sov. Phys. JETP {\bf 44}, 1233 (1976)].
\bibitem{mck1}R. H. McKenzie and J. W. Wilkins,
Synth. Met. {\bf 55-57}, 4296 (1993).
\bibitem{alternative}
There are different ways of handling the cutoff.
In evaluating this integral I have followed Rice
and Str\"assler \cite{ric} and Schulz \cite{sch}
and performed the integral over $q_\parallel$
without any cutoff while a cutoff is used for
$q_\perp$. A slightly different result is
obtained if a cutoff is included also for $q_\parallel$.
\bibitem{sad2} M. V. Sadovsk\~i\~i, Zh. Eksp. Teor. Fiz. {\bf 77}, 2070
(1979) [Sov. Phys. JETP {\bf 50}, 989 (1979)].
\bibitem{won} W. Wonneberger and R. Lautenschl\"ager,
J. Phys. C. {\bf 9}, 2865 (1976).
\bibitem{bor} For a definition and discussion of Borel summation see
J. W. Negele and H. Orland, {\it Quantum Many-Particle Systems},
(Addison Wesley, Redwood City, 1988), p. 373.
\bibitem{err} M. Abramowitz and I. A. Stegun,
{\it Handbook of Mathematical Functions},
(Dover, New York, 1972). This integral is also
known as Dawson's integral.
\bibitem{joh2} D. C. Johnston, Solid State Commun. {\bf 56}, 439
(1985); D. C. Johnston, J. P. Stokes, and R. A. Klemm,
J. Mag. Mag. Mat. {\bf 54-57}, 1317 (1986).
\bibitem{rick}G. Rickayzen, {\it Green's Functions and Condensed
Matter}, (Academic, London, 1984), p.37.
\bibitem{ash}N. W. Ashcroft and N. D. Mermin,
{\it Solid State Physics} (Saunders, Philadelphia, 1976), p. 663.
\bibitem{horn}P. M. Horn, R. Herman, and M. B. Salamon,
Phys. Rev. B {\bf 16}, 5012 (1977).
\bibitem{kwo2} R. S. Kwok, G. Gr\"uner, and S. E. Brown, Phys. Rev. Lett.
{\bf 65}, 365 (1990)
[K$_{0.3}$MoO$_3$].
\bibitem{moz} G. Mozurkewich,
Phys. Rev. Lett. {\bf 66}, 1645 (1991);
R. S. Kwok, G. Gr\"uner, and S. E. Brown,
ibid. {\bf 66}, 1646 (1991).
\bibitem{chu} M. Chung, Y.-K. Kuo, G. Mozurkewich, E. Figueroa,
Z. Teweldemedhin, D. A. Dicarlo, M. Greenblatt, and
J. W. Brill, J. Phys. France IV {\bf 3}, 247 (1993)
[K$_{0.3}$MoO$_3$].
\bibitem{has} R. F. Hassing and J. W. Wilkins,
Phys. Rev. B {\bf 7}, 1890 (1973).
\bibitem{tol} J. C. Toledano and P. Toledano, {\it The Landau theory
of phase transitions: application to structural, incommensurate, magnetic,
and liquid crystal systems}, (World Scientific, Singapore, 1987), p. 167.
\bibitem{imr}
Y. Imry and D. J. Scalapino, Phys. Rev. B
{\bf 9}, 1672 (1974).
\bibitem{alt}
B. L. Altshuler, L. B. Ioffe, and A. J. Millis,
preprint, cond-mat/9504024
\bibitem{sat} See e.g., M. Sato, M. Fujishita, S. Sato, and S. Hoshino,
J. Phys. C {\bf 18}, 2603 (1985); R. M. Fleming, L. F. Schneemeyer,
and D. E. Moncton, Phys. Rev. B {\bf 31}, 899 (1985).
\bibitem{tin} M. Tinkham, {\it Introduction to Superconductivity},
(Krieger, Malabar, 1985), p.36.
\bibitem{kop} P. Kopietz, V. Meden, and K. Sch\"onhammer,
Phys. Rev. Lett. {\bf 74}, 2997 (1995).
\bibitem{wha} M. -H. Whangbo and L. F. Schneemeyer, Inorg. Chem. {\bf
25}, 2424 (1986).
\bibitem{agd} A. Abrikosov, L. P. Gorkov, and I. E. Dzyaloshinskii,
{\it Methods of Quantum Field Theory in Statistical Physics},
(Dover, New York, 1975), p. 130.
\end{references}
\narrowtext
\twocolumn
\centerline{\epsfxsize=7.0cm \epsfbox{fig1.ps}}
\begin{figure}
\caption{Pseudogap in the density of states near the
three-dimensional transition
temperature $T_P$. Perturbative treatments (dotted line,
compare Refs. \protect\cite{lee,ric}) give an absolute gap
$\psi$ at the transition temperature
whereas the exact treatment (solid line) gives
only a pseudogap. The energy $E$ is relative to the
Fermi energy and the density of states is normalized
to the free-electron value $\rho_o$.
The density of states is symmetrical about $E=0$.
This result is only valid sufficiently close to
$T_P$ that the longitudinal CDW correlation length
$\xi_\parallel \gg v_F/\psi$. As the temperature
increases above $T_P$, $\xi_\parallel $ decreases and
the density of states at the Fermi energy increases, i.e.,
the pseudogap gradually fills in (see Figures 5 and 6
in Reference \protect\cite{sad2}).
\label{figdos}}
\end{figure}
\centerline{\epsfxsize=7.0cm \epsfbox{fig2.ps}}
\begin{figure}
\caption{Breakdown of the quasi-particle picture. The electronic
spectral function is shown for several different momenta $k$,
relative to the Fermi momentum $k_F$.
As the momentum approaches $k_F$
the spectral function broadens
significantly, similar to the behaviour of a Luttinger liquid.
Inset: Momentum dependence of the occupation function $n(k)$.
The dashed line is the result in the absence of a pseudogap,
i.e., a non-interacting Fermi gas.
\label{figspec}}
\end{figure}
\centerline{\epsfxsize=7.5cm \epsfbox{fig3.ps}}
\begin{figure}
\caption{
Modification of the electronic specific heat $C_e(T)$ and
the Pauli spin susceptibility $\chi(T)$
by the pseudogap.
Both are normalized to their values in the
absence of the pseudogap.
\label{figparam}}
\end{figure}
\centerline{\epsfxsize=7.5cm \epsfbox{fig4.ps}}
\begin{figure}
\caption{The pseudogap due to thermal lattice motion has
a significant effect on the coefficients in the
Ginzburg-Landau free energy (\protect\ref{aa1}) for a single chain.
The ratio of the single-chain
mean-field transition temperature $T_0$
and the coefficients $b$ and $c$ to
their rigid lattice values (given by (\protect\ref{mfa} -
\protect\ref{mfc})) are shown as a function of
the ratio of the pseudogap $\psi$ to the temperature.
For $\psi > 2.7 T$ the coefficient $b$ becomes
negative and the transition will be first order
(Section \protect\ref{secprop}).
Inset: Relationship between $T_0/T_{RL}$ and $\psi/T_{RL}$.
\label{figcoeff}}
\end{figure}
\centerline{\epsfxsize=7.5cm \epsfbox{fig5.ps}}
\begin{figure}
\caption{
Dependence on the pseudogap of physical quantities
associated with mean-field theory of a single chain.
The plot shows the coherence length $\xi_0$,
the width of the one-dimensional critical region $\Delta t_{1D}$,
and the inverse of the specific heat jump $\Delta C$.
All quantities are normalized to their rigid-lattice values.
For $\psi > 2.7 T$ the coefficient $b$ becomes
negative and the transition will be first order
(Section \protect\ref{secprop}).
The large reduction of $\Delta t_{1D}$ below the rigid lattice
value of 0.8 means that a mean-field treatment of the single
chain Ginzburg-Landau functional may be justified.
\label{figjump}}
\end{figure}
\begin{table}
\caption{
Comparison of experimental values for
K$_{0.3}$MoO$_3$ of various dimensionless ratios
with the predictions of two simple microscopic models.
The three-dimensional transition temperature is
$T_P= 183 $ K.
The zero-temperature
energy gap $\Delta(0)$ is estimated from optical conductivity
data \protect\cite{deg}.
A Fermi velocity of $v_F=2 \times 10^5$ cm/sec was
estimated from band structure calculations \protect\cite{wha}.
$\Delta C$ is the specific heat jump
at the transition \protect\cite{bri} and $\gamma T_P$ is
the normal state electronic specific heat that has been
calculated from the density of states estimated from
magnetic susceptibility measurements \protect\cite{joh}
well above the transition temperature.
The longitudinal coherence length $\xi_{0z}$ has
been estimated from x-ray scattering experiments \protect\cite{gir}.
In both models the dimensionless ratios are independent
of any parameters,
except for $\Delta(0)/k_B T_P$ in
Schulz's model, which is described in Appendix \protect\ref{appsch}.
The rigid lattice theory \protect\cite{all,sch} involves a mean-field
treatment of the single-chain Ginzburg-Landau
functional (\protect\ref{aa1}) with the
coefficients (\protect\ref{mfa} - \protect\ref{mfc}).
}
\begin{tabular}{llcc}
Dimensionless & Experimental & Schulz & Rigid lattice\\*[-0.05in]
ratio & value & model & theory\\
\tableline
$\displaystyle {\Delta(0) \over k_B T_P }$
& $5 \pm 1$ & -- & 1.76\\
$\displaystyle{{\Delta C \over \gamma T_P}}$
& $5 \pm 1$ & 3.4 & 1.43\\
$\displaystyle{{\xi_{0z} T_P \over v_F}}$
& $0.18 \pm 0.04$ & 0.23 & 0.23 \\
\end{tabular}
\label{table1}
\end{table}
\begin{table}
\caption{Parameters for several quasi-one-dimensional materials.
The observed transition temperature $T_P$ is always much smaller than
the rigid-lattice transition temperature
$T_{RL}$. The phonons near
$2k_F$, which soften at the transition,
can be treated classically since they have frequencies
of the order of $\Omega(0)$ (estimated from Raman and
neutron scattering) which is much smaller than $T_P$.
The zero-temperature gap $\Delta(0)$, estimated from
the peak in the optical absorption,
was used to calculate $T_{RL}$
($T_{RL}=\Delta(0)/1.76k_B$).}
\begin{tabular}{lcccc}
& $T_P$ (K) & $\Delta(0)$ (meV) & $T_P/T_{RL}$ & $\Omega(0)$ (K)\\
\tableline
K$_{0.3}$MoO$_3$& 183 & 90\tablenotemark[1] & 0.31 & 80
\tablenotemark[2] \\
(TaSe$_4$)$_2$I & 263 & 200\tablenotemark[3] & 0.20 &
130\tablenotemark[4] \\
K$_2$Pt(CN)$_4$Br$_{0.3}$ & 120\tablenotemark[5]& 100\tablenotemark[6]
& 0.18 & 58 \tablenotemark[7]\\
TSeF-TCNQ & 29 & 10\tablenotemark[8] & 0.42 & -- \\
\end{tabular}
\label{table2}
\tablenotetext[1]{ Ref. \cite{deg}}
\tablenotetext[2]{ J. P. Pouget, B. Hennion, C.
Escribe-Filippini, and M. Sato, Phys. Rev. B {\bf 43}, 8421 (1991)}
\tablenotetext[3]{ Ref. \cite{ber}}
\tablenotetext[4]{ S. Sugai, M.Sato, and S. Kurihara, Phys. Rev. B
{\bf 32}, 6809 (1985).}
\tablenotetext[5]{ Complete ordering does not occur \cite{car}.}
\tablenotetext[6]{Ref. \cite{bru}}
\tablenotetext[7]{Ref. \cite{car}}
\tablenotetext[8]{
From activation energy of dc conductivity, Ref. \cite{sco}}
\end{table}
\onecolumn
\widetext
\section{Introduction}
In August 2018 Radio New Zealand (RNZ), a New Zealand public radio broadcaster, reported that the use of Google Web search by the New Zealand Police may have unwittingly revealed a link between two suspects facing charges for a crime committed together but who had no documented history of any joint crime or crime of the same kind \cite{rnz2018}. This particular incident was significant because one of the suspects had official name suppression, so that the revelation of the name could have opened a loophole for the defence lawyers to counter the charges on the basis of the violation of the suspect's rights. So far it is assumed that the police, when investigating the two suspects through Google Web searches, triggered the search engine's algorithms to learn a connection between them, which led to the two names appearing together in the search results when searching only for the suspect whose name was not officially suppressed.
Because the inner workings of the Google search ranking and personalisation algorithms are likely to remain a corporate secret \cite{ormen2016googling}, one can only speculate about what caused this particular incident. However, the case highlights a general and important issue with public sector officials' work that involves the use of digital services provided by global IT companies. Some of the questions in this problem domain, which we suggest is an understudied area that requires timely and deeper investigation, are: What is the impact of search engine personalisation on the work of public sector officials? Which technical features of search engine personalisation impact public sector officials' work? Is it possible for public sector officials to prevent being affected by search engine personalisation with respect to their work?
Here we contribute to this line of scientific inquiry by performing an experiment involving public sector professionals from a range of governmental agencies in New Zealand. In order to understand the impact of Google's search result personalisation on knowledge work in the public sector, we address the following questions: (RQ1) How reliant are public sector officials on the use of Google search? (RQ2) Is there a difference between personalised and un-personalised Google search for queries in different public sector agencies? (RQ3) How does the personalisation of search results affect the perceived relevance of search results for public sector officials with respect to their work?
By answering these questions we make the following contributions: First, we show how highly public sector officials self-assess their dependency on Google Web search and provide evidence for a lack of awareness that Google search personalisation may have an impact on knowledge work in professional contexts. Second, we quantify the amount of relevant information that may be missed due to Web search personalisation. Third, we provide insight into how alternative search practices may help to overcome this issue.
The remainder of this paper is structured as follows: We begin with a description of the foundations of Web search and Web search personalisation, followed by a review of related studies that looked into quantifying the impact of Web search personalisation. Informed by the related studies, we describe the research design and subsequent results. We then discuss the implications of the results for research and practice.
\section{Preliminaries and related work}
\subsection{A brief history of Web search}
Yahoo, AltaVista, Lycos and Fireball were among the first search engines to emerge when the World Wide Web was established \cite{holscher2000web,lewandowski2015evaluating}. While using traditional cataloging, indexing and keyword matching techniques initially was sufficient for basic information retrieval on the Web, it was soon regarded to be a poor way to return search results when focusing on the commercialisation of Web content and search \cite{page1999pagerank}. With the entry of Google into the search engine market began the era of algorithms that take ``advantage of the graph structure of the Web'' to determine the popularity of Web content \cite{broder2000graph} in order to produce better, more relevant search results. Over time Google outperformed other search engine providers to become the market-leading search engine, with a market share of just over 74\% in 2017, reaching as high as 90\% for mobile users thanks to the Chrome application that is embedded into the Android operating system for mobile devices. It is due to this widespread use that Google is likely to play a role not only in people's private lives but also in their behaviour at work \cite{mangles2018statistics}.
\subsection{Search results personalisation}
Personalisation, regarded as a process that ``tailors certain offerings (such as content, services, product recommendations, communications, and e-commerce interactions) by providers (such as e-commerce Web sites) to consumers (such as customers and visitors) based on knowledge about them, with certain goal(s) in mind'' \cite{adomavicius2005personalization}, was introduced to Web search by Google in 2005 as a means of getting better at providing the most relevant results \cite{google2005, google2009}. From the perspective of the search engine provider this was necessary since the vast (and continuously growing) amount of information available on the Web meant that more effective information retrieval systems were required in order to provide users the most relevant items according to their query \cite{brin2012reprint}. While Google's personalisation process is not fully transparent \cite{ormen2016googling}, it is known to include a plethora of behavioural signals captured from search engine users, such as past search results a user has clicked through, geographic location or visited Web sites, for example \cite{google2009,roesner2012detecting}.
Such search result personalisation has led to concerns about what has been coined the \textsl{filter bubble}, i.e. the idea that people only read news that they are directly interested in and agree with, resulting in less familiarity with new or opposing ideas \cite{pariser2011filter,foster2012news}. However, there is still no academic consensus about whether the filter bubble actually does exist at all, or whether it is an overstated phenomenon \cite{foster2012news,dutton2017search,haim2018burst}. Hence, research such as the one described here is still required to bring clarity to the current ambiguity about that matter.
\subsection{Related studies on search result comparison}
In \cite{du2011academic} a heavy reliance on search engines by academic users was found. This brought personalisation into the focus of research, prompting Salehi et al. \cite{salehi2015examining} to examine the personalisation of search results in academia. Using alternative search setups involving Startpage and Tor to depersonalise search results and comparing the rank order of different search results using the percentage of result overlap and the Hamming distance, they found that on average only 53\% of search results appear in both personalised and unpersonalised search.
The work by Hannak et al. \cite{hannak2013measuring} introduced a different approach for measuring the personalisation of search results. They compared the search results of a query performed by a participant (personalised) with the same query performed on a `fresh' Google account (control) with no history. The comparison between the two sets was done using the Jaccard index and the Damerau-Levenshtein distance, as well as Kendall's tau to understand the difference in the rank order between two search results. They observed measurable personalisation when searches were conducted while signed into a Google account, and location personalisation from the use of the IP address. Ultimately, they observed that 11.7\% of search results were different due to personalisation.
In their audit of the personalisation and composition of politically related search result pages, Robertson et al. \cite{robertson2018auditing} found a higher, though still relatively low, level of personalisation for participants who were signed into their Google account and/or regularly used Alphabet products. In order to account for the behavioural pattern of search engine users to focus more strongly on top results \cite{lu2016effect}, they used Rank-Biased Overlap \cite{webber2010similarity} to compare search results.
Overall, these previous studies confirm corporate statements by Google regarding the use of location data and the profile of the user conducting the search \cite{google2005,google2009} for the tailoring of search results. Our work benefits from the continuous improvement of the methodologies used for search result comparison and transfers such a study setup into the public sector to shed light on the impact of personalisation on professional knowledge work.
\section{Research design and data}
We based the research design of our experiment on the previous studies that sought to investigate search result personalisation in an academic search context \cite{du2011academic,salehi2015examining} and the quantification of search result personalisation \cite{hannak2013measuring,robertson2018auditing}. We further introduce search result relevance as an additional dimension. The idea of self-assessed relevance has been explored perhaps most notably in \cite{pan2007google}. In this work we investigate the relevance of the results that appear in personalised and unpersonalised search.
\subsection{Study Participants}
We recruited 30 participants from the public sector following the typical procedure of convenience sampling (21 self-identified as female and 9 as male). Of these participants, 5 were at managerial level or higher. The results are slightly skewed towards one public sector organisation, with over half of the participants (16) coming from that particular organisation, but participants were chosen randomly. Most participants are experienced, indicating they have been in their current industry for 10 years or more.
\subsection{Survey}
To gauge how `important' the use of Google search was to public sector officials we performed a pre-experiment survey. The survey design was informed by two of the studies mentioned earlier: the study of how academic researchers sought information by Du and Evans \cite{du2011academic}, and the study of personalisation in academic search by Salehi et al. \cite{salehi2015examining}. Due to time constraints, the survey remained pre-qualifying; we did not perform a follow-up survey or interview. In order to determine the importance of Google and search engines, questions were directed at how important the participants believed Google was to their work functions. For example, the survey included questions that seek to determine the extent of a participant's self-assessed reliance on Google and how often they used Google as part of their work routines.
\subsection{Experiment}
Following completion of the survey, we asked participants to perform two Google searches on their work computers, to simulate a ``normal'' search query that they might perform in the course of their everyday work duties. For each query performed by a participant, we performed the same search at the same time under two further conditions, both designed to obfuscate Google's knowledge of who performed the search. For the first query (Query 1), participants were asked to search something that they had actually searched before. For the second query (Query 2), participants were asked to search something that they would potentially search in the course of their work duties, but to the best of their knowledge had not searched before. This results in two queries being performed under three different search conditions.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\textwidth]{fig19}
\caption{Overview of the study setup and the three search result sets generated per query and participant.}
\normalsize
\label{fig:setup}
\end{figure}
\paragraph{Personalized search:} Participants performed the search for both queries at work on their work computers to simulate search performed during the course of their normal working day.
\paragraph{Unpersonalised search 1:} This condition attempted to depersonalise search query results through the use of a virtual machine running Mozilla Firefox on Linux's Ubuntu operating system. A virtual machine allows for a virtual computer to be created within a computer. By using a virtual machine, it is less likely that traces of a person's real identity will be left, unless they did something that allowed their identity to be linked to the virtual machine \cite{van2017deviating}. The identity of whoever is performing the Google search should be tied to the virtual machine. Since each virtual machine was created for the purpose of this experiment, there is no history of any past searches that could influence the output of the search results, nor any identity to link to. A test run of this condition found that location personalisation was present, but only to the extent that the country from which the search was performed could be identified. It is believed that this is the extent of personalisation for this condition.
\paragraph{Unpersonalised search 2:} This second condition attempted to completely depersonalise search query results through the use of the Startpage search engine running on Tor, and is borrowed directly from Salehi et al. \cite{salehi2015examining}. Startpage is a search engine that gathers the best Google results but does not reveal or store any personal information of the user. It has also been awarded a European Privacy Seal \cite{salehi2015examining}. Tor is essentially a modified Mozilla Firefox browser with additional proxy applications and extensions that hides any identifying information by `fragmenting' the links between client and server, redirecting the traffic through thousands of relays \cite{van2017deviating}.
After each search result was retrieved, we asked the participants to rate the relevance of each of the top 10 search result items on a three-point Likert scale (relevant, maybe relevant, not relevant). This three-point scale was chosen to reduce potential ambiguity of more nuanced levels on any larger scale with respect to the rating of the relevancy of search results. When using larger scales we experienced higher variance in how study participants interpret the different levels which would lead to undesired limitations for the study of the result relevancy.
\subsection{Data analysis}
The survey responses as well as the self-assessed relevance scores for search results were analysed using exploratory data analysis (EDA) techniques such as calculating the mean and standard deviation (SD) for survey responses. To compare the rank of any pair of ordered sets of URIs $A$ and $B$ we use the Rank-Biased Overlap (RBO) measure as justified in \cite{robertson2018auditing}. RBO provides a rank similarity measure that takes top-weightedness (stronger penalties for differences at the top of a list) and incompleteness (lists with different items) into account. In our study setup we only compare sets of equal size limited to the top 10 results retrieved under the three aforementioned query conditions.
The RBO measure contains the parameter $\Psi \in [0,1]$, which represents the degree to which a fictive user is focused on top-ranked results, with smaller values of $\Psi$ reflecting a stronger focus on top results. RBO is a common measure for this kind of analysis and outperforms other measures for assessing the similarity or distance of vectors of strings (e.g., the Hamming distance) due to the possibility of factoring in the focus on top results.
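For concreteness, a minimal implementation of the truncated RBO sum for two equally sized top-$k$ lists is sketched below (following the definition in \cite{webber2010similarity}; the extrapolation term of the full measure is omitted, and all variable names are ours):
\begin{verbatim}
# Truncated Rank-Biased Overlap for two top-k result lists.
# psi in (0,1) is the persistence parameter: smaller psi weights the
# top ranks more heavily. Identical lists give a score of 1 - psi**k.

def rbo(list_a, list_b, psi):
    k = min(len(list_a), len(list_b))
    seen_a, seen_b = set(), set()
    score, weight = 0.0, 1.0 - psi
    for d in range(1, k + 1):
        seen_a.add(list_a[d - 1])
        seen_b.add(list_b[d - 1])
        score += weight * len(seen_a & seen_b) / d  # (1-psi) psi^(d-1) A_d
        weight *= psi
    return score

# Example: rbo(personalised_top10, unpersonalised_top10, psi=0.5)
\end{verbatim}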
For each unpersonalised result set (i.e., unpersonalised search 1 and unpersonalised search 2), we computed the proportion of URIs self-assessed as relevant that were not in the respective personalised search result set. This provides us with an understanding of how much relevant information is missed in the personalised search.
We also computed six sets of URIs that were common between pairs of result sets (three such sets per query) but that were rated differently in terms of their relevance, in order to find out whether participants were consistent in their relevance assessment. To investigate more deeply whether the rank order may add bias to the participants' self-assessment of the search result relevance, we also analysed the rank change for URIs within those sets (i.e., whether a URI that was assessed differently moved up or down in the ranking).
Finally, we derived the sets of URIs that are deemed relevant in any of the unpersonalised result sets but that were not present in the respective personalised search result. To understand whether there is any bias in the participants' assessment of the relevancy (e.g., an implicit assumption that highly ranked search results must be relevant), we then computed the distribution of the ranks of those URIs in the four respective unpersonalised search result sets.
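The computations described in this subsection reduce to simple set algebra over the top-10 lists; the following sketch illustrates them under assumed data structures (ordered URI lists per condition, and per-condition dictionaries mapping each URI to its yes/no/maybe rating):
\begin{verbatim}
# Illustrative sketch only; the data structures are assumptions, not
# taken from an actual code base.

def missed_relevant(personalised, unpersonalised, unpers_rating):
    # Proportion of unpersonalised top-10 URIs rated "yes" that are
    # absent from the personalised top-10.
    missed = [u for u in unpersonalised
              if u not in set(personalised) and unpers_rating[u] == "yes"]
    return len(missed) / len(unpersonalised)

def inconsistently_rated(list_a, list_b, rating_a, rating_b):
    # URIs common to both result sets whose relevance rating differs.
    common = set(list_a) & set(list_b)
    return {u for u in common if rating_a[u] != rating_b[u]}
\end{verbatim}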
\section{Findings}
\subsection{Trust in and reliance on Google in the public sector}
As presented in Table~\ref{tab:use}, the majority of participants indicated that they use Google every day for both work and non-work purposes. Furthermore, most participants said that Google is their first point of enquiry as opposed to other sources such as asking co-workers. Participants also indicated that they do not compare the results of their Google searches with other search engines. These responses indicate a high level of trust in and reliance on Google in the public sector. The responses to questions asking about the quality of people's work if they were not able to use Google further confirm this reliance. Participants indicated that they generally believed that their work would become of worse quality if they could not use Google, even if they could use other sources of information.
\begin{table}[hbt]
\centering
\scriptsize
\begin{tabular}{l|l}
\textbf{Survey item} & \textbf{Mean response (SD)} \\ \hline
Frequency of use & 4 (0.92) \\ \hline
As the first point of enquiry & 4 (0.91) \\ \hline
Search engine comparison frequency & 2 (1.05) \\ \hline
Impact on quality of work & 4 (0.90) \\
\end{tabular}
\caption{Mean and standard deviation for the survey responses related to the use and trust in Google as a first and single point of online research.}\label{tab:use}
\end{table}
That the overwhelming majority of participants use Google as their first point of enquiry at work draws comparisons with studies that found that around 80\% of Internet users in an academic context used Google search as their first point of enquiry \cite{du2011academic,salehi2015examining}. The participants in the study by Du et al. \cite{du2011academic} indicated that this was because they found Google easy to use, and that it had become a habit to use Google as the first option when they needed to search for information. While participants in our study were not explicitly asked why they used Google as their first point of enquiry, other factors, such as the fact that they indicated that they do not compare results with other search engines, point in the direction that Google plays a similar role in the public sector.
\subsection{Variance in personalised and unpersonalised search results}
Figure~\ref{fig:rbo} shows the results of our analysis of the RBO. We plotted a smoothed line graph for $20$ RBO scores for $\Psi$ in the range from $0.05$ to $1.0$ (increased in steps of $0.05$). Since smaller $\Psi$ values indicate stronger focus on top ranks in search results, the shape of these graphs shows that the similarity of search results is consistently lower for top ranked search results and increases as lower ranks are taken into account. The similarity of search results is consistently the highest for personalised and unpersonalised search 1, reaching an RBO of almost $0.8$ when focusing on low-ranked results and a lower bound of around $0.4$ when relaxing the $\Psi$ parameter to focus on the top results only. Any comparison with unpersonalised search 2 does not even reach an RBO of $0.4$ even when focusing on low-ranked results.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{fig1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{fig2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{fig3}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{fig4}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{fig5}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{fig6}
\end{subfigure}
\caption{Rank-Biased Overlap analysis with variable $\Psi$ threshold from 0 to 1 (in steps of 0.05) for both queries performed by the study participants under all three experimental conditions.}
\label{fig:rbo}
\end{figure}
While this result supports the recommendations to use advanced measures to compare search result rankings \cite{webber2010similarity,robertson2018auditing}, we suggest that it is a call for deeper investigations into the lower ranks of search results to quantify and qualify the information professional knowledge workers would miss out on if they focus on top ranked search results.
\subsection{Result relevance in personalised and unpersonalised search}
With respect to the search result relevance assessment, we find that between half and two-thirds of the search results have been assessed as relevant by the study participants (Table~\ref{tab:surv}). However, the mean number of maybe responses for the query 2 result sets is about 50\% smaller than that for query 1, which means participants were making more certain assessments of whether a result is relevant or not for the second query they performed during the experiment.
\begin{table}
\centering
\scriptsize
\arrayrulecolor{black}
\begin{tabular}{!{\color{black}\vrule}l!{\color{black}\vrule}l!{\color{black}\vrule}l!{\color{black}\vrule}l!{\color{black}\vrule}l!{\color{black}\vrule}}
\hline
\multicolumn{2}{!{\color{black}\vrule}l!{\color{black}\vrule}}{} & Yes (SD) & No (SD) & Maybe (SD) \\
\hline
\multirow{3}{*}{Query 1} & Personalised & 6 (2.6) & 2.6 (2.5) & 1.3 (1.7) \\
\cline{2-5}
& Unpersonalised 1 & 6 (2.7) & 2.9 (2.7) & 1.1 (1.7) \\
\cline{2-5}
& Unpersonalised 2 & 5 (2.8) & 3.8 (2.9) & 1.2 (1.6) \\
\hline
\multirow{3}{*}{Query 2} & Personalised & 6.6 (2.1) & 2.8 (2.2) & 0.6 (1.3) \\
\cline{2-5}
& Unpersonalised 1 & 6.3 (2.5) & 3.1 (2.5) & 0.5 (1.1) \\
\cline{2-5}
& Unpersonalised 2 & 6.2 (2.5) & 3.3 (2.6) & 0.5 (1.4) \\
\hline
\end{tabular}
\arrayrulecolor{black}
\caption{Means for the yes, no and maybe responses of the relevance assessment.}\label{tab:surv}
\end{table}
The proportion of URIs that are found in different result sets for the same participant but that this participant rated differently ranges from 15.7\% up to 19.7\% of all URIs in the intersection of pairs of result sets, as shown in Table~\ref{tab:uris}. We also highlight that this relevance assessment inconsistency is higher for query 1, which is the query that the participant had performed before as part of their work.
\begin{table}
\centering
\scriptsize
\arrayrulecolor{black}
\begin{tabular}{!{\color{black}\vrule}l!{\color{black}\vrule}l!{\color{black}\vrule}l!{\color{black}\vrule}}
\hline
\multirow{3}{*}{Query 1} & Personalised vs. unpersonalised 1 & 19.7\% \\
\cline{2-3}
& Personalised vs. unpersonalised 2 & 19.7\% \\
\cline{2-3}
& Unpersonalised 1 vs. unpersonalised 2 & 19.3\% \\
\hline
\multirow{3}{*}{Query 2} & Personalised vs. unpersonalised 1 & 18\% \\
\cline{2-3}
& Personalised vs. unpersonalised 2 & 16.3\% \\
\cline{2-3}
& Unpersonalised 1 vs. unpersonalised 2 & 15.7\% \\
\hline
\end{tabular}
\arrayrulecolor{black}
\caption{Proportion of inconsistently rated URIs.}\label{tab:uris}
\end{table}
\begin{figure}
\centering
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{fig7}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{fig8}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{fig9}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{fig10}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{fig11}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{fig12}
\end{subfigure}
\caption{Change in rank of unique URIs for which the users' relevance assessment varied between the three experimental conditions.}
\label{fig:boxplot1}
\end{figure}
The results of our deeper investigation into the rank changes of unique URIs for which the participants' assessment varied between the different conditions are depicted in Figure~\ref{fig:boxplot1}. The graphs show that, macroscopically, there is no tendency for URIs that are ranked higher or lower to be assessed inconsistently.
Further to the results shown in Figure~\ref{fig:boxplot1}, we also investigated whether there is any trend towards higher or lower ranking of inconsistently assessed URIs, specifically for those URIs for which the assessment increased (e.g., from maybe to yes), decreased (e.g., from yes to no) or remained the same. As Figure~\ref{fig:rankchange} shows, there is, if anything, a moderate tendency for URIs whose perceived relevance changed to be ranked higher, but whether the perception increased or decreased has no influence.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\textwidth]{fig17}
\caption{Change in rank of unique URIs for which the users' relevance assessment increased, decreased or remained the same.}
\normalsize
\label{fig:rankchange}
\end{figure}
All these results related to the relevance assessment can be interpreted to mean that the self-assessment of search result relevance is either a task prone to human error or that the participants were impacted by an unobserved factor in the experimental setup that caused this behaviour. The former would again be in line with previous studies regarding Internet search behaviour and people's ability to assess search result relevance \cite{pan2007google}, while the latter means we additionally suggest that there is potentially a cognitive bias at work impacting the participants' assessment. In other words, the observation that the relevance assessment inconsistency is higher for query 1, which is the query that the participant had performed before as part of their work, raises the question whether the links in the query 1 result sets were harder to assess consistently because participants had performed this query before and thus had more detailed knowledge about the topic, leading to more nuanced opinions, or whether the participants simply became more certain in how to rate relevance as the experiment progressed because of a training effect.
\subsubsection{Missing relevant results}
Table~\ref{tab:uris2} shows the proportion of unique URIs that were exclusively found in the unpersonalised search result sets and considered relevant as per the participants' assessment. The numbers show that in both unpersonalised search 1 and unpersonalised search 2 there is a significant amount of relevant information to be found. Most significantly, the depersonalised search setup using Tor and startpage.com retrieved up to 20.3\% relevant information that was not found under personalised search conditions in our experiment. While previous studies also found that people may miss information due to search engine personalisation \cite{hannak2013measuring}, our unique experiment using the two different unpersonalised search settings allows us to further detail how one may circumvent this filter bubble effect and also quantifies the difference this may make.
\begin{table}
\centering
\scriptsize
\arrayrulecolor{black}
\begin{tabular}{!{\color{black}\vrule}l!{\color{black}\vrule}l!{\color{black}\vrule}l!{\color{black}\vrule}}
\hline
\multirow{2}{*}{Query 1} & Unpersonalised 1 & 7.3\% \\
\cline{2-3}
& Unpersonalised 2 & 16.7\% \\
\hline
\multirow{2}{*}{Query 2} & Unpersonalised 1 & 6\% \\
\cline{2-3}
& Unpersonalised 2 & 20.3\% \\
\hline
\end{tabular}
\arrayrulecolor{black}
\caption{Overall proportion of unique URIs that are not found in personalised search but in one of the unpersonalised searches and that are assessed as relevant.}\label{tab:uris2}
\end{table}
Figure~\ref{fig:relevantorder} shows the rank order distribution of those relevant results that are missing from personalised search. The distributions show a weak tendency for this missing but relevant information to be found in the lower ranks of the search results. In the light of multiple previous studies that found that search engine users focus substantially on top-ranked results \cite{granka2004eye,pan2007google}, this is important because it means finding all relevant information is not just a challenge to be solved by removing or circumventing personalisation algorithms but also a user interface (UI) and user experience (UX) design issue.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{fig13}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{fig14}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{fig15}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{fig16}
\end{subfigure}
\caption{Rank order distribution of links that are missing from personalised but are deemed relevant in any of the unpersonalised search result sets.}
\label{fig:relevantorder}
\end{figure}
\section{Limitations}
Similar to previous work \cite{hannak2013measuring,salehi2015examining}, our research was limited by the small sample size. This was a practical constraint due to the way the experiment was designed and could only be avoided by either accepting uncertainty about whether the participants are actually public sector workers (e.g., by running it as a self-administered online experiment) or by running it over a much longer period of time. We consider the latter for our future research, combined with an extension to cover alternative search engines and with performing the experiment in multiple countries to also account for localisation. Future research should also expand the investigation of the relevance of search results and in particular the properties and implications of the self-assessment of relevance. Relevance is a subjective matter, and how the participants rated relevance in our experiments differed between participants. Pan et al. \cite{pan2007google} were able to take this subjectivity into account through an objective third-party evaluation, which we did not do, because our participants were the subject matter experts for the queries that they performed in a work context.
\section{Conclusion}
In this paper we presented findings from a Web search experiment involving public sector workers. We investigated not only how important they perceive Google Web search for fulfilling their information needs, but also whether Google's Web search personalisation means they may miss relevant information. We find that the majority of participants in our experimental study are neither aware that there is a potential problem nor do they have a strategy to mitigate the risk of missing relevant information when performing online searches. Most significantly, we provide empirical evidence that up to $20\%$ of relevant information may be missed due to Web search personalisation.
The fact that personalisation has an impact on search results was not surprising, particularly in the light of previous studies focused on academic Web search \cite{hannak2013measuring,salehi2015examining}. However, our work provides new empirical evidence for this phenomenon in the public sector. Therefore, our research has significant implications for public sector professionals, who should be provided with training about the potential algorithmic and human biases that may affect their judgments and decision making, as well as clear guidelines on how to minimise the risk of missing relevant information. This does not just involve comparing search results across different search engines and actively looking further down the ranks for relevant results; it may even be necessary for public sector agencies to provide dedicated infrastructure that obfuscates users' identities to circumvent personalisation.
\bibliographystyle{splncs04}
\section{Introduction}
Image super-resolution (SR) refers to the task of recovering a latent high-resolution (HR) image from a corresponding low-resolution (LR) image. This has been one of the most widely explored inverse problems in computer vision \cite{dian2017hyperspectral,liu2013infrared}. An LR image $I_{x}$ is assumed to be modeled as the output of the following degradation:
\begin{equation}
I_{x}=\mathbf{D}(I_{y};\lambda),
\end{equation}
where $\mathbf{D}$ defines a degradation mapping function, $I_{y}$ is the corresponding HR image, and $\lambda$ denotes the degradation parameter. The higher the degradation, the harder the task of reconstructing the HR image becomes \cite{wang2020deep}.
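For illustration, a common instantiation of $\mathbf{D}$ is bicubic downsampling, optionally followed by additive noise. The following minimal Python sketch synthesizes an LR image from an HR image under this assumption; the scale factor and noise level are illustrative parameters, not values used in this paper.
\begin{verbatim}
# Minimal sketch of the degradation in the equation above:
# bicubic downsampling (a common choice of D) with an
# optional additive-noise term. Parameters are illustrative.
import numpy as np
from PIL import Image

def degrade(hr_image, s=4, noise_sigma=0.0):
    w, h = hr_image.size
    lr = hr_image.resize((w // s, h // s), Image.BICUBIC)
    lr = np.asarray(lr, dtype=np.float32)
    if noise_sigma > 0.0:
        lr += np.random.normal(0.0, noise_sigma, lr.shape)
    return np.clip(lr, 0, 255).astype(np.uint8)
\end{verbatim}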
The image super-resolution problem is also explored in the fields of remote sensing \cite{lanaras2015hyperspectral}, surveillance imaging \cite{shamsolmoali2019deep}, and medical imaging \cite{mahapatra2019image}. While a number of approaches have been attempted to solve this problem, it remains ill-posed, particularly since any specific LR image may correspond to croppings from multiple HR counterparts.
Three types of solutions are usually provided to solve the SR problem: interpolation-based \cite{hung2011robust}, learning-based \cite{he2013beta,zeng2015coupled,yang2013fast}, and reconstruction-based \cite{sun2010context,wang2014fast,fattal2007image}. Learning-based SR methods learn the non-linear mapping between HR and LR images using probabilistic generative models, random forests, linear or non-linear models, neighbor embedding \cite{chang2004super}, and sparse regression \cite{kim2010single}. Interpolation-based methods utilize the adjacent pixels to calculate the interpolated pixels by using an interpolation kernel. Several types of existing interpolation-based models have been used to tackle the SR problem, such as bicubic \cite{ruangsang2017efficient}, edge-directed estimation \cite{wang2013edge}, and auto-regressive models \cite{lu2017two}. Interpolation-based methods are computationally efficient and simpler than other architectures. However, these methods suffer from low accuracy compared to other methods because of their poor representation learning capacity \cite{shukla2020technical}.
\begin{figure}[t!]
\centering
\subcaptionbox{HR Image}{
\includegraphics[width = 0.37\linewidth, trim={0cm 8.5cm 7.5cm 0.88cm}, clip]{introduction_picture.pdf}
}
\hspace{0.1cm}
\centering
\subcaptionbox{LR Image}{
\includegraphics[width = 0.37\linewidth, trim={7.5cm 8.5cm 0cm 0.88cm}, clip]{introduction_picture.pdf}
}
\\[5pt]
\centering
\subcaptionbox{DIP}{
\includegraphics[width = 0.37\linewidth, trim={0cm 0.88cm 7.5cm 9.25cm}, clip]{introduction_picture.pdf}
}
\hspace{0.1cm}
\centering
\subcaptionbox{NLVAE}{
\includegraphics[width = 0.37\linewidth, trim={7.5cm 0.88cm 0cm 9.25cm}, clip]{introduction_picture.pdf}
}
\caption{Visual Comparison of Deep Image Prior (DIP) (untrained) \& NLVAE (untrained)}
\label{fig:introduction_picture}
\end{figure}
\par
Learning-based solutions are widely used in the SR task. They can primarily be categorized into three types: code-based \cite{freeman2002example,zhang2010partially,zeng2015coupled}, CNN-based \cite{kim2016accurate,gupta2018cnn}, and regression-based \cite{zhang2012single}. In the past, learning-based solutions have shown great success for image super-resolution due to their robust feature learning capability \cite{ha2018deep,kim2016deeply}. Regression-based solutions are much faster than other methods but, compared to other learning-based methods, produce blurry images and a low peak signal-to-noise ratio due to poor representation learning \cite{kim2010single,yang2013fast}. Learning-based solutions generally measure the similarity between LR \& HR images. A number of methods have been proposed to solve this problem, but among them CNN-based methods show superior performance because of their robust representation learning capabilities \cite{bhowmik2017training}. As a learning-based method, a hierarchical pyramid structure was developed using residual layers for image super-resolution \cite{lai2017deep}.
\par
Zhang \textit{et al.} \cite{zhang2018residual} introduced the residual dense block in order to learn hierarchical feature maps, utilizing a bottleneck layer at the end of the residual dense layer. The local connections in the same block represent short-term memory, and skip connections provide long-term memory for representation learning. Lim \textit{et al.} \cite{lim2017enhanced} proposed Enhanced Deep Super-Resolution (EDSR) with a large improvement in performance utilizing the residual structure. Despite the excellent performance, these methods utilize a residual structure that is computationally expensive. In \cite{hu2019channel}, a combination of a channel-wise attention block and a spatial attention block was developed for single image super-resolution (SISR). Both of these blocks were again combined with residual connections, creating a robust SR method that captures sufficient representations from the feature space and suppresses irrelevant information. As this approach combines two kinds of attention block with a stacked neural network, it consumes large amounts of memory and slows down the training process.
\par
In \cite{mao2016image}, an autoencoder-based SISR method was introduced with symmetric skip connections. Similarly, Tai \textit{et al.} \cite{tai2017image} featured a deep recursive residual network (DRRN) utilizing a memory block with residual representation learning. These solutions provide better results with the help of large-scale architectures. Park \textit{et al.} \cite{park2018high} introduced a high dynamic range approach for the super-resolution task that is very lightweight and easy to implement, but decomposing the image causes loss of important information, which results in low peak signal-to-noise ratio (PSNR) values.
Many upsampling strategies have been observed in the literature. The efficient sub-pixel CNN (ESPCN) uses sub-pixel convolution for upsampling \cite{shi2016real}, which stores channel information for extra points and then reorganizes those points for HR reconstruction. The fast super-resolution CNN (FSRCNN) utilizes a deconvolution operation to upsample \cite{dong2016accelerating}. Hua \textit{et al.} used a deconvolution operation for the upsampling process, featuring an arbitrary interpolation operator and a subsequent convolution operator \cite{hua2019image}. It is to be noted that the deconvolution operation has two disadvantages: deconvolution is used at the end of the network, and the downsampling kernel is not known. Estimating an unknown input consequently results in poor performance. To avoid these issues, we utilize linear upsampling of LR images so that we only focus on reconstruction quality rather than upsampling kernels.
\par
From a theoretical perspective, it can be deduced that deeper neural architectures provide better results than shallow ones \cite{montufar2014number}. Keeping this in mind, Kim \textit{et al.} \cite{kim2016accurate} first proposed a very deep architecture for the SISR task. With 20 layers, this VGG-based network uses $3\times3$ kernels in all layers. Additionally, this method uses a high learning rate for faster convergence and utilizes gradient clipping to alleviate the gradient explosion problem. To learn short-term memory information, skip connections have been used in many tasks. Another work introduced a recursive topology with parameter reduction using a recursive convolution kernel \cite{tai2017image}. However, these settings are risky in self-supervised settings because a high learning rate yields a shallow feature learning process, thus resulting in poor performance.
\par
Several reconstruction-based SISR methods have been introduced to solve the SISR problem, utilizing a shallow feature learning process \cite{lian2019fg,dou2020super}. KernelGAN \cite{bell2019blind}, consisting of a deep linear generator and a discriminator, supports blind SISR. The deep linear generator removes non-linear activation functions, but the overall loss function is not convex. The discriminator uses fully convolutional layers with no strides and pooling. Even though the overall structure means that the model converges faster, it is still difficult to obtain the global minimum. Our method utilizes non-linear activation functions in both the encoder and the decoder, making the network learn more intuitive information than KernelGAN. Shaham \textit{et al.} \cite{shaham2019singan} proposed a multidisciplinary generative model capable of performing multiple computer vision tasks. To our knowledge, this work was the first attempt to use an unconditional generative model for the ZSSR task. It utilizes an adversarial network as a reconstruction-based method, learning only abstract features from image patches. Due to the complexity of training an adversarial network, GAN models often suffer from convergence failure and mode collapse. Moreover, adversarial training takes longer than training discriminative models.
\par
To alleviate these problems, we have devised an image-specific architecture, called probabilistic non-local variational neural autoencoder (\texttt{NLVAE}), which can generate high-quality images with a robust pixel learning capability. Our generative solution is specifically designed for ZSSR, storing more disentangled and intuitive features and learning from low-dimensional space.
Our specific contributions can be summarized as follows:
\begin{itemize}
\item An unconventional internal method has been introduced for the ZSSR task where only one LR image and its corresponding HR image are required for the training process. The proposed method is completely unsupervised and does not require any prior training. It establishes a new state-of-the-art (SOTA), outperforming currently available methods.
\item The proposed light-weight non-local feature extraction module harvests maximum representations from different receptive regions, boosting the super-resolution performance.
\item The proposed loss function aids in reconstructing high-quality images by controlling the Lagrange multiplier and the marginal value.
\end{itemize}
The rest of the paper is organized as follows. Section II reviews work related to our proposed network structure. Section III describes the working principle of our method. In Section IV, we provide quantitative and qualitative results using our model. Section V provides ablation studies demonstrating the robustness of our network, and Section VI discusses the limitations of our strategy along with similarities and dissimilarities with other methods. Section VII provides concluding remarks.
\section{Related Work}
\textbf{Generative Models.} Generative models have been proven to reconstruct finer texture details and are able to generate more photo-realistic images than CNN-based methods. While shallow CNN-based SR methods provide detailed low-frequency information, GAN-based generative methods can discover high-frequency information. Super-Resolution GAN (SRGAN) \cite{ledig2017photo} makes use of a perceptual loss as well as a residual network to generate high-resolution images. Wang \textit{et al.} \cite{wang2018esrgan} proposed residual-in-residual blocks without batch normalization, producing HR images through adversarial training. Majdabadi \textit{et al.} \cite{majdabadi2020capsule} attached a capsule network as a complex network with a GAN for face super-resolution. In \cite{qiao2019image}, a conditional GAN was introduced using the ground truth as a conditional variable for the discriminator. Similar to conditional GANs, conditional autoregressive generative models utilize maximum likelihood estimation depending on conditions. Based on these conditions, the generated HR images are reconstructed from previously generated pixels \cite{van2016pixel,van2016conditional}. However, these generative models suffer from mode collapse and convergence failure problems \cite{goodfellow2016nips}. Moreover, these methods are computationally expensive, and integration with self-supervised training is quite difficult, as GAN-based methods require more image data for training than learning-based methods \cite{takano2019srgan,wang2018esrgan}.
\par
\textbf{Non-Local Networks.} Non-local networks usually comprise an attention module with non-local blocks. Wang \textit{et al.} \cite{wang2019deformable} proposed a deformable non-local attention module for video super-resolution. In \cite{liu2018non}, a non-local recurrent model was introduced for the SISR task, which can learn deep feature correlations among neighbourhood locations of patches. Zhang \textit{et al.} \cite{zhang2018residual} featured a residual network with non-local attention units for image super-resolution. Another work presents a cross-scale non-local attention module for learning intrinsic feature correlations of images \cite{mei2020image}.
\par
\iffalse
\textbf{Self-Supervised Training}
\subsubsection{Efficient Training}
Depth-wise separable convolution operation was introduced in \cite{sifre2014rigid} for image classification, speeding up the training process. Depth-wise separable convolution is a combination of depth-wise convolution and point-wise convolution operation. The depth-wise convolution layer performs the convolution operation in each input channel and the point-wise convolution layer is basically a standard convolution operation followed by $1\times1$ kernel. However, this is not sufficient enough to learn semantic and channel information from the feature space, resulting in drop in accuracy. We take motivation from this work to develop the non-local convolutional unit attaching an additional convolutional layer to improve performance.
\fi
\begin{figure}[t!]
\centering
\includegraphics[width = 0.48\textwidth]{non_local.pdf}
\caption{Overview of the non-local block used in our \texttt{NLVAE} network. The non-local block is composed of $3\times3$ kernels and $1\times1$ kernels. The initial convolution kernel is concatenated with the last feature transform to learn relative positional features. $S\times{S}$ defines the spatial size of the feature and the channel information is denoted as $C$.}
\label{fig:non_local}
\end{figure}
\textbf{Zero-Shot Super-Resolution Methods.} Shocher \textit{et al.} \cite{shocher2018zero} introduced the term ``ZSSR,'' presenting a shallow CNN model to learn the probability distribution of the LR and HR images. The major disadvantage of this network is that it extracts only local features with a simple, shallow CNN architecture, which results in poor performance. Another internal method was proposed in \cite{ulyanov2018deep}, introducing Deep Image Prior (DIP) to build a bridge between a CNN and convolutional sparse coding. The solution treats the output of a neural network fed with a random input signal as the reconstruction. This was the first approach to create a bridge between a code-based method and a learning-based method for ZSSR. Untrained DIP focuses on smaller receptive fields for intuitive neural representation, but loses context because feature extraction is limited to smaller regions. Fig.~\ref{fig:introduction_picture} depicts that the DIP method shows very weak structural information compared to \texttt{NLVAE}. Due to its weak feature extraction process, this method suffers from low accuracy in terms of performance metrics \cite{bengio2013representation}.
\begin{figure*}[t!]
\centering
\includegraphics[width = 0.95\linewidth]{architecture.pdf}
\caption{Network structure of the proposed non-local variational autoencoder (\texttt{NLVAE}) model. The probabilistic encoder and decoder are composed of non-local units and various convolution and upsampling layers. The reconstruction quality is controlled by the operator $\beta$. Global average pooling is used to calculate the mean and variance, leveraging global structural details during the reconstruction process.}
\label{fig:nlvae}
\end{figure*}
\section{Network Structure}
In this section, we demonstrate the structure of our proposed non-local block in the neural encoder and decoder. We also describe the posterior distribution and the loss function, and we provide an analysis of how the Lagrange multiplier controls the reconstruction quality of the generated image.
\subsection{Non-Local Encoder-decoder}
As shown in Fig.~\ref{fig:nlvae}, our proposed \texttt{NLVAE} model consists of an encoder and a decoder composed of non-local convolution blocks. Fig.~\ref{fig:non_local} depicts the overview of the non-local block. The non-local block utilized in \texttt{NLVAE} can exploit the spatial correlation between neighbouring locations. To design a computationally effective spectral correlation module, we have omitted the residual structure. Each non-local block is composed of convolution blocks, and each convolution block comprises a 2D convolution, a point-wise convolution, and batch normalization \cite{ioffe2015batch} followed by the Leaky-ReLU activation function. The encoder encodes the input image $x$ into the latent representation $z$ ($z= \psi (x)$) and the decoder reconstructs the representation back to its approximate original data. We assume that the low-resolution image is an input vector denoted by $x$ and $z$ denotes the latent representation. The latent variables are governed by a Gaussian distribution with a diagonal covariance matrix. The latent space dimension is denoted by $J$. The output of the non-local convolutional encoder comprises a mean $\mu$ and a log of variances $\log(\sigma^{2})$. Through the reparameterization trick, a noise vector $\epsilon$ is used to sample from the latent space \cite{kingma2013auto}. The goal of the \texttt{NLVAE} model is to produce a high-resolution reconstructed image from the low-resolution image, exploiting the relationship between the input vector and the prior distribution $p(z)$.
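As a concrete illustration, the following Keras sketch shows one possible reading of the non-local block in Fig.~\ref{fig:non_local}: an initial $3\times3$ convolution, a point-wise ($1\times1$) convolution with batch normalization and Leaky-ReLU, and a concatenation with the initial feature map. The channel width and the single $3\times3$/$1\times1$ pairing are assumptions for illustration, not the exact reference implementation.
\begin{verbatim}
# Sketch of one non-local block (TensorFlow/Keras).
# Channel width and layer counts are illustrative.
from tensorflow.keras import layers

def non_local_block(x, channels=64):
    init = layers.Conv2D(channels, 3, padding="same")(x)
    y = layers.Conv2D(channels, 1)(init)   # point-wise convolution
    y = layers.BatchNormalization()(y)
    y = layers.LeakyReLU()(y)
    # Concatenate with the initial feature map to retain
    # relative positional information.
    return layers.Concatenate()([init, y])
\end{verbatim}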
\subsection{Posterior Distribution}
\vspace{-0.07cm}
We denote the low-resolution input data distribution as $x \sim p_{d}(x)$ and the high-resolution reconstructed data distribution as $\bar{x} \sim p(\bar{x})$. The encoder and decoder distributions are represented as $q_{\phi}(z|x)$ and $p_{\theta}(x|z)$ respectively, where $\phi$ and $\theta$ are the parameters of the encoder and decoder networks. The aggregated posterior $q(z)$ tries to approximate the prior $p(z)$. The centered isotropic multivariate Gaussian $N(0,I)$ was chosen as the prior $p(z)$ over the latent variables \cite{durrieu2012lower}. The inference model is designed to output two individual variables, $\mu$ and $\sigma$, and thus the posterior is $q_{\phi}(z|x) = N(z;\mu,\sigma^{2})$. With this setting, to obtain the desired prior distribution, the non-local convolutional encoder and decoder are trained to optimize the reconstruction error (that is, the mean squared error). The loss function approximates the discrepancy between each patch of the LR and HR images over a fake minibatch $x_{M}$, where $M$ is the size of the fake minibatch. The total number of data points is denoted as $N$.
\begin{algorithm}[t!]
\caption{Training \texttt{NLVAE} model}
\label{alg:nlvae}
\SetAlgoLined
\textbf{Input : }Initialize network parameters
\While{not converged}{
$\mathbf{X} \gets$ pseudo labels of the single $L_{R}$ image
$Z_{p} \leftarrow$ sample from the prior $\mathbf{N}(0,I)$
$Z \leftarrow \text{Non-Local Encoder}\big(X\big)$
$X_{r} \leftarrow \text{Non-Local Decoder}\big(Z\big)$
$X_{p} \leftarrow \text{Non-Local Decoder}\big(Z_{p}\big)$
$L_{KL} \leftarrow L_{KL}\big(X_{r}, X_{p}\big)$
$\phi_{E} \leftarrow \phi_{E} - \eta \nabla_{\phi_{E}}\big(L_{R}+\beta L_{KL}\big)$
(Adam update for $\phi_{E}$)
}
\end{algorithm}
The input of the decoder is sampled from $\mathbf{N}(z;\mu,\sigma^{2})$ using the reparameterization trick $z = \mu + \sigma \odot \epsilon$ where $\epsilon \sim \mathbf{N} (0,I)$. The aggregated posterior distribution $q(\mathbf{z})$ is defined as:
\begin{equation}
\label{eqn:posterior}
q(\mathbf {z}) = \int _{\mathbf {x}} q_{\phi }(\mathbf {z}|\mathbf {x})p_{d}(\mathbf {x}) d\mathbf {x}.
\end{equation}
\subsection{Loss Function}
It is useful to prepare the low-resolution input image by clustering in the latent space, eradicating the noise. The $L_{R}$ loss is summed over the data points and averaged over the fake minibatch. Thus, it provides more weight to the reconstruction error, helping reduce potential model collapse:
\begin{equation}
\label{eqn:reconstruction_loss}
{L}_{R}({\phi }, {\theta };{x}_{M}, {\epsilon }) = \frac {1}{M} \sum _{i=1}^{M} \sum _{j=1}^{N}({x}_{i,j} - \hat {{x}}_{i,j})^{2}.
\end{equation}
To obtain the desired prior distribution, $KL$ divergence is utilized on the encoded variable to measure the probability distance of LR and HR images. $KL$ divergence is calculated over the fake minibatch as
\begin{equation}
\label{eqn:kl_loss}
{L}_{KL}( {\phi }; {x}_{M}) \\= -\frac {1}{2M} \sum _{i=1}^{M}\sum _{j=1}^{J} \left ({1 + \log(\sigma _{i,j} ^{2}) - \mu _{i,j}^{2} - \sigma _{i,j} ^{2} }\right) .
\end{equation}
Therefore, the total loss is calculated as
\begin{equation}
\label{eqn:total_loss}
{L}_{BETA}({\phi },{\theta };{x}_{M},{\epsilon }) = {L}_{R} + \beta{L}_{KL} + \alpha,
\end{equation}
where $\beta$ denotes the Lagrangian multiplier and $\alpha$ denotes the marginal value. As the negation of $L_{BETA}$ is a lower bound of the Lagrangian, minimization of the loss is equivalent to maximization of the Lagrangian, which corresponds to our initial optimization problem. The $\alpha$ term controls the quality of image reconstruction as an aid to the objective function. For $\beta=1$, the working principle is the same as in the traditional VAE. When $\beta > 1$, a stronger constraint is applied to the latent bottleneck, limiting the representation capacity of $z$ \cite{chen2018isolating}. Maintaining disentanglement yields the most effective representation for conditionally independent generative factors \cite{mathieu2019disentangling}.
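A minimal sketch of the total loss follows; it assumes image tensors in NHWC layout and the latent dimension on the last axis of \texttt{mu} and \texttt{log\_var}, and is an illustration of the equations above rather than the exact training code.
\begin{verbatim}
# Sketch of the loss: reconstruction + beta * KL + alpha.
# NHWC image tensors and (batch, J) latent tensors assumed.
import tensorflow as tf

def beta_vae_loss(x, x_hat, mu, log_var, beta, alpha=0.0):
    rec = tf.reduce_mean(
        tf.reduce_sum(tf.square(x - x_hat), axis=[1, 2, 3]))
    kl = tf.reduce_mean(-0.5 * tf.reduce_sum(
        1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=1))
    return rec + beta * kl + alpha
\end{verbatim}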
\subsection{Lagrangian multiplier variation for different upscaling}
The addition of $\beta$ to the VAE provides more disentangled information and sharper gradients compared to the traditional VAE \cite{higgins2016beta}. A higher value of $\beta$ provides more efficiently encoded latent vectors and further encourages disentanglement. However, a too-large $\beta$ may lead to poorer reconstruction quality, as it creates a trade-off with the extent of disentanglement. The reconstruction loss ensures the network captures useful information while forming the latent distribution. An increase in the number of latent variables reduces image quality; thus, through empirical evaluation, we selected a different Lagrangian multiplier for each upscaling factor. For our experimental settings, we selected 150, 200, and 300 for the $3\times$, $4\times$, and $8\times$ upscaling factors, respectively.
\subsection{Computational Efficiency of Non-local Block}
In this subsection, the computational efficiency of the non-local block is briefly explained. The point-wise convolution is the core of the non-local block for calibrating spatial information. It also serves as the channel reduction technique in this network. In the following, $K$ denotes the kernel size, $N$ the number of input channels, $P$ the number of output channels, and $M$ the spatial size of the output feature map. The weights of the point-wise convolution can be calculated as:
\begin{align}
\label{eqn:point-wise}
W_{PC} = K \times K \times N \times P.
\end{align}
For this operation, $K = 1$. Then Equation~\ref{eqn:point-wise} becomes:
\begin{align}
\label{eqn:point_weights}
W_{PC} = N \times P.
\end{align}
And the corresponding number of operations is therefore:
\begin{align}
\label{eqn:point_operation}
O_{PC} = M \times M \times K \times K \times N \times P \nonumber \\
= M \times M \times N \times P.
\end{align}
For the standard convolution operation, the number of weights will be:
\begin{align}
\label{eqn:std_conv_weights}
W_{SC} = K \times K \times N \times P.
\end{align}
And the corresponding number of operations is:
\begin{align}
\label{eqn:std_conv_operation}
O_{SC} = M \times M \times K \times K \times N \times P.
\end{align}
Now, the reduction factors of weights and operations can be defined as:
\begin{align}
\label{eqn:reduction_weights}
F_{W} = \frac{W_{PC}}{W_{SC}}.
\end{align}
\begin{align}
\label{eqn:reduction_operation}
F_{O} = \frac{O_{PC}}{O_{SC}}.
\end{align}
From the reduction factors of weights and operation, we can observe the reduction in computational cost due to the use of point-wise convolutions.
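As a concrete check, for a $3\times3$ kernel both reduction factors equal $1/K^{2} = 1/9$; the short sketch below computes them for assumed layer dimensions.
\begin{verbatim}
# Worked check of the reduction factors F_W and F_O above.
# K = kernel size, N/P = input/output channels, M = output
# spatial size; the values are illustrative.
K, N, P, M = 3, 64, 64, 128
W_SC = K * K * N * P             # standard convolution weights
W_PC = N * P                     # point-wise convolution weights
O_SC = M * M * W_SC              # standard convolution operations
O_PC = M * M * W_PC              # point-wise convolution operations
print(W_PC / W_SC, O_PC / O_SC)  # both equal 1/K**2 = 1/9
\end{verbatim}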
\begin{table*}[ht!]
\setlength{\tabcolsep}{4pt}
\centering
\caption{Benchmark results for SISR methods. Best results are in bold. All the methods are trained on the DIV2K dataset.}
\medskip
\label{tab:performance}
\begin{tabular}{@{} cc l c cc c cc c cc c cc c cc @{}}
\toprule
\multirow{2}{*}{Scale} &
\phantom{a} &
\multirow{2}{*}{Method} &
\phantom{a} &
\multicolumn{2}{c}{Set5} &
\phantom{a} &
\multicolumn{2}{c}{Set14} &
\phantom{a} &
\multicolumn{2}{c}{BSDS100} &
\phantom{a} &
\multicolumn{2}{c}{Urban100} &
\phantom{a} &
\multicolumn{2}{c}{Manga109} \\
\cmidrule{5-6}
\cmidrule{8-9}
\cmidrule{11-12}
\cmidrule{14-15}
\cmidrule{17-18}
&& && PSNR & SSIM &&
PSNR & SSIM &&
PSNR & SSIM &&
PSNR & SSIM &&
PSNR & SSIM \\
\midrule
\multirow{10}{*}{\rotatebox[]{45}{3$\times$}} &&
Bicubic &&
30.40 & 0.8684 &&
27.55 & 0.7743 &&
27.19 & 0.7388 &&
24.45 & 0.7358 &&
26.95 & 0.8558 \\
&&
A+ \cite{timofte2014a+} &&
32.51 & 0.9080 &&
29.10 & 0.8202 &&
28.21 & 0.7829 &&
25.86 & 0.7891 &&
29.90 & 0.9101 \\
&&
SRCNN \cite{dong2015image} &&
32.75 & 0.9090 &&
29.30 & 0.8215 &&
28.28 & 0.7832 &&
25.87 & 0.7888 &&
30.56 & 0.9124 \\
&&
FSRCNN \cite{dong2016accelerating} &&
33.17 & 0.9141 &&
29.39 & 0.8240 &&
28.59 & 0.7940 &&
26.43 & 0.8075 &&
31.05 & 0.9189 \\
&&
VDSR \cite{kim2016accurate} &&
33.67 & 0.9212 &&
29.78 & 0.8318 &&
28.83 & 0.7982 &&
27.14 & 0.8280 &&
32.07 & 0.9337 \\
&&
LapSRN \cite{lai2017deep} &&
33.82 & 0.9227 &&
29.84 & 0.8322 &&
28.82 & 0.7982 &&
27.07 & 0.8270 &&
32.21 & 0.9342 \\
&&
MemNet \cite{tai2017memnet} &&
34.09 & 0.9248 &&
30.00 & 0.8350 &&
28.95 & 0.8001 &&
27.53 & 0.8270 &&
32.58 & 0.9382 \\
&&
SRGAN \cite{ledig2017photo} &&
33.73 & 0.9102 &&
29.58 & 0.8215 &&
28.62 & 0.7790 &&
26.04 & 0.8168 &&
31.56 & 0.9187 \\
&&
NLVAE (Proposed) &&
\textbf{34.10} & \textbf{0.9270} &&
\textbf{30.81} & \textbf{0.8398} &&
\textbf{29.05} & \textbf{0.7805} &&
\textbf{28.07} & \textbf{0.8402} &&
\textbf{33.19} & \textbf{0.9437} \\
\midrule
\multirow{10}{*}{\rotatebox[]{45}{4$\times$}} &&
Bicubic &&
28.43 & 0.8109 &&
26.00 & 0.7026 &&
25.95 & 0.6698 &&
23.13 & 0.6598 &&
24.89 & 0.7865 \\
&&
A+ \cite{timofte2014a+} &&
30.25 & 0.8601 &&
27.21 & 0.7503 &&
26.65 & 0.7103 &&
24.19 & 0.7198 &&
27.08 & 0.8519 \\
&&
SRCNN \cite{dong2015image} &&
30.48 & 0.8628 &&
27.50 & 0.7513 &&
26.90 & 0.7114 &&
24.52 & 0.7221 &&
27.60 & 0.8583 \\
&&
FSRCNN \cite{dong2016accelerating} &&
30.72 & 0.8658 &&
27.60 & 0.7538 &&
26.95 & 0.7138 &&
24.62 & 0.7280 &&
27.86 & 0.8602 \\
&&
VDSR \cite{kim2016accurate} &&
31.35 & 0.8838 &&
28.02 & 0.7682 &&
27.29 & 0.7165 &&
25.18 & 0.7530 &&
28.87 & 0.8862 \\
&&
LapSRN \cite{lai2017deep} &&
31.54 & 0.8860 &&
28.16 & 0.7724 &&
27.32 & 0.7161 &&
25.21 & 0.7558 &&
29.09 & 0.8890 \\
&&
MemNet \cite{tai2017memnet} &&
31.76 & 0.8893 &&
28.26 & 0.7726 &&
27.42 & 0.7280 &&
25.50 & 0.7628 &&
29.64 & 0.8938 \\
&&
SRGAN \cite{ledig2017photo} &&
29.37 & 0.8471 &&
26.01 & 0.7396 &&
25.13 & 0.6645 &&
24.35 & 0.7331 &&
28.39 & 0.8603 \\
&&
ESRGAN \cite{wang2018esrgan} &&
30.47 & 0.8512 &&
26.28 & 0.6987 &&
25.32 & 0.6519 &&
24.36 & 0.7337 &&
28.44 & 0.8609 \\
&&
NLVAE (Proposed) &&
\textbf{31.96} & \textbf{0.8903} &&
\textbf{28.67} & \textbf{0.7776} &&
\textbf{27.86} & \textbf{0.7367} &&
\textbf{25.88} & \textbf{0.7751} &&
\textbf{30.11} & \textbf{0.8945} \\
\midrule
\multirow{10}{*}{\rotatebox[]{45}{8$\times$}} &&
Bicubic &&
24.42 & 0.6580 &&
23.10 & 0.5660 &&
23.65 & 0.5483 &&
20.74 & 0.5160 &&
21.55 & 0.6509 \\
&&
A+ \cite{timofte2014a+} &&
25.21 & 0.6875 &&
23.48 & 0.5889 &&
23.97 & 0.5605 &&
21.02 & 0.5403 &&
22.11 & 0.6813 \\
&&
SRCNN \cite{dong2015image} &&
25.33 & 0.6900 &&
23.76 & 0.5910 &&
24.13 & 0.5659 &&
21.29 & 0.5438 &&
22.40 & 0.6846 \\
&&
FSRCNN \cite{dong2016accelerating} &&
20.13 & 0.5520 &&
19.75 & 0.4820 &&
24.21 & 0.5672 &&
21.32 & 0.5379 &&
22.39 & 0.6730 \\
&&
VDSR \cite{kim2016accurate} &&
25.95 & 0.7242 &&
24.26 & 0.6140 &&
24.37 & 0.5767 &&
21.65 & 0.5704 &&
23.16 & 0.7230 \\
&&
LapSRN \cite{lai2017deep} &&
26.14 & 0.7384 &&
24.35 & 0.6200 &&
24.53 & 0.5865 &&
21.81 & 0.5805 &&
23.39 & 0.7533 \\
&&
MemNet \cite{tai2017memnet} &&
26.16 & 0.7414 &&
24.38 & 0.6199 &&
24.59 & 0.5843 &&
21.88 & 0.5824 &&
23.56 & 0.7386 \\
&&
SRGAN \cite{ledig2017photo} &&
25.88 & 0.7069 &&
24.02 & 0.6015 &&
24.41 & 0.5786 &&
21.68 & 0.5614 &&
24.61 & 0.7864 \\
&&
ESRGAN \cite{wang2018esrgan} &&
26.30 & 0.7551 &&
24.07 & 0.6011 &&
24.64 & 0.5850 &&
22.57 & 0.6279 &&
24.75 & 0.7872 \\
&&
NLVAE (Proposed) &&
\textbf{27.23} & \textbf{0.7860} &&
\textbf{25.32} & \textbf{0.6469} &&
\textbf{25.31} & \textbf{0.5983} &&
\textbf{22.97} & \textbf{0.6353} &&
\textbf{25.12} & \textbf{0.8013} \\
\bottomrule
\end{tabular}
\end{table*}
\section{Experimental Evaluations}
\subsection{Datasets}
We have evaluated our proposed \texttt{NLVAE} model against seven different datasets---Set5 \cite{bevilacqua2012low}, Set14 \cite{zeyde2010single}, BSD100 \cite{martin2001database}, Manga109 \cite{matsui2017sketch}, Urban100 \cite{huang2015single}, General100 \cite{dong2016accelerating}, and T91 \cite{yang2010image}. In the qualitative analysis, we have used the General100, Set14, Set5, and T91 datasets, while the quantitative analyses are performed using the Set5, Set14, BSD100, Urban100, and Manga109 datasets. We have compared our model against a number of baseline and SOTA models, reporting PSNR and SSIM metrics.
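For reference, PSNR is computed from the mean squared error between the ground-truth HR image and the reconstruction; a minimal sketch, assuming 8-bit images with peak value 255, is given below.
\begin{verbatim}
# PSNR between a ground-truth HR image and a reconstruction,
# assuming 8-bit images (peak value 255).
import numpy as np

def psnr(hr, sr):
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
\end{verbatim}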
\subsection{Implementation Details \& Training Settings}
We make use of the TensorFlow framework with Python for all the experiments. The experiments are implemented on an Nvidia GeForce GTX Titan X GPU and an Intel Xeon CPU at 2.40 GHz. All the images are resized to $256 \times 256$. The Adam optimizer \cite{kingma2014adam} is utilized with $\beta_{1}$ = 0.9, $\beta_{2}$ = 0.999, and $\sigma$ = $10^{-8}$. Pseudo labels are created for training purposes, as there exists only a single image. The model is trained for 2000 epochs. We use the $L2$ loss function for our solution. For our settings, the hyperparameters are selected empirically. We perform the experiments for three different scaling factors---$3\times$, $4\times$, and $8\times$. The value of $\beta$ is set to 500 for all the experiments.
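The optimizer configuration stated above corresponds to the following Keras call; the learning rate is not specified here and is shown as an assumed placeholder.
\begin{verbatim}
# Adam configuration as stated above. The learning rate is a
# placeholder; in our setting it is selected empirically.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-4,   # assumed placeholder value
    beta_1=0.9, beta_2=0.999, epsilon=1e-8)
\end{verbatim}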
\par
As the proposed method utilizes a self-training strategy, it takes a single LR image and its corresponding HR image as input to the training pipeline. It then learns the relationship between them, leveraging the non-local attention blocks. Finally, the self-trained model generates a single HR image from the pre-trained weights. It is to be noted that the reported performance metrics are the mean over all generated images for each dataset.
\subsection{Results}
\begin{figure*}[ht!]
\centering
\resizebox{0.9\linewidth}{!}{%
\begin{tabular}{c c c c c}
& {\Large HR Image} & {\Large SRGAN} & {\Large ESRGAN} & {\Large NLVAE} \\
\noalign{\smallskip}
{\Large \rotatebox{90} {\hspace{1.5cm}{tt17}}} &
\includegraphics[width=0.24\linewidth, trim={1.25cm 15.5cm 23cm 1.1cm}, clip]{generative_model.pdf} &
\includegraphics[width=0.24\linewidth, trim={9cm 15.5cm 15.25cm 1.1cm}, clip]{generative_model.pdf} &
\includegraphics[width=0.24\linewidth, trim={16.25cm 15.5cm 8cm 1.1cm}, clip]{generative_model.pdf} &
\includegraphics[width=0.24\linewidth, trim={24cm 15.5cm 0.25cm 1.1cm}, clip]{generative_model.pdf}
\\
\noalign{\smallskip}
{\Large \rotatebox{90} {\hspace{1cm}{butterfly}}} &
\includegraphics[width=0.24\linewidth, trim={1.25cm 7.9cm 23cm 8.7cm}, clip]{generative_model.pdf} &
\includegraphics[width=0.24\linewidth, trim={9cm 7.9cm 15.25cm 8.7cm}, clip]{generative_model.pdf} &
\includegraphics[width=0.24\linewidth, trim={16.25cm 7.9cm 8cm 8.7cm}, clip]{generative_model.pdf} &
\includegraphics[width=0.24\linewidth, trim={24cm 7.9cm 0.25cm 8.7cm}, clip]{generative_model.pdf}
\\
\noalign{\smallskip}
{\Large \rotatebox{90} {\hspace{1.4cm}{img\_087}}} &
\includegraphics[width=0.24\linewidth, trim={1.25cm 0.1cm 23cm 16.5cm}, clip]{generative_model.pdf} &
\includegraphics[width=0.24\linewidth, trim={9cm 0.1cm 15.25cm 16.5cm}, clip]{generative_model.pdf} &
\includegraphics[width=0.24\linewidth, trim={16.25cm 0.1cm 8cm 16.5cm}, clip]{generative_model.pdf} &
\includegraphics[width=0.24\linewidth, trim={24cm 0.1cm 0.25cm 16.5cm}, clip]{generative_model.pdf}
\\
\end{tabular}
}
\caption{Visual comparison of reconstruction-based methods on 'tt17.png' from the T91 dataset, 'butterfly.png' from the Set5 dataset, and 'img\_087.png' from the Urban100 dataset.}
\label{fig:reconstruction}
\end{figure*}
\begin{figure*}[ht!]
\centering
\resizebox{0.9\linewidth}{!}{%
\begin{tabular}{c c c c c}
& {\Large HR Image} & {\Large VDSR} & {\Large LapSRN} & {\Large NLVAE} \\
\noalign{\smallskip}
{\Large \rotatebox{90}{\hspace{1cm} {im\_078}}} &
\includegraphics[width=0.24\linewidth, trim={1.25cm 7.9cm 23cm 1.25cm}, clip]{learning.pdf} &
\includegraphics[width=0.24\linewidth, trim={9cm 7.9cm 15.25cm 1.25cm}, clip]{learning.pdf} &
\includegraphics[width=0.24\linewidth, trim={16.25cm 7.9cm 8cm 1.25cm}, clip]{learning.pdf} &
\includegraphics[width=0.24\linewidth, trim={24cm 7.9cm 0.25cm 1.25cm}, clip]{learning.pdf}
\\
\noalign{\smallskip}
{\Large \rotatebox{90}{\hspace{1.4cm}{pepper}}} &
\includegraphics[width=0.24\linewidth, trim={1.25cm 0.25cm 23cm 8.75cm}, clip]{learning.pdf} &
\includegraphics[width=0.24\linewidth, trim={9cm 0.25cm 15.25cm 8.75cm}, clip]{learning.pdf} &
\includegraphics[width=0.24\linewidth, trim={16.25cm 0.25cm 8cm 8.75cm}, clip]{learning.pdf} &
\includegraphics[width=0.24\linewidth, trim={24cm 0.25cm 0.25cm 8.75cm}, clip]{learning.pdf}
\end{tabular}
}
\caption{Visual comparison of learning-based methods on 'im\_078.png' from the General100 dataset and 'pepper.png' from the Set14 dataset.}
\label{fig:learning}
\end{figure*}
\begin{figure*}[ht!]
\centering
\resizebox{0.95\linewidth}{!}{
\subcaptionbox{L1 \& L2 loss functions, and various optimizers with respect to epochs.}{
\includegraphics[width = 0.45\linewidth]{optimizer_loss.pdf}
}
\hfill
\subcaptionbox{Various feature learning units with respect to epochs.}{
\includegraphics[width = 0.45\linewidth]{feature_loss.pdf}
}
}
\caption{Ablation study: Evaluation of loss functions and feature learning blocks on the Set5 dataset}
\label{fig:ablation}
\end{figure*}
\subsubsection{Quantitative} Table~\ref{tab:performance} reports the quantitative results for $3\times$, $4\times$, and $8\times$ SR. Both learning-based methods (Bicubic, A+, Super-Resolution CNN (SRCNN), Fast SRCNN (FSRCNN), Very Deep Super-Resolution (VDSR), Laplacian Pyramid Super-Resolution Network (LapSRN), and Memory Network (MemNet)) and reconstruction-based methods (SRGAN, ESRGAN) have been compared against the proposed framework. It is worth mentioning that deeper architectures perform better than shallower networks. We also note that larger scaling factors degrade the performance of the existing external methods. Among learning-based methods, MemNet demonstrates good performance on large scaling factors because of its large architecture, but its performance drops when the scaling factor is relatively small. The reconstruction-based strategies attain higher structural similarity than the other methods. Most importantly, our proposed \texttt{NLVAE} model outperforms the other reconstruction-based methods at all scaling factors, generating high-resolution photo-realistic images. This justifies the incorporation of the non-local convolutional block, which enables the model to perform better, specifically on smaller scaled images. Moreover, the deeper architecture of the generative models enhances the performance on large scaling factors, leading to a robust zero-shot super-resolution network.
\subsubsection{Qualitative} Fig.~\ref{fig:learning} and Fig.~\ref{fig:reconstruction} depict the visualizations of learning-based methods and reconstruction-based methods, respectively. Samples from the Set14 and General100 datasets have been used to visualize the learning-based solutions. Fig.~\ref{fig:learning} confirms that our solution produces sharp edges and avoids undesirable artifacts. As can be seen, the proposed generative solution learns better representations between an LR image \& its corresponding HR image. For a fair comparison with reconstruction-based solutions, we utilized Set5, Urban100, and BSDS100 for the qualitative comparison among generative SISR models. The visual quality of the reconstructed images is superior to that of the other methods because of the global contextual feature learning process. Fig.~\ref{fig:reconstruction} demonstrates that our method can reduce blurring artifacts, presenting a powerful feature learning ability. Notably, the \texttt{NLVAE} model provides better details in regions of irregular structure. More detailed visualizations containing random HR samples from the generated sets and real datasets are provided in the supplementary material.
\section{Ablation Study}
\subsection{Loss function \& Optimizers} In Fig.~\ref{fig:ablation}(a), we explore different loss functions and optimizers to evaluate the performance of our proposed model. We observe that the combination of $L2$ + \texttt{Adam} converges more smoothly than any other combination. Among all optimizers, \texttt{Adam} converges fastest. \texttt{SGD} and \texttt{RMSProp} provide competitive results but converge more slowly, which is undesirable in a zero-shot process. Between the $L2$ and $L1$ loss functions, $L2$ provides faster training and finer reconstruction quality. All the hyperparameters are fixed for this ablation study.
\subsection{Feature Extraction Blocks} To verify the robustness of our non-local convolutional block, we explored various feature extraction units. Fig.~\ref{fig:ablation}(b) shows that non-local convolutional unit performs better than other feature learning units. We observe that the residual unit learns slightly better representations than other units but has relatively larger computational burdens. Comparing our non-local unit against other traditional convolution operations (including depth-wise separable convolution, transposed convolution, and standard convolution operations) our method shows excellent performance with the lowest MSE between LR \& HR images.
\subsection{Non-Local Blocks} In this ablation, we study the necessity of non-local blocks for image generation. In Table \ref{tab:non_local}, the PSNR values are reported against the number of non-local blocks in our proposed method. We note that increasing the number of non-local blocks produces more accurate images but also increases the computational cost. Moreover, we note unusual instability when using 5 or more non-local blocks with the \textit{Adam} optimizer.
\begin{table}[t!]
\setlength{\tabcolsep}{3pt}
\centering
\caption{Number of non-local blocks for the convolutional encoder \& decoder on the Set5 dataset}
\medskip
\label{tab:non_local}
\resizebox{0.95\linewidth}{!}{
\begin{tabular}{l|l|l|l}
Convolutional Encoder & PSNR & Convolutional Decoder & PSNR \\ \midrule
Non-Local Block - 1 & Unstable & Non-Local Block - 5 & 31.83 \\ \midrule
Non-Local Block - 2 & 27.45 & Non-Local Block - 6 & 32.29 \\ \midrule
Non-Local Block - 3 & 30.12 & Non-Local Block - 7 & 33.81 \\ \midrule
Non-Local Block - 4 & 33.27 & Non-Local Block - 8 & 33.97 \\ \midrule
Non-Local Block - 5 & 34.10 & Non-Local Block - 9 & 34.10 \\ \bottomrule
\end{tabular}
}
\end{table}
\section{Discussions}
In this section, we discuss the similarities, dissimilarities, and limitations of our method compared to other data-driven strategies. Table \ref{tab:discussion} shows that the input image is linearly upscaled before super-resolution. Similar to VDSR and DRCN, we also upscale the low-resolution image, but linearly. The reconstruction process in our method is progressive, as we combine both learning-based and reconstruction-based methods. Learning-based methods generally utilize direct reconstruction of HR images. We adopt the $L2$ loss function for faster convergence while maintaining high reconstruction quality. As mentioned above, we do not use a residual representation learning process due to its computational cost in self-supervised settings. Our setting uses small modifications of the standard self-supervised setting: we do not use batches of images per epoch; instead, we utilize fake batches of a single image in every epoch. Moreover, we performed all these experiments on datasets of different sizes to explore structural variation. Experiments were done on small datasets (Set5, Set14) as well as large datasets (Manga109, Urban100, BSD100) to justify the performance of our proposed solution.
\begin{table}[ht!]
\setlength{\tabcolsep}{6pt}
\centering
\caption{A comparison among various SISR methods defining the loss function, input types, reconstruction types and feature extraction modules.}
\medskip
\label{tab:discussion}
\resizebox{0.98\linewidth}{!}{
\begin{tabular}{c|c|c|c|c}
\toprule
Methods & \begin{tabular}[c]{@{}c@{}}Residual\\ Features\end{tabular} & \begin{tabular}[c]{@{}c@{}}Input\\ Types\end{tabular} & \begin{tabular}[c]{@{}c@{}}Reconstruction \\ Types\end{tabular} & \begin{tabular}[c]{@{}c@{}}Loss\\ Function\end{tabular} \\ \bottomrule \midrule
SRCNN & No & LR & Direct & L2 \\ \midrule
FSRCNN & No & LR & Direct & L2 \\ \midrule
VDSR & Yes & LR + Bicubic & Direct & L2 \\ \midrule
DRCN & Yes & LR + Bicubic & Direct & L2 \\ \midrule
LapSRN & Yes & LR & Progressive & L1 \\ \midrule
\textbf{NLVAE} & No & LR + Linear & Progressive & L2 \\ \bottomrule
\end{tabular}
}
\end{table}
\section{Conclusions}
We have presented \texttt{NLVAE}, an untrained generative model featuring a neural encoder-decoder framework capable of reconstructing high-resolution images. With the use of non-local convolutional modules, the model is able to capture high-quality semantic information. In addition, the beta variational autoencoder provides more disentangled information for reconstructing high-resolution images. Combining learning-based and reconstruction-based methods, the present method generates sharp and photo-realistic images. The effectiveness of the present model has been confirmed through extensive experimentation against a number of SOTA methods, both qualitatively and quantitatively, on multiple benchmark datasets. Moreover, leveraging the power of robust feature learning and generative modeling, the proposed model obviates the need for a large-scale dataset while performing SISR. It is to be noted that our proposed method relies on linear upsampling before the super-resolution task. Our future work will include further validation of the NLVAE model against more challenging data settings across various domains, as well as a more powerful automatic upsampling strategy. We envision a more extensive comprehension of our model and a more intuitive design of the objective function.
\bibliographystyle{IEEEtran}
\section{Pseudocodes of the CIs of the Two Planning Styles}
\label{app_sec:CIs_algorithms}
\begin{algorithm}[h!]
\centering
\caption{Tabular Online Monte-Carlo Planning (OMCP) \citep{tesauro1996line} with an Adaptable Model} \label{alg:alg_CI_DT}
\begin{algorithmic}[1]
\State \text{Initialize $\pi^i\in\mathbb{\Pi}$ as a random policy}
\State \text{Initialize $\bar{m}(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State $n_r\gets \text{number of episodes to perform rollouts}$
\While{\text{$\bar{m}$ has not converged}}
\State $S\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(\text{MC\_rollout}(S, \bar{m}, n_r, \pi^i))$}
\State \text{$R, S', \text{done} \gets \text{environment($A$)}$}
\State \text{Update} $\bar{m}(S,A)$ \text{with $R$, $S'$, $\text{done}$}
\State $S\gets S'$
\EndWhile
\EndWhile
\State \textbf{Return} $\bar{m}(s,a)$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h!]
\centering
\caption{Tabular Exhaustive Search \citep{campbell2002deep} with an Adaptable Model} \label{alg:alg_CI_DT_exs}
\begin{algorithmic}[1]
\State \text{Initialize $\bar{m}(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State $h\gets \text{search heuristic}$
\While{\text{$\bar{m}$ has not converged}}
\State $S\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(\text{exhaustive\_tree\_search}(S, \bar{m},h))$}
\State \text{$R, S', \text{done} \gets \text{environment($A$)}$}
\State \text{Update} $\bar{m}(S,A)$ \text{with $R$, $S'$, $\text{done}$}
\State $S\gets S'$
\EndWhile
\EndWhile
\State \textbf{Return} $\bar{m}(s,a)$
\end{algorithmic}
\end{algorithm}
\begin{figure}[H]
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[height=3.25cm]{figures/rollout.pdf}
\caption{\small Pure Rollouts} \label{app_fig:pure_rollout}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[height=3.25cm]{figures/exsearch.pdf}
\caption{\small Pure Search} \label{app_fig:exhaustive_search}
\end{subfigure}
\par\bigskip \vspace{-0.25em}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[height=3.25cm]{figures/rollout_and_search.pdf}
\caption{\small Search + Rollouts} \label{app_fig:search_and_rollout}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[height=3.25cm]{figures/bootstrap_and_search.pdf}
\caption{\small Search + Bootstrapping on the value estimates} \label{app_fig:search_and_bootstrap}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small Various planning styles within DT planning in which planning is performed (i) by purely performing rollouts, (ii) by purely performing search, (iii) by performing rollouts after performing some amount of search, and (iv) by bootstrapping on the value estimates after performing some amount of search. The subscripts and the superscripts on the states indicate the time steps and state identifiers, respectively. The black triangles indicate the terminal states.}
\vspace{-0.1cm}
\label{app_fig:dt_planning_figures}
\end{figure}
\begin{algorithm}[h!]
\centering
\caption{General Tabular Dyna-Q \citep{sutton1990integrated, sutton1991dyna}}\label{alg:alg_CI_B}
\begin{algorithmic}[1]
\State \text{Initialize $Q(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State \text{Initialize $\bar{\bar{m}}(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State $\mathcal{SA}_{\text{prev}}\gets \{\}$
\State $n_p\gets \text{number of time steps to perform planning}$
\While{\text{$Q$ and $\bar{\bar{m}}$ has not converged}}
\State $S\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(Q(S,\cdot))$}
\State \text{$R, S', \text{done} \gets \text{environment($A$)}$}
\State $\mathcal{SA}_{\text{prev}}\gets \mathcal{SA}_{\text{prev}} + \{(S, A)\}$
\State \text{Update} $Q(S,A)$ \text{with $R$, $S'$, $\text{done}$}
\State \text{Update} $\bar{\bar{m}}(S,A)$ \text{with $R$, $S'$, $\text{done}$}
\State $i\gets 0$
\While{$i<n_p$}
\State $S_{\bar{\bar{m}}}, A_{\bar{\bar{m}}} \gets \text{sample from } \mathcal{SA}_{\text{prev}}$
\State $R_{\bar{\bar{m}}},S_{\bar{\bar{m}}}', \text{done}_{\bar{\bar{m}}}\gets \bar{\bar{m}}(S_{\bar{\bar{m}}},A_{\bar{\bar{m}}})$
\State \text{Update} $Q(S_{\bar{\bar{m}}},A_{\bar{\bar{m}}})$ \text{with $R_{\bar{\bar{m}}}$, $S_{\bar{\bar{m}}}'$, $\text{done}_{\bar{\bar{m}}}$}
\State $i\gets i+1$
\EndWhile
\State $S\gets S'$
\EndWhile
\EndWhile
\State \textbf{Return} $Q(s,a)$
\end{algorithmic}
\end{algorithm}
\vspace{-0.1cm}
\begin{algorithm}[h!]
\centering
\caption{Tabular Dyna-Q of Interest}\label{alg:alg_CI_Bi}
\begin{algorithmic}[1]
\State \text{Initialize $Q(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State \text{Initialize $\bar{\bar{m}}(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\While{\text{$Q$ and $\bar{\bar{m}}$ has not converged}}
\State $S\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(Q(S,\cdot))$}
\State \text{$R, S', \text{done} \gets \text{environment($A$)}$}
\State \text{Update} $\bar{\bar{m}}(S,A)$ \text{with $R$, $S'$, $\text{done}$}
\State $S\gets S'$
\EndWhile
\While{\text{$Q$ has not converged}}
\State $S_{\bar{\bar{m}}}, A_{\bar{\bar{m}}}\gets \text{sample from } \mathcal{S}\times\mathcal{A}$
\State $R_{\bar{\bar{m}}},S_{\bar{\bar{m}}}', \text{done}_{\bar{\bar{m}}} \gets \bar{\bar{m}}(S_{\bar{\bar{m}}},A_{\bar{\bar{m}}})$
\State \text{Update} $Q(S_{\bar{\bar{m}}},A_{\bar{\bar{m}}})$ \text{with $R_{\bar{\bar{m}}}$, $S_{\bar{\bar{m}}}'$, $\text{done}_{\bar{\bar{m}}}$}
\EndWhile
\EndWhile
\State \textbf{Return} $Q(s,a)$
\end{algorithmic}
\end{algorithm}
\section{Discussion on the Choice of the CIs of the Two Planning Styles}
\label{app_sec:CIs_discussion}
As indicated in the main paper, for DT planning, we study the OMCP algorithm of \citet{tesauro1996line}, and, for B planning, we study the Dyna-Q algorithm of \citet{sutton1990integrated, sutton1991dyna}. We choose these algorithms as they are easy to analyze. In this study, as we are interested in scenarios where the model has to be learned from pure interaction, we consider a version of the OCMP algorithm in which a parametric model is learned from experience (see Alg.\ \ref{alg:alg_CI_DT} for the pseudocode). Note that this is the only difference compared to the original version of the OMCP algorithm. And, in order to make a fair comparison with this version of the OMCP algorithm, we consider a simplified version of the Dyna-Q algorithm (see Alg.\ \ref{alg:alg_CI_B} \& \ref{alg:alg_CI_Bi} for the pseudocodes of the original and simplified versions, respectively). Compared to the original version of Dyna-Q, in this version, there are several minor differences:
\begin{itemize}
\item While planning, the agent can now sample states and actions that it has not observed or taken before. Note that this is also the case for the version of the OMCP algorithm considered in this study.
\item Now, instead of using samples from both the environment and model, the agent updates its VE with samples only from the model. Note that the version of the OMCP algorithm considered in this study also makes use of only the model while performing planning.
\item Now, instead of planning for a fixed number of time steps, the agent performs planning until its VE converges. Note that, in order to allow for sample efficiency, usually $n_p$ is also set to high values in the original version of the Dyna-Q algorithm. Also note that, in order to properly evaluate the base policy, usually $n_r$ is also set to high values in the version of the OMCP algorithm considered in this study. Thus, in practice, both the original version of the Dyna-Q algorithm and the version of the OMCP algorithm considered in this study also devote a significant budget to perform planning.
\item Lastly, instead of performing planning after every time step, the agent now only performs planning after every episode. Rather than to allow for a fair comparison, this is to allow the Dyna-Q version of interest to be able to operate in fast-response-requiring domains (planning until convergence after every time step would obviously slow down the response time of the algorithm and prevent it from operating in fast-response-requiring domains). A minimal sketch of this per-episode planning loop is given after this list.
\end{itemize}
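For concreteness, the following Python sketch illustrates the per-episode planning loop of the Dyna-Q version of interest (Alg.\ \ref{alg:alg_CI_Bi}). The names \texttt{model}, \texttt{q}, \texttt{states}, and \texttt{actions}, as well as the step size, discount factor, and convergence tolerance, are illustrative assumptions rather than an exact implementation.
\begin{verbatim}
# Sketch of planning until convergence with model samples only.
# q: dict mapping state -> dict mapping action -> value,
#    assumed to be initialized for all states and actions.
# model(s, a) -> (r, s_next, done) is the learned model.
import random

def plan_until_convergence(q, model, states, actions,
                           alpha=0.1, gamma=0.99, tol=1e-6):
    while True:
        max_delta = 0.0
        for _ in range(len(states) * len(actions)):
            s, a = random.choice(states), random.choice(actions)
            r, s_next, done = model(s, a)
            target = r if done else r + gamma * max(q[s_next].values())
            delta = alpha * (target - q[s][a])
            q[s][a] += delta
            max_delta = max(max_delta, abs(delta))
        if max_delta < tol:
            return q
\end{verbatim}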
\section{Proofs}
\label{app_sec:proofs}
\begin{appproposition}
\label{app_prop:prop1}
Let $m\in\mathcal{M}$ be a PCM of $m^*$ w.r.t.\ $\Pi=\{\pi_m^r, \pi_m^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_m^r} \geq J_{m^*}^{\pi_m^{ce}}$.
\end{appproposition}
\begin{proof}
This result directly follows from Defn.\ \ref{def:pcm} \& \ref{def:base_rollout_certeq_pol}. Recall that, according to Defn.\ \ref{def:base_rollout_certeq_pol}, given a $\pi^b\in\mathbb{\Pi}$, $\pi_m^r$ and $\pi_m^{ce}$ are the policies that are obtained after performing one-step PI and full VI on top of a $\pi^b$ in model $m$, respectively. Thus, we have $J_{m}^{\pi_m^r} \leq J_{m}^{\pi_m^{ce}}$ \citep{bertsekas1996neuro}, which, by Defn.\ \ref{def:pcm}, implies $J_{m^*}^{\pi_m^r} \geq J_{m^*}^{\pi_m^{ce}}$.
\end{proof}
\begin{appproposition}
\label{app_prop:prop2}
Let $m\in\mathcal{M}$ be a PRM of $m^*$ w.r.t.\ $\Pi=\{\pi_m^r, \pi_m^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_m^{ce}} \geq J_{m^*}^{\pi_m^{r}}$.
\end{appproposition}
\begin{proof}
This result directly follows from Defn.\ \ref{def:prm} \& \ref{def:base_rollout_certeq_pol}. Recall that, according to Defn.\ \ref{def:base_rollout_certeq_pol}, given a $\pi^b\in\mathbb{\Pi}$, $\pi_m^r$ and $\pi_m^{ce}$ are the policies that are obtained after performing one-step PI and full VI on top of a $\pi^b$ in model $m$, respectively. Thus, we have $J_{m}^{\pi_m^r} \leq J_{m}^{\pi_m^{ce}}$ \citep{bertsekas1996neuro}, which, by Defn.\ \ref{def:prm}, implies $J_{m^*}^{\pi_m^{ce}} \geq J_{m^*}^{\pi_m^r}$.
\end{proof}
\begin{appproposition}
\label{app_prop:prop3}
Let $\bar{m}\in\mathcal{M}$ be a PCM or a PRM of $m^*$ w.r.t.\ $\bar{\Pi}=\{\pi_{\bar{m}}^r, \pi_{\bar{m}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$, and let $\bar{\bar{m}} \in\mathcal{M}$ be a PNM of $m^*$ w.r.t.\ $\bar{\bar{\Pi}}=\{\pi_{\bar{\bar{m}}}^r, \pi_{\bar{\bar{m}}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_{\bar{m}}^r} \geq J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}}$.
\end{appproposition}
\begin{proof}
This result directly follows from Defn.\ \ref{def:pnm} \& \ref{def:base_rollout_certeq_pol}. Recall that, according to Defn.\ \ref{def:base_rollout_certeq_pol}, given a $\pi^b\in\mathbb{\Pi}$, $\pi_{\bar{\bar{m}}}^{ce}$ is the policy that is obtained after performing full VI on top of $\pi^b$ in model $\bar{\bar{m}}$. Thus, $\pi_{\bar{\bar{m}}}^{ce}$ is one of the optimal policies of model $\bar{\bar{m}}$ \citep{bertsekas1996neuro}, which, by Defn.\ \ref{def:pnm}, implies $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} = \min_{\pi\in\mathbb{\Pi}} J_{m^*}^{\pi}$ and thus $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} \leq J_{m^*}^{\pi}\ \forall \pi\in\mathbb{\Pi}$. This in turn implies $J_{m^*}^{\pi_{\bar{m}}^r} \geq J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}}$.
\end{proof}
\begin{appproposition}
\label{app_prop:prop4}
Let $\bar{m}\in\mathcal{M}$ be a PCM or a PRM of $m^*$ w.r.t.\ $\bar{\Pi}=\{\pi_{\bar{m}}^r, \pi_{\bar{m}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$, and let $\bar{\bar{m}}\in\mathcal{M}$ be a PXM of $m^*$ w.r.t.\ $\bar{\bar{\Pi}}=\{\pi_{\bar{\bar{m}}}^r, \pi_{\bar{\bar{m}}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} \geq J_{m^*}^{\pi_{\bar{m}}^{r}}$.
\end{appproposition}
\begin{proof}
This result directly follows from Defn.\ \ref{def:pxm} \& \ref{def:base_rollout_certeq_pol}. Recall that, according to Defn.\ \ref{def:base_rollout_certeq_pol}, given a $\pi^b\in\mathbb{\Pi}$, $\pi_{\bar{\bar{m}}}^{ce}$ is the policy that is obtained after performing full VI on top of $\pi^b$ in model $\bar{\bar{m}}$. Thus, $\pi_{\bar{\bar{m}}}^{ce}$ is one of the optimal policies of model $\bar{\bar{m}}$ \citep{bertsekas1996neuro}, which, by Defn.\ \ref{def:pxm}, implies $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} = \max_{\pi\in\mathbb{\Pi}} J_{m^*}^{\pi}$ and thus $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} \geq J_{m^*}^{\pi}\ \forall \pi\in\mathbb{\Pi}$. This in turn implies $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} \geq J_{m^*}^{\pi_{\bar{m}}^{r}}$, as $\pi_{\bar{m}}^{r} \in \mathbb{\Pi}$.
\end{proof}
\section{Pseudocodes of the MIs of the Two Planning Styles}
\label{app_sec:MIs_algorithms}
\vspace{-0.5cm}
\begin{algorithm}[H]
\centering
\caption{The DT Planning Algorithm in \citet{zhao2021consciousness}}\label{alg:alg_MI_DT}
\begin{algorithmic}[1]
\State \text{Initialize the parameters $\theta$, $\eta$ \& $\omega$ of $\phi_{\theta}:\mathcal{O}_E\to \mathcal{S}_A$, $Q_{\eta}:\mathcal{S}_A\times\mathcal{A}_E\to \amsmathbb{R}$ \& $\bar{m}_{p \omega} = (p_{\omega}, r_{\omega}, d_{\omega})$}
\State \text{Initialize the replay buffer $\bar{m}_{np}\gets \{ \}$}
\State $N_{ple}\gets \text{number of episodes to perform planning and learning}$
\State $N_{rbt}\gets \text{number of samples that the replay buffer must hold to perform planning and learning}$
\State $n_s\gets \text{number of time steps to perform search}$
\State $n_{bs}\gets \text{number of samples to sample from } \bar{m}_{np}$
\State $h\gets \text{search heuristic}$
\State $S\gets \text{replay buffer sampling strategy}$
\State $i\gets 0$
\While{$i<N_{ple}$}
\State $O\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(\text{tree\_search\_with\_bootstrapping}(\phi_{\theta}(O), \bar{m}_{p \omega}, Q_{\eta}, n_s, h))$}
\State \text{$R, O', \text{done} \gets \text{environment($A$)}$}
\State \text{$\bar{m}_{np}\gets \bar{m}_{np} + \{(O,A,R,O', \text{done})\} $}
\If{$|\bar{m}_{np}| \geq N_{rbt}$}
\State $\mathcal{B}_{np}\gets \text{sample\_batch}(\bar{m}_{np}, n_{bs}, S)$
\State Update $\phi_{\theta}$, $Q_{\eta}$ \& $\bar{m}_{p \omega}$ with $\mathcal{B}_{np}$
\EndIf
\State $O\gets O'$
\EndWhile
\State $i\gets i+1$
\EndWhile
\State \textbf{Return} $\phi_{\theta}$, $Q_{\eta}$ \& $\bar{m}_{p \omega}$
\end{algorithmic}
\end{algorithm}
\vspace{-0.5cm}
\begin{algorithm}[h!]
\centering
\caption{The B Planning Algorithm in \citet{zhao2021consciousness}}\label{alg:alg_MI_B}
\begin{algorithmic}[1]
\State \text{Initialize the parameters $\theta$, $\eta$ \& $\omega$ of $\phi_{\theta}:\mathcal{O}_E\to \mathcal{S}_A$, $Q_{\eta}:\mathcal{S}_A\times\mathcal{A}_E\to \amsmathbb{R}$ \& $\bar{\bar{m}}_{p \omega} = (p_{\omega}, r_{\omega}, d_{\omega})$}
\State \text{Initialize the replay buffer $\bar{\bar{m}}_{np}\gets \{ \}$ and the imagined replay buffer $\bar{\bar{m}}_{inp}\gets \{ \}$}
\State $N_{ple}\gets \text{number of episodes to perform planning and learning}$
\State $N_{rbt}\gets \text{number of samples that the replay buffer must hold to perform planning and learning}$
\State $n_{ibs}\gets \text{number of samples to sample from } \bar{\bar{m}}_{inp}$
\State $n_{bs}\gets \text{number of samples to sample from } \bar{\bar{m}}_{np}$
\State $S\gets \text{replay buffer sampling strategy}$
\State $i\gets 0$
\While{$i<N_{ple}$}
\State $O\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(Q_{\eta}(\phi_{\theta} (O),\cdot))$}
\State \text{$R, O', \text{done} \gets \text{environment($A$)}$}
\State \text{$\bar{\bar{m}}_{np}\gets \bar{\bar{m}}_{np} + \{(O,A,R,O', \text{done})\} $}
\State \text{$\bar{\bar{m}}_{inp}\gets \bar{\bar{m}}_{inp} + \{(\phi_{\theta}(O),A)\} $}
\If{$|\bar{\bar{m}}_{np}| \geq N_{rbt}$}
\State $\mathcal{B}_{inp}\gets \text{sample\_batch}(\bar{\bar{m}}_{inp}, n_{ibs}, S)$
\State $\mathcal{B}_p\gets \mathcal{B}_{inp} + \bar{\bar{m}}_{p \omega}(\mathcal{B}_{inp})$
\State $\mathcal{B}_{np}\gets \text{sample\_batch}(\bar{\bar{m}}_{np}, n_{bs}, S)$
\State Update $\phi_{\theta}$ \& $Q_{\eta}$ with $\mathcal{B}_{np} + \mathcal{B}_p$
\State \text{Update} $\phi_{\theta}$ \& $\bar{\bar{m}}_{p \omega}$ \text{with $\mathcal{B}_{np}$}
\EndIf
\State $O\gets O'$
\EndWhile
\State $i\gets i+1$
\EndWhile
\State \textbf{Return} $\phi_{\theta}$ \& $Q_{\eta}$
\end{algorithmic}
\end{algorithm}
\section{Discussion on the Choice of the MIs of the Two Planning Styles}
\label{app_sec:MIs_discussion}
As indicated in the main paper, we study the DT and B planning algorithms in \citet{zhao2021consciousness}. More specifically, for DT planning, we study the ``UP'' algorithm, and, for B planning, we study the ``Dyna'' algorithm in \citep{zhao2021consciousness}. We choose these algorithms as they are reflective of many of the properties of their state-of-the-art (SOTA) counterparts (MuZero \citep{schrittwieser2020mastering} and SimPLe \citep{kaiser2020model} / DreamerV2 \citep{hafner2021mastering}, respectively) and their code is publicly available\footnote{\url{https://github.com/mila-iqia/Conscious-Planning}}. The pseudocodes of these algorithms are presented in Alg.\ \ref{alg:alg_MI_DT} \& \ref{alg:alg_MI_B}, respectively. Note that, similar to their SOTA counterparts, these two algorithms do not employ the ``bottleneck mechanism'' introduced in \citep{zhao2021consciousness}. Some of the important similarities and differences between these algorithms and their SOTA counterparts are as follows:
\begin{enumerate}
\item Similarities and differences between the DT planning algorithm in \citet{zhao2021consciousness} and MuZero \citep{schrittwieser2020mastering}
\begin{itemize}
\item Similar to MuZero, the DT planning algorithm in \citep{zhao2021consciousness} also performs planning with both a parametric and non-parametric model.
\item Similar to MuZero, the DT planning algorithm in \citep{zhao2021consciousness} also represents its parametric model using NNs.
\item Similar to MuZero, the DT planning algorithm in \citep{zhao2021consciousness} also learns a parametric model through pure interaction with the environment. However, rather than unrolling the model for several time steps and training it with the sum of the policy, value and reward losses as in MuZero, it unrolls the model for only a single time step and trains it with the sum of the value, dynamics, reward and termination losses (see the total loss term in Sec.\ 3 of \citep{zhao2021consciousness}).
\item Lastly, similar to MuZero, the DT planning algorithm in \citep{zhao2021consciousness} selects actions by directly bootstrapping on the value estimates of a continually improving policy (without performing any rollouts), which is obtained by planning with a non-parametric model, after performing some amount of search with a parametric model. However, rather than performing the search using Monte-Carlo Tree Search (MCTS, \citep{kocsis2006bandit}) as in MuZero, it uses best-first search (during training) and random search (during evaluation) (see App.\ H of \citep{zhao2021consciousness} for the details of the search procedures).
\end{itemize}
\item Similarities and differences between the B planning algorithm in \citet{zhao2021consciousness} and SimPLe \citep{kaiser2020model} / DreamerV2 \citep{hafner2021mastering}
\begin{itemize}
\item Similar to SimPLe / DreamerV2, the B planning algorithm in \citep{zhao2021consciousness} also performs planning with a parametric model. Additionally, it also performs planning with a non-parametric model.
\item Similar to SimPLe / DreamerV2, the B planning algorithm in \citep{zhao2021consciousness} also represents its parametric model using NNs and it also updates its VE using the simulated data generated with this model.
\item Similar to SimPLe / DreamerV2, the B planning algorithm in \citep{zhao2021consciousness} also learns a parametric model through pure interaction with the environment. However, rather than performing planning with this model after allowing for an initial kickstarting period as in SimPLe / DreamerV2 (referred to as a ``world model'' learning period), for a fair comparison with the DT planning algorithm, it starts to perform planning right at the beginning of the model learning process.
\item Lastly, similar to SimPLe / DreamerV2, the B planning algorithm in \citep{zhao2021consciousness} selects actions by simply querying its VE.
\end{itemize}
\end{enumerate}
Even though there are some differences between the planning algorithms in \citep{zhao2021consciousness} and their SOTA counterparts, except for the kickstarting period in SOTA B planning algorithms, these are just minor implementation details that would not have any impact on the conclusions of this study. Although the kickstarting period can mitigate the harmful simulated data problem to some degree (or even prevent it if the period is sufficiently long), allowing for it would definitely prevent a fair comparison with the DT planning algorithm in \citep{zhao2021consciousness}. This is why we did not allow for it in our experiments.
\section{Discussion on the Combined View of Models}
\label{app_sec:combined_view_discussion}
In order to be able to view the DT and B planning algorithms that perform planning with both a parametric and non-parametric model through our proposed DP framework, we view the two separate models of these algorithms as a single combined model. This becomes obvious for B planning algorithms if one notes that they perform planning with a batch of data that is jointly generated by both a parametric and non-parametric model (see e.g., line 20 in Alg.\ \ref{alg:alg_MI_B} in which $\phi_{\theta}$ and $Q_{\eta}$ are updated with a batch of data that is jointly generated by both $\bar{\bar{m}}_{p \omega}$ and $\bar{\bar{m}}_{np}$), which can be thought of as performing planning with a batch of data that is generated by a single combined model. It also becomes obvious for DT planning algorithms if one notes that they perform planning by first performing search with a parametric model, and then by bootstrapping on the value estimates of a continually improving policy that is obtained by planning with a non-parametric model (see e.g., line 13 in Alg.\ \ref{alg:alg_MI_DT} in which action selection is done with both $\bar{m}_{p \omega}$ and $Q_{\eta}$ (which is obtained by planning with $\bar{m}_{np}$)), which can be thought of as performing planning with a single combined model that is obtained by concatenating the parametric and non-parametric models.
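To make the combined view concrete, the following minimal Python sketch wraps a parametric model and a replay buffer behind a single sampling interface, mirroring lines 17--20 of Alg.\ \ref{alg:alg_MI_B}; all names and interfaces here are our own illustrative assumptions.
\begin{verbatim}
import random

class CombinedModel:
    """A single-model view of a parametric model plus a
    non-parametric model (replay buffer). Illustrative only."""

    def __init__(self, parametric_model, replay_buffer):
        self.parametric_model = parametric_model  # (s, a) -> (r, s', done)
        self.replay_buffer = replay_buffer        # list of real transitions

    def sample_batch(self, n_real, n_sim):
        """Return one batch jointly generated by both models."""
        real = random.sample(self.replay_buffer,
                             min(n_real, len(self.replay_buffer)))
        sim = []
        # Simulated transitions: re-predict outcomes for previously
        # visited state-action pairs using the parametric model.
        for s, a, *_ in random.sample(self.replay_buffer,
                                      min(n_sim, len(self.replay_buffer))):
            r, s_next, done = self.parametric_model(s, a)
            sim.append((s, a, r, s_next, done))
        return real + sim  # planning treats this as one model's output
\end{verbatim}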
\section{Experimental Details}
\label{app_sec:exp_details}
In this section, we provide the details of the experiments that are performed in Sec.\ \ref{sec:experiments}. This also includes the implementation details of the CIs and MIs of the two planning styles considered in this study.
\subsection{Details of the CI Experiments}
\label{app_sec:details_of_CI_exps}
In all of the CI experiments, we have calculated the performance with a discount factor ($\gamma$) of $0.9$.
\begin{figure}[h!]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/simple_gridworld_rewards.pdf}
\vspace{-0.1cm}
\caption{\small SG Rewards} \label{app_fig:simple_gridworld_rew}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/simple_gridworld_rewards_m8.pdf}
\vspace{-0.1cm}
\caption{\small $m_8$ Rewards} \label{app_fig:simple_gridworld_rew_m8}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/simple_gridworld_rewards_PDM_init.pdf}
\vspace{-0.1cm}
\caption{\small Initial PDM Rewards} \label{app_fig:simple_gridworld_rew_pdm}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/simple_gridworld_rewards_tr.pdf}
\vspace{-0.1cm}
\caption{\small TST Rewards} \label{app_fig:simple_gridworld_rew_tr}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small Reward functions of (a) the SG environment, (b) the $m_8$ model, (c) the initial PDM, and (d) the TST.}
\vspace{-0.1cm}
\label{app_fig:simple_gridworld_env_details}
\end{figure}
\begin{wrapfigure}{R}{0.225\textwidth}
\vspace{-0.5cm}
\centering
\begin{subfigure}{0.225\textwidth}
\centering
\includegraphics[height=2.cm]{figures/simple_gridworld_SA.pdf}
\end{subfigure}
\vspace{-0.25cm}
\caption{\small The form of SA used in this study.}
\vspace{-0.1cm}
\label{app_fig:simple_gridworld_SA}
\end{wrapfigure}
\textbf{Environment \& Models.} All of the experiments in Sec.\ \ref{sec:classic_inst_exp} are performed on the SG environment. Here, the agent spawns in state S and has to navigate to the goal state depicted by G. At each time step, the agent receives an $(x,y)$ pair, indicating its position, and based on this, selects an action that moves it to one of the four neighboring cells with a slip probability of $0.1$. The agent receives a negative reward that is linearly proportional to its distance from G and a reward of $+10$ if it reaches G (see Fig.\ \ref{app_fig:simple_gridworld_rew}). The agent-environment interaction lasts for a maximum of 100 time steps, after which the episode terminates with a reward of $0$ if the agent was not able to reach the goal state G. (\textbf{PP Setting}) For the experiments in the PP setting, the agent is provided with a series of models in which it receives a reward of $+10$ if it reaches the goal state and a reward of $0$ elsewhere. For example, see the reward function of model $m_8$ in Fig.\ \ref{app_fig:simple_gridworld_rew_m8}. Note that these models have the same transition distribution and initial state distribution as the SG environment. (\textbf{P\&L Setting}) For the experiments in the P\&L setting, the models of both of the planning styles are initialized as a hand-designed PDM with a reward function as in Fig.\ \ref{app_fig:simple_gridworld_rew_pdm} and with a goal state located at the bottom right corner. Note that in these experiments, we have assumed that the agent already has access to the transition distribution and initial state distribution of the environment, and only has to learn the reward function. (\textbf{TL Setting}) Finally, for the experiments in the TL setting, we considered a TST with a reward function as in Fig.\ \ref{app_fig:simple_gridworld_rew_tr}, which is a transposed version of the TRT's reward function (see Fig.\ \ref{app_fig:simple_gridworld_rew}). Note again that we have assumed that the agent already has access to the transition distribution and initial state distribution of the environment, and only has to learn the reward function.
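For concreteness, the reward structures described above can be sketched as follows; this is a minimal Python rendering under our own naming assumptions (in particular, the distance metric and the scaling of the distance-based penalty are illustrative, not taken from the original code).
\begin{verbatim}
def sg_reward(agent_pos, goal_pos, scale=0.1):
    """SG environment reward: +10 at the goal, otherwise a negative
    reward linearly proportional to the distance from the goal.
    The Manhattan metric and `scale` are illustrative assumptions."""
    if agent_pos == goal_pos:
        return 10.0
    dist = (abs(agent_pos[0] - goal_pos[0])
            + abs(agent_pos[1] - goal_pos[1]))
    return -scale * dist

def pp_model_reward(agent_pos, goal_pos):
    """Reward of the PP-setting models (e.g., m_8): +10 at the goal
    state and 0 elsewhere."""
    return 10.0 if agent_pos == goal_pos else 0.0
\end{verbatim}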
\textbf{Implementation Details of the CIs.} For our CI experiments, we considered specific versions of the OMCP algorithm of \citet{tesauro1996line} and the Dyna-Q algorithm of \citet{sutton1990integrated, sutton1991dyna}. The pseudocodes of these algorithms are presented in Alg.\ \ref{alg:alg_CI_DT} \& \ref{alg:alg_CI_Bi}, respectively, and the details of them are provided in Table \ref{tab:CI_DT_details} \& \ref{tab:CI_B_details}, respectively. Note that in Alg.\ \ref{alg:alg_CI_DT}, $n_r$ (the number of episodes to perform rollouts) is set to a high value so that the input policy $\pi^i$ can properly be evaluated.
\begin{table}[h!]
\caption{Details and hyperparameters of Alg. \ref{alg:alg_CI_DT}.}
\centering
\begin{tabular}{l|l}
\hline
$\pi^i$ & a deterministic random policy \\
$\bar{m}$ & a tabular model (initialized as a hand-designed PDM (see Fig.\ \ref{app_fig:simple_gridworld_rew_pdm})) \\
$n_r$ & 50 \\
$\epsilon$ & linearly decays from $1.0$ to $0.0$ over the first 20 episodes \\
SA (in FA experiments) & an aggregation of the form in Fig.\ \ref{app_fig:simple_gridworld_SA} \\
\hline
\end{tabular}
\label{tab:CI_DT_details}
\end{table}
\begin{table}[h!]
\caption{Details and hyperparameters of Alg.\ \ref{alg:alg_CI_Bi}.}
\centering
\begin{tabular}{l|l}
\hline
$Q$ & a tabular value function (initialized as zero $\forall s\in\mathcal{S}$ and $\forall a\in\mathcal{A}$) \\
$\bar{\bar{m}}$ & a tabular model (initialized as a hand-designed PDM (see Fig.\ \ref{app_fig:simple_gridworld_rew_pdm})) \\
$\epsilon$ & linearly decays from $1.0$ to $0.0$ over the first 20 episodes \\
SA (in FA experiments) & an aggregation of the form in Fig.\ \ref{app_fig:simple_gridworld_SA} \\
\hline
\end{tabular}
\label{tab:CI_B_details}
\end{table}
\subsection{Details of the MI Experiments}
\label{app_sec:details_of_MI_exps}
In all of the MI experiments on the SG environment, we have calculated the performance with a discount factor ($\gamma$) of $0.9$, and in all of the MI experiments on the MG environments, we have calculated the performance with a discount factor of $0.99$.
\begin{figure}[h!]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/empty10x10.png}
\vspace{-0.1cm}
\caption{\small Empty 10x10} \label{app_fig:empty10x10}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/fourrooms.png}
\vspace{-0.1cm}
\caption{\small FourRooms} \label{app_fig:fourrooms}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/scs9n1.png}
\vspace{-0.1cm}
\caption{\small SCS9N1} \label{app_fig:simplecrossing}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/lcs9n1.png}
\vspace{-0.1cm}
\caption{\small LCS9N1} \label{app_fig:lavacrossing}
\end{subfigure}
\par\bigskip \vspace{-0.25em}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/fig_distshift_train_35.png}
\vspace{-0.1cm}
\caption{\small RDS Train (0.35)} \label{app_fig:rds_env_train}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/fig_distshift_eval_25.png}
\vspace{-0.1cm}
\caption{\small RDS Test (0.25)} \label{app_fig:rds_env_eval25}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/fig_distshift_eval_35.png}
\vspace{-0.1cm}
\caption{\small RDS Test (0.35)} \label{app_fig:rds_env_eval35}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/fig_distshift_eval_45.png}
\vspace{-0.1cm}
\caption{\small RDS Test (0.45)} \label{app_fig:rds_env_eval45}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small (a-d) Several environments that either pre-exist in or are manually built on top of MG. (e-h) The TRTs and TSTs of varying difficulty in the RDS environment \citep{zhao2021consciousness}. Note that the TSTs are just transposed versions of the TRTs.}
\vspace{-0.1cm}
\label{app_fig:minigrid_envs_details}
\end{figure}
\textbf{Environments \& Models.} In Sec.\ \ref{sec:modern_inst_exp}, part of our experiments were performed both on the SG environment and part of them were performed on MG environments. The details of these environments and their corresponding models are as follows:
\begin{itemize}
\item \textbf{SG Environment.} To learn about the SG environment, we refer the reader to Sec.\ \ref{app_sec:details_of_CI_exps} as we have used the same environment in the MI experiments as well. (\textbf{P\&L Setting}) To learn about the models of both planning styles, we also refer the reader to the P\&L Setting of Sec.\ \ref{app_sec:details_of_CI_exps} as we have used the same models in the MI experiments as well.
\item \textbf{MG Environments.} In the MG environments, the agent, depicted in red, has to navigate to the green goal cell while avoiding the orange lava cells (if there are any). At each time step, the agent receives a grid-based observation that contains its own position and the positions of the goal, wall and lava cells, and based on this, selects an action that either turns it left or right, or steps it forward. If the agent steps on a lava cell, the episode terminates with no reward, and if it reaches the goal cell, the episode terminates with a reward of $+1$.\footnote{Note that this is not the original reward function of MG environments. In the original version, the agent receives a reward of $+1-0.9(t/T)$, where $t$ is the number of time steps taken to reach the goal cell and $T$ is the maximum episode length, if it reaches the goal. We modified the reward function in order to obtain more intuitive results.} More on the details of these environments can be found in \citet{gym_minigrid}. (\textbf{P\&L Setting}) For the P\&L setting, we performed experiments on the Empty 10x10, FourRooms, SimpleCrossingS9N1 (SCS9N1) and LavaCrossingS9N1 (LCS9N1) environments (see Fig.\ \ref{app_fig:empty10x10}, \ref{app_fig:fourrooms}, \ref{app_fig:simplecrossing}, \& \ref{app_fig:lavacrossing}, respectively). While the last two of these environments already pre-exist in MG, the first two of them are manually built environments. Specifically, the Empty 10x10 environment is obtained by expanding the Empty 8x8 environment and the 10x10 FourRooms environment is obtained by contracting the 16x16 FourRooms environment in \citet{gym_minigrid}. (\textbf{TL Setting}) For the TL setting, we performed experiments on the sequential and regular versions of the RDS environment considered in \citep{zhao2021consciousness} (see Fig.\ \ref{app_fig:rds_env_train}-\ref{app_fig:rds_env_eval45}).\footnote{Note that the RDS environment is an environment that is built on top of MG.} In the sequential version, which we call RDS Sequential, the agent is trained on TRTs with difficulty 0.35 (see Fig.\ \ref{app_fig:rds_env_train}) and then it is allowed to adapt to subsequent TSTs (a transposed version of the TRTs) with difficulty 0.35 (see Fig.\ \ref{app_fig:rds_env_eval35}). In the regular version (the version considered in \citep{zhao2021consciousness}), the agent is trained on TRTs with difficulty 0.35 (see Fig.\ \ref{app_fig:rds_env_train}) and during the training process it is periodically evaluated on TSTs with difficulties varying from 0.25 to 0.45 (see Fig.\ \ref{app_fig:rds_env_eval25}-\ref{app_fig:rds_env_eval45}). Note that the difficulty parameter here controls the density of the lava cells. Also note that with every reset of the episode, a new lava cell pattern is (procedurally) generated for both the TRTs and TSTs. More on the details of the RDS environment can be found in \citep{zhao2021consciousness}.
Finally, note that, as opposed to the experiments on the SG environment, for both the P\&L and TL settings, we did not enforce any kind of structure on the models of the agent, and just initialized them randomly. We also note that, in our experiments with RDS Sequential, we reinitialized the non-parametric models (replay buffers) of both planning styles after the tasks switch from the TRTs to the TSTs.
\end{itemize}
\textbf{Implementation Details of the MIs.} For our MI experiments, we considered the DT and B planning algorithms in \citet{zhao2021consciousness} (see Sec.\ \ref{app_sec:MIs_discussion}), whose pseudocodes are presented in Alg.\ \ref{alg:alg_MI_DT} \& \ref{alg:alg_MI_B}, respectively. The details of these algorithms are provided in Table \ref{tab:MI_DT_details} \& \ref{tab:MI_B_details}, respectively. For more details (such as the NN architectures, replay buffer sizes, learning rates, exact details of the tree search, \dots), we refer the reader to the publicly available code and the supplementary material of \citep{zhao2021consciousness}.
\begin{table}[h!]
\caption{Details and hyperparameters of Alg.\ \ref{alg:alg_MI_DT}.}
\centering
\begin{tabular}{l|l}
\hline
$\phi_{\theta}$ & MiniGrid bag of words feature extractor \\
$Q_{\eta}$ & regular NN (P\&L setting), NN with attention (for set-based representations) (TL setting) \\
$\bar{m}_{p \omega}$ & regular NN (P\&L setting), NN with attention (for set-based representations) (TL setting) \\
& (bottleneck mechanism is disabled for both of the settings) \\
$N_{ple}$ & $50$M \\
$N_{rbt}$ & $50$k \\
$n_s$ & $1$ (DT(1)), $5$ (DT(5)), $15$ (DT(15)) \\
$n_{bs}$ & $128$ (P\&L setting), $64$ (TL setting) \\
$h$ & best-first search (training), random search (evaluation) \\
$S$ & random sampling \\
$\epsilon$ & linearly decays from $1.0$ to $0.0$ over the first 1M time steps \\
\hline
\end{tabular}
\label{tab:MI_DT_details}
\end{table}
\begin{table}[h!]
\caption{Details and hyperparameters of Alg. \ref{alg:alg_MI_B}.}
\centering
\begin{tabular}{l|l}
\hline
$\phi_{\theta}$ & MiniGrid bag of words feature extractor \\
$Q_{\eta}$ & regular NN (P\&L setting), NN with attention (for set-based representations) \\ & (TL setting) \\
$\bar{\bar{m}}_{p \omega}$ & regular NN (P\&L setting), NN with attention (for set-based representations) \\ & (TL setting) (bottleneck mechanism is disabled for both of the settings) \\
$N_{ple}$ & $50$M \\
$N_{rbt}$ & $50$k \\
$(n_{ibs}, n_{bs})$ & $(0,128)$ (B(R)), $(128,128)$ (B(R+S)), $(128,0)$ (B(S)) (P\&L setting), \\
& $(0,64)$ (B(R)), $(64,64)$ (B(R+S)), $(64,0)$ (B(S)) (TL setting) \\
$S$ & random sampling \\
$\epsilon$ & linearly decays from $1.0$ to $0.0$ over the first 1M time steps \\
\hline
\end{tabular}
\label{tab:MI_B_details}
\end{table}
Note that the publicly available code of \citep{zhao2021consciousness} only contains B planning algorithms in which the model is a model over the observations (and not the states), which for some reason causes the B planning algorithm of interest to perform very poorly (see the plots in \citep{zhao2021consciousness}). For this reason and in order to make a fair comparison with DT planning, we have implemented a version of the B planning algorithm in which the model is a model over the states and we performed all of our experiments with this version of the algorithm. Also note that, while we have used regular representations in our P\&L experiments, in order to deal with the large number of tasks, we have made use of set-based representations in our TL experiments (see \citep{zhao2021consciousness} for the details of this representation).
Additionally, we also performed experiments with simplified tabular versions of the MIs of the two planning styles, whose pseudocodes are presented in Alg.\ \ref{alg:alg_MI_DT_tab} \& \ref{alg:alg_MI_B_tab}, respectively. The details of these algorithms are provided in Table \ref{tab:MI_DT_tab_details} \& \ref{tab:MI_B_tab_details}, respectively.
\begin{algorithm}[h!]
\centering
\caption{Modernized Version of Tabular OMCP with both a Parametric and Non-Parametric Model} \label{alg:alg_MI_DT_tab}
\begin{algorithmic}[1]
\State \text{Initialize $Q(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State \text{Initialize $\bar{m}_p(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State \text{Initialize the replay buffer $\bar{m}_{np}\gets \{ \}$}
\State $n_s\gets \text{number of time steps to perform search}$
\State $h\gets \text{search heuristic}$
\While{\text{$\bar{m}_p$ and $\bar{m}_{np}$ have not converged}}
\State $S\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(\text{tree\_search\_with\_bootstrapping}(S,\bar{m}_p,Q,n_s,h))$}
\State \text{$R, S', \text{done} \gets \text{environment($A$)}$}
\State \text{$\bar{m}_{np}\gets \bar{m}_{np} + \{(S,A,R,S', \text{done})\} $}
\State $S_{\bar{m}_{np}}, A_{\bar{m}_{np}}, R_{\bar{m}_{np}}, S_{\bar{m}_{np}}', \text{done}_{\bar{m}_{np}} \gets \text{sample from } \bar{m}_{np}$
\State \text{Update} $Q$ \& $\bar{m}_p$ \text{with $S_{\bar{m}_{np}}, A_{\bar{m}_{np}}, R_{\bar{m}_{np}}, S_{\bar{m}_{np}}', \text{done}_{\bar{m}_{np}}$}
\State $S\gets S'$
\EndWhile
\EndWhile
\State \textbf{Return} $Q$ \& $\bar{m}_p(s,a)$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h!]
\centering
\caption{Modernized Version of Tabular Dyna-Q of Interest with both a Parametric and Non-Parametric Model}\label{alg:alg_MI_B_tab}
\begin{algorithmic}[1]
\State \text{Initialize $Q(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State \text{Initialize $\bar{\bar{m}}_p(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State \text{Initialize the replay buffer $\bar{\bar{m}}_{np}\gets \{ \}$}
\While{\text{$Q$, $\bar{\bar{m}}_p$ and $\bar{\bar{m}}_{np}$ have not converged}}
\State $S\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(Q(S,\cdot))$}
\State \text{$R, S', \text{done} \gets \text{environment($A$)}$}
\State \text{Update} $\bar{\bar{m}}_p(S,A)$ \text{with $R$, $S'$, $\text{done}$}
\State \text{$\bar{\bar{m}}_{np}\gets \bar{\bar{m}}_{np} + \{(S,A,R,S', \text{done})\} $}
\State $S\gets S'$
\EndWhile
\While{\text{$Q$ has not converged}}
\State $S_{\bar{\bar{m}}_p}, A_{\bar{\bar{m}}_p} \gets \text{sample from } \mathcal{S}\times \mathcal{A}$
\State $R_{\bar{\bar{m}}_p},S_{\bar{\bar{m}}_p}', \text{done}_{\bar{\bar{m}}_p} \gets \bar{\bar{m}}_p(S_{\bar{\bar{m}}_p},A_{\bar{\bar{m}}_p})$
\State \text{Update} $Q(S_{\bar{\bar{m}}_p},A_{\bar{\bar{m}}_p})$ \text{with $R_{\bar{\bar{m}}_p}$, $S_{\bar{\bar{m}}_p}'$, $\text{done}_{\bar{\bar{m}}_p}$}
\State $S_{\bar{\bar{m}}_{np}}, A_{\bar{\bar{m}}_{np}}, R_{\bar{\bar{m}}_{np}}, S_{\bar{\bar{m}}_{np}}', \text{done}_{\bar{\bar{m}}_{np}}\gets \text{sample from } \bar{\bar{m}}_{np}$
\State \text{Update} $Q(S_{\bar{\bar{m}}_{np}},A_{\bar{\bar{m}}_{np}})$ \text{with $R_{\bar{\bar{m}}_{np}}$, $S_{\bar{\bar{m}}_{np}}'$, $\text{done}_{\bar{\bar{m}}_{np}}$}
\EndWhile
\EndWhile
\State \textbf{Return} $Q(s,a)$
\end{algorithmic}
\end{algorithm}
\begin{table}[h!]
\caption{Details and hyperparameters of Alg.\ \ref{alg:alg_MI_DT_tab}.}
\centering
\begin{tabular}{l|l}
\hline
$Q$ & a tabular value function (initialized as zero $\forall s\in\mathcal{S}$ and $\forall a\in\mathcal{A}$) \\
$\bar{m}_p$ & a tabular model (initialized as a hand-designed PDM (see Fig.\ \ref{app_fig:simple_gridworld_rew_pdm})) \\
$n_s$ & $|\mathcal{A}|$ \\
$h$ & breadth-first search \\
$\epsilon$ & linearly decays from $1.0$ to $0.0$ over the first 20 episodes \\
\hline
\end{tabular}
\label{tab:MI_DT_tab_details}
\end{table}
\begin{table}[h!]
\caption{Details and hyperparameters of Alg.\ \ref{alg:alg_MI_B_tab}.}
\centering
\begin{tabular}{l|l}
\hline
$Q$ & a tabular value function (initialized as zero $\forall s\in\mathcal{S}$ and $\forall a\in\mathcal{A}$) \\
$\bar{\bar{m}}_p$ & a tabular parametric model (initialized as a hand-designed PDM (see Fig.\ \ref{app_fig:simple_gridworld_rew_pdm})) \\
$\epsilon$ & linearly decays from $1.0$ to $0.0$ over the first 20 episodes \\
\hline
\end{tabular}
\label{tab:MI_B_tab_details}
\end{table}
\section{Additional Results}
\label{app_sec:add_results}
In this section, we provide complementary results to our empirical results in Sec.\ \ref{sec:experiments}. Specifically, we provide (i) performance plots that are obtained by evaluating the different planning styles in their corresponding models and (ii) total reward plots that are obtained by evaluating the different planning styles in both the considered environments and in their corresponding models. Note that while the performance plots are obtained by the measure in (\ref{eqn:perf_measure}), the total reward plots are obtained by simply adding the rewards obtained by the agent throughout the episodes (which is actually the expected \emph{undiscounted} return, i.e., when we use $\gamma=1.0$ in measure (\ref{eqn:perf_measure})).
\subsection{Experiments with CIs}
\subsubsection{PP Experiments}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_model_perf.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_pp_perf_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_model_perf_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_pp_perf_2}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The performance of the CIs of DT and B planning in their corresponding models (learned on the SG environment) in the PP setting with tabular and SA VE representations. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:classic_algs_pp_perf}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_model_totrew.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_pp_totrew_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_model_totrew_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_pp_totrew_2}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_model_totrew.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_pp_totrew_3}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_model_totrew_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_pp_totrew_4}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The total reward obtained by the CIs of DT and B planning (a, b) on the SG environment and (c, d) in their corresponding models (learned on the SG environment) in the PP setting with tabular and SA VE representations. Black \& gray dashed lines indicate total reward obtained by the optimal \& random policies, respectively. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:classic_algs_pp_totrew}
\end{figure}
\subsubsection{P\&L Experiments}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_perf_nolearntrans.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_pl_perf_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_perf_nolearntrans_VFA(SA).pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_pl_perf_2}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The performance of the CIs of DT and B planning in their corresponding models (learned on the SG environment) in the P\&L setting with tabular and SA VE representations. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:classic_algs_pl_perf}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_totrew_nolearntrans.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_pl_totrew_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_totrew_nolearntrans_VFA(SA).pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_pl_totrew_2}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_totrew_nolearntrans.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_pl_totrew_3}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_totrew_nolearntrans_VFA(SA).pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_pl_totrew_4}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The total reward obtained by the CIs of DT and B planning (a, b) on the SG environment and (c, d) in their corresponding models (learned on the SG environment) in the P\&L setting with tabular and SA VE representations. Black \& gray dashed lines indicate total reward obtained by the optimal \& random policies, respectively. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:classic_algs_pl_totrew}
\end{figure}
\subsubsection{TL Experiments}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_perf_nolearntrans(tr)_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_tl_perf_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_perf_nolearntrans(tr).pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_tl_perf_2}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_perf_nolearntrans(tr)_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_tl_perf_3}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The performance of the CIs of DT and B planning (a) on the SG environment and (b, c) in their corresponding models (learned on the SG environment) in the TL setting with tabular and SA VE representations. Black \& gray dashed lines indicate the performance of the optimal \& random policies, respectively. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:classic_algs_tl_perf}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_totrew_nolearntrans(tr).pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_tl_totrew_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_totrew_nolearntrans(tr)_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_tl_totrew_2}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_totrew_nolearntrans(tr).pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_tl_totrew_3}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_totrew_nolearntrans(tr)_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_tl_totrew_4}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The total reward obtained by the CIs of DT and B planning (a, b) on the SG environment and (c, d) in their corresponding models (learned on the SG environment) in the TL setting with tabular and SA VE representations. Black \& gray dashed lines indicate total reward obtained by the optimal \& random policies, respectively. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:classic_algs_tl_totrew}
\end{figure}
\subsection{Experiments with MIs}
\subsubsection{P\&L Experiments}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_perf_nolearntrans_final.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:modern_algs_pl_perf_sg_2}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The performance of the MIs of DT and B planning in their corresponding models (learned on the SG environment) in the P\&L setting with tabular VE representations. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:modern_algs_pl_perf_sg}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_totrew_nolearntrans_final.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:modern_algs_pl_totrew_sg_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_totrew_nolearntrans_final.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:modern_algs_pl_totrew_sg_3}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The total reward obtained by the MIs of DT and B planning (a) on the SG environment and (b) in their corresponding models (learned on the SG environment) in the P\&L setting with tabular VE representations. The black dashed line indicates the total reward obtained by the optimal policy. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:modern_algs_pl_totrew_sg}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/Empty10-RandTS_totrew.pdf}
\vspace{-0.25cm}
\caption{\small Empty 10x10} \label{app_fig:modern_algs_pl_totrew_mg_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/Fourrooms-RandTS_totrew.pdf}
\vspace{-0.25cm}
\caption{\small FourRooms} \label{app_fig:modern_algs_pl_totrew_mg_2}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/SCS9N1-RandTS_totrew.pdf}
\vspace{-0.25cm}
\caption{\small SCS9N1} \label{app_fig:modern_algs_pl_totrew_mg_3}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/LCS9N1-RandTS_totrew.pdf}
\vspace{-0.25cm}
\caption{\small LCS9N1} \label{app_fig:modern_algs_pl_totrew_mg_4}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The total reward obtained by the MIs of DT and B planning in the P\&L setting with NN VE representations. The black dashed lines indicate the total reward obtained by the optimal policy in the corresponding environment. Shaded regions are one standard error over 50 runs.}
\vspace{-0.1cm}
\label{app_fig:modern_algs_pl_totrew_mg}
\end{figure}
\subsubsection{TL Experiments}
\begin{figure}[H]
\centering
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/RDS-TrSQ-PR-RandTS_totrew.pdf}
\vspace{-0.25cm}
\caption{\small RDS Sequential} \label{app_fig:modern_algs_tl_totrew_mg_1}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_train_totrew.pdf}
\vspace{-0.25cm}
\caption{\small RDS Train (0.35)} \label{app_fig:modern_algs_tl_totrew_mg_2}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_025_totrew.pdf}
\vspace{-0.25cm}
\caption{\small RDS Test (0.25)} \label{app_fig:modern_algs_tl_totrew_mg_3}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_035_totrew.pdf}
\vspace{-0.25cm}
\caption{\small RDS Test (0.35)} \label{app_fig:modern_algs_tl_totrew_mg_4}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_045_totrew.pdf}
\vspace{-0.25cm}
\caption{\small RDS Test (0.45)} \label{app_fig:modern_algs_tl_totrew_mg_5}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The total reward obtained by the MIs of DT and B planning in the TL setting with NN VE representations. The black dashed lines indicate the total reward obtained by the optimal policy in the corresponding environment. Shaded regions are one standard error over 50 runs.}
\vspace{-0.1cm}
\label{app_fig:modern_algs_tl_totrew_mg}
\end{figure}
\section{Introduction}
\label{sec:intro}
It has long been argued that, in order for reinforcement learning (RL) agents to adapt to a variety of changing tasks, they should be able to learn a model of their environment, which allows for counterfactual reasoning and fast re-planning \citep{russell2002artificial}. Although this is a widely-accepted view in the RL community, the question of \emph{how} to leverage a learned model to perform planning in the first place does not have a widely-accepted and clear answer. In model-based RL, the two prevalent planning styles are decision-time (DT) and background (B) planning \citep{sutton2018reinforcement}, where the agent mainly plans in the moment and in parallel to its interaction with the environment, respectively. These two planning styles have been developed with different assumptions and application domains in mind: DT planning algorithms \citep{tesauro1994td, tesauro1996line, silver2017mastering, silver2018general} were developed under the assumption that the exact model of the environment is known and for domains that allow for certain computational budgets, such as board games, whereas B planning algorithms \citep{sutton1990integrated, sutton1991dyna, kaiser2020model, hafner2021mastering} were developed under the assumption that the exact model is unknown and for domains that usually require fast responses, such as basic gridworlds and video games. Recently, however, with the introduction of the ability to learn a model through pure interaction \citep{schrittwieser2020mastering}, DT planning algorithms have been applied to the same domains as their B planning counterparts (see e.g., \citep{schrittwieser2020mastering, hamrick2021role}). Yet, it still remains unclear under \emph{what} conditions and in \emph{which} settings one of these planning styles will perform better than the other in these fast-response-requiring domains.
To clarify this, we first start by abstracting away from the specific implementation details of the two planning styles and view them in a unified way through the lens of dynamic programming (DP). Then, we consider the classical instantiations (CI) of these planning styles and, based on their DP interpretations, provide theoretical results and hypotheses on which one will perform better in the pure planning (PP), planning \& learning (P\&L), and transfer learning (TL) settings. We then consider the modern instantiations (MI) of these two planning styles and, based on both their DP interpretations and implementation details, provide hypotheses on which one will perform better in the P\&L and TL settings. Lastly, we perform illustrative experiments with both instantiations of these planning styles to empirically validate our theoretical results and hypotheses. Overall, our results suggest that even though DT planning does not perform as well as B planning in their CIs, due to (i) the improvements in the way planning is performed, (ii) the usage of only real experience in the updates of the value estimates, and (iii) the ability to improve upon the previously obtained policy at test time, the MIs of it can perform on par with or better than their B planning counterparts in both the P\&L and TL settings. We hope that our findings will help the RL community in developing a better understanding of the two planning styles and stimulate research on improving them in potentially interesting ways.
\section{Background}
\label{sec:background}
In value-based RL \citep{sutton2018reinforcement}, an agent $A$ interacts with its environment $E$ through a sequence of actions to maximize its long-term cumulative reward. Here, the environment is usually modeled as a Markov decision process $E=(\mathcal{S}_E, \mathcal{A}_E, p_E, r_E, d_E, \gamma)$, where $\mathcal{S}_E$ and $\mathcal{A}_E$ are the (finite) set of states and actions, $p_E:\mathcal{S}_E\times \mathcal{A}_E\times \mathcal{S}_E\to [0, 1]$ is the transition distribution, $r_E:\mathcal{S}_E\times \mathcal{A}_E\times \mathcal{S}_E\to \amsmathbb{R}$ is the reward function, $d_E:\mathcal{S}_E\to [0,1]$ is the initial state distribution, and $\gamma\in [0,1)$ is the discount factor. At each time step $t$, after taking an action $a_t\in\mathcal{A}_E$, the environment's state transitions from $s_t\in\mathcal{S}_E$ to $s_{t+1}\in\mathcal{S}_E$, and the agent receives an observation $o_{t+1}\in\mathcal{O}_E$ and an immediate reward $r_t$. We assume that the observations are generated by a deterministic procedure $\psi:\mathcal{S}_E\to \mathcal{O}_E$, unknown to the agent. As the agent usually does not have access to the states in $\mathcal{S}_E$ a priori, and as the observations in $\mathcal{O}_E$ are usually very high-dimensional, it has to operate on its own state space $\mathcal{S}_A$, which is generated by its own adaptable (or sometimes a priori fixed) value encoder $\phi:\mathcal{O}_E\to \mathcal{S}_A$. The goal of the agent is to jointly learn a value encoder $\phi$ and a value estimator (VE) $Q:\mathcal{S}_A\times\mathcal{A}_E\to \amsmathbb{R}$ that induces a policy $\pi:\mathcal{S}_A\times\mathcal{A}_E\to [0,1] \in \mathbb{\Pi}$, where $\mathbb{\Pi} \equiv \{\pi | \pi:\mathcal{S}_A\times\mathcal{A}_E\to [0,1] \}$, maximizing $E_{\pi, p_E} [\sum_{t=0}^\infty \gamma^t r_E(S_t, A_t, S_{t+1}) | S_0\sim d_E]$. For convenience, we will refer to the composition of $\phi$ and $Q$ as simply the VE.
\textbf{Model-Free \& Model-Based RL.} The two main ways of achieving this goal are through the use of model-free RL (MFRL) and model-based RL (MBRL) methods. In MFRL, there is just a learning phase and the gathered experience is mainly used in improving the VE. In MBRL, there are two alternating phases: the learning and planning phases.\footnote{Note that even though some MBRL algorithms, such as \citep{tesauro1996line, silver2017mastering, silver2018general}, do not employ a model learning phase and make use of an a priori given exact model, in this study, we will only study versions of them in which the model has to be learned from pure interaction.} In the learning phase, in contrast to MFRL, the gathered experience is mainly used in jointly learning an adaptable (or sometimes a priori fixed) model encoder $\varphi: \mathcal{O}_E\to \mathcal{S}_{M}$ and a model $m\equiv (p_M, r_M, d_M)\in \mathcal{M}$, where $\mathcal{M} \equiv \{(p_M,r_M,d_M) | p_M:\mathcal{S}_{M}\times\mathcal{A}_E\times\mathcal{S}_{M}\to [0,1], r_M:\mathcal{S}_{M}\times\mathcal{A}_E\times\mathcal{S}_{M}\to \amsmathbb{R}, d_M:\mathcal{S}_{M}\to [0,1] \}$ and $\mathcal{S}_{M}$ is the state space of the agent's model, and optionally, as in MFRL, the experience may also be used in improving the VE.\footnote{Note that the learned model can be in a parametric or non-parametric form (see \citep{van2019use}).} Again, for convenience, we will refer to the composition of $\varphi$ and $m$ as simply the model. In the planning phase, the learned model $m$ is then used for simulating experience, either to be used alongside real experience in improving the VE, or just to be used in selecting actions at decision time. Note that in general $\varphi\neq\phi$ and thus $\mathcal{S}_M\neq \mathcal{S}_A$. However, in these cases, we assume that the agent has access to a deterministic function $\rho: \mathcal{S}_M\to \mathcal{S}_A$ that allows for going from $\mathcal{S}_M$ to $\mathcal{S}_A$. We also assume that $\mathcal{S}_E\subseteq\mathcal{S}_M$, which implies that the agent's model is, in principle, capable of exactly modeling the environment, though this may be very hard in practice.
\textbf{Planning Styles in MBRL.} In MBRL, planning is performed in two different styles: (i) DT planning, and (ii) B planning (see Ch.\ 8 of \citep{sutton2018reinforcement}).\footnote{Although some new planning styles have been proposed in the transfer learning literature (see e.g., \citep{barreto2017successor, barreto2019option, barreto2020fast, alver2022constructing}), they can also be viewed as performing some form of DT planning with pre-learned models.} DT planning, also known as ``planning in the now'', is performed as a computation whose output is the selection of a single action for the current state. This is often done by unrolling the model forward from the current state to compute local value estimates, which are then usually discarded after action selection. Here, planning is performed independently for \emph{every} encountered state and it is mainly performed in an \emph{online} fashion, though it may also contain offline components. In contrast, B planning is performed by continually improving a cached VE, on the basis of simulated experience from the model, often in a global manner. Action selection is then quickly done by querying the VE at the current state. Unlike DT planning, B planning is often performed in a purely \emph{offline} fashion, in parallel to the agent-environment interaction, and thus is \emph{not} necessarily focused on the current state: well before action selection for any state, planning plays its part in improving the value estimates in many other states. For convenience, in this study, we will refer to all MBRL algorithms that have an online planning component as DT planning algorithms (see e.g., \citep{tesauro1994td, tesauro1996line, silver2017mastering, silver2018general, schrittwieser2020mastering, zhao2021consciousness}), and will refer to the rest as B planning algorithms (see e.g., \citep{sutton1990integrated, sutton1991dyna, kaiser2020model, hafner2021mastering, zhao2021consciousness}). Note that, regardless of the style, any type of planning can be viewed as a procedure $f: (\mathcal{M}, \mathbb{\Pi})\to \mathbb{\Pi}$ that takes a model $m$ and a policy $\pi^i$ as input and returns an improved policy $\pi_{m}^o$, according to $m$, as output.
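Viewed this way, both planning styles expose the same abstract interface; a minimal Python rendering of this procedure signature (all type names here are our own illustrative assumptions) is:
\begin{verbatim}
from typing import Callable

State = int      # agent state (illustrative; S_A is finite here)
Action = int
Policy = Callable[[State], Action]  # deterministic policy, for simplicity
Model = object                      # bundles (p_M, r_M, d_M); left abstract

# Planning, in either style, is a procedure f: (M, Pi) -> Pi that maps
# a model m and an input policy pi^i to an improved policy pi_m^o.
PlanningProcedure = Callable[[Model, Policy], Policy]
\end{verbatim}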
\textbf{Algorithms within the Two Planning Styles.} Starting with DT planning, depending on how much search is performed, DT planning algorithms can be studied under three main categories: DT planning algorithms (i) that perform no search (see e.g., \citep{tesauro1996line} and Alg.\ \ref{alg:alg_CI_DT}), (ii) that perform pure search (see e.g., \citep{campbell2002deep} and Alg.\ \ref{alg:alg_CI_DT_exs}), and (iii) that perform some amount of search (see e.g., \citep{silver2017mastering, silver2018general, schrittwieser2020mastering} and Alg.\ \ref{alg:alg_MI_DT}). In the first two, planning is performed by just running pure rollouts with a fixed or improving policy (see Fig.\ \ref{app_fig:pure_rollout}), and by purely performing search (see Fig.\ \ref{app_fig:exhaustive_search}), respectively. In the last one, planning is performed by first performing some amount of search and then either by running rollouts with a fixed or improving policy, by bootstrapping on the cached value estimates of a fixed or improving policy, or by doing both (see Fig.\ \ref{app_fig:search_and_rollout} \& \ref{app_fig:search_and_bootstrap}). Note that while the CIs of DT planning fall within the first two categories, the MIs of it usually fall within the last one. Also note that, while planning is performed with only a single parametric model in the first two categories, it is usually performed with both a parametric and non-parametric (usually a replay buffer, see \citep{van2019use}) model in the last one (see e.g., \citep{schrittwieser2020mastering} and Alg.\ \ref{alg:alg_MI_DT}). See \citep{bertsekas2021rollout} for more details on the different categories of DT planning. Moving on to B planning, as all B planning algorithms (see e.g., Alg.\ \ref{alg:alg_CI_B}, \ref{alg:alg_CI_Bi}, \ref{alg:alg_MI_B}) perform planning by periodically improving a cached VE throughout the model learning process, we do not study them under different categories. However, we again note that, while some B planning algorithms perform planning with a single parametric model (see e.g., Alg.\ \ref{alg:alg_CI_B} \& \ref{alg:alg_CI_Bi}), some perform planning with both a parametric and non-parametric (again usually a replay buffer) model (see e.g., Alg.\ \ref{alg:alg_MI_B}).
\section{A Unified View of the Two Planning Styles}
\label{sec:unified_view}
In this section, we abstract away from the specific implementation details, such as whether policy improvement is done locally or globally, or whether planning is performed in an online or offline manner, and view the two planning styles in a unified way through the lens of DP \citep{bertsekas1996neuro}. More specifically, we view DT planning through the lens of policy iteration (PI) as DT planning algorithms can be considered as performing some amount of asynchronous PI that is focused on the current state (see \citep{bertsekas2021rollout}), and we view B planning through the lens of value iteration (VI) as B planning algorithms can be considered as performing some amount of asynchronous VI that is focused on the sampled states (see \citep{sutton2018reinforcement}).
In this framework, DT planning algorithms that perform no search can be considered as performing one-step PI at every encountered state, on top of a fixed or improving policy, as they compute $\pi_m^o$ by first running many rollouts with a fixed or improving $\pi^i$ in $m$ to evaluate the current state,
and then by selecting the most promising action (which can be considered as first performing policy evaluation and then policy improvement). Similarly, DT planning algorithms that perform pure search can be considered as performing PI until convergence (which we call full PI) at every encountered state as they disregard $\pi^i$ and compute $\pi_{m}^o$ by first performing exhaustive search in $m$ to obtain the optimal values at the current state, and then by selecting the most promising action. Finally, DT planning algorithms that perform some amount of search can be considered as performing something between one-step and full PI at every encountered state, on top of a fixed or improving policy, as they are just a mixture of DT planning algorithms that perform no search and pure search. Hence, depending on how much search is performed, DT planning algorithms in general can be viewed as going between the spectrum of performing one-step PI and full PI at every encountered state, on top of a fixed or improving policy.
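As a concrete illustration of the no-search end of this spectrum, the following Python sketch performs one-step PI at the current state via rollouts; the \texttt{model.step} and \texttt{policy} interfaces, and all hyperparameter values, are our own illustrative assumptions.
\begin{verbatim}
import numpy as np

def one_step_pi_at_state(model, policy, state, actions,
                         n_rollouts=50, horizon=100, gamma=0.9):
    """Rollout-style DT planning: one-step policy improvement at
    `state`. Assumes model.step(s, a) -> (r, s', done) and
    policy(s) -> a. Each action is evaluated under the base policy
    (policy evaluation), then the best one is selected (policy
    improvement)."""
    q = np.zeros(len(actions))
    for i, a in enumerate(actions):
        for _ in range(n_rollouts):
            ret, discount = 0.0, 1.0
            s, a_t = state, a
            for _ in range(horizon):
                r, s, done = model.step(s, a_t)
                ret += discount * r
                discount *= gamma
                if done:
                    break
                a_t = policy(s)  # follow the base policy afterwards
            q[i] += ret / n_rollouts
    return actions[int(np.argmax(q))]  # improved action for `state`
\end{verbatim}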
Similarly, under the assumption that all states are sampled at least once during planning, all B planning algorithms can also be considered as performing either one-step VI, VI until convergence (which we call full VI), or something in between, on top of a fixed or improving policy, as they compute $\pi_m^o$ by periodically improving a fixed or improving $\pi^i$ on the basis of simulated experience from $m$, either for a single time step, until convergence, or somewhere in between. Thus, depending on how much planning is performed, B planning algorithms in general can also be viewed as interpolating between performing one-step VI and full VI, on top of a fixed or improving policy.
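To make the two ends of this spectrum concrete, the following sketch (ours, purely illustrative; the tabular model interface with a transition tensor \texttt{P}, an expected-reward matrix \texttt{R}, and a \texttt{base\_policy} callable is an assumption, not part of any of the cited algorithms) contrasts one-step PI focused on the current state with full VI over all model states:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def dt_one_step_pi(P, R, s, base_policy, gamma, n_rollouts=50, horizon=100):
    # One-step PI focused on state s: evaluate each action with rollouts of
    # the base policy in the model (P: (S, A, S), R: (S, A)), then act
    # greedily; the local estimates are discarded after action selection.
    n_states, n_actions = R.shape
    q = np.zeros(n_actions)
    for a in range(n_actions):
        for _ in range(n_rollouts):
            s2 = rng.choice(n_states, p=P[s, a])
            ret, disc = R[s, a], gamma
            for _ in range(horizon):
                a2 = base_policy(s2)
                ret += disc * R[s2, a2]
                s2 = rng.choice(n_states, p=P[s2, a2])
                disc *= gamma
            q[a] += ret / n_rollouts
    return int(q.argmax())

def b_full_vi(P, R, gamma, tol=1e-8):
    # Full VI over all model states: returns a cached greedy policy that can
    # later be queried at any encountered state.
    v = np.zeros(P.shape[0])
    while True:
        q = R + gamma * P @ v          # P @ v contracts over next states
        v_new = q.max(axis=1)
        if np.abs(v_new - v).max() < tol:
            return q.argmax(axis=1)
        v = v_new
\end{verbatim}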
Finally, note that, in Sec.\ \ref{sec:background}, we have pointed out that some DT and B planning algorithms perform planning with both a parametric and non-parametric model, which can make it hard for them to be viewed through our proposed framework. However, if one considers the two separate models as a single combined model, then these algorithms can also be viewed straightforwardly in our proposed framework: DT and B planning algorithms that perform planning with two separate models can still be viewed as going between the spectrum of performing one-step PI / VI and full PI / VI, however now, they would just be performing planning with a combined model (see App.\ \ref{app_sec:combined_view_discussion} for a broader discussion on the combined model view).
\section{Decision-Time vs.\ Background Planning}
\label{sec:DTvsB_planning}
In this study, we are interested in understanding under what conditions and in which settings one planning style will perform better than the other. Thus, we start by defining a performance measure that will be used in comparing the two planning styles of interest. Given an arbitrary model $m=(p,r,d)\in\mathcal{M}$, let us define the performance of an arbitrary policy $\pi\in\mathbb{\Pi}$ in it as follows:
\begin{equation}
J_{m}^{\pi} \equiv E_{\pi, p} [{\textstyle \sum}_{t=0}^{\infty} \gamma^t r(S_t, A_t, S_{t+1}) | S_0\sim d ].
\label{eqn:perf_measure}
\end{equation}
Note that $J_{m}^{\pi}$ corresponds to the expected \emph{discounted} return of a policy $\pi$ in model $m$. Next, we consider the conditions under which the comparisons will be made: we are interested in both simple scenarios in which the VEs and models are both represented as a table, and in complex ones in which at least the VEs or models are represented using function approximation (FA).
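For concreteness, $J_{m}^{\pi}$ can be estimated by straightforward Monte Carlo; the sketch below (ours, purely illustrative, with a tabular model interface as an assumption) truncates the infinite sum in (\ref{eqn:perf_measure}) at a horizon beyond which the discounted tail is negligible:
\begin{verbatim}
import numpy as np

def estimate_J(P, R, d, policy, gamma, n_episodes=2000, horizon=200, seed=0):
    # Monte Carlo estimate of J_m^pi for a tabular model m = (P, R, d):
    # P: (S, A, S) transitions, R: (S, A, S) rewards mirroring
    # r(S_t, A_t, S_{t+1}), and d: (S,) initial state distribution.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_episodes):
        s, disc = rng.choice(len(d), p=d), 1.0
        for _ in range(horizon):  # gamma**horizon makes the tail negligible
            a = policy(s)
            s2 = rng.choice(P.shape[-1], p=P[s, a])
            total += disc * R[s, a, s2]
            disc *= gamma
            s = s2
    return total / n_episodes
\end{verbatim}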
\begin{wrapfigure}{R}{0.515\textwidth}
\vspace{-0.5cm}
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[width=3.5cm]{figures/models1.pdf}
\caption{\small General partitioning} \label{fig:model_space_1}
\end{subfigure}
\hspace{0.1cm}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[width=3.5cm]{figures/models2.pdf}
\caption{\small Partitioning of interest} \label{fig:model_space_2}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small (a) The general partitioning and (b) the partitioning of interest of $\mathcal{M}$, for a given $\Pi$ and $J$. The gray and blue regions indicate $\mathcal{M}_{\Pi, J}^{\textnormal{PCM}}\cap \mathcal{M}_{\Pi, J}^{\textnormal{PRM}}$ and $\mathcal{M}\setminus (\mathcal{M}_{\Pi, J}^{\textnormal{PCM}}\cup \mathcal{M}_{\Pi, J}^{\textnormal{PRM}})$, respectively.}
\vspace{-0.2cm}
\label{fig:model_space}
\end{wrapfigure}
\textbf{Partitioning of the Model Space.} We now present a way to partition the space of agent models $\mathcal{M}$ such that one planning style is guaranteed to perform on par or better than the other. Let us start by defining $m^*$ to be the exact model of the environment. Note that $m^*\in\mathcal{M}$, as we assumed that $\mathcal{S}_E\subseteq\mathcal{S}_M$ (see Sec.\ \ref{sec:background}). Then, given a policy set $\Pi\subseteq \mathbb{\Pi}$, containing at least two policies, and a performance measure $J$, defined as in (\ref{eqn:perf_measure}), depending on the relative performances of the policies in it and in $m^*$, a model $m\in\mathcal{M}$ can belong to one of the following main classes:
\begin{definition}[PCM]
\label{def:pcm}
Given $\Pi\subseteq\mathbb{\Pi}$ and $J$, let $\mathcal{M}_{\Pi, J}^{\textnormal{PCM}} \equiv \{ m\in\mathcal{M} \ | \ \smash{J_{m^*}^{\pi^i} \lesseqgtr J_{m^*}^{\pi^j}} \ \forall\pi^i,\pi^j\in\Pi \text{ satisfying } \smash{J_{m}^{\pi^i} \gtreqless J_{m}^{\pi^j}} \}$. We say that each $m\in\mathcal{M}_{\Pi,J}^{\textnormal{PCM}}$ is a \emph{performance-contrasting model (PCM)} of $m^*$ w.r.t.\ $\Pi$ and $J$.
\end{definition}
\begin{definition}[PRM]
\label{def:prm}
Given $\Pi\subseteq\mathbb{\Pi}$ and $J$, let $\mathcal{M}_{\Pi,J}^{\textnormal{PRM}} \equiv \{ m\in\mathcal{M} \ | \ \smash{J_{m^*}^{\pi^i} \gtreqless J_{m^*}^{\pi^j}} \ \forall\pi^i,\pi^j\in\Pi \text{ satisfying } \smash{J_{m}^{\pi^i} \gtreqless J_{m}^{\pi^j}}\}$. We say that each $m\in\mathcal{M}_{\Pi,J}^{\textnormal{PRM}}$ is a \emph{performance-resembling model (PRM)} of $m^*$ w.r.t.\ $\Pi$ and $J$.
\end{definition}
\vspace{-0.25cm}
Informally, given any two policies in $\Pi$, and $J$, a model $m$ is a PCM of $m^*$ iff the policy that performs on par or better in it performs on par or worse in $m^*$, and it is a PRM of $m^*$ iff the policy that performs on par or better in it also performs on par or better in $m^*$. Note that $m$ can both be a PCM and a PRM of $m^*$ iff the two policies perform on par in both $m$ and $m^*$. If $\Pi$ contains at least one of the optimal policies for $m$, then $m$ can also belong to one of the following specialized classes:
\begin{definition}[PNM]
\label{def:pnm}
Given $\Pi\subseteq\mathbb{\Pi}$ and $J$, let $\mathcal{M}_{\Pi,J}^{\textnormal{PNM}} \equiv \{ m\in\mathcal{M}_{\Pi,J}^{\textnormal{PCM}} \ | \ \smash{J_{m^*}^{\pi_m^{*}} = \min_{\pi\in\mathbb{\Pi}} J_{m^*}^{\pi}} \ \forall\pi_m^{*}\in\Pi\}$, where $\pi_m^*$ denotes the optimal policies in $m$. We say that each $m\in\mathcal{M}_{\Pi,J}^{\textnormal{PNM}}$ is a \emph{performance-minimizing model (PNM)} of $m^*$ w.r.t.\ $\Pi$ and $J$.
\end{definition}
\begin{definition}[PXM]
\label{def:pxm}
Given $\Pi\subseteq\mathbb{\Pi}$ and $J$, let $\mathcal{M}_{\Pi,J}^{\textnormal{PXM}} \equiv \{ m\in\mathcal{M}_{\Pi,J}^{\textnormal{PRM}} \ | \ \smash{J_{m^*}^{\pi_m^{*}} = \max_{\pi\in\mathbb{\Pi}} J_{m^*}^{\pi}} \ \forall\pi_m^{*}\in\Pi\}$, where $\pi_m^*$ denotes the optimal policies in $m$. We say that each $m\in\mathcal{M}_{\Pi,J}^{\textnormal{PXM}}$ is a \emph{performance-maximizing model (PXM)} of $m^*$ w.r.t.\ $\Pi$ and $J$.
\end{definition}
\vspace{-0.25cm}
Informally, given a $\Pi$ that contains the optimal policies for a model $m$, and $J$, $m$ is a PNM of $m^*$ iff all of the optimal policies result in the worst possible performance in $m^*$, and it is a PXM of $m^*$ iff all of them result in the best possible performance in $m^*$. Note that all definitions above are agnostic to how the models are represented.
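In code, the PCM and PRM tests reduce to checking whether the performance ordering of the policies is flipped or preserved; the sketch below (ours, purely illustrative) takes the performances in $m$ and $m^*$ as plain dictionaries, and the PNM / PXM tests would additionally require the optimal policies of $m$ to attain the minimum / maximum of $J_{m^*}$ over all of $\mathbb{\Pi}$:
\begin{verbatim}
def is_pcm(J_m, J_true):
    # m is a PCM iff, whenever pi_i performs on par or better than pi_j
    # in m, it performs on par or worse in the true model m*.
    pols = list(J_m)
    return all(J_true[i] <= J_true[j]
               for i in pols for j in pols if J_m[i] >= J_m[j])

def is_prm(J_m, J_true):
    # m is a PRM iff the performance ordering of the policies in m is
    # preserved in m*.
    pols = list(J_m)
    return all(J_true[i] >= J_true[j]
               for i in pols for j in pols if J_m[i] >= J_m[j])

# E.g., with Pi = {pi_r, pi_ce}:
print(is_pcm({"r": 0.3, "ce": 0.9}, {"r": 0.8, "ce": 0.2}))  # True
print(is_prm({"r": 0.3, "ce": 0.9}, {"r": 0.2, "ce": 0.8}))  # True
\end{verbatim}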
Fig.\ \ref{fig:model_space_1} illustrates how $\mathcal{M}$ is partitioned for a given $\Pi$ and $J$. Note that for a fixed $J$, the relative sizes of the model classes solely depend on $\Pi$. For instance, as $\Pi$ gets larger, the relative sizes of $\mathcal{M}_{\Pi,J}^{\textnormal{PCM}}$ and $\mathcal{M}_{\Pi,J}^{\textnormal{PRM}}$ shrink, because with every policy that is added to $\Pi$, the number of criteria that a model must satisfy to be a PCM or PRM increases, which reduces the odds of an arbitrary model in $\mathcal{M}$ being in $\mathcal{M}_{\Pi,J}^{\textnormal{PCM}}$ or $\mathcal{M}_{\Pi,J}^{\textnormal{PRM}}$. And, as $\Pi$ gets smaller, the relative sizes of $\mathcal{M}_{\Pi,J}^{\textnormal{PCM}}$ and $\mathcal{M}_{\Pi,J}^{\textnormal{PRM}}$ grow, and eventually fill up the entire space, when $\Pi$ contains only two policies. Fig.\ \ref{fig:model_space_2} illustrates the partitioning in this scenario. Since we are only interested in comparing the policies of two planning styles, the $\Pi$ of interest has a size of two, and thus we have a partitioning as in Fig.\ \ref{fig:model_space_2}.
\subsection{Classical Instantiations of the Two Planning Styles}
\label{sec:classic_inst}
We are now ready to discuss when one planning style will perform better than the other. For easy analysis, we start by considering the CIs of the two planning styles in which both the VEs and models are represented as a table. More specifically, for DT planning we study a version of the OMCP algorithm of \citet{tesauro1996line} in which a parametric model is learned from experience (see Alg.\ \ref{alg:alg_CI_DT})\footnote{Note that for the CIs of DT planning, we choose to study an algorithm that performs no search, and not one that performs pure search (see Sec.\ \ref{sec:background}), as the latter ones are not applicable to fast-response-requiring domains.}, and for B planning we study a simplified version of the Dyna-Q algorithm of \citet{sutton1990integrated, sutton1991dyna} in which planning is performed until convergence with every model in the model learning process (see Alg.\ \ref{alg:alg_CI_Bi}). See App.\ \ref{app_sec:CIs_discussion} for a discussion on why we consider these versions. Note that these two algorithms can be considered as performing one-step PI and full VI on top of a fixed policy, respectively (see Sec.\ \ref{sec:unified_view}). Also note that, although we only consider these specific instantiations, as long as the VEs of both planning styles are represented as a table and DT planning corresponds to taking a smaller or on par policy improvement step than B planning, which is the case in most CIs, the results that we derive in this section would hold regardless of the choice of instantiation. Before considering different settings, let us define the following policies that will be useful in referring to the input and output policies of the two planning styles:
\begin{definition}[Base, Rollout \citep{bertsekas2021rollout} \& CE \citep{jiang2015dependence} Policies] \label{def:base_rollout_certeq_pol}
The \emph{base policy} $\pi^b\in\mathbb{\Pi}$ is the policy used in initiating PI or VI. Given a fixed or improving base policy $\pi^b$ and a model $m\in\mathcal{M}$, the \emph{rollout policy} $\pi_{m}^r\in\mathbb{\Pi}$ is the policy obtained after performing one-step of PI on top of $\pi^b$ in $m$, and the \emph{certainty-equivalence (CE) policy} $\pi_{m}^{ce}\in\mathbb{\Pi}$ is the policy obtained after performing full PI or full VI on top of $\pi^b$ in $m$.
\end{definition}
\vspace{-0.25cm}
In the rest of this section, we will refer to the policies generated by the CIs of DT and B planning with model $m$ on top of a fixed base policy $\pi^b$ as $\pi_m^r$ and $\pi_m^{ce}$, respectively.
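To fix ideas, the following sketch (ours; a schematic paraphrase rather than the exact pseudocode of Alg.\ \ref{alg:alg_CI_DT} \& \ref{alg:alg_CI_Bi}, with an assumed \texttt{env.reset()} / \texttt{env.step()} interface) shows where each CI performs its planning, reusing the one-step PI and full VI routines sketched in Sec.\ \ref{sec:unified_view}:
\begin{verbatim}
import numpy as np

class TabularMLModel:
    # Count-based maximum-likelihood model learned from real transitions.
    def __init__(self, n_states, n_actions):
        self.n = np.zeros((n_states, n_actions, n_states))
        self.r_sum = np.zeros((n_states, n_actions))

    def update(self, s, a, r, s2):
        self.n[s, a, s2] += 1
        self.r_sum[s, a] += r

    def P_R(self):
        visits = self.n.sum(-1)                        # (S, A)
        P = self.n / np.maximum(visits, 1)[..., None]
        P[visits == 0] = 1.0 / self.n.shape[0]         # unseen: uniform
        return P, self.r_sum / np.maximum(visits, 1)

def run_dt_ci(env, model, base_policy, gamma, n_steps):
    # DT planning CI: re-plan at every encountered state (yields pi_m^r).
    s = env.reset()
    for _ in range(n_steps):
        P, R = model.P_R()
        a = dt_one_step_pi(P, R, s, base_policy, gamma)
        s2, r, done = env.step(a)
        model.update(s, a, r, s2)
        s = env.reset() if done else s2

def run_b_ci(env, model, gamma, n_steps):
    # B planning CI: plan to convergence with every model (yields pi_m^ce),
    # then act by querying the cached result.
    s = env.reset()
    for _ in range(n_steps):
        P, R = model.P_R()
        policy = b_full_vi(P, R, gamma)
        a = int(policy[s])
        s2, r, done = env.step(a)
        model.update(s, a, r, s2)
        s = env.reset() if done else s2
\end{verbatim}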
\textbf{PP Setting.} We start by considering the PP setting in which the agent is directly provided with a model. In this setting, we can prove the following statements:
\begin{proposition}
\label{prop:prop1}
Let $m\in\mathcal{M}$ be a PCM of $m^*$ w.r.t.\ $\Pi=\{\pi_m^r, \pi_m^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_m^r} \geq J_{m^*}^{\pi_m^{ce}}$.
\end{proposition}
\begin{proposition}
\label{prop:prop2}
Let $m\in\mathcal{M}$ be a PRM of $m^*$ w.r.t.\ $\Pi=\{\pi_m^r, \pi_m^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_m^{ce}} \geq J_{m^*}^{\pi_m^{r}}$.
\end{proposition}
\vspace{-0.25cm}
Due to space constraints, we defer all the proofs to App.\ \ref{app_sec:proofs}. Prop.\ \ref{prop:prop1} \& \ref{prop:prop2} imply that, given $\Pi=\{\pi_m^r, \pi_m^{ce} \}$ and $J$, although DT planning will perform on par or better than B planning when the provided model $m$ is a PCM, it will perform on par or worse when $m$ is a PRM. Note that these results would not be guaranteed to hold if FA was introduced in the VE representations, as in this case, there would be no guarantee that full VI will result in a better policy than one-step PI in $m$ \citep{bertsekas1996neuro}. However, if one were to use approximators with good generalization capabilities (GGC), i.e., approximators that assign the same value to similar observations, we would expect a similar performance trend to hold.
\textbf{P\&L Setting.} In the P\&L setting, instead of being provided directly, the model has to be learned from the experience gathered by the agent. In this scenario, as different policies are likely to be used in the model learning process, the encountered models of the two planning styles, which we denote as $\bar{m}\in\mathcal{M}$ and $\bar{\bar{m}}\in\mathcal{M}$ for DT and B planning, respectively, are also likely to be different. Thus, as they require the two planning styles to have access to the same model, the results of Prop.\ \ref{prop:prop1} \& \ref{prop:prop2} are not valid in the P\&L setting. However, if the model of B planning is a PNM or a PXM, we can prove the following statements:
\begin{proposition}
\label{prop:prop3}
Let $\bar{m}\in\mathcal{M}$ be a PCM or a PRM of $m^*$ w.r.t.\ $\bar{\Pi}=\{\pi_{\bar{m}}^r, \pi_{\bar{m}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$, and let $\bar{\bar{m}} \in\mathcal{M}$ be a PNM of $m^*$ w.r.t.\ $\bar{\bar{\Pi}}=\{\pi_{\bar{\bar{m}}}^r, \pi_{\bar{\bar{m}}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_{\bar{m}}^r} \geq J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}}$.
\end{proposition}
\begin{proposition}
\label{prop:prop4}
Let $\bar{m}\in\mathcal{M}$ be a PCM or a PRM of $m^*$ w.r.t.\ $\bar{\Pi}=\{\pi_{\bar{m}}^r, \pi_{\bar{m}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$, and let $\bar{\bar{m}}\in\mathcal{M}$ be a PXM of $m^*$ w.r.t.\ $\bar{\bar{\Pi}}=\{\pi_{\bar{\bar{m}}}^r, \pi_{\bar{\bar{m}}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} \geq J_{m^*}^{\pi_{\bar{m}}^{r}}$.
\end{proposition}
\vspace{-0.25cm}
Prop.\ \ref{prop:prop3} \& \ref{prop:prop4} imply that, given $\bar{\Pi}=\{\pi_{\bar{m}}^r, \pi_{\bar{m}}^{ce} \}$, $\bar{\bar{\Pi}}=\{\pi_{\bar{\bar{m}}}^r, \pi_{\bar{\bar{m}}}^{ce} \}$ and $J$, although DT planning will perform on par or better than B planning when $\bar{\bar{m}}$ is a PNM, it will perform on par or worse when $\bar{\bar{m}}$ is a PXM. While the former result can be relevant in the initial phases of B planning's model learning process, the latter one is most likely to be relevant in the final phases of this process, when $\bar{\bar{m}}$ becomes a PXM. Note that since the models are represented as tables, the learned models of both planning styles are guaranteed to become PXMs in the limit, as with sufficient exploration they will eventually converge to $m^*$ \citep{sutton2018reinforcement}. However, note that this would not be guaranteed if FA was used in the model representations. Lastly, even if $\bar{\bar{m}}$ starts as a PNM and eventually becomes a PXM, the results of Prop.\ \ref{prop:prop3} \& \ref{prop:prop4} would not be guaranteed to hold if FA was used in the VE representations, due to the reason discussed in the PP setting. However, if one were to use approximators with GGC, we would again expect a similar performance trend to hold.
\textbf{TL Setting.} Although there are many different settings in TL \citep{taylor2009transfer}, for easy analysis, we start by considering the simplest one in which there is only one training task (TRT) and one subsequent test task (TST), differing only in their reward functions, and in which the agent's transfer ability is measured by how fast it adapts to the TST after being trained on the TRT (more challenging settings will be considered in the next section). In this setting, we would expect the results of the P\&L setting to hold directly, as instead of a single one, there are now two consecutive P\&L settings.
\subsection{Modern Instantiations of the Two Planning Styles}
\label{sec:modern_inst}
We now consider the MIs of the two planning styles in which both the VEs and models are represented with neural networks (NN). More specifically, we study the DT and B planning algorithms in \citet{zhao2021consciousness} (see Alg.\ \ref{alg:alg_MI_DT} \& \ref{alg:alg_MI_B}) as they are reflective of many of the properties of their state-of-the-art counterparts (see e.g., \citep{schrittwieser2020mastering, kaiser2020model, hafner2021mastering}) and their code is publicly available. See App.\ \ref{app_sec:MIs_discussion} for a broader discussion on why we choose these algorithms. Here, as the DT planning algorithm performs planning by first performing some amount of search with a parametric model and then by bootstrapping on the value estimates of a continually improving policy, it can be considered as performing more than one-step PI on top of an improving policy in a combined model $\bar{m}_c$ (see Sec.\ \ref{sec:unified_view}). And, as the B planning algorithm performs planning by continually improving a VE at every time step with both a parametric and non-parametric (a replay buffer) model, it can be viewed as performing something between one-step VI and full VI on top of an improving policy with a combined model $\bar{\bar{m}}_c$ (see Sec.\ \ref{sec:unified_view}). However, if $\bar{\bar{m}}_c$ converges, it can be viewed as performing full VI, as in this case the continual improvements to the VE with the converged $\bar{\bar{m}}_c$ would eventually lead to an improvement that is equivalent to performing full VI. Note that although we only consider these specific instantiations, the hypotheses we provide in this section are generally applicable to most state-of-the-art MBRL algorithms, as the algorithms in \citep{zhao2021consciousness} are reflective of many of their properties (see App.\ \ref{app_sec:MIs_discussion}).
\textbf{P\&L Setting.} We start with the P\&L setting, and skip the PP one as it is not a relevant setting used with the MIs. To ease the analysis, let us start by considering a simplified scenario in which both the VEs and models of the MIs of the two planning styles are represented as a table. Let us also define the \emph{improved rollout policy} to be the policy $\pi_{m}^{r+}\in\mathbb{\Pi}$ obtained after performing more than one-step PI, with the exact number not being important, on top of a base policy $\pi^{b}\in\mathbb{\Pi}$ in model $m$, and let us also refer to the policies generated by the MIs of DT and B planning with models $\bar{m}_c$ and $\bar{\bar{m}}_c$ on top of an improving base policy $\pi^{b}$ as $\pi_{\bar{m}_c}^{r+}$ and $\pi_{\bar{\bar{m}}_c}^{ce}$, respectively. Then, using $\pi_{\bar{m}_c}^{r+}$ and $\pi_{\bar{\bar{m}}_c}^{ce}$ in place of $\pi_{\bar{m}}^r$ and $\pi_{\bar{\bar{m}}}^{ce}$, respectively, and under the assumption that $\bar{\bar{m}}_c$ converges, we would expect the formal statements of the P\&L setting of Sec.\ \ref{sec:classic_inst} to hold exactly, as DT planning still corresponds to taking a smaller or on par policy improvement step than B planning. However, as DT planning now corresponds to performing more than one-step PI, we would expect the performance gap between the two planning styles to reduce in their MIs. Moreover, we would expect this gap to gradually close if both $\bar{m}_c$ and $\bar{\bar{m}}_c$ become, and remain as, PXMs, as the use of an improving policy for DT planning would result in a continually improving performance that gets closer to the one of B planning.
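In the simplified tabular scenario, the improved rollout policy can be made concrete as a few exact rounds of evaluation and greedy improvement in the model; in the sketch below (ours, purely illustrative, with expected rewards \texttt{R} of shape \texttt{(S, A)} as an assumption), \texttt{n = 1} recovers $\pi_m^r$ and large \texttt{n} approaches $\pi_m^{ce}$:
\begin{verbatim}
import numpy as np

def improved_rollout_policy(P, R, pi, gamma, n):
    # pi: (S,) deterministic base policy. Each round evaluates pi exactly
    # (solving v = r_pi + gamma * P_pi v) and then improves it greedily.
    S = P.shape[0]
    for _ in range(n):
        P_pi = P[np.arange(S), pi]                     # (S, S)
        r_pi = R[np.arange(S), pi]                     # (S,)
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
        pi = (R + gamma * P @ v).argmax(axis=1)
    return pi
\end{verbatim}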
Coming back to our original scenario in which both the VEs and models of the MIs of the two planning styles are represented with NNs, we would expect a similar performance trend to hold as NNs are approximators that have GGC. However, this expectation is solely based on the DP interpretations of the two planning styles and thus does not take into account their implementation details, which can also play an important role in how the two planning styles will perform against each other in their MIs. Thus, in the following paragraph, we will discuss the impacts of the implementation details of the two planning styles on their learning speed and final performance, and based on this, will provide hypotheses on which planning style will perform better than the other.
In their MIs, both planning styles perform planning by unrolling NN-based parametric models which are known to easily lead to compounding model errors (CME, \citep{talvitie2014model}). Thus, obtaining combined models that are PXMs becomes quite difficult, if not impossible, for both planning styles. Even though this problem can be significantly mitigated for both of them by unrolling the models for only a few time steps and then bootstrapping on the VEs for the rest, B planning can also suffer from updating its VE with the potentially harmful simulated experience generated by its NN-based parametric model (see e.g., \citep{van2019use, jafferjee2020hallucinating}), which can slow down, or even prevent, it from reaching optimal (or close to optimal) performance.\footnote{Note that even if the exact model was known, due to the deadly triad \citep{sutton2018reinforcement}, there would be no guarantee that the MIs of both planning styles will be able to output policies of optimal (or close to optimal) performance.} Note that this is not a problem in DT planning as its VE is only updated with real experience. Thus, based on these observations, we hypothesize that compared to DT planning, it is likely for B planning to suffer more in reaching optimal (or close to optimal) performance in their MIs.
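The contrast above can be made concrete with the following sketch (ours, purely illustrative; the latent-model interface \texttt{model.encode} / \texttt{model.step} and the value network are assumptions, not the API of \citep{zhao2021consciousness}): in the DT case, model errors only distort a locally computed estimate that is discarded after action selection, whereas in the B case they enter the training targets of the cached VE directly:
\begin{verbatim}
import torch

def dt_unroll_value(model, value_net, obs, action_seq, gamma):
    # DT (MI): unroll the parametric model for k = len(action_seq) steps,
    # then bootstrap on the cached value estimate; short unrolls limit CMEs.
    ret, disc, z = 0.0, 1.0, model.encode(obs)
    for a in action_seq:
        z, r = model.step(z, a)           # latent transition and reward
        ret, disc = ret + disc * r, disc * gamma
    return ret + disc * value_net(z).max()

def b_simulated_update(model, value_net, optimizer, obs, actions, gamma):
    # B (MI): one VE update on model-simulated transitions; any error in
    # (z2, r) is baked into the targets, and hence into the cached VE.
    z = model.encode(obs)
    z2, r = model.step(z, actions)
    with torch.no_grad():
        target = r + gamma * value_net(z2).max(dim=-1).values
    q = value_net(z).gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    loss = torch.nn.functional.mse_loss(q, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
\end{verbatim}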
\textbf{TL Setting.} We now consider two common TL settings that are both more challenging than the TL setting in Sec.\ \ref{sec:classic_inst}. In these settings, there is a distribution of TRTs and TSTs, differing only in their observations. In the first one, the agent's transfer ability is measured by how fast it adapts to the TSTs after being trained on the TRTs (see e.g., \citep{van2020loca}), and in the second one, this ability is measured by its instantaneous performance on the TSTs as it gets trained on the TRTs, also known as ``zero-shot transfer'' in the literature (see e.g., \citep{zhao2021consciousness, anand2022procedural}). In these two settings, the implementation details of the two planning styles also play an important role in how they will perform against each other. Thus, in the following paragraph, we will also provide hypotheses based on these details.
In both TL settings, we would again expect B planning to suffer more in reaching optimal (or close to optimal) performance on the TRTs because of the reasons discussed in the P\&L setting. Additionally, in the first setting, after the tasks switch from the TRTs to the TSTs, we would expect B planning to suffer more in the adaptation process, as its parametric model, learned on the TRTs, would keep generating experience that resembles the TRTs until it adapts to the TSTs, which in the meantime can lead to harmful updates to its VE. Also, in the second setting, if the learned parametric model of DT planning is capable of simulating at least a few time steps of the TSTs, and if the learned policies of both planning styles perform similarly on the TSTs, we would expect DT planning to perform better on the TSTs, as at test time, it would be able to improve upon the policy obtained during training by performing online planning. Note that this is not possible for B planning, as it performs planning in an offline fashion and thus requires additional interaction with the TSTs to improve upon the policy obtained during training.
\section{Experiments}
\label{sec:experiments}
\begin{wrapfigure}{R}{0.5\textwidth}
\vspace{-0.4cm}
\centering
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[height=2.25cm]{figures/simple_gridworld.pdf}
\caption{\small SG Environment} \label{fig:simple_gridworld}
\end{subfigure}
\hspace{0.1cm}
\begin{subfigure}{0.275\textwidth}
\centering
\includegraphics[height=2.25cm]{figures/minigrid.pdf}
\caption{\small MiniGrid Environments} \label{fig:minigrid}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small (a) The SG environment and (b) MG environments: (top row) Empty 10x10, FourRooms, SCS9N1, (bottom row) LCS9N1, RDS Train, RDS Test.}
\vspace{-0.2cm}
\label{fig:envs}
\end{wrapfigure}
We now perform illustrative experiments to validate the formal statements and hypotheses presented in Sec.\ \ref{sec:DTvsB_planning}. The experimental details and more detailed results can be found in App.\ \ref{app_sec:exp_details} \& \ref{app_sec:add_results}, respectively.
\textbf{Environments.} We perform experiments on both the Simple Gridworld (SG) environment and on environments that either pre-exist in or are manually built on top of MiniGrid (MG, \citep{gym_minigrid}) (see Fig.\ \ref{fig:envs}), as the optimal policies in these environments are easy to learn and they allow for designing controlled experiments that are helpful in answering the questions of interest to this study. In the SG environment, the agent spawns in state S and has to navigate to the goal state depicted by G. In the MG environments, the agent, depicted in red, has to navigate to the green goal cell, while avoiding the orange lava cells (if there are any). More details on these environments can be found in App.\ \ref{app_sec:exp_details}. Note that while $\mathcal{O}_E=\mathcal{S}_E$ in the SG environment, $\mathcal{O}_E\neq\mathcal{S}_E$ in the MG environments.
\subsection{Experiments with Classical Instantiations}
\label{sec:classic_inst_exp}
In this section, we perform experiments with the CIs of DT and B planning (see Alg.\ \ref{alg:alg_CI_DT} \& \ref{alg:alg_CI_Bi}) on the SG environment to empirically validate our theoretical results and hypotheses in Sec.\ \ref{sec:classic_inst}. In addition to the scenario in which both the VEs and models are represented as tables (where $\phi=\varphi$ are both identity functions (IF), implying $\mathcal{S}_A=\mathcal{S}_{M}=\mathcal{O}_E$), we also consider the one in which only the model is represented as a table (where only $\varphi$ is an IF, implying $\mathcal{S}_{M}=\mathcal{O}_E$). In the latter scenario, we use state aggregation (SA) in the VE representation, i.e., $\phi$ is a state aggregator, which allows for assigning the same value to similar observations. More on the implementation details of these CIs can be found in App.\ \ref{app_sec:details_of_CI_exps}.
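For concreteness, a state aggregator $\phi$ of the kind used here can be as simple as the following sketch (ours; the $10{\times}10$ grid and the $2{\times}2$ cells are illustrative assumptions, not necessarily the aggregation of App.\ \ref{app_sec:details_of_CI_exps}):
\begin{verbatim}
def make_state_aggregator(grid_size=10, cell=2):
    # phi maps an observation (x, y) to the index of the cell containing
    # it, so nearby observations share a single value estimate.
    cells_per_row = grid_size // cell
    def phi(obs):
        x, y = obs
        return (x // cell) * cells_per_row + (y // cell)
    return phi

phi = make_state_aggregator()
assert phi((4, 5)) == phi((5, 4))   # same 2x2 cell, same aggregate state
\end{verbatim}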
\begin{figure}
\centering
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_model_perf.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{fig:plot_pureplan_1}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_model_perf_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{fig:plot_pureplan_2}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_perf_nolearntrans.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{fig:plot_planlearn_1}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_perf_nolearntrans_VFA(SA).pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{fig:plot_planlearn_2}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_perf_nolearntrans(tr).pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{fig:plot_transfer_1}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The performance of the CIs of DT and B planning on the SG environment, in the (a, b) PP, (c, d) P\&L, and (e) TL settings with tabular and SA VE representations. Black \& gray dashed lines indicate the performance of the optimal \& random policies, respectively. Shaded regions are one standard error over 100 runs.}
\vspace{-0.2cm}
\label{fig:classic_algs}
\end{figure}
\textbf{PP Experiments.} According to Prop.\ \ref{prop:prop1} \& \ref{prop:prop2}, although DT planning is guaranteed to perform on par or better than B planning when the provided model is a PCM, it is guaranteed to perform on par or worse when the provided model is a PRM. To empirically verify this, we provided the two planning styles with a series of hand-designed tabular models that interpolate between a PNM and a PXM: the provided models first start with the PNM $m_1$ with goal state G$_1$ (see Fig.\ \ref{fig:simple_gridworld}), and then gradually move towards the PXM $m_{10}$ with goal state G, by first becoming PCMs $\{m_i\}_{i=2}^5$, and then by becoming PRMs $\{m_j\}_{j=6}^9$, with goal states $\{\text{G}_n\}_{n=2}^9$, respectively (see App.\ \ref{app_sec:details_of_CI_exps} for more details on these models). After planning was performed with each of these models, we evaluated the resulting output policies in the environment. Results are shown in Fig.\ \ref{fig:plot_pureplan_1}. We can indeed see that although DT planning performs better when the provided model is a PCM (or a PNM), it performs worse when the provided model is a PRM (or a PXM). To see if similar results would hold with approximators that have GGC, we also performed the same experiment with SA used in the VE representation. Results in Fig.\ \ref{fig:plot_pureplan_2} show that a similar trend holds in this case as well.
\textbf{P\&L Experiments.} Prop.\ \ref{prop:prop3} \& \ref{prop:prop4} imply that, although DT planning is guaranteed to perform on par or better than B planning when the model of B planning is a PNM, it is guaranteed to perform on par or worse when the model of B planning is a PXM. To empirically verify this, we initialized the tabular models of both planning styles as hand-designed PNMs (see App.\ \ref{app_sec:details_of_CI_exps} for the details) and let them be updated through interaction to become PXMs. After every episode, we evaluated the resulting output policies in the environment. Results in Fig.\ \ref{fig:plot_planlearn_1} show that, as expected, although DT planning performs better when B planning's model is a PNM, it performs worse when B planning's model becomes a PXM. Again, to see whether similar results would hold with approximators that have GGC, we also performed experiments with SA used in the VE representation. Results in Fig.\ \ref{fig:plot_planlearn_2} show that a similar trend holds in this case as well.
\textbf{TL Experiments.} In Sec.\ \ref{sec:classic_inst}, we argued that the results of the P\&L setting would hold directly in the considered TL setting. For empirical verification, we performed a similar experiment to the one in the P\&L setting, in which we initialized the tabular models of both planning styles as hand-designed PNMs and let them be updated to become PXMs. However, differently, after 25 episodes, we now followed the TRT with a subsequent TST with goal state G$_1$ (see App.\ \ref{app_sec:details_of_CI_exps} for the details). In Fig.\ \ref{fig:plot_transfer_1}, we can see that, similar to the P\&L setting, before the task changes, DT planning first performs better but then worse than B planning, and the same happens after the task change. Results in Fig.\ \ref{app_fig:classic_algs_tl_perf_1} show that a similar trend also holds when SA is used in the VE representation.
\subsection{Experiments with Modern Instantiations}
\label{sec:modern_inst_exp}
We now perform experiments with the MIs of DT and B planning to empirically validate our hypotheses in Sec.\ \ref{sec:modern_inst}. For the experiments with the SG environment, we consider the former scenario in Sec.\ \ref{sec:classic_inst_exp}, and for the experiments with MG environments, we consider the scenario in which both the VEs and models are represented with NNs, and in which they share the same encoder (where $\phi=\varphi$ are both learnable parametric functions, implying $\mathcal{S}_A=\mathcal{S}_{M}\neq\mathcal{O}_E$). More on the implementation details of these MIs can be found in App.\ \ref{app_sec:details_of_MI_exps}.
\begin{figure}[]
\centering
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/true_learnedmodel_perf_nolearntrans_final.pdf}
\vspace{-0.25cm}
\caption{\small Simple Gridworld} \label{fig:plot_simplegridworld}
\end{subfigure}
\centering
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/Empty10-RandTS.pdf}
\vspace{-0.25cm}
\caption{\small Empty10x10} \label{fig:plot_empty10x10}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/Fourrooms-RandTS.pdf}
\vspace{-0.25cm}
\caption{\small FourRooms} \label{fig:plot_fourrooms}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/SCS9N1-RandTS.pdf}
\vspace{-0.25cm}
\caption{\small SCS9N1} \label{fig:plot_simplecrossing}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/LCS9N1-RandTS.pdf}
\vspace{-0.25cm}
\caption{\small LCS9N1} \label{fig:plot_lavacrossing}
\end{subfigure}
\par\bigskip \vspace{-0.5em}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/RDS-TrSQ-PR-RandTS.pdf}
\vspace{-0.25cm}
\caption{\small RDS Sequential} \label{fig:plot_trans_seq}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_train.pdf}
\vspace{-0.25cm}
\caption{\small RDS Train (0.35)} \label{fig:plot_trans_odist_train}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_025.pdf}
\vspace{-0.25cm}
\caption{\small RDS Test (0.25)} \label{fig:plot_trans_odist_025}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_035.pdf}
\vspace{-0.25cm}
\caption{\small RDS Test (0.35)} \label{fig:plot_trans_odist_035}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_045.pdf}
\vspace{-0.25cm}
\caption{\small RDS Test (0.45)} \label{fig:plot_trans_odist_045}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The performance of the MIs of DT and B planning in the (a-e) P\&L and (f-j) TL settings with (a) tabular and (b-j) NN VE representations. The black dashed lines indicate the performance of the optimal policy in the corresponding environment. The green and magenta dashed lines in (a) indicate the points after which DT and B planning's models become, and remain as, PXMs, respectively. Shaded regions are one standard error over (a) 100 and (b-j) 50 runs.}
\vspace{-0.2cm}
\label{fig:modern_algs}
\end{figure}
\textbf{P\&L Experiments.} In Sec.\ \ref{sec:modern_inst}, we argued that if one were to use tabular VEs and models in the MIs of the two planning styles, the results of the P\&L section of Sec.\ \ref{sec:classic_inst} would hold exactly. Additionally, we also argued that the performance gap between the two planning styles would reduce in their MIs and even gradually close if the combined models of both planning styles become, and remain as, PXMs. In order to test these, we implemented simplified tabular versions of the MIs of the two planning styles (see Alg.\ \ref{alg:alg_MI_DT_tab} \& \ref{alg:alg_MI_B_tab}) and compared them on the SG environment. Results are shown in Fig.\ \ref{fig:plot_simplegridworld}. As expected, the results of the P\&L section of Sec.\ \ref{sec:classic_inst} hold exactly, and the performance gap between the two planning styles indeed reduces and gradually closes after the models become, and remain as, PXMs.
We then argued that although the usage of NNs in the representation of the VEs and models of the two planning styles would not break the performance trend, their implementation details can also play an important role in how they would compare against each other. We hypothesized that although both planning styles would suffer from CMEs, B planning would additionally suffer from updating its VE using potentially harmful simulated experience, and thus it is likely to suffer more in reaching optimal (or close to optimal) performance. To test these hypotheses, we compared the MIs of the two planning styles (see Alg.\ \ref{alg:alg_MI_DT} \& \ref{alg:alg_MI_B}) on several MG environments. In order to test the effect of CMEs in DT planning, we performed experiments in which the parametric model is unrolled for 1, 5, and 15 time steps during search, denoted as DT(1), DT(5) and DT(15). Note that CMEs are not a problem for B planning, as its parametric model is only unrolled for a single time step. Also, in order to test the effect of updating the VE with simulated experience in B planning, we performed experiments in which its VE is updated with only real, both real and simulated, and only simulated experience, denoted as B(R), B(R+S) and B(S). See App.\ \ref{app_sec:details_of_MI_exps} for the details of these experiments. The results, in Fig.\ \ref{fig:plot_empty10x10}-\ref{fig:plot_lavacrossing}, show that even though the performance of DT planning degrades slightly with the increase in CMEs, this can indeed be easily mitigated by decreasing the search budget. However, as can be seen, even without any CMEs, just the usage of simulated experience alone can indeed slow down, or even prevent (as in B(S)), B planning from reaching optimal (or close to optimal) performance, especially in environments that are harder to model such as SCS9N1 \& LCS9N1.
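In implementation terms, the B planning ablations above only change where the VE's update batches come from; a minimal sketch (ours; the even real/simulated split in B(R+S) is an illustrative assumption, see App.\ \ref{app_sec:details_of_MI_exps} for the actual details):
\begin{verbatim}
import random

def sample_update_batch(real_buf, sim_buf, variant, batch_size, rng=random):
    # B(R): real experience only; B(S): simulated only; B(R+S): a mixture.
    if variant == "B(R)":
        return rng.sample(real_buf, batch_size)
    if variant == "B(S)":
        return rng.sample(sim_buf, batch_size)
    half = batch_size // 2
    return rng.sample(real_buf, half) + rng.sample(sim_buf, batch_size - half)
\end{verbatim}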
\textbf{TL Experiments.} In Sec.\ \ref{sec:modern_inst}, we first argued that B planning would again suffer more in reaching optimal (or close to optimal) performance on the TRTs in both TL settings. Then, we hypothesized that B planning would also suffer more in adapting to the subsequent TSTs in the first TL setting. Finally, we hypothesized that, under certain assumptions, DT planning would perform better than B planning on the TSTs in the second TL setting. In order to test the first two of these hypotheses, we compared the MIs of the two planning styles on a sequential version of the RDS environment \citep{zhao2021consciousness} (see App.\ \ref{app_sec:details_of_MI_exps} for the details). Results in Fig.\ \ref{fig:plot_trans_seq} show that, similar to the P\&L setting, the increasing usage of simulated experience can indeed slow down, or even prevent (as in B(S)), B planning from reaching optimal (or close to optimal) performance on the TRTs, and that B planning indeed suffers more in adapting to the TSTs. And, in order to test the first and third hypotheses, we compared the two planning styles on the exact RDS environment. Results are shown in Fig.\ \ref{fig:plot_trans_odist_train}-\ref{fig:plot_trans_odist_045}. As can be seen, the increasing usage of simulated experience can again slow down, or even prevent (as in B(S)), B planning from reaching optimal (or close to optimal) performance on the TRTs. We can also see that DT planning indeed achieves significantly better performance than B planning across all TSTs with varying difficulties.
\section{Related Work}
\label{sec:related_work}
The abstract view that we provide in this study is mostly related to the recent monograph of \citet{bertsekas2021rollout} in which the recent successes of AlphaZero \citep{silver2018general}, a DT planning algorithm, are viewed through the lens of DP. However, we take a broader perspective and provide a unified view that encompasses both DT and B planning algorithms. Also, instead of assuming the availability of an exact model, we consider the scenario in which a model has to be learned by pure interaction with the environment. Another closely related study is that of \citet{hamrick2021role}, which informally relates MuZero \citep{schrittwieser2020mastering}, another DT planning algorithm, to various other DT and B planning algorithms in the literature. Our study can be viewed as one that formalizes the relation between the two planning styles. There have also been studies that empirically compare the performances of various DT and B planning algorithms on continuous control domains in the P\&L setting \citep{wang2019benchmarking}, and on MG environments in specific TL settings \citep{zhao2021consciousness}. However, none of these studies provide a general understanding of when one planning style will perform better than the other.
Finally, our work also has connections to the studies of \citet{jiang2015dependence} and \citet{arumugam2018mitigating}, which provide upper bounds for the performance difference between policies generated as a result of planning with an exact model and an estimated model. However, rather than providing upper bounds, in this study, we are instead interested in understanding which classes of models will allow for one planning style to perform better than the other. Lastly, another related line of research comprises the recent studies of \citet{grimm2020value, grimm2021proper}, which classify models according to how relevant they are for value-based planning. Although we share the same overall idea that models should only be judged for how useful they are in the planning process, our work differs in that we classify the models according to how useful they are in the comparison of the two planning styles.
\section{Conclusion and Discussion}
\label{sec:conclusion}
To summarize, we first viewed the CIs and MIs of DT and B planning in a unified way through the lens of DP, and then provided theoretical results and hypotheses on which one will perform better than the other across a variety of settings. Then, we empirically validated these results through illustrative experiments. Overall, our findings suggest that even though DT planning does not perform as well as B planning in their CIs, due to the reasons detailed in this study, its MIs can perform on par or better than their B planning counterparts in both the P\&L and TL settings. In this study, our main goal was to \emph{understand} under what conditions and in which settings one planning style will perform better than the other in fast-response-requiring domains, and not to provide any practical insights at the moment. However, we believe that both the proposed unifying framework and our theoretical and empirical results can be helpful to the community in improving the two planning styles in potentially interesting ways. Also note that we were only interested in comparing the two planning styles in terms of the expected discounted return of their output policies. Though not the main focus of this study, other interesting directions include comparisons in terms of sample efficiency and real-time performance. Lastly, due to their easy-to-learn optimal policies and their suitability in designing controlled experiments, we have mainly performed our MI experiments on MG environments. However, experiments in other environments can be helpful in further validating the hypotheses of this study. We hope to tackle this in future work.
\bibliographystyle{plainnat}
\section{Introduction}
\label{sec:intro}
It has long been argued that, in order for reinforcement learning (RL) agents to adapt to a variety of changing tasks, they should be able to learn a model of their environment, which allows for counterfactual reasoning and fast re-planning \citep{russell2002artificial}. Although this is a widely-accepted view in the RL community, the question of \emph{how} to leverage a learned model to perform planning in the first place does not have a widely-accepted and clear answer. In model-based RL, the two prevalent planning styles are decision-time (DT) and background (B) planning \citep{sutton2018reinforcement}, where the agent mainly plans in the moment and in parallel to its interaction with the environment, respectively. Even though these two planning styles have been developed with different assumptions and application domains in mind, i.e., DT planning algorithms \citep{tesauro1994td, tesauro1996line, silver2017mastering, silver2018general} under the assumption that the exact model of the environment is known and for domains that allow for certain computational budgets, such as board games, and B planning algorithms \citep{sutton1990integrated, sutton1991dyna, kaiser2020model, hafner2021mastering} under the assumption that the exact model is unknown and for domains that usually require fast responses, such as basic gridworlds and video games, recently, with the introduction of the ability to learn a model through pure interaction \citep{schrittwieser2020mastering}, DT planning algorithms have been applied to the same domains as their B planning counterparts (see e.g., \citep{schrittwieser2020mastering, hamrick2021role}). However, it still remains unclear under \emph{what} conditions and in \emph{which} settings one of these planning styles will perform better than the other in these fast-response-requiring domains.
To clarify this, we first start by abstracting away from the specific implementation details of the two planning styles and view them in a unified way through the lens of dynamic programming (DP). Then, we consider the classical instantiations (CI) of these planning styles and based on their DP interpretations, provide theoretical results and hypothesis on which one will perform better in the pure planning (PP), planning \& learning (P\&L), and transfer learning (TL) settings. We then consider the modern instantiations (MI) of these two planning styles and based on both their DP interpretations and implementation details, provide hypotheses on which one will perform better in the P\&L and TL settings. Lastly, we perform illustrative experiments with both instantiations of these planning styles to empirically validate our theoretical results and hypotheses. Overall, our results suggest that even though DT planning does not perform as well as B planning in their CIs, due to (i) the improvements in the way planning is performed, (ii) the usage of only real experience in the updates of the value estimates, and (iii) the ability to improve upon the previously obtained policy at test time, the MIs of it can perform on par or better than their B planning counterparts in both the P\&L and TL settings. We hope that our findings will help the RL community in developing a better understanding of the two planning styles and stimulate research in improving of them in potentially interesting ways.
\section{Background}
\label{sec:background}
In value-based RL \citep{sutton2018reinforcement}, an agent $A$ interacts with its environment $E$ through a sequence of actions to maximize its long-term cumulative reward. Here, the environment is usually modeled as a Markov decision process $E=(\mathcal{S}_E, \mathcal{A}_E, p_E, r_E, d_E, \gamma)$, where $\mathcal{S}_E$ and $\mathcal{A}_E$ are the (finite) set of states and actions, $p_E:\mathcal{S}_E\times \mathcal{A}_E\times \mathcal{S}_E\to [0, 1]$ is the transition distribution, $r_E:\mathcal{S}_E\times \mathcal{A}_E\times \mathcal{S}_E\to \amsmathbb{R}$ is the reward function, $d_E:\mathcal{S}_E\to [0,1]$ is the initial state distribution, and $\gamma\in [0,1)$ is the discount factor. At each time step $t$, after taking an action $a_t\in\mathcal{A}_E$, the environment's state transitions from $s_t\in\mathcal{S}_E$ to $s_{t+1}\in\mathcal{S}_E$, and the agent receives an observation $o_{t+1}\in\mathcal{O}_E$ and an immediate reward $r_t$. We assume that the observations are generated by a deterministic procedure $\psi:\mathcal{S}_E\to \mathcal{O}_E$, unknown to the agent. As the agent usually does not have access to the states in $\mathcal{S}_E$ a priori, and as the observations in $\mathcal{O}_E$ are usually very high-dimensional, it has to operate on its own state space $\mathcal{S}_A$, which is generated by its own adaptable (or sometimes a priori fixed) value encoder $\phi:\mathcal{O}_E\to \mathcal{S}_A$. The goal of the agent is to jointly learn a value encoder $\phi$ and a value estimator (VE) $Q:\mathcal{S}_A\times\mathcal{A}_E\to \amsmathbb{R}$ that induces a policy $\pi:\mathcal{S}_A\times\mathcal{A}_E\to [0,1] \in \mathbb{\Pi}$, where $\mathbb{\Pi} \equiv \{\pi | \pi:\mathcal{S}_A\times\mathcal{A}_E\to [0,1] \}$, maximizing $E_{\pi, p_E} [\sum_{t=0}^\infty \gamma^t r_E(S_t, A_t, S_{t+1}) | S_0\sim d_E]$. For convenience, we will refer to the composition of $\phi$ and $Q$ as simply the VE.
\textbf{Model-Free \& Model-Based RL.} The two main ways of achieving this goal are through the use of model-free RL (MFRL) and model-based RL (MBRL) methods. In MFRL, there is just a learning phase and the gathered experience is mainly used in improving the VE. In MBRL, there are two alternating phases: the learning and planning phases.\footnote{Note that even though some MBRL algorithms, such as \citep{tesauro1996line, silver2017mastering, silver2018general}, do not employ a model learning phase and make use of an a priori given exact model, in this study, we will only study versions of them in which the model has to be learned from pure interaction.} In the learning phase, in contrast to MFRL, the gathered experience is mainly used in jointly learning an adaptable (or sometimes a priori fixed) model encoder $\varphi: \mathcal{O}_E\to \mathcal{S}_{M}$ and a model $m\equiv (p_M, r_M, d_M)\in \mathcal{M}$, where $\mathcal{M} \equiv \{(p_M,r_M,d_M) | p_M:\mathcal{S}_{M}\times\mathcal{A}_E\times\mathcal{S}_{M}\to [0,1], r_M:\mathcal{S}_{M}\times\mathcal{A}_E\times\mathcal{S}_{M}\to \amsmathbb{R}, d_M:\mathcal{S}_{M}\to [0,1] \}$ and $\mathcal{S}_{M}$ is the state space of the agent's model, and optionally, as in MFRL, the experience may also be used in improving the VE.\footnote{Note that the learned model can be in a parametric or non-parametric form (see \citep{van2019use}).} Again, for convenience, we will refer to the composition of $\varphi$ and $m$ as simply the model. In the planning phase, the learned model $m$ is then used for simulating experience, either to be used alongside real experience in improving the VE, or just to be used in selecting actions at decision time. Note that in general $\varphi\neq\phi$ and thus $\mathcal{S}_M\neq \mathcal{S}_A$. However, in these cases, we assume that the agent has access to a deterministic function $\rho: \mathcal{S}_M\to \mathcal{S}_A$ that allows for going from $\mathcal{S}_M$ to $\mathcal{S}_A$. We also assume that $\mathcal{S}_E\subseteq\mathcal{S}_M$, which implies that the agent's model is, in principle, capable of exactly modeling the environment, though this may be very hard in practice.
\textbf{Planning Styles in MBRL.} In MBRL, planning is performed in two different styles: (i) DT planning, and (ii) B planning (see Ch.\ 8 of \citep{sutton2018reinforcement}).\footnote{Although some new planning styles have been proposed in the transfer learning literature (see e.g., \citep{barreto2017successor, barreto2019option, barreto2020fast, alver2022constructing}), they can also be viewed as performing some form of DT planning with pre-learned models.} DT planning, also known as ``planning in the now'', is performed as a computation whose output is the selection of a single action for the current state. This is often done by unrolling the model forward from the current state to compute local value estimates, which are then usually discarded after action selection. Here, planning is performed independently for \emph{every} encountered state and it is mainly performed in an \emph{online} fashion, though it may also contain offline components. In contrast, B planning is performed by continually improving a cached VE, on the basis of simulated experience from the model, often in a global manner. Action selection is then quickly done by querying the VE at the current state. Unlike DT planning, B planning is often performed in a purely \emph{offline} fashion, in parallel to the agent-environment interaction, and thus is \emph{not} necessarily focused on the current state: well before action selection for any state, planning plays its part in improving the value estimates in many other states. For convenience, in this study, we will refer to all MBRL algorithms that have an online planning component as DT planning algorithms (see e.g., \citep{tesauro1994td, tesauro1996line, silver2017mastering, silver2018general, schrittwieser2020mastering, zhao2021consciousness}), and will refer to the rest as B planning algorithms (see e.g., \citep{sutton1990integrated, sutton1991dyna, kaiser2020model, hafner2021mastering, zhao2021consciousness}). Note that, regardless of the style, any type of planning can be viewed as a procedure $f: (\mathcal{M}, \mathbb{\Pi})\to \mathbb{\Pi}$ that takes a model $m$ and a policy $\pi^i$ as input and returns an improved policy $\pi_{m}^o$, according to $m$, as output.
\textbf{Algorithms within the Two Planning Styles.} Starting with DT planning, depending on how much search is performed, DT planning algorithms can be studied under three main categories: DT planning algorithms (i) that perform no search (see e.g., \citep{tesauro1996line} and Alg.\ \ref{alg:alg_CI_DT}), (ii) that perform pure search (see e.g., \citep{campbell2002deep} and Alg.\ \ref{alg:alg_CI_DT_exs}), and (iii) that perform some amount of search (see e.g., \citep{silver2017mastering, silver2018general, schrittwieser2020mastering} and Alg.\ \ref{alg:alg_MI_DT}). In the first two, planning is performed by just running pure rollouts with a fixed or improving policy (see Fig.\ \ref{app_fig:pure_rollout}), and by purely performing search (see Fig.\ \ref{app_fig:exhaustive_search}), respectively. In the last one, planning is performed by first performing some amount of search and then either by running rollouts with a fixed or improving policy, by bootstrapping on the cached value estimates of a fixed or improving policy, or by doing both (see Fig.\ \ref{app_fig:search_and_rollout} \& \ref{app_fig:search_and_bootstrap}). Note that while the CIs of DT planning fall within the first two categories, the MIs of it usually fall within the last one. Also note that, while planning is performed with only a single parametric model in the first two categories, it is usually performed with both a parametric and non-parametric (usually a replay buffer, see \citep{van2019use}) model in the last one (see e.g., \citep{schrittwieser2020mastering} and Alg.\ \ref{alg:alg_MI_DT}). See \citep{bertsekas2021rollout} for more details on the different categories of DT planning. Moving on to B planning, as all B planning algorithms (see e.g., Alg.\ \ref{alg:alg_CI_B}, \ref{alg:alg_CI_Bi}, \ref{alg:alg_MI_B}) perform planning by periodically improving a cached VE throughout the model learning process, we do not study them under different categories. However, we again note that, while some B planning algorithms perform planning with a single parametric model (see e.g., Alg.\ \ref{alg:alg_CI_B} \& \ref{alg:alg_CI_Bi}), some perform planning with both a parametric and non-parametric (again usually a replay buffer) model (see e.g., Alg.\ \ref{alg:alg_MI_B}).
\section{A Unified View of the Two Planning Styles}
\label{sec:unified_view}
In this section, we abstract away from the specific implementation details, such as whether policy improvement is done locally or globally, or whether planning is performed in an online or offline manner, and view the two planning styles in a unified way through the lens of DP \citep{bertsekas1996neuro}. More specifically, we view DT planning through the lens of policy iteration (PI) as DT planning algorithms can be considered as performing some amount of asynchronous PI that is focused on the current state (see \citep{bertsekas2021rollout}), and we view B planning through the lens of value iteration (VI) as B planning algorithms can be considered as performing some amount of asynchronous VI that is focused on the sampled states (see \citep{sutton2018reinforcement}).
In this framework, DT planning algorithms that perform no search can be considered as performing one-step PI at every encountered state, on top of a fixed or improving policy, as they compute $\pi_m^o$ by first running many rollouts with a fixed or improving $\pi^i$ in $m$ to evaluate the current state,
and then by selecting the most promising action (which can be considered as first performing policy evaluation and then policy improvement). Similarly, DT planning algorithms that perform pure search can be considered as performing PI until convergence (which we call full PI) at every encountered state as they disregard $\pi^i$ and compute $\pi_{m}^o$ by first performing exhaustive search in $m$ to obtain the optimal values at the current state, and then by selecting the most promising action. Finally, DT planning algorithms that perform some amount of search can be considered as performing something between one-step and full PI at every encountered state, on top of a fixed or improving policy, as they are just a mixture of DT planning algorithms that perform no search and pure search. Hence, depending on how much search is performed, DT planning algorithms in general can be viewed as going between the spectrum of performing one-step PI and full PI at every encountered state, on top of a fixed or improving policy.
Similarly, under the assumption that all states are sampled at least once during planning, all B planning algorithms can also be considered as performing either one-step VI, VI until convergence (which we call full VI), or something in between, on top of a fixed or improving policy, as they compute $\pi_m^o$ by periodically improving a fixed or improving $\pi^i$ on the basis of simulated experience from $m$, either for a single time step, until convergence, or somewhere in between. Thus, depending on how much planning is performed, B planning algorithms in general can also be viewed as interpolating between performing one-step VI and full VI, on top of a fixed or improving policy.
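To make the two endpoints of this spectrum concrete, the following minimal tabular sketch in Python (our illustration; the function and variable names are assumptions, not code from any of the algorithms studied here) contrasts one-step PI on top of a base policy with full VI on the same model:
\begin{verbatim}
import numpy as np

def q_from_v(P, r, v, gamma):
    # Q(s, a) = sum_s' P(s'|s, a) * (r(s, a, s') + gamma * v(s'))
    return np.einsum('sat,sat->sa', P, r + gamma * v)

def one_step_pi(P, r, pi_b, gamma=0.99):
    """One-step PI: evaluate the base policy pi_b in the model, improve once."""
    S = P.shape[0]
    P_pi = np.einsum('sa,sat->st', pi_b, P)                # dynamics under pi_b
    r_pi = np.einsum('sa,sat,sat->s', pi_b, P, r)          # expected reward
    v_b = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)  # policy evaluation
    return q_from_v(P, r, v_b, gamma).argmax(axis=1)       # policy improvement

def full_vi(P, r, gamma=0.99, tol=1e-8):
    """Full VI: iterate the Bellman optimality operator to convergence."""
    v = np.zeros(P.shape[0])
    while True:
        v_new = q_from_v(P, r, v, gamma).max(axis=1)
        if np.abs(v_new - v).max() < tol:
            return q_from_v(P, r, v_new, gamma).argmax(axis=1)
        v = v_new
\end{verbatim}
Everything in between these two endpoints can be obtained by stopping the corresponding iteration early.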
Finally, note that, in Sec.\ \ref{sec:background}, we have pointed out that some DT and B planning algorithms perform planning with both a parametric and non-parametric model, which can make it hard for them to be viewed through our proposed framework. However, if one considers the two separate models as a single combined model, then these algorithms can also be viewed straightforwardly in our proposed framework: DT and B planning algorithms that perform planning with two separate models can still be viewed as going between the spectrum of performing one-step PI / VI and full PI / VI, however now, they would just be performing planning with a combined model (see App.\ \ref{app_sec:combined_view_discussion} for a broader discussion on the combined model view).
\section{Decision-Time vs.\ Background Planning}
\label{sec:DTvsB_planning}
In this study, we are interested in understanding under what conditions and in which settings one planning style will perform better than the other. Thus, we start by defining a performance measure that will be used in comparing the two planning styles of interest. Given an arbitrary model $m=(p,r,d)\in\mathcal{M}$, let us define the performance of an arbitrary policy $\pi\in\mathbb{\Pi}$ in it as follows:
\begin{equation}
J_{m}^{\pi} \equiv E_{\pi, p} [{\textstyle \sum}_{t=0}^{\infty} \gamma^t r(S_t, A_t, S_{t+1}) | S_0\sim d ].
\label{eqn:perf_measure}
\end{equation}
Note that $J_{m}^{\pi}$ corresponds to the expected \emph{discounted} return of a policy $\pi$ in model $m$. Next, we consider the conditions under which the comparisons will be made: we are interested in both simple scenarios in which the VEs and models are both represented as a table, and in complex ones in which at least the VEs or models are represented using function approximation (FA).
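For concreteness, the following minimal Python sketch (ours; the array layout is an illustrative assumption) computes $J_{m}^{\pi}$ in (\ref{eqn:perf_measure}) exactly for a tabular model $m=(p,r,d)$ by solving the policy-evaluation linear system $v_{\pi} = r_{\pi} + \gamma P_{\pi} v_{\pi}$:
\begin{verbatim}
import numpy as np

def policy_performance(P, r, d, pi, gamma=0.99):
    """Exact J_m^pi for a tabular model m = (P, r, d).

    P : (S, A, S) transition probabilities p(s'|s, a)
    r : (S, A, S) rewards r(s, a, s')
    d : (S,) initial-state distribution
    pi: (S, A) policy probabilities
    """
    S = P.shape[0]
    # State-to-state dynamics and expected one-step reward under pi.
    P_pi = np.einsum('sa,sat->st', pi, P)
    r_pi = np.einsum('sa,sat,sat->s', pi, P, r)
    # Solve v = r_pi + gamma * P_pi v.
    v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return d @ v  # J_m^pi = E[v(S_0)], S_0 ~ d
\end{verbatim}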
\begin{wrapfigure}{R}{0.515\textwidth}
\vspace{-0.5cm}
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[width=3.5cm]{figures/models1.pdf}
\caption{\small General partitioning} \label{fig:model_space_1}
\end{subfigure}
\hspace{0.1cm}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[width=3.5cm]{figures/models2.pdf}
\caption{\small Partitioning of interest} \label{fig:model_space_2}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small (a) The general partitioning and (b) the partitioning of interest of $\mathcal{M}$, for a given $\Pi$ and $J$. The gray and blue regions indicate $\mathcal{M}_{\Pi, J}^{\textnormal{PCM}}\cap \mathcal{M}_{\Pi, J}^{\textnormal{PRM}}$ and $\mathcal{M}\setminus (\mathcal{M}_{\Pi, J}^{\textnormal{PCM}}\cup \mathcal{M}_{\Pi, J}^{\textnormal{PRM}})$, respectively.}
\vspace{-0.2cm}
\label{fig:model_space}
\end{wrapfigure}
\textbf{Partitioning of the Model Space.} We now present a way to partition the space of agent models $\mathcal{M}$ such that one planning style is guaranteed to perform on par or better than the other. Let us start by defining $m^*$ to be the exact model of the environment. Note that $m^*\in\mathcal{M}$, as we assumed that $\mathcal{S}_E\subseteq\mathcal{S}_M$ (see Sec.\ \ref{sec:background}). Then, given a policy set $\Pi\subseteq \mathbb{\Pi}$, containing at least two policies, and a performance measure $J$, defined as in (\ref{eqn:perf_measure}), depending on the relative performances of the policies in it and in $m^*$, a model $m\in\mathcal{M}$ can belong to one of the following main classes:
\begin{definition}[PCM]
\label{def:pcm}
Given $\Pi\subseteq\mathbb{\Pi}$ and $J$, let $\mathcal{M}_{\Pi, J}^{\textnormal{PCM}} \equiv \{ m\in\mathcal{M} \ | \ \smash{J_{m^*}^{\pi^i} \lesseqgtr J_{m^*}^{\pi^j}} \ \forall\pi^i,\pi^j\in\Pi \text{ satisfying } \smash{J_{m}^{\pi^i} \gtreqless J_{m}^{\pi^j}} \}$. We say that each $m\in\mathcal{M}_{\Pi,J}^{\textnormal{PCM}}$ is a \emph{performance-contrasting model (PCM)} of $m^*$ w.r.t.\ $\Pi$ and $J$.
\end{definition}
\begin{definition}[PRM]
\label{def:prm}
Given $\Pi\subseteq\mathbb{\Pi}$ and $J$, let $\mathcal{M}_{\Pi,J}^{\textnormal{PRM}} \equiv \{ m\in\mathcal{M} \ | \ \smash{J_{m^*}^{\pi^i} \gtreqless J_{m^*}^{\pi^j}} \ \forall\pi^i,\pi^j\in\Pi \text{ satisfying } \smash{J_{m}^{\pi^i} \gtreqless J_{m}^{\pi^j}}\}$. We say that each $m\in\mathcal{M}_{\Pi,J}^{\textnormal{PRM}}$ is a \emph{performance-resembling model (PRM)} of $m^*$ w.r.t.\ $\Pi$ and $J$.
\end{definition}
\vspace{-0.25cm}
Informally, given any two policies in $\Pi$, and $J$, a model $m$ is a PCM of $m^*$ iff the policy that performs on par or better in it performs on par or worse in $m^*$, and it is a PRM of $m^*$ iff the policy that performs on par or better in it also performs on par or better in $m^*$. Note that $m$ can be both a PCM and a PRM of $m^*$ iff the two policies perform on par in both $m$ and $m^*$. If $\Pi$ contains at least one of the optimal policies for $m$, then $m$ can also belong to one of the following specialized classes:
\begin{definition}[PNM]
\label{def:pnm}
Given $\Pi\subseteq\mathbb{\Pi}$ and $J$, let $\mathcal{M}_{\Pi,J}^{\textnormal{PNM}} \equiv \{ m\in\mathcal{M}_{\Pi,J}^{\textnormal{PCM}} \ | \ \smash{J_{m^*}^{\pi_m^{*}} = \min_{\pi\in\mathbb{\Pi}} J_{m^*}^{\pi}} \ \forall\pi_m^{*}\in\Pi\}$, where $\pi_m^*$ denotes the optimal policies in $m$. We say that each $m\in\mathcal{M}_{\Pi,J}^{\textnormal{PNM}}$ is a \emph{performance-minimizing model (PNM)} of $m^*$ w.r.t.\ $\Pi$ and $J$.
\end{definition}
\begin{definition}[PXM]
\label{def:pxm}
Given $\Pi\subseteq\mathbb{\Pi}$ and $J$, let $\mathcal{M}_{\Pi,J}^{\textnormal{PXM}} \equiv \{ m\in\mathcal{M}_{\Pi,J}^{\textnormal{PRM}} \ | \ \smash{J_{m^*}^{\pi_m^{*}} = \max_{\pi\in\mathbb{\Pi}} J_{m^*}^{\pi}} \ \forall\pi_m^{*}\in\Pi\}$, where $\pi_m^*$ denotes the optimal policies in $m$. We say that each $m\in\mathcal{M}_{\Pi,J}^{\textnormal{PXM}}$ is a \emph{performance-maximizing model (PXM)} of $m^*$ w.r.t.\ $\Pi$ and $J$.
\end{definition}
\vspace{-0.25cm}
Informally, given a subset of $\Pi$ that contains the optimal policies for a model $m$, and $J$, $m$ is a PNM of $m^*$ iff all of the optimal policies result in the worst possible performance in $m^*$, and it is a PXM of $m^*$ iff all of them result in the best possible performance in $m^*$. Note that all definitions above are agnostic to how the models are represented.
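As a sanity check on these definitions, the following Python sketch (ours; it assumes the performances of each policy in $\Pi$, in $m$ and in $m^*$, have already been computed via (\ref{eqn:perf_measure})) tests whether a model is a PCM or a PRM of $m^*$:
\begin{verbatim}
from itertools import permutations

def is_pcm(J_star, J_m):
    """PCM check: any policy at least as good in m is at most as good
    in m*. J_star, J_m map each policy in Pi to its performance."""
    return all(J_star[i] <= J_star[j]
               for i, j in permutations(J_m, 2) if J_m[i] >= J_m[j])

def is_prm(J_star, J_m):
    """PRM check: performance orderings in m are preserved in m*."""
    return all(J_star[i] >= J_star[j]
               for i, j in permutations(J_m, 2) if J_m[i] >= J_m[j])

# A model in which the better-performing policy is worse in m* is a PCM,
# and one that preserves the ordering of m* is a PRM:
assert is_pcm({'pi1': 1.0, 'pi2': 0.0}, {'pi1': 0.0, 'pi2': 1.0})
assert is_prm({'pi1': 1.0, 'pi2': 0.0}, {'pi1': 1.0, 'pi2': 0.0})
\end{verbatim}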
Fig.\ \ref{fig:model_space_1} illustrates how $\mathcal{M}$ is partitioned for a given $\Pi$ and $J$. Note that for a fixed $J$, the relative sizes of the model classes solely depend on $\Pi$. For instance, as $\Pi$ gets larger, the relative sizes of $\mathcal{M}_{\Pi,J}^{\textnormal{PCM}}$ and $\mathcal{M}_{\Pi,J}^{\textnormal{PRM}}$ shrink, because with every policy that is added to $\Pi$, the number of criteria that a model must satisfy to be a PCM or PRM increases, which reduces the odds of an arbitrary model in $\mathcal{M}$ being in $\mathcal{M}_{\Pi,J}^{\textnormal{PCM}}$ or $\mathcal{M}_{\Pi,J}^{\textnormal{PRM}}$. And, as $\Pi$ gets smaller, the relative sizes of $\mathcal{M}_{\Pi,J}^{\textnormal{PCM}}$ and $\mathcal{M}_{\Pi,J}^{\textnormal{PRM}}$ grow, and eventually fill up the entire space, when $\Pi$ contains only two policies. Fig.\ \ref{fig:model_space_2} illustrates the partitioning in this scenario. Since we are only interested in comparing the policies of two planning styles, the $\Pi$ of interest has a size of two, and thus we have a partitioning as in Fig.\ \ref{fig:model_space_2}.
\subsection{Classical Instantiations of the Two Planning Styles}
\label{sec:classic_inst}
We are now ready to discuss when one planning style will perform better than the other. For easy analysis, we start by considering the CIs of the two planning styles in which both the VEs and models are represented as a table. More specifically, for DT planning we study a version of the OMCP algorithm of \citet{tesauro1996line} in which a parametric model is learned from experience (see Alg.\ \ref{alg:alg_CI_DT})\footnote{Note that for the CIs of DT planning, we choose to study an algorithm that performs no search, and not one that performs pure search (see Sec.\ \ref{sec:background}), as the latter ones are not applicable to fast-response-requiring domains.}, and for B planning we study a simplified version of the Dyna-Q algorithm of \citet{sutton1990integrated, sutton1991dyna} in which planning is performed until convergence with every model in the model learning process (see Alg.\ \ref{alg:alg_CI_Bi}). See App.\ \ref{app_sec:CIs_discussion} for a discussion on why we consider these versions. Note that these two algorithms can be considered as performing one-step PI and full VI on top of a fixed policy, respectively (see Sec.\ \ref{sec:unified_view}). Also note that, although we only consider these specific instantiations, as long as the VEs of both planning styles are represented as a table and DT planning corresponds to taking a smaller or on par policy improvement step than B planning, which is the case in most CIs, the results that we derive in this section would hold regardless of the choice of instantiation. Before considering different settings, let us define the following policies that will be useful in referring to the input and output policies of the two planning styles:
\begin{definition}[Base, Rollout \citep{bertsekas2021rollout} \& CE \citep{jiang2015dependence} Policies] \label{def:base_rollout_certeq_pol}
The \emph{base policy} $\pi^b\in\mathbb{\Pi}$ is the policy used in initiating PI or VI. Given a fixed or improving base policy $\pi^b$ and a model $m\in\mathcal{M}$, the \emph{rollout policy} $\pi_{m}^r\in\mathbb{\Pi}$ is the policy obtained after performing one-step of PI on top of $\pi^b$ in $m$, and the \emph{certainty-equivalence (CE) policy} $\pi_{m}^{ce}\in\mathbb{\Pi}$ is the policy obtained after performing full PI or full VI on top of $\pi^b$ in $m$.
\end{definition}
\vspace{-0.25cm}
In the rest of this section, we will refer to the policies generated by the CIs of DT and B planning with model $m$ on top of a fixed base policy $\pi^b$ as $\pi_m^r$ and $\pi_m^{ce}$, respectively.
\textbf{PP Setting.} We start by considering the PP setting in which the agent is directly provided with a model. In this setting, we can prove the following statements:
\begin{proposition}
\label{prop:prop1}
Let $m\in\mathcal{M}$ be a PCM of $m^*$ w.r.t.\ $\Pi=\{\pi_m^r, \pi_m^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_m^r} \geq J_{m^*}^{\pi_m^{ce}}$.
\end{proposition}
\begin{proposition}
\label{prop:prop2}
Let $m\in\mathcal{M}$ be a PRM of $m^*$ w.r.t.\ $\Pi=\{\pi_m^r, \pi_m^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_m^{ce}} \geq J_{m^*}^{\pi_m^{r}}$.
\end{proposition}
\vspace{-0.25cm}
Due to space constraints, we defer all the proofs to App.\ \ref{app_sec:proofs}. Prop.\ \ref{prop:prop1} \& \ref{prop:prop2} imply that, given $\Pi=\{\pi_m^r, \pi_m^{ce} \}$ and $J$, although DT planning will perform on par or better than B planning when the provided model $m$ is a PCM, it will perform on par or worse when $m$ is a PRM. Note that these results would not be guaranteed to hold if FA was introduced in the VE representations, as in this case, there would be no guarantee that full VI will result in a better policy than one-step PI in $m$ \citep{bertsekas1996neuro}. However, if one were to use approximators with good generalization capabilities (GGC), i.e., approximators that assign the same value to similar observations, we would expect a similar performance trend to hold.
\textbf{P\&L Setting.} In the P\&L setting, instead of being provided directly, the model has to be learned from the experience gathered by the agent. In this scenario, as different policies are likely to be used in the model learning process, the encountered models of the two planning styles, which we denote as $\bar{m}\in\mathcal{M}$ and $\bar{\bar{m}}\in\mathcal{M}$ for DT and B planning, respectively, are also likely to be different. Thus, as they require the two planning styles to have access to the same model, the results of Prop.\ \ref{prop:prop1} \& \ref{prop:prop2} are not valid in the P\&L setting. However, if the model of B planning is a PNM or a PXM, we can prove the following statements:
\begin{proposition}
\label{prop:prop3}
Let $\bar{m}\in\mathcal{M}$ be a PCM or a PRM of $m^*$ w.r.t.\ $\bar{\Pi}=\{\pi_{\bar{m}}^r, \pi_{\bar{m}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$, and let $\bar{\bar{m}} \in\mathcal{M}$ be a PNM of $m^*$ w.r.t.\ $\bar{\bar{\Pi}}=\{\pi_{\bar{\bar{m}}}^r, \pi_{\bar{\bar{m}}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_{\bar{m}}^r} \geq J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}}$.
\end{proposition}
\begin{proposition}
\label{prop:prop4}
Let $\bar{m}\in\mathcal{M}$ be a PCM or a PRM of $m^*$ w.r.t.\ $\bar{\Pi}=\{\pi_{\bar{m}}^r, \pi_{\bar{m}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$, and let $\bar{\bar{m}}\in\mathcal{M}$ be a PXM of $m^*$ w.r.t.\ $\bar{\bar{\Pi}}=\{\pi_{\bar{\bar{m}}}^r, \pi_{\bar{\bar{m}}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} \geq J_{m^*}^{\pi_{\bar{m}}^{r}}$.
\end{proposition}
\vspace{-0.25cm}
Prop.\ \ref{prop:prop3} \& \ref{prop:prop4} imply that, given $\bar{\Pi}=\{\pi_{\bar{m}}^r, \pi_{\bar{m}}^{ce} \}$, $\bar{\bar{\Pi}}=\{\pi_{\bar{\bar{m}}}^r, \pi_{\bar{\bar{m}}}^{ce} \}$ and $J$, although DT planning will perform on par or better than B planning when $\bar{\bar{m}}$ is a PNM, it will perform on par or worse when $\bar{\bar{m}}$ is a PXM. While the former result can be relevant in the initial phases of B planning's model learning process, the latter one is most likely to be relevant in the final phases of this process, when $\bar{\bar{m}}$ becomes a PXM. Note that since the models are represented as tables, the learned models of both planning styles are guaranteed to become PXMs in the limit, as we know that in the worst case, with sufficient exploration, the models will, in the limit, converge to $m^*$ \citep{sutton2018reinforcement}. However, note that this would not be guaranteed if FA was used in the model representations. Lastly, even if $\bar{\bar{m}}$ starts as a PNM and eventually becomes a PXM, the results of Prop.\ \ref{prop:prop3} \& \ref{prop:prop4} would not be guaranteed to hold if FA was used in the VE representations, due to the reason discussed in the PP setting. However, if one were to use approximators with GGC, we would again expect a similar performance trend to hold.
\textbf{TL Setting.} Although there are many different settings in TL \citep{taylor2009transfer}, for easy analysis, we start by considering the simplest one in which there is only one training task (TRT) and one subsequent test task (TST), differing only in their reward functions, and in which the agent's transfer ability is measured by how fast it adapts to the TST after being trained on the TRT (more challenging settings will be considered in the next section). In this setting, we would expect the results of the P\&L setting to hold directly, as instead of a single one, there are now two consecutive P\&L settings.
\subsection{Modern Instantiations of the Two Planning Styles}
\label{sec:modern_inst}
We now consider the MIs of the two planning styles in which both the VEs and models are represented with neural networks (NN). More specifically, we study the DT and B planning algorithms in \citet{zhao2021consciousness} (see Alg.\ \ref{alg:alg_MI_DT} \& \ref{alg:alg_MI_B}) as they are reflective of many of the properties of their state-of-the-art counterparts (see e.g., \citep{schrittwieser2020mastering, kaiser2020model, hafner2021mastering}) and their code is publicly available. See App.\ \ref{app_sec:MIs_discussion} for a broader discussion on why we choose these algorithms. Here, as the DT planning algorithm performs planning by first performing some amount of search with a parametric model and then by bootstrapping on the value estimates of a continually improving policy, it can be considered as performing more than one-step PI on top of an improving policy in a combined model $\bar{m}_c$ (see Sec.\ \ref{sec:unified_view}). And, as the B planning algorithm performs planning by continually improving a VE at every time step with both a parametric and non-parametric (a replay buffer) model, it can be viewed as performing something between one-step VI and full VI on top of an improving policy with a combined model $\bar{\bar{m}}_c$ (see Sec.\ \ref{sec:unified_view}). However, if $\bar{\bar{m}}_c$ converges, it can be viewed as performing full VI, as in this case the continual improvements to the VE with the converged $\bar{\bar{m}}_c$ would eventually lead to an improvement that is equivalent to performing full VI. Note that although we only consider these specific instantiations, the hypotheses we provide in this section are generally applicable to most state-of-the-art MBRL algorithms, as the algorithms in \citep{zhao2021consciousness} are reflective of many of their properties (see App.\ \ref{app_sec:MIs_discussion}).
\textbf{P\&L Setting.} We start with the P\&L setting, and skip the PP one as it is not a relevant setting used with the MIs. To ease the analysis, let us start by considering a simplified scenario in which both the VEs and models of the MIs of the two planning styles are represented as a table. Let us also define the \emph{improved rollout policy} to be the policy $\pi_{m}^{r+}\in\mathbb{\Pi}$ obtained after performing more than one-step PI, with the exact number not being important, on top of a base policy $\pi^{b}\in\mathbb{\Pi}$ in model $m$, and let us also refer to the policies generated by the MIs of DT and B planning with models $\bar{m}_c$ and $\bar{\bar{m}}_c$ on top of an improving base policy $\pi^{b}$ as $\pi_{\bar{m}_c}^{r+}$ and $\pi_{\bar{\bar{m}}_c}^{ce}$, respectively. Then, using $\pi_{\bar{m}_c}^{r+}$ and $\pi_{\bar{\bar{m}}_c}^{ce}$ in place of $\pi_{\bar{m}}^r$ and $\pi_{\bar{\bar{m}}}^{ce}$, respectively, and under the assumption that $\bar{\bar{m}}_c$ converges, we would expect the formal statements of the P\&L setting of Sec.\ \ref{sec:classic_inst} to hold exactly as DT planning still corresponds to taking a smaller or on par policy improvement step than B planning. However, as DT planning now corresponds to performing more than one-step PI, we would expect the performance gap between the two planning styles to reduce in their MIs. Moreover, we would expect this gap to gradually close if both $\bar{m}_c$ and $\bar{\bar{m}}_c$ become, and remain as, PXMs, as the use of an improving policy for DT planning would result in a continually improving performance that gets closer to the one of B planning.
Coming back to our original scenario in which both the VEs and models of the MIs of the two planning styles are represented with NNs, we would expect a similar performance trend to hold as NNs are approximators that have GGC. However, this expectation is solely based on the DP interpretations of the two planning styles and thus does not take into account their implementation details, which can also play an important role in how the two planning styles will perform against each other in their MIs. Thus, in the following paragraph, we will discuss the impacts of the implementation details of the two planning styles on their learning speed and final performance, and based on this, will provide hypotheses on which planning style will perform better than the other.
In their MIs, both planning styles perform planning by unrolling NN-based parametric models which are known to easily lead to compounding model errors (CME, \citep{talvitie2014model}). Thus, obtaining combined models that are PXMs becomes quite difficult, if not impossible, for both planning styles. Even though this problem can significantly be mitigated for both of them by unrolling the models for only a few time steps and then bootstrapping on the VEs for the rest, B planning can also suffer from updating its VE with the potentially harmful simulated experience generated by its NN-based parametric model (see e.g., \citep{van2019use, jafferjee2020hallucinating}), which can slow it down in, or even prevent it from, reaching optimal (or close to optimal) performance.\footnote{Note that even if the exact model was known, due to the deadly triad \citep{sutton2018reinforcement}, there would be no guarantee that the MIs of both planning styles will be able to output policies of optimal (or close to optimal) performance.} Note that this is not a problem in DT planning as its VE is only updated with real experience. Thus, based on these observations, we hypothesize that compared to DT planning, it is likely for B planning to suffer more in reaching optimal (or close to optimal) performance in their MIs.
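As a minimal sketch of the mitigation mentioned above (ours; the callables \texttt{model}, \texttt{policy}, and \texttt{value} and their signatures are assumptions), the learned model is unrolled for only $n$ steps, after which the VE accounts for the remainder of the horizon:
\begin{verbatim}
def n_step_target(model, policy, value, s, a, n=5, gamma=0.99):
    """n-step return from a learned model, bootstrapped on a VE.

    model(s, a) -> (reward, next_state, done); policy(s) -> action;
    value(s) -> scalar. Short unrolls limit compounding model errors
    (CMEs); the VE covers the remainder of the horizon.
    """
    g, disc = 0.0, 1.0
    for _ in range(n):
        r, s_next, done = model(s, a)
        g += disc * r
        disc *= gamma
        if done:
            return g
        s, a = s_next, policy(s_next)
    return g + disc * value(s)  # bootstrap instead of unrolling further
\end{verbatim}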
\textbf{TL Setting.} We now consider two common TL settings that are both more challenging than the TL setting in Sec.\ \ref{sec:classic_inst}. In these settings, there is a distribution of TRTs and TSTs, differing only in their observations. In the first one, the agent's transfer ability is measured by how fast it adapts to the TSTs after being trained on the TRTs (see e.g., \citep{van2020loca}), and in the second one, this ability is measured by its instantaneous performance on the TSTs as it gets trained on the TRTs, also known as ``zero-shot transfer'' in the literature (see e.g., \citep{zhao2021consciousness, anand2022procedural}). In these two settings, the implementation details of the two planning styles also play an important role in how they will perform against each other. Thus, in the following paragraph, we will also provide hypotheses based on these details.
In both TL settings, we would again expect B planning to suffer more in reaching the optimal (or close to optimal) performance on the TRTs because of the reasons discussed in the P\&L setting. Additionally, in the first setting, after the tasks switch from the TRTs to the TSTs, we would expect B planning to suffer more in the adaptation process, as its parametric model, learned on the TRTs, would keep generating experience that resembles the TRTs until it adapts to the TSTs, which in the meantime can lead to harmful updates to its VE. Also, in the second setting, if the learned parametric model of DT planning is capable of simulating at least a few time steps of the TSTs, and if the learned policies of both planning styles perform similarly on the TSTs, we would expect DT planning to perform better on the TSTs, as at test time, it would be able to improve upon the policy obtained during training by performing online planning. Note that this is not possible for B planning, as it performs planning in an offline fashion and thus requires additional interaction with the TSTs to improve upon the policy obtained during training.
\section{Experiments}
\label{sec:experiments}
\begin{wrapfigure}{R}{0.5\textwidth}
\vspace{-0.4cm}
\centering
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[height=2.25cm]{figures/simple_gridworld.pdf}
\caption{\small SG Environment} \label{fig:simple_gridworld}
\end{subfigure}
\hspace{0.1cm}
\begin{subfigure}{0.275\textwidth}
\centering
\includegraphics[height=2.25cm]{figures/minigrid.pdf}
\caption{\small MiniGrid Environments} \label{fig:minigrid}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small (a) The SG environment and (b) MG environments: (top row) Empty 10x10, FourRooms, SCS9N1, (bottom row) LCS9N1, RDS Train, RDS Test.}
\vspace{-0.2cm}
\label{fig:envs}
\end{wrapfigure}
We now perform illustrative experiments to validate the formal statements and hypotheses presented in Sec.\ \ref{sec:DTvsB_planning}. The experimental details and more detailed results can be found in App.\ \ref{app_sec:exp_details} \& \ref{app_sec:add_results}, respectively.
\textbf{Environments.} We perform experiments on both the Simple Gridworld (SG) environment and on environments that either pre-exist in or are manually built on top of MiniGrid (MG, \citep{gym_minigrid}) (see Fig.\ \ref{fig:envs}), as the optimal policies in these environments are easy to learn and they allow for designing controlled experiments that are helpful in answering the questions of interest to this study. In the SG environment, the agent spawns in state S and has to navigate to the goal state depicted by G. In the MG environments, the agent, depicted in red, has to navigate to the green goal cell, while avoiding the orange lava cells (if there are any). More details on these environments can be found in App.\ \ref{app_sec:exp_details}. Note that while $\mathcal{O}_E=\mathcal{S}_E$ in the SG environment, $\mathcal{O}_E\neq\mathcal{S}_E$ in the MG environments.
\subsection{Experiments with Classical Instantiations}
\label{sec:classic_inst_exp}
In this section, we perform experiments with the CIs of DT and B planning (see Alg.\ \ref{alg:alg_CI_DT} \& \ref{alg:alg_CI_Bi}) on the SG environment to empirically validate our theoretical results and hypotheses in Sec.\ \ref{sec:classic_inst}. In addition to the scenario in which both the VEs and models are represented as tables (where $\phi=\varphi$ are both identity functions (IF), implying $\mathcal{S}_A=\mathcal{S}_{M}=\mathcal{O}_E$), we also consider the one in which only the model is represented as a table (where only $\varphi$ is an IF, implying $\mathcal{S}_{M}=\mathcal{O}_E$). In the latter scenario, we use state aggregation (SA) in the VE representation, i.e., $\phi$ is a state aggregator, which allows for assigning the same value to similar observations. More on the implementation details of these CIs can be found in App.\ \ref{app_sec:details_of_CI_exps}.
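To illustrate the SA scenario, a minimal sketch of the kind of aggregator $\phi$ involved (ours; the block size and the dictionary-based VE are illustrative assumptions, and the exact aggregation we use is described in App.\ \ref{app_sec:details_of_CI_exps}):
\begin{verbatim}
from collections import defaultdict

def make_aggregator(block=2):
    """phi mapping an (x, y) gridworld observation to the block it
    lies in, so that nearby observations share one value estimate."""
    def phi(obs):
        x, y = obs
        return (x // block, y // block)
    return phi

phi = make_aggregator(block=2)
Q = defaultdict(float)         # VE indexed by aggregate states:
Q[(phi((3, 2)), 'up')] += 0.1  # (3, 2) and (2, 3) share this entry
assert phi((3, 2)) == phi((2, 3))
\end{verbatim}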
\begin{figure}
\centering
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_model_perf.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{fig:plot_pureplan_1}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_model_perf_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{fig:plot_pureplan_2}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_perf_nolearntrans.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{fig:plot_planlearn_1}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_perf_nolearntrans_VFA_SA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{fig:plot_planlearn_2}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_perf_nolearntrans_tr.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{fig:plot_transfer_1}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The performance of the CIs of DT and B planning on the SG environment, in the (a, b) PP, (c, d) P\&L, and (e) TL settings with tabular and SA VE representations. Black \& gray dashed lines indicate the performance of the optimal \& random policies, respectively. Shaded regions are one standard error over 100 runs.}
\vspace{-0.2cm}
\label{fig:classic_algs}
\end{figure}
\textbf{PP Experiments.} According to Prop.\ \ref{prop:prop1} \& \ref{prop:prop2}, although DT planning is guaranteed to perform on par or better than B planning when the provided model is a PCM, it is guaranteed to perform on par or worse when the provided model is a PRM. To empirically verify this, we provided the two planning styles with a series of hand-designed tabular models that interpolate between a PNM and a PXM: the provided models first start with the PNM $m_1$ with goal state G$_1$ (see Fig.\ \ref{fig:simple_gridworld}), and then gradually move towards the PXM $m_{10}$ with goal state G, by first becoming PCMs $\{m_i\}_{i=2}^5$, and then by becoming PRMs $\{m_j\}_{j=6}^9$, with goal states $\{\text{G}_n\}_{n=2}^9$, respectively (see App.\ \ref{app_sec:details_of_CI_exps} for more details on these models). After planning was performed with each of these models, we evaluated the resulting output policies in the environment. Results are shown in Fig.\ \ref{fig:plot_pureplan_1}. We can indeed see that although DT planning performs better when the provided model is a PCM (or a PNM), it performs worse when the provided model is a PRM (or a PXM). To see if similar results would hold with approximators that have GGC, we also performed the same experiment with SA used in the VE representation. Results in Fig.\ \ref{fig:plot_pureplan_2} show that a similar trend holds in this case as well.
\textbf{P\&L Experiments.} Prop.\ \ref{prop:prop3} \& \ref{prop:prop4} imply that, although DT planning is guaranteed to perform on par or better than B planning when the model of B planning is a PNM, it is guaranteed to perform on par or worse when the model of B planning is a PXM. To empirically verify this, we initialized the tabular models of both planning styles as hand-designed PNMs (see App.\ \ref{app_sec:details_of_CI_exps} for the details) and let them be updated through interaction to become PXMs. After every episode, we evaluated the resulting output policies in the environment. Results in Fig.\ \ref{fig:plot_planlearn_1} show that, as expected, although DT planning performs better when B planning's model is a PNM, it performs worse when B planning's model becomes a PXM. Again, to see whether similar results would hold with approximators that have GGC, we also performed experiments with SA used in the VE representation. Results in Fig.\ \ref{fig:plot_planlearn_2} show that a similar trend holds in this case as well.
\textbf{TL Experiments.} In Sec.\ \ref{sec:classic_inst}, we argued that the results of the P\&L setting would hold directly in the considered TL setting. For empirical verification, we performed a similar experiment to the one in the P\&L setting, in which we initialized the tabular models of both planning styles as hand-designed PNMs and let them be updated to become PXMs. However, differently, after 25 episodes, we now added a subsequent TST with goal state G$_1$ to the TRT (see App.\ \ref{app_sec:details_of_CI_exps} for the details). In Fig.\ \ref{fig:plot_transfer_1}, we can see that, similar to the P\&L setting, before the task changes, DT planning first performs better but then worse than B planning, and the same happens after the task change. Results in Fig.\ \ref{app_fig:classic_algs_tl_perf_1} show that a similar trend also holds when SA is used in the VE representation.
\subsection{Experiments with Modern Instantiations}
\label{sec:modern_inst_exp}
We now perform experiments with the MIs of DT and B planning to empirically validate our hypotheses in Sec.\ \ref{sec:modern_inst}. For the experiments with the SG environment, we consider the former scenario in Sec.\ \ref{sec:classic_inst_exp}, and for the experiments with MG environments, we consider the scenario in which both the VEs and models are represented with NNs, and in which they share the same encoder (where $\phi=\varphi$ are both learnable parametric functions, implying $\mathcal{S}_A=\mathcal{S}_{M}\neq\mathcal{O}_E$). More on the implementation details of these MIs can be found in App.\ \ref{app_sec:details_of_MI_exps}.
\begin{figure}[]
\centering
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/true_learnedmodel_perf_nolearntrans_final.pdf}
\vspace{-0.25cm}
\caption{\small Simple Gridworld} \label{fig:plot_simplegridworld}
\end{subfigure}
\centering
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/Empty10-RandTS.pdf}
\vspace{-0.25cm}
\caption{\small Empty10x10} \label{fig:plot_empty10x10}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/Fourrooms-RandTS.pdf}
\vspace{-0.25cm}
\caption{\small FourRooms} \label{fig:plot_fourrooms}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/SCS9N1-RandTS.pdf}
\vspace{-0.25cm}
\caption{\small SCS9N1} \label{fig:plot_simplecrossing}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/LCS9N1-RandTS.pdf}
\vspace{-0.25cm}
\caption{\small LCS9N1} \label{fig:plot_lavacrossing}
\end{subfigure}
\par\bigskip \vspace{-0.5em}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/RDS-TrSQ-PR-RandTS.pdf}
\vspace{-0.25cm}
\caption{\small RDS Sequential} \label{fig:plot_trans_seq}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_train.pdf}
\vspace{-0.25cm}
\caption{\small RDS Train (0.35)} \label{fig:plot_trans_odist_train}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_025.pdf}
\vspace{-0.25cm}
\caption{\small RDS Test (0.25)} \label{fig:plot_trans_odist_025}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_035.pdf}
\vspace{-0.25cm}
\caption{\small RDS Test (0.35)} \label{fig:plot_trans_odist_035}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_045.pdf}
\vspace{-0.25cm}
\caption{\small RDS Test (0.45)} \label{fig:plot_trans_odist_045}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The performance of the MIs of DT and B planning in the (a-e) P\&L and (f-j) TL settings with (a) tabular and (b-j) NN VE representations. The black dashed lines indicate the performance of the optimal policy in the corresponding environment. The green and magenta dashed lines in (a) indicate the points after which DT and B planning's models become, and remain as, PXMs, respectively. Shaded regions are one standard error over (a) 100 and (b-j) 50 runs.}
\vspace{-0.2cm}
\label{fig:modern_algs}
\end{figure}
\textbf{P\&L Experiments.} In Sec.\ \ref{sec:modern_inst}, we argued that if one were to use tabular VEs and models in the MIs of the two planning styles, the results of the P\&L section of Sec.\ \ref{sec:classic_inst} would hold exactly. Additionally, we also argued that the performance gap between the two planning styles would reduce in their MIs and even gradually close if the combined models of both planning styles become, and remain as, PXMs. In order to test these, we implemented simplified tabular versions of the MIs of the two planning styles (see Alg.\ \ref{alg:alg_MI_DT_tab} \& \ref{alg:alg_MI_B_tab}) and compared them on the SG environment. Results are shown in Fig.\ \ref{fig:plot_simplegridworld}. As expected, the results of the P\&L section of Sec.\ \ref{sec:classic_inst} hold exactly, and the performance gap between the two planning styles indeed reduces and gradually closes after the models become, and remain as, PXMs.
We then argued that although the usage of NNs in the representation of the VEs and models of the two planning styles would not break the performance trend, their implementation details can also play an important role in how they would compare against each other. We hypothesized that although both planning styles would suffer from CMEs, B planning would additionally suffer from updating its VE using potentially harmful simulated experience, and thus it is likely to suffer more in reaching optimal (or close to optimal) performance. To test these hypotheses, we compared the MIs of the two planning styles (see Alg.\ \ref{alg:alg_MI_DT} \& \ref{alg:alg_MI_B}) on several MG environments. In order to test the effect of CMEs in DT planning, we performed experiments in which the parametric model is unrolled for 1, 5, and 15 time steps during search, denoted as DT(1), DT(5) and DT(15). Note that CMEs are not a problem for B planning, as its parametric model is only unrolled for a single time step. Also, in order to test the effect of updating the VE with simulated experience in B planning, we performed experiments in which its VE is updated with only real, both real and simulated, and only simulated experience, denoted as B(R), B(R+S) and B(S). See App.\ \ref{app_sec:details_of_MI_exps} for the details of these experiments. The results, in Fig.\ \ref{fig:plot_empty10x10}-\ref{fig:plot_lavacrossing}, show that even though the performance of DT planning degrades slightly with the increase in CMEs, this can indeed be easily mitigated by decreasing the search budget. However, as can be seen, even without any CMEs, just the usage of simulated experience alone can indeed slow down B planning in reaching optimal (or close to optimal) performance, or even prevent it from doing so (as in B(S)), especially in environments that are harder to model such as SCS9N1 \& LCS9N1.
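In tabular terms, the three B planning variants above differ only in which experience is allowed to update the VE; a minimal sketch of what they amount to (ours, with an assumed array-based $Q$ and batches of transition tuples):
\begin{verbatim}
def q_sweep(Q, batch, alpha=0.1, gamma=0.99):
    """One Q-learning sweep over transitions (s, a, r, s2, done)."""
    for s, a, r, s2, done in batch:
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])

def b_planning_update(Q, real_batch, sim_batch, mode="R+S"):
    """B(R): real experience only; B(S): simulated only; B(R+S): both.
    Simulated transitions come from the learned parametric model and
    can harm the VE when the model errs."""
    if mode in ("R", "R+S"):
        q_sweep(Q, real_batch)
    if mode in ("S", "R+S"):
        q_sweep(Q, sim_batch)
\end{verbatim}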
\textbf{TL Experiments.} In Sec.\ \ref{sec:modern_inst}, we first argued that B planning would again suffer more in reaching optimal (or close to optimal) performance on the TRTs in both TL settings. Then, we hypothesized that B planning would also suffer more in adapting to the subsequent TSTs in the first TL setting. Finally, we hypothesized that, under certain assumptions, DT planning would perform better than B planning on the TSTs in the second TL setting. In order to test the first two of these hypotheses, we compared the MIs of the two planning styles on a sequential version of the RDS environment \citep{zhao2021consciousness} (see App.\ \ref{app_sec:details_of_MI_exps} for the details). Results in Fig.\ \ref{fig:plot_trans_seq} show that, similar to the P\&L setting, the increasing usage of simulated experience can indeed slow down B planning in reaching optimal (or close to optimal) performance on the TRTs, or even prevent it from doing so (as in B(S)), and that B planning indeed suffers more in adapting to the TSTs. And, in order to test the first and third hypotheses, we compared the two planning styles on the exact RDS environment. Results are shown in Fig.\ \ref{fig:plot_trans_odist_train}-\ref{fig:plot_trans_odist_045}. As can be seen, the increasing usage of simulated experience can again slow down B planning in reaching optimal (or close to optimal) performance on the TRTs, or even prevent it from doing so (as in B(S)). We can also see that DT planning indeed achieves significantly better performance than B planning across all TSTs with varying difficulties.
\section{Related Work}
\label{sec:related_work}
The abstract view that we provide in this study is mostly related to the recent monograph of \citet{bertsekas2021rollout} in which the recent successes of AlphaZero \citep{silver2018general}, a DT planning algorithm, are viewed through the lens of DP. However, we take a broader perspective and provide a unified view that encompasses both DT and B planning algorithms. Also, instead of assuming the availability of an exact model, we consider the scenario in which a model has to be learned by pure interaction with the environment. Another closely related study is the study of \citet{hamrick2021role} which informally relates MuZero \citep{schrittwieser2020mastering}, another DT planning algorithm, to various other DT and B planning algorithms in the literature. Our study can be viewed as a study that formalizes the relation between the two planning styles. There have also been studies that empirically compare the performances of various DT and B planning algorithms on continuous control domains in the P\&L setting \citep{wang2019benchmarking}, and on MG environments in specific TL settings \citep{zhao2021consciousness}. However, none of these studies provide a general understanding of when one planning style will perform better than the other.
Finally, our work also has connections to the studies of \citet{jiang2015dependence} and \citet{arumugam2018mitigating} which provide upper bounds for the performance difference between policies generated as a result of planning with an exact model and an estimated model. However, rather than providing upper bounds, in this study, we are instead interested in understanding which classes of models will allow for one planning style to perform better than the other. Lastly, another related line of research is the recent studies of \citet{grimm2020value, grimm2021proper} which classify models according to how relevant they are for value-based planning. Although, we share the same overall idea that models should only be judged for how useful they are in the planning process, our work differs in that we classify the models according to how useful they are in the comparison of the two planning styles.
\section{Conclusion and Discussion}
\label{sec:conclusion}
To summarize, we first viewed the CIs and MIs of DT and B planning in a unified way through the lens of DP, and then provided theoretical results and hypotheses on which one will perform better than the other across a variety of settings. Then, we empirically validated these results through illustrative experiments. Overall, our findings suggest that even though DT planning does not perform as well as B planning in their CIs, due to the reasons detailed in this study, its MIs can perform on par or better than their B planning counterparts in both the P\&L and TL settings. In this study, our main goal was to \emph{understand} under what conditions and in which settings one planning style will perform better than the other in fast-response-requiring domains, and not to provide any practical insights at the moment. However, we believe that both the proposed unifying framework and our theoretical and empirical results can be helpful to the community in improving the two planning styles in potentially interesting ways. Also note that we were only interested in comparing the two planning styles in terms of the expected discounted return of their output policies. Though not the main focus of this study, other possible interesting comparison directions include comparing in terms of sample efficiency and real-time performance. Lastly, due to their easy-to-learn optimal policies and their suitability in designing controlled experiments, we have mainly performed our MI experiments on MG environments. However, experiments in other environments can be helpful in further validating the hypotheses of this study. We hope to tackle this in future work.
\bibliographystyle{plainnat}
\section{Pseudocodes of the CIs of the Two Planning Styles}
\label{app_sec:CIs_algorithms}
\begin{algorithm}[h!]
\centering
\caption{Tabular Online Monte-Carlo Planning (OMCP) \citep{tesauro1996line} with an Adaptable Model} \label{alg:alg_CI_DT}
\begin{algorithmic}[1]
\State \text{Initialize $\pi^i\in\mathbb{\Pi}$ as a random policy}
\State \text{Initialize $\bar{m}(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State $n_r\gets \text{number of episodes to perform rollouts}$
\While{\text{$\bar{m}$ has not converged}}
\State $S\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(\text{MC\_rollout}(S, \bar{m}, n_r, \pi^i))$}
\State \text{$R, S', \text{done} \gets \text{environment($A$)}$}
\State \text{Update} $\bar{m}(S,A)$ \text{with $R$, $S'$, $\text{done}$}
\State $S\gets S'$
\EndWhile
\EndWhile
\State \textbf{Return} $\bar{m}(s,a)$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h!]
\centering
\caption{Tabular Exhaustive Search \citep{campbell2002deep} with an Adaptable Model} \label{alg:alg_CI_DT_exs}
\begin{algorithmic}[1]
\State \text{Initialize $\bar{m}(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State $h\gets \text{search heuristic}$
\While{\text{$\bar{m}$ has not converged}}
\State $S\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(\text{exhaustive\_tree\_search}(S, \bar{m},h))$}
\State \text{$R, S', \text{done} \gets \text{environment($A$)}$}
\State \text{Update} $\bar{m}(S,A)$ \text{with $R$, $S'$, $\text{done}$}
\State $S\gets S'$
\EndWhile
\EndWhile
\State \textbf{Return} $\bar{m}(s,a)$
\end{algorithmic}
\end{algorithm}
\begin{figure}[H]
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[height=3.25cm]{figures/rollout.pdf}
\caption{\small Pure Rollouts} \label{app_fig:pure_rollout}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[height=3.25cm]{figures/exsearch.pdf}
\caption{\small Pure Search} \label{app_fig:exhaustive_search}
\end{subfigure}
\par\bigskip \vspace{-0.25em}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[height=3.25cm]{figures/rollout_and_search.pdf}
\caption{\small Search + Rollouts} \label{app_fig:search_and_rollout}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[height=3.25cm]{figures/bootstrap_and_search.pdf}
\caption{\small Search + Bootstrapping on the value estimates} \label{app_fig:search_and_bootstrap}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small Various planning styles within DT planning in which planning is performed (i) by purely performing rollouts, (ii) by purely performing search, (iii) by performing rollouts after performing some amount of search, and (iv) by bootstrapping on the value estimates after performing some amount of search. The subscripts and the superscripts on the states indicate the time steps and state identifiers, respectively. The black triangles indicate the terminal states.}
\vspace{-0.1cm}
\label{app_fig:dt_planning_figures}
\end{figure}
\begin{algorithm}[h!]
\centering
\caption{General Tabular Dyna-Q \citep{sutton1990integrated, sutton1991dyna}}\label{alg:alg_CI_B}
\begin{algorithmic}[1]
\State \text{Initialize $Q(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State \text{Initialize $\bar{\bar{m}}(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State $\mathcal{SA}_{\text{prev}}\gets \{\}$
\State $n_p\gets \text{number of time steps to perform planning}$
\While{\text{$Q$ and $\bar{\bar{m}}$ have not converged}}
\State $S\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(Q(S,\cdot))$}
\State \text{$R, S', \text{done} \gets \text{environment($A$)}$}
\State $\mathcal{SA}_{\text{prev}}\gets \mathcal{SA}_{\text{prev}} + \{(S, A)\}$
\State \text{Update} $Q(S,A)$ \text{with $R$, $S'$, $\text{done}$}
\State \text{Update} $\bar{\bar{m}}(S,A)$ \text{with $R$, $S'$, $\text{done}$}
\State $i\gets 0$
\While{$i<n_p$}
\State $S_{\bar{\bar{m}}}, A_{\bar{\bar{m}}} \gets \text{sample from } \mathcal{SA}_{\text{prev}}$
\State $R_{\bar{\bar{m}}},S_{\bar{\bar{m}}}', \text{done}_{\bar{\bar{m}}}\gets \bar{\bar{m}}(S_{\bar{\bar{m}}},A_{\bar{\bar{m}}})$
\State \text{Update} $Q(S_{\bar{\bar{m}}},A_{\bar{\bar{m}}})$ \text{with $R_{\bar{\bar{m}}}$, $S_{\bar{\bar{m}}}'$, $\text{done}_{\bar{\bar{m}}}$}
\State $i\gets i+1$
\EndWhile
\State $S\gets S'$
\EndWhile
\EndWhile
\State \textbf{Return} $Q(s,a)$
\end{algorithmic}
\end{algorithm}
\vspace{-0.1cm}
\begin{algorithm}[h!]
\centering
\caption{Tabular Dyna-Q of Interest}\label{alg:alg_CI_Bi}
\begin{algorithmic}[1]
\State \text{Initialize $Q(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State \text{Initialize $\bar{\bar{m}}(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\While{\text{$Q$ and $\bar{\bar{m}}$ have not converged}}
\State $S\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(Q(S,\cdot))$}
\State \text{$R, S', \text{done} \gets \text{environment($A$)}$}
\State \text{Update} $\bar{\bar{m}}(S,A)$ \text{with $R$, $S'$, $\text{done}$}
\State $S\gets S'$
\EndWhile
\While{\text{$Q$ has not converged}}
\State $S_{\bar{\bar{m}}}, A_{\bar{\bar{m}}}\gets \text{sample from } \mathcal{S}\times\mathcal{A}$
\State $R_{\bar{\bar{m}}},S_{\bar{\bar{m}}}', \text{done}_{\bar{\bar{m}}} \gets \bar{\bar{m}}(S_{\bar{\bar{m}}},A_{\bar{\bar{m}}})$
\State \text{Update} $Q(S_{\bar{\bar{m}}},A_{\bar{\bar{m}}})$ \text{with $R_{\bar{\bar{m}}}$, $S_{\bar{\bar{m}}}'$, $\text{done}_{\bar{\bar{m}}}$}
\EndWhile
\EndWhile
\State \textbf{Return} $Q(s,a)$
\end{algorithmic}
\end{algorithm}
\section{Discussion on the Choice of the CIs of the Two Planning Styles}
\label{app_sec:CIs_discussion}
As indicated in the main paper, for DT planning, we study the OMCP algorithm of \citet{tesauro1996line}, and, for B planning, we study the Dyna-Q algorithm of \citet{sutton1990integrated, sutton1991dyna}. We choose these algorithms as they are easy to analyze. In this study, as we are interested in scenarios where the model has to be learned from pure interaction, we consider a version of the OMCP algorithm in which a parametric model is learned from experience (see Alg.\ \ref{alg:alg_CI_DT} for the pseudocode). Note that this is the only difference compared to the original version of the OMCP algorithm. And, in order to make a fair comparison with this version of the OMCP algorithm, we consider a simplified version of the Dyna-Q algorithm (see Alg.\ \ref{alg:alg_CI_B} \& \ref{alg:alg_CI_Bi} for the pseudocodes of the original and simplified versions, respectively). Compared to the original version of Dyna-Q, in this version, there are several minor differences:
\begin{itemize}
\item While planning, the agent can now sample states and actions that it has not observed or taken before. Note that this is also the case for the version of the OMCP algorithm considered in this study.
\item Now, instead of using samples from both the environment and model, the agent updates its VE with samples only from the model. Note that the version of the OMCP algorithm considered in this study also makes use of only the model while performing planning.
\item Now, instead of planning for a fixed number of time steps, the agent performs planning until its VE converges. Note that, in order to allow for sample efficiency, usually $n_p$ is also set to high values in the original version of the Dyna-Q algorithm. Also note that, in order to properly evaluate the base policy, usually $n_r$ is also set to high values in the version of the OMCP algorithm considered in this study. Thus, in practice, both the original version of the Dyna-Q algorithm and the version of the OMCP algorithm considered in this study also devote a significant budget to perform planning.
\item Lastly, instead of performing planning after every time step, now the agent only performs planning after every episode. This change is made not for fairness of comparison, but to allow the Dyna-Q version of interest to operate in fast-response-requiring domains (planning until convergence after every time step would obviously slow down the response time of the algorithm and prevent it from operating in such domains).
\end{itemize}
\section{Proofs}
\label{app_sec:proofs}
\begin{appproposition}
\label{app_prop:prop1}
Let $m\in\mathcal{M}$ be a PCM of $m^*$ w.r.t.\ $\Pi=\{\pi_m^r, \pi_m^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_m^r} \geq J_{m^*}^{\pi_m^{ce}}$.
\end{appproposition}
\begin{proof}
This result directly follows from Defn.\ \ref{def:pcm} \& \ref{def:base_rollout_certeq_pol}. Recall that, according to Defn.\ \ref{def:base_rollout_certeq_pol}, given a $\pi^b\in\mathbb{\Pi}$, $\pi_m^r$ and $\pi_m^{ce}$ are the policies that are obtained after performing one-step PI and full VI on top of a $\pi^b$ in model $m$, respectively. Thus, we have $J_{m}^{\pi_m^r} \leq J_{m}^{\pi_m^{ce}}$ \citep{bertsekas1996neuro}, which, by Defn.\ \ref{def:pcm}, implies $J_{m^*}^{\pi_m^r} \geq J_{m^*}^{\pi_m^{ce}}$.
\end{proof}
\begin{appproposition}
\label{app_prop:prop2}
Let $m\in\mathcal{M}$ be a PRM of $m^*$ w.r.t.\ $\Pi=\{\pi_m^r, \pi_m^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_m^{ce}} \geq J_{m^*}^{\pi_m^{r}}$.
\end{appproposition}
\begin{proof}
This result directly follows from Defn.\ \ref{def:prm} \& \ref{def:base_rollout_certeq_pol}. Recall that, according to Defn.\ \ref{def:base_rollout_certeq_pol}, given a $\pi^b\in\mathbb{\Pi}$, $\pi_m^r$ and $\pi_m^{ce}$ are the policies that are obtained after performing one-step PI and full VI on top of a $\pi^b$ in model $m$, respectively. Thus, we have $J_{m}^{\pi_m^r} \leq J_{m}^{\pi_m^{ce}}$ \citep{bertsekas1996neuro}, which, by Defn.\ \ref{def:prm}, implies $J_{m^*}^{\pi_m^{ce}} \geq J_{m^*}^{\pi_m^r}$.
\end{proof}
\begin{appproposition}
\label{app_prop:prop3}
Let $\bar{m}\in\mathcal{M}$ be a PCM or a PRM of $m^*$ w.r.t.\ $\bar{\Pi}=\{\pi_{\bar{m}}^r, \pi_{\bar{m}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$, and let $\bar{\bar{m}} \in\mathcal{M}$ be a PNM of $m^*$ w.r.t.\ $\bar{\bar{\Pi}}=\{\pi_{\bar{\bar{m}}}^r, \pi_{\bar{\bar{m}}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_{\bar{m}}^r} \geq J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}}$.
\end{appproposition}
\begin{proof}
This result directly follows from Defn.\ \ref{def:pnm} \& \ref{def:base_rollout_certeq_pol}. Recall that, according to Defn.\ \ref{def:base_rollout_certeq_pol}, given a $\pi^b\in\mathbb{\Pi}$, $\pi_{\bar{\bar{m}}}^{ce}$ is the policy that is obtained after performing full VI on top of $\pi^b$ in model $\bar{\bar{m}}$. Thus, $\pi_{\bar{\bar{m}}}^{ce}$ is one of the optimal policies of model $\bar{\bar{m}}$ \citep{bertsekas1996neuro}, which, by Defn.\ \ref{def:pnm}, implies $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} = \min_{\pi\in\mathbb{\Pi}} J_{m^*}^{\pi}$ and thus $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} \leq J_{m^*}^{\pi} \ \forall \pi\in\mathbb{\Pi}$. This in turn implies $J_{m^*}^{\pi_{\bar{m}}^r} \geq J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}}$.
\end{proof}
\begin{appproposition}
\label{app_prop:prop4}
Let $\bar{m}\in\mathcal{M}$ be a PCM or a PRM of $m^*$ w.r.t.\ $\bar{\Pi}=\{\pi_{\bar{m}}^r, \pi_{\bar{m}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$, and let $\bar{\bar{m}}\in\mathcal{M}$ be a PXM of $m^*$ w.r.t.\ $\bar{\bar{\Pi}}=\{\pi_{\bar{\bar{m}}}^r, \pi_{\bar{\bar{m}}}^{ce} \} \subseteq\mathbb{\Pi}$ and $J$. Then, $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} \geq J_{m^*}^{\pi_{\bar{m}}^{r}}$.
\end{appproposition}
\begin{proof}
This result directly follows from Defn.\ \ref{def:pxm} \& \ref{def:base_rollout_certeq_pol}. Recall that, according to Defn.\ \ref{def:base_rollout_certeq_pol}, given a $\pi^b\in\mathbb{\Pi}$, $\pi_{\bar{\bar{m}}}^{ce}$ is the policy that is obtained after performing full VI on top of $\pi^b$ in model $\bar{\bar{m}}$. Thus, $\pi_{\bar{\bar{m}}}^{ce}$ is one of the optimal policies of model $\bar{\bar{m}}$ \citep{bertsekas1996neuro}, which, by Defn.\ \ref{def:pxm}, implies $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} = \max_{\pi\in\mathbb{\Pi}} J_{m^*}^{\pi}$ and thus $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} \geq J_{m^*}^{\pi} \ \forall \pi\in\mathbb{\Pi}$. This in turn implies $J_{m^*}^{\pi_{\bar{\bar{m}}}^{ce}} \geq J_{m^*}^{\pi_{\bar{m}}^{r}}$.
\end{proof}
\section{Pseudocodes of the MIs of the Two Planning Styles}
\label{app_sec:MIs_algorithms}
\vspace{-0.5cm}
\begin{algorithm}[H]
\centering
\caption{The DT Planning Algorithm in \citet{zhao2021consciousness}}\label{alg:alg_MI_DT}
\begin{algorithmic}[1]
\State \text{Initialize the parameters $\theta$, $\eta$ \& $\omega$ of $\phi_{\theta}:\mathcal{O}_E\to \mathcal{S}_A$, $Q_{\eta}:\mathcal{S}_A\times\mathcal{A}_E\to \amsmathbb{R}$ \& $\bar{m}_{p \omega} = (p_{\omega}, r_{\omega}, d_{\omega})$}
\State \text{Initialize the replay buffer $\bar{m}_{np}\gets \{ \}$}
\State $N_{ple}\gets \text{number of episodes to perform planning and learning}$
\State $N_{rbt}\gets \text{number of samples that the replay buffer must hold to perform planning and learning}$
\State $n_s\gets \text{number of time steps to perform search}$
\State $n_{bs}\gets \text{number of samples to sample from } \bar{m}_{np}$
\State $h\gets \text{search heuristic}$
\State $S\gets \text{replay buffer sampling strategy}$
\State $i\gets 0$
\While{$i<N_{ple}$}
\State $O\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(\text{tree\_search\_with\_bootstrapping}(\phi_{\theta}(O), \bar{m}_{p \omega}, Q_{\eta}, n_s, h))$}
\State \text{$R, O', \text{done} \gets \text{environment($A$)}$}
\State \text{$\bar{m}_{np}\gets \bar{m}_{np} + \{(O,A,R,O', \text{done})\} $}
\If{$|\bar{m}_{np}| \geq N_{rbt}$}
\State $\mathcal{B}_{np}\gets \text{sample\_batch}(\bar{m}_{np}, n_{bs}, S)$
\State Update $\phi_{\theta}$, $Q_{\eta}$ \& $\bar{m}_{p \omega}$ with $\mathcal{B}_{np}$
\EndIf
\State $O\gets O'$
\EndWhile
\State $i\gets i+1$
\EndWhile
\State \textbf{Return} $\phi_{\theta}$, $Q_{\eta}$ \& $\bar{m}_{p \omega}$
\end{algorithmic}
\end{algorithm}
\vspace{-0.5cm}
\begin{algorithm}[h!]
\centering
\caption{The B Planning Algorithm in \citet{zhao2021consciousness}}\label{alg:alg_MI_B}
\begin{algorithmic}[1]
\State \text{Initialize the parameters $\theta$, $\eta$ \& $\omega$ of $\phi_{\theta}:\mathcal{O}_E\to \mathcal{S}_A$, $Q_{\eta}:\mathcal{S}_A\times\mathcal{A}_E\to \mathbb{R}$ \& $\bar{\bar{m}}_{p \omega} = (p_{\omega}, r_{\omega}, d_{\omega})$}
\State \text{Initialize the replay buffer $\bar{\bar{m}}_{np}\gets \{ \}$ and the imagined replay buffer $\bar{\bar{m}}_{inp}\gets \{ \}$}
\State $N_{ple}\gets \text{number of episodes to perform planning and learning}$
\State $N_{rbt}\gets \text{number of samples that the replay buffer must hold to perform planning and learning}$
\State $n_{ibs}\gets \text{number of samples to sample from } \bar{\bar{m}}_{inp}$
\State $n_{bs}\gets \text{number of samples to sample from } \bar{\bar{m}}_{np}$
\State $S\gets \text{replay buffer sampling strategy}$
\State $i\gets 0$
\While{$i<N_{ple}$}
\State $O\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(Q_{\eta}(\phi_{\theta} (O),\cdot))$}
\State \text{$R, O', \text{done} \gets \text{environment($A$)}$}
\State \text{$\bar{\bar{m}}_{np}\gets \bar{\bar{m}}_{np} + \{(O,A,R,O', \text{done})\} $}
\State \text{$\bar{\bar{m}}_{inp}\gets \bar{\bar{m}}_{inp} + \{(\phi_{\theta}(O),A)\} $}
\If{$|\bar{\bar{m}}_{np}| \geq N_{rbt}$}
\State $\mathcal{B}_{inp}\gets \text{sample\_batch}(\bar{\bar{m}}_{inp}, n_{ibs}, S)$
\State $\mathcal{B}_p\gets \mathcal{B}_{inp} + \bar{\bar{m}}_{p \omega}(\mathcal{B}_{inp})$
\State $\mathcal{B}_{np}\gets \text{sample\_batch}(\bar{\bar{m}}_{np}, n_{bs}, S)$
\State Update $\phi_{\theta}$ \& $Q_{\eta}$ with $\mathcal{B}_{np} + \mathcal{B}_p$
\State \text{Update} $\phi_{\theta}$ \& $\bar{\bar{m}}_{p \omega}$ \text{with $\mathcal{B}_{np}$}
\EndIf
\State $O\gets O'$
\EndWhile
\State $i\gets i+1$
\EndWhile
\State \textbf{Return} $\phi_{\theta}$ \& $Q_{\eta}$
\end{algorithmic}
\end{algorithm}
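Similarly, lines 14--21 of Alg.\ \ref{alg:alg_MI_B} can be summarised by the following illustrative Python fragment (the update routines and buffer formats are assumptions of ours):
\begin{verbatim}
import random

def b_planning_update(replay, imagined, model, n_bs, n_ibs,
                      update_value, update_model):
    """One planning-and-learning step of B planning: the value estimate is
    updated on a joint batch of real and model-simulated transitions, while
    the parametric model itself is updated on real data only."""
    batch_inp = random.sample(imagined, n_ibs)              # (state, action) pairs
    batch_p = [(s, a) + tuple(model(s, a)) for s, a in batch_inp]
    batch_np = random.sample(replay, n_bs)                  # real transitions
    update_value(batch_np + batch_p)                        # joint batch
    update_model(batch_np)
\end{verbatim}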
\section{Discussion on the Choice of the MIs of the Two Planning Styles}
\label{app_sec:MIs_discussion}
As indicated in the main paper, we study the DT and B planning algorithms in \citet{zhao2021consciousness}. More specifically, for DT planning, we study the ``UP'' algorithm, and, for B planning, we study the ``Dyna'' algorithm in \citep{zhao2021consciousness}. We choose these algorithms as they are reflective of many of the properties of their state-of-the-art (SOTA) counterparts (MuZero \citep{schrittwieser2020mastering} and SimPLe \citep{kaiser2020model} / DreamerV2 \citep{hafner2021mastering}, respectively) and their code is publicly available\footnote{\url{https://github.com/mila-iqia/Conscious-Planning}}. The pseudocodes of these algorithms are presented in Alg.\ \ref{alg:alg_MI_DT} \& \ref{alg:alg_MI_B}, respectively. Note that, similar to their SOTA counterparts, these two algorithms do not employ the ``bottleneck mechanism'' introduced in \citep{zhao2021consciousness}. Some of the important similarities and differences between these algorithms and their SOTA counterparts, are as follows:
\begin{enumerate}
\item Similarities and differences between the DT planning algorithm in \citet{zhao2021consciousness} and MuZero \citep{schrittwieser2020mastering}
\begin{itemize}
\item Similar to MuZero, the DT planning algorithm in \citep{zhao2021consciousness} also performs planning with both a parametric and non-parametric model.
\item Similar to MuZero, the DT planning algorithm in \citep{zhao2021consciousness} also represents its parametric model using NNs.
\item Similar to MuZero, the DT planning algorithm in \citep{zhao2021consciousness} also learns a parametric model through pure interaction with the environment. However, rather than unrolling the model for several time steps and training it with the sum of the policy, value and reward losses as in MuZero, it unrolls the model for only a single time step and trains it with the sum of the value, dynamics, reward and termination losses (see the total loss term in Sec.\ 3 of \citep{zhao2021consciousness}; an illustrative sketch of this combined loss is given after this list).
\item Lastly, similar to MuZero, the DT planning algorithm in \citep{zhao2021consciousness} selects actions by directly bootstrapping on the value estimates of a continually improving policy (without performing any rollouts), which is obtained by planning with a non-parametric model, after performing some amount of search with a parametric model. However, rather than performing the search using Monte-Carlo Tree Search (MCTS, \citep{kocsis2006bandit}) as in MuZero, it uses best-first search (during training) and random search (during evaluation) (see App.\ H of \citep{zhao2021consciousness} for the details of the search procedures).
\end{itemize}
\item Similarities and differences between the B planning algorithm in \citet{zhao2021consciousness} and SimPLe \citep{kaiser2020model} / DreamerV2 \citep{hafner2021mastering}
\begin{itemize}
\item Similar to SimPLe / DreamerV2, the B planning algorithm in \citep{zhao2021consciousness} also performs planning with a parametric model. Additionally, it also performs planning with a non-parametric model.
\item Similar to SimPLe / DreamerV2, the B planning algorithm in \citep{zhao2021consciousness} also represents its parametric model using NNs and it also updates its VE using the simulated data generated with this model.
\item Similar to SimPLe / DreamerV2, the B planning algorithm in \citep{zhao2021consciousness} also learns a parametric model through pure interaction with the environment. However, rather than performing planning with this model after allowing for an initial kickstarting period as in SimPLe / DreamerV2 (referred to as a ``world model'' learning period), for a fair comparison with the DT planning algorithm, it starts to perform planning right at the beginning of the model learning process.
\item Lastly, similar to SimPLe / DreamerV2, the B planning algorithm in \citep{zhao2021consciousness} selects actions by simply querying its VE.
\end{itemize}
\end{enumerate}
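As noted in the first list above, the parametric model of the DT planning algorithm is unrolled for a single time step and trained with the sum of the value, dynamics, reward and termination losses. A minimal PyTorch-style sketch of such a combined loss is given below; the network interfaces and the particular squared-error and cross-entropy terms are illustrative assumptions of ours, and the exact terms are given in Sec.\ 3 of \citep{zhao2021consciousness}:
\begin{verbatim}
import torch
import torch.nn.functional as F

def single_step_model_loss(phi, q, model, batch, gamma=0.99):
    """Single-step-unroll loss: value + dynamics + reward + termination."""
    o, a, r, o_next, done = batch                  # tensors from the buffer
    s, s_next = phi(o), phi(o_next)
    s_pred, r_pred, done_logit = model(s, a)       # one-step model outputs

    target = r + gamma * (1 - done) * q(s_next).max(dim=-1).values.detach()
    value_loss = F.smooth_l1_loss(
        q(s).gather(-1, a.unsqueeze(-1)).squeeze(-1), target)
    dynamics_loss = F.mse_loss(s_pred, s_next.detach())
    reward_loss = F.mse_loss(r_pred, r)
    termination_loss = F.binary_cross_entropy_with_logits(done_logit, done)
    return value_loss + dynamics_loss + reward_loss + termination_loss
\end{verbatim}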
Even though there are some differences between the planning algorithms in \citep{zhao2021consciousness} and their SOTA counterparts, except for the kickstarting period in SOTA B planning algorithms, these are just minor implementation details that would not have any impact on the conclusions of this study. Although the kickstarting period can mitigate the harmful simulated data problem to some degree (or even prevent it if the period is sufficiently long), allowing for it would definitely prevent a fair comparison with the DT planning algorithm in \citep{zhao2021consciousness}. This is why we did not allow for it in our experiments.
\section{Discussion on the Combined View of Models}
\label{app_sec:combined_view_discussion}
In order to be able to view the DT and B planning algorithms that perform planning with both a parametric and non-parametric model through our proposed DP framework, we view the two separate models of these algorithms as a single combined model. This becomes obvious for B planning algorithms if one notes that they perform planning with a batch of data that is jointly generated by both a parametric and non-parametric model (see e.g., line 20 in Alg.\ \ref{alg:alg_MI_B} in which $\phi_{\theta}$ and $Q_{\eta}$ are updated with a batch of data that is jointly generated by both $\bar{\bar{m}}_{p \omega}$ and $\bar{\bar{m}}_{np}$), which can be thought of as performing planning with a batch of data that is generated by a single combined model. It also becomes obvious for DT planning algorithms if one notes that they perform planning by first performing search with a parametric model, and then by bootstrapping on the value estimates of a continually improving policy that is obtained by planning with a non-parametric model (see e.g., line 13 in Alg.\ \ref{alg:alg_MI_DT} in which action selection is done with both $\bar{m}_{p \omega}$ and $Q_{\eta}$ (which is obtained by planning with $\bar{m}_{np}$)), which can be thought of as performing planning with a single combined model that is obtained by concatenating the parametric and non-parametric models.
\section{Experimental Details}
\label{app_sec:exp_details}
In this section, we provide the details of the experiments that are performed in Sec.\ \ref{sec:experiments}. This also includes the implementation details of the CIs and MIs of the two planning styles considered in this study.
\subsection{Details of the CI Experiments}
\label{app_sec:details_of_CI_exps}
In all of the CI experiments, we have calculated the performance with a discount factor ($\gamma$) of $0.9$.
\begin{figure}[h!]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/simple_gridworld_rewards.pdf}
\vspace{-0.1cm}
\caption{\small SG Rewards} \label{app_fig:simple_gridworld_rew}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/simple_gridworld_rewards_m8.pdf}
\vspace{-0.1cm}
\caption{\small $m_8$ Rewards} \label{app_fig:simple_gridworld_rew_m8}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/simple_gridworld_rewards_PDM_init.pdf}
\vspace{-0.1cm}
\caption{\small Initial PDM Rewards} \label{app_fig:simple_gridworld_rew_pdm}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/simple_gridworld_rewards_tr.pdf}
\vspace{-0.1cm}
\caption{\small TST Rewards} \label{app_fig:simple_gridworld_rew_tr}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small Reward functions of (a) the SG environment, (b) the $m_8$ model, (c) the initial PDM, and (d) the TST.}
\vspace{-0.1cm}
\label{app_fig:simple_gridworld_env_details}
\end{figure}
\begin{wrapfigure}{R}{0.225\textwidth}
\vspace{-0.5cm}
\centering
\begin{subfigure}{0.225\textwidth}
\centering
\includegraphics[height=2.cm]{figures/simple_gridworld_SA.pdf}
\end{subfigure}
\vspace{-0.25cm}
\caption{\small The form of SA used in this study.}
\vspace{-0.1cm}
\label{app_fig:simple_gridworld_SA}
\end{wrapfigure}
\textbf{Environment \& Models.} All of the experiments in Sec.\ \ref{sec:classic_inst_exp} are performed on the SG environment. Here, the agent spawns in state S and has to navigate to the goal state depicted by G. At each time step, the agent receives an $(x,y)$ pair, indicating its position, and based on this, selects an action that moves it to one of the four neighboring cells with a slip probability of $0.1$. The agent receives a negative reward that is linearly proportional to its distance from G and a reward of $+10$ if it reaches G (see Fig.\ \ref{app_fig:simple_gridworld_rew}). The agent-environment interaction lasts for a maximum of 100 time steps, after which the episode terminates with a reward of $0$ if the agent was not able to reach the goal state G. (\textbf{PP Setting}) For the experiments in the PP setting, the agent is provided with a series of models in which it receives a reward of $+10$ if it reaches the goal state and a reward of $0$ elsewhere. For example, see the reward function of model $m_8$ in Fig.\ \ref{app_fig:simple_gridworld_rew_m8}. Note that these models have the same transition distribution and initial state distribution as the SG environment. (\textbf{P\&L Setting}) For the experiments in the P\&L setting, the models of both of the planning styles are initialized as a hand-designed PDM with a reward function as in Fig.\ \ref{app_fig:simple_gridworld_rew_pdm} and with a goal state located at the bottom right corner. Note that in these experiments, we have assumed that the agent already has access to the transition distribution and initial state distribution of the environment, and only has to learn the reward function. (\textbf{TL Setting}) Finally, for the experiments in the TL setting, we considered a TST with a reward function as in Fig.\ \ref{app_fig:simple_gridworld_rew_tr}, which is a transposed version of the TRT's reward function (see Fig.\ \ref{app_fig:simple_gridworld_rew}). Note again that we have assumed that the agent already has access to the transition distribution and initial state distribution of the environment, and only has to learn the reward function.
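For concreteness, a minimal Python sketch of the SG dynamics described above is given below; the grid size, start and goal positions, and the scale of the distance-proportional penalty are illustrative assumptions of ours:
\begin{verbatim}
import numpy as np

class SimpleGridworld:
    """Slip probability 0.1, distance-proportional negative reward, +10 at
    the goal, and termination with reward 0 after 100 time steps."""
    MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    def __init__(self, size=10, goal=(9, 9), scale=0.1, seed=0):
        self.size, self.goal, self.scale = size, goal, scale
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.pos, self.t = (0, 0), 0
        return self.pos

    def step(self, action):
        if self.rng.random() < 0.1:              # slip to a random action
            action = int(self.rng.integers(4))
        dx, dy = self.MOVES[action]
        x = min(max(self.pos[0] + dx, 0), self.size - 1)
        y = min(max(self.pos[1] + dy, 0), self.size - 1)
        self.pos, self.t = (x, y), self.t + 1
        if self.pos == self.goal:
            return self.pos, 10.0, True          # goal reached
        if self.t >= 100:
            return self.pos, 0.0, True           # timeout
        dist = abs(x - self.goal[0]) + abs(y - self.goal[1])
        return self.pos, -self.scale * dist, False
\end{verbatim}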
\textbf{Implementation Details of the CIs.} For our CI experiments, we considered specific versions of the OMCP algorithm of \citet{tesauro1996line} and the Dyna-Q algorithm of \citet{sutton1990integrated, sutton1991dyna}. The pseudocodes of these algorithms are presented in Alg.\ \ref{alg:alg_CI_DT} \& \ref{alg:alg_CI_Bi}, respectively, and the details of them are provided in Table \ref{tab:CI_DT_details} \& \ref{tab:CI_B_details}, respectively. Note that in Alg.\ \ref{alg:alg_CI_DT}, $n_r$ (the number of episodes to perform rollouts) is set to a high value so that the input policy $\pi^i$ can properly be evaluated.
\begin{table}[h!]
\caption{Details and hyperparameters of Alg. \ref{alg:alg_CI_DT}.}
\centering
\begin{tabular}{l|l}
\hline
$\pi^i$ & a deterministic random policy \\
$\bar{m}$ & a tabular model (initialized as a hand-designed PDM (see Fig.\ \ref{app_fig:simple_gridworld_rew_pdm})) \\
$n_r$ & 50 \\
$\epsilon$ & linearly decays from $1.0$ to $0.0$ over the first 20 episodes \\
SA (in FA experiments) & an aggregation of the form in Fig.\ \ref{app_fig:simple_gridworld_SA} \\
\hline
\end{tabular}
\label{tab:CI_DT_details}
\end{table}
\begin{table}[h!]
\caption{Details and hyperparameters of Alg.\ \ref{alg:alg_CI_Bi}.}
\centering
\begin{tabular}{l|l}
\hline
$Q$ & a tabular value function (initialized as zero $\forall s\in\mathcal{S}$ and $\forall a\in\mathcal{A}$) \\
$\bar{\bar{m}}$ & a tabular model (initialized as a hand-designed PDM (see Fig.\ \ref{app_fig:simple_gridworld_rew_pdm})) \\
$\epsilon$ & linearly decays from $1.0$ to $0.0$ over the first 20 episodes \\
SA (in FA experiments) & an aggregation of the form in Fig.\ \ref{app_fig:simple_gridworld_SA} \\
\hline
\end{tabular}
\label{tab:CI_B_details}
\end{table}
\subsection{Details of the MI Experiments}
\label{app_sec:details_of_MI_exps}
In all of the MI experiments on the SG environment, we have calculated the performance with a discount factor ($\gamma$) of $0.9$, and in all of the MI experiments on the MG environments, we have calculated the performance with a discount factor of $0.99$.
\begin{figure}[h!]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/empty10x10.png}
\vspace{-0.1cm}
\caption{\small Empty 10x10} \label{app_fig:empty10x10}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/fourrooms.png}
\vspace{-0.1cm}
\caption{\small FourRooms} \label{app_fig:fourrooms}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/scs9n1.png}
\vspace{-0.1cm}
\caption{\small SCS9N1} \label{app_fig:simplecrossing}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/lcs9n1.png}
\vspace{-0.1cm}
\caption{\small LCS9N1} \label{app_fig:lavacrossing}
\end{subfigure}
\par\bigskip \vspace{-0.25em}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/fig_distshift_train_35.png}
\vspace{-0.1cm}
\caption{\small RDS Train (0.35)} \label{app_fig:rds_env_train}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/fig_distshift_eval_25.png}
\vspace{-0.1cm}
\caption{\small RDS Test (0.25)} \label{app_fig:rds_env_eval25}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/fig_distshift_eval_35.png}
\vspace{-0.1cm}
\caption{\small RDS Test (0.35)} \label{app_fig:rds_env_eval35}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.cm]{figures/fig_distshift_eval_45.png}
\vspace{-0.1cm}
\caption{\small RDS Test (0.45)} \label{app_fig:rds_env_eval45}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small (a-d) Several environments that either pre-exist in or are manually built on top of MG. (e-h) The TRTs and TSTs of varying difficulty in the RDS environment \citep{zhao2021consciousness}. Note that the TSTs are just transposed versions of the TRTs.}
\vspace{-0.1cm}
\label{app_fig:minigrid_envs_details}
\end{figure}
\textbf{Environments \& Models.} In Sec.\ \ref{sec:modern_inst_exp}, part of our experiments were performed both on the SG environment and part of them were performed on MG environments. The details of these environments and their corresponding models are as follows:
\begin{itemize}
\item \textbf{SG Environment.} To learn about the SG environment, we refer the reader to Sec.\ \ref{app_sec:details_of_CI_exps} as we have used the same environment in the MI experiments as well. (\textbf{P\&L Setting}) To learn about the models of both planning styles, we also refer the reader to the P\&L Setting of Sec.\ \ref{app_sec:details_of_CI_exps} as we have used the same models in the MI experiments as well.
\item \textbf{MG Environments.} In the MG environments, the agent, depicted in red, has to navigate to the green goal cell while avoiding the orange lava cells (if there are any). At each time step, the agent receives a grid-based observation that contains its own position and the positions of the goal, wall and lava cells, and based on this, selects an action that either turns it left or right, or steps it forward. If the agent steps on a lava cell, the episode terminates with no reward, and if it reaches the goal cell, the episode terminates with a reward of $+1$.\footnote{Note that this is not the original reward function of MG environments. In the original version, the agent receives a reward of $+1-0.9(t/T)$, where $t$ is the number of time steps taken to reach the goal cell and $T$ is the maximum episode length, if it reaches the goal. We modified the reward function in order to obtain more intuitive results.} More on the details of these environments can be found in \citet{gym_minigrid}. (\textbf{P\&L Setting}) For the P\&L setting, we performed experiments on the Empty 10x10, FourRooms, SimpleCrossingS9N1 (SCS9N1) and LavaCrossingS9N1 (LCS9N1) environments (see Fig.\ \ref{app_fig:empty10x10}, \ref{app_fig:fourrooms}, \ref{app_fig:simplecrossing}, \& \ref{app_fig:lavacrossing}, respectively). While the last two of these environments already pre-exist in MG, the first two of them are manually built environments. Specifically, the Empty 10x10 environment is obtained by expanding the Empty 8x8 environment and the 10x10 FourRooms environment is obtained by contracting the 16x16 FourRooms environment in \citet{gym_minigrid}. (\textbf{TL Setting}) For the TL setting, we performed experiments on the sequential and regular versions of the RDS environment considered in \citep{zhao2021consciousness} (see Fig.\ \ref{app_fig:rds_env_train}-\ref{app_fig:rds_env_eval45}).\footnote{Note that the RDS environment is an environment that is built on top of MG.} In the sequential version, which we call RDS Sequential, the agent is trained on TRTs with difficulty 0.35 (see Fig.\ \ref{app_fig:rds_env_train}) and then it is allowed to adapt to subsequent TSTs (a transposed version of the TRTs) with difficulty 0.35 (see Fig.\ \ref{app_fig:rds_env_eval35}). In the regular version (the version considered in \citep{zhao2021consciousness}), the agent is trained on TRTs with difficulty 0.35 (see Fig.\ \ref{app_fig:rds_env_train}) and during the training process it is periodically evaluated on TSTs with difficulties varying from 0.25 to 0.45 (see Fig.\ \ref{app_fig:rds_env_eval25}-\ref{app_fig:rds_env_eval45}). Note that the difficulty parameter here controls the density of the lava cells. Also note that with every reset of the episode, a new lava cell pattern is (procedurally) generated for both the TRTs and TSTs. More on the details of the RDS environment can be found in \citep{zhao2021consciousness}.
Finally, note that, as opposed to the experiments on SG environment, for both the P\&L and TL settings, we did not enforce any kind of structure on the models of the agent, and just initialized them randomly. We also note that, in our experiments with RDS Sequential, we reinitialized the non-parametric models (replay buffers) of both planning styles after the tasks switch from the TRTs to the TSTs.
\end{itemize}
\textbf{Implementation Details of the MIs.} For our MI experiments, we considered the DT and B planning algorithms in \citet{zhao2021consciousness} (see Sec.\ \ref{app_sec:MIs_discussion}), whose pseudocodes are presented in Alg.\ \ref{alg:alg_MI_DT} \& \ref{alg:alg_MI_B}, respectively. The details of these algorithms are provided in Table \ref{tab:MI_DT_details} \& \ref{tab:MI_B_details}, respectively. For more details (such as the NN architectures, replay buffer sizes, learning rates, exact details of the tree search, \dots), we refer the reader to the publicly available code and the supplementary material of \citep{zhao2021consciousness}.
\begin{table}[h!]
\caption{Details and hyperparameters of Alg.\ \ref{alg:alg_MI_DT}.}
\centering
\begin{tabular}{l|l}
\hline
$\phi_{\theta}$ & MiniGrid bag of words feature extractor \\
$Q_{\eta}$ & regular NN (P\&L setting), NN with attention (for set-based representations) (TL setting) \\
$\bar{m}_{p \omega}$ & regular NN (P\&L setting), NN with attention (for set-based representations) (TL setting) \\
& (bottleneck mechanism is disabled for both of the settings) \\
$N_{ple}$ & $50$M \\
$N_{rbt}$ & $50$k \\
$n_s$ & $1$ (DT(1)), $5$ (DT(5)), $15$ (DT(15)) \\
$n_{bs}$ & $128$ (P\&L setting), $64$ (TL setting) \\
$h$ & best-first search (training), random search (evaluation) \\
$S$ & random sampling \\
$\epsilon$ & linearly decays from $1.0$ to $0.0$ over the first 1M time steps \\
\hline
\end{tabular}
\label{tab:MI_DT_details}
\end{table}
\begin{table}[h!]
\caption{Details and hyperparameters of Alg. \ref{alg:alg_MI_B}.}
\centering
\begin{tabular}{l|l}
\hline
$\phi_{\theta}$ & MiniGrid bag of words feature extractor \\
$Q_{\eta}$ & regular NN (P\&L setting), NN with attention (for set-based representations) \\ & (TL setting) \\
$\bar{\bar{m}}_{p \omega}$ & regular NN (P\&L setting), NN with attention (for set-based representations) \\ & (TL setting) (bottleneck mechanism is disabled for both of the settings) \\
$N_{ple}$ & $50$M \\
$N_{rbt}$ & $50$k \\
$(n_{ibs}, n_{bs})$ & $(0,128)$ (B(R)), $(128,128)$ (B(R+S)), $(128,0)$ (B(S)) (P\&L setting), \\
& $(0,64)$ (B(R)), $(64,64)$ (B(R+S)), $(64,0)$ (B(S)) (TL setting) \\
$S$ & random sampling \\
$\epsilon$ & linearly decays from $1.0$ to $0.0$ over the first 1M time steps \\
\hline
\end{tabular}
\label{tab:MI_B_details}
\end{table}
Note that the publicly available code of \citep{zhao2021consciousness} only contains B planning algorithms in which the model is a model over the observations (and not the states), which for some reason causes the B planning algorithm of interest to perform very poorly (see the plots in \citep{zhao2021consciousness}). For this reason and in order to make a fair comparison with DT planning, we have implemented a version of the B planning algorithm in which the model is a model over the states and we performed all of our experiments with this version of the algorithm. Also note that, while we have used regular representations in our P\&L experiments, in order to deal with the large number of tasks, we have made use of set-based representations in our TL experiments (see \citep{zhao2021consciousness} for the details of this representation).
Additionally, we also performed experiments with simplified tabular versions of the MIs of the two planning styles, whose pseudocodes are presented in Alg.\ \ref{alg:alg_MI_DT_tab} \& \ref{alg:alg_MI_B_tab}, respectively. The details of these algorithms are provided in Table \ref{tab:MI_DT_tab_details} \& \ref{tab:MI_B_tab_details}, respectively.
\begin{algorithm}[h!]
\centering
\caption{Modernized Version of Tabular OMCP with both a Parametric and Non-Parametric Model} \label{alg:alg_MI_DT_tab}
\begin{algorithmic}[1]
\State \text{Initialize $Q(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State \text{Initialize $\bar{m}_p(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State \text{Initialize the replay buffer $\bar{m}_{np}\gets \{ \}$}
\State $n_s\gets \text{number of time steps to perform search}$
\State $h\gets \text{search heuristic}$
\While{\text{$\bar{m}_p$ and $\bar{m}_{np}$ have not converged}}
\State $S\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(\text{tree\_search\_with\_bootstrapping}(S,\bar{m}_p,Q,n_s,h))$}
\State \text{$R, S', \text{done} \gets \text{environment($A$)}$}
\State \text{$\bar{m}_{np}\gets \bar{m}_{np} + \{(S,A,R,S', \text{done})\} $}
\State $S_{\bar{m}_{np}}, A_{\bar{m}_{np}}, R_{\bar{m}_{np}}, S_{\bar{m}_{np}}', \text{done}_{\bar{m}_{np}} \gets \text{sample from } \bar{m}_{np}$
\State \text{Update} $Q$ \& $\bar{m}_p$ \text{with $S_{\bar{m}_{np}}, A_{\bar{m}_{np}}, R_{\bar{m}_{np}}, S_{\bar{m}_{np}}', \text{done}_{\bar{m}_{np}}$}
\State $S\gets S'$
\EndWhile
\EndWhile
\State \textbf{Return} $Q$ \& $\bar{m}_p(s,a)$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h!]
\centering
\caption{Modernized Version of Tabular Dyna-Q of Interest with both a Parametric and Non-Parametric Model}\label{alg:alg_MI_B_tab}
\begin{algorithmic}[1]
\State \text{Initialize $Q(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State \text{Initialize $\bar{\bar{m}}_p(s,a)$ $\forall s\in\mathcal{S}$ \& $\forall a\in\mathcal{A}$}
\State \text{Initialize the replay buffer $\bar{\bar{m}}_{np}\gets \{ \}$}
\While{\text{$Q$, $\bar{\bar{m}}_p$ and $\bar{\bar{m}}_{np}$ have not converged}}
\State $S\gets \text{reset environment}$
\While{\text{not done}}
\State \text{$A\gets \epsilon\text{-greedy}(Q(S,\cdot))$}
\State \text{$R, S', \text{done} \gets \text{environment($A$)}$}
\State \text{Update} $\bar{\bar{m}}_p(S,A)$ \text{with $R$, $S'$, $\text{done}$}
\State \text{$\bar{\bar{m}}_{np}\gets \bar{\bar{m}}_{np} + \{(S,A,R,S', \text{done})\} $}
\State $S\gets S'$
\EndWhile
\While{\text{$Q$ has not converged}}
\State $S_{\bar{\bar{m}}_p}, A_{\bar{\bar{m}}_p} \gets \text{sample from } \mathcal{S}\times \mathcal{A}$
\State $R_{\bar{\bar{m}}_p},S_{\bar{\bar{m}}_p}', \text{done}_{\bar{\bar{m}}_p} \gets \bar{\bar{m}}_p(S_{\bar{\bar{m}}_p},A_{\bar{\bar{m}}_p})$
\State \text{Update} $Q(S_{\bar{\bar{m}}_p},A_{\bar{\bar{m}}_p})$ \text{with $R_{\bar{\bar{m}}_p}$, $S_{\bar{\bar{m}}_p}'$, $\text{done}_{\bar{\bar{m}}_p}$}
\State $S_{\bar{\bar{m}}_{np}}, A_{\bar{\bar{m}}_{np}}, R_{\bar{\bar{m}}_{np}}, S_{\bar{\bar{m}}_{np}}', \text{done}_{\bar{\bar{m}}_{np}}\gets \text{sample from } \bar{\bar{m}}_{np}$
\State \text{Update} $Q(S_{\bar{\bar{m}}_{np}},A_{\bar{\bar{m}}_{np}})$ \text{with $R_{\bar{\bar{m}}_{np}}$, $S_{\bar{\bar{m}}_{np}}'$, $\text{done}_{\bar{\bar{m}}_{np}}$}
\EndWhile
\EndWhile
\State \textbf{Return} $Q(s,a)$
\end{algorithmic}
\end{algorithm}
\begin{table}[h!]
\caption{Details and hyperparameters of Alg.\ \ref{alg:alg_MI_DT_tab}.}
\centering
\begin{tabular}{l|l}
\hline
$Q$ & a tabular value function (initialized as zero $\forall s\in\mathcal{S}$ and $\forall a\in\mathcal{A}$) \\
$\bar{m}_p$ & a tabular model (initialized as a hand-designed PDM (see Fig.\ \ref{app_fig:simple_gridworld_rew_pdm})) \\
$n_s$ & $|\mathcal{A}|$ \\
$h$ & breadth-first search \\
$\epsilon$ & linearly decays from $1.0$ to $0.0$ over the first 20 episodes \\
\hline
\end{tabular}
\label{tab:MI_DT_tab_details}
\end{table}
\begin{table}[h!]
\caption{Details and hyperparameters of Alg.\ \ref{alg:alg_MI_B_tab}.}
\centering
\begin{tabular}{l|l}
\hline
$Q$ & a tabular value function (initialized as zero $\forall s\in\mathcal{S}$ and $\forall a\in\mathcal{A}$) \\
$\bar{\bar{m}}_p$ & a tabular parametric model (initialized as a hand-designed PDM (see Fig.\ \ref{app_fig:simple_gridworld_rew_pdm})) \\
$\epsilon$ & linearly decays from $1.0$ to $0.0$ over the first 20 episodes \\
\hline
\end{tabular}
\label{tab:MI_B_tab_details}
\end{table}
\section{Additional Results}
\label{app_sec:add_results}
In this section, we provide complementary results to our empirical results in Sec.\ \ref{sec:experiments}. Specifically, we provide (i) performance plots that are obtained by evaluating the different planning styles in their corresponding models and (ii) total reward plots that are obtained by evaluating the different planning styles in both the considered environments and in their corresponding models. Note that while the performance plots are obtained by the measure in (\ref{eqn:perf_measure}), the total reward plots are obtained by simply adding the rewards obtained by the agent throughout the episodes (which is actually the expected \emph{undiscounted} return, i.e., when we use $\gamma=1.0$ in measure (\ref{eqn:perf_measure})).
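For reference, the relation between the two measures is simply the discount factor used when accumulating the episode rewards; a minimal Python sketch:
\begin{verbatim}
def episode_return(rewards, gamma=1.0):
    """gamma < 1 gives the performance measure; gamma = 1.0 gives the
    total (undiscounted) reward used in the total reward plots."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
\end{verbatim}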
\subsection{Experiments with CIs}
\subsubsection{PP Experiments}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_model_perf.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_pp_perf_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_model_perf_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_pp_perf_2}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The performance of the CIs of DT and B planning in their corresponding models (learned on the SG environment) in the PP setting with tabular and SA VE representations. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:classic_algs_pp_perf}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_model_totrew.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_pp_totrew_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_model_totrew_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_pp_totrew_2}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_model_totrew.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_pp_totrew_3}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_model_totrew_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_pp_totrew_4}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The total reward obtained by the CIs of DT and B planning (a, b) on the SG environment and (c, d) in their corresponding models (learned on the SG environment) in the PP setting with tabular and SA VE representations. Black \& gray dashed lines indicate total reward obtained by the optimal \& random policies, respectively. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:classic_algs_pp_totrew}
\end{figure}
\subsubsection{P\&L Experiments}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_perf_nolearntrans.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_pl_perf_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_perf_nolearntrans_VFA_SA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_pl_perf_2}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The performance of the CIs of DT and B planning in their corresponding models (learned on the SG environment) in the P\&L setting with tabular and SA VE representations. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:classic_algs_pl_perf}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_totrew_nolearntrans.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_pl_totrew_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_totrew_nolearntrans_VFA_SA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_pl_totrew_2}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_totrew_nolearntrans.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_pl_totrew_3}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_totrew_nolearntrans_VFA_SA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_pl_totrew_4}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The total reward obtained by the CIs of DT and B planning (a, b) on the SG environment and (c, d) in their corresponding models (learned on the SG environment) in the P\&L setting with tabular and SA VE representations. Black \& gray dashed lines indicate total reward obtained by the optimal \& random policies, respectively. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:classic_algs_pl_totrew}
\end{figure}
\subsubsection{TL Experiments}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_perf_nolearntrans_tr_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_tl_perf_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_perf_nolearntrans_tr.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_tl_perf_2}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_perf_nolearntrans_tr_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_tl_perf_3}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The performance of the CIs of DT and B planning (a) on the SG environment and (b, c) in their corresponding models (learned on the SG environment) in the TL setting with tabular and SA VE representations. Black \& gray dashed lines indicate the performance of the optimal \& random policies, respectively. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:classic_algs_tl_perf}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_totrew_nolearntrans_tr.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_tl_totrew_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_totrew_nolearntrans_tr_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_tl_totrew_2}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_totrew_nolearntrans_tr.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:classic_algs_tl_totrew_3}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_totrew_nolearntrans_tr_VFA.pdf}
\vspace{-0.25cm}
\caption{\small State Aggregation} \label{app_fig:classic_algs_tl_totrew_4}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The total reward obtained by the CIs of DT and B planning (a, b) on the SG environment and (c, d) in their corresponding models (learned on the SG environment) in the TL setting with tabular and SA VE representations. Black \& gray dashed lines indicate total reward obtained by the optimal \& random policies, respectively. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:classic_algs_tl_totrew}
\end{figure}
\subsection{Experiments with MIs}
\subsubsection{P\&L Experiments}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_perf_nolearntrans_final.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:modern_algs_pl_perf_sg_2}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The performance of the MIs of DT and B planning in their corresponding models (learned on the SG environment) in the P\&L setting with tabular VE representations. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:modern_algs_pl_perf_sg}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/true_learnedmodel_totrew_nolearntrans_final.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:modern_algs_pl_totrew_sg_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/apprx_learnedmodel_totrew_nolearntrans_final.pdf}
\vspace{-0.25cm}
\caption{\small Tabular} \label{app_fig:modern_algs_pl_totrew_sg_3}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The total reward obtained by the MIs of DT and B planning (a) on the SG environment and (b) in their corresponding models (learned on the SG environment) in the P\&L setting with tabular VE representations. The black dashed line indicates the total reward obtained by the optimal policy. Shaded regions are one standard error over 100 runs.}
\vspace{-0.1cm}
\label{app_fig:modern_algs_pl_totrew_sg}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/Empty10-RandTS_totrew.pdf}
\vspace{-0.25cm}
\caption{\small Empty 10x10} \label{app_fig:modern_algs_pl_totrew_mg_1}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/Fourrooms-RandTS_totrew.pdf}
\vspace{-0.25cm}
\caption{\small FourRooms} \label{app_fig:modern_algs_pl_totrew_mg_2}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/SCS9N1-RandTS_totrew.pdf}
\vspace{-0.25cm}
\caption{\small SCS9N1} \label{app_fig:modern_algs_pl_totrew_mg_3}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/LCS9N1-RandTS_totrew.pdf}
\vspace{-0.25cm}
\caption{\small LCS9N1} \label{app_fig:modern_algs_pl_totrew_mg_4}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The total reward obtained by the MIs of DT and B planning in the P\&L setting with NN VE representations. The black dashed lines indicate the total reward obtained by the optimal policy in the corresponding environment. Shaded regions are one standard error over 50 runs.}
\vspace{-0.1cm}
\label{app_fig:modern_algs_pl_totrew_mg}
\end{figure}
\subsubsection{TL Experiments}
\begin{figure}[H]
\centering
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/RDS-TrSQ-PR-RandTS_totrew.pdf}
\vspace{-0.25cm}
\caption{\small RDS Sequential} \label{app_fig:modern_algs_tl_totrew_mg_1}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_train_totrew.pdf}
\vspace{-0.25cm}
\caption{\small RDS Train (0.35)} \label{app_fig:modern_algs_tl_totrew_mg_2}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_025_totrew.pdf}
\vspace{-0.25cm}
\caption{\small RDS Test (0.25)} \label{app_fig:modern_algs_tl_totrew_mg_3}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_035_totrew.pdf}
\vspace{-0.25cm}
\caption{\small RDS Test (0.35)} \label{app_fig:modern_algs_tl_totrew_mg_4}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[height=2.45cm]{figures/RDS-TrZS-PR-RandTS_045_totrew.pdf}
\vspace{-0.25cm}
\caption{\small RDS Test (0.45)} \label{app_fig:modern_algs_tl_totrew_mg_5}
\end{subfigure}
\vspace{-0.1cm}
\caption{\small The total reward obtained by the MIs of DT and B planning in the TL setting with NN VE representations. The black dashed lines indicate the total reward obtained by the optimal policy in the corresponding environment. Shaded regions are one standard error over 50 runs.}
\vspace{-0.1cm}
\label{app_fig:modern_algs_tl_totrew_mg}
\end{figure}
\section*{Supplementary Information}
\tableofcontents
\setcounter{figure}{0}
\setcounter{equation}{0}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\theequation}{S\arabic{equation}}
\section{BEC preparation}
Bose-Einstein condensate production proceeds as in Ref.~\cite{Kollar2015aac}. To shape the BEC for this experiment, we use the same dynamical trap shaping technique as employed in our previous work reported in Ref.~\cite{Guo2019eab}. A nearly pure BEC is created in state $| F = 1, m_F = -1\rangle$. A harmonic potential consisting of two crossed beams of wavelength 1064 nm forms a trap of frequencies $(\omega_x,\omega_y,\omega_z)=2\pi\times[52.6(2),52.8(2),91.5(4)]$~Hz. The BEC has a population of $N=4.1(3)\times10^5$ and Thomas-Fermi radii of $(R_x,R_y,R_z)=[12.3(2),12.2(2),7.1(1)]$~$\mu$m. Finally, by changing the dither pattern of the trapping beams perpendicular to the pump, the trap shape is adiabatically deformed to produce an elongated gas extending 93~$\mu$m along the pump direction $\hat{x}$. A harmonic potential with similar trap frequencies is maintained in the other two directions. The centre-of-mass of its density distribution lies at $\mathbf{r}_{\mathrm{cm}} = (\text{49~}\mu\mathrm{m},\text{35~}\mu\mathrm{m})$ along $\hat{x}$ and $\hat{y}$ with respect to the cavity centre.
\section{Cavity, pump lasers, and frequency locks}
The confocal cavity is vibrationally stabilised using the method presented in Ref.~\cite{Kollar2015aac}. It is 1-cm long and has a radius of curvature $R = 1$~cm, resulting in a waist of its TEM$_{0,0}$ mode of $w_0=35$~$\mu$m. Its finesse is $5.5 \times 10^4$, yielding a cavity linewidth of $\kappa = 137$~kHz. With a single-atom, single-mode coupling $g_0$ of $2 \pi \times1.47$~MHz, the single-atom, single-mode cooperativity is $C = 2g_0^2/\kappa\gamma = 5$, where the atomic linewidth is $\gamma = 2\pi\times6$~MHz. Assuming a supermode enhancement factor of $\sim$10 (proportional to the inverse local interaction length scale $\xi$)~\cite{Vaidya2017tpa,Kroeze2021preprint}, the supermode single-atom cooperativity is $C^*\approx50$.
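These numbers are mutually consistent; a back-of-the-envelope check (ours, assuming $\kappa$ is quoted such that the $2\pi$ factors cancel in $C$):
\begin{verbatim}
g0 = 1.47e6      # single-atom, single-mode coupling / 2 pi  [Hz]
kappa = 137e3    # cavity linewidth / 2 pi  [Hz]
gamma = 6e6      # atomic linewidth / 2 pi  [Hz]

C = 2 * g0**2 / (kappa * gamma)
print(round(C, 1))    # ~5.3, consistent with the quoted C = 5
print(round(10 * C))  # supermode enhancement ~10 gives C* ~ 50
\end{verbatim}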
The 780-nm pump beams are each derived from a frequency-doubled 1560-nm fiber amplifier and seed laser; see Fig.~\ref{pumping}. The relative frequency between the two 1560-nm seed lasers is stabilised with respect to a frequency source oscillating at half of the cavity free spectral range ${\sim}7.5$~GHz. This frequency difference is controlled using a proportional-integral loop filter with feedback applied to seed `b'. A portion of the doubled 780-nm light from seed `a' is used as the illumination beam for the digital micro-mirror device. The DMD reflects this light into the path of the longitudinal cavity injection beam. Acousto-optical modulators are used to stabilise the intensity and adjust the relative detuning between the beams. Additional 1560-nm light from seed `a' is used to stabilise the science cavity using the Pound-Drever-Hall technique. The two pumps are detuned from the 5$^{2}S_{1/2}|2,-2\rangle$ to 5$^{2}P_{3/2}$ transition by 96~GHz and 111~GHz, respectively. Throughout the experiments, the pumps are equally detuned from the relevant cavity resonances by $\Delta^\text{a}_C = \Delta^\text{b}_C \equiv \Delta_C = -2 \pi \times 50~\mathrm{MHz}$.
\begin{SCfigure}
\centering
\includegraphics[width = 0.5\textwidth]{SuppFig1.pdf}
\caption{Pumping and laser locking schematics for two pump fields separated by one free spectral range (FSR). The blue trace is the experimentally measured confocal transmission, while the red curve is an illustration of the pump field line shape detuned by $\Delta_C$ from the nearby cavity resonance shown in blue. Listed are a proportional-integral (PI) loop filter, photodetector (PD), periodically poled lithium niobate (PPLN) doubling crystals for second harmonic generation (SHG), acousto-optical modulators (AOMs), and a digital micromirror device (DMD).}
\label{pumping}
\end{SCfigure}
\section{Lattice calibration and pump balancing}
We calibrate the lattice depth of pump beams by performing Kapitza-Dirac diffraction of the BEC. The phase of the pump fields at the BEC is controlled by the retroreflection mirror shared by the pump beams. Measuring the lattice depth of the combined pump beams, we adjust the translation stage on which this mirror is mounted to match the phases of the pump lattices at the position of the atoms. We note that the beat length of the two pump lattices (separated in optical frequency by 15~GHz) is $\sim$5~mm, much larger than the atomic cloud size. Therefore, small mechanical fluctuations from the mirror mount will not cause the lattice to become out-of-phase at the atoms. The difference in recoil energy from this difference in frequency is on the order of ${\sim} 0.1$~Hz and thus negligible, as is the change in wavelength.
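Both quoted scales can be checked numerically (our own arithmetic; we read the $\sim$5~mm beat length as the distance $c/4\Delta\nu$ over which the two lattice intensities, which vary as $\cos(2k_rx)$, slip out of phase):
\begin{verbatim}
c = 2.998e8      # speed of light [m/s]
h = 6.626e-34    # Planck constant [J s]
m = 1.443e-25    # 87Rb mass [kg]
lam = 780e-9     # pump wavelength [m]
dnu = 15e9       # frequency difference between the pumps [Hz]

print(c / (4 * dnu))              # ~5e-3 m, i.e. the ~5 mm beat length
E_r = h / (2 * m * lam**2)        # recoil frequency E_r / h  [Hz]
print(E_r * 2 * dnu / (c / lam))  # ~0.3 Hz recoil difference, ~0.1 Hz scale
\end{verbatim}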
To bring into balance the cavity-mediated interactions induced by each pump, we perform a sequence of single-pump self-organisation experiments. We linearly ramp up each beam in 5~ms and note the time at which the superradiant threshold is reached. The interaction strength can then be balanced by adjusting the ramp rate such that superradiance on a single FSR occurs at the same time for each beam. This ensures that the Raman coupling rate from each FSR is balanced, i.e., $\Omega_\text{a}/\Delta^\text{a}_A = \Omega_\text{b}/\Delta^\text{b}_A$, which then balances the cavity interaction strength for each pump.
\section{Holographic reconstruction of cavity emission}
To perform the holographic imaging (spatial heterodyne detection) of the cavity emission, we follow the procedure established in Refs.~\cite{Kroeze2018sso,Guo2019spa} for a single pump field and extend it to the case of two pumps. Above threshold, the cavity emission has optical frequency content at both $\omega_a$ and $\omega_b$ (the two pump frequencies), separated by one FSR. To fully reconstruct the cavity electric field, therefore, one must illuminate the camera with two large local oscillator (LO) beams at frequencies $\omega_a$ and $\omega_b$ at different angles with respect to the propagation direction of the cavity emission. This is illustrated in Fig.~\ref{fig1}a. The interference between LO and the cavity emission produces an image with an intensity $I_h(\mathbf{r})$ that may be expressed as
\begin{align}
I_{h}(\mathbf{r}) = \sum_{i = a,b}\lvert E_{c,i}(\mathbf{r}) \rvert ^2 + \lvert E_{\text{LO},i}(\mathbf{r}) \rvert ^2 + 2\chi_i |E_{c,i}(\mathbf{r})E_{\text{LO},i}(\mathbf{r})| \cos \left[ \Delta \mathbf{k}_i \cdot \mathbf{r} + \Delta\phi_i(\mathbf{r}) + \delta_i \right],
\label{hologram}
\end{align}
where we have ignored the fast oscillating term at $\omega_b - \omega_a$, and $E_{c,i}$ and $E_{\text{LO},i}$ are the cavity fields and LO fields for the two FSRs, respectively. Reduction of fringe contrast is characterised by the factor $\chi_i$. The additional phase terms $\delta_i$ account for the overall phase drift between the LO beams and the cavity emission in each experimental realisation due to technical fluctuations of the apparatus. Because of the angle difference, information from the cavity fields $E_{c,a}$ and $E_{c,b}$ is encoded in spatial wavevectors $\Delta\mathbf{k}_a$ and $\Delta\mathbf{k}_b$, respectively. Assuming the cavity field varies slowly over the spatial scale $2 \pi/|\Delta \mathbf{k}_i|$, we may then extract the cavity field amplitudes $|E_{c,i} (\mathbf{r})|$ and phase profiles $\Delta\phi_i(\mathbf{r}) + \delta_i$ by demodulating the image at $\Delta \mathbf{k}_{i}$.
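A minimal numerical sketch of this demodulation step (ours; the Gaussian low-pass filter and its width are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np

def demodulate(I, dk, sigma=0.2):
    """Extract one cavity field from a spatial-heterodyne image I by
    demodulating at the fringe wavevector dk = (kx, ky), in radians per
    pixel: shift the corresponding sideband to k = 0 and low-pass it."""
    ny, nx = I.shape
    y, x = np.mgrid[0:ny, 0:nx]
    demod = I * np.exp(-1j * (dk[0] * x + dk[1] * y))
    F = np.fft.fft2(demod)
    kx = np.fft.fftfreq(nx) * 2 * np.pi
    ky = np.fft.fftfreq(ny) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    F *= np.exp(-(KX**2 + KY**2) / (2 * sigma**2))   # low-pass filter
    field = np.fft.ifft2(F)
    return np.abs(field), np.angle(field)   # ~ chi|E_c E_LO| and its phase
\end{verbatim}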
By using this scheme---an LO at each frequency but at different spatial wavevectors---we take a single spatial heterodyne image that simultaneously allows us to reconstruct the intracavity field for each resonance. The phase of the nonlocal emission should differ by $\pi$ in the two images and indeed this signal cancels in their digital sum, as shown in Fig.~\ref{fig2}c.
\section{Linear phase gradient in Figure 2 images}
The origin of the rainbow-like baseline linear phase gradient seen in the images of the supermode DW polariton is unrelated to the phonon physics presented. It is likely an artefact of a small-angle (${\sim}0.45^\circ$) misalignment between the BEC and $\hat{x}$ (inducing an apparent phase shift), as well as of nonlinear effects due to pumping far above threshold ($\Omega/\Omega_\text{th}\approx20$, in this case). While strong pumping is needed to obtain these illustrative, high signal-to-noise images, it is unnecessary for the phonon dispersion measurements reported in the main text. That is, we employ far weaker, near-threshold pump strengths in the experiments reported in Figs.~\ref{fig3} and~\ref{fig4}.
We also note that the images in Figs.~\ref{fig2}d,e are taken by first exciting the phonon modes near the pump power $\Omega \approx \Omega_\text{th}$ and then ramping up one pump to be much stronger than the other for enhanced contrast for the image associated with the LO from that pump. The beam power is rapidly ramped up to $\Omega/\Omega_\text{th}\approx20$ in a duration of $50~\mu$s, which is much faster than the phonon dynamics.
\section{Generation of longitudinal probe with the DMD}
\begin{figure*}
\includegraphics[width=0.99\textwidth]{SuppFig2.pdf}
\caption{Measured cavity transmission fields for the DMD probes and line cuts of their phase profiles. The values of $k_\perp/k_r$ in panels (a)--(f) are $[0, 2.1, 4.2, 6.3, 8.5, 10.6]\times 10^{-3}$, respectively. The white dashed line in panel a shows the length of the cuts in panel g. Additional features around the main probe field are due to imperfections of the confocal cavity and stray light from the DMD probe beam. The grey area is the half plane that contains the mirror image of the probe field, and we do not show this redundant portion of the image in the main text figures.}
\label{SuppFig2}
\end{figure*}
The DMD plane is set at approximately the Fourier plane of the cavity centre by using a 100-mm focal length in-vacuum plano-convex lens. The phase aberration of the DMD and misalignment of the illumination beam must be calibrated out of the field images sent into the cavity. We first calibrate these aberrations with an out-of-vacuum setup, similar to that used in Ref.~\cite{Papageorge2016ctm}. Then, using a cavity that is far from the confocal degeneracy point, an additional quadratic phase correction is added onto the DMD transfer function to effectively bring the DMD plane to the Fourier plane of the cavity centre. Finally, any intracavity field we desire can be generated by programming its Fourier transform to be displayed on the DMD. In our experiment, we perform Bragg spectroscopy at six different momenta; the measured DMD probe fields associated with these momenta are shown in Fig.~\ref{SuppFig2}. The maximum $k_\perp$ modulation we can inject is limited by the numerical aperture of the lens that in-couples the DMD light and by the mirror's holder.
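The target-field-to-DMD-plane calculation can be sketched as follows (ours; the probe waist, grid, and the omitted binarisation of the hologram are illustrative simplifications):
\begin{verbatim}
import numpy as np

def dmd_plane_field(k_perp, w=50e-6, n=512, dx=1e-6):
    """A Gaussian probe carrying a transverse phase modulation
    exp(i k_perp x) at the cavity centre is produced by displaying the
    Fourier transform of the target field at the DMD plane."""
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)
    target = np.exp(-(X**2 + Y**2) / w**2) * np.exp(1j * k_perp * X)
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(target)))
\end{verbatim}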
\section{Bragg spectroscopy and self-correlation analysis}
The dynamic susceptibility of the system can be measured by using the longitudinal probe imprinted with a phase modulation $\propto k_\perp$ along $\hat{x}$ to stimulate, along with the pump fields, the scattering of atoms into the momentum states $|\Psi(k_\perp)\rangle_+ = \sum_{\sigma,\tau=\pm 1} |\tau k_r + \sigma k_\perp, \sigma k_r\rangle$, as illustrated in Fig.~\ref{fig3}a. There is another possible set of states that we do not choose to stimulate or imprint given by $|\Psi(k_\perp)\rangle_- = \sum_{\sigma,\tau=\pm 1} |\tau k_r - \sigma k_\perp, \sigma k_r\rangle$; note that $|\Psi(-k_\perp)\rangle_+=|\Psi(k_\perp)\rangle_-$. We choose $|\Psi\rangle_+$ versus $|\Psi\rangle_-$ by setting the phase of the field imprinted by the DMD. The $|\Psi\rangle_+$ state yields the phase \textit{advancing} images in the main text. Because the scattering is coherent, the total atomic state is in a superposition of $|\Psi(k_\perp)\rangle_+$ and $|0,0\rangle$. In real space, adding the $|\Psi\rangle_+$ excitation on top of a uniform chequerboard lattice corresponds to adding a shearing lattice distortion.
We perform Bragg spectroscopy of the system's excitations by monitoring the increase in the population of the scattered atoms in the time-of-flight images versus the relative detuning between the longitudinal probe and the transverse pump. This detuning is adjusted with an AOM on the longitudinal probe beam path. The pump power is first ramped up to prepare the system with a given cavity-mediated interaction strength, and then the longitudinal probe beam is pulsed on for 0.5~ms. For measurements of mode-softening below the transition threshold, the response can be read out by directly counting the atom population excited into the $|\Psi\rangle_+$ momentum state. There are no background atoms at these momenta because there is no population of this momentum state in the normal phase: any atom signal is due to the Bragg excitation. The resonance frequency is extracted by fitting the spectrum to a symmetric double-Lorentzian peak. The set of such frequencies is plotted in Fig.~\ref{fig3}e along with curves produced using the theory presented below. The blue uncertainty bands are primarily due to the atom number uncertainty in the cavity-mediated interaction strength. The bands broaden close to threshold where the photon contribution plays an increased role.
For measurements above the threshold, however, the situation is complicated by the macroscopic population of atoms already in the $|\Psi\rangle_0 = \sum_{\sigma,\tau=\pm 1} |\tau k_r, \sigma k_r\rangle$ excited momentum state. While the longitudinal probe creates an additional momentum excitation, the additional atoms are hard to distinguish from that already present because a) the number of these atoms is small compared to the number already condensed into this state, and b) $k_\perp \ll k_r$, so that $|\Psi\rangle_+$ cannot be distinguished from $|\Psi\rangle_0$ given the limited 20~ms time-of-expansion of the time-of-flight image. Thus, the same momentum-space atom-counting method used for below-threshold spectroscopy measurements is not viable.
We therefore turn to an alternative method that uses these same absorption time-of-flight (TOF) images, but performs an analysis based on momentum correlations rather than momentum-space atom counting. To explain how this works, we first note that in real space, the longitudinal probe creates a small periodic distortion in the originally perfect chequerboard lattice. We can quantify this distortion by computing the momentum-space self-correlation $\langle \rho(\mathbf{k}+\delta \mathbf{k})\rho(\mathbf{k}) \rangle$ of the atomic momentum distribution $\rho(\mathbf{k})$, which can be computed from $\langle \rho(\mathbf{r}+\delta \mathbf{r})\rho(\mathbf{r}) \rangle_{\mathrm{TOF}}$ in each TOF image. By focusing on the correlation between the shape of the wavepackets centred at momentum states $|\Psi\rangle_0$ and $|0,0\rangle$, we can discern the presence of atoms excited to $\pm k_\perp$ states. This is because the correlation in the shape of $\rho(\mathbf{k})$ at $\mathbf{k}=(0,0)$ and at the four $(\pm k_r,\pm k_r)$ regions is strongest when a perfect chequerboard lattice is present: the wavepacket of the excited momentum state $|\Psi\rangle_0$ is simply a momentum displacement of that at $\mathbf{k}=(0,0)$. However, in the presence of a small lattice distortion given by $k_\perp$, the structure factor is reduced and destructive matter-wave interference results in a reduction in the correlation. This correlation reduction is what is plotted in the inset of Fig.~\ref{fig4}. The phonon mode resonances are manifest in the correlation signal dips.
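Numerically, this self-correlation can be computed efficiently with the Wiener--Khinchin theorem; a minimal sketch (ours):
\begin{verbatim}
import numpy as np

def self_correlation(rho):
    """Momentum-space self-correlation <rho(k + dk) rho(k)> of a
    background-subtracted TOF image rho, as a function of dk."""
    F = np.fft.fft2(rho)
    corr = np.fft.ifft2(np.abs(F)**2).real   # circular autocorrelation
    return np.fft.fftshift(corr)             # put dk = 0 at the centre
\end{verbatim}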
To perform the above-threshold measurement, we first fit the entire image to a broad 2D Gaussian profile as an estimate of the background contribution arising from atom heating and from atom scattering halos resulting from the pumps. Then the self-correlation analysis is performed on the background-subtracted images. Due to imperfect subtraction, negative values appear in parts of the correlation. Note that since we are only interested in the correlations between Bragg peaks---all positive valued---the negative values do not affect the results. This analysis is repeated for each value of probe detuning $\omega$ and $k_\perp$ to form the experimental $\omega(k_\perp)$ dispersion curve shown in Fig.~\ref{fig4}. Due to the sensitivity to atom number fluctuations in the correlation versus $\omega$ spectroscopy data, we perform bootstrap sampling to obtain a more reliable error estimate for the data points comprising $\omega(k_\perp)$.
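The bootstrap error estimate at each $(\omega, k_\perp)$ point can be sketched as (ours):
\begin{verbatim}
import numpy as np

def bootstrap_sem(samples, n_resamples=1000, seed=0):
    """Bootstrap estimate of the standard error of the mean correlation
    signal from the per-shot samples at one probe detuning."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples)
    means = [rng.choice(samples, size=samples.size, replace=True).mean()
             for _ in range(n_resamples)]
    return float(np.std(means))
\end{verbatim}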
\section{Self-organisation in a double-pumped confocal cavity}
To derive a theory for obtaining the dispersion-relation curves plotted in the figures of the main text, we start with the Hamiltonian describing atoms coupled to two degenerate resonances of a confocal cavity under the transverse double-pump scheme:
\begin{align}
\label{eq:Hamiltonian}
H=& -\sum_{\mu}\Delta_{\mu} \hat{a}^\dagger_{\mu} \hat{a}^{}_{\mu} -\sum_{\mu}\Delta_{\mu} \hat{b}^\dagger_{\mu} \hat{b}^{}_{\mu} \nonumber \\
&+ N\int d^3\mathbf{x}
\hat{\Psi}^\dagger(\mathbf{x})\left(-\frac{\nabla^2}{2m}+ V(\mathbf{x})
+U|\hat{\Psi}(\mathbf{x})|^2\right)\hat{\Psi}(\mathbf{x})\nonumber \\
&+N \int d^3\mathbf{x}
\hat{\Psi}^\dagger(\mathbf{x})\left(\frac{|\hat{\phi}_a|^2}{\Delta^a_A} + \frac{|\hat{\phi}_b|^2}{\Delta^b_A}\right)
\hat{\Psi}(\mathbf{x}),
\end{align}
where $N$ is the number of atoms and $\Delta^a_A$ and $\Delta^b_A$ are the atomic detunings of the two pumps. The terms in the first line are the cavity field energies in a frame rotating at one FSR. The terms in the second line are the atomic kinetic energy, potential energy from a trap $V(\mathbf{x})$, and contact interaction of strength $U$ (note this is separate from the cavity-mediated atom-atom interaction discussed in the main text). The third line contains the interaction terms between the BEC and the two cavity mode families. The matter wave field is denoted by $\hat{\Psi}(\mathbf{x})$, while the light fields are $\hat{\phi}_a$ and $\hat{\phi}_b$, which contain both the standing-wave transverse pump and a sum over all cavity modes with transverse and longitudinal spatial dependence. In writing the above, we have set $\hbar=1$, and we will continue to do so throughout this and the following sections.
For notational ease, we index the transverse modes with the single variable $\mu$, rather than separate indices $l$ and $m$ for the Hermite-Gauss functions along $\hat{x}$ and $\hat{y}$, respectively. We thus use $\mu = \{l,m\}$ to label the transverse electromagnetic mode function TEM$_{\mu}$ $\equiv$ TEM$_{l,m}$, and we define the total mode family index $n_\mu = l + m$. The light field contains two transverse pumps with strengths $\Omega_a$ and $\Omega_b$ coupled to two degenerate families of cavity modes separated by one FSR. The total fields are
\begin{align}
\hat{\phi}_a(\mathbf{r}) &= \Omega_a \cos(k_rx) \nonumber \\
&+ g_0\sum_{\mu} \hat{a}_{\mu}
\Xi_\mu(\mathbf{r})\cos{\left[k_r\left(z+\frac{r^2}{2R(z)}\right)-\theta_{\mu}(z)\right]}, \nonumber \\
\hat{\phi}_b(\mathbf{r}) &= \Omega_b \cos(k_rx) \nonumber \\ &+g_0\sum_{\mu} \hat{b}_{\mu}
\Xi_\mu(\mathbf{r})\cos{\left[k_r\left(z+\frac{r^2}{2R(z)}\right)-\theta_{\mu}(z) + \pi/2\right]},
\label{totalLightField}
\end{align}
where $\Xi_{\mu}(\mathbf{r})$ is the spatial profile of a Hermite-Gauss mode of the cavity and the summation runs only over even or odd modes. The form of the light field
results in a spatially varying single-photon Rabi frequency $g_0 \Xi_{\mu} (\mathbf{r})/\Xi_{0,0}(0)$. The radius of curvature is given by $R(z) = z + {z^2_R}/{z}$, where $z_R$ is the Rayleigh range. The term $\theta_\mu(z)$ accounts for the Gouy phase shift---the fact that in a confocal cavity, different transverse modes, though degenerate, have different longitudinal phase variation. This is given by
\begin{align}
\theta_{\mu}(z) &= \vartheta(z) + n_\mu [\vartheta(L/2) + \vartheta(z)] - \Theta, \label{gouy}\\
\vartheta(z) &= \mathrm{arctan} \left( \frac{z}{z_R} \right).
\end{align}
The phase offset $\Theta$ is fixed by the boundary condition that the light field vanishes at the two mirrors. There is also an additional $\pi/2$ phase shift between the two degenerate resonances separated by one cavity FSR; see Ref.~\cite{Guo2019eab} for more details.
\subsection{Derivation of equations of motion}
\subsubsection{Effective matter-light coupling}
In the main text, we discussed the effective atom-atom interaction mediated by the cavity, specialising to atoms at the cavity midplane, $z=0$. In this section, we discuss the cavity-mediated interaction more fully. To do this, it is clearer to consider this as an interaction between atomic density waves. We therefore expand the atomic wavefunction as
\begin{equation}
\hat{\Psi} = Z(z - z_0) \left\{ \hat{\psi}_0 (\mathbf{r}) \mu_0 (k_r x) + [\hat{\psi}_1 (\mathbf{r}) e^{-i ( k_r z + \delta) } + \text{H.c.} ] \mu_1(k_r x) \right\},
\label{atomexpansion}
\end{equation}
where $\mu_n(\phi)$ are the $2 \pi$ periodic eigenfunctions of the Mathieu equation, $\partial_\phi^2 \mu_n + [a_n - 2 \mathcal{Q} \cos(2 \phi)] \mu_n=0$, with eigenvalues $a_n$ and the $\mu_n(\phi)$ describing wavefunctions in the pump lattice. The dimensionless parameter
$\mathcal{Q} = -\Omega_a^2/(4 \Delta^a_A \omega_r) - \Omega_b^2/(4 \Delta^b_A \omega_r)$ is the pump lattice depth in units of the recoil energy $\omega_r = k_r^2/2m$.
The factor $Z(z)$ is the envelope function in $\hat{z}$; $\psi_0$ is the condensate wavefunction; $\psi_1$ is the envelope function of the atomic density wave formed by scattered atoms; and $\delta$ is a fixed phase offset that we will later choose for convenience.
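The Mathieu quantities entering this expansion are available in standard libraries. The sketch below evaluates $a_0$, $a_1$, the dressed recoil energy $\omega_0 = \omega_r[1+a_1(\mathcal{Q})-a_0(\mathcal{Q})]$ used below, and the overlap factor $O(\mathcal{Q})$ introduced shortly, identifying $\mu_0$ and $\mu_1$ with the even $2\pi$-periodic Mathieu functions $\mathrm{ce}_0$ and $\mathrm{ce}_1$ and assuming $\mathcal{Q}>0$; the unit-mean-square normalisation of $\mu_n$ is our own convention and may differ from the text by a constant factor.
\begin{verbatim}
import numpy as np
from scipy.special import mathieu_a, mathieu_cem

def lattice_parameters(Q, omega_r=1.0):
    """Characteristic values a_0, a_1, dressed recoil energy omega_0,
    and overlap O(Q) for a pump-lattice depth Q > 0."""
    a0, a1 = mathieu_a(0, Q), mathieu_a(1, Q)
    omega0 = omega_r * (1.0 + a1 - a0)

    phi = np.linspace(0.0, 2.0 * np.pi, 4001)
    ce0 = mathieu_cem(0, Q, np.degrees(phi))[0]   # scipy takes degrees
    ce1 = mathieu_cem(1, Q, np.degrees(phi))[0]
    ce0 /= np.sqrt(np.trapz(ce0**2, phi) / (2 * np.pi))
    ce1 /= np.sqrt(np.trapz(ce1**2, phi) / (2 * np.pi))
    overlap = np.trapz(ce0 * ce1 * np.cos(phi), phi) / (2 * np.pi)
    return a0, a1, omega0, overlap
\end{verbatim}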
Our aim will be to derive coupled equations for the density wave envelope function $\psi_1$ and the cavity light; this will allow us to find the dispersion of the normal modes (DW polaritons), both below and above the DW polariton condensation threshold.
To consider the cavity fields, we may first focus on a single degenerate resonance, and then later combine the effects of both resonances. The linear-order light-matter coupling term in the Hamiltonian is
\begin{equation}\label{atomwave}
H^{a}_{LM}=
\frac{g_0 \Omega_a}{\Delta^a_A} \int d^3 \mathbf{x} |\Psi(\mathbf{x})|^2 \sum_{\mu}\Xi_{\mu}(\mathbf{r})(\hat{a}^{\dagger}_{\mu} + \hat{a}_{\mu}) \cos(k_r x) \cos[k_r z - \theta_\mu (z)].
\end{equation}
We have assumed that the Rayleigh range is much larger than the BEC, enabling us to drop the $r^2/2R(z)$ term. We next integrate out the $z$ dependence because the dynamics of interest occur in the transverse plane. This can be done straightforwardly in the limit where we assume $Z(z-z_0)$ has a width $w_z$ and that $\lambda\ll w_z\ll z_R$. The first inequality allows us to drop any terms oscillating at wavevectors $k_r$ or $2k_r$ along $\hat z$; this imposes momentum conservation so that recoiling atoms receive momentum kicks given by the difference between the pump and cavity momenta. The second condition means that we can evaluate the slowly varying phase terms as being effectively constant over the width of the gas: $\theta_{\mu}(z) \simeq \theta_{\mu}(z_0)$. Similarly, we will drop the fast oscillating terms along $\hat{x}$. Using the expression for the atomic wavefunction in Eq.~\eqref{atomexpansion} and keeping terms up to linear order in $\hat{\psi}_1$, the coupling term then becomes
\begin{equation}
H^{a}_{LM}=\eta_a \int d \mathbf{r} \sum_{\mu}\Xi_{\mu}(\mathbf{r})(\hat{a}^{\dagger}_{\mu} + \hat{a}_{\mu})[\hat{\psi}_0(\mathbf{r}) \hat{\psi}^{\dagger}_{1} (\mathbf{r}) e^{i (\delta+ \theta_{\mu} (z_0))} + \text{H.c.} ],
\end{equation}
where $\eta_{a,b} \equiv O(\mathcal{Q}) g_0 \Omega_{a,b}/(4\Delta^{a,b}_A)$ is the two-photon coupling strength. The factor $O(\mathcal{Q}) = \langle \mu_1(\phi) \mu_0(\phi) \cos(\phi) \rangle$ is the overlap of scattered atoms with the pump potential and the condensate in the pump lattice averaged over one lattice period $\phi \in [0,2 \pi]$. This overlap factor depends on the dimensionless pump lattice strength $\mathcal{Q}$ via the form of the Mathieu functions $\mu_n$ as defined above. To simplify the expression, we choose $\delta = \Theta - \vartheta(z_0)$ and define $\theta_0 = \pi/4 + \mathrm{arctan}(z_0/z_R)$. We finally arrive at the following form of the interaction
\begin{equation}
H^{a}_{LM}=\eta_a \int d \mathbf{r} \sum_{\mu}\Xi_{\mu}(\mathbf{r})(\hat{a}^{\dagger}_{\mu} + \hat{a}_{\mu})[\hat{\psi}_0(\mathbf{r}) \hat{\psi}^{\dagger}_{1} (\mathbf{r}) e^{i n_\mu \theta_0} + \text{H.c.} ].
\end{equation}
The calculation for the degenerate resonance one FSR away (to the red detuning side)---the $\hat b_{\mu} + \hat b_{\mu}^\dagger$ modes---is identical to the above, except there is an additional $\pi/2$ phase shift in $\theta_{\mu}$ that shifts the longitudinal cavity profile. We can now rewrite the full Hamiltonian up to linear order in light-matter coupling as
\begin{align}
H =& -\sum_{\mu}\Delta_{\mu} \hat{a}^\dagger_{\mu} \hat{a}^{}_{\mu} -\sum_{\mu}\Delta_{\mu} \hat{b}^\dagger_{\mu} \hat{b}^{}_{\mu} \nonumber \\
&+ \int d\mathbf{r} \hat{\psi}_{1}^{\dagger}(\mathbf{r}) \left[-\frac{\nabla^2}{2m}+ \omega_0 + V(\mathbf{r})
+U|\psi_{1}(\mathbf{r})|^2 \right] \hat{\psi}_{1}(\mathbf{r}) \nonumber \\
&+ \eta_a \int d \mathbf{r} \sum_{\mu}\Xi_{\mu}(\mathbf{r})(\hat{a}^{\dagger}_{\mu} + \hat{a}_{\mu})[\hat{\psi}_0(\mathbf{r}) \hat{\psi}^{\dagger}_{1} (\mathbf{r}) e^{i n_\mu \theta_0} + \text{H.c.} ] \nonumber \\
&+ \eta_b \int d \mathbf{r} \sum_{\mu}\Xi_{\mu}(\mathbf{r})(\hat{b}^{\dagger}_{\mu} + \hat{b}_{\mu})[i \hat{\psi}_0(\mathbf{r}) \hat{\psi}^{\dagger}_{1} (\mathbf{r}) e^{i n_\mu \theta_0} + \text{H.c.} ].
\label{eq:lin-ham}
\end{align}
Here, $\omega_0 = \omega_r[1+a_1(\mathcal{Q}) - a_0(\mathcal{Q})]$ is the recoil energy of the atoms in the chequerboard momentum state $(\pm k_r,\pm k_r)$ in the presence of a deep pump lattice, written in terms of the Mathieu equation parameter $\mathcal{Q}$.
Note that the additional factor of $i$ in the last line of Eq.~\eqref{eq:lin-ham} is due to the aforementioned $\pi/2$ shift for $\hat b_\mu$ modes. To model a nonideal degenerate cavity, i.e., one with imperfectly degenerate mode families due to mirror aberrations, we take $\Delta_\mu = \Delta_C + n_{\mu} \epsilon$. Here, $\Delta_C$ is the pump detuning to the reference mode (typically the peak of the mode spectrum; see, e.g., Fig.~\ref{pumping}) and $\epsilon$ is the residual mode splitting due to cavity mirror nonidealities; this is typically around 5~MHz for our confocal cavity.
We now focus on the case of two confocal degenerate resonances that contain only the even transverse modes, i.e., $n_\mu~\mathrm{mod}~2 = 0$. To account for the infinite number of transverse modes in a more tractable manner, we will find it useful to define the following cavity operators:
\begin{align}
\hat{\mathcal{A}}(\mathbf{r}) &= \frac{1}{\sqrt{\mathcal{N}_a}}\sum_{\mu} \frac{\Xi_{\mu}(\mathbf{r})}{w_0/\sqrt{2}} \left[ \hat{a}_{\mu} \cos(n_{\mu} \theta_0) - \hat{b}_{\mu} \sin(n_{\mu} \theta_0) \right] \mathcal{S}^{+}_{\mu}, \nonumber \\
\hat{\mathcal{B}}(\mathbf{r}) &= \frac{1}{\sqrt{\mathcal{N}_b}}\sum_{\mu} \frac{\Xi_{\mu}(\mathbf{r})}{w_0/\sqrt{2}} \left[ \hat{a}_{\mu} \sin(n_{\mu} \theta_0) + \hat{b}_{\mu} \cos(n_{\mu} \theta_0) \right]\mathcal{S}^{+}_{\mu},
\label{light_reduce}
\end{align}
where $\mathcal{N}_{a,b}$ are normalisation factors to guarantee bosonic commutation relations and the factor
\begin{equation}
\mathcal{S}^{+}_{\mu} = \frac{1}{2} [1 + (-1)^{n_\mu}]
\end{equation}
is chosen to cancel the odd modes in a degenerate confocal resonance such that the summation can be carried out over all transverse modes. Computing the commutation relations, we find that
\begin{align}
[\hat{\mathcal{A}}(\mathbf{r}), \hat{\mathcal{A}}^{\dagger}(\mathbf{r}^\prime) ] &= \frac{1}{\mathcal{N}_a}\sum_{\mu} \frac{\Xi_{\mu} (\mathbf{r})}{w_0/\sqrt{2}} \frac{\Xi_{\mu} (\mathbf{r}^\prime)}{w_0/\sqrt{2}}\mathcal{S}^{+}_{\mu} = \frac{1}{2 \mathcal{N}_a} [\delta(\mathbf{r} - \mathbf{r}^\prime)+\delta(\mathbf{r} + \mathbf{r}^\prime)], \nonumber \\
[\hat{\mathcal{B}}(\mathbf{r}), \hat{\mathcal{B}}^{\dagger}(\mathbf{r}^\prime) ] &= \frac{1}{\mathcal{N}_b}\sum_{\mu} \frac{\Xi_{\mu} (\mathbf{r})}{w_0/\sqrt{2}} \frac{\Xi_{\mu} (\mathbf{r}^\prime)}{w_0/\sqrt{2}}\mathcal{S}^{+}_{\mu} = \frac{1}{2 \mathcal{N}_b} [\delta(\mathbf{r} - \mathbf{r}^\prime)+\delta(\mathbf{r} + \mathbf{r}^\prime)],
\end{align}
where the normalisation condition of the Hermite-Gauss mode function is given by
\begin{equation}
\int \frac{d \mathbf{r}}{w^2_0/2} \Xi_{\mu}(\mathbf{r}) \Xi_{\mu}(\mathbf{r}) = 1.
\end{equation}
The appearance of the mirror-image term $\delta(\mathbf{r} + \mathbf{r}^\prime)$ arises because the summation is restricted to modes with even spatial symmetry. We note that this term does not play a role in the atom-cavity interaction because we place the BEC away from the cavity centre so that no atoms exist at the mirror-image position. Consequently, the same normalisation $\mathcal{N}_{a,b} = 1/2$ satisfies the bosonic commutation relation for both $\hat{\mathcal{A}}$ and $\hat{\mathcal{B}}$. Using orthonormality of the Hermite-Gauss mode functions $\Xi_{\mu}$, the original boson modes may be rewritten as
\begin{align}
\hat{a}_{\mu} &= \sqrt{\frac{1}{2}} \int d\mathbf{r} \frac{\Xi_{\mu}(\mathbf{r})}{w_0/\sqrt{2}} \left[ \hat{\mathcal{A}} (\mathbf{r}) \cos(n_{\mu} \theta_0) + \hat{\mathcal{B}}(\mathbf{r}) \sin(n_{\mu} \theta_0) \right], \nonumber \\
\hat{b}_{\mu} &= \sqrt{\frac{1}{2}} \int d\mathbf{r} \frac{\Xi_{\mu}(\mathbf{r})}{w_0/\sqrt{2}} \left[ \hat{\mathcal{A}} (\mathbf{r}) \cos(n_{\mu} \theta_0) - \hat{\mathcal{B}}(\mathbf{r}) \sin(n_{\mu} \theta_0) \right].
\end{align}
Employing the $\hat{\mathcal{A}}$, $\hat{\mathcal{B}}$ basis, we can now rewrite the original Hamiltonian in terms of an inverse Green's function $\mathcal{D}^{-1}(\mathbf{r},\mathbf{r}^\prime)$:
\begin{align}
H =& -\Delta_C \int \frac{d\mathbf{r} d \mathbf{r}^\prime}{w^2_0/2} [\hat{\mathcal{A}}^{\dagger}(\mathbf{r})\mathcal{D}^{-1}(\mathbf{r},\mathbf{r}^\prime)\hat{\mathcal{A}}(\mathbf{r}^\prime)+\hat{\mathcal{B}}^{\dagger}(\mathbf{r})\mathcal{D}^{-1}(\mathbf{r},\mathbf{r}^\prime)\hat{\mathcal{B}}(\mathbf{r}^\prime)] \nonumber \\ &+ \int d\mathbf{r} \hat{\psi}_{1}^{\dagger}(\mathbf{r}) \left[-\frac{\nabla^2}{2m}+ \omega_0 + V(\mathbf{r})
+U|\psi_{1}(\mathbf{r})|^2 \right] \hat{\psi}_{1}(\mathbf{r}) \nonumber \\
&+ \eta_a \int d \mathbf{r} \frac{w_0}{\sqrt{2}} [\hat{\psi}_{1}^{\dagger}(\mathbf{r}) + \hat{\psi}_{1}(\mathbf{r})][\hat{\mathcal{A}}^{\dagger}(\mathbf{r})+\hat{\mathcal{A}}(\mathbf{r})]\psi_0 (\mathbf{r}) + \eta_b \int d \mathbf{r}\frac{w_0}{\sqrt{2}} [i\hat{\psi}_{1}^{\dagger}(\mathbf{r}) - i\hat{\psi}_{1}(\mathbf{r})][\hat{\mathcal{B}}^{\dagger}(\mathbf{r})+\hat{\mathcal{B}}(\mathbf{r})]\psi_0 (\mathbf{r}),
\label{eq:realspaceH}
\end{align}
where
\begin{equation}
\mathcal{D}^{-1}(\mathbf{r},\mathbf{r}^\prime) = \sum_{\mu} (1 + n_\mu \tilde{\epsilon}) \Xi_{\mu}(\mathbf{r}) \Xi_{\mu} (\mathbf{r}^\prime),
\end{equation}
and $\tilde{\epsilon} \equiv \epsilon/\Delta_C$. Here, the summation is over all transverse modes with both even and odd spatial symmetry since we have dropped the mirror-image term in the commutation relation for the cavity field operators $\hat{\mathcal{A}}$ and $\hat{\mathcal{B}}$.
\subsubsection{Approximating the cavity Green's functions}
Equation~\eqref{eq:realspaceH}, along with the definition of $\mathcal{D}^{-1}(\mathbf{r},\mathbf{r}^\prime)$, provides a general description of the cavity tuned near to a confocal point, but with some nonzero $\epsilon$. Modelling the resulting dynamics is, however, hard in general, as $\mathcal{D}(\mathbf{r},\mathbf{r}^\prime)$ is not translationally invariant. Nevertheless, we can approximate it by a translationally invariant function, allowing one to use a momentum-space description to find the DW polariton dispersion.
The cavity Green's function can be written as
\begin{equation}
\mathcal{D} (\mathbf{r},\mathbf{r}^\prime) \equiv \sum_{\mu}\frac{\Xi_{\mu}(\mathbf{r}) \Xi_{\mu}(\mathbf{r}^\prime)}{1 + n_{\mu} \tilde{\epsilon}}
= \int_0^{\infty} d\lambda\, e^{-\lambda}\, G(\mathbf{r},\mathbf{r}^\prime,\tilde{\epsilon}\lambda),
\end{equation}
where we have used
\begin{equation}
G(\mathbf{r},\mathbf{r}^\prime,\varphi)
\equiv
\sum_{\mu}\Xi_{\mu}(\mathbf{r})\Xi_{\mu}(\mathbf{r}^\prime) e^{-n_\mu \varphi}
= \frac{e^{\varphi}}{2 \pi \sinh(\varphi)}
\exp\left[- \frac{(\mathbf{r}-\mathbf{r}^\prime)^2/w_0^2}{2 \tanh(\varphi/2)} -
\frac{(\mathbf{r}+\mathbf{r}^\prime)^2/w_0^2}{2 \coth(\varphi/2)}
\right].
\label{harmonicgreen}
\end{equation}
This then gives~\cite{Vaidya2017tpa}
\begin{equation}
\mathcal{D} (\mathbf{r},\mathbf{r}^\prime)
= \frac{1}{2 \pi \tilde{\epsilon}} K_0 \left( \frac{2}{\sqrt{\tilde{\epsilon}}} \left| \frac{\mathbf{r} - \mathbf{r}^\prime}{w_0} \right| \sqrt{1 + \frac{\tilde{\epsilon}}{4}\left(\frac{\mathbf{r} + \mathbf{r}^\prime}{w_0}\right)^2}\right),
\end{equation}
where $K_0$ is the modified Bessel function of the second kind. We assume that the spatial extent of $\psi_0 (\mathbf{r})$ is smaller than $2w_0/\sqrt{\tilde{\epsilon}}\approx 220$~$\mu$m, which is true for our condensate length of $\sim$93~$\mu$m. This assumption allows us to make the following approximation:
\begin{align}\label{localinteraction}
\mathcal{D} (\mathbf{r},\mathbf{r}^\prime) &\approx \frac{1}{2 \pi \tilde{\epsilon}} K_0 \left( \frac{2}{\sqrt{\tilde{\epsilon}}} \left| \frac{\mathbf{r} - \mathbf{r}^\prime}{w_0} \right| \sqrt{1 + \tilde{\epsilon} \left(\frac{\mathbf{r}_{\mathrm{cm}}}{w_0}\right)^2}\right), \nonumber \\
&\equiv K(|\mathbf{r} - \mathbf{r}^\prime|/\xi),
\end{align}
where again, $\mathbf{r}_{\mathrm{cm}} = (\text{49~}\mu\mathrm{m},\text{35~}\mu\mathrm{m})$ is the centre of mass coordinate of the gas and
\begin{equation}
\xi \equiv \frac{w_0 \sqrt{\tilde{\epsilon}}}{2 \sqrt{1 + \tilde{\epsilon} (\mathbf{r}_{\mathrm{cm}}/w_0)^2}}.
\end{equation}
Thus, we see that the cavity Green's function is translationally invariant in the limit of $\tilde{\epsilon} \ll 1$. In the limit that $\sqrt{\tilde{\epsilon}}\, r_{\mathrm{cm}}/w_0 \ll 1$, we have that $\xi \approx w_0 \sqrt{\tilde{\epsilon}}/2$. For our system, this local interaction range is 5--6~$\mu$m at the $\Delta_C$ employed for the spectroscopy data taken here~\cite{Vaidya2017tpa,Kroeze2021preprint}.
From the form of Eq.~\eqref{localinteraction}, we find that the approximate Green's function is diagonal in momentum space:
\begin{equation}
\mathcal{D}(\mathbf{k}) = \frac{1}{1 + k^2 /\zeta^2},
\end{equation}
where $\zeta = 1/\xi$ is the characteristic momentum scale. Therefore, the dispersion of the cavity field $\hat{\mathcal{A}}$, $\hat{\mathcal{B}}$ in the small-$\tilde{\epsilon}$ limit is
\begin{equation}
\mathcal{D}^{-1}(\mathbf{k}) = 1 + k^2 /\zeta^2.
\end{equation}
We note that the well-defined nature of the momentum peaks evident in our spectroscopy experiments---indicating that momentum is a good quantum number---supports the assumptions made above.
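The Lorentzian form of $\mathcal{D}(\mathbf{k})$ can be checked directly, since the two-dimensional Fourier transform of the radially symmetric kernel $K_0(r/\xi)$ reduces to a Hankel transform. The following sketch performs this check numerically (units with $\xi = 1$; the quadrature cutoff is our own assumption).
\begin{verbatim}
import numpy as np
from scipy.special import j0, k0
from scipy.integrate import quad

def fourier_K0(k, xi=1.0):
    """2D Fourier transform of K_0(r/xi) as a radial Hankel transform."""
    val, _ = quad(lambda r: k0(r / xi) * j0(k * r) * r,
                  0.0, 50.0 * xi, limit=200)
    return 2.0 * np.pi * val

xi = 1.0
for k in (0.0, 0.5, 1.0, 2.0):
    lorentzian = 2.0 * np.pi * xi**2 / (1.0 + (k * xi)**2)
    print(k, fourier_K0(k, xi), lorentzian)   # the two columns agree
\end{verbatim}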
\subsubsection{Dissipative equations of motion}
Using the above translational invariance, we can now write the mean-field equations of motion for the cavity modes and atoms:
\begin{align}\label{EOM}
i \partial_t \psi_1 &= \left[\omega_0 - \frac{\nabla^2}{2m}\right] \psi_1 + \eta \psi_0[(\mathcal{A}^{*} + \mathcal{A})+i(\mathcal{B}^{*} + \mathcal{B})]\frac{w_0}{\sqrt{2}} + U |\psi_1|^2 \psi_1, \nonumber \\
i \partial_t \mathcal{A} &= -\Delta_C \left(1 - \frac{\nabla^2}{\zeta^2}\right) \mathcal{A} + \eta \psi_0 (\psi^{*}_{1} + \psi_1) \frac{w_0}{\sqrt{2}} - i \kappa \mathcal{A}, \nonumber \\
i \partial_t \mathcal{B} &= -\Delta_C \left(1 - \frac{\nabla^2}{\zeta^2}\right) \mathcal{B} + i\eta \psi_0(\psi^{*}_{1} - \psi_1)\frac{w_0}{\sqrt{2}} - i \kappa \mathcal{B}.
\end{align}
Here, $\eta \equiv \eta_a = \eta_b$ is the two-photon coupling strength for balanced pumping. We note that $\mathcal{A}$ and $\mathcal{B}$ couple to the real and imaginary parts of $\psi_1$, respectively.
These equations support two types of steady states. There is always the steady state $\psi_1=\mathcal{A}=\mathcal{B}=0$, corresponding to the normal state below threshold. In addition, above a critical pumping strength, a DW polariton condensate state exists.
Using the uniform state as an ansatz and choosing the phase of the atom field $\psi_1$ to be zero, the DW polariton condensate stationary state is given by:
\begin{equation}
\psi_{1S} = \sqrt{\frac{ \mu}{U}}, \qquad
\mathcal{A}_S = \frac{2 \eta \sqrt{N} \psi_{1S}}{\Delta_C + i \kappa}, \qquad
\mathcal{B}_S = 0.
\end{equation}
We have introduced $\mu$, which plays a role analogous to the chemical potential in a weakly interacting Bose gas, and $N = |\psi_0|^2 w^2_0/2$. The steady-state condition requires:
\begin{equation}
\mu = -\frac{4 \eta^2 N \Delta_C}{\Delta^2_C + \kappa^2} - \omega_0.
\end{equation}
Note that the pump is red-detuned from the cavity resonance and thus $\Delta_C < 0$.
The DW polariton condensate state exists only when $\mu>0$, which requires a sufficiently large pump strength $\eta$. We recover the below-threshold state if we set $\mu=0$.
\subsection{Dispersion relation and speed of sound}
To derive the dispersion relation, we now expand around the stationary state by using the Bogoliubov--de Gennes parametrisation, considering a fluctuation with wavevector $\mathbf{k}$ and (complex) frequency $\nu$. Note that in our system, since we are considering $\psi_1$ as the envelope function of a density wave,
$ \mathbf{k}\cdot\hat{x} = k_\perp$, $\mathbf{k}\cdot\hat{y} = k_y$, and $\mathbf{k}\cdot\hat{z} = 0$:
\begin{align}
\psi_1(\mathbf{r},t) &= \psi_{1S} + u e^{-i(\mathbf{k} \cdot \mathbf{r} + \nu t)} + v^{*} e^{i(\mathbf{k} \cdot \mathbf{r} + \nu^\ast t)}, \nonumber \\
\mathcal{A}(\mathbf{r},t) &= \mathcal{A}_S + c e^{-i(\mathbf{k} \cdot \mathbf{r} + \nu t)} + d^{*} e^{i(\mathbf{k} \cdot \mathbf{r} + \nu^\ast t)}, \nonumber \\
\mathcal{B}(\mathbf{r},t) &= \mathcal{B}_S + f e^{-i(\mathbf{k} \cdot \mathbf{r} + \nu t)} + g^{*} e^{i(\mathbf{k} \cdot \mathbf{r} + \nu^\ast t)}.
\end{align}
We seek to find how the allowed value(s) of $\nu$ depend on $\mathbf{k}$. Inserting these into the equations of motion, keeping terms up to linear order in small fluctuations, and matching the Fourier components, the equations for the Bogoliubov--de Gennes coefficients are
\begin{align}
\nu u &= \left[\omega_0 + \frac{k^2}{2m} +2U \psi^2_{1S} \right] u + U \psi^2_{1S} v + \eta \sqrt{N}[c+d+i(f+g)], \nonumber \\
\nu v &= -\left[\omega_0 + \frac{k^2}{2m} +2U \psi^2_{1S} \right] v - U \psi^2_{1S} u - \eta \sqrt{N}[c+d-i(f+g)], \nonumber \\
\nu c &= \left[-\Delta_C\left(1 + \frac{k^2}{\zeta^2}\right) - i \kappa\right] c + \eta \sqrt{N}(u+v), \nonumber \\
\nu d &= -\left[-\Delta_C\left(1 + \frac{k^2}{\zeta^2}\right) + i \kappa\right] d - \eta \sqrt{N}(u+v),
\nonumber \\
\nu f &= \left[-\Delta_C\left(1 + \frac{k^2}{\zeta^2}\right) - i \kappa\right] f - i \eta \sqrt{N}(u-v), \nonumber \\
\nu g &= -\left[-\Delta_C\left(1 + \frac{k^2}{\zeta^2}\right) + i \kappa\right] g - i \eta \sqrt{N}(v-u).
\end{align}
The cavity field fluctuations $c,d,f,g$ can be eliminated by expressing them in terms of the atomic wavefunctions $u$ and $v$, and then substituting them back into the first two equations. The results are
\begin{align}
\nu u &= \left(\omega_0 + \frac{k^2}{2m} + 2 \mu\right) u + \mu v + \eta^2 N \frac{4 \Delta_k u}{(\nu + i \kappa)^2 - \Delta^2_k}, \nonumber \\
\nu v &= -\left(\omega_0 + \frac{k^2}{2m} + 2 \mu\right) v - \mu u - \eta^2 N \frac{4 \Delta_k v}{(\nu + i \kappa)^2 - \Delta^2_k},
\end{align}
where $\Delta_k \equiv -\Delta_C(1 + k^2/\zeta^2)$. These equations can be re-arranged into the simple form
\begin{equation}
\begin{bmatrix}
A(\nu) - \nu & \mu \\
\mu & A(\nu) + \nu
\end{bmatrix}
\begin{bmatrix}
u \\
v
\end{bmatrix}
=0,
\end{equation}
where
\begin{equation}
A(\nu) \equiv \omega_0 + \frac{k^2}{2m} + 2 \mu + \eta^2 N \frac{4 \Delta_k}{(\nu + i \kappa)^2 - \Delta^2_k}.
\end{equation}
\subsubsection{Below threshold}
Below threshold, as noted above, we set $\mu = 0$. In this case there is no mixing of $u$ and $v$, and the excitation spectrum can be found from solutions of the equation:
\begin{equation}
\nu = \omega_0 + \frac{k^2}{2m} + \eta^2 N \frac{4 \Delta_k}{(\nu + i \kappa)^2 - \Delta^2_k}.
\end{equation}
This expression is the dispersion of DW polaritons: the normal-mode frequencies result from mixing the atomic density wave dispersion $k^2/2m$ with the photon-mediated interaction. Focusing on the experimentally relevant limit $\nu, \kappa \ll |\Delta_C|$, we obtain the simple expression
\begin{equation}
\nu(\mathbf{k}) = \omega_0 + \frac{k^2}{2m} + \frac{4 \eta^2 N}{\Delta_C(1 + k^2/\zeta^2)}.
\end{equation}
As observed in the main text, this expression exhibits a relatively flat dispersion (controlled by the atomic mass) when $\eta=0$. Because $\Delta_C<0$, increasing $\eta$ both softens the mode---reducing $\nu(\mathbf{k}=0)$---and leads to a steeper dispersion, due to the $\mathbf{k}$-dependence of the second, cavity-mediated term. When $4\eta^2 N = - \Delta_C \omega_0$, one has $\nu(\mathbf{k}=0)=0$; this corresponds to the point at which the mode becomes entirely soft, and DW polariton condensation occurs. (Note that this condition assumes $\kappa \ll |\Delta_C|$, as introduced above.)
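For concreteness, the sketch below evaluates this below-threshold dispersion and the softening of $\nu(\mathbf{k}=0)$ as the pump approaches the threshold $4\eta^2 N = -\Delta_C\omega_0$; the parameter values are illustrative only and are not the experimental ones.
\begin{verbatim}
import numpy as np

def nu_below(k, eta2N, omega0, Delta_C, zeta, m):
    """Below-threshold DW-polariton dispersion (hbar = 1)."""
    return omega0 + k**2 / (2 * m) \
           + 4 * eta2N / (Delta_C * (1 + (k / zeta)**2))

omega0, Delta_C, zeta, m = 1.0, -100.0, 1.0, 0.5   # illustrative values
k = np.linspace(0.0, 3.0, 301)
for frac in (0.0, 0.5, 0.9, 1.0):    # fraction of the threshold coupling
    eta2N = frac * (-Delta_C * omega0 / 4.0)       # eta^2 * N
    print(frac, nu_below(k, eta2N, omega0, Delta_C, zeta, m)[0])
    # nu(k=0) decreases with frac and vanishes exactly at threshold
\end{verbatim}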
\subsubsection{Above threshold}
\begin{figure*}
\includegraphics[width = 0.95\textwidth]{SuppFig3.pdf}
\caption{(a) Numerical solution to the dispersion above threshold. Inset shows the spectrum at small momenta. (b) Full dispersion (blue) contrasted with the cavity-only contribution to the dispersion (red) that does not contain the atomic $k^2/2m$ dispersion. At low momenta, the dispersion is dominated by the cavity contribution. The dashed line shows $k_\perp = \zeta$. The cavity dispersion flattens toward a Debye frequency of ${\sim}$5~kHz at large $k_\perp$.}
\label{fig:full_theory}
\end{figure*}
Above threshold, the frequencies are solutions of the determinant equation:
\begin{equation}
\mu^2 = [A(\nu) - \nu][A(\nu) + \nu].
\label{det_eq}
\end{equation}
To obtain a simple expression, we expand $A(\nu)$ up to linear order in $\nu/|\Delta_k|$ and $\kappa/|\Delta_k|$, which is the relevant limit for the long-wavelength, low-energy regime we explore:
\begin{equation}
A(\nu) = \omega_0 + \frac{k^2}{2m} + 2 \mu -\frac{4 \eta^2 N}{\Delta_k}\left[ 1+ \frac{2 i \nu \kappa }{\Delta^2_k} \right] + O\left(\left|\frac{\nu}{\Delta_k}\right|^2,\left|\frac{\kappa}{\Delta_k}\right|^2\right).
\end{equation}
The determinantal equation for $\nu$ in Eq.~\eqref{det_eq} can then be solved:
\begin{equation}
\label{eq:DWpoldispersion}
\nu(\mathbf{k}) = i \frac{\Gamma_k}{2} \pm \sqrt{(\mathcal{E}^2_k - \mu^2) - \frac{\Gamma_k^2}{4}},
\end{equation}
where
\begin{equation}
\Gamma_k = -\frac{16 N \mathcal{E}_k \eta^2 \kappa}{\Delta^3_k}, \qquad
\mathcal{E}_k = \omega_0 + \frac{k^2}{2m} + 2 \mu -\frac{4 \eta^2 N}{\Delta_k}.
\end{equation}
The quantity $\Gamma_k$ describes the damping of excitations due to cavity loss. Note that for consistency, the chemical potential $\mu$ in this expression should also be expanded up to linear order in $\kappa/|\Delta_C|$ as
\begin{equation}
\mu = -\frac{4 \eta^2 N }{\Delta_C} - \omega_0,
\end{equation}
so that $\mathcal{E}_0=\mu$ (recalling that $\Delta_{k=0}=-\Delta_C$).
Because $\kappa \ll |\Delta_C|$, the dissipative term $\Gamma_k$ has a small effect except at very small $k$. Outside this small-$k$ regime, the real part of the dispersion can be expanded up to linear order to yield the dispersion relation of the acoustic phonon mode:
\begin{equation}
\Re[\nu (\mathbf{k})]
\simeq \sqrt{2\mu(\mathcal{E}_k - \mu)}
= v_s |\mathbf{k}|,
\end{equation}
where the speed of sound takes the form
\begin{equation}
v_s = \sqrt{2\mu \left( \frac{1}{2m} + \frac{E_I}{\zeta^2} \right) },
\end{equation}
in terms of the cavity-mediated interaction energy scale
$E_I = -{4N \eta^2}/{\Delta_C}$. One may note that $E_I/\zeta^2 \gg 1/2m$, so the sound velocity is principally determined by the cavity-mediated interaction strength.
For the above-threshold data presented in the main text, $v_s \approx 0.16$ m/s.
Using the above expression, we can also describe the diffusive
regime that occurs at very small momenta. We use that
$\mathcal{E}_k^2 - \mu^2 \simeq v_s^2 k^2$ at small momentum to find that as $\mathbf{k} \to 0$ the gapless branch of $\nu(\mathbf{k})$ has the purely imaginary diffusive spectrum $\nu(\mathbf{k}) = i v_s^2 k^2 / \Gamma_k$, as also found for microcavity polariton condensates~\cite{Szymanska2006nqc}. This crosses over to the linear sound dispersion when
$v_s k = \Gamma_k/2$. Figure~\ref{fig:full_theory} plots the numerical solution to the above-threshold determinant equation using experimental parameters. The imaginary part contribution $\propto\Gamma_k$ is indeed negligible.
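A short numerical sketch of Eq.~\eqref{eq:DWpoldispersion} is given below; it evaluates the gapless branch, from which the diffusive regime at small $k$ and the crossover to the sonic regime can be read off. The parameter values are illustrative only.
\begin{verbatim}
import numpy as np

def dw_polariton(k, eta2N, omega0, Delta_C, zeta, m, kappa):
    """Gapless branch of the above-threshold dispersion (hbar = 1)."""
    mu = -4.0 * eta2N / Delta_C - omega0
    Dk = -Delta_C * (1.0 + (k / zeta)**2)
    Ek = omega0 + k**2 / (2.0 * m) + 2.0 * mu - 4.0 * eta2N / Dk
    Gk = -16.0 * eta2N * Ek * kappa / Dk**3
    disc = (Ek**2 - mu**2 - Gk**2 / 4.0).astype(complex)
    return 1j * Gk / 2.0 + np.sqrt(disc)

omega0, Delta_C, zeta, m, kappa = 1.0, -100.0, 1.0, 0.5, 2.0
eta2N = 1.2 * (-Delta_C * omega0 / 4.0)   # 20% above threshold
k = np.linspace(1e-4, 2.0, 400)
nu = dw_polariton(k, eta2N, omega0, Delta_C, zeta, m, kappa)
# Re(nu) ~ v_s k away from k = 0; at the smallest k the root is purely
# imaginary (diffusive), with the crossover near v_s k = |Gamma_k| / 2
\end{verbatim}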
As discussed above, one should note that $k_\perp$ is measured from the $(\pm k_r,\pm k_r)$ points. In the above equations, this can be seen from the fact that $\psi_1(k)$ is the envelope function multiplying an atomic density wave with wavevector $k_r$. We also note one particular higher-order effect that is not included in the above treatment. The missing effect arises from the fact that photon momentum must be conserved, $k^2 = k_\parallel^2 + k_\perp^2 = 4\pi^2/\lambda^2$. Consequently, phonons with nonzero $k_\perp$ propagate in supermodes with a concomitantly reduced $k_\parallel$. Nevertheless, this effect is negligible for the momenta considered in the current experiment ($k_{\perp}/k_r \sim 10^{-2}$). For the largest momentum measured, the fractional change in the longitudinal momentum $k_{\parallel}$ is less than $10^{-4}$.
We also note that there should be a phonon dispersion in $\hat{y}$ as well as that which we have shown in $\hat{x}$. We do not attempt to measure the dispersion in $\hat{y}$ because we choose to make a BEC that is thin in this direction so as to maximise its length in $\hat{x}$. That is, the BEC is too thin in $\hat{y}$ to support a full wavelength of the shortest-wavelength phonons we are able to stimulate at present.
\section{Derivation of the form of the light profile emitted from the cavity}
We now derive the relationship of the intracavity field to that emitted by the cavity. Moreover, we describe how a phonon excitation at a particular momentum $k_\perp$ appears in holographic images of the cavity field. We first show how each cavity field couples to momentum excitations. To do so, we consider the equations of motion in Eq.~\eqref{EOM} and resolve $\hat{\psi}_{1}$ into the momentum basis
\begin{equation}
\hat{\psi}_{1}(\mathbf{r}) = \int d\mathbf{k} \hat{\psi}_{\mathbf{k}} e^{- i \mathbf{k} \cdot \mathbf{r}}.
\end{equation}
The equations of motion for $\alpha_\mu \equiv \langle \hat{a}_{\mu} \rangle$ and $\beta_\mu \equiv \langle \hat{b}_{\mu} \rangle$ are
\begin{align}
i \partial_t \alpha_{\mu} &= -(\Delta_{\mu} + i\kappa) \alpha_{\mu} - \eta \int d \mathbf{k} \int d\mathbf{r} \Xi_{\mu}(\mathbf{r}) [ \psi_{0} (\mathbf{r})\psi^{*}_{\mathbf{k}} e^{i (\mathbf{k} \cdot \mathbf{r} + n_\mu \theta_0)}+ \text{H.c.}], \\
i \partial_t \beta_{\mu} &= -(\Delta_{\mu} + i \kappa) \beta_{\mu} - \eta \int d \mathbf{k} \int d \mathbf{r} \Xi_{\mu}(\mathbf{r}) [\psi_{0} (\mathbf{r}) \psi^{*}_{\mathbf{k}} i e^{i (\mathbf{k} \cdot \mathbf{r} + n_\mu \theta_0)}+ \text{H.c.}].
\end{align}
We can consider the light profile due to the atomic population in a particular momentum mode $\psi_{\mathbf{k}}$. Focusing on the first degenerate resonance with even modes, and after setting the time derivative to zero, the light profile is
\begin{align}
\tilde{\alpha} (\mathbf{r},z) &= \sum_{\mu} \alpha_{\mu} \Xi_{\mu} (\mathbf{r}) \cos(k_r z - \theta_0 - \Theta - n_{\mu} \theta_0) \nonumber \\
&= -\sum_{\mu}\eta \int d\mathbf{r}^\prime \frac{\Xi_{\mu}(\mathbf{r}) \Xi_{\mu}(\mathbf{r}^\prime)}{\Delta_{\mu} + i \kappa} \mathcal{S}^{+}_{\mu}\psi_0 (\mathbf{r}^\prime) \int d\mathbf{k} [\psi^{*}_{\mathbf{k}} e^{i (\mathbf{k} \cdot \mathbf{r}^\prime + n_\mu \theta_0)}+ \text{H.c.}]\cos(k_r z - \theta_0 - \Theta - n_{\mu} \theta_0),
\end{align}
where the tilde signifies that this includes the profile along $\hat{z}$.
Taking the forward-travelling part of the standing-wave intracavity field, we find the form of the cavity emission out of one of its cavity mirrors (for one of the two cavity fields):
\begin{align}\label{firstforward}
\tilde{\alpha}^{+} (\mathbf{r},z) &\propto e^{i(k_r z - \theta_0 - \Theta)} \sum_{\mu}\int d\mathbf{r}^\prime \frac{\Xi_{\mu}(\mathbf{r}) \Xi_{\mu}(\mathbf{r}^\prime)}{\Delta_{\mu} + i \kappa} \mathcal{S}^{+}_{\mu}\psi_0 (\mathbf{r}^\prime) \int d\mathbf{k} [\psi^{*}_{\mathbf{k}} e^{i \mathbf{k} \cdot \mathbf{r}^\prime} + \psi_{\mathbf{k}} e^{-i(\mathbf{k} \cdot \mathbf{r}^\prime + 2 n_{\mu} \theta_0)}] \nonumber \\
&\propto e^{i(k_r z - \theta_0 - \Theta)} \int d\mathbf{k} \int d\mathbf{r}^\prime \psi_0 (\mathbf{r}^\prime) \left[\mathcal{G}^{+} (\mathbf{r},\mathbf{r}^\prime,0) \psi^{*}_{\mathbf{k}} e^{i \mathbf{k} \cdot \mathbf{r}^\prime} + \mathcal{G}^{+} (\mathbf{r},\mathbf{r}^\prime,-2i\theta_0) \psi_{\mathbf{k} }e^{-i \mathbf{k} \cdot \mathbf{r}^\prime} \right],
\end{align}
where $\mathcal{G}^{+}(\mathbf{r},\mathbf{r}^\prime,\varphi)$ is a modified Green's function defined as~\cite{Vaidya2017tpa}
\begin{align}
\mathcal{G}^{+}(\mathbf{r},\mathbf{r}^\prime,\varphi) &= \mathcal{G}(\mathbf{r},\mathbf{r}^\prime,\varphi) + \mathcal{G}(\mathbf{r},-\mathbf{r}^\prime,\varphi), \nonumber \\
\mathcal{G}(\mathbf{r},\mathbf{r}^\prime,\varphi) &= \displaystyle\sum_{\mu} \frac{\Xi_{\mu}(\mathbf{r})\Xi_{\mu}(\mathbf{r}^\prime) e^{-n_\mu \varphi}}{1 + \tilde{\epsilon} n_\mu + i \tilde{\kappa}}.
\end{align}
Here, $\tilde{\kappa} \equiv \kappa/\Delta_C$. The calculation for $\beta_{\mu}$ proceeds in the same way, with the only difference being the aforementioned additional $\pi/2$ phase shift:
\begin{align}
\tilde{\beta} (\mathbf{r},z) &= \sum_{\mu} \beta_{\mu} \Xi_{\mu} (\mathbf{r}) \sin(k_r z - \theta_0 - \Theta - n_{\mu} \theta_0) \nonumber \\
&= -\sum_{\mu}\eta \int d\mathbf{r}^\prime \frac{\Xi_{\mu}(\mathbf{r}) \Xi_{\mu}(\mathbf{r}^\prime)}{\Delta_{\mu} + i \kappa} \psi_0 (\mathbf{r}^\prime) \int d\mathbf{k} [i\psi^{*}_{\mathbf{k}} e^{i (\mathbf{k} \cdot \mathbf{r}^\prime + n_\mu \theta_0)}+ \text{H.c.}]\sin(k_r z - \theta_0 - \Theta - n_{\mu} \theta_0), \\
\tilde{\beta}^{+} (\mathbf{r},z) &\propto e^{i(k_r z - \theta_0 - \Theta)} \sum_{\mu}\int d\mathbf{r}^\prime \frac{\Xi_{\mu}(\mathbf{r}) \Xi_{\mu}(\mathbf{r}^\prime)}{\Delta_{\mu} + i \kappa} \psi_0 (\mathbf{r}^\prime) \int d\mathbf{k} [\psi^{*}_{\mathbf{k}} e^{i \mathbf{k} \cdot \mathbf{r}^\prime} - \psi_{\mathbf{k}} e^{-i(\mathbf{k} \cdot \mathbf{r}^\prime + 2 n_{\mu} \theta_0)}] \nonumber \\ \label{secondforward}
&\propto e^{i(k_r z - \theta_0 - \Theta)} \int d\mathbf{k} \int d\mathbf{r}^\prime \psi_0 (\mathbf{r}^\prime) \left[\mathcal{G}^{+} (\mathbf{r},\mathbf{r}^\prime,0) \psi^{*}_{\mathbf{k}} e^{i \mathbf{k} \cdot \mathbf{r}^\prime} - \mathcal{G}^{+} (\mathbf{r},\mathbf{r}^\prime,-2i\theta_0) \psi_{\mathbf{k} }e^{-i \mathbf{k} \cdot \mathbf{r}^\prime} \right].
\end{align}
After measuring the cavity field emission at both frequencies, the total field can be reconstructed digitally by summing Eqs.~\eqref{firstforward} and~\eqref{secondforward}:
\begin{equation}
\Phi(\mathbf{r}) \propto \int d\mathbf{k} \int d\mathbf{r}^\prime \psi_0 (\mathbf{r}^\prime) \mathcal{G}^{+} (\mathbf{r},\mathbf{r}^\prime,0) \psi^{*}_{\mathbf{k}} e^{i \mathbf{k} \cdot \mathbf{r}^\prime}.
\label{camera_field}
\end{equation}
Therefore, the digitally summed image contains the contribution from only the local part $\mathcal{G} (\mathbf{r},\mathbf{r}^\prime,0)$ of the field because the nonlocal contribution from $\mathcal{G} (\mathbf{r},\mathbf{r}^\prime,-2i\theta_0)$ cancels, as shown in Fig.~\ref{fig2} in the main text.
In the limit of an ideal confocal cavity in which $\mathcal{G} (\mathbf{r},\mathbf{r}^\prime,0)$ becomes a $\delta$-function (i.e., perfect mode degeneracy), Eq.~\eqref{camera_field} shows that a single momentum component $\psi_{\mathbf{k}}$ will result in a phase winding $e^{i \mathbf{k} \cdot \mathbf{r}}$ on the reconstructed cavity emission. Taking the Fourier transform of the complex electric field $\Phi(\mathbf{r})$ then reveals the momentum mode that has been stimulated through Bragg spectroscopy. Furthermore, this also shows that a particular momentum can be excited by stimulating the local part of the field and probing on either one of the degenerate resonances.
Though we do not employ this here, we note that the nonlocal part of the cavity emission---i.e., the part that could be found by digitally subtracting the images at the two cavity frequencies---also provides information about the driven momentum. This is because, for a confocal cavity with a BEC confined to the cavity midpoint $z=0$, this emission is the Fourier transform of the object image. Consider the simple case of an ideal confocal cavity with atoms located at the midplane of the cavity where $\theta_0 = \pi/4$. The nonlocal part of the emitted field is
\begin{align}
\Phi_{\mathrm{nonlocal}}(\mathbf{r}) &\propto \int d\mathbf{k} \int d\mathbf{r}^\prime \psi_0 (\mathbf{r}^\prime) \cos\left[ \frac{2 \mathbf{r} \cdot \mathbf{r}^\prime}{w^2_0}\right] \psi^{*}_{\mathbf{k}} e^{i \mathbf{k} \cdot \mathbf{r}^\prime} \nonumber \\
&= \frac{1}{2} \int d\mathbf{k} \int d\mathbf{r}^\prime \psi_0 (\mathbf{r}^\prime) \psi^{*}_{\mathbf{k}} \left\{ \mathrm{exp} \left[ i \left( \mathbf{k} + \frac{2 \mathbf{r}}{w^2_0} \right) \cdot \mathbf{r}^\prime \right] + \mathrm{exp} \left[ i \left( \mathbf{k} - \frac{2 \mathbf{r}}{w^2_0} \right) \cdot \mathbf{r}^\prime \right] \right\},
\label{nonlocal_field}
\end{align}
which contains direct information about the excited momentum modes $\psi_{\mathbf{k}}$.
\section{Coupling of a longitudinal probe field into the confocal cavity}
The Hamiltonian for a longitudinally driven multimode cavity is given by
\begin{equation}
H= -\sum_{\mu}\left[\Delta_{\mu} \hat{a}^\dagger_{\mu} \hat{a}^{}_{\mu} + f_{\mu} (\hat{a}_{\mu} + \hat{a}^{\dagger}_{\mu})\right],
\end{equation}
where
\begin{equation}
f_{\mu} = \int d\mathbf{r}^\prime f(\mathbf{r}^\prime) \Phi_{\mu}(\mathbf{r}^\prime, z_0)
\end{equation}
is the spatial overlap between the longitudinal pump $f(\mathbf{r})$ and a cavity mode $\Phi_{\mu}(\mathbf{r},z)$ evaluated near a particular plane $z=z_0$:
\begin{equation}
\Phi_{\mu}(\mathbf{r},z_0)=
\Xi_{\mu} (\mathbf{r}) \cos[k_r z_0 - \theta_\mu(z_0)].
\end{equation}
Ignoring cavity loss, the classical equation of motion of the expectation value $\alpha_\mu = \langle \hat{a}_{\mu} \rangle$ is
\begin{equation}
i \partial_t \alpha_{\mu} = \Delta_{\mu} \alpha_{\mu} + f_{\mu}.
\end{equation}
In steady state, the total transverse light field is therefore given by
\begin{equation}
\alpha(\mathbf{r}) = \sum_{\mu} \alpha_{\mu} \Phi_{\mu}(\mathbf{r},z_0) = \sum_{\mu} \frac{f_\mu \Phi_{\mu}(\mathbf{r},z_0)}{\Delta_{\mu}}.
\end{equation}
Focusing on the transverse profile, the cavity field is
\begin{equation}
\alpha(\mathbf{r}) = \int d\mathbf{r}^\prime f(\mathbf{r}^\prime)\sum_{\mu} \frac{\Xi_{\mu}(\mathbf{r})\Xi_{\mu}(\mathbf{r}^\prime) \cos^2\left[k_r z_0 -\theta_{\mu}(z_0)\right]}{\Delta_{\mu}} \mathcal{S}^{+}_{\mu} \equiv \int d\mathbf{r}^\prime f(\mathbf{r}^\prime) \mathcal{T} (\mathbf{r}, \mathbf{r}^\prime),
\end{equation}
where $\theta_{\mu}$ is the longitudinal phase of mode $\mu$ at the transverse plane $z = z_0$ we are considering. The factor $\mathcal{S}^{+}_{\mu}$ restricts the summation to modes with even transverse spatial symmetry for the case of a degenerate resonance in a confocal cavity. Thus, the transfer function $\mathcal{T}$ for a longitudinal input beam profile can be evaluated in the same way as the Green's function for the cavity-mediated interaction.
In an ideal confocal cavity, at the cavity midplane, the transfer function contains three parts~\cite{Vaidya2017tpa}:
\begin{equation}
\mathcal{T}(\mathbf{r},\mathbf{r}^\prime) \propto \delta\Big(\frac{\mathbf{r} - \mathbf{r}^\prime}{w_0/\sqrt{2}}\Big) + \delta\Big(\frac{\mathbf{r} + \mathbf{r}^\prime}{w_0/\sqrt{2}}\Big) + \frac{1}{\pi} \cos \Big( \frac{2\mathbf{r} \cdot \mathbf{r}^\prime}{w_0^2}\Big).
\end{equation}
For a longitudinal probe profile localised around the atoms, the resulting cavity field will contain the probe field, its mirror image, and a nonlocal contribution. In a realistic cavity where the transverse modes are not fully degenerate (such as ours), the highest spatial-frequency components of the input probe field will be suppressed.
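This suppression can be illustrated with a one-dimensional toy model of the transfer function, truncating the mode sum at a finite $n_{\mathrm{max}}$ and dropping the longitudinal $\cos^2$ factor; both simplifications, as well as the parameter values, are our own assumptions.
\begin{verbatim}
import numpy as np

def hermite_functions(nmax, x):
    """Orthonormal Hermite functions psi_0..psi_nmax (stable recursion)."""
    psi = np.zeros((nmax + 1, x.size))
    psi[0] = np.pi**-0.25 * np.exp(-x**2 / 2.0)
    if nmax > 0:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for n in range(1, nmax):
        psi[n + 1] = (np.sqrt(2.0 / (n + 1)) * x * psi[n]
                      - np.sqrt(n / (n + 1.0)) * psi[n - 1])
    return psi

def transfer_function(x, eps, nmax=400):
    """1-D analogue of T(r, r'): even-mode sum weighted by 1/(1 + n*eps)."""
    psi = hermite_functions(nmax, x)
    T = np.zeros((x.size, x.size))
    for n in range(0, nmax + 1, 2):           # even modes only
        T += np.outer(psi[n], psi[n]) / (1.0 + n * eps)
    return T

x = np.linspace(-4.0, 4.0, 401)
T = transfer_function(x, eps=0.05)   # each row: response to a point input;
                                     # larger eps broadens the local peak
\end{verbatim}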
\section{Comparison to other systems}
In a crystallisation transition, the mutual interactions of the particles in a gas or liquid conspire to break a continuous translational $U(1)$ symmetry. While the periodicity of the resultant lattice is set by properties of the particles themselves, the phase of the lattice freely emerges. Goldstone's theorem implies that after solidification, the lattice may, in principle, slide at no energy cost since no particular phase had been preferred~\cite{Altland2006cmf}. While this zero-momentum ($k=0$) mode contributes nothing to the thermodynamic properties of the solid, long-wavelength (small $k$) phonon modes that do contribute also arise from the broken symmetry. These connect with the $k=0$ mode to form a continuous, gapless spectrum of excitations called a Goldstone mode.
While the amplitude of the lattice is emergent in a single-mode, single-pumped cavity, its phase and periodicity are geometrically fixed by the single cavity mode into which the photons scatter---the lattice remains inelastic. Adding a second, frequency-degenerate cavity mode creates a standing-wave potential for the atoms with an emergent phase, thereby breaking the $U(1)$ symmetry. As the atoms move, so does the standing wave, realising the $k=0$ point of a Goldstone mode. Such a scheme was experimentally realised using two crossed single-mode Fabry-P\'{e}rot cavities tuned to the same frequency~\cite{Leonard2017sfi}. Pumped atoms scatter photons into a superposition of the two modes. Their coherent sum yields a $U(1)$ phase degree of freedom and realises a lattice with the same $k=0$ mode mentioned above. With the atoms Bose-condensed, a simple supersolid is created wherein superfluidity and periodic structure arising from the broken translational symmetry coexist. Absent, however, are phonons. This is because the infinite-range, photon-mediated interactions of each single-mode cavity yield an effectively 0D system with no $k>0$ modes~\cite{Leonard2017mam,Lang2017cea}---the Goldstone dispersion is missing. A similar $k=0$ Goldstone mode of a dipolar supersolid has been observed~\cite{Guo2019tlg}.
\end{document}
\section{Introduction}
Kinetic theories pioneered by L. Boltzmann arise in a variety of fields beyond classical rarefied gas dynamics, ranging from multiphase flows \cite{HQ2017,MarFox2013}, aerosol dynamics in atmospheric environments \cite{Frd2000,MarFox2013}, and active matter physics \cite{Herty2018}, to galactic dynamics in the universe \cite{Yoshi2013}. In the kinetic framework \cite{Kremer2010}, various physical systems are described with a distribution function $f$ which depends on the spatial and other problem-specific microscopic variables, and its time evolution is governed by kinetic equations like the Boltzmann equation. Although the kinetic equations have a solid physical grounding, they are computationally costly and therefore not directly usable in engineering applications.
For this reason, various simplifications or approximations of the kinetic equations have been proposed, including the BGK model \cite{bgk1954}, discrete velocity models \cite{Gatignol1975,Mieu2000}, and moment closure systems \cite{Grad1949,Levermore1996,MarFox2013}. All these approximations have their advantages and disadvantages. This work is concerned with moment closure systems, in which the governing equations of several moments of the distribution function are derived from the kinetic equation and an additional procedure is required to close the moment system \cite{MarFox2013}. The resultant moment systems usually consist of first-order partial differential equations (PDEs).
To correctly model observable physical processes, the derived system of PDEs should be well-posed (or hyperbolic for first-order systems). For instance, Grad's well-known closure method yields non-hyperbolic PDEs and produces unphysical results \cite{Grad1949,Muller1998}. Its hyperbolic regularization has attracted much attention \cite{Cai13-1,Cai13-2,Levermore1996,McDonald2013,Struchtrup2003}. A recent work is \cite{Cai2015}, where the authors introduced a framework to construct hyperbolic moment closure systems.
Furthermore, the moment closure systems derived from the kinetic equations should preserve the key physical properties of the original kinetic equations. For the Boltzmann equation, one of the key properties is the celebrated $H$-theorem characterizing the dissipation property of the mesoscopic system under consideration \cite{Kremer2010}. In this regard, a paradigm is the widely used BGK model that not only simplifies the collision term in the Boltzmann equation, but also inherits the key conservation and dissipation properties thereof \cite{Kremer2010}. At this point, an immediate question is how to manifest the $H$-theorem in such moment systems.
It turns out that the structural stability condition proposed in \cite{Yong1999} for hyperbolic relaxation systems is a proper counterpart of the $H$-theorem for the kinetic equation. Indeed, this condition has been tacitly respected by many well-developed physical theories \cite{Yong2008}. Recently, it was shown in \cite{Di2017} to be satisfied by the hyperbolic regularization models derived in \cite{Cai13-1,Cai13-2,Cai2015}.
In contrast, the Biot/squirt (BISQ) model for wave propagation in saturated porous media violates this condition and thus allows exponentially exploding asymptotic solutions \cite{LiuJW2016}. On the other hand, this condition also implies that the resultant moment system is compatible with the classical theories \cite{Yong1999}. The implication is important because the lower-order moments are usually associated with the macroscopic parameters of the system \cite{Kremer2010}. Therefore, we believe that the structural stability condition is a proper criterion to evaluate the moment closure systems.
The objective of this paper is to investigate whether or not the quadrature-based moment method (QBMM, \cite{MarFox2013}) yields hyperbolic PDEs which satisfy the structural stability condition above.
In QBMM, the distribution function $f$ is approximated with a linear combination of $N$ ($N \ge 1$) $\delta$-functions with unknown centers or their Gaussian approximations with unknown variance and centers (named QMOM or EQMOM, respectively) \cite{MarFox2013}.
QBMM has become an effective and popular method in simulating the evolution of fine particulate matter, where the distribution function is independent of the particle velocity and the resultant governing equation is termed population balance equation \cite{MarFox2013,Ngu2016,Yuan2011}.
However, the QMOM-derived moment system of the Boltzmann equation leads to unphysical shocks in the numerical solution of Riemann problems \cite{Fox2008}, which is confirmed by our own numerical results (see the Supplementary Material).
Thus it is appealing to find the cause of these irregular behaviors, and the aforementioned criteria are expected to be useful in clarifying such issues.
This paper deals only with the spatially one-dimensional (1-D) Boltzmann equation with hypothetical collisions (BGK or Shakhov type), with the aim of charting a road map for further investigations of more general cases. We show that the QMOM-derived moment system is not strongly hyperbolic for any number $N$ of nodes, while the Gaussian EQMOM produces strictly hyperbolic moment systems when the variance is positive. For the latter, we further determine their equilibrium manifolds and verify the structural stability condition. The proofs are quite technical and purely analytic. They involve detailed analyses of characteristic polynomials of the coefficient matrices.
Let us remark that for $N=2$, the hyperbolicity of moment systems has been studied in \cite{Chalons2012} for 1-D QMOM and in \cite{Chalons2017} for 1-D Gaussian-EQMOM. The proofs rely on direct calculations of the eigenvalues of the coefficient matrix of the moment systems \cite{Chalons2012,Chalons2017} and do not seem generalizable to $N$-node systems. Thus new techniques are needed to handle the general cases. Moreover, the stability of EQMOM has not been analyzed in the existing literature. Given our positive results, EQMOM reveals its potential in solving a wider range of kinetic equations.
The paper is organized as follows. \Cref{sec:main} presents a brief introduction on QBMM (QMOM and EQMOM) and states our main results. \Cref{sec:StabQMOM} is devoted to a proof of non-hyperbolicity of QMOM for $N$-node systems.
In \Cref{sec:StabEQMOM}, we verify the structural stability condition for the EQMOM with $N$ nodes. In particular, the hyperbolicity is demonstrated in \Cref{subsec:hypeqmom}, the equilibrium states are determined in \Cref{subsec:eqmomequistate}, and the dissipation property is shown in \Cref{subsec:coupAseqmom,subsec:coupASseqmom}. Finally, we conclude our paper in \Cref{sec:conclusions}.
\section{Preliminaries}
\label{sec:main}
For simplicity, we only consider a hypothetical 1-D ideal gas with the probability density function $f=f(t,x,\xi)$ of time $t \in \mathbb{R}^{+}$, spatial position $x \in \mathbb{R}$ and velocity $\xi \in \mathbb{R}$.
The temporal evolution of $f$ is governed by the Boltzmann equation \cite{Kremer2010}:
\begin{equation} \label{eq:boltz}
\frac{\partial f}{\partial t} + \xi \frac{\partial f}{\partial x} = Q(f).
\end{equation}
Here the volumetric force is neglected and the right-hand side $Q(f)$ represents the collisions. As a standard assumption \cite{Kremer2010}, $Q=Q(f)$ has only 1, $\xi$ and $\xi^2$ as locally conserved quantities:
\begin{equation} \label{eq:collinvar}
\int_{\mathbb{R}} Q(f) \phi(\xi) d \xi = 0, \qquad \phi (\xi) = 1, \ \xi, \ \xi^{2},
\end{equation}
and vanishes at a local equilibrium distribution
\begin{equation} \label{eq:feq}
f_{eq} = f_{eq} (t,x,\xi) = \frac{\rho}{(2\pi \theta)^{1/2}} \exp \left( -\frac{ \left( \xi - U \right)^{2} }{2 \theta} \right),
\end{equation}
where $\rho$, $U$ and $\theta$ are the density, velocity and temperature of the gas, respectively. They are the classical macroscopic parameters related to $f$ as
\begin{equation} \label{eq:mo2macro}
\rho = \int_{\mathbb{R}} f d \xi, \quad
\rho U = \int_{\mathbb{R}} \xi f d \xi, \quad
\rho \theta = \int_{\mathbb{R}} \left( \xi-U \right)^{2} f d \xi.
\end{equation}
In this paper, we mainly consider the BGK model \cite{bgk1954}, where
\begin{equation} \label{eq:BGK}
Q=Q_{BGK}(f) = \nu (f_{eq} - f).
\end{equation}
Here $\nu$ is the collision frequency.
This simple model has been widely used since it preserves several key properties of the kinetic equation, including \cref{eq:collinvar} and the $H$-theorem. Because the BGK model results in the Prandtl number $Pr=1$, inconsistent with most realistic cases \cite{Kremer2010}, the Shakhov model was proposed \cite{Shakhov1968}:
\begin{equation} \label{eq:Shakhov}
Q_{S}(f) = \nu (f_S - f).
\end{equation}
Here an alternative equilibrium distribution $f_S$ is assumed:
\begin{equation} \label{eq:fS}
f_S = f_{eq} \times \left( 1 + \frac{(1-Pr) q (\xi-U)}{3 \rho \theta^2} \left( \frac{ \left( \xi-U \right)^{2}}{\theta} - 3 \right) \right)
\end{equation}
with $q$ the heat flux defined as $q = \int_{\mathbb{R}} \frac{1}{2} \left( \xi-U \right)^3 f d \xi$.
Denote by $M_j(t,x) = \int_{\mathbb{R}} \xi^{j} f d\xi$ the $j$th velocity-moment of $f$.
From \cref{eq:mo2macro} we see that
\begin{equation} \label{eq:1Dmo}
M_0 = \rho, \ M_1 = \rho U, \ M_2 = \rho (U^2+\theta), \ M_3 = \rho (U^3+3U\theta)+2q.
\end{equation}
The evolution equation for $M_j$ can be derived from the Boltzmann equation \cref{eq:boltz} with the BGK collision \cref{eq:BGK}:
\begin{equation} \label{eq:1Dmoeqbgk}
\partial_t M_j + \partial_x M_{j+1} = \nu \left[ \rho \Delta_j(U,\theta) - M_j \right].
\end{equation}
Here $\Delta_j(U,\theta)$ denotes the $j$th moment of the normalized Gaussian distribution
\begin{displaymath}
\delta_{\theta} (\xi;U) = \frac{1}{\sqrt{2\pi \theta}} \exp \left( - \frac{(\xi-U)^2}{2\theta} \right).
\end{displaymath}
Notice that $\Delta_0(U,\theta)=1$ and $\Delta_1(U,\theta)=U$.
There are infinitely many equations in \cref{eq:1Dmoeqbgk}. The first $N$ equations for moments $M_0,\dots,M_{N-1}$ are not closed, because the $M_{N-1}$-equation contains the term $\partial_x M_{N}$. Hence a closure method is needed.
In the rest of this section, we introduce the QBMM methods, the structural stability condition for hyperbolic relaxation systems, and our main results of this paper.
\subsection{Quadrature-based moment methods}
\label{subsec:QBMM}
In QBMM, the lower-order moments determine the weights and nodes of the quadrature for the integration $\int f(\xi) g(\xi) d \xi$.
Then the unclosed term can be expressed in terms of the lower-order moments and thereby the closure is done \cite{MarFox2013}.
\subsubsection{Quadrature method of moment (QMOM)}
\label{subsubsec:QMOM}
In QMOM, the distribution function $f$ is assumed to be a sum of $N$ Dirac delta functions
\begin{equation} \label{eq:QMOMf}
f(\xi) = \sum_{i=1}^{N} w_i \delta(\xi-u_{i}).
\end{equation}
In order to determine the weights $w_i$ and nodes $u_i$, the first $2N$ lower-order moments $M_0,\dots,M_{2N-1}$ are employed:
\begin{equation} \label{eq:QMOMmo}
M_j = \sum_{i=1}^{N} w_i u_i^j \quad \text{for } j=0,\dots,2N-1.
\end{equation}
These non-linear algebraic equations can be solved to obtain $w_i$ and $u_i$ as in \cite{MarFox2013}. Then the next moment $M_{2N}$ can be found as
\begin{equation} \label{eq:QMOMM_2N}
\bar{M}_{2N} = \sum_{i=1}^{N} w_i u_i^{2N}.
\end{equation}
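In practice, the inversion of \cref{eq:QMOMmo} is performed with standard algorithms that map the moments to the recurrence coefficients of the associated orthogonal polynomials and then diagonalise the resulting Jacobi matrix \cite{MarFox2013}. A minimal sketch of one such route (a Wheeler-type algorithm) is given below; it assumes the input moments are realizable, so that all computed off-diagonal coefficients are positive.
\begin{verbatim}
import numpy as np

def wheeler(m):
    """N-node Gauss quadrature (nodes, weights) from moments m_0..m_{2N-1}."""
    n = len(m) // 2
    sigma = np.zeros((n + 1, 2 * n))   # row 0 plays the role of sigma_{-1} = 0
    sigma[1, :] = m
    a, b = np.zeros(n), np.zeros(n)
    a[0] = m[1] / m[0]
    for k in range(2, n + 1):
        for l in range(k - 1, 2 * n - k + 1):
            sigma[k, l] = (sigma[k - 1, l + 1]
                           - a[k - 2] * sigma[k - 1, l]
                           - b[k - 2] * sigma[k - 2, l])
        a[k - 1] = (sigma[k, k] / sigma[k, k - 1]
                    - sigma[k - 1, k - 1] / sigma[k - 1, k - 2])
        b[k - 1] = sigma[k, k - 1] / sigma[k - 1, k - 2]
    # symmetric Jacobi matrix: eigenvalues are the nodes, weights follow
    # from the first components of the eigenvectors (Golub-Welsch)
    J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    vals, vecs = np.linalg.eigh(J)
    return vals, m[0] * vecs[0, :] ** 2

# example: recover w = (0.4, 0.6), u = (-1.0, 0.5) from its first 4 moments
u, w = wheeler(np.array([1.0, -0.1, 0.55, -0.325]))
\end{verbatim}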
Namely, $w_i$ and $u_i$ are functions of $M_0,\dots,M_{2N-1}$, and so is $\bar{M}_{2N}$. In this way, we obtain the following system of PDEs:
\begin{equation} \label{eq:QMOMboltz}
\partial_t M + A(M) \partial_x M = \nu S(M).
\end{equation}
Here $M=(M_0,\dots,M_{2N-1})^{T} \in \mathbb{R}^{2N}$, $S(M) = \rho (\Delta_0 (U,\theta),\dots, \Delta_{2N-1} (U,\theta))^{T} - M$, and
\begin{equation} \label{eq:QMOMA}
A(M) =
\begin{bmatrix}
0 & 1 & & & \\
& 0 & 1 & & \\
& & \ddots & \ddots & \\
& & & 0 & 1 \\
a_0 & a_1 & \cdots & a_{2N-2} & a_{2N-1}
\end{bmatrix},
\end{equation}
with $a_j = \frac{\partial \bar{M}_{2N}}{\partial M_j}$ for $0 \le j \le 2N-1$.
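A simple preview of the non-hyperbolicity result below is the case $N=1$, where the closure is explicit, $\bar{M}_{2} = M_1^2/M_0$, and \cref{eq:QMOMboltz} reduces to the pressureless gas dynamics: one can check directly that $A(M)$ has the double eigenvalue $u_1 = M_1/M_0$ but only a one-dimensional eigenspace. The sketch below verifies this numerically with arbitrary illustrative values.
\begin{verbatim}
import numpy as np

M0, M1 = 1.0, 0.7                      # rho = 1, u_1 = M1/M0 = 0.7
u = M1 / M0
# closure Mbar_2 = M1^2/M0 gives a_0 = -u^2 and a_1 = 2u
A = np.array([[0.0, 1.0],
              [-u**2, 2.0 * u]])
print(np.linalg.eigvals(A))            # double eigenvalue u
print(np.linalg.matrix_rank(A - u * np.eye(2)))   # rank 1: one eigenvector
\end{verbatim}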
\subsubsection{Extended-QMOM (EQMOM)}
\label{subsubsec:EQMOM}
In order to improve QMOM \cite{Chalons2017}, the delta function in \cref{eq:QMOMf} is replaced with its Gaussian approximation
\begin{displaymath}
\delta_{\sigma^2} (\xi;u) = \frac{1}{\sqrt{2\pi} \sigma} \exp \left( - \frac{(\xi-u)^2}{2\sigma^2} \right),
\end{displaymath}
that is,
\begin{equation} \label{eq:EQMOMf}
f(\xi) = \sum_{i=1}^{N} w_i \delta_{\sigma^2}(\xi;u_{i}).
\end{equation}
Set $W = (w_1,u_1,\dots,w_N,u_N,\sigma^2)^{T} \in \mathbb{R}^{2N+1}$ and $M=(M_0,\dots,M_{2N})^{T} \in \mathbb{R}^{2N+1}$. They are related with the map $M = \mathcal{M} (W)$:
\begin{equation} \label{eq:EQMOMmo}
M_j = \sum_{i=1}^{N} w_i \Delta_j(u_i,\sigma^2) \quad \text{for } j=0,\dots,2N,
\end{equation}
defined for $W \in \Omega_W = \Omega_W^{open} \cup \Omega_{W}^{eq}$, where
\begin{subequations} \label{eq:domainW}
\begin{align}
\Omega_{W}^{open} &= \{ W: \ w_i>0; \ \sigma^2>0; \ \forall \ i \ne j, \ u_i \ne u_j \}, \label{eq:domainWopen} \\
\Omega_{W}^{eq} &= \{ W: \ w_i>0; \ \sigma^2 > 0; \ u_1=u_2=\dots=u_N \}.
\end{align}
\end{subequations}
Remark that $\Delta_j(u_i, \sigma^2)$ is exactly the same as that in \cref{eq:1Dmoeqbgk}. It is shown in \Cref{sec:injection} that the map $M = \mathcal{M} (W)$ is one-to-one for $W \in \Omega_{W}^{open}$.
Therefore, $W$ can be uniquely solved from \cref{eq:EQMOMmo} for $M \in \mathcal{M}(\Omega_W^{open})$.
In this way, the next moment $M_{2N+1}$ is a function of the lower-order moments $M \in \mathcal{M}(\Omega_W^{open})$:
\begin{equation} \label{eq:EQMOMM_2Np1}
\bar{M}_{2N+1} = \sum_{i=1}^{N} w_i \Delta_{2N+1} (u_i, \sigma^2).
\end{equation}
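The Gaussian moments obey the classical recursion $\Delta_{j+1}(u,\sigma^2) = u\,\Delta_j(u,\sigma^2) + j\sigma^2\,\Delta_{j-1}(u,\sigma^2)$, which follows from integration by parts. The sketch below uses it to evaluate \cref{eq:EQMOMmo} and the closure moment $\bar{M}_{2N+1}$; the example weights and nodes are arbitrary.
\begin{verbatim}
import numpy as np

def gaussian_moments(u, sigma2, jmax):
    """Delta_j(u, sigma^2) for j = 0..jmax via the three-term recursion."""
    D = np.zeros(jmax + 1)
    D[0] = 1.0
    if jmax > 0:
        D[1] = u
    for j in range(1, jmax):
        D[j + 1] = u * D[j] + j * sigma2 * D[j - 1]
    return D

def eqmom_moments(w, u, sigma2, jmax):
    """Moments M_0..M_jmax of the N-node Gaussian mixture."""
    return sum(wi * gaussian_moments(ui, sigma2, jmax)
               for wi, ui in zip(w, u))

# example for N = 2: M[0:5] are the transported moments, M[5] = Mbar_{2N+1}
M = eqmom_moments([0.4, 0.6], [-1.0, 0.5], 0.3, 5)
\end{verbatim}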
Therefore, the following moment system is derived:
\begin{equation} \label{eq:EQMOMboltz}
\partial_t M + A(M) \partial_x M = \nu S(M)
\end{equation}
for $M \in \mathcal{M} (\Omega_{W}^{open})$.
Here $S(M) = \rho \left(\Delta_0 (U,\theta),\dots, \Delta_{2N} (U,\theta) \right)^{T} - M$ and
\begin{equation} \label{eq:EQMOMA}
A(M) =
\begin{bmatrix}
0 & 1 & & & \\
& 0 & 1 & & \\
& & \ddots & \ddots & \\
& & & 0 & 1 \\
a_0 & a_1 & \cdots & a_{2N-1} & a_{2N}
\end{bmatrix}
\end{equation}
with $a_j = \frac{\partial \bar{M}_{2N+1}}{\partial M_j}$ for $0 \le j \le 2N$.
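As a check, for $N=1$ the closure is explicit, $\bar{M}_3 = 3M_1M_2/M_0 - 2M_1^3/M_0^2$, and a direct computation gives the distinct real eigenvalues $u_1$ and $u_1 \pm \sqrt{3}\sigma$ of $A(M)$ whenever $\sigma^2 > 0$, consistent with the strict hyperbolicity asserted in \cref{thm:stabeqmom} below. The following sketch verifies this numerically with arbitrary illustrative values.
\begin{verbatim}
import numpy as np

w, u, sigma2 = 1.0, 0.3, 0.25                 # one Gaussian node
M0, M1, M2 = w, w * u, w * (u**2 + sigma2)
# N = 1 closure: Mbar_3 = 3*M1*M2/M0 - 2*M1^3/M0^2
a0 = -3 * M1 * M2 / M0**2 + 4 * M1**3 / M0**3
a1 = 3 * M2 / M0 - 6 * M1**2 / M0**2
a2 = 3 * M1 / M0
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [a0, a1, a2]], dtype=float)
print(np.sort(np.linalg.eigvals(A)))
print(np.sort([u, u - np.sqrt(3 * sigma2), u + np.sqrt(3 * sigma2)]))
\end{verbatim}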
For such systems, the moment set $\mathcal{M} (\Omega_{W})$ and its closure $\overline{\mathcal{M} (\Omega_{W})}$ have been extensively studied as a realizability issue in the literature \cite{Chalons2017,MarFox2013,Ngu2016,Pigou2018}.
A further discussion on this issue is beyond the scope of this paper.
\subsection{Structural stability condition}
\label{subsec:stability}
Both the QMOM and EQMOM moment systems consist of first-order PDEs derived from the Boltzmann equation. To clarify whether these systems inherit the $H$-theorem characterizing the dissipation property of the Boltzmann equation, we recall the structural stability condition proposed in \cite{Yong1999} for systems of $D$-dimensional PDEs:
\begin{equation} \label{eq:genehypde}
\frac{\partial M}{\partial t} + \sum_{d=1}^{D} A_d (M) \frac{\partial M}{\partial x_d} = S(M).
\end{equation}
Here $M$ is the unknown $n$-vector valued function, $A_d=A_d(M)$ is the $d$th $n \times n$ coefficient matrix, and the source term $S=S(M)$ is a given $n$-vector valued function of $M \in \mathbb{G} \subset \mathbb{R}^n$. As in \cite{Yong1999}, we assume that the equilibrium manifold $\mathcal{E} = \{ M \in \mathbb{G} \ | \ S(M) = 0 \}$ is not empty and denote the Jacobian matrix of $S(M)$ as $S_{M} (M)$. The stability condition reads as
\begin{itemize}
\item [(i)] There exist an invertible $n \times n$ matrix $P(M)$ and an invertible $r \times r$ ($0<r \le n$) matrix $\hat{T} (M)$ such that
\begin{displaymath}
P(M) S_M (M) =
\begin{bmatrix}
0 & 0 \\
0 & \hat{T}(M)
\end{bmatrix}
P(M), \quad \forall M \in \mathcal{E};
\end{displaymath}
\item [(ii)] There exists a positive definite symmetric matrix $A_0(M)$ such that
\begin{displaymath}
A_0 (M) A_d (M) = A_d^T (M) A_0 (M) \quad \text{for any } M \in \mathbb{G} \text{ and } d = 1,\dots,D;
\end{displaymath}
\item [(iii)] The spatial derivative parts and the source are coupled as
\begin{displaymath}
A_0 (M) S_M (M) + S_M^T (M) A_0 (M) \le - P^T (M)
\begin{bmatrix}
0 & 0 \\
0 & I_r
\end{bmatrix}
P(M), \quad \forall M \in \mathcal{E}.
\end{displaymath}
\end{itemize}
Here $I_r$ is the unit matrix of order $r$.
As shown in \cite{Yong2008}, this set of conditions has been tacitly respected by many well-developed physical theories. Condition (i) is classical for initial value problems of systems of ordinary differential equations (ODEs, spatially homogeneous systems), while (ii) means the symmetrizable hyperbolicity of the PDE system. Condition (iii) characterizes a kind of coupling between the ODE and PDE parts. Recently, this structural stability condition was shown in \cite{Di2017} to be proper for certain moment closure systems.
On the other hand, this set of conditions implies the existence and stability of the zero relaxation limit of the corresponding initial value problems \cite{Yong1999}. Thanks to these, we believe that the structural stability condition is essential for a reasonable moment closure system.
\subsection{Main results}
\label{subsec:mainresult}
For the moment systems derived above, we establish the following facts as the main results of this paper.
\begin{theorem}[Non-hyperbolicity of QMOM] \label{thm:stabqmom}
The QMOM-derived moment system \cref{eq:QMOMboltz} is not strongly hyperbolic.
\end{theorem}
\begin{theorem}[Stability of EQMOM] \label{thm:stabeqmom}
The EQMOM-derived moment system \cref{eq:EQMOMboltz} satisfies the structural stability condition for $M \in \mathcal{M}(\Omega_W)$.
\end{theorem}
A proof of \cref{thm:stabqmom} will be presented in the next section. In \Cref{sec:StabEQMOM}, \cref{thm:stabeqmom} is divided into \cref{thm:hypeqmom} (hyperbolicity of EQMOM), \cref{thm:equisolu} (equilibrium state), \cref{thm:eqmombgkstabiii} (BGK model) and \cref{thm:eqmomshakstab} (Shakhov model), which will be proved in \Cref{subsec:hypeqmom,subsec:eqmomequistate,subsec:coupAseqmom,subsec:coupASseqmom}, respectively.
\section{Non-hyperbolicity of QMOM}
\label{sec:StabQMOM}
This section is devoted to a proof of \cref{thm:stabqmom} for the QMOM-derived moment system \cref{eq:QMOMboltz} with $N \ge 2$. We should mention that this theorem has been proved in \cite{Chalons2012} but only for $N=2$. For our purpose, we need to consider the $2N \times 2N$ coefficient matrix $A=A(M)$ in \cref{eq:QMOMA}.
\begin{proof}[Proof of \cref{thm:stabqmom}]
Let $\lambda$ be an eigenvalue of $A$ and $\mathbf{v}=(v_1,\dots,v_{2N})^{T}$ the corresponding right eigenvector.
A direct calculation indicates that
\begin{subequations}
\begin{align}
v_{k} = \lambda v_{k-1} &= \lambda^{k-1} v_1 \quad \text{for } k=2,\dots,2N, \label{eq:Avseq} \\
\sum_{k=1}^{2N} a_{k-1} v_k &= \lambda v_{2N} = \lambda^{2N} v_1. \label{eq:Avlast}
\end{align}
\end{subequations}
Then we have $\mathbf{v} = v_1 (1,\lambda,\dots,\lambda^{2N-1})^{T}$ and thereby $v_1 \ne 0$. This shows that the geometric multiplicity of each eigenvalue is 1.
On the other hand, we see from \cref{eq:Avlast} that the characteristic polynomial of $A$ is
\begin{equation} \label{eq:Acharpqmom}
c(\lambda) = \lambda^{2N} - a_{2N-1} \lambda^{2N-1} - \dots - a_1 \lambda - a_0.
\end{equation}
Note that $(a_0,a_1,\dots,a_{2N-1}) = \left( \frac{ \partial \bar{M}_{2N}}{\partial M_0}, \frac{\partial \bar{M}_{2N}}{\partial M_1}, \dots, \frac{\partial \bar{M}_{2N}}{\partial M_{2N-1}} \right) = \frac{\partial \bar{M}_{2N}}{\partial M}$ with $\bar{M}_{2N}$ defined in \cref{eq:QMOMM_2N} and $M=(M_0,\dots,M_{2N-1})^{T}$.
Writing $W = (w_1,u_1,\dots,w_N,u_N)^{T} \in \mathbb{R}^{2N}$, we have
\begin{equation} \label{eq:aMWqmom}
(a_0,a_1,...,a_{2N-1}) \frac{\partial M}{\partial W} = \left( \frac{\partial \bar{M}_{2N}}{\partial M} \right) \left( \frac{\partial M}{\partial W} \right) = \frac{\partial \bar{M}_{2N}}{\partial W}.
\end{equation}
In addition, it follows from \cref{eq:QMOMmo} that the $2N \times 2N$ Jacobian matrix $\partial M / \partial W$ is
\begin{displaymath}
\frac{\partial M}{\partial W} =
\begin{bmatrix}
1 & 0 & \cdots & 1 & 0 \\
u_1 & w_1 & \cdots & u_N & w_N \\
\vdots & \vdots & & \vdots & \vdots \\
u_1^j & jw_1u_1^{j-1} & \cdots & u_N^j & jw_Nu_N^{j-1} \\
\vdots & \vdots & & \vdots & \vdots \\
u_1^{2N-1} & (2N-1)w_1u_1^{2N-2} & \cdots & u_N^{2N-1} & (2N-1)w_Nu_N^{2N-2}
\end{bmatrix}
\end{displaymath}
and from \cref{eq:QMOMM_2N} that
\begin{displaymath}
\frac{\partial \bar{M}_{2N}}{\partial W} = \left( u_1^{2N}, 2Nw_1u_1^{2N-1},\dots, u_N^{2N}, 2Nw_Nu_N^{2N-1} \right).
\end{displaymath}
Substituting the last two relations into \cref{eq:aMWqmom}, we obtain
\begin{displaymath}
\begin{aligned}
u_k^{2N} - a_{2N-1} u_k^{2N-1} - \dots - a_1 u_k - a_0 &= 0, \\
2Nu_k^{2N-1} - (2N-1)a_{2N-1}u_k^{2N-2} - \dots - a_1 &= 0
\end{aligned}
\end{displaymath}
for $k=1,\dots,N$. These mean that $c(u_k) = 0$ and $\left. \frac{dc(\lambda)}{d \lambda} \right|_{\lambda = u_k} = 0$ for $k=1,\dots,N$. Since $c=c(\lambda)$ is a monic polynomial of degree $2N$, we must have
\begin{equation} \label{eq:cuk_qmom}
c(\lambda) = (\lambda-u_1)^2 \cdots (\lambda-u_N)^2.
\end{equation}
As a result, the eigenvalues of $A$ are $u_1,u_2,\dots,u_N$ and each of them has the algebraic multiplicity 2 and the geometric multiplicity 1. In view of its Jordan canonical form, the coefficient matrix $A$ is similar to
\begin{equation} \label{eq:AJor_qmom}
\begin{bmatrix}
u_1 & 1 & & & \\
0 & u_1 & & & \\
& & \ddots & & \\
& & & u_N & 1 \\
& & & 0 & u_N
\end{bmatrix}.
\end{equation}
Hence the moment closure system \cref{eq:QMOMboltz} is not strongly hyperbolic.
\end{proof}
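The Jordan structure derived above can be observed numerically. The following sketch, reusing the illustrative \texttt{qmom\_invert} helper from \Cref{subsec:QBMM}, approximates the coefficients $a_j = \partial \bar{M}_{2N}/\partial M_j$ by finite differences through the moment inversion and confirms that the eigenvalues of $A$ are the nodes, each appearing twice. This is a sanity check under our own naming conventions, not part of the proof.
\begin{verbatim}
import numpy as np

def qmom_a(mom, eps=1e-6):
    # Finite-difference approximation of a_j = d Mbar_{2N} / d M_j,
    # obtained by re-inverting the perturbed moments.
    def closure(m):
        u, w = qmom_invert(m)
        return np.sum(w * u**len(m)).real
    base = closure(mom)
    return np.array([(closure(mom + eps * np.eye(len(mom))[j]) - base) / eps
                     for j in range(len(mom))])

w0, u0 = np.array([0.3, 0.7]), np.array([-1.0, 2.0])
mom = np.array([np.sum(w0 * u0**j) for j in range(4)])
A = np.diag(np.ones(3), 1)        # superdiagonal ones, as in A(M)
A[-1, :] = qmom_a(mom)
print(np.sort(np.linalg.eigvals(A).real))
# approximately [-1, -1, 2, 2]: each node is a double eigenvalue,
# up to the finite-difference error
\end{verbatim}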
\section{Stability of EQMOM}
\label{sec:StabEQMOM}
We prove \cref{thm:stabeqmom} in this section. In particular, \Cref{subsec:hypeqmom} is devoted to Condition (ii), the equilibrium manifold is determined in \Cref{subsec:eqmomequistate}, and Conditions (i) and (iii) are verified in \Cref{subsec:coupAseqmom,subsec:coupASseqmom} for the BGK and Shakhov collision models, respectively.
\subsection{Preliminaries}
\label{subsec:preli}
Recall that in \Cref{subsec:QBMM}, we use the notation
\begin{displaymath}
\Delta_{j} = \Delta_{j}(u,\sigma^2) = \int_{\mathbb{R}} \xi^j \delta_{\sigma^2} (\xi;u) d\xi
\end{displaymath}
for the $j$th moment of the Gaussian distribution $\delta_{\sigma^2}=\delta_{\sigma^2}(\xi;u) = \frac{1}{\sqrt{2\pi}\sigma} \exp{\left(-\frac{(\xi-u)^2}{2\sigma^2}\right)}$.
A direct calculation shows $\Delta_0(u,\sigma^2) = 1$ and $\Delta_1(u,\sigma^2) = u$. Moreover, we can show with \cref{prop:basicDel}(a) below that $\Delta_j (u,\sigma^2)$ is a bivariate polynomial of $u$ and $\sigma^2$.
\begin{lemma} \label{prop:basicDel}
\begin{displaymath}
\begin{aligned}
&\text{(a)}& \ \Delta_j(u,\sigma^2) &= u \Delta_{j-1}(u,\sigma^2) + (j-1) \sigma^2 \Delta_{j-2}(u,\sigma^2) \quad \text{for } j \ge 2, \\
&\text{(b)}& \ \Delta_j(u,\sigma^2) &= \sum_{k=0}^{\infty} \left( \frac{\sigma^2}{2} \right)^k \frac{(u^j)^{(2k)}}{k!} \quad \text{(this is a finite sum)}, \\
&\text{(c)}& \ \frac{\partial \Delta_j (u, \sigma^2)}{\partial u} &= j\Delta_{j-1}(u,\sigma^2) \quad \text{for } j \ge 1, \\
&\text{(d)}& \ \frac{\partial \Delta_j (u, \sigma^2)}{\partial \sigma^2} &= \frac{j(j-1)}{2} \Delta_{j-2}(u,\sigma^2) \quad \text{for } j \ge 2.
\end{aligned}
\end{displaymath}
\end{lemma}
\begin{proof}
(a): Note that $d \delta_{\sigma^2} / d \xi = - (\xi-u) \delta_{\sigma^2} / {\sigma^2} $. Then for $j \ge 2$ we have
\begin{displaymath}
\begin{aligned}
\Delta_j
&= \int_{\mathbb{R}} (\xi-u+u) \xi^{j-1} \delta_{\sigma^2} d \xi
= u \Delta_{j-1} + \int_{\mathbb{R}} (\xi-u) \xi^{j-1} \delta_{\sigma^2} d\xi \\
&= u \Delta_{j-1} - \sigma^2 \int_{\mathbb{R}} \xi^{j-1} \frac{d \delta_{\sigma^2}}{d\xi} d\xi
= u \Delta_{j-1} + (j-1) \sigma^2 \Delta_{j-2}.
\end{aligned}
\end{displaymath}
This, together with $\Delta_0(u,\sigma^2) = 1$ and $\Delta_1(u,\sigma^2) = u$, indicates that $\Delta_j=\Delta_j(u,\sigma^2)$ is a polynomial of both $u$ and $\sigma^2$.
(b): This can be proven by induction on $j$. It obviously holds for $\Delta_0=1$ and $\Delta_1=u$. Suppose it is true for $j-1$ and $j$. Then for $j+1$ it follows from (a) that
\begin{displaymath}
\begin{aligned}
\Delta_{j+1}
&= u\Delta_j + j \sigma^2 \Delta_{j-1}
= u\sum_{k=0}^{\infty} \left( \frac{\sigma^2}{2} \right)^k \frac{(u^j)^{(2k)}}{k!} + \sigma^2 \sum_{k=0}^{\infty} \left( \frac{\sigma^2}{2} \right)^k \frac{(u^j)^{(2k+1)}}{k!} \\
&= u^{j+1} + \sum_{k=1}^{\infty} \left( \frac{\sigma^2}{2} \right)^k \frac{u(u^j)^{(2k)}+2k(u^j)^{(2k-1)}}{k!} = \sum_{k=0}^{\infty} \left( \frac{\sigma^2}{2} \right)^k \frac{(u^{j+1})^{(2k)}}{k!}.
\end{aligned}
\end{displaymath}
Hence the proof is complete.
(c \& d): These two follow immediately from (b):
\begin{displaymath}
\begin{aligned}
\frac{\partial \Delta_{j}}{\partial u}
&= \sum_{k=0}^{\infty} \left( \frac{\sigma^2}{2} \right)^k \frac{(u^j)^{(2k+1)}}{k!}
= \sum_{k=0}^{\infty} \left( \frac{\sigma^2}{2} \right)^k \frac{j(u^{j-1})^{(2k)}}{k!}
= j \Delta_{j-1}; \\
\frac{\partial \Delta_{j}}{\partial \sigma^2}
&= \frac{1}{2} \sum_{k=1}^{\infty} k \left( \frac{\sigma^2}{2} \right)^{k-1} \frac{(u^j)^{(2k)}}{k!}
= \frac{1}{2} \sum_{k=0}^{\infty} \left( \frac{\sigma^2}{2} \right)^{k} \frac{(u^j)^{(2k+2)}}{k!} \\
&= \frac{1}{2} \sum_{k=0}^{\infty} \left( \frac{\sigma^2}{2} \right)^{k} \frac{j(j-1)(u^{j-2})^{(2k)}}{k!}
= \frac{j(j-1)}{2} \Delta_{j-2}.
\end{aligned}
\end{displaymath}
\end{proof}
\begin{remark} \label{rem:Deltaexplicit}
\cref{prop:basicDel}(b) is obviously equivalent to
\begin{equation}
\Delta_j (u,\sigma^2) = \sum_{k=0}^{[j/2]} \frac{j!}{k!(j-2k)!} \left( \frac{\sigma^2}{2} \right)^k u^{j-2k},
\end{equation}
which was established in \cite{MarFox2013}. But the former is more convenient for our later use.
\end{remark}
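As a quick numerical sanity check of the recursion in \cref{prop:basicDel}(a), the following Python sketch compares it against direct quadrature of the Gaussian moments; the helper name \texttt{gauss\_moments} is illustrative only.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def gauss_moments(u, s2, jmax):
    # Delta_j(u, sigma^2) for j = 0, ..., jmax via the recursion (a).
    D = [1.0, u]
    for j in range(2, jmax + 1):
        D.append(u * D[-1] + (j - 1) * s2 * D[-2])
    return np.array(D[:jmax + 1])

u, s2 = 0.7, 0.4
pdf = lambda x: np.exp(-(x - u)**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
direct = [quad(lambda x, j=j: x**j * pdf(x), -np.inf, np.inf)[0]
          for j in range(6)]
print(np.allclose(direct, gauss_moments(u, s2, 5)))   # True
\end{verbatim}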
Inspired by \cref{prop:basicDel}(b), we introduce a family of linear operators $\mathcal{D}_{\vartheta}$, parameterized with $\vartheta \in \mathbb{R}$, acting on the polynomial algebra $\mathbb{R} [u]$. For $f \in \mathbb{R}[u]$, $\mathcal{D}_{\vartheta} f$ is defined as
\begin{equation} \label{eq:opDdefinition}
\mathcal{D}_{\vartheta} f = \sum_{k=0}^{\infty} \left( \frac{\vartheta}{2} \right)^k \frac{f^{(2k)}}{k!},
\end{equation}
which is a finite sum.
Obviously, $\mathcal{D}_0$ is the identity operator, $\mathcal{D}_{\sigma^2}f$ is a polynomial of $u$ and $\sigma^2$, and $\mathcal{D}_{\sigma^2} u^j = \Delta_j(u,\sigma^2)$. Further useful properties of $\mathcal{D}_{\vartheta}$ are collected in the following lemma.
\begin{lemma} \label{prop:basicOp}
\begin{displaymath}
\begin{aligned}
&\text{(a)}& \ &\text{(composition)} \quad \mathcal{D}_{\alpha} \circ \mathcal{D}_{\vartheta} = \mathcal{D}_{\alpha+\vartheta}, \\
&\text{(b)}& \ &\mathcal{D}_{\vartheta} \ \text{is invertible and } \mathcal{D}_{\vartheta}^{-1} = \mathcal{D}_{-\vartheta}, \\
&\text{(c)}& \ &\frac{\partial}{\partial u} \mathcal{D}_{\vartheta} f(u) = \mathcal{D}_{\vartheta} f'(u), \quad \frac{\partial}{\partial \vartheta} \mathcal{D}_{\vartheta} f(u) = \frac{1}{2} \mathcal{D}_{\vartheta} f''(u), \\
&\text{(d)}& \ &\mathcal{D}_{\vartheta} (uf) = u\mathcal{D}_{\vartheta}f + \vartheta \mathcal{D}_{\vartheta}f', \\
&\text{(e)}& \ &\text{If } \mathcal{D}_{\vartheta} f (u_0) = 0, \text{ then } \mathcal{D}_{\vartheta}(uf)|_{u=u_0} = \vartheta \mathcal{D}_{\vartheta} f'(u_0).
\end{aligned}
\end{displaymath}
\end{lemma}
\begin{proof}
(a): For the composition, we deduce from the definition that
\begin{displaymath}
\begin{aligned}
(\mathcal{D}_{\alpha} \circ
&\mathcal{D}_{\vartheta} )f = \sum_{k=0}^{\infty} \left( \frac{\alpha}{2} \right)^k \frac{1}{k!} \sum_{l=0}^{\infty} \left( \frac{\vartheta}{2} \right)^l \frac{f^{(2k+2l)}}{l!} \\
&= \sum_{p=0}^{\infty} \frac{f^{(2p)}}{p!} \left[ \sum_{l=0}^p \frac{p!}{l!(p-l)!} \left( \frac{\alpha}{2} \right)^{p-l} \left( \frac{\vartheta}{2} \right)^l \right]
= \sum_{p=0}^{\infty} \frac{f^{(2p)}}{p!} \left( \frac{\alpha+\vartheta}{2} \right)^p = \mathcal{D}_{\alpha+\vartheta} f.
\end{aligned}
\end{displaymath}
(b) follows immediately from (a) and $\mathcal{D}_0 = id$.
For (c), the first identity is obvious, while the second can be shown as in the proof of \cref{prop:basicDel}(d).
(d): By using $(uf)^{(2k)} = u f^{(2k)} + 2k f^{(2k-1)}$, this can be proved as in the proof of \cref{prop:basicDel}(b). Then (e) follows immediately from (d).
\end{proof}
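The operator $\mathcal{D}_{\vartheta}$ is straightforward to realize on coefficient vectors. The sketch below, with the illustrative helper name \texttt{D}, applies the finite sum in \cref{eq:opDdefinition} to a \texttt{numpy} polynomial and verifies the inversion property (b) on the monomial $u^3$.
\begin{verbatim}
import math
from numpy.polynomial import Polynomial as P

def D(theta, f):
    # Apply D_theta to the polynomial f; the sum in the definition
    # terminates after deg(f)//2 + 1 terms.
    out, term = P([0.0]), f
    for k in range(f.degree() // 2 + 1):
        out = out + (theta / 2)**k / math.factorial(k) * term
        term = term.deriv(2)
    return out

f = P([0.0, 0.0, 0.0, 1.0])          # f(u) = u^3
Delta3 = D(0.5, f)                   # u^3 + 1.5 u = Delta_3(u, 0.5)
print(D(-0.5, Delta3).coef)          # ~ [0, 0, 0, 1]: D_{-t} inverts D_t
\end{verbatim}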
\subsection{Hyperbolicity of EQMOM}
\label{subsec:hypeqmom}
In this section we prove that the EQMOM-derived moment system \cref{eq:EQMOMboltz} for the 1-D Boltzmann equation is strictly hyperbolic, which will be shown to be sufficient for the structural stability condition (ii). The conclusion can be stated as
\begin{theorem} \label{thm:hypeqmom}
For $M \in \mathcal{M}(\Omega_W)$, the $(2N+1) \times (2N+1)$ coefficient matrix $A=A(M)$ in \cref{eq:EQMOMA} has $(2N+1)$ distinct real eigenvalues. Namely, the EQMOM-derived moment system \cref{eq:EQMOMboltz} is strictly hyperbolic.
\end{theorem}
We should mention that this theorem was already established in \cite{Chalons2017} for $N=2$ (the two-node system) but the proof does not seem to work for $N > 2$.
Our proof of this theorem needs some preparations. First of all, the characteristic polynomial of $A$ in \cref{eq:EQMOMA} reads as
\begin{equation} \label{eq:Acharpeqmom}
c(u;W) = u^{2N+1} - a_{2N}u^{2N} - \cdots - a_1 u - a_0.
\end{equation}
Here the coefficient $a_j =a_j(W)= \frac{\partial \bar{M}_{2N+1}}{\partial M_j}$ ($j=0,1,\dots,2N$), with $\bar{M}_{2N+1}$ defined in \cref{eq:EQMOMM_2Np1}, is a function of $W$.
To show that $c(u;W)$, as a polynomial of $u$, has $(2N+1)$ distinct real roots for $M \in \mathcal{M}(\Omega_W)$, we introduce an auxiliary function
\begin{equation} \label{eq:gdefine}
g(u;W) = \mathcal{D}_{\sigma^2} c(u;W) = \sum_{k=0}^{\infty} \left( \frac{\sigma^2}{2} \right)^k \frac{\partial_u^{2k} c(u;W)}{k!}.
\end{equation}
By \cref{prop:basicOp}(b), we have
\begin{equation} \label{eq:cbyg}
c(u;W) = \mathcal{D}_{-\sigma^2} g(u;W) = \sum_{k=0}^{\infty} \left( - \frac{\sigma^2}{2} \right)^k \frac{\partial_u^{2k} g(u;W)}{k!}.
\end{equation}
Set $a_{2N+1}=-1$. Then $c(u;W)$ can be rewritten as $-\sum_{j=0}^{2N+1} a_j u^{j}$ and from the linearity of $\mathcal{D}_{\sigma^2}$ it follows that
\begin{equation} \label{eq:ggg}
g(u;W) = -\sum_{j=0}^{2N+1} a_j \mathcal{D}_{\sigma^2} u^{j} = -\sum_{j=0}^{2N+1} a_j \Delta_j(u, \sigma^2).
\end{equation}
Moreover, from \cref{eq:Acharpeqmom,eq:gdefine} we see that $g(u;W)$ is a $u$-polynomial of degree $(2N+1)$:
\begin{equation} \label{eq:cgrelate1}
g(u;W) = -\sum_{j=0}^{2N+1} g_j u^j
\end{equation}
with $g_{2N+1} = a_{2N+1}= -1$. Further relations between the coefficients of $g(u;W)$ and $c(u;W)$ are
\begin{equation} \label{eq:ajbygj}
a_j = \sum_{k=0}^{N-[j/2]} g_{j+2k} \frac{(j+2k)!}{j!k!} \left( - \frac{ \sigma^2}{2} \right)^k, \quad j=0,1,\dots, 2N+1.
\end{equation}
This can be shown as
\begin{displaymath}
\begin{aligned}
c(u;W)
&= \sum_{k=0}^N \frac{\partial_u^{2k} g(u;W)}{k!} \left( -\frac{\sigma^2}{2} \right)^k
= - \sum_{k=0}^N \sum_{j=0}^{2N+1-2k} \frac{(j+2k)!}{j!k!} \left( - \frac{\sigma^2}{2} \right)^k g_{j+2k} u^j \\
&= - \sum_{j=0}^{2N+1} \left[ \sum_{k=0}^{N-[j/2]} \frac{(j+2k)!}{j!k!} \left( -\frac{\sigma^2}{2} \right)^k g_{j+2k} \right] u^j.
\end{aligned}
\end{displaymath}
Furthermore, $g=g(u)=g(u;W)$ has the following elegant expression.
\begin{lemma} \label{prop:groots}
\begin{displaymath}
g(u;W) = (u-u_1)^2 \cdots (u-u_N)^2 (u-\tilde{U}),
\end{displaymath}
where $u_1,\dots,u_N$ are the nodes solved from \cref{eq:EQMOMmo}, and
\begin{displaymath}
\tilde{U} = \tilde{U} (W) = \frac{\sum_{i=1}^N w_i u_i \prod_{1\le j\le N, j\ne i} (u_j-u_i)^2}{\sum_{i=1}^N w_i \prod_{1\le j\le N, j \ne i} (u_j-u_i)^2}
\end{displaymath}
for $W \in \Omega_W^{open}$.
\end{lemma}
\begin{remark} \label{rem:geq}
This lemma shows that for $W \in \Omega_{W}^{open}$, $\tilde{U}$ is a convex combination of the $u_i$'s. Moreover, for $W = (w_1, U, w_2, U, \dots, w_N, U, \sigma^2) \in \Omega_{W}^{eq}$ and any sequence $\{W_k\} \subset \Omega_{W}^{open}$ approaching $W$, $\tilde{U}(W_k)$ converges to $U$.
Because of this, for $W \in \Omega_W^{eq}$ we define $\tilde{U}(W) = U$ ($=M_1/M_0$) and thereby $g(u;W) = (u-U)^{2N+1}$.
\end{remark}
\begin{proof}
By \cref{prop:basicDel}(c\&d), the Jacobian matrix of the map $M = \mathcal{M}(W)$ defined in \cref{eq:EQMOMmo} is
\begin{equation} \label{eq:jacoeqmom}
\begin{bmatrix}
\Delta_0(u_1) & 0 & \cdots & \Delta_0(u_N) & 0 & 0 \\
\Delta_1(u_1) & w_1 \Delta_0(u_1) & \cdots & \Delta_1(u_N) & w_N \Delta_0(u_N) & 0 \\
\vdots & \vdots & & \vdots & \vdots & \vdots \\
\Delta_j(u_1) & jw_1 \Delta_{j-1}(u_1) & \cdots & \Delta_j(u_N) & j w_N \Delta_{j-1}(u_N) & \binom{j}{2} M_{j-2} \\
\vdots & \vdots & & \vdots & \vdots & \vdots \\
\Delta_{2N}(u_1) & 2N w_1 \Delta_{2N-1}(u_1) & \cdots & \Delta_{2N}(u_N) & 2N w_N \Delta_{2N-1}(u_N) & \binom{2N}{2} M_{2N-2}
\end{bmatrix}.
\end{equation}
Note that the dependence of $\Delta_j$ on $\sigma^2$ has been omitted here for clarity. Moreover, from \cref{eq:EQMOMM_2Np1}, $\partial \bar{M}_{2N+1} / \partial W$ reads as
\begin{displaymath}
\left( \Delta_{2N+1}(u_1), (2N+1)w_1 \Delta_{2N}(u_1), \dots, \Delta_{2N+1}(u_N), (2N+1)w_N \Delta_{2N}(u_N), \binom{2N+1}{2} M_{2N-1} \right).
\end{displaymath}
Then from the simple relation
\begin{displaymath}
(a_0,a_1,...,a_{2N}) \frac{\partial \mathcal{M}}{\partial W} = \left( \frac{\partial \bar{M}_{2N+1}}{\partial M} \right) \left( \frac{\partial \mathcal{M}}{\partial W} \right) = \frac{\partial \bar{M}_{2N+1}}{\partial W}
\end{displaymath}
we obtain
\begin{subequations}
\begin{align}
a_0 \Delta_0 (u_k) + a_1 \Delta_1 (u_k) + \dots + a_{2N} \Delta_{2N} (u_k) &= \Delta_{2N+1} (u_k), \label{eq:aMWeqmom1} \\
w_k a_1 \Delta_0 (u_k) + \dots + 2N w_k a_{2N} \Delta_{2N-1} (u_k) &= (2N+1) w_k \Delta_{2N} (u_k), \label{eq:aMWeqmom2} \\
\binom{2}{2} a_2 M_0 + \dots + \binom{2N}{2} a_{2N} M_{2N-2} &= \binom{2N+1}{2} M_{2N-1}. \label{eq:aMWeqmom3}
\end{align}
\end{subequations}
\Cref{eq:aMWeqmom1,eq:aMWeqmom2}, together with \cref{eq:ggg} and \cref{prop:basicDel}(c), imply that $g(u_k)=0$ and $\left. \frac{dg(u)}{du} \right|_{u=u_k} =0$ for $k=1,\dots,N$.
Thus we see the expected expression of $g(u;W)$ from \cref{eq:cgrelate1} with $\tilde{U}$ to be determined.
Next, we use \cref{eq:aMWeqmom3} to determine $\tilde{U}$.
Recall that $a_{2N+1}=-1$. We use \cref{eq:EQMOMmo} to rewrite \cref{eq:aMWeqmom3} as
\begin{equation} \label{eq:ajsrel}
0=\sum_{j=2}^{2N+1} \binom{j}{2} a_j M_{j-2} = \sum_{i=1}^N w_i \sum_{j=2}^{2N+1} \binom{j}{2} a_j \Delta_{j-2} (u_i,\sigma^2).
\end{equation}
On the other hand, we deduce from \cref{eq:ggg,eq:cgrelate1} that
\begin{displaymath}
-\frac{1}{2} g''(u) = \sum_{j=2}^{2N+1} \binom{j}{2} g_j u^{j-2} = \sum_{j=2}^{2N+1} \binom{j}{2} a_j \Delta_{j-2}(u,\sigma^2).
\end{displaymath}
Then we see from \cref{eq:ajsrel} that
\begin{equation} \label{eq:ajsrel1}
\sum_{i=1}^{N} w_i \sum_{j=2}^{2N+1} \binom{j}{2} g_j u_i^{j-2} = 0.
\end{equation}
Now we define $\tilde{g}(u) = (u-u_1)^2 \cdots (u-u_N)^2 = -\sum_{j=0}^{2N} \tilde{g}_j u^{j}$ with $\tilde{g}_{2N} = -1$. Then $g(u) = (u-\tilde{U}) \tilde{g}(u)$ and the coefficients are related with
\begin{displaymath}
g_j = \tilde{g}_{j-1} - \tilde{U} \tilde{g}_j
\end{displaymath}
for $0 \le j \le 2N+1$ ($\tilde{g}_{-1} = \tilde{g}_{2N+1} = 0$). Substituting this relation into \cref{eq:ajsrel1}, we obtain
\begin{displaymath}
\left[ \sum_{j=2}^{2N} \binom{j}{2} \tilde{g}_j M_{j-2}^{*} \right] \tilde{U} = \sum_{j=2}^{2N+1} \binom{j}{2} \tilde{g}_{j-1} M_{j-2}^{*},
\end{displaymath}
where $M_j^{*} = \sum_{i=1}^{N} w_i u_i^{j}$. It remains to show
\begin{displaymath}
\begin{aligned}
\sum_{j=2}^{2N} \binom{j}{2} \tilde{g}_j M_{j-2}^{*} &= - \sum_{i=1}^N w_i \prod_{1\le k\le N,k \ne i} (u_k-u_i)^2, \\
\sum_{j=2}^{2N+1} \binom{j}{2} \tilde{g}_{j-1} M_{j-2}^{*} &= - \sum_{i=1}^N w_i u_i \prod_{1\le k\le N, k \ne i} (u_k-u_i)^2.
\end{aligned}
\end{displaymath}
These two follow from the obvious relations
\begin{displaymath}
\begin{aligned}
- \sum_{j=2}^{2N} \binom{j}{2} \tilde{g}_j u_i^{j-2} &= \frac{1}{2} \tilde{g}''(u_i) = \prod_{1\le k\le N, k \ne i} (u_k - u_i)^2, \\
- \sum_{j=2}^{2N+1} \binom{j}{2} \tilde{g}_{j-1} u_i^{j-2} &= \frac{1}{2} \left( u \tilde{g} \right)'' (u_i) = u_i \prod_{1\le k\le N, k \ne i} (u_k-u_i)^2
\end{aligned}
\end{displaymath}
for any $1 \le i \le N$.
This completes the proof.
\end{proof}
\begin{remark} \label{rem:greqgorder}
\cref{prop:groots} indicates that the coefficients $g_j$ of $g(u;W)$ in \cref{eq:cgrelate1} are independent of $\sigma^2$. From \cref{eq:cbyg} we see that $c(u;W)$ is a bivariate polynomial of $u$ and $\sigma^2$, the coefficients $a_j$ of $c(u;W)$ are polynomials of $\sigma^2$, and $c(u;W) = g(u)$ for $\sigma^2=0$.
Furthermore, the $j$th derivative $c^{(j)}(u;W)$ of $c(u;W)$ with respect to $u$ can be viewed as a perturbation of $g^{(j)}(u)$ with the single parameter $\sigma^2 \ge 0$ for $0 \le j \le 2N+1$.
\end{remark}
By \cref{prop:groots}, $g(u)$ has $(2N+1)$ ($=$ the degree of $g$) real roots (including multiplicity). This fact can be further generalized as follows.
\begin{lemma} \label{prop:gderroots}
For any $0 \le j \le 2N$, $g^{(j)}(u)$ has $(2N+1-j)$ real roots (including multiplicity). Hence any local minimum (maximum) value of $g^{(j)}(u)$ is non-positive (non-negative).
\end{lemma}
\begin{proof}
We prove by induction on $j$. As discussed above, the conclusion holds for $j=0$. Namely, $g$ has $(2N+1)$ roots. Suppose it holds for $0,\dots,j$. Then we have $g^{(j)}(u) = C (u-\tilde{u}_1)^{k_1} \cdots (u-\tilde{u}_m)^{k_m}$, where $m\ge 1$, $\tilde{u}_1 < \tilde{u}_2 < \dots < \tilde{u}_m$, $k_i \ge 1$ and $k_1 + \dots + k_m = 2N+1-j$.
Thus $(u-\tilde{u}_i)^{k_i-1}$ is a factor of $g^{(j+1)}(u)$ for any $1\le i \le m$.
Besides, Rolle's theorem implies the existence of at least one root of $g^{(j+1)}(u)$ in each open interval $(\tilde{u}_i,\tilde{u}_{i+1})$ for $1\le i \le m-1$. Therefore, the number of roots of $g^{(j+1)}$ is no less than
\begin{displaymath}
(k_1-1)+\cdots+(k_m-1) + (m-1) = (k_1+\cdots+k_m)-1 = 2N-j.
\end{displaymath}
Since $g^{(j+1)}$ is of degree $(2N-j)$, it must have $(2N-j)$ roots (including multiplicity). This also indicates that $g^{(j)}$ has only one extreme point in each open interval above. Hence any local minimum (maximum) value of $g^{(j)}(u)$ is non-positive (non-negative).
\end{proof}
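The statement of \cref{prop:gderroots} is easy to check numerically for a polynomial of the form given in \cref{prop:groots}; the example below is merely illustrative.
\begin{verbatim}
import numpy as np
from numpy.polynomial import Polynomial as P

# g with two double nodes and one simple root, as in the lemma above
g = P.fromroots([-0.8, -0.8, 1.3, 1.3, 0.2])
for j in range(1, 5):
    r = g.deriv(j).roots()
    assert np.allclose(r.imag, 0.0)   # all 5 - j roots of g^(j) are real
\end{verbatim}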
With the preparations above, we are in a position to prove \cref{thm:hypeqmom}.
\begin{proof}[Proof of \cref{thm:hypeqmom}]
We will prove the following stronger statement: for $0\le j\le 2N+1$, $c^{(2N+1-j)}(u;W)$ has $j$ distinct real roots for any $W =(w_1,u_1,\dots, w_N,u_N,\sigma^2) \in \Omega_W$ with $\sigma^2>0$. This will be done by induction on $j$.
For $j=0, \ 1$, the statement is obvious because $c^{(2N+1)}(u;W) = (2N+1)!$ and $c^{(2N)}(u;W)$ is of degree 1.
Suppose the conclusion holds for all $j \le k$ with $k \le 2N$.
From \cref{rem:greqgorder} we know that $c^{(2N+1-k)}(u;W)$ is a bivariate polynomial of $u$ and $\sigma^2$ on $\mathbb{R} \times [0,\infty)$.
Let $u^{*}(\sigma^2) \in \mathbb{R}$ denote a root of $c^{(2N+1-k)}(u;W)$. Thus $u^{*}(\sigma^2)$ is an extreme point of $c^{(2N-k)}(u;W)$ and $u^{*}(0)$ is a root of $g^{(2N+1-k)}(u)$.
Moreover, $u^{*}(\sigma^2)$ is continuous on $\sigma^2 \in [0,\infty)$ and differentiable on $(0,\infty)$ because the roots are distinct \cite{Kato1980}.
Next we consider the extreme values of $c^{(2N-k)}(u;W)$ at $u=u^*(\sigma^2)$.
Since $c^{(2N-k)}(u;W)$ is a polynomial of $u$ and $\sigma^2$, the composite $h_{k}(\sigma^2):=c^{(2N-k)}(u^*(\sigma^2);W)$ is continuous on $[0,\infty)$ and differentiable on $(0,\infty)$, and $h_k(0)$ is the corresponding critical value of $g^{(2N-k)}(u)$.
According to \cref{prop:gderroots}, $h_k(0) \ge 0$ if it is a local maximum and $h_k(0) \le 0$ if it is a local minimum.
For $\sigma^2>0$, because $c^{(2N+1-k)}(u^*(\sigma^2);W)=0$, the derivative of $h_k(\sigma^2)$ reads as
\begin{displaymath}
\begin{aligned}
\frac{\partial h_k(\sigma^2)}{\partial \sigma^2}
&= \frac{\partial}{\partial \sigma^2} c^{(2N-k)}(u^*(\sigma^2);W)
= \frac{\partial}{\partial \sigma^2} \sum_{l=0}^{\infty} \left( - \frac{\sigma^2}{2} \right)^l \frac{g^{(2l+2N-k)}(u^*(\sigma^2))}{l!} \\
&= -\frac{1}{2} \sum_{l=1}^{\infty} \left( - \frac{\sigma^2}{2} \right)^{l-1} \frac{g^{(2l+2N-k)}(u^*(\sigma^2))}{(l-1)!} + c^{(2N+1-k)}(u^*(\sigma^2);W)
\frac{\partial u^*(\sigma^2)}{\partial \sigma^2} \\
&= -\frac{1}{2} c^{(2N+2-k)}(u^*(\sigma^2);W).
\end{aligned}
\end{displaymath}
Thus, if $c^{(2N-k+2)}(u^*(\sigma^2);W)<0$ (that is, $u^*(\sigma^2)$ is a local maximum point of $c^{(2N-k)}(u;W)$), then the local maximum value $h_k(\sigma^2)$ strictly increases on $\sigma^2 \in (0,\infty)$.
Since $h_k(0) \ge 0$ and $h_k(\sigma^2)$ is continuous at $\sigma^2=0$, we conclude that $h_k(\sigma^2)>0$ for all $\sigma^2>0$.
Similarly, if $c^{(2N-k+2)}(u^*(\sigma^2);W)>0$, we have $h_k(\sigma^2)<0$ for all $\sigma^2>0$.
In summary, the above arguments show that each local maximum value of the degree-$(k+1)$ polynomial $c^{(2N-k)}(u;W)$ is positive and each local minimum value is negative. On the other hand, by the induction assumption $c^{(2N+1-k)}(u;W)$ has $k$ distinct real roots, which are exactly the extreme points of $c^{(2N-k)}(u;W)$ for $\sigma^2>0$. Owing to the alternating signs of the extreme values, $c^{(2N-k)}(u;W)$ has $(k-1)$ distinct real roots lying between consecutive extreme points. Moreover, since $|c^{(2N-k)}(u;W)|$ tends to infinity as $u \to \pm\infty$, it has one further root greater than and one less than all the extreme points. Thus, for each $\sigma^2>0$, $c^{(2N-k)}(u;W)$ has $(k+1)$ distinct real roots. By the induction principle this completes the proof.
\end{proof}
\begin{remark} \label{rem:hyp2}
By \cref{thm:hypeqmom}, the coefficient matrix $A=A(M)$ of the 1-D moment system \cref{eq:EQMOMboltz} has $n=(2N+1)$ distinct real eigenvalues $\lambda_i$ ($1\le i \le n$) for $\sigma^2>0$. Denote by $r_i$ the corresponding left eigenvectors. Set $L = (r_1^{T},\dots,r_n^{T})^{T}$.
It is clear that $A_0(M) = L^{T} \Lambda L$ with $\Lambda$ an arbitrary positive diagonal matrix is a symmetrizer in the structural stability condition (ii).
As a matter of fact, it is straightforward to show that such a symmetrizer can only be of the form $L^{T} \Lambda L$.
\end{remark}
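The proof also yields a practical way to compute the characteristic speeds without inverting the moment map: by \cref{prop:groots} and \cref{eq:cbyg}, $c(u;W) = \mathcal{D}_{-\sigma^2} g(u;W)$ with $g$ known explicitly from $W$. The Python sketch below, reusing the illustrative operator \texttt{D} from \Cref{subsec:preli} and introducing the hypothetical helper \texttt{eqmom\_speeds}, evaluates the $2N+1$ eigenvalues of $A(M)$ for a two-node example.
\begin{verbatim}
import numpy as np
from numpy.polynomial import Polynomial as P

def eqmom_speeds(w, u, s2):
    # Roots of c = D_{-sigma^2} g, with g(x) = prod_i (x-u_i)^2 (x-Utilde).
    d = np.array([np.prod([(u[j] - u[i])**2
                           for j in range(len(u)) if j != i])
                  for i in range(len(u))])
    Ut = np.sum(w * u * d) / np.sum(w * d)   # the convex combination Utilde
    g = P([1.0])
    for ui in u:
        g = g * P([-ui, 1.0])**2
    g = g * P([-Ut, 1.0])
    return np.sort(D(-s2, g).roots().real)

print(eqmom_speeds(np.array([0.4, 0.6]), np.array([-0.8, 1.3]), 0.5))
# five distinct real speeds, consistent with strict hyperbolicity
\end{verbatim}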
\subsection{Equilibrium state}
\label{subsec:eqmomequistate}
As stated in \Cref{subsec:stability}, (i) and (iii) of the structural stability condition should be examined on the equilibrium manifold $\mathcal{E}$ where $S(M(W))=0$.
In this section we determine the equilibrium manifold.
For the BGK model, $S(M(W))=0$ is equivalent to
\begin{equation} \label{eq:equieq}
\sum_{i=1}^{N} w_i \Delta_j (u_i, \sigma^2) = M_j = \rho \Delta_j (U, \theta) \quad \text{for } j=0,\dots,2N
\end{equation}
(see \Cref{subsubsec:EQMOM}). Thus, the equilibrium state $W=(w_1, u_1, \dots, w_N, u_N, \sigma^2)^T$ is determined by the three macroscopic parameters $\rho$, $U$ and $\theta$, and we need to solve \cref{eq:equieq} for $W$.
For this purpose, we recall from \Cref{subsec:preli} that $\Delta_0(u,\sigma^2)=1$, $\Delta_1(u,\sigma^2)=u$ and $\Delta_2(u,\sigma^2)=u^2+\sigma^2$. Thus, for $j=0,1,2$, \cref{eq:equieq} is just
\begin{displaymath}
\sum_{i=1}^{N} w_i = \rho, \quad \sum_{i=1}^{N} w_i u_i = \rho U, \quad \sum_{i=1}^{N} w_i u_i^2 = \rho U^2 + \rho (\theta - \sigma^2).
\end{displaymath}
Then we deduce from the Cauchy--Schwarz inequality $\left( \sum_{i=1}^{N} w_i \right) \left( \sum_{i=1}^{N} w_i u_i^2 \right) \ge \left( \sum_{i=1}^{N} w_i u_i \right)^2$
that
\begin{equation} \label{eq:zetaprop}
\sigma^2 \le \theta \quad \text{and } \sigma^2=\theta \text{ if and only if all the $u_i$'s are equal}.
\end{equation}
For further discussions, we need the following fact.
\begin{proposition} \label{lem:Mjstar}
\begin{displaymath}
M_j^{*}: = \sum_{i=1}^{N} w_i u_i^{j}= \sum_{k=0}^{[j/2]} \frac{j!}{k!(j-2k)!} \left( - \frac{\sigma^2}{2} \right)^k M_{j-2k}.
\end{displaymath}
\end{proposition}
\begin{proof}
Recall that $\Delta_j(u,\sigma^2) = \mathcal{D}_{\sigma^2} u^j$. From \cref{prop:basicOp}(b) and \cref{prop:basicDel}(c) we deduce that
\begin{displaymath}
u^j = \sum_{k=0}^{\infty} \frac{\partial_u^{2k} \Delta_j(u,\sigma^2)}{k!} \left( - \frac{\sigma^2}{2} \right)^k = \sum_{k=0}^{[j/2]} \frac{j!}{k!(j-2k)!} \left( - \frac{\sigma^2}{2} \right)^k \Delta_{j-2k}(u,\sigma^2).
\end{displaymath}
Then taking the weighted summation $\sum_{i=1}^N w_i$ and using \cref{eq:EQMOMmo} give the proposition.
\end{proof}
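The identity in \cref{lem:Mjstar} is also easy to verify numerically, e.g.\ with the illustrative \texttt{gauss\_moments} helper sketched in \Cref{subsec:preli}:
\begin{verbatim}
import math
import numpy as np

w, u, s2 = np.array([0.4, 0.6]), np.array([-0.8, 1.3]), 0.5
jmax = 6
# EQMOM moments M_j and the power sums M_j^* for the same weights/nodes
M = np.array([sum(wi * gauss_moments(ui, s2, jmax)[j]
                  for wi, ui in zip(w, u)) for j in range(jmax + 1)])
Mstar = np.array([np.sum(w * u**j) for j in range(jmax + 1)])
for j in range(jmax + 1):
    rhs = sum(math.factorial(j) // (math.factorial(k) * math.factorial(j - 2*k))
              * (-s2 / 2)**k * M[j - 2*k] for k in range(j // 2 + 1))
    assert abs(Mstar[j] - rhs) < 1e-10
\end{verbatim}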
Next we define $\zeta^2 = \theta - \sigma^2 \ge 0$ and show that \cref{eq:equieq} is equivalent to
\begin{equation} \label{eq:equieqred}
\sum_{i=1}^{N} w_i u_i^j = \rho \Delta_j (U, \zeta^2) \quad \text{for } j=0,\dots,2N.
\end{equation}
Indeed, if \cref{eq:equieq} holds (i.e. $M(W) \in \mathcal{E}$), the last proposition implies that
\begin{displaymath}
\begin{aligned}
\sum_i w_i u_i^j
= M_j^* &= \sum_{k=0}^{[j/2]} \left( -\frac{\sigma^2}{2} \right)^k \frac{j!}{k!(j-2k)!} \left[ \rho \Delta_{j-2k} (U,\theta) \right] \\
&= \rho \sum_{k=0}^{\infty} \left( -\frac{\sigma^2}{2} \right)^k \left. \frac{ \partial_u^{2k} \Delta_j (u,\theta) }{k!} \right|_{u=U}
= \rho \mathcal{D}_{-\sigma^2} \mathcal{D}_{\theta} U^j = \rho \mathcal{D}_{\theta-\sigma^2} U^j.
\end{aligned}
\end{displaymath}
Here the expression $\mathcal{D}_{\vartheta} f(U)$ denotes $\mathcal{D}_{\vartheta} f(u)|_{u=U}$ for arbitrary polynomial $f$ and the last step is due to \cref{prop:basicOp}(a). This is just \cref{eq:equieqred}.
The deduction of \cref{eq:equieq} from \cref{eq:equieqred} is similar.
Now we are in a position to state the central result of this section.
\begin{theorem} \label{thm:equisolu}
The equilibrium state belongs to $\Omega_W^{eq}$, that is,
\begin{displaymath}
u_1 = \dots = u_N = U, \ \sigma^2 = \theta, \ \text{and} \ \sum_{i=1}^{N} w_i = \rho.
\end{displaymath}
Hence, at equilibrium $\bar{M}_{2N+1} = \rho \Delta_{2N+1} (U, \theta)$.
\end{theorem}
\begin{proof}
Thanks to \cref{eq:zetaprop}, it suffices to show that $\zeta^2 := \theta - \sigma^2 = 0$. Otherwise, the $u_i$'s must take $N'$ different values ($1<N'\le N$).
Then, by redefining $w_i$, the summation $\sum_{i=1}^N w_i u_i^j$ in the left-hand side of \cref{eq:equieqred} is reduced to $\sum_{k=1}^{N'} w_k u_k^j$ where the $u_k$'s are distinct ($1 \le k \le N'$).
Thus, we may as well assume that all the $u_i$'s are distinct and $\zeta^2>0$.
Then we will derive a contradiction in three steps, where the abbreviation $$\mathcal{D}_{\vartheta} f(U) \equiv \mathcal{D}_{\vartheta} f(u)|_{u=U}$$
will be frequently used.
\textbf{Step I}. Because $\Delta_j (u,\zeta^2) = \mathcal{D}_{\zeta^2} u^j$, the first $N$ equations ($j=0,\dots,N-1$) in \cref{eq:equieqred} can be rewritten as a system of linear algebraic equations:
\begin{displaymath}
\begin{bmatrix}
1& 1 & \cdots & 1 \\
u_1 & u_2 & \cdots & u_N \\
\vdots & \vdots & & \vdots \\
u_1^{N-1} & u_2^{N-1} & \cdots & u_N^{N-1}
\end{bmatrix}
\begin{bmatrix}
w_1 \\ w_2 \\ \vdots \\ w_N
\end{bmatrix}
= \rho
\begin{bmatrix}
\mathcal{D}_{\zeta^2} (1) \\
\mathcal{D}_{\zeta^2} (U^1) \\
\vdots \\
\mathcal{D}_{\zeta^2} (U^{N-1})
\end{bmatrix}.
\end{displaymath}
Since all the $u_i$'s are distinct, this gives a unique $(w_1,\dots,w_N)$ in terms of $(u_1,\dots,u_N)$, $\rho$, $U$ and $\zeta^2$. We claim that for $1\le i \le N$,
\begin{equation} \label{eq:wbyueq}
w_i \prod_{1\le k \le N, k \ne i} (u_i-u_k) = \rho \mathcal{D}_{\zeta^2} \left( \prod_{1 \le k \le N, k \ne i} (U-u_k) \right).
\end{equation}
To see this, we use the uniqueness and only need to show that the $w_i$'s solve the system of equations above.
Indeed, thanks to the Lagrange interpolating polynomial
\begin{equation*} \label{eq:lip1}
\sum_{i=1}^N \frac{ \prod_{1 \le k \le N, k \ne i} (u-u_k) }{ \prod_{1 \le k \le N, k \ne i} (u_i-u_k) } u_i^j = u^j \quad \text{for } 0 \le j \le N-1
\end{equation*}
and the linearity of the operator $\mathcal{D}_{\zeta^2}$, \cref{eq:wbyueq} implies that for $0\le j \le N-1$,
\begin{displaymath}
\sum_{i=1}^N w_i u_i^j
= \rho \mathcal{D}_{\zeta^2} \left( \sum_{i=1}^N \frac{ \prod_{1 \le k \le N, k \ne i} (U-u_k) }{ \prod_{1 \le k \le N, k \ne i} (u_i-u_k) } u_i^j \right) = \rho \mathcal{D}_{\zeta^2} (U^j).
\end{displaymath}
Namely, the $w_i$'s defined in \cref{eq:wbyueq} solve the system of linear algebraic equations above.
\textbf{Step II}. With the $w_i$'s defined in \cref{eq:wbyueq}, we turn to the next $N$ equations ($j=N,\dots,2N-1$) in \cref{eq:equieqred} to solve $u_i$:
\begin{displaymath}
\begin{bmatrix}
1& 1 & \cdots & 1 \\
u_1 & u_2 & \cdots & u_N \\
\vdots & \vdots & & \vdots \\
u_1^{N-1} & u_2^{N-1} & \cdots & u_N^{N-1}
\end{bmatrix}
\begin{bmatrix}
w_1 u_1^N \\ w_2 u_2^N \\ \vdots \\ w_N u_N^N
\end{bmatrix}
= \rho
\begin{bmatrix}
\mathcal{D}_{\zeta^2} (U^N) \\
\mathcal{D}_{\zeta^2} (U^{N+1}) \\
\vdots \\
\mathcal{D}_{\zeta^2} (U^{2N-1})
\end{bmatrix}.
\end{displaymath}
Again, the solution $w_iu_i^N$ is unique. As in \textbf{Step I}, we can show that
\begin{equation} \label{eq:wuNeq}
w_i u_i^N \prod_{1\le k \le N, k \ne i} (u_i-u_k) = \rho \mathcal{D}_{\zeta^2} \left( U^N \prod_{1\le k\le N, k\ne i} (U-u_k) \right)
\end{equation}
for $1\le i \le N$.
Substituting \cref{eq:wbyueq} into \cref{eq:wuNeq}, we obtain
\begin{equation} \label{eq:usolve1}
u_i^N \mathcal{D}_{\zeta^2} \left( \prod_{1\le k \le N, k\ne i} (U-u_k) \right) = \mathcal{D}_{\zeta^2} \left( U^N \prod_{1\le k \le N, k\ne i} (U-u_k) \right)
\end{equation}
for $1 \le i \le N$. By the linearity of $\mathcal{D}_{\zeta^2}$, \cref{eq:usolve1} is equivalent to
\begin{displaymath}
\mathcal{D}_{\zeta^2} \left( (U^{N-1} + u_i U^{N-2} + \cdots + u_i^{N-1}) \prod_{k=1}^N (U-u_k) \right) = 0,
\end{displaymath}
which can be rewritten as
\begin{displaymath}
\begin{bmatrix}
1 & u_1 & \cdots & u_1^{N-1} \\
\vdots & \vdots & & \vdots \\
1 & u_N & \cdots & u_N^{N-1}
\end{bmatrix}
\begin{bmatrix}
\mathcal{D}_{\zeta^2} (U^{N-1} F) \\
\vdots \\
\mathcal{D}_{\zeta^2} (F)
\end{bmatrix} = 0
\end{displaymath}
with $F=F(U) = \prod_{k=1}^N (U-u_k)$.
Since all the $u_i$'s are distinct, this says
\begin{equation} \label{eq:dFeq0}
\mathcal{D}_{\zeta^2}(F) = \mathcal{D}_{\zeta^2}(UF) = \cdots = \mathcal{D}_{\zeta^2}(U^{N-1}F) = 0.
\end{equation}
Having this, in \cref{prop:basicOp}(e) we take $u_0 = U$ and $f(u) = u^j F(u)$ ($0 \le j \le N-2$) and deduce from \cref{eq:dFeq0} that
$$0 = \mathcal{D}_{\zeta^2} (U^{j+1}F) = \zeta^2 \mathcal{D}_{\zeta^2} \left( (U^jF)' \right) = \zeta^2 \mathcal{D}_{\zeta^2} \left( U^j F' \right).$$
Hence $\mathcal{D}_{\zeta^2}(U^j F') = 0$ for $0 \le j \le N-2$. This procedure can be repeated for the derivative of $F'(u)$ to yield $\mathcal{D}_{\zeta^2} (U^j F'') = 0$ for $0 \le j \le N-3$. Moreover, we have
\begin{equation} \label{eq:dFjeq0}
\mathcal{D}_{\zeta^2} \left( U^j F^{(k)} \right) = 0
\end{equation}
for $0 \le k \le N-1$ and $0 \le j \le N-1-k$.
\textbf{Step III}. In this step, we use \cref{eq:wuNeq} and the Lagrange interpolating polynomial
\begin{equation*} \label{eq:interpj}
\sum_{i=1}^N \frac{\prod_{1\le k\le N, k\ne i} (u-u_k)}{\prod_{1\le k\le N, k\ne i} (u_i-u_k)} u_i^N
= u^N - \prod_{k=1}^N (u-u_k)
\end{equation*}
to deduce that
\begin{displaymath}
\begin{aligned}
\sum_{i=1}^N w_i u_i^{2N}
&= \rho \mathcal{D}_{\zeta^2} \left( U^N \sum_{i=1}^N \frac{\prod_{1\le k\le N, k\ne i} (U-u_k)}{\prod_{1\le k\le N, k\ne i} (u_i-u_k)} u_i^N \right) = \rho \mathcal{D}_{\zeta^2} \left( U^N(U^N-F) \right).
\end{aligned}
\end{displaymath}
Thus, the last equation in \cref{eq:equieqred} is equivalent to $\mathcal{D}_{\zeta^2} (U^N (U^N-F)) = \mathcal{D}_{\zeta^2} (U^{2N})$ or
\begin{equation*}
\mathcal{D}_{\zeta^2} \left( U^N F \right) = 0.
\end{equation*}
Then we use \cref{prop:basicOp}(d) and \cref{eq:dFjeq0} to see that
$$0 = \mathcal{D}_{\zeta^2} (U^N F) = \zeta^2 \mathcal{D}_{\zeta^2} (U^{N-1} F') =
\cdots = \zeta^{2N} \mathcal{D}_{\zeta^2} (F^{(N)}) = N! \cdot \zeta^{2N},$$
which implies that $\zeta^2=0$.
This contradicts the assumption that all the $u_i$'s are distinct and $\zeta^2>0$.
Hence the proof is complete.
\end{proof}
\subsection{BGK model}
\label{subsec:coupAseqmom}
In this subsection we show that the EQMOM moment system \cref{eq:EQMOMboltz} with BGK source term
$$S(M)= \rho \left(\Delta_0 (U,\theta),\dots, \Delta_{2N} (U,\theta) \right)^{T} - M$$
satisfies the structural stability condition (i)--(iii). Indeed, (ii) has been verified in \cref{rem:hyp2}.
To see Condition (i), we compute the Jacobian matrix of $S=S(M)$. Notice that the first three components of $S$ vanish identically and $\rho, U,\theta$ depend only on $M_0$, $M_1$ and $M_2$.
Then the Jacobian matrix can be written as
\begin{equation} \label{eq:sourcejac}
S_M (M) := \frac{\partial S}{\partial M} =
\begin{bmatrix}
0_{3 \times 3} & \\
\hat{S}_M & -I_{2N-2}
\end{bmatrix},
\end{equation}
where $\hat{S}_M$ is a $(2N-2)\times 3$ matrix with
\begin{equation} \label{eq:chidef}
\left( \hat{S}_M \right)_{i-2, \ j+1} = \chi_i^j := \partial (\rho \Delta_i(U,\theta)) / \partial M_j
\end{equation}
for $3 \le i \le 2N$ and $j=0,1,2$.
Now we take
\begin{equation} \label{eq:sourcediag}
P=\begin{bmatrix} I_3 & \\ - \hat{S}_M & I_{2N-2} \end{bmatrix}
\end{equation}
and see that $P S_M = \begin{bmatrix} 0_{3\times 3}& \\ & -I_{2N-2} \end{bmatrix} P$, which justifies Condition (i).
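The block decomposition in Condition (i) can be confirmed with a short numerical check, in which the entries of $\hat{S}_M$ are taken as arbitrary placeholders:
\begin{verbatim}
import numpy as np

N, rng = 4, np.random.default_rng(1)
n = 2 * N + 1
chi = rng.normal(size=(2 * N - 2, 3))     # placeholder entries of S_M hat
S_M = np.zeros((n, n))
S_M[3:, :3], S_M[3:, 3:] = chi, -np.eye(2 * N - 2)
Pm = np.eye(n); Pm[3:, :3] = -chi
blk = np.zeros((n, n)); blk[3:, 3:] = -np.eye(2 * N - 2)
assert np.allclose(Pm @ S_M, blk @ Pm)    # P S_M = diag(0, -I) P
\end{verbatim}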
The rest of this subsection is to show Condition (iii).
To this end, we need to choose the symmetrizer $A_0 = A_0(M)$.
As pointed out in \cref{rem:hyp2}, such a symmetrizer $A_0$ can only be of the form $L^{T} \Lambda L$ with $\Lambda$ a diagonal positive definite matrix to be determined.
Firstly, we specify the matrix $L = (r_1^{T},\dots,r_{2N+1}^{T})^{T}$ with $r_i$ a left eigenvector of the coefficient matrix $A=A(M)$ corresponding to the eigenvalues $\lambda_i$ for $1 \le i \le 2N+1$.
Let $r_i=\left( r_i^{(1)},\dots,r_i^{(2N+1)} \right)$.
From $r_i A=\lambda_i r_i$ we have
$$r_i^{(j)} + a_j r_i^{(2N+1)} = \lambda_i r_i^{(j+1)} \quad \text{for } 0 \le j \le 2N.$$
Here we have set $r_i^{(0)}=0$ by convention.
From the last equation we see that $r_i^{(2N+1)} \ne 0$; otherwise the eigenvector $r_i=0$. Thus we may as well assume $r_i^{(2N+1)}=1$. Recall that $a_{2N+1}=-1$. Then we can easily obtain
\begin{displaymath}
r_i^{(j)} = - \sum_{k=j}^{2N+1}a_k \lambda_i^{k-j}
\end{displaymath}
for $0 \le j \le 2N$. Therefore, we have
\begin{equation} \label{eq:symmL}
L=
\begin{bmatrix}
\lambda_1^{2N} & \lambda_1^{2N-1} & \lambda_1^{2N-2} & \cdots & 1 \\
\lambda_2^{2N} & \lambda_2^{2N-1} & \lambda_2^{2N-2} & \cdots & 1 \\
\lambda_3^{2N} & \lambda_3^{2N-1} & \lambda_3^{2N-2} & \cdots & 1 \\
\vdots & \vdots & \vdots & & \vdots \\
\lambda_{2N+1}^{2N} & \lambda_{2N+1}^{2N-1} & \lambda_{2N+1}^{2N-2} & \cdots & 1
\end{bmatrix}
\begin{bmatrix}
1 &&&& \\
-a_{2N} & 1 &&& \\
-a_{2N-1} & -a_{2N} & 1 && \\
\vdots & \vdots & \vdots & \ddots & \\
-a_1 & -a_2 & -a_3 & \cdots & 1
\end{bmatrix}.
\end{equation}
With this $L$, we can state our main result of this subsection.
\begin{theorem} \label{thm:eqmombgkstabiii}
For the EQMOM moment system \cref{eq:EQMOMboltz}, the inequality in the structural stability condition (iii) holds with $A_0 = L^T L$ and $P$ defined in \cref{eq:sourcediag}.
\end{theorem}
\begin{proof}
According to Theorem 2.1 in \cite{Yong1999}, it suffices to show that at equilibrium states $M$,
$$K(M) := P^{-T} A_0 P^{-1} = (LP^{-1})^{T} (LP^{-1})$$
is of the block-diagonal form $\diag(K_1,K_2)$, in which $K_1$ and $K_2$ are $3 \times 3$ and $(2N-2) \times (2N-2)$ matrices, respectively.
Namely, the first three columns of $LP^{-1}$ are orthogonal to its other columns.
In what follows all the states $M$ are in equilibrium.
To show the orthogonality, we compute the $(2N+1) \times (2N+1)$ matrix $LP^{-1} := (b_{il})$. From \cref{eq:sourcediag} we see that
\begin{equation} \label{eq:sourcediaginv}
P^{-1} =
\begin{bmatrix}
I_3 & \\
\hat{S}_M & I_{2N-2}
\end{bmatrix}.
\end{equation}
This, together with \cref{eq:symmL}, gives
\begin{displaymath}
b_{il} = \left \{
\begin{aligned}
- \sum_{j=0}^{2N} \chi_j^{l-1} \sum_{k=j+1}^{2N+1} a_k \lambda_i^{k-j-1} \quad &\text{for } 1 \le l \le 3, \\
-\sum_{k=l}^{2N+1} a_k \lambda_i^{k-l} \quad &\text{for } 4 \le l \le 2N+1.
\end{aligned}
\right.
\end{displaymath}
Here $\chi_j^{l}$ (with $0 \le j \le 2N$ and $l=0,1,2$) is understood via \cref{eq:chidef}; in particular, $\chi_j^{l} = \delta_{jl}$ for $j=0,1,2$ because $\rho \Delta_j = M_j$ in these cases. The expression above indicates that the last $(2N-2)$ columns of $LP^{-1}$ are linear combinations of $(\lambda_1^{\beta},\dots,\lambda_{2N+1}^{\beta})^{T} \in \mathbb{R}^{2N+1}$ for $0 \le \beta \le 2N-3$.
Thus it reduces to show that
$$\sum_{i=1}^{2N+1} b_{il} \lambda_i^{\beta} = 0 \quad \text{for $l=1,2,3$ and $0 \le \beta \le 2N-3$} .$$
Set $p_k = \sum_{i=1}^{2N+1} \lambda_i^k$. By using the above expression of $b_{il}$ for $1\le l\le 3$, the last equation is equivalent to
\begin{equation} \label{eq:LPinvinprod}
\sum_{j=0}^{2N} \chi_j^{l} \sum_{k=j+1}^{2N+1} a_k p_{k-j-1+\beta} = \sum_{j=-\beta}^{2N-\beta} \chi_{j+\beta}^l \sum_{k=\beta}^{2N-j} a_{j+k+1}p_{k} = 0
\end{equation}
for $l=0,1,2$ and $0 \le \beta \le 2N-3$.
To prove \cref{eq:LPinvinprod}, it suffices to show that
\begin{equation} \label{eq:LPinvinpdeq}
\sum_{j=-\beta}^{2N-\beta} \mathcal{H}_{j+\beta} \sum_{k=\beta}^{2N-j} a_{j+k+1} p_{k} = 0, \quad \text{for } 0 \le \beta \le 2N-3,
\end{equation}
where $\mathcal{H}_j$ can be replaced by any of $\Delta_j=\Delta_j(U,\theta)$, $\partial_U \Delta_j$ and $\partial_{\theta} \Delta_j$. Indeed, \cref{eq:LPinvinprod} follows immediately from \cref{eq:LPinvinpdeq} and \cref{eq:chidef} which says
\begin{displaymath}
\chi_j^l = \frac{\partial}{\partial M_l} (\rho \Delta_j(U,\theta)) = \left( \frac{\partial \rho}{\partial M_l} \right) \Delta_j + \left( \rho \frac{\partial U}{\partial M_l} \right) \partial_U \Delta_{j} + \left( \rho \frac{\partial \theta}{\partial M_l} \right) \partial_{\theta} \Delta_j
\end{displaymath}
for $0 \le j \le 2N$ and $l=0,1,2$.
Before proceeding, two tools are needed. The first one is Newton's power sum formulas for $p_k$ \cite{EID1968}:
\begin{subequations} \label{eq:newtonsum}
\begin{align}
\sum_{k=0}^{2N+1} a_k p_{k-j-1} = \sum_{k=-1-j}^{2N-j} a_{j+k+1} p_k &= 0 \quad \text{for} \ j \le -2, \label{eq:newtonsum2} \\
(2N-j) a_{j+1} + \sum_{k=1}^{2N-j} a_{j+k+1} p_k &= 0 \quad \text{for} \ -1 \le j \le 2N. \label{eq:newtonsum1}
\end{align}
\end{subequations}
The second tool is the following relation
\begin{equation} \label{eq:ceqeq}
\left. \mathcal{D}_{\theta} \left( u^k c^{(j)} \right) \right|_{u=U} = 0
\end{equation}
for $0 \le k \le 2N$ and $0 \le j \le 2N-k$, where $c^{(j)}$ denotes the $j$th derivative of the characteristic polynomial $c=c(u;W)$ with respect to $u$.
This relation can be proved as below. \cref{prop:groots} tells $g(u) = (u-U)^{2N+1}$ in equilibrium and therefore $g^{(j)}(U) = 0$ for $0 \le j \le 2N$.
From \cref{prop:basicOp}(c) we see that $g^{(j)}(u)=\mathcal{D}_{\theta} c^{(j)}$ and thereby $\left. \mathcal{D}_{\theta} c^{(j)} \right|_{u=U} = 0$ for $0 \le j \le 2N$. This is just the case for $k=0$ in \cref{eq:ceqeq}.
Then using \cref{prop:basicOp}(d) we have
\begin{displaymath}
\left. \mathcal{D}_{\theta} \left(u c^{(j)}\right) \right|_{u=U}
= U \left. \mathcal{D}_{\theta} c^{(j)} \right|_{u=U} + \theta \left. \mathcal{D}_{\theta} c^{(j+1)} \right|_{u=U} = 0
\end{displaymath}
for $j=0,\dots,2N-1$, which validates the case for $k=1$.
This procedure can be repeated to show \cref{eq:ceqeq} for other $k \le 2N$.
With these preparations, we only need to prove \cref{eq:LPinvinpdeq} for the following two cases.
\textbf{Case I}: $\beta=0$. Noting that $p_0 = 2N+1$, we deduce from \cref{eq:newtonsum1} that $$\sum_{k=0}^{2N-j} a_{k+j+1} p_{k} = (j+1)a_{j+1}.$$
Thus \cref{eq:LPinvinpdeq} in this case is equivalent to
\begin{displaymath}
\sum_{j=0}^{2N} (j+1) a_{j+1} \mathcal{H}_j = 0.
\end{displaymath}
When taking $\mathcal{H}_j$ to be $\Delta_j$, $\partial_U \Delta_j$ or $\partial_{\theta} \Delta_j$,
the left-hand side of the last equation equals, up to a nonzero constant factor, $\left. \mathcal{D}_{\theta} c' \right|_{u=U}$, $\left. \mathcal{D}_{\theta} c'' \right|_{u=U}$ or $\left. \mathcal{D}_{\theta} c''' \right|_{u=U}$, respectively.
They are all equal to zero due to \cref{eq:ceqeq} and hence \cref{eq:LPinvinpdeq} with $\beta=0$ is proved.
\textbf{Case II}: $\beta \ge 1$.
As in Case I, we first simplify the coefficients $\sum_{k=\beta}^{2N-j} a_{j+k+1} p_{k}$ of $\mathcal{H}_{j+\beta}$ in \cref{eq:LPinvinpdeq} by using Newton's power sum formulas \cref{eq:newtonsum1,eq:newtonsum2}. They can be rewritten as
\begin{displaymath}
\begin{aligned}
\sum_{k=-1-j}^{\beta-1} a_{j+k+1} p_{k} + \sum_{k=\beta}^{2N-j} a_{j+k+1} p_{k} &= 0 \quad \text{for } j \le -2, \\
\left( (2N-j)a_{j+1} + \sum_{k=1}^{\beta-1}a_{j+k+1} p_{k} \right) + \sum_{k=\beta}^{2N-j} a_{j+k+1} p_{k} &= 0 \quad \text{for } j \ge -1.
\end{aligned}
\end{displaymath}
With these two relations, \cref{eq:LPinvinpdeq} is equivalent to
\begin{equation} \label{eq:ceqfurred}
\begin{split}
0
&= \sum_{j=-\beta}^{2N-\beta} \mathcal{H}_{j+\beta} \sum_{k=\max \{ 1, -1-j \}}^{\beta-1} a_{j+k+1} p_{k} + \sum_{j=-1}^{2N-\beta} (2N-j) \mathcal{H}_{j+\beta} a_{j+1} \\
&= \sum_{k=1}^{\beta-1} p_{k} \sum_{j=-1-k}^{2N-\beta} \mathcal{H}_{j+\beta} a_{j+k+1} + \sum_{j=-1}^{2N-\beta} (2N-j) \mathcal{H}_{j+\beta} a_{j+1}
\end{split}
\end{equation}
for $1 \le \beta \le 2N-3$.
\cref{eq:ceqfurred} can be further simplified by using the following relations
\begin{displaymath}
\sum_{j=-1-k}^{2N-k} \mathcal{H}_{j+\beta} a_{j+k+1} = 0 \quad \text{for } 1 \le k \le \beta-1.
\end{displaymath}
Indeed, replacing $\mathcal{H}_j$ by $\Delta_j$, $\partial_U \Delta_j$ or $\partial_{\theta} \Delta_j$,
the sum is just $\left. \mathcal{D}_{\theta} \left( u^{\beta-1-k} c \right) \right|_{u=U}$, $\left. \mathcal{D}_{\theta} \left( u^{\beta-1-k} c' \right) \right|_{u=U}$ or $\left. \mathcal{D}_{\theta} \left( u^{\beta-1-k} c'' \right) \right|_{u=U}$,
respectively.
They are all equal to zero due to \cref{eq:ceqeq} and the fact that $0 \le \beta-1-k \le \beta-2 \le 2N-5$.
With the last relation, the first term in the right-hand side of \cref{eq:ceqfurred} is reduced to
\begin{displaymath}
\begin{aligned}
\sum_{k=1}^{\beta-1} p_{k} & \sum_{j=-1-k}^{2N-\beta} \mathcal{H}_{j+\beta} a_{j+k+1}
= -\sum_{k=1}^{\beta-1} p_{k} \sum_{j=2N-\beta+1}^{2N-k} \mathcal{H}_{j+\beta} a_{j+k+1} \\
&= -\sum_{j=2N-\beta+1}^{2N-1} \mathcal{H}_{j+\beta} \sum_{k=1}^{2N-j} a_{j+k+1} p_{k}
= \sum_{j=2N-\beta+1}^{2N-1} \mathcal{H}_{j+\beta} \left[ (2N-j) a_{j+1} \right].
\end{aligned}
\end{displaymath}
The last step resorts again to \cref{eq:newtonsum1} for $j \ge 2N+1-\beta \ge 4$.
With this, \cref{eq:ceqfurred} is equivalent to
\begin{equation} \label{eq:ceqredfin}
\sum_{j=-1}^{2N-1} (2N-j) \mathcal{H}_{j+\beta} a_{j+1} = 0 \quad \text{for } 1 \le \beta \le 2N-3.
\end{equation}
This is our final task.
In \cref{eq:ceqredfin}, we take $\mathcal{H}_j$ to be $\Delta_j$, $\partial_U \Delta_j$ or $\partial_{\theta} \Delta_j$ and arrive at
\begin{displaymath}
\left. \mathcal{D}_{\theta} \left( u^{\beta}c' - (2N+1) u^{\beta-1} c \right)^{(k)} \right|_{u=U} = 0
\end{displaymath}
for $1 \le \beta \le 2N-3$.
Here $k=0,1,2$ correspond to $\mathcal{H}_j=\Delta_j, \ \partial_U \Delta_j \text{ or } \partial_{\theta} \Delta_j$, respectively.
The last relations hold due to \cref{eq:ceqeq} and the linearity of the operator $\mathcal{D}_{\theta}$.
Hence the orthogonality is validated and the proof is complete.
\end{proof}
\begin{remark}
It is worth pointing out that the structural stability condition still holds if the collision frequency $\nu=\nu(M)$ in the BGK model depends on $M$, because in equilibrium $S=S(M)=0$ and thus $\partial_M (\nu S) = \nu S_M(M) + S \partial_M \nu = \nu S_M(M)$. Hence all the analyses above are valid.
\end{remark}
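Newton's power-sum formulas \cref{eq:newtonsum}, which did much of the work in the proof above, can also be verified numerically in the sign convention of this paper ($a_{2N+1}=-1$); the check below uses randomly chosen real roots.
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as npoly

N, rng = 2, np.random.default_rng(0)
lam = rng.normal(size=2 * N + 1)               # 2N+1 real roots
a = -npoly.polyfromroots(lam)                  # c(u) = -sum_k a_k u^k
p = np.array([np.sum(lam**k) for k in range(4 * N + 4)])

for j in range(-1, 2 * N + 1):                 # identity (newtonsum1)
    s = (2*N - j) * a[j + 1] + sum(a[j + k + 1] * p[k]
                                   for k in range(1, 2*N - j + 1))
    assert abs(s) < 1e-8
for j in range(-4, -1):                        # identity (newtonsum2)
    s = sum(a[j + k + 1] * p[k] for k in range(-1 - j, 2*N - j + 1))
    assert abs(s) < 1e-8
\end{verbatim}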
\subsection{Shakhov model}
\label{subsec:coupASseqmom}
This subsection is devoted to the EQMOM moment system of the 1-D Boltzmann equation with Shakhov source term \cref{eq:Shakhov}. We first introduce the notation
\begin{displaymath}
\Delta_j^S = \Delta_j^S (U,\theta,q) = \frac{1}{\rho} \int_{\mathbb{R}} \xi^j f_S d \xi
\end{displaymath}
with the equilibrium distribution $f_S = f_S(t,x,\xi)$ defined in \cref{eq:fS}. In this situation the moment system has the form \cref{eq:EQMOMboltz} but the source term is different:
\begin{equation*}
S^{Sh} = S^{Sh}(M) = \rho (\Delta_0^S, \Delta_1^S, \dots, \Delta_{2N}^S)^{T} - M.
\end{equation*}
We need to investigate whether this source term satisfies the structural stability condition (i) \& (iii). For this purpose, some basic properties of $\Delta_j^S$ are required. A direct calculation shows that $\Delta_0^S = \Delta_0 = 1$, $\Delta_1^S = \Delta_1 = U$ and $\Delta_2^S = \Delta_2 = U^2+\theta$. Moreover, for $j \ge 3$ we have
\begin{equation} \label{eq:deltaS}
\begin{split}
\rho \Delta_j^S - \rho \Delta_j
&= \frac{q(1-Pr)}{3 \theta^2} \int_{\mathbb{R}} \xi^j (\xi-U) \left( \frac{(\xi-U)^2}{\theta}-3 \right) f_{eq} d\xi \\
&= \binom{j}{3} (1-Pr) (2q) \Delta_{j-3}
= \binom{j}{3} (1-Pr) (M_3 - \rho \Delta_3) \Delta_{j-3}.
\end{split}
\end{equation}
Here \cref{prop:basicDel}(a) is used for the integration and the last step is due to the definition of $q$.
As in \Cref{subsec:eqmomequistate}, the equilibrium state $W$ needs to be determined. From $S^{Sh}(M)=0$ we see that $\rho \Delta_j^S = M_j$ for $0 \le j \le 2N$.
With \cref{eq:deltaS}, the equation $\rho \Delta_3^S = M_3$ reduces to $Pr \, (M_3 - \rho \Delta_3) = 0$ and hence implies $M_3=\rho \Delta_3$ for $Pr \ne 0$.
Therefore, we have $\rho \Delta_j^S = \rho \Delta_j$ for any $0 \le j \le 2N$ and the equilibrium manifold $\mathcal{E}$ is determined by $M_j = \rho \Delta_j$ for any $0\le j \le 2N$. This is exactly the same as that of the BGK model, which has already been determined in \cref{thm:equisolu} to be $W \in \Omega_W^{eq}$.
At equilibrium, the Jacobian matrix of $S^{Sh}$ can be computed with \cref{eq:deltaS}:
\begin{equation} \label{eq:sourcejacs}
S_M^{Sh}: = \left. \frac{\partial S^{Sh}}{\partial M} \right|_{S^{Sh}(M)=0} =
\left( I_{2N+1} - (1-Pr) \sum_{i=3}^{2N} \binom{i}{3} \Delta_{i-3} E_{(i+1),4} \right) S_M,
\end{equation}
where $S_M$ is the Jacobian matrix \cref{eq:sourcejac} for the BGK model and $E_{ij}$ denotes the $(2N+1) \times (2N+1)$ matrix whose $(i,j)$ entry is one and all other entries are zero.
$S_M^{Sh}$ is diagonalizable by an invertible matrix $P^S$ such that $P^S S_M^{Sh} = -\diag(0,0,0,Pr,1,\dots,1) P^S$, and
\begin{equation} \label{eq:sourcediags}
\left(P^S \right)^{-1} =
P^{-1} + \sum_{i=4}^{2N} \binom{i}{3} \Delta_{i-3} E_{(i+1),4},
\end{equation}
where $P^{-1}$ is defined in \cref{eq:sourcediaginv}.
Hence the structural stability condition (i) is justified.
For Condition (iii), we take the same symmetrizer $A_0=L^{T} L$ as that for the BGK model. This is reasonable since the equilibrium state is the same. It then suffices to show that the first three columns of $L \left( P^S \right)^{-1}$ are orthogonal to its other columns in equilibrium.
From \cref{eq:sourcediags} we see that the only difference between $L \left( P^S \right)^{-1}$ and $LP^{-1}$ is the fourth column.
For $L \left( P^S \right)^{-1}$, its fourth column is a linear combination of the last $(2N-2)$ columns of $L$, while the last $(2N-2)$ columns of $LP^{-1}$ are exactly those of $L$.
Since the first three columns of $LP^{-1}$ are orthogonal to its other columns, the fourth column of $L \left( P^S \right)^{-1}$ is also orthogonal to its first three columns. This has validated Condition (iii).
In this way, we have the main result of this subsection:
\begin{theorem} \label{thm:eqmomshakstab}
For the 1-D Boltzmann equation with the Shakhov model, the EQMOM moment system satisfies the structural stability condition.
\end{theorem}
\section{Conclusions}
\label{sec:conclusions}
This paper presents a rigorous stability analysis of the quadrature-based moment methods (QBMM) for the Boltzmann equation.
To chart a road map for more general cases, only the spatially one-dimensional (1-D) Boltzmann equation with model collision terms (of BGK or Shakhov type) is considered here.
In the QBMM, the distribution function $f$ is approximated with a linear combination of $N$ ($N \ge 1$) $\delta$-functions with unknown centers or their Gaussian approximations with unknown variance and centers (named QMOM or EQMOM, respectively).
For QMOM, we show purely analytically that the resulting moment systems of first-order PDEs are not strongly hyperbolic for any $N$.
Furthermore, we prove that the moment systems produced by the Gaussian EQMOM are strictly hyperbolic, when the variance is positive, and preserve the dissipation property of the kinetic equation. As a step in the proof, we also determine the equilibrium manifold that lies on the boundary of the state space for the parameters $(w_i, u_i, \sigma^2)$ ($1\le i \le N$). These conclusions explain why the EQMOM gives reasonable numerical results while QMOM does not.
The proofs are quite technical and involve detailed analyses of the characteristic polynomial of the coefficient matrices. They offer a guideline to investigate the multidimensional cases with multiple nodes, which is underway.
\section{1-D Riemann problem}
In this section we report numerical experiments using the two-node QMOM and EQMOM to solve the 1-D Riemann problem with the BGK model.
Assume that the collision frequency is proportional to the density, i.e.\ $\nu = \kappa \rho$. Three values of $\kappa$ ($0$, $10$, and $\infty$) are considered.
When $\kappa=0$, $f$ is a traveling wave and the analytical solution is given in \cite{Chalons2017}. When $\kappa=\infty$, the analytical solution of the Euler equations can be found in \cite{Toro2009}. For $\kappa=10$, the direct simulation Monte Carlo (DSMC) result is used for comparison, with the BGK collision term simulated according to \cite{Macro2002}.
In the simulation, the initial values are \cite{Chalons2017}: $\rho(0,x)=1$ and $\theta(0,x) = \frac{1}{3}$ for any $x \in \mathbb{R}$; $U(0,x<0) = 1$ and $U(0,x>0)=-1$. The moment inversion algorithms and spatial fluxes are treated as in \cite{Chalons2017,Fox2008}. The 1-D computational domain of $-1<x<1$ is discretized into 1000 uniform cells.
The time step is chosen so that the CFL number is less than 0.5.
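For reproducibility, the initial data and the CFL-limited time step can be set up as in the following minimal Python sketch (the variable names are ours, and the flux evaluation itself is omitted since it follows \cite{Chalons2017,Fox2008}):
\begin{verbatim}
import numpy as np

nx, xL, xR, cfl = 1000, -1.0, 1.0, 0.5
dx = (xR - xL) / nx
x = xL + dx * (np.arange(nx) + 0.5)       # centres of 1000 uniform cells
rho = np.ones(nx)
theta = np.full(nx, 1.0 / 3.0)
U = np.where(x < 0.0, 1.0, -1.0)          # Riemann initial data

def time_step(max_speed):
    # keep the CFL number below 0.5, as in the experiments
    return cfl * dx / max_speed
\end{verbatim}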
The macroscopic parameters defined in \cref{eq:mo2macro} at $t=0.1$ are shown in \cref{fig:numfigkpa0,fig:numfigkpa10,fig:numfigkpainf} for the three values of $\kappa$. When the collision rate is zero or finite, QMOM results in a huge unphysical $\delta$-shock, whereas the EQMOM result is close to either the analytical solution or the DSMC results.
Such numerical results demonstrate again that EQMOM gives reasonable and much better results. This motivates us to study the two methods analytically.
\begin{figure}[htbp]
\centering
\includegraphics[width=5in]{1DRiemann-kpa0}
\caption{The macroscopic parameters at $t=0.1$ for $\kappa=0$.}
\label{fig:numfigkpa0}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=5in]{1DRiemann-kpa10}
\caption{The macroscopic parameters at $t=0.1$ for $\kappa=10$.}
\label{fig:numfigkpa10}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=5in]{1DRiemann-kpainf}
\caption{The macroscopic parameters at $t=0.1$ for $\kappa=\infty$.}
\label{fig:numfigkpainf}
\end{figure}
\bibliographystyle{siamplain}
\section{Introduction}
Understanding the efficiency of converting baryons into stars is a challenge in studies of galaxy formation and evolution.
The star-formation rate history (SFH) is well established from observations of star-forming galaxies across cosmic time at infrared, ultraviolet, submillimetre, and radio wavelengths \citep[][and references therein]{Madau2014}. The star-formation rate (SFR) density increased at high redshift, reached a peak at around $z \sim 2$ and decreased until today \citep[see][for a review]{Madau2014}. Which physical processes are driving this dramatic change and their relative importance represent two of the main unanswered questions of modern astrophysics. Whether this is due to a lack of cold gas supply, or a lower efficiency in converting the gas into stars, or to the presence of strong outflows preventing the infall of new cold material, is still debated \citep[e.g.][and references therein]{Madau2014, Katsianis2017}. Simple expectations would have the SFH mirror the cold gas evolution, as gas is being consumed by star formation \citep[e.g.][]{Putman2017, Driver2018}. The atomic gas density, $\Omega_{\rm HI}$, is the original reservoir of gas for star formation and is indeed well constrained locally and at $z > 2$. Most recent results, however, indicate a mild evolution in $\Omega_{\rm HI}$ with cosmic lookback time \citep[e.g.][]{Noterdaeme2009, Zafar2013, Rhee2018}. Neutral hydrogen provides the essential reservoir, but it has to cool and transform to the molecular phase in order to provide the necessary conditions for star formation. Further studies using damped Lyman~$\alpha$ absorbers as well as H\,{\sc i}~21~cm absorption traced with the Square Kilometre Array (SKA) pathfinder observations provide important clues on the physical state of the atomic gas and the neutral interstellar medium (ISM) physics \citep{Kanekar2014a, Allison2016}. However, a direct probe of the fuel for star formation has to come from measurements of the molecular phase of the gas.
In order to probe this essential phase of baryons over cosmic time, a number of deep cosmological surveys for CO emission lines have been conducted. The first study used the Institut de Radioastronomie Millim\'etrique (IRAM) Plateau de Bure Interferometer to perform molecular line scans in the Hubble Deep Field North and provided upper limits on the cosmic molecular gas mass density $\Omega(\text{CO})$ \citep{Decarli2014, Walter2014}. More recently, the ALMA Spectroscopic Survey in the Hubble Ultra Deep Field (ASPECS) provided the first measurements of $\Omega(\text{CO})$ at redshift $0 < z < 4.5$ \citep{Decarli2016, Decarli2019}. The CO Luminosity Density at High Redshift (COLDz) survey \citep{Riechers2018} undertaken with the VLA offers the first indications of the molecular mass density at high redshift ($z\sim$~2--3 and $z \sim$~5--7). An alternative approach using the dust mass as a tracer of the molecular gas mass is presented by \citet{Scoville2017}. The fact that multiple transitions of CO at different redshifts can be searched in a given observed frequency setting greatly increases the redshift path, and hence the searched volume. These emission-line surveys are especially sensitive to the high-mass end of the molecular gas mass function. However, such observations require large investments in telescope time, and since typically only a small contiguous area is covered, the results are prone to cosmic variance effects.
Four intervening molecular absorbers have been detected in targeted surveys of strongly lensed systems and galaxy merger pairs that were known to show H\,{\sc i}\ absorption \citep[e.g.][]{Wiklind1995, Kanekar2005, Wiklind2018, Combes2019}. Only the molecular absorber towards PKS 1830-211 was detected before any other lines were known \citep{Wiklind1996a}. In addition, associated molecular absorption lines have been found in three intermediate-redshift AGN \citep{Wiklind1994, Wiklind1996, Allison2018a}. Similar intrinsic absorption is observed more frequently in low-redshift AGN \citep[e.g.][]{Tremblay2018a, Maccagni2018, Rose2019} and in high-redshift submillimetre galaxies (SMGs) \citep[e.g.][]{George2014, Falgarone2017, Indriolo2018}.
\citet{deUgartePostigo2018} also reported CO absorption lines against Gamma Ray Bursts (GRBs) observed with Atacama Large Millimetre and Submillimetre Array (ALMA).
However, to measure the cosmic molecular gas mass density in an unbiased way, blind detections of intervening molecular absorbers are required.
Here, we present a complementary approach to probing the molecular phase of the gas and its evolution with cosmic time free from cosmic variance issues. In analogy with studies at optical wavelengths \citep[e.g.][]{Peroux2003, Zafar2013}, we use (sub)mm-bright ($\sim 0.1 - 3$~Jy) background sources to probe intervening molecular absorption lines. Moving from optical to mm wavelengths has the advantage that this study is not affected by dust attenuation in the quasar spectra which might be expected for molecular absorbers. Therefore, by choosing (sub)mm-bright background objects we are not biased against potentially dusty absorbers. Furthermore, tracing molecular absorption offers a measurement of the cosmic molecular gas density free from cosmic variance.
A similar ``blind'' study was performed at lower frequencies using the Green Bank Telescope \citep{Kanekar2014}. The authors surveyed a redshift path, defined as the sum of the redshift intervals covered by the individual spectra ($\Delta z = \sum_i (z_{\rm max} - z_{\rm min})$), of $\Delta z\sim 24$ at $0.81 < z < 1.91$ and did not detect any molecular absorbers with $N({\rm H}_2) \geq 3 \times 10^{21}\,{\rm cm}^{-2}$.
In the present work, we perform a ``blind'' search for CO absorbers against a large sample of (sub)mm bright background galaxies. These objects are all 880 ALMACAL targets observed up until December 2018. ALMACAL consists of observations of a large sample of bright, compact sources \citep[generally blazars, see][]{Bonato2018} which are used as calibrator sources for ALMA.
These calibrators are ideal targets for an unbiased search for CO absorbers for two main reasons. First, the total integration time spent on ALMACAL sources is $>1500$ hours, orders of magnitude more than what would be attainable in a targeted ALMA survey programme for intervening absorption lines. Secondly, since the calibrators are distributed all over the sky observable with ALMA, it is possible to quantify the effect of cosmic variance.
In addition, the sensitivity of the absorption survey is independent of redshift and solely relies on the brightness of the unrelated background sources. Using absorption lines we are able to reach low gas column densities, providing us with a more complete and unbiased (with respect to excitation conditions) view of the molecular gas content of the Universe over cosmic time.
There are also several caveats to this blind absorption line approach. For example, a single identification of an absorption line cannot be uniquely linked to a species, and hence not to the column density and the redshift of the absorber. However, since CO is a much stronger absorber than all other molecular species we can safely assume that any detected absorption line is tracing CO. Furthermore, unlike emission-line surveys in well-studied cosmological deep fields, our survey does not have the luxury of extensive ancillary data that can be used to identify source redshifts. Our sample of quasars is magnitude limited and therefore susceptible to effects of gravitational magnification. First, the probability of finding quasars with absorbers is increased by the flux boosting from gravitational lensing by the absorber. Second, the solid angle behind absorbers is gravitationally enlarged, diluting the flux of the background quasar. \citet{Menard2003} indeed find an excess of bright quasars with absorbers. Furthermore, ALMA calibrators are selected to be (sub)mm bright and therefore have redshifts of $z \leq 3$.
The paper is organised as follows: the ALMACAL survey including the optimised data reduction for this data-intensive project is presented in Sec.~\ref{SecDataRed}, in Sec.~\ref{SecBlindDetec} we describe the absorption line search as well as the derivation of the limits on the CO column density distribution function from our observations and the molecular gas column density distribution function from the IllustrisTNG simulation, in Sec.~\ref{SecDiscussion} we present our limits on the molecular gas mass density evolution as a function of cosmic time and in Sec.~\ref{SecConclusions} we summarize our conclusions. Throughout this paper we adopt a $\Lambda$CDM cosmological model with $H_0 = 70 {\rm km s^{-1} Mpc^{-1}}$, $\Omega_{\rm m} = 0.3$ and $\Omega_{\Lambda} = 0.7$.
\section{ALMACAL observations and data reduction}
\label{SecDataRed}
\begin{figure}
\centering
\begin{tikzpicture}[node distance=1.5cm,
every node/.style={fill=white, font=\sffamily}, align=center]
\large
\node (load) [loadData] {Raw calibrator data};
\node (extractSpectrum) [DataReduction, below of=load] {Extract XX and YY polarisation \\ spectrum in the $uv$-plane};
\node (sumSpectrum) [DataReduction, below of=extractSpectrum] {Sum the XX and YY \\ polarisation spectra};
\node (divideSpectra) [DataReduction, below of=sumSpectrum] {Calculate Cal1/Cal2};
\node (maskSpectrum) [DataReduction, below of=divideSpectra] {Mask edges and atmospheric lines \\ and discard bad data};
\node (lowPass) [DataReduction, below of=maskSpectrum] {Subtract low frequency signal};
\node (sourceFinding) [SourceFinding, below of=lowPass] {Absorption line finding};
\draw[->] (load) -- (extractSpectrum);
\draw[->] (extractSpectrum) -- (sumSpectrum);
\draw[->] (sumSpectrum) -- (divideSpectra);
\draw[->] (divideSpectra) -- (maskSpectrum);
\draw[->] (maskSpectrum) -- (lowPass);
\draw[->] (lowPass) -- (sourceFinding);
\end{tikzpicture}
\caption{A flowchart describing our methodology to efficiently process the large data volume of ALMACAL while maintaining the highest spectral resolution.}
\label{FigDataRedFlow}
\end{figure}
\begin{figure}
\includegraphics[width = \linewidth]{{exampleRatio}.png}
\caption{Example of the first data reduction step to construct the ratios of two calibrator spectra observed in the same ALMA execution block. The top/middle panels show the spectra of calibrator 1 and 2, respectively, with arbitrary flux units. The bottom panel shows the ratio of the spectra of calibrator 1 and calibrator 2. The green line represents the atmospheric model as described in section \ref{SecDataRed}. This data processing reduces atmospheric line signatures.}
\label{FigRatio}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{LowPassFilterExample.png}
\caption{Illustration of the application of a low-pass filter in the data processing. The orange spectrum shows the processed data before filtering, the green curve represents the same data after applying a low-pass filter maintaining features that are on scales larger than 200~km~s$^{-1}$. In blue, we show the resulting spectrum obtained by dividing the original spectrum by the low-pass curve. The resulting flat spectrum is then used as an input to the absorption line finder.}
\label{FigFilter}
\end{figure}
We extract from the ALMA archive all phase, amplitude and bandpass calibrator data from PI observations from Cycles 1 to 6, taken before the 4$^{\rm th}$ December 2018. We only consider data taken with the ALMA 12-m array. This amounts to observations of 880 calibrators. To determine the redshift of the calibrators, we use the compilation of redshifts from the database presented by \citet{Bonato2018}, combined with the updated redshift estimates of the Australian Telescope 20 GHz survey \citep[AT20G;][]{Ekers2007} sources (\citealt{Mahony2011}; E.~Mahony, priv.\ comm.). For the remaining calibrators, we perform an additional query to the Simbad \citep{Wenger2000} and NED\footnote{The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.} databases. We note, however, that the accuracy of these redshifts might be limited. This results in redshift measurements of 622 calibrators. We test whether our samples of calibrators with and without redshift information are drawn from the same population of {\it WISE} colours (Band1 - Band4) and find that, based on a Kolmogorov--Smirnov test, we have to reject this hypothesis with a 96.4\% confidence level. However, we perform the line search on all quasar spectra irrespective of the availability of the redshift and therefore do not introduce a bias in our sample of calibrators. The line identification of absorption signals towards background quasars without redshift information is not straightforward, since without a known redshift the absorption line could be either intervening or associated with the background source. For such cases it would be important to obtain a redshift of the quasar using follow-up observations. Thus far, this is a hypothetical problem since we do not find any extragalactic absorption lines towards background quasars at unknown redshifts. In the following we discuss only the spectra of quasars with known redshifts.
We devise a new, optimised data processing strategy to handle the large data volume, comprising several tens of thousands of spectra, while maintaining the highest spectral resolution. This is necessary to keep the data volume manageable. For reference, we store in our ALMACAL archive fully-calibrated ms-files at a reduced spectral resolution, which already amounts to more than 26 Tb of data. The reduced channel width of 15.625~MHz (several tens of km~s$^{-1}$ at typical observing frequencies) is too coarse to study absorption lines that are expected to be narrower than $40$~km~s$^{-1}$ \citep{Wiklind2018}. A schematic view of the data reduction chain is illustrated by Fig.~\ref{FigDataRedFlow}.
To this end, the spectrum of the calibrator is extracted directly from the $uv$ data, by fitting a point source model at the phase centre. For technical reasons, we extract the XX and YY polarisation data separately and add those in quadrature to obtain Stokes I spectra. We choose to only consider dual-polarisation mode scans to keep the data retrieval simple and uniform; full-polarisation data represent only a small fraction of the total ALMA archival data. For each calibrator observation, each spectral window is treated individually, resulting in a total of 28,644 spectra in our database. To remove unwanted structures from the spectra, we apply a bandpass correction by taking ratios of the spectra of pairs of calibrators from the same execution block.
This procedure also removes some of the atmospheric absorption line signatures imprinted on the spectra. An example of this procedure is shown in Fig.~\ref{FigRatio}. For the vast majority of the calibrator observations, this simple algorithm results in flat spectra, apart from those narrow spectral regions that correspond to strong atmospheric absorption. If more than two calibrators were used in one observation, all possible combinations of calibrator spectra pairs are used to produce bandpass-calibrated spectra.
This approach allows us to confirm a potential detection identified in one ratio-ed spectrum using a second ratio.
For further processing of the spectra, we mask 5 per cent of the channels on each end of the spectrum to remove edge effects. These edge channels are often strongly affected by non-flat bandpass effects. Furthermore, we mask a 0.2~GHz wide window centred on the central frequencies of the strongest H$_2$O, O$_2$ and O$_3$ atmospheric absorption lines identified from the ALMA atmosphere model provided by Juan Pardo\footnote{\url{https://almascience.nrao.edu/about-alma/atmosphere-model}}. Spectra covering more than one atmospheric transition are not further considered. Finally, contiguous parts of the spectra narrower than 15 per cent of the total spectrum bandwidth are discarded to ensure a possible detection of the full absorption line and the continuum.
Despite the success of this simple algorithm, we observe that on several occasions, a second-order correction of the bandpass is required to remove all unwanted signal. To this end, we use a Butterworth low-pass filter developed as a maximally flat low-pass filter for signal processing. For each spectrum, we create a template of the spectrum including only structures wider than $200$~km~s$^{-1}$. The original spectrum is divided by this template of its low frequency shape. An example of this procedure is shown in Fig.~\ref{FigFilter}.
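A minimal sketch of this second-order correction, assuming a regularly gridded spectrum with known channel width in km~s$^{-1}$ (the filter order and the forward--backward filtering via \texttt{filtfilt} are our illustrative choices, not necessarily those of the pipeline):
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

def flatten(flux, chan_kms, scale_kms=200.0, order=3):
    """Divide out bandpass structure broader than `scale_kms` (km/s)."""
    # cutoff in cycles/channel, normalised to the Nyquist rate of 0.5
    wn = min((chan_kms / scale_kms) / 0.5, 0.99)
    b, a = butter(order, wn, btype="low")
    template = filtfilt(b, a, flux)   # low-frequency shape of the spectrum
    return flux / template
\end{verbatim}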
We expect molecular absorption lines to be narrow ($<100$~km~s$^{-1}$) and moreover, the quality of the spectra does not allow us to search for wider spectral lines since they would be indistinguishable from instrumental artefacts and imperfect bandpass calibration.
\section{Analysis and Results}
\label{SecBlindDetec}
\begin{figure*}
\includegraphics[width = 0.49\linewidth]{FrequencyDistibutionAbsorption.pdf}
\includegraphics[width = 0.49\linewidth]{VelResDistibutionAbsorption.pdf}
\caption{Key quantities of the ALMACAL spectra. {\it Left:} Histogram of the observed frequencies with the ALMA observing bands overlaid. Most of the observations are taken in ALMA bands 3, 6, and 7. {\it Right:} Distribution of the velocity resolution of the ALMACAL spectra as a histogram and a cumulative plot. Half of the spectra have a resolution higher than 10 km s$^{-1}$. \label{FigDataDescr}}
\end{figure*}
\subsection{Blind Search for Intervening Absorbers}
\begin{figure*}
\includegraphics[width = 0.49\linewidth]{ExampleDetection.png}
\includegraphics[width = 0.49\linewidth]{CO1-0absJ1415+1322.png}
\caption{Examples of detected absorption lines. Left: A Galactic absorption line detection in the spectrum of J1744-3116. The absorption line is $^{13}$CO(2-1) at 220.39 GHz that arises in the ISM of the Milky Way Galaxy. Right: Associated $^{12}$CO(1-0) absorption in the known molecular absorber J1415+1320 at $z = 0.24671$. While we find multiple Galactic absorption lines, no extragalactic intervening molecular absorbers are detected.\label{FigDetection}}
\end{figure*}
After the data processing described in the previous section and discarding spectra based on the criteria described above, we are left with 28,644 flattened spectra, cleaned from any unwanted atmospheric or instrumental effects, in which we search for absorption lines of astrophysical origin. The key properties of the data including observed frequencies and velocity resolution are shown in Fig.~\ref{FigDataDescr}. We note that the spectra have varying bandwidth and spectral resolution, ranging from 0.03~MHz to 2~GHz, and from 256 to 3840 channels, respectively. We devise a search algorithm based on a signal-to-noise threshold of $5\sigma$, and apply this algorithm to both the ratios and the inverted ratios to search for absorption in both calibrator spectra. Since the spectra show no significant bandpass variations based on manual inspection, we can apply a global $\sigma$-threshold for each individual spectrum. Before running the finder algorithm, we smooth each spectrum to increase the signal-to-noise ratio. We use a range of smoothing kernels between 10 and 190~\mbox{$\rm km\, s^{-1}$}~in steps of 10~\mbox{$\rm km\, s^{-1}$}. Furthermore, we only record detections if the signal is significant in two consecutive channels.
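The essence of the finder can be sketched as follows (a simplified illustration; the actual pipeline estimates the noise per spectrum and records the smoothing kernel of each candidate):
\begin{verbatim}
import numpy as np

def find_candidates(flux, chan_kms, nsigma=5.0):
    """Flag >= nsigma dips present in two consecutive channels, for
    boxcar kernels of 10-190 km/s (simplified sketch of the search)."""
    hits = []
    for width in range(10, 200, 10):
        n = max(int(round(width / chan_kms)), 1)
        smoothed = np.convolve(flux, np.ones(n) / n, mode="same")
        depth = 1.0 - smoothed        # absorption depth below unit continuum
        snr = depth / np.std(depth)
        flag = snr > nsigma
        hits += [(i, width) for i in np.flatnonzero(flag[:-1] & flag[1:])]
    return hits
\end{verbatim}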
From this initial list of candidates, we remove all detections which occur within the lowest or the highest 5 per cent of channels to remove absorption line candidates for which we do not see the continuum on both sides. Furthermore, we discard detections that lie within 200~\mbox{$\rm km\, s^{-1}$}\ of the velocities of the centres of known Galactic CO lines. We apply an additional manual cleaning of candidate detections that can be identified as obvious electronic artefacts (such as strong periodic signals). For the remaining candidates, we identify atmospheric and Galactic transitions by cross-matching the detected frequencies observed in multiple sight lines. Finally, we match the candidate list with a list of frequencies corresponding to rare molecular species from SPLATALOGUE \citep{Remijan2007} to filter out remaining absorption lines of Galactic origin. Examples of a detection of Galactic absorption and associated absorption are shown in Fig.~\ref{FigDetection}. The Galactic absorption lines will be the subject of a forthcoming paper. Additionally, we compare the redshift of the lowest possible CO transition with the calibrator redshift and exclude implausible lines (i.e. the redshift of the absorber would be higher than the redshift of the background quasar). After performing these checks, we are left with one significant detection of an extragalactic molecular absorption line shown in Fig.~\ref{FigDetection}, which we identify to be intrinsic CO(1-0) absorption in the spectrum of the background calibrator J1415+1320 ($z = 0.2467$) \citep{Wiklind1994}. This detection validates the robustness of our finding algorithm.
\subsection{The Column Density Distribution Function Based on Intervening Absorbers}
\label{SecAnalysis}
We calculate the column density distribution function from the sensitivity limits we reach in the calibrator spectra. To illustrate the potential of this method we derive predictions of the column density distribution function from the IllustrisTNG100 cosmological hydrodynamical simulation \citep{Pillepich2018, Naiman2018, Nelson2018b, Marinacci2018, Springel2018}.
\begin{figure*}
\includegraphics[width = 0.49\linewidth]{COcolumnDensityDistributionFunctionZwaanAndBurghLowZInH2NormalBins.pdf}
\includegraphics[width = 0.49\linewidth]{COcolumnDensityDistributionFunctionZwaanAndBurghHighZInH2NormalBins.pdf}
\caption{CO column density distribution functions in the two redshift bins. The column densities are expressed in molecules cm$^{-2}$. The arrows indicate the upper limits from our ``blind'' CO absorber survey within $\Delta N = 1$~dex. The left panel corresponds to $z<0.5$ and the right panel to $z>0.5$. Light coloured limits reflect the uncertainty introduced by the CO-to-H$_2$ column density conversion factor. The blue line is the H$_2$ column density distribution function at $z=0$ based on CO emission line observations \citep{Zwaan2006}. The brown shaded region marks the predictions based on IllustrisTNG100 results with a variation of post processing recipes to illustrate the uncertainties (see Sec.~\ref{SecTNG100} for details). The top-axis shows the fiducial CO-to-H$_2$ conversion from \citet{Burgh2007}.}
\label{FigColDensDist}
\end{figure*}
\begin{table}
\caption{Redshift path surveyed, $\Delta z$, and comoving pathlength, $\Delta X$, for each CO transition in two distinct redshift ranges, $z < 0.5$ and $z > 0.5$. The cumulative redshift path surveyed, $\Delta z$, is 182.2 for CO transitions between $J = 1-0$ and $J = 5-4$.}
\label{TabLimColumnDensLow}
\centering
\begin{tabular}{l l l l l l l}
\hline \hline
CO & Redshift & $\Delta z$ & $\Delta X$ \\
transition&range & &\\
\hline
CO(1--0) & $<0.5$ & 48.4 & 61.2 \\
CO(2--1) & $<0.5$ & 13.4 & 16.3 \\
CO(3--2) & $<0.5$ & 20.4 & 28.6 \\
CO(4--3) & $<0.5$ & 9.8 & 14.6 \\
CO(5--4) & $<0.5$ & 1.4 & 2.1 \\
\hline
CO(1--0) & $>0.5$ & 0.0 & 0.0 \\
CO(2--1) & $>0.5$ & 34.1 & 80.8 \\
CO(3--2) & $>0.5$ & 18.5 & 42.0 \\
CO(4--3) & $>0.5$ & 18.8 & 41.5 \\
CO(5--4) & $>0.5$ & 17.5 & 40.0 \\
\hline \hline
\end{tabular}
\end{table}
\begin{figure*}
\centering
\includegraphics[width = 0.49\linewidth]{H2CumulativeNumAbsUpdateLowz.pdf}
\includegraphics[width = 0.49\linewidth]{H2CumulativeNumAbsUpdateHighz.pdf}
\caption{Cumulative number of molecular absorbers per comoving path length interval $\Delta X$ with a column density greater than $N$ (notations same as in Fig.~\ref{FigColDensDist}). This presentation of the column density distribution function is independent of the choice of a bin size, $\Delta N$. We use this bin-free representation to calculate limits on the molecular gas densities with redefined redshift bins by scaling of the functional form from \citet{Zwaan2006} to our upper limits (see Sec.~\ref{SecRhoH2} for details).}
\label{FigCumNum}
\end{figure*}
We first calculate the redshift path probed by the 28,644 spectra included in our survey. For each spectrum, we compute the redshift path observed \citep[see e.g.][]{Zafar2013} given the observed frequencies and assuming CO transitions from $J = 1-0$ up to $J = 5-4$. The frequency coverage of the fully reduced and masked spectra is used for this calculation. The maximum probed redshift in each spectrum is set by the redshift of the calibrator.
The cumulative redshift path surveyed, $\Delta z$, is 182.2 for CO transitions between $J = 1-0$ and $J = 5-4$. We further split the sample in two redshift ranges, $z < 0.5$ and $0.5 < z < 1.7$, with mean redshifts of $z = 0.199$ and $z = 0.839$. The two subsamples cover approximately the same pathlength of $\Delta z = 93.3$ at $z < 0.5$ and $\Delta z = 88.9$ at $z>0.5$. Details of the redshift paths for each CO transition in the two sub-samples are listed in Table~\ref{TabLimColumnDensLow}.
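For reference, the redshift path of a single spectral window can be computed as in the sketch below (CO rest frequencies in GHz; the function is an illustrative reimplementation, not the survey code itself):
\begin{verbatim}
CO_REST_GHZ = {1: 115.271, 2: 230.538, 3: 345.796, 4: 461.041, 5: 576.268}

def redshift_path(f_lo_ghz, f_hi_ghz, z_qso):
    """Sum of redshift intervals probed by one window for CO(1-0)..CO(5-4)."""
    dz = 0.0
    for f_rest in CO_REST_GHZ.values():
        z_max = min(f_rest / f_lo_ghz - 1.0, z_qso)  # capped at the quasar
        z_min = max(f_rest / f_hi_ghz - 1.0, 0.0)
        dz += max(z_max - z_min, 0.0)
    return dz
\end{verbatim}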
We then calculate the limiting CO column densities probed in our survey following \citet{Mangum2015}. We assume an excitation temperature equal to the CMB temperature at the redshift probed with the spectrum, because this is the lowest possible temperature. The physical conditions of the molecular absorbing gas in the galaxy lensing the quasar PKS1830--211 were investigated by \citet{Muller2013}. They found that for polar molecules, the excitation temperature is close to that of the CMB at the corresponding redshift. A molecule like CO, on the other hand, is easier to excite due to its low electric dipole moment, and in general, we would not be able to constrain $T_{\rm ex}$ for CO to better than $T_{\rm cmb} < T_{\rm ex} < T_{\rm kin}$ without constraints from additional lines/species. Since we have no detections, we perform the calculation using the $5 \sigma$ level from each spectrum as the detection threshold and an expected FWHM of the absorption line of $40$~km~s$^{-1}$ \citep{Wiklind2018}. The CO column density limit is converted into an H$_2$ column density limit using a mean column density ratio of $N({\rm CO})/ N(\mbox{H}_2) = 3 \times 10^{-6}$ \citep{Burgh2007}. In order to bound the large uncertainty on this conversion factor, we also present limits derived with ratio values of $10^{-5}$ and $10^{-7}$. The column density limits from non-detections are calculated for each observation using the corresponding frequency coverage and rms. We note that the column density ratio $N({\rm CO})/ N(\mbox{H}_2)$ is not constant over a large range of H$_2$ column densities \citep{Balashev2017}. However, with the currently available data this is challenging to quantify.
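The conversion and its bracketing values enter the analysis as a simple rescaling (fiducial ratio from \citealt{Burgh2007}; the helper below is purely illustrative):
\begin{verbatim}
N_CO_OVER_N_H2 = {"fiducial": 3e-6, "upper": 1e-5, "lower": 1e-7}

def n_h2_limit(n_co_limit, ratio=N_CO_OVER_N_H2["fiducial"]):
    """H2 column density limit implied by a CO column density limit (cm^-2)."""
    return n_co_limit / ratio

print(f"{n_h2_limit(1e11):.1e}")  # ~3.3e16 cm^-2 for N(CO) = 1e11 cm^-2
\end{verbatim}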
Next, we estimate the $5 \sigma$ limits on the column density distribution function following the definition \citep{Carswell1984}:
\begin{equation}
f(N(\mbox{H}_2),X) < \frac{1}{\Delta N(\mbox{H}_2)\, \Delta X},
\end{equation}
where the number of absorbers detected within the column density range $\Delta N(\text{H}_2)$ is less than one. $\Delta X$ is the comoving pathlength for the specific column density under consideration. The comoving pathlength ensures that for a constant physical size and comoving number density, the absorbers have a constant $f(N(\text{H}_2), X) $ \mbox{\citep{Bahcall1969}}. The comoving pathlength of a single sightline, $\Delta X_i$, is defined as follows:
\begin{equation}
\text{d} X = \frac{H_0}{H(z)} (1 + z)^2\, \text{d} z,
\end{equation}
\begin{equation}
\Delta X_i = \int_{z_{\rm min}}^{z_{\rm max}} \text{d} X = \int_{z_{\rm min}}^{z_{\rm max}} \frac{(1 + z)^2}{\sqrt{ \Omega_{\Lambda} + \Omega_{\rm M} \times (1 + z)^3}} \text{d} z.
\end{equation}
The limiting column densities and covered path length are then combined for the whole survey.
The non-detections from our survey translate to upper limits on the column density distribution function. However, in the definition of $f(N(\text{H}_2), X)$ the choice of the bin size influences the values of the $f(N(\text{H}_2), X)$ upper limits in the case of non-detections. Here, we use a bin width of $\Delta N = 1$~dex, as is common practice in H\,{\sc i}\ absorption line studies \citep[e.g.][]{Peroux2003}. The resulting upper limits on the column density distribution function are shown in Fig.~\ref{FigColDensDist} for the two redshift ranges.
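A compact sketch of these two steps -- the comoving pathlength integral and the resulting upper limit on $f(N(\text{H}_2),X)$ for zero detections in a 1-dex bin -- with the cosmological parameters adopted in this paper:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

OMEGA_M, OMEGA_L = 0.3, 0.7

def delta_x(z_min, z_max):
    """Comoving (absorption) pathlength of one sightline."""
    g = lambda z: (1 + z)**2 / np.sqrt(OMEGA_L + OMEGA_M * (1 + z)**3)
    return quad(g, z_min, z_max)[0]

def f_upper_limit(log_n_centre, total_delta_x, dex=1.0):
    """Upper limit on f(N, X) when < 1 absorber is found in the bin."""
    delta_n = 10**(log_n_centre + dex / 2) - 10**(log_n_centre - dex / 2)
    return 1.0 / (delta_n * total_delta_x)

print(delta_x(0.0, 0.5))           # Delta X of a single z < 0.5 sightline
print(f_upper_limit(21.0, 123.0))  # limit for the z < 0.5 subsample
\end{verbatim}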
To remedy the dependence on the bin size, we also calculate the cumulative number of absorbers per $\Delta X$ \mbox{\citep{Peroux2003}} as a function of column density, which is independent of the binning choice (see Fig.~\ref{FigCumNum}). We also calculate the cumulative number of absorbers expected based on the results from BIMA SONG observations of local star-forming galaxies \mbox{\citep{Zwaan2006}} for comparison.
\subsection{Predicting the Column Density Distribution Function from IllustrisTNG}
\label{SecTNG100}
\begin{figure}
\centering
\includegraphics[width = 0.8\linewidth]{{TNG50-1.67.564218.MH2GK_popping}.pdf}
\caption{An example of a molecular gas disk in a $z=0.5$ galaxy (top panel: face-on, bottom panel: edge-on view) with \mbox{$M_{\star} = 10^{9.4} {\rm M}_{\sun}$} and \mbox{${\rm SFR} = 1.1 {\rm M}_{\sun} {\rm yr}^{-1}$} from post-processing of the IllustrisTNG100 simulation. The highest column densities are only observed in edge-on disks, while intermediate column densities are predicted out to radii of $\sim$10~kpc in other viewing directions.}
\label{FigTNGmolDisk}
\end{figure}
From a modelling point of view, the molecular phase of the cold gas is challenging to assess because of the complexity of the physics involved and because it requires sub-grid modelling to capture the unresolved physics. Semi-analytical techniques of pressure-based models \citep{Blitz2006, Gnedin2011, Krumholz2013} are used to split the cold hydrogen from hydrodynamical simulations (such as the EAGLE or IllustrisTNG) into its atomic and molecular components \citep{Obreschkow2009, Popping2014, Lagos2015, Chen2018a}.
Here we use the TNG100 volume of the IllustrisTNG simulations \citep{Pillepich2018, Naiman2018, Nelson2018b, Marinacci2018, Springel2018} through its publicly available data \citep{Nelson2018c} in order to compare our observations against the theoretical expectation for the $\mbox{H}_2$ column density distribution function. An example of the column density map in a typical galaxy from the simulations is shown in Fig.~\ref{FigTNGmolDisk}. We construct the column density distribution function at the mean redshifts of the two subsamples ($z=0.199, 0.839$) using the $\mbox{H}_2$ modelling methodology of \citet{Popping2019} (see also \citet{Stevens2019, Diemer2019} for assessments of the H\,{\sc i}~and $\mbox{H}_2$ outcomes of TNG) and the column density distribution function gridding procedure described in \citet{Nelson2019}. The column density is integrated over a path length of 10 cMpc $h^{-1}$. In order to assess the sensitivity of our result to various physical and numerical choices, we present a band which encompasses several different column density distribution function calculations, varying the $\mbox{H}_2$ model employed (three versions), the projection depth / effective path length (five values), the grid size for the computation of the column density (three values), and the assumptions on the $\mbox{H}_2$ contents of star-forming versus non-star-forming gas cells. In Fig.~\ref{FigCDDFzevoTNG}, we show the expected evolution of the column density distribution function with redshift. An increasing number of high column density absorbers is expected at high redshift. At the low column density end, on the other hand, the number of absorbers is almost constant between $z = 4$ and $z = 1$ and increases at $z < 1$. We also note that the high column densities observed by \citet{Zwaan2006} are not reproduced in the IllustrisTNG100 results. This may be due in part to the limited volume of the simulation, although this alone cannot explain the full discrepancy. Furthermore, the simulations predict more low column density molecular gas than the observations. The expected error range shown in Fig.~\ref{FigColDensDist} also applies to this plot, but is not shown for reasons of clarity. Additional uncertainties affecting the comparison between the simulations and observations are the assignment of a molecular gas fraction to the gas cells in post-processing as well as the CO-to-H$_2$ conversion factor used in the observations. Furthermore, \citet{Diemer2019} compare H$_2$ half-mass radii in IllustrisTNG, relative to stellar half-mass radii, to the EDGE-CALIFA survey (based on CO), finding that although both TNG and EDGE-CALIFA have a majority of galaxies with $R_{\rm half,H2}/R_{\rm half,*} \sim 1$, the median ratio in TNG is approximately 30\% larger, though with a large dependence on the invoked HI/H$_2$ model. However, this degree of difference in H$_2$ extents cannot fully explain the large H$_2$ column differences seen here.
The prediction of the column density distribution function and the cumulative number of absorbers are shown in Fig.~\ref{FigColDensDist} and \ref{FigCumNum}. We have conducted the same analysis on the results from the EAGLE simulation \citep{Schaye2015, Crain2015} and find that the qualitative expectations for the column density distribution function are in line with those from IllustrisTNG.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{zEvolTNG100.pdf}
\caption{Redshift evolution of the H$_2$ column density distribution function as predicted from the IllustrisTNG simulation (solid lines) and from observations at $z = 0$ by \citet{Zwaan2006} (dashed line) normalized by a power law function fitting the low column density end of the predictions. We find that the column density distribution functions determined from the post processing of IllustrisTNG results at different redshifts predict an increasing number of high column density absorbers at high redshift.}
\label{FigCDDFzevoTNG}
\end{figure}
\subsection{Cosmic Evolution of the Molecular Gas Mass Density}
\label{SecRhoH2}
\begin{figure*}
\centering
\includegraphics[width = 0.8\linewidth]{rhoCompASPECS_ASPECSlikeBinsCumulativeLimit.png}
\caption{Cosmic evolution of the molecular and atomic gas densities. For $\rho ( {\rm H}_2)$, our limits are consistent with the measurements at $z = 0$ of \citet{Keres2003, Zwaan2006, Boselli2014, Saintonge2017} and the results of \citet{Decarli2016, Decarli2019, Scoville2017}. The sensitivity to low column densities of the absorption line technique, combined with the ALMACAL survey being unaffected by cosmic variance, emphasizes the power of this complementary study to probe the cosmic evolution of the molecular gas mass density. A fit to $\rho ({\rm HI})$ observations is shown as a solid line \citep{Rhee2018}.}
\label{fig:rhoComp}
\end{figure*}
Finally, we calculate the molecular gas mass density $\rho ( {\rm H}_2)$ from the cumulative number of absorbers per $\Delta X$. We use the functional form of the cumulative number of absorbers per $\Delta X$ from \citet{Zwaan2006} at $z \sim 0$ as a proxy. We scale it to our upper limits and integrate the differential number of absorbers multiplied by the respective column density. \citet{Zwaan2006} found that the contribution of low column density absorbers with \mbox{$\log(N({\rm H}_2)) < 21$} to the total molecular gas mass at $z \sim 0$ is only 3 per cent. Therefore, we integrate only over column densities \mbox{$\log(N({\rm H}_2)) > 21$}. Since we aim at a comparison with other surveys, we define similar redshift bins to those introduced by the ASPECS survey \citep{Decarli2016, Decarli2019} for this calculation.
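In the notation used above, this amounts to evaluating the standard absorption-line relation (quoted here for clarity, see e.g. \citealt{Peroux2003}; $\mu$ denotes the mean molecular mass in units of $m_{\rm H}$ per $\mbox{H}_2$ column, and $N_{\rm min} = 10^{21}\,{\rm cm}^{-2}$ in our case):
\begin{equation*}
\rho({\rm H}_2) = \frac{\mu\, m_{\rm H} H_0}{c} \int_{N_{\rm min}}^{\infty} N(\mbox{H}_2)\, f(N(\mbox{H}_2),X)\, {\rm d}N(\mbox{H}_2).
\end{equation*}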
The resulting limits are shown in Fig.~\ref{fig:rhoComp}, where we also report measurements from the literature based on the same cosmology and CO-to-H$_2$ conversion factor.
\section{Discussion}
\label{SecDiscussion}
With our ``blind'' survey for intervening molecular absorbers we put significantly improved constraints on the column density distribution function of molecular gas beyond $z\sim 0$. We compare our upper limits with the measurements at $z \sim 0$ presented by \citet{Zwaan2006} in Fig.~\ref{FigColDensDist} and find that our limits at $0<z<0.5$ and $0.5 < z < 1.7$ are consistent with the column density distribution function measurement at $z = 0$. The depth of our data translates to five orders of magnitude lower column densities than probed by \citet{Zwaan2006}. In addition, the absorption technique with a sensitivity independent of redshift in principle allows us to measure the redshift evolution of the column density distribution function.
We calculate upper limits on the cross-section of molecular gas per galaxy based on the non-detections and the surveyed redshift path. We assume a \citet{Schechter1976} galaxy luminosity function and a uniform, spherically symmetric distribution of molecular gas, and follow the description in \citet{Peroux2005}.
We derive a maximum radius of 4.8~kpc at $z < 0.5$ and 4.6~kpc at $z > 0.5$. \citet{Zwaan2006} find a median impact parameter of 2.5~kpc for $N({\rm H_2}) > 10^{21} \;{\rm cm}^{-2}$, consistent with our upper limits. Our results provide statistical evidence that molecular gas around galaxies has a limited extent, well below the typical size of CGM regions. This does not, however, exclude the possibility that the CGM contains more clumpy molecular gas.
Compared to the predictions from IllustrisTNG from Sec.~\ref{SecTNG100} presented in Fig.~\ref{FigColDensDist}, our limits are already within $\sim$1~dex of the expected value of $f(N(\text{H}_2), X)$ at low column densities.
The sensitivity reached in our survey is comparable to the column densities predicted by the simulations.
However, uncertainties in this comparison are still large in both observations and simulations. The observations on the one hand involve a conversion from measured CO column densities to $\mbox{H}_2$ column densities. Cosmological simulations on the other hand are lacking the resolution and associated small-scale physics to follow molecular cloud formation and rely on sub-grid physics models. Furthermore, the uncertainty of the molecular gas fraction in gas cells assigned in the post processing and the too extended H$_2$ disk sizes as measured by their half mass radii in IllustrisTNG \citep{Diemer2019} make a direct comparison challenging. Improvements on both sides are necessary to further explore the molecular column density distribution.
Fig.~\ref{fig:rhoComp} shows the cosmic evolution of the cold gas in the Universe. Dedicated efforts to measure the cosmic evolution of the molecular gas mass density from deep CO emission line observations by the ASPECS and COLDz surveys have provided the first measurements of $\rho ( {\rm H}_2)$ over a large redshift range \citep{Decarli2016, Decarli2019, Riechers2018}. Uncertainties in the $\rho ( {\rm H}_2)$ measurements, such as those related to uncertain CO excitation, completeness errors, and redshift errors are discussed by the authors of these studies.
At least of similar importance for deep surveys with small fields of view, such as ASPECS and COLDz, are the effects of cosmic variance on $\rho (\mbox{H}_2)$ measurements. This has been shown to be particularly important at low redshift \citep{Popping2019}. A complementary approach using the dust emission yields comparable results on $\rho (\mbox{H}_2)$ \citep{Scoville2017}. In the absorption-based study presented in this paper, we provide new upper limits free from cosmic variance effects. Our limits are consistent with the measurements at $z = 0$ of \citet{Keres2003, Zwaan2006, Boselli2014, Saintonge2017} and supportive of the results of \citet{Decarli2016, Decarli2019, Scoville2017}.
A fit to $\rho ({\rm HI})$ observations is shown based on \citet{Rhee2018}. These results show that the amount of cold gas in its atomic form is only a few times higher than that in its molecular phase from $z \sim 3$ to $z \sim 0$, implying that the decrease of $\mbox{H}_2$ density is faster than that of H\,{\sc i}~towards late times. While the SFH evolves by a factor 20--30 from $z=2$ to the present day, $\rho ( {\rm H}_2)$ decreases by one order of magnitude in the same time lapse and $\rho ({\rm HI})$ by less than 15 per cent. These findings indicate that H$_2$ is consumed faster than H\,{\sc i}\ can replenish it, unless it is constantly fed. The dramatic decrease of the cosmic star-formation rate density might therefore arise from a shortfall of molecular gas supply. By contrast, the MUFASA simulation predicts a shallower evolution of the molecular gas mass density than indicated by the observations \citep{Dave2017}.
Future blind absorption line surveys will offer more stringent constraints on the evolution of the cosmic molecular gas mass density by either moving to higher redshifts, where more high column density absorbers are predicted per $dz$, or by increasing the surveyed redshift path. For our current survey, we would be sensitive to the measurements from ASPECS if we covered a 1.3 times larger comoving redshift path ($\Delta X \sim 290$). To put this into perspective, it is important to realize that the ALMACAL results presented in this paper are based on more observing time than the sum of all ALMA Large Programs from Cycles 4 to 7. ALMACAL is an ongoing survey, so more redshift pathlength is accumulated continuously. But even a modest increase by a factor of two will take several years of observing. Another significant improvement in the covered redshift pathlength would be achieved by increasing the instantaneous frequency coverage of ALMA observations from its current 8~GHz per polarisation to at least 16~GHz, as is recommended in the ALMA development roadmap \citep{Carpenter2019}. Apart from this technological improvement, an increase of the redshift path could be achieved by measuring more optical redshifts for ALMA calibrator sources, which is under way. However, the uncertainties introduced by lensing of the background quasar by the foreground absorber will remain a systematic issue.
\section{Summary and Conclusions}
\label{SecConclusions}
\begin{table}
\caption{Derived upper limits on the cosmic molecular gas mass density. \label{TabRhoH2Limits}}
\centering
\begin{tabular}{cl}
\hline
$z$ & $\rho({\rm H}_2) [\text{M}_{\sun} \text{Mpc}^{-3}]$\\
\hline
0.003 -- 0.369 & $< 10^{8.26}$ \\
0.2713 -- 0.6306 & $< 10^{8.32}$\\
0.6950 -- 1.1744 & $< 10^{8.39}$\\
1.006 -- 1.738 & $< 10^{8.21}$\\
\hline
\end{tabular}
\end{table}
We present constraints on the cosmic evolution of the molecular gas density of the Universe from a ``blind'' search for extragalactic intervening molecular absorbers using the ALMACAL survey. The novelty of the approach resides in {\em i)} its redshift-independent sensitivity, {\em ii)} its ability to reach low gas densities, and {\em iii)} the fact that it overcomes cosmic variance effects. Our survey is sensitive to column densities as low as $N({\rm CO}) \sim 10^{11}\,{\rm cm}^{-2}$ ($N(\mbox{H}_2) \sim 10^{16}\,{\rm cm}^{-2}$). This is five orders of magnitude lower than probed in previous surveys \citep{Zwaan2006, Kanekar2014}.
To keep the data reduction simple and uniform, we use a streamlined data processing method to handle the large data volume while maintaining the data at its highest spectral resolution. The resulting sample of 622 unique quasar spectra is searched ``blindly'' for CO absorption lines.
At $z<0.5$, we survey a total pathlength of $\Delta z = 93$ and a total comoving pathlength of $\Delta X = 123$. At $z>0.5$, $\Delta z = 89$ and $\Delta X = 205$. The large path length surveyed allows us to put constraints on the CO column density distribution functions at $z<0.5$ and $z>0.5$. While we detect multiple Galactic absorption lines and one known extragalactic intrinsic absorber, no extragalactic intervening molecular absorbers have been found.
The upper limits on the molecular mass density reported in this survey are presented in Table~\ref{TabRhoH2Limits}.
These upper limits are consistent with previous surveys. Together, these findings indicate that the dramatic decrease of the star-formation rate history might arise from a shortfall of molecular gas supply. Our limits add a constraint on the contribution from low column density molecular hydrogen. In addition, the new absorption line technique offers a characterization of cosmic variance issues possibly affecting emission surveys \citep{Popping2019}.
We present the theoretical estimates of the molecular gas column density distribution from post-processing of the IllustrisTNG results. These estimates are consistent with our observational upper limits.
However, both are subject to systematic uncertainties. Both a better understanding of the CO-to-H$_2$ conversion factor and advances in the modelling of molecular gas in cosmological simulations will decrease the uncertainties.
To put stronger constraints on the evolution of the molecular gas mass, a significant improvement on the redshift path covered per observation with ALMA is needed. This will occur naturally over time and will be accelerated by the proposed technological upgrades. Further improvement will result from the measurement of background quasar redshifts.
\section*{Acknowledgements}
The authors thank Ryan J. Cooke for useful discussions, Jonghwan Rhee for providing the fit to $\rho({\rm HI})$, and Sergei Balashev for discussions on the CO-to-H$_2$ column density conversion.
AK acknowledges support from the STFC grant ST/P000541/1 and Durham University. CP thanks the Alexander von Humboldt Foundation for the granting of a Bessel Research Award held at MPA. IRS acknowledges support from the ERC Advanced Grant DUSTYGAL (321334), a Royal Society/Wolfson Merit Award and STFC (ST/P000541/1). RD thanks the Alexander von Humboldt Foundation for support. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\bibliographystyle{mnras}
\section{Introduction}
Anomalous X-ray pulsars (AXP) belong to a class of rare objects closely concentrated along the
Galactic plane, known to emit pulsed X-rays for energies below 10 keV with pulse periods in the range $\sim$ 6 -
12 s and characteristic spin-down time scales of $\sim 10^3$ - $10^5$ year. Two, probably three, members
of this class are embedded in a shell-like supernova remnant \citep[SNR; see e.g.][]{gregory80,kriss85}.
The fact that the observed X-ray luminosity is much larger than the spin-down
power excludes an interpretation in which the (pulsed) X-ray emission originates from a spin-down
powered pulsar. On the other hand, the steady spin-down without a Doppler-modulated signature
and the lack of bright optical counterparts make an X-ray binary interpretation, in which mass
transfer (accretion) powers the high-energy emission of the system, very unlikely \citep[see][for a review
on AXPs]{mereghetti02}. Currently, models based on the decay of very strong magnetic fields
($10^{14}$-$10^{15}$ Gauss) - so called ``magnetar'' models \citep{thompson96} - seem to explain the
observed high-energy characteristics of AXPs at a satisfactory level. For instance, the recently detected bursts
from the AXPs 1E\, 1048.1\,-\,5937\ \citep{gavriil02b} and 1E\, 2259\,$+$\,586\ \citep{kaspi03} mimic the bursting behaviour of
Soft Gamma-ray Repeaters (SGR) for which the magnetar model was initially developed.
Also, the ``glitch'' phenomenon detected in the spin-down of some AXP members fits in this model \citep{kaspi00,kaspi03,morii05}.
These properties provide strong evidence that both AXPs and SGRs are members of the same source class.
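To make the energetics argument above concrete, the standard dipole spin-down estimates for representative AXP timing parameters read as follows (the fiducial moment of inertia of $10^{45}$ g cm$^2$ and the numerical values are our illustrative choices):
\begin{verbatim}
import math

I_NS = 1e45                 # neutron-star moment of inertia, g cm^2 (fiducial)
P, P_DOT = 6.0, 1e-11       # representative AXP period (s) and its derivative

e_dot = 4 * math.pi**2 * I_NS * P_DOT / P**3   # spin-down power, erg/s
b_dip = 3.2e19 * math.sqrt(P * P_DOT)          # surface dipole field, Gauss
tau_c = P / (2 * P_DOT) / 3.156e7              # characteristic age, yr

# ~2e33 erg/s, ~2.5e14 G, ~1e4 yr: far below typical AXP X-ray luminosities,
# in the magnetar field range, and within the quoted spin-down time scales
print(f"{e_dot:.1e} erg/s, {b_dip:.1e} G, {tau_c:.1e} yr")
\end{verbatim}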
The X-ray spectra of AXPs in the 0.5-10 keV band are very soft and can best be described by a black body plus a
power-law model. The softness of the spectra below 10 keV (power-law indices $2 < \Gamma < 4$, with $F_{\gamma}
\propto E^{-\Gamma}$) predicts non-detections for energies above 10 keV and thus explains the initial ignorance for
studying the spectral properties of AXPs at energies above 10 keV.
It was a great surprise that the high-resolution imaging instrument IBIS/ISGRI aboard ESA's INTEGRAL satellite
measured hard X-rays from the direction of three AXPs. Firstly, \citet{molkov04} reported
the discovery of an INTEGRAL source at the position of AXP 1E\, 1841-045\ in SNR Kes 73 for energies up to 120 keV
(60--120 keV: 7.5 $\sigma$). This was followed up by \citet{kuiper04}, who analysed
archival RXTE PCA and HEXTE data from monitoring observations spread over four years, to
prove that the hard X-ray emission comes from the AXP and not from the SNR. They discovered
non-thermal pulsed hard X-ray / soft $\gamma$-ray emission up to $\sim$ 150 keV with a spectrum with
power-law photon index of $\sim$ 0.94.
Secondly, \citet{revnivtsev04} published the INTEGRAL detection (18--60 keV: 6.5 $\sigma$) of AXP 1RXS\, J170849.0\,-\,400910\
(we use throughout this paper: 1RXS\, J1708\,-\,4009), also in a spatial analysis of ISGRI data. Similarly,
\citet{hartog04} reported the detection of AXP 4U\, 0142\,$+$\,61\ (50--100 keV: 6.1 $\sigma$) which exhibited a very hard
total spectrum above 20 keV. These three AXPs reach about the same flux level around 100 keV.
In this paper we present the results from follow-up studies using archival RXTE PCA and
HEXTE data of 1RXS\, J1708\,-\,4009, 4U\, 0142\,$+$\,61, 1E\, 2259\,$+$\,586\ and 1E\, 1048.1\,-\,5937\ aimed at studying their timing and spectral characteristics
above 10 keV. We also revisit the pulsed high-energy emission properties of 1E\, 1841-045\ above 10 keV using more
RXTE PCA data and now applying more optimized event selection criteria. Furthermore, we explored the INTEGRAL
database using public, private, and core program data to derive the total X-ray emission spectra of the
five AXPs mentioned above, and reanalyze archival data from COMPTEL \citep{schonfelder93}, delivering constraining
upper limits in the soft gamma-ray band 0.75-30 MeV. Finally, initial results will be shown from IBIS ISGRI timing
analysis studies of 1RXS\, J1708\,-\,4009\ and 1E\, 1841-045.
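The timing analyses referred to above all rest on folding barycentred event times with a phase-coherent ephemeris; a minimal sketch, truncating the Taylor expansion at $\dot\nu$ (which suffices for slowly varying AXP ephemerides), is:
\begin{verbatim}
import numpy as np

def fold_events(t, t0, nu, nu_dot=0.0, nbins=20):
    """Pulse profile from barycentred event times t (s) and
    an ephemeris (t0, nu, nu_dot)."""
    dt = t - t0
    phase = (nu * dt + 0.5 * nu_dot * dt**2) % 1.0
    profile, _ = np.histogram(phase, bins=nbins, range=(0.0, 1.0))
    return profile
\end{verbatim}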
\begin{table}[t]
\caption{List of RXTE observations of 1RXS\, J1708\,-\,4009, 4U\, 0142\,$+$\,61, 1E\, 2259\,$+$\,586\ and 1E\, 1048.1\,-\,5937\ used in
this study. For observations used in the reanalysis of 1E\, 1841-045\ see Sect. \ref{sect_pca_tm} and \citet{kuiper04}.\label{obs_table}}
{\footnotesize
\begin{center}
\begin{tabular}{cccr}
\hline
\textbf{Obs.} & \multicolumn{2}{c}{\textbf{Begin/End Date}} &
\textbf{Exp.$^{\dagger}$}\\
\textbf{id.} & \multicolumn{2}{c}{\textbf{(dd/mm/yyyy)}} & \textbf{(ks)}\\
\hline
\multicolumn{4}{l}{\textit{1RXS\, J1708\,-\,4009}}\\
30125 & 12-01-1998 & 08-01-1999 & 59.896 \\
40083 & 06-02-1999 & 11-03-2000 & 52.568 \\
50082 & 21-04-2000 & 12-05-2001 & 34.576 \\
60412 & 20-05-2001 & 23-05-2001 & 9.928 \\
60069 & 06-05-2001 & 20-02-2002 & 24.544 \\
70094 & 02-04-2002 & 20-03-2003 & 55.600 \\
80098 & 16-04-2003 & 26-10-2003 & 73.368 \\
All & 12-01-1998 & 26-10-2003 & 310.480 \\
\hline\hline
\multicolumn{4}{l}{\textit{4U\, 0142\,$+$\,61}}\\
10193 & 28-03-1996 & 29-03-1996 & 37.728 \\
10185 & 28-03-1996 & 28-03-1996 & 16.288 \\
20146 & 24-11-1996 & 13-12-1997 & 10.600 \\
30110 & 21-03-1998 & 21-03-1998 & 15.408 \\
50082 & 07-03-2000 & 10-02-2001 & 34.240 \\
60069 & 18-03-2001 & 08-01-2002 & 42.272 \\
70094 & 06-03-2002 & 26-12-2002 & 82.464 \\
80098 & 28-03-2003 & 18-09-2003 & 38.664 \\
80099 & 03-09-2003 & 09-09-2003 & 29.216 \\
All & 28-03-1996 & 18-09-2003 & 306.880 \\
\hline\hline
\multicolumn{4}{l}{\textit{1E\, 2259\,$+$\,586}}\\
10192 & 29-09-1996 & 30-09-1996 & 75.936 \\
20145 & 25-02-1997 & 25-03-1997 & 101.024 \\
20146 & 24-11-1996 & 13-12-1997 & 9.784 \\
30126 & 13-08-1998 & 02-12-1998 & 103.136 \\
40083 & 17-01-1999 & 01-03-2001 & 48.344 \\
40082 & 26-01-2000 & 27-03-2000 & 87.120 \\
50082 & 10-03-2000 & 02-03-2001 & 50.280 \\
60069 & 16-04-2001 & 07-02-2002 & 49.592 \\
70094 & 22-03-2002 & 15-02-2003 & 149.840 \\
80098 & 15-03-2003 & 28-10-2003 & 71.960 \\
All & 29-09-1996 & 28-10-2003 & 747.016 \\
\hline\hline
\multicolumn{4}{l}{\textit{1E\, 1048.1\,-\,5937}}\\
10192 & 29-07-1996 & 30-07-1996 & 72.344 \\
20146 & 24-11-1996 & 13-12-1997 & 12.616 \\
40083 & 23-01-1999 & 10-02-2000 & 49.184 \\
50082 & 11-03-2000 & 09-02-2001 & 40.992 \\
60069 & 06-03-2001 & 25-02-2002 & 139.776 \\
70094 & 12-03-2002 & 25-02-2003 & 133.224 \\
80098 & 12-03-2003 & 24-02-2004 & 136.376 \\
All & 29-07-1996 & 24-02-2004 & 584.512 \\
\hline
\multicolumn{4}{l}{$^{\dagger}$PCU-2 exposure after screening} \\
\end{tabular}
\end{center}}
\end{table}
\section{Instruments and observations}
\subsection{Rossi X-ray Timing Explorer}
In this study extensive use is made of data from monitoring observations
of AXPs with the two non-imaging X-ray instruments aboard RXTE, the Proportional Counter Array
(PCA; 2-60 keV) and the High Energy X-ray Timing Experiment (HEXTE; 15-250 keV). The PCA
\citep{jahoda96} consists of five collimated xenon proportional
counter units (PCUs) with a total effective area of $\sim 6500$ cm$^2$ over a $\sim 1\degr$
(FWHM) field of view. Each PCU has a front Propane anti-coincidence layer and three Xenon
layers which provide the basic scientific data, and is sensitive to
photons with energies in the range 2-60 keV. The energy resolution is about 18\% at 6 keV.
The HEXTE instrument \citep{rothschild98} consists of two independent detector
clusters, each containing four NaI(Tl)/CsI(Na) scintillation
detectors. The HEXTE detectors are mechanically collimated to a $\sim 1\degr$ (FWHM)
field of view and cover the 15-250 keV energy range with an energy resolution of
$\sim$ 15\% at 60 keV. The collecting area is 1400 cm$^2$ taking into account the
loss of the spectral capabilities of one of the detectors. The maximum time
resolution of the tagged events is $7.6\mu$s. In its default operation mode the
field of view of each cluster is switched on and off source to provide instantaneous
background measurements.
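The rocking strategy translates into a simple on--off background subtraction per energy channel, sketched below (schematic only; the actual HEXTE analysis weights the two off-source positions and applies deadtime corrections):
\begin{verbatim}
import numpy as np

def net_rate(on_counts, off_counts, t_on, t_off):
    """Background-subtracted source rate from on/off rocking data."""
    rate = on_counts / t_on - off_counts / t_off
    err = np.sqrt(on_counts / t_on**2 + off_counts / t_off**2)
    return rate, err
\end{verbatim}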
Due to the co-alignment of HEXTE and the PCA, they simultaneously observe the
sources. Table \ref{obs_table} lists the publicly available RXTE
observations used in this study. In the fourth column the PCU unit-2 screened
(see Sect \ref{rxte_timing}) exposure is given. A typical observation consists of
several sub-observations spaced more or less uniformly between the start and end date of
the observation.
\subsection{INTEGRAL}
The INTEGRAL spacecraft \citep{winkler03}, launched 17 October 2002, carries two main
$\gamma$-ray instruments: a high-angular-resolution imager IBIS \citep{ubertini03} and
a high-energy-resolution spectrometer SPI \citep{vedrenne03}. These instruments make use of
coded aperture masks enabling image reconstruction in the hard X-ray/soft $\gamma$-ray band.
In our study, guided by sensitivity considerations, we only used data recorded by the INTEGRAL
Soft Gamma-Ray Imager ISGRI \citep{lebrun03}, the upper detector system of IBIS, sensitive
to photons with energies in the range $\sim$20 keV -- 1 MeV. With an angular resolution of about $12\arcmin$
and a source location accuracy of better than $1\arcmin$ (for a $>10\sigma$ source) this instrument
is able to locate and separate high-energy sources in crowded fields within its $19\degr \times 19\degr$
field of view (50\% partially coded) with an unprecedented sensitivity ($\sim$ 960 cm$^2$ at 50 keV).
Its energy resolution of about 7\% at 100 keV is amply sufficient to determine the (continuum) spectral
properties of hard X-ray sources in the $\sim$ 20 - 300 keV energy band.
The timing accuracy of the ISGRI time stamps recorded on board is about $61\mu$s. The time
alignment between INTEGRAL and RXTE is better than $\sim 25\mu$s, verified using data
from simultaneous RXTE and INTEGRAL observations of the accretion-powered millisecond pulsar
IGR J00291+5934 \citep{falanga05}; for a calibration on the Crab pulsar, see also \citet{kuiper03}.
Given the fact that the accuracy of the RXTE clock in absolute time is about $2\mu$s \citep{rots04},
this implies that the INTEGRAL absolute timing is better than $\sim 27 \mu$s.
Data from regular INTEGRAL Crab monitoring observations show that the clock behaviour is stable, making
deep timing studies of weak pulsars possible.
In its default operation mode INTEGRAL observes the sky in a dither pattern
with $2\degr$ steps, which can be rectangular, e.g. a $5 \times 5$ dither pattern
with 25 grid points, or hexagonal with 7 grid points (target in the middle). Typical integration
times for each grid point (pointing/sub-observation) are in the range 1800 -
3600 seconds. This strategy drives the structure of the INTEGRAL data archive which is
organised in so-called science windows (Scw) per INTEGRAL orbital revolution (lasting for
about 3 days) containing the data from all instruments for a given pointing.
Most of the INTEGRAL data reduction in this study was performed with the Offline
Scientific Analysis (OSA) version 4.1 distributed by the INTEGRAL Science Data Centre
\citep[ISDC; see e.g.][]{courvoisier03}.
Table \ref{obsint_table} lists the INTEGRAL orbital revolution identifiers with corresponding start/end dates of the
observations used in the imaging/spectral analyses and timing analyses of the selected sample of persistent AXPs.
\begin{table}[t]
\caption{List of INTEGRAL observations, sorted by INTEGRAL orbital revolution (Rev.),
of the AXPs studied in this work. For more details on the executed INTEGRAL observations, see
{http://integral.esac.esa.int/} \label{obsint_table}}
{\footnotesize
\begin{center}
\begin{tabular}{llcc}
\hline
\textbf{Rev.} & \textbf{Rev.} & \multicolumn{2}{c}{\textbf{Begin/End Date}} \\
\textbf{begin} & \textbf{end} & \multicolumn{2}{c}{\textbf{(dd-mm-yyyy)}} \\
\hline\hline
\multicolumn{4}{c}{\textbf{Imaging analysis}}\\
\hline\hline
\multicolumn{4}{l}{\textit{1RXS\, J1708\,-\,4009}}\\
36 & 106 & 29-01-2003 & 29-08-2003 \\
\hline
\multicolumn{4}{l}{\textit{4U\, 0142\,$+$\,61}}\\
47 & 92 & 03-03-2003 & 16-07-2003 \\
142 & 148 & 12-12-2003 & 01-01-2004 \\
153 & 153 & 14-01-2004 & 15-01-2004 \\
161 & 162 & 07-02-2004 & 12-02-2004 \\
177 & 234 & 26-03-2004 & 12-09-2004 \\
238 & 238 & 25-09-2004 & 27-09-2004 \\
261 & 266 & 02-12-2004 & 19-12-2004 \\
268 & 269 & 24-12-2004 & 28-12-2004 \\
\hline
\multicolumn{4}{l}{\textit{1E\, 1841-045}}\\
49 & 253 & 10-03-2003 & 08-11-2004 \\
\hline
\multicolumn{4}{l}{\textit{1E\, 2259\,$+$\,586}}\\
142 & 148 & 12-12-2003 & 01-01-2004 \\
161 & 162 & 07-02-2004 & 12-02-2004 \\
\hline
\multicolumn{4}{l}{\textit{1E\, 1048.1\,-\,5937}}\\
36 & 217 & 29-01-2003 & 24-07-2004 \\
\hline\hline
\multicolumn{4}{c}{\textbf{Timing analysis}}\\
\hline\hline
\multicolumn{4}{l}{\textit{1RXS\, J1708\,-\,4009}}\\
36 & 120 & 29-01-2003 & 10-10-2003 \\
\hline
\multicolumn{4}{l}{\textit{1E\, 1841-045}}\\
49 & 123 & 10-03-2003 & 18-10-2003 \\
\hline
\end{tabular}
\end{center}}
\end{table}
\subsection{COMPTEL}
The Compton telescope COMPTEL aboard the Compton Gamma-Ray Observatory (CGRO, 1991 -- 2000)
was sensitive to $\gamma$-ray photons between 0.75 and 30 MeV, thereby covering the harder
$\gamma$-ray band adjacent to the INTEGRAL one. The very hard spectra that we measured with
IBIS ISGRI for some AXPs prompted us to revisit the COMPTEL data archive to search for
signals from AXPs. COMPTEL has an energy-dependent energy and angular resolution of 5\% -- 8\%
(FWHM) and $1\fdg7$ -- $4\fdg4$ (FWHM), respectively, and a wide circular field of view
covering $\sim$1 steradian. Imaging in its large field of view is possible with
a location accuracy (flux dependent) of the order of $0\fdg5$ -- $2\degr$.
For details on the experiment see \citet{schonfelder93}.
\section{Analysis methods}
\subsection{RXTE PCA/HEXTE timing}
\label{rxte_timing}
The PCA data from the observations listed in Table \ref{obs_table} have all been collected in {\em Goodxenon} or
{\em GoodxenonwithPropane} mode allowing high-time-resolution ($0.9\mu$s) studies in 256 spectral channels.
Because we are mainly interested in the medium/hard ($>2$ keV) X-ray timing properties of the selected sample of AXPs,
we ignored the events triggered in the Propane layers of each PCU.
Furthermore, contrary to the work presented in \citet{kuiper04}, we now used data from {\em all} three xenon layers of each
PCU, because employing data from the (deeper) middle and lower xenon layers considerably improves the signal-to-noise
ratio for energies above $\sim 10$ keV. This allows us to better characterize the hard ($>10$ keV) X-ray properties of AXPs.
Because the number of active PCUs at any time was changing, we treated the five PCUs constituting the PCA separately.
Good time intervals have been determined for each PCU by including only time periods when the PCU in question is on,
and during which the pointing direction is within $0\fdg 05$ from the target, the elevation angle above Earth's horizon
is greater than $5\degr$, at least 30 minutes have elapsed since the peak of a South-Atlantic-Anomaly passage, and a
low background level due to contaminating electrons is observed.
These good time intervals have subsequently been applied in the screening process to the data streams from each of
the PCUs (e.g. see Table \ref{obs_table} for the resulting screened exposure of PCU-2 per observation run).
Next, for each sub-observation the arrival times of the selected events (for each PCU) have been converted to
arrival times at the solar system barycenter (in TDB (=barycentric dynamical time) time scale; DE200 solar system ephemeris)
using the instantaneous spacecraft position and the celestial positions of the selected sample of AXPs. In this work we used
for the AXP positions those obtained by the Chandra X-ray observatory with a typical position accuracy of about $0\farcs5$ (for 1RXS\, J1708\,-\,4009,
see \citet{israel03}; for 4U\, 0142\,$+$\,61, see \citet{patel03}; for 1E\, 2259\,$+$\,586, see \citet{patel01}; for 1E\, 1048.1\,-\,5937, see \citet{wang02,israel02};
and finally for 1E\, 1841-045, see \citet{wachter04}).
These barycentered arrival times have been folded with available phase connected timing solutions (see for details the
relevant section on the timing characteristics for each AXP) using only the first three frequency coefficients (frequency,
first and second frequency time derivatives at a certain epoch) to obtain pulse phase distributions for selected energy windows.
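For reference, the pulse phase assigned to a barycentered arrival time $t$ follows from the standard Taylor expansion of the timing model around the epoch $t_0$,
\[
\phi(t) \;=\; \nu\,(t-t_0) \;+\; \tfrac{1}{2}\,\dot\nu\,(t-t_0)^2 \;+\; \tfrac{1}{6}\,\ddot\nu\,(t-t_0)^3 \quad ({\rm mod}\;1),
\]
with $\nu$, $\dot\nu$ and $\ddot\nu$ the three frequency coefficients used in the folding.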
Combining the phase distributions from the various PCUs for energies between $\sim 2$ and 10 keV, we obtained for each
sub-observation pulse profiles that deviate from uniformity at significances well above $5\sigma$ for each of the AXPs in our sample.
For the calculation of these significances we applied in this work the {\em bin-free} $Z_{n}^2$ statistic \citep{buccheri83}, which
behaves as a $\chi^2$ distribution with $2n$ degrees of freedom ($n$ being the number of harmonics).
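For $N$ event phases $\phi_j$ this statistic is computed directly, without binning, as
\[
Z_n^2 \;=\; \frac{2}{N} \sum_{k=1}^{n} \left[ \left( \sum_{j=1}^{N} \cos 2\pi k\phi_j \right)^{\!2} + \left( \sum_{j=1}^{N} \sin 2\pi k\phi_j \right)^{\!2}\, \right].
\]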
However, phase shifts between the sub-observations made a direct combination of the pulse profiles to obtain (very)
high-statistics {\em time-averaged} pulse profiles impossible. Therefore we correlated the pulse phase distribution of each
sub-observation with a chosen initial template and applied the measured phase shifts to obtain an aligned combination with much
higher statistics. The correlation is then repeated once, now using the aligned combination from the first pass instead of the
initially chosen template, to obtain the final summed profile \citep[see e.g.][for a similar iterative method applied
for PSR B0540-69]{deplaa03}.
The net result is a set of aligned high-statistics pulse profiles in 256 energy channels for each of the AXPs in our sample.
It should be noted that these profiles are {\em time-averaged} profiles ignoring possible temporal variations in shape and/or
pulsed fraction (see e.g. \citet{rea05} for 1RXS\, J1708\,-\,4009; \citet{rea06} for 4U\, 0142\,$+$\,61; and \citet{gavriil04b,tiengo05} for 1E\, 1048.1\,-\,5937).
HEXTE operated in its default rocking mode during the observations listed in Table \ref{obs_table}, allowing the
collection of real-time background data from two independent positions $\pm 1\fdg 5$ to either side of the
on-source position. For the timing analysis we selected only the on-source data.
Good time intervals have been determined using similar screening filters as used in the case of the PCA. The
selected on-source HEXTE event times have subsequently been barycentered and folded according to the available ephemerides
(see PCA part at the beginning of this section) using again only the first three frequency coefficients.
Applying, for each AXP in our sample, the phase shifts as derived from the contemporaneous PCA measurements to the HEXTE phase
distributions of each sub-observation we obtained the time averaged HEXTE pulse phase distributions in 256 spectral channels
(15 - 250 keV) for the combination of observations listed in Table \ref{obs_table}.
\subsection{RXTE PCA/HEXTE spectral analysis}
\label{rxte_spectral}
Because of the non-imaging nature of the two main RXTE instruments the (total) source-flux estimation relies on accurate
time-dependent instrumental and celestial background measurements. Although such models exist for the PCA, the complexity of
the near environment of the AXPs in our sample makes it very difficult to derive reliable unbiased total flux estimates with
the PCA in the 2-30 keV range. Specifically, all AXPs are located in a narrow strip along the Galactic plane where (large)
gradients in the Galactic ridge emission exist \citep{valinia98}; 1E\, 1841-045\ and 1E\, 2259\,$+$\,586\ are located in supernova remnants;
strong time-variable sources are present near 4U\, 0142\,$+$\,61\ (the Be X-ray binary RX J0146.9+6121), 1E\, 1048.1\,-\,5937\ (the enigmatic
$\eta$ Carinae) and 1RXS\, J1708\,-\,4009\ (e.g. the strong and highly variable X-ray binaries OAO 1657-415 and 4U 1700-377).
The rocking strategy applied during HEXTE operations in principle provides instantaneous background measurements, but also in this
case the gradient in the Galactic ridge emission and the possible presence of other (strong) sources in both the on and off-source
pointings (very serious for e.g. 1RXS\, J1708\,-\,4009\ and 4U\, 0142\,$+$\,61) can result in unreliable (background subtracted) total-source-flux measurements.
Therefore we abandoned, in contrast to the work presented in \citet{kuiper04} for 1E\, 1841-045\ in Kes 73, the derivation of the total-source
flux with the non-imaging PCA and HEXTE instruments.
In this work we concentrate on the derivation of the {\em time-averaged pulsed\/} PCA/HEXTE spectra of the AXPs in our sample.
This can be done by determining the number of pulsed counts in differential PCA/HEXTE energy bands by fitting a truncated Fourier
series \begin{equation} N(\phi) = a_0+\sum_{k=1}^{N} \left[ a_k \cos(2\pi k \phi) + b_k \sin(2\pi k \phi) \right] \label{eq_1} \end{equation}
with $\phi$ the pulse phase, to the measured pulse phase distributions $N(\phi)$. It turned out that 3 to 5 harmonics ($N=3$--$5$)
were sufficient to describe the measured distributions accurately for all energy intervals and AXPs in our sample. In the case of
the PCA we derived for each PCU the energy response matrix (energy redistribution including the sensitive area) for the combination
of observations listed in Table \ref{obs_table} and subsequently took the different PCU (screened) exposure times into account in the
construction of the weighted PCU-combined energy response.
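Schematically, denoting the screened exposure of PCU $u$ by $\tau_u$ and its response matrix by $R_u$, the combined response used in the fits is the exposure-weighted mean
\[
R_{\rm comb} \;=\; \frac{\sum_{u} \tau_u\, R_u}{\sum_{u} \tau_u}\,.
\]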
The pulsed (excess) counts per energy band are fitted in a procedure assuming either an absorbed power-law
($F_{\gamma}=K\cdot e^{-N_H\cdot \sigma} \cdot E_{\gamma}^{-\Gamma}$), or an absorbed double power-law
($F_{\gamma}=e^{-N_H \cdot \sigma}\cdot (K_1 \cdot E_{\gamma}^{-\Gamma_1} + K_2 \cdot E_{\gamma}^{-\Gamma_2})$) or an absorbed black body plus
power-law ($F_{\gamma}=e^{-N_H\cdot \sigma}\cdot (K \cdot E_{\gamma}^{-\Gamma}+ K_{bb} \cdot E_{\gamma}^2 / (\exp(E_{\gamma}/kT)-1))$) photon spectrum folded through the PCU-combined energy response.
In the spectral ``deconvolution" process of the HEXTE total pulsed counts in almost all cases\footnote{For HEXTE pointings with the
target (AXP) far off-axis e.g. 4U\, 0142\,$+$\,61\ during an observation of HMXB RX J0146.9+6121, we took the reduction in the effective sensitive area due
to the collimator response into account} the on-axis cluster A and B energy response matrices have been employed taking into account
the (slightly) different screened on-source exposure times for each cluster.
The exposure times have been corrected for the considerable deadtime effects.
\subsection{INTEGRAL timing analysis}
\label{sect_int_timing}
The first step in an INTEGRAL timing analysis is to obtain a set of science windows for which the angular distance between instrument
pointing direction and target is within $14\fdg5$ to ensure that (a part of) the detector plane is illuminated by the target. The resulting
list is further screened on erratic (ISGRI) count rate variations, indicative of particle effects due to Earth radiation belt
passages or solar flare activities. These science windows are excluded from further analysis.
Next, only events with rise times between 7 and 90 \citep[see][for definition]{lebrun03}, detected in {\em non-noisy}
ISGRI detector pixels which have an illumination factor of more than 25\% (i.e. at least 25\% of a detector pixel must have been illuminated
by the target) are passed for further analysis.
The on-board event time stamps are corrected for known instrumental (fixed), ground station and general time delays in the on-board time vs.
TT (Terrestrial Time) correlation \citep[see e.g.][]{walter03}.
The resulting event times in TT of the selected events are barycentered (using the JPL DE200 solar system ephemeris) adopting the Chandra X-ray
positions of the AXPs and the instantaneous INTEGRAL orbit information.
These barycentered events are finally folded using an appropriate timing model ($\nu,\dot\nu,\ddot\nu$ and the epoch)
to yield pulse phase distributions for different energy bands between 20 and 300 keV.
The timing models (phase connecting ephemerides) are based on publicly available RXTE monitoring data of AXPs.
Ephemerides have been generated for two AXPs in our sample, 1RXS\, J1708\,-\,4009\ and 1E\, 1841-045, because at the INTEGRAL epoch (MJD $> 52668$ and MJD $> 52698$ for
1RXS\, J1708\,-\,4009\ and 1E\, 1841-045, respectively) these were not available from existing literature (see Table \ref{eph_table}; all with very small RMS values,
required for extracting the weak pulsed signals).
\begin{table*}[t]
\caption{Phase coherent ephemerides for 1RXS\, J1708\,-\,4009\ and 1E\, 1841-045, derived from RXTE PCA monitoring data and valid for the
analyzed INTEGRAL observations.\label{eph_table}}
{\footnotesize
\begin{center}
\begin{tabular}{lccccccc}
\hline
AXP & Start & End & Epoch & $\nu$ & $\dot\nu$ & $\ddot\nu$
& RMS\\
& [MJD] & [MJD] & [MJD,TDB] & [Hz] & [$10^{-13}$ Hz/s] & [$10^{-22}$ Hz/s$^2$] & \\
\hline
\hline
1RXS\, J1708\,-\,4009 & 52590 & 52939 & 52590.0 & 0.09089812328(36) & -1.59836(30) & 0.00(Fixed) & 0.012\\
1E\, 1841-045 & 52726 & 52982 & 52726.0 & 0.0848972590(50) & -3.2217(85) & 4.22(83) & 0.015\\
\hline
\hline
\end{tabular}
\end{center}}
\end{table*}
\subsection{INTEGRAL spectral analysis}
The INTEGRAL IBIS ISGRI spectral analysis applied in our study is based on OSA 4.1 programs producing sky mosaics for the
combination of many science windows in different energy bands \citep{goldwurm03}. The resulting dead-time corrected ISGRI
source-count rates per energy band are referenced to count rates measured from the Crab in similar energy bands (Crab
calibration observations during INTEGRAL Revs. 102/103 were used).
For the total Crab photon emission spectrum we used the spectrum derived by \citet{willingale01} from XMM-Newton observations
of the Crab at energies between 0.3 and 10 keV: photon index $\Gamma=2.108(6)$ with a normalization at 1 keV of $9.59(5)$
ph/(cm$^2$s keV).
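Schematically, the count-rate-to-flux conversion in a given energy band $\Delta E$ thus amounts to
\[
F_{\rm AXP}(\Delta E) \;\simeq\; \frac{R_{\rm AXP}(\Delta E)}{R_{\rm Crab}(\Delta E)}\; F_{\rm Crab}(\Delta E),
\]
with $R$ the dead-time corrected ISGRI count rates and $F_{\rm Crab}$ the adopted Crab photon flux integrated over $\Delta E$.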
We verified the validity of the extrapolation of this spectrum to energies between 15 and 250 keV using RXTE HEXTE Crab data.
The HEXTE data utilized are from a long dedicated RXTE Crab observation (obs. id. 40805), performed between
17-31 March 1999 and 18-19 Dec. 1999, yielding dead time corrected cluster 0 and 1 exposures of 22.7 and 23.8 ks, respectively.
Applying an overall (energy-independent) normalization factor of 1.087, to be consistent with the BeppoSAX LECS/MECS/PDS spectrum \citep{kuiper01},
the derived 15-250 keV HEXTE Crab total spectrum connects smoothly to the 0.3-10 keV Crab total spectrum derived by \citet{willingale01}.
The HEXTE Crab total spectrum can best be described by a power-law model with an energy dependent photon index as given below:
\begin{equation} F_{\gamma} = 1.5703(14) \times (E_{\gamma}/0.06355)^{-2.097(2)-0.0082(16)\cdot \ln({E_{\gamma}/0.06355)}}
\label{eq_2} \end{equation}
In Eq.~\ref{eq_2}, $F_{\gamma}$ is expressed in ph/(cm$^2$\,s\,MeV) and $E_{\gamma}$ in MeV.
Up to $\sim 180-200$ keV the extrapolation of the spectrum given by \citet{willingale01} overlaps the HEXTE spectrum, and thus provides
a proper representation for the ISGRI energy range we are effectively dealing with.
Therefore, our method for deriving ISGRI spectra enables us to determine the 20-300 keV AXP spectra without detailed knowledge of the
instrument energy response, which at the time of our analysis still contained significant uncertainties.
Thus, we derive from these spatial analyses total source spectra (sum of pulsed and unpulsed components).
For the construction of spectra for the pulsed components we extracted count rates from the phase distributions
in the same way as was done for the RXTE light curves. For the conversion to pulsed fluxes, also in this case, the Crab pulsed
signal in the HEXTE 15 - 250 keV range (obs. id. 40805) was used as a reference source. This pulsed HEXTE spectrum is properly described by:
\begin{equation} F_{\gamma} = 0.4693(21) \times (E_{\gamma}/0.04844)^{-1.955(7)-0.0710(78)\cdot \ln({E_{\gamma}/0.04844)}}
\label{eq_3} \end{equation}
over the entire 15-250 keV energy range. This model has been verified on the pulsed ISGRI spectrum of PSR B1509-58, yielding
results consistent with those reported in \citet{kuiper99}.
\subsection{COMPTEL spatial analysis}
During the long mission lifetime of CGRO (April, 1991 -- May, 2000), most of the sky,
particularly the Galactic Plane, has been viewed with long exposures. We used the exposure
accumulated for each source over the total mission duration, amounting to 5.3 Ms for 1RXS\, J1708\,-\,4009,
4.2 Ms for 4U\, 0142\,$+$\,61, 4.9 Ms for 1E\, 2259\,$+$\,586, 4.8 Ms for 1E\, 1048.1\,-\,5937, and 4.2 Ms for 1E\, 1841-045.
Skymaps and source parameters can be derived with the maximum
likelihood method, which is implemented in the standard COMPTEL data analysis package.
For the implementation of the maximum likelihood method for data from a
Compton telescope, see \citet{deboer92}; for the specific treatment of the instrumental
background structure in the COMPTEL data space, see \citet{bloemen94}. For the analysis
and data selections we followed the approach described by \citet{zhang04}.
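For reference, in the maximum likelihood method the detection significance of a point source on top of the instrumental background model is commonly evaluated through the likelihood ratio
\[
\lambda \;=\; -2\,\ln\!\left( \frac{L({\rm background})}{L({\rm background+source})} \right),
\]
which, for one additional free parameter (the source flux), behaves as a $\chi^2$ variable with 1 degree of freedom.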
Standard energy intervals were selected for the analysis: 0.75 -- 3 MeV, 3 -- 10 MeV and
10 -- 30 MeV. For each of these energy intervals and for each AXP the source flux
(or upper limit) was measured at the known source position in the maximum likelihood skymaps.
\section{1RXS\, J1708\,-\,4009}
1RXS\, J1708\,-\,4009\ was discovered in 1996 during the ROSAT (0.1-2.4 keV) all sky survey.
X-ray pulsations at an 11-s period were subsequently detected with ASCA \citep{sugizaki97}.
Its 0.8-10 keV spectrum turned out to be very soft and its general X-ray
properties pointed to an AXP membership \citep[see e.g.][]{israel99a,kaspi99}. From regular RXTE PCA
monitoring observations performed between Jan. 1998 and June 1999 a phase-coherent timing solution was obtained
by \citet{kaspi99}. This demonstrated a high level of rotation stability. Since these early monitoring observations,
however, the source experienced two glitches - one in Sept./Oct. 1999 and a second in April 2001 - each with
different recovery behaviour \citep{kaspi00,kaspi03a,dallosso03}. For the period between the two glitches
\citet{gavriil02} presented a phase-coherent timing solution with a positive $\ddot\nu$, indicative of a
long-term glitch recovery.
The morphology of the X-ray pulse profile of 1RXS\, J1708\,-\,4009\ is changing as a function of energy \citep[e.g.][]{sugizaki97,israel01,gavriil02}. Phase-resolved spectral analyses indeed showed significant spectral variations with pulse-phase, most
pronounced in the photon power-law index \citep[e.g.][]{israel01,rea03,rea05}.
Furthermore, the total phase-averaged unabsorbed 0.5-10 keV X-ray flux and photon spectral index appear to be time
variable in a correlated way, with maximal fluxes and hardest spectra near the two glitch epochs \citep{rea05}.
At optical/IR wavelengths two potential counterparts were identified within the Chandra $0\farcs7$ HRC-I error circle \citep[see e.g.]
[for more details]{israel03,harb05}. A search for radio emission at 1.4 GHz from 1RXS\, J1708\,-\,4009\ only yielded a $5\sigma$ upper limit of
3 mJy at the position of the AXP \citep{gaensler01}.
Given the softness of the 0.5-10 keV X-ray spectra, the INTEGRAL detection reported by \citet{revnivtsev04} of a point
source at the position of 1RXS\, J1708\,-\,4009\ between 18 and 60 keV came as a big surprise. Below we will present in detail the new
high-energy characteristics of this AXP derived in this work: a) the discovery of the pulsed emission above $\sim 10$ keV (profiles, spectra)
using RXTE PCA/HEXTE and IBIS ISGRI data; b) ISGRI and COMPTEL results on the total emission.
\begin{figure}[t]
\centerline{\includegraphics[width=8.0cm,height=12.cm,bb=150 195 440 610]{f1.ps}}
\caption[]{\label{pca_rxs_stack}RXTE PCA/HEXTE pulse profiles of 1RXS\, J1708\,-\,4009\
for energies in the range 1.8-222.9 keV combining data collected between
12 Jan 1998 and 26 Oct 2003 (see Table \ref{obs_table}).
Two cycles are shown for clarity. The vertical dotted lines at phases 0.25 and
0.55 serve as a guide to the eye for alignment comparisons. Note the drastic morphology changes with energy.}
\end{figure}
\subsection{1RXS\, J1708\,-\,4009\ timing characteristics}
\subsubsection{RXTE PCA/HEXTE pulse profiles}
Applying the timing analysis procedures outlined in Sect. \ref{rxte_timing} to the full set of RXTE observations of 1RXS\, J1708\,-\,4009\ listed
in Table \ref{obs_table}, resulted in a compilation of high-statistics {\em time-averaged} PCA/HEXTE pulse profiles for energies
between $\sim 2-220$ keV (see Fig. \ref{pca_rxs_stack}). The ephemerides used in the folding/correlation process (see Sect.
\ref{rxte_timing}) are given in \citet{kaspi00,gavriil02,kaspi03a}.
For the first time pulsed emission is detected above $\sim 10$ keV:
the non-uniformity significance of the 16.1-32.0 keV PCA pulse phase distribution (see Fig.\ref{pca_rxs_stack}d) is $14.2\sigma$
applying a $Z_2^2$-test and the HEXTE 35.2-222.9 keV profile (see Fig.\ref{pca_rxs_stack}f) deviates from uniformity at a $5.2\sigma$ level.
Above 35.2 keV the significances in the HEXTE 35.2-64.1 and 74.3-222.9 keV bands (the intermediate energy window, containing a large instrumental background feature, has been omitted) are both $3.75\sigma$.
Drastic morphology changes with energy are visible. The decomposition of the pulse profiles in terms of a finite number of
harmonics (see Eq. \ref{eq_1}) provides a means to visualize a change in morphology with energy.
The power ($a_k^2+b_k^2$, see Eq.\ref{eq_1}), derived from the time-averaged PCA pulse profiles of 1RXS\, J1708\,-\,4009, in the first harmonic
is dominant over the power in the second and third harmonics, and the power in harmonics with $k \ge 4$ can be neglected
\citep[see also][]{gavriil02}.
From Eq. \ref{eq_1} one can define a phase angle $\Phi_{\alpha}^k = \arctan(a_k/b_k)$ for each harmonic $k$. The energy dependence of $\Phi_{\alpha}^k$ for the first three harmonics is shown in Fig. \ref{pca_rxs_fourier}.
For each harmonic it reveals a very smooth variation of the phase angle with energy. The shape of the profile is changing drastically
between 2 and 10 keV. For energies above $\sim 15$ keV the phase angles for the 3 considered harmonics seem to converge to constant
values, a necessary condition for stable pulse shapes.
\begin{figure}[t]
\includegraphics[width=8.0cm,height=8.cm,bb=100 200 500 630]{f2.ps}
\caption[]{\label{pca_rxs_fourier} Phase angles as a function of energy for the first 3
harmonics used in the truncated Fourier series fit (see Eq. \ref{eq_1}) of the RXTE PCA pulse profiles of 1RXS\, J1708\,-\,4009.
The harmonics are labeled with their corresponding number.}
\end{figure}
\subsubsection{INTEGRAL IBIS ISGRI pulse profiles}
We also performed a timing analysis for 1RXS\, J1708\,-\,4009\ using IBIS ISGRI data. Data from science windows taken during INTEGRAL
Revs. 36-120 satisfying our $14\fdg5$ off-axis constraint were included (effective on-axis exposure after screening $\sim$ 1,360 ks).
The processing followed the guidelines presented in Sect. \ref{sect_int_timing} using the 1RXS\, J1708\,-\,4009\ ephemeris generated from
RXTE monitoring observations, given in Table \ref{eph_table}.
In the full 20-300 keV ISGRI band we obtained a non-uniformity significance of $5.9\sigma$ applying a Z$_2^2$ test, which is
comparable to the HEXTE result. In differential energy bands we found: 20-75 keV, $4.3\sigma$, and 75-300 keV $3.6\sigma$
(see Fig. \ref{rxs_int_prof} for the corresponding pulse profiles). The HEXTE and the ISGRI profiles above 75 keV are very similar,
and suggest that the hard-X-ray 1RXS\, J1708\,-\,4009\ profile exhibits less structure than found below 10 keV. From these initial ISGRI timing
results it is clear that highly significant profiles can be expected in the near future when significantly more IBIS ISGRI data on
this source become available.
\begin{figure}[t]
\centerline{\includegraphics[height=8cm,width=6cm,angle=0,bb=170 195 420 605]{f3.ps}}
\caption{\label{rxs_int_prof} INTEGRAL IBIS ISGRI pulse profiles of 1RXS\, J1708\,-\,4009\ for two energy ranges.
The non-uniformity significances are $4.3\sigma$ and $3.6\sigma$ for 20-75 keV and 75-300 keV,
respectively. Pulse maxima are found near phase $\sim 0.2$, corresponding to phase 0.55 in
Fig. \ref{pca_rxs_stack}.
}
\end{figure}
\subsection{1RXS\, J1708\,-\,4009\ spectral characteristics}
In this section we present new high-energy spectral information above 2.5 keV up to 30 MeV for 1RXS\, J1708\,-\,4009:
(a) (time-averaged) pulsed emission from RXTE PCA and HEXTE; (b) pulsed emission from INTEGRAL IBIS ISGRI;
(c) total (pulsed and unpulsed) emission from ISGRI and upper limits to the total
emission from CGRO COMPTEL. Finally, the new spectra are compared with spectra reported
earlier for energies below 10 keV \citep[][for BeppoSAX LECS/MECS 0.4-10.8 keV and XMM Newton MOS/PN 0.5-10 keV, respectively]{rea03,rea05}.
\subsubsection{RXTE PCA/HEXTE pulsed spectrum}
The spectral procedures employed for the RXTE PCA and HEXTE data (see Sect. \ref{rxte_spectral}) resulted in a high-statistics
determination of the spectrum of the time-averaged pulsed emission of 1RXS\, J1708\,-\,4009\ in the $\sim 2.5-220$ keV energy range.
The PCA (aqua) flux values are derived assuming an absorbed double power-law spectral model, and are shown in
a $\nu F_{\nu}$ representation in Fig. \ref{HE_SPECTRUM_RXS}. Also drawn is the best fitting spectral model to the PCA data points
(2.5-36.9 keV; $\chi_{r}^2=1.11$ for 12 degrees of freedom; dashed line). The assumed absorbing Hydrogen column
density $N_H$ in the spectral fit was $1.36\times 10^{22}$cm$^{-2}$ \citep{rea03}.
The two power-law components become equally strong at $E_{cross}= 21.7 \pm 2.4$ keV: below this energy the power-law component
with index $\Gamma_1=2.60\pm 0.01$ dominates, while above it a component with a very hard spectrum, index $\Gamma_2=-0.12\pm 0.07$, takes over.
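For reference, equating the two components of the double power-law model, $K_1 E^{-\Gamma_1} = K_2 E^{-\Gamma_2}$, gives
\[
E_{cross} \;=\; \left( \frac{K_1}{K_2} \right)^{1/(\Gamma_1-\Gamma_2)},
\]
with the quoted uncertainty following from propagation of the errors on the fitted normalizations and indices.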
It is clear that the pulsed spectrum hardens dramatically above 20 keV; however, the spectrum has to soften considerably to be consistent
with the index of $1.01(12)$ derived from a combination of PCA, HEXTE and IBIS ISGRI pulsed flux measurements for energies $\ga 15$ keV
(see Sect.\ref{sect_ii_rxs_tm}). The HEXTE flux values (simultaneously derived) are fully consistent with the upturn found in the PCA
spectrum (see Fig.\ref{HE_SPECTRUM_RXS}, blue datapoints).
\subsubsection{INTEGRAL IBIS ISGRI pulsed spectrum}
\label{sect_ii_rxs_tm} Pulse profiles for 1RXS\, J1708\,-\,4009\ have been generated in three energy intervals
from 20 to 300 keV using IBIS ISGRI data from Revs 36-120. The pulsed (excess) count rates were determined
and transformed to pulsed flux values, calibrated on the pulsed Crab spectrum. The three flux values
are also included in Fig. \ref{HE_SPECTRUM_RXS}. They are consistent with the data points derived
from RXTE HEXTE for the same energy window, but measured at different epochs.
Within the present statistical accuracies there is no indication for long-term flux variability above 20 keV, contrary
to the reported strong flux variability for energies below 10 keV, clearly visible in Fig. \ref{HE_SPECTRUM_RXS}.
A power-law fit through the PCA, HEXTE and IBIS ISGRI pulsed flux measurements for energies above 15 keV yielded a
photon index of $1.01\pm 0.12$. This model is shown in Fig. \ref{HE_SPECTRUM_RXS} as a dashed black line for the 15-300 keV energy
range. There is no indication yet for a second spectral break up to 300 keV.
\begin{figure}[t]
\centerline{\includegraphics[height=8cm,width=8cm,angle=0]{f4.ps}}
\caption{\label{HE_SPECTRUM_RXS}A $\nu F_{\nu}$ spectral representation of the total and pulsed high-energy
emission from 1RXS\, J1708\,-\,4009. The aqua (PCA), blue (HEXTE) and magenta (IBIS ISGRI) data points show the
{\em time-averaged} 2.5-300 keV pulsed spectrum. The black dashed line shows the best power-law model
fit to the combination of PCA, HEXTE and ISGRI pulsed flux values for energies above $\sim 15$ keV.
The other measurements refer to the total emission spectrum: 0.5-10 keV, BeppoSAX LECS/MECS and XMM Newton
spectral models at different epochs (Rea et al. 2003,2005); 20-300 keV time-averaged (Revs. 36-106)
IBIS ISGRI spectrum; and 0.75-30 MeV, time-averaged CGRO COMPTEL $2\sigma$ upper-limits.
Note the drastic hardening of the pulsed spectrum near 20 keV. The COMPTEL upper limits require another
spectral break somewhere between 300 and 750 keV.}
\end{figure}
\subsubsection{INTEGRAL IBIS ISGRI and COMPTEL total spectrum}
1RXS\, J1708\,-\,4009\ was detected at a $6.5\sigma$ level in the 18-60 keV energy band by \citet{revnivtsev04} using IBIS ISGRI data
from a 2 Ms ultra deep INTEGRAL survey of the Galactic center region performed in Aug. - Sept. 2003 (INTEGRAL Revs. 104-107 \& 111-113).
Its 18-60 keV flux was $2.2\pm 0.3$ mCrab. In this work we analyzed (almost independent) IBIS ISGRI data from all publicly available
INTEGRAL observations performed between revolutions 36 and 106 in which the angular distance between 1RXS\, J1708\,-\,4009\ and the science window
pointing was $\le 14\fdg5$. The effective on-axis exposure after screening is about 974 ks.
Mosaic images have been made using OSA 4.1 procedures for 5 broad differential energy bands, 20-35, 35-60, 60-100, 100-175 and 175-300 keV,
in order to determine the total hard X-ray/soft $\gamma$-ray spectrum of this AXP. 1RXS\, J1708\,-\,4009\ was detected significantly only in the 20-35
and 35-60 keV energy bands with flux levels consistent with the 18-60 keV flux measurement by
\citet{revnivtsev04}. The resulting IBIS ISGRI spectral measurements are shown as purple data points in Fig. \ref{HE_SPECTRUM_RXS}.
Unfortunately, the statistics in these IBIS ISGRI data points are still too poor to constrain the underlying emission model in the
20-300 keV range: a simple power-law model still gives an acceptable fit quality (reduced $\chi^2_r$ of $1.77$ for 3 d.o.f.);
the resulting photon index is $1.44 \pm 0.45$, slightly softer than, but consistent with, the pulsed flux
spectrum derived from the combined fit to the PCA, HEXTE and IBIS ISGRI pulsed flux measurements at $\ga 15$ keV.
Finally, at energies between 750 keV and 30 MeV we generated skymaps using all CGRO COMPTEL observations with 1RXS\, J1708\,-\,4009\ in its field of
view, spread over its full 9 year mission lifetime. The source was not seen in any of the standard energy intervals, and we derived
($2\sigma$) flux upper limits, also shown in Fig. \ref{HE_SPECTRUM_RXS}.
\subsubsection{Discussion of spectra and pulsed fraction of 1RXS\, J1708\,-\,4009}
Comparing the total IBIS ISGRI spectrum derived in the spatial analysis with the pulsed time-averaged spectra
from RXTE PCA/HEXTE and ISGRI, we see that the 1RXS\, J1708\,-\,4009\ emission is consistent with about 100\% pulsation
for energies in excess of 20 keV. For energies below 20 keV, the situation is very different. To highlight this, we included
in Fig.\ref{HE_SPECTRUM_RXS} the emission models (best spectral fits) for the total 0.5-10 keV X-ray spectra measured at two
different epochs \citep[][for BeppoSAX LECS/MECS 0.4-10.8 keV and XMM Newton MOS/PN 0.5-10 keV, respectively]{rea03,rea05}.
Given the apparent drastic spectral change in both shape and normalization for energies below 10 keV, we cannot uniquely quantify
the pulsed fraction as a function of energy by comparison with the time-averaged high-statistics PCA pulsed flux measurements.
Obviously, the BeppoSAX measurement during August 2001 does not connect to the RXTE/INTEGRAL spectra. 1RXS\, J1708\,-\,4009\ was apparently
in a (very) different high state during the BeppoSAX observation, which took place during the recovery phase following the second
reported glitch \citep[see][]{rea05}.
The XMM Newton spectrum connects very smoothly to the total spectrum measured by INTEGRAL, and suggests a variation
in pulsed fraction from $\sim$ 25\% at 2 keV to 100\% at 20 keV. Our time averaged results for the total and pulsed emission
above 20 keV suggest a very stable behaviour of this AXP at these higher energies. It seems that a stable and very hard pulsed
component is dominating the emission at these energies.
Interestingly, the COMPTEL MeV upper limits then require a spectral break in this hard spectrum of 1RXS\, J1708\,-\,4009\ somewhere
between 300 and 750 keV.
\section{4U\, 0142\,$+$\,61}
4U\, 0142\,$+$\,61\ was discovered in the early seventies by the scanning {\it Uhuru\/} X-ray observatory
\citep[see e.g.][]{forman78}. The X-ray sky survey performed by the SSI (1.5-20 keV) on {\it Ariel V\/}
\citep{warwick81} and X-ray observations by SAS-3 and HEAO-1 confirmed the source and refined
its position considerably \citep[see e.g.][for an X-ray position with $23\arcsec$ accuracy]{reid80}.
Data from the non-imaging ME collimator ($45\arcmin$ FWHM) aboard EXOSAT revealed a coherent 25 minute
signal, which was later absent in two shorter EXOSAT observations \citep{white87}. These authors also presented
more accurate ($3\farcs2$) positional information for 4U\, 0142\,$+$\,61\ using both EXOSAT LEIT and {\it Einstein\/} (HEAO-2)
HRI data and demonstrated the lack of a bright optical counterpart, ruling out the presence of a massive companion.
The discovery in ROSAT all-sky survey data of a Be/X-ray binary, RX J0146.9+6121, by \citet{motch91} at only $24\arcmin$ from
4U\, 0142\,$+$\,61\ showed that there had been source confusion. Revisiting the EXOSAT 1985 data, \citet{israel94} discovered an 8.7 s
periodicity, only detectable in the 1-3 keV energy band.
Finally, using ROSAT PSPC data, \citet{hellier94} unambiguously demonstrated that the 8.7 s periodicity originated from 4U\, 0142\,$+$\,61\ and the 25 min
signal from the Be/X-ray binary RX J0146.9+6121, which moreover showed transient activity.
A good quality phase-averaged 0.5-10 keV X-ray spectrum of 4U\, 0142\,$+$\,61\ was derived by \citet{white96} analyzing ASCA SIS/GIS data.
The physically most plausible model consists of an absorbed black-body plus power-law component with best-fit model parameters: black-body
temperature $0.386(5)$ keV; photon-index $3.67(9)$ and N$_{\hbox{\scriptsize H}}$ $9.5(4)\times 10^{21}$ cm$^{-2}$. These authors, using data covering a much wider time baseline, also confirmed the spin-down timescale of $\sim 1.2\times 10^5$ year, reported earlier by
\citet{hellier94}. The spin-down energy release is much lower than the observed X-ray luminosity, thereby ruling out that 4U\, 0142\,$+$\,61\ is a spin-down
powered pulsar.
RXTE monitoring observations since November 1996 \citep{gavriil02} proved that 4U\, 0142\,$+$\,61\ is a very stable rotator, like spin-down powered pulsars, and
ASCA data taken in a 2 year gap of RXTE observations seemed to indicate a pulsar-like glitch in its rotation behaviour
\citep{morii05}. Time-averaged RXTE PCA pulse profiles in two different energy bands showed significant morphology changes with energy,
consistent with earlier results of much lower significance \citep{israel94,white96,israel99b}.
Recent X-ray observations of 4U\, 0142\,$+$\,61\ by Chandra \citep{juett02,patel03} and XMM \citep{gohler05} improved not only the positional
accuracy down to $0\farcs5$, but also provided high-quality (phase-resolved) spectra which are mutually consistent and
appeared featureless. The phase-resolved spectra showed significant changes in photon-index and black body temperature as a function
of pulse phase.
At lower energies, \citet{hulleman00} discovered a faint optical counterpart within the 4U\, 0142\,$+$\,61\ {\it Einstein\/} X-ray error circle, followed by
the detection of optical pulsations with a high pulsed fraction from this counterpart \citep{kern02}. No radio counterpart has been found yet
\citep{gaensler01}.
At higher energies the discovery was reported by \citet{hartog04} of hard X-rays/soft $\gamma$-rays (20 -- 150 keV) in INTEGRAL IBIS ISGRI data
of a deep 1.6 Ms observation of the Cassiopeia region. A detailed analysis of this long observation \citep{hartog06}, proved that the spectrum in this energy range is very hard with power-law photon index $\Gamma=0.73 \pm 0.17$. The X-ray luminosity between 20 and 100 keV was found to be
$5.9 \times 10^{34}$ erg s$^{-1}$, a factor 440 higher than the rotational energy loss. They also reported flux upper limits from COMPTEL
between 0.75 and 30 MeV, which indicated that the hard spectrum has to break/bend between $\sim$ 100 keV and 0.75 MeV.
In this work we will present for 4U\, 0142\,$+$\,61: a) for the first time the PCA/HEXTE timing and spectral results for energies above $\sim 8$ keV
using all publicly available RXTE data, b) timing and spectral results from analysis of ASCA GIS 0.5 - 10 keV data, and c) a spectrum with
higher statistical accuracy of the total emission as seen by IBIS ISGRI over the 20-300 keV energy range using all available public,
Core Program (Galactic plane scans; 4U 0115+63 Target of Opportunity Observation) and Open Time (Cassiopeia region; IGR J00291+5934 TOO)
INTEGRAL data.
\subsection{4U\, 0142\,$+$\,61\ timing characteristics}
\begin{figure}[t]
\centerline{\includegraphics[height=8.5cm,width=10cm,angle=90,bb=55 172 515 670]{f5.ps}}
\caption{\label{pca_u_stack} ASCA GIS, RXTE PCA and HEXTE pulse profiles of 4U\, 0142\,$+$\,61\
for energies in the range 0.5-156.8 keV. Two cycles are shown for clarity. The RXTE profiles shown in panels d--i are
time-averaged and based on observations performed between March 1996 and September 2003. The GIS profiles (panels a--c) are
from ASCA observations performed in July/August 1999 totaling $\sim 120$ ks of exposure. For the first time significant
deviations from uniformity are visible at energies $> 8$ keV: $Z_2^2$ is $16.6\sigma$ and $5.9\sigma$ for the PCA
8.0-16.3 and 16.3-31.8 keV bands, respectively; the significances for the HEXTE profiles are $3.4\sigma$ and $2.0\sigma$
for the 21.2-50.3 and 50.3-156.6 keV bands. Note the drastic morphology changes with energy.
}
\end{figure}
Time-averaged RXTE pulse profiles of 4U\, 0142\,$+$\,61\ are shown in Fig. \ref{pca_u_stack} for energies between 2.2 and 156.6 keV (PCA/HEXTE).
We used in the folding/correlation process (see Sect. \ref{rxte_timing}) the ephemerides given by \citet{gavriil02}.
The 2.2-4.0 keV PCA profile (panel d) shows two distinct pulses near phases 0.2 and 0.55. Moving up in energy the pulse near 0.2
loses significance and disappears for energies above $\sim 8$ keV (panels e--f). Instead, above $\sim 8$ keV a feature near phase
0.9 appears, which is visible up to $\sim 50$ keV (panels f--h).
Above 50 keV (panel i) the phase region between the two pulses at 0.55 and 0.9 seems to be filled in by a new component, but the
statistics are too poor to draw firm conclusions.
Most importantly, we detect for the first time significant pulsed emission from 4U\, 0142\,$+$\,61\ at energies above $\sim 8$ keV: $16.6\sigma$
and $5.9\sigma$ for the PCA 8.0-16.3 and 16.3-31.8 keV bands, respectively.
To investigate further the increase in strength of the pulse near phase 0.2 towards lower energies, we extended the energy baseline
by including also our results from a timing analysis of ASCA GIS (0.5-10 keV) data from observations performed in July/August 1999
($\sim 120$ ks exposure). Three profiles (energy bands 0.5-1.7, 1.7-3.0 and 3.0-10 keV) are shown in panels a--c of Fig.\ref{pca_u_stack}.
Indeed, for the pulse near phase 0.2 the trend of increasing strength towards lower energies is continued in the
ASCA GIS 0.5-1.7 and 1.7-3.0 keV energy ranges.
A decomposition of the PCA pulse profiles in terms of 5 Fourier harmonics shows that above 2 keV most of the signal power is
embedded in the first harmonic, which becomes increasingly dominant over the power in the second harmonic, the next most powerful one,
with increasing energy.
Below $\sim 2$ keV the power in the second harmonic is dominant over that in the first. The behaviour of
the phase angles $\Phi_{\alpha}^k$ as a function of energy for the first three harmonics is shown in Fig. \ref{pca_u_fourier}.
As for 1RXS\, J1708\,-\,4009, though with different trends, $\Phi_{\alpha}^k$ again varies smoothly with energy
for each harmonic with significant power.
The exception is the derivative of $\Phi_{\alpha}^2$ with respect to energy, which flips sign near 5 keV. For 4U\, 0142\,$+$\,61\ it is less obvious
that the pulse profile remains stable above 20 keV than seems to be the case for 1RXS\, J1708\,-\,4009.
\begin{figure}[t]
\includegraphics[width=8.0cm,height=8.cm]{f6.ps}
\caption{\label{pca_u_fourier} Phase angles as a function of energy for the first 3
harmonics of the truncated Fourier series fit of the RXTE PCA pulse profiles of 4U\, 0142\,$+$\,61.
Note the sign reversal of the energy derivative of $\Phi_{\alpha}$ for the second harmonic near 5 keV.}
\end{figure}
\subsection{4U\, 0142\,$+$\,61\ spectral characteristics}
In this section we present: a) the {\em time-averaged} pulsed spectrum (2.2-102 keV) of 4U\, 0142\,$+$\,61\ based on RXTE PCA and
HEXTE measurements. The pulsed spectrum is extended down to 0.8 keV by including our spectral results from the ASCA
GIS observations ($\sim$120 ks); b) the {\em time-averaged} total (pulsed and unpulsed) spectrum
from our analysis of INTEGRAL IBIS ISGRI skymaps (20-300 keV), in comparison with that from Chandra ACIS-S CC-mode
data \citep[0.5-7 keV;][]{patel03} and the CGRO COMPTEL (0.75-30 MeV) upper limits \citep{hartog06}.
\subsubsection{RXTE PCA/HEXTE and ASCA GIS pulsed spectrum}
Assuming an underlying absorbed \citep[$N_H=9.3\times 10^{21}$cm$^{-2}$;][]{patel03} double power-law photon model
we fitted the pulsed PCA excess counts (2.2-31.5 keV) in a forward folding procedure (see Sect. \ref{rxte_spectral}),
resulting in a high-statistics time-averaged pulsed spectrum.
The best model fit ($\chi_{r}^2=0.7$; d.o.f. 15 - 4) yielded for the soft and hard photon indices $\Gamma_1 = 4.09 \pm 0.04$
and $\Gamma_2 = -0.8^{+0.10}_{-0.07}$, respectively. This is shown as a dashed aqua curve in Fig. \ref{HE_SPECTRUM_U}, together
with the ``deconvolved'' unabsorbed flux values (aqua colored open circles). The power-law model components become equally strong
at $11.46\pm 0.64$ keV.
The spectral hardening of the pulsed emission of 4U\, 0142\,$+$\,61\ around 10 keV is dramatic, much more pronounced than the
hardening observed for 1RXS\, J1708\,-\,4009.
The HEXTE pulsed flux values (Fig. \ref{HE_SPECTRUM_U}; dark-blue open squares) are in line with the extrapolation of the PCA
hard-power-law model component.
The ASCA GIS 0.8-10 keV pulsed flux values which we derived from the July/August 1999 observations are also shown in Fig.
\ref{HE_SPECTRUM_U} (filled squares) and are within the systematic/statistical uncertainties consistent with the
time-averaged PCA fluxes in the overlapping energy range. This suggests a rather stable pulsed spectrum both in
shape and normalization.
\begin{figure}[t]
\centerline{\includegraphics[height=8cm,width=8cm,angle=0]{f7.ps}}
\caption{\label{HE_SPECTRUM_U} A $\nu F_{\nu}$ spectral representation of the total and pulsed high-energy emission
from 4U\, 0142\,$+$\,61. The aqua, blue and dark cyan data points/curves represent the pulsed emission component of the spectrum
(0.8-102 keV) based on RXTE PCA/HEXTE observations (time-averaged) and ASCA GIS. Note the dramatic hardening of the pulsed
spectrum near 10 keV. The other measurements refer to the total emission spectrum: 0.8-10 keV, Chandra ACIS \citep{patel03},
20-300 keV, INTEGRAL IBIS ISGRI (this work) and 0.75-30 MeV, CGRO COMPTEL \citep{hartog06}. The COMPTEL $2\sigma$ upper
limits require a spectral break somewhere between 140 and 750 keV.
}
\end{figure}
\subsubsection{INTEGRAL IBIS ISGRI total spectrum}
\citet{hartog06} report the detection and total spectrum of 4U\, 0142\,$+$\,61\ in the 20-100 keV energy range
analyzing IBIS ISGRI data from INTEGRAL observations of the Cassiopeia region performed in Dec. 2003
(Revs. 142 - 148). In this work all available INTEGRAL data (open time/core program)
have been used from observations made between 3 March 2003 (Rev. 47) and 28 December 2004 (Rev. 269) in
which the source was within $14\fdg5$ from the pointing axis.
The total screened exposure time for the set of accumulated science windows is about 2 Ms, which corresponds
to an effective on-axis exposure of about 858 ks. Image mosaics have been generated using OSA 4.1 software in
7 differential energy bands, 20-30, 30-45, 45-65, 65-95, 95-140, 140-205 and 205-300 keV. In these (time-averaged) maps\footnote{Be X-ray binary RX J0146.9+6121 at only $24\arcmin$ from 4U\, 0142\,$+$\,61\ was detected, fully resolved,
only in the first three energy bands.}
4U\, 0142\,$+$\,61\ is clearly detected up to and including the 95-140 keV energy range ($7.8\pm 1.0$ mCrab). The 20-300
keV flux measurements are shown as purple data points in Fig. \ref{HE_SPECTRUM_U}. A simple power-law fit to
these flux points yielded a hard photon index of $1.05\pm 0.11$ ($\chi^2_{r}$ = 0.86 for d.o.f. 6 - 2; the purple
dashed line in Fig. \ref{HE_SPECTRUM_U}). The difference with the value ($0.73 \pm 0.17$) reported for the index by \citet{hartog06} is 1.6$\sigma$.
\subsubsection{Discussion of spectra and pulsed fractions of 4U\, 0142\,$+$\,61}
The INTEGRAL spectrum of the total emission above 20 keV can be compared with the Chandra total spectrum for energies below 10 keV
\citep[see the dark orange solid line in Fig. \ref{HE_SPECTRUM_U};][]{patel03}. As is the case for the pulsed spectra above and below $\sim$ 10 keV,
these two spectra are drastically different, and together reveal a sharp minimum in luminosity of 4U\, 0142\,$+$\,61\ around 10 keV.
Including now in this high-energy spectral picture the time-averaged CGRO COMPTEL
$2\sigma$ flux upper limits (red triangles in Fig. \ref{HE_SPECTRUM_U}) from \citet{hartog06}
confirms that the power-law model satisfactorily describing the 20-300 keV
total spectrum will not extend into the MeV range, but must break somewhere
between 140 and 750 keV. To investigate this further, we also fitted a spectral model
with an energy-dependent photon index, $F_{\gamma}=K\cdot E_{\gamma}^{-(\Gamma + \alpha \ln(E_{\gamma}))}$,
to the 20-300 keV IBIS ISGRI measurements (see Fig. \ref{HE_SPECTRUM_U} dashed
dotted purple line). The model extrapolation towards the MeV energy range is consistent with the COMPTEL upper limits,
but the improvement of the fit is insufficient to claim a change in spectral shape in the 20-300 keV window.
Comparing now the total and pulsed high-energy spectra of 4U\, 0142\,$+$\,61,
the Chandra total flux values below 10 keV are about 10 times
higher than the time-averaged RXTE PCA and ASCA GIS pulsed flux measurements.
This is consistent with pulsed fraction estimates of $\sim$ 10\% reported
by \citet{patel03} and
\citet{gohler05} using solely Chandra and XMM Newton data, respectively.
The total spectrum measured by INTEGRAL above 20 keV with slope $1.05\pm 0.11$
is significantly softer than the pulsed spectrum measured by RXTE above
$\sim$ 10 keV with slope $-0.8^{+0.10}_{-0.07}$. As a result, the pulsed
fraction appears to vary with energy from $\sim$ 10\% at 20 keV to
$\sim$ 100\% between 80 and 100 keV.
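This energy dependence follows directly from the ratio of the two power laws: for a total spectrum $\propto E^{-\Gamma_{tot}}$ and a pulsed spectrum $\propto E^{-\Gamma_{pul}}$ the pulsed fraction scales as
\[
\eta(E) \;\propto\; E^{\,\Gamma_{tot}-\Gamma_{pul}} \;\approx\; E^{\,1.85},
\]
so that, illustratively, a $\sim$ 10\% pulsed fraction at 20 keV rises to order unity within a factor of a few in energy, consistent with the numbers quoted above.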
\section{1E\, 2259\,$+$\,586}
Near the center of SNR CTB 109 (G 109.1-1.0) a strong compact X-ray source was found
by \citet{gregory80}, later dubbed 1E\, 2259\,$+$\,586, which appeared to be an X-ray pulsar with
a period of $3.489(2)$ s \citep{fahlman81}.
Analysis of more IPC data showed this period to coincide with the first
harmonic of a fundamental 6.978 s period \citep{fahlman83}.
No radio enhancement at the position of the X-ray pulsar could be identified
down to a level of 0.5 mJy \citep[WSRT, 21 cm;][]{hughes84},
0.2 mJy \citep[VLA, 20 cm;][]{gregory83} and even $50\mu$Jy \citep[VLA,
20cm;][]{coe94,kaspi03}.
On the other hand, the lack of a bright visual counterpart ($V\ge 21$) ruled out
a supergiant or massive main-sequence star association
\citep{fahlman81}.
Further X-ray observations of 1E\, 2259\,$+$\,586\ in the eighties with Tenma (Astro-B),
EXOSAT and Ginga (Astro-C) \citep[see e.g.][]{koyama87,morini88,hanson88,koyama89,iwasawa92} revealed a
steady spin-down, too slow to power the observed X-ray luminosity, and a very soft spectrum.
X-ray flux measurements from two different Ginga observations spread $\sim 8$ months apart
indicated considerable flux variations, accompanied by clear pulse morphology
changes and probably a decrease in spin-down rate \citep{iwasawa92}.
The X-ray picture was refined considerably in the nineties using data from
BBXRT, ASCA (Astro-D), ROSAT, BeppoSAX and RXTE.
\citet{corbet95} showed for the first time that a two-component spectral model -
black-body plus power-law - could fit
the ASCA and BBXRT spectral data satisfactorily without invoking spectral (line)
features. This finding was confirmed by \citet{rho97} analyzing
ROSAT PSPC, ASCA and BBXRT data simultaneously, and by \citet{parmar98} analyzing
BeppoSAX LECS and MECS data.
RXTE timing measurements \citep{mereghetti98} further tied down earlier limits on
the projected semi-major axis to 0.03 lt-s, leaving room only for a white
dwarf companion, helium-burning star with mass smaller than 0.8 M$_\sun$ or
a main-sequence star viewed under very small inclination angles.
\citet{kaspi99} obtained for the first time a phase coherent timing solution
for 1E\, 2259\,$+$\,586, indicating great stability (rms 0.01 cycles) over the
2.6 year timespan (29-Sept-1996 -- 12-May-1999) of their RXTE monitoring
observations. This stability makes binary accretion scenarios very
unlikely and favours a magnetar interpretation. Additional RXTE monitoring data
showed that this rotational stability was maintained throughout
an extended 4.5 yr period, although the inclusion of
$\ddot\nu$ was required in the timing model \citep{gavriil02}.
These authors also found that the pulse morphology did not change significantly
with time \citep[cf.][who claimed changes in morphology with time]{iwasawa92}.
Moreover, there was no evidence for large
variability in the pulsed flux, in line with earlier work by \citet{baykal00} using
RXTE data, but in contrast with the Ginga findings \citep{iwasawa92}.
Furthermore, \citet{gavriil02} showed, convincingly for the first time, that
the pulse profile morphology of 1E\, 2259\,$+$\,586\ changes
with energy \citep[cf.][for earlier indications]{hanson88}.
Observations with the Chandra X-ray observatory in January 2000 provided an
X-ray position with subarcsecond accuracy \citep{hulleman01,patel01}.
In the $0\farcs6$ error circle (99\%) a very faint ($K_s=21.7\pm 0.2$ mag)
near-infrared counterpart was identified \citep{hulleman01},
excluding models in which the source is powered by disk accretion.
The total X-ray (0.5-7.0 keV) spectrum of 1E\, 2259\,$+$\,586\ appeared featureless and could
best be described by a combination of a black body $kT=0.412(6)$ keV
plus power-law $\Gamma=3.6(1)$ absorbed by a Hydrogen column $N_H=(9.3\pm0.3)
\times 10^{21}$ cm$^{-2}$ \citep{patel01}.
In June 2002 RXTE observed an outburst of 1E\, 2259\,$+$\,586\ during which both the pulsed and persistent X-ray
emission increased by more than an order of magnitude relative to their quiescent levels \citep{kaspi03}.
During the course of the observation a spectral softening occurred, and a
significant pulse profile change was observed. In the meantime
the pulsar underwent a sudden spin-up (glitch) followed by a large increase in
spin-down rate lasting for more than 18 days.
Furthermore, 80 X-ray bursts were detected during the 14.4 ks RXTE observation
with durations ranging from 2 ms to 3 s.
The outburst properties of 1E\, 2259\,$+$\,586\ share strong similarities with SGR outburst
characteristics, thereby conclusively unifying AXPs and SGRs
and strongly supporting a magnetar interpretation. More detailed information on
the burst characteristics can be found in \citet{gavriil04}.
\citet{woods04} presented a comparison of the X-ray emission characteristics of
1E\, 2259\,$+$\,586\ before, during and after the June 2002 outburst using
data from XMM-Newton and RXTE. They quantified the changes of the temporal and
spectral properties and derived recovery timescales.
The X-ray flux increase and subsequent decay can be described by two distinct
components: one component linked to the burst activity with a
timescale of $\sim 2$ days and a second component which decays over the course
of the year according to a power-law in time ($F \propto t^\alpha$)
with index $\alpha = -0.22\pm0.01$. The latter component behaves similarly in
time as the observed near-infrared flux decay \citep{tam04} and
thus these seem to be linked to a common physical mechanism likely acting in the
pulsar's magnetosphere.
In this work we derived for the first time the time-averaged timing/spectral
properties of 1E\, 2259\,$+$\,586\ for energies above $\sim 8$ keV using archival
RXTE PCA/HEXTE data and we compared the pulsed spectrum with the upper limits
from IBIS ISGRI and COMPTEL \citep{hartog06}.
\subsection{1E\, 2259\,$+$\,586\ timing characteristics}
The {\em time-averaged} RXTE PCA/HEXTE pulse profiles combining all observations of 1E\, 2259\,$+$\,586\ listed in Table \ref{obs_table},
except the 14.4 ks period of outburst in 2002, are shown in Fig. \ref{pca_e_stack} for energies between 2.2 and 27 keV.
In the folding/correlation process (see Sect. \ref{rxte_timing}) we used the ephemeris
given by \citet{gavriil02}. The screened PCU-2 exposure amounts to $\sim 747$ ks, while the corresponding
dead-time corrected on-source HEXTE exposures are 256.4 and 267.7 ks for cluster-0 and 1, respectively.
For the first time significant deviations from uniformity are visible at energies $\ga 8$ keV:
$Z_2^2$ is $10.6\sigma$, $5.2\sigma$ and $3.1\sigma$ for the PCA 8.3-11.9, 11.9-16.3 and 16.3-24 keV bands, respectively
(Fig. \ref{pca_e_stack} panels c-e). The HEXTE sensitivity is still too low to detect the underlying pulsed emission
above $\sim 15$ keV (Fig. \ref{pca_e_stack} panel f).
\begin{figure}[t]
\centerline{\includegraphics[height=12cm,width=8cm,angle=0,bb=125 155 465 650]{f8.ps}}
\caption{\label{pca_e_stack}RXTE PCA/HEXTE pulse profile collage of 1E\, 2259\,$+$\,586\
for energies in the range 2.2-27.0 keV combining data collected between
29 Sept 1996 and 28 Oct 2003 (see Table \ref{obs_table}).
Two cycles are shown for clarity. The vertical dotted lines at phases 0.25 and
0.70 serve as a guide to the eye for alignment comparisons. Clear pulse profile
morphology changes with energy are present.
Pulsed emission is detected up to $\sim 24$ keV.
Note the enhancement within the main valley between the two peaks near phase
0.45, also visible in Fig. 7c of \citet{gavriil02}.}
\end{figure}
The morphology of the double-peaked pulse profile changes gradually moving up in
energy from 2.2 to 24 keV: the dominant pulse near phase 0.7
in Fig. \ref{pca_e_stack} at energies below $\sim 4$ keV loses prominence with
respect to the second pulse near phase 0.25. The latter
dominates at energies above $\sim 8$ keV. Also note the existence of a narrow
pulse-like feature in the deepest minimum near phase 0.45, most
striking in panel b (4-8.3 keV) of Fig. \ref{pca_e_stack}. The latter feature
is also clearly visible in Fig. 7c of \citet{gavriil02}.
More quantitative information on the energy dependency of the pulse morphology
can be obtained through a decomposition of the pulse profiles
in Fourier components. At least 5 harmonics are required to adequately fit the
profiles in the various energy slices, mainly driven by
the existence of the narrow, significant feature in the main valley between the
peaks. This was also shown by \citet{gavriil02}. While the power of the second harmonic (2 maxima per cycle)
dominates over that of the first up to $\sim 10$ keV (beyond which they become
statistically equally strong) and the phase angle $\Phi_{\alpha}^2$ of
the second harmonic remains more or less at the same position in phase, the
first harmonic (broad component) shifts gradually to the right in
phase with increasing energy (see Fig. \ref{pca_pa_e2259}). This energy-dependent
shift reflects the collapse, at higher energies, of the pulse that
dominates at low energies near phase 0.7.
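For reference, such a truncated Fourier decomposition of a binned profile can be obtained by linear least squares; the sketch below (Python) returns amplitudes and phase angles per harmonic, ignoring the Poisson weighting that the actual fits of course include:
\begin{verbatim}
import numpy as np

def fourier_fit(profile, n_harm=5):
    """Fit a binned pulse profile with c0 + sum_k [a_k cos(2 pi k phi)
    + b_k sin(2 pi k phi)], k = 1..n_harm, by linear least squares."""
    nbins = len(profile)
    phi = (np.arange(nbins) + 0.5) / nbins
    cols = [np.ones(nbins)]
    for k in range(1, n_harm + 1):
        cols += [np.cos(2 * np.pi * k * phi), np.sin(2 * np.pi * k * phi)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), profile, rcond=None)
    a, b = coef[1::2], coef[2::2]
    amplitude = np.hypot(a, b)
    phase_angle = (np.arctan2(b, a) / (2.0 * np.pi)) % 1.0
    return amplitude, phase_angle
\end{verbatim}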
\subsection{1E\, 2259\,$+$\,586\ spectral characteristics}
\begin{figure}[t]
\includegraphics[width=8cm,height=7.5cm,bb=90 200 515 645]{f9.ps}
\caption{\label{pca_pa_e2259} Phase angles as a function of energy for the first 3
harmonics used in the truncated Fourier series fit of the RXTE PCA pulse profiles of 1E\, 2259\,$+$\,586.
Note that the position of the phase angle of the dominant second harmonic (labelled 2) is
more or less constant (energy independent), while that of the first harmonic (labelled 1)
decreases with increasing energy (shifts to the right).}
\end{figure}
In this section the {\em time-averaged} pulsed spectrum of 1E\, 2259\,$+$\,586\ is
derived based on the RXTE PCA \& HEXTE observations given in Table
\ref{obs_table}. The high-energy picture has been completed by
comparing our results with the total emission spectrum (2-10 keV) as determined by
\citet{patel01} and with IBIS ISGRI (20-300 keV) and COMPTEL (0.75-30 MeV)
upper limits \citep{hartog06}.
\subsubsection{RXTE PCA/HEXTE pulsed spectrum}
Employing the same counts-extraction technique as used above for the other AXPs, the pulsed
excess counts from the PCA profiles are converted into flux values
taking into account the different PCU exposures and sensitivities, and adopting
an absorbing Hydrogen column of $N_H=(9.3\pm0.3) \times 10^{21}$ cm$^{-2}$ \citep{patel01}.
Assuming a simple absorbed power-law model the fit resulted in a poor $\chi^2$ of $26.24$ for
$14-2$ degrees of freedom ($\chi^2_r = 2.187$), indicating an inappropriate fitting function
(there is a $\sim 1\%$ probability that such a high value of $\chi^2_r$ is obtained at random assuming
that the fit function is a good representation of the unknown parent function).
An absorbed double power-law model, however, yielded a reasonable $\chi^2$ of $16.70$ for $14-4$ degrees
of freedom ($\sim 10\%$ random probability). The improvement $\Delta \chi^2$ of $9.55$ when adding two additional fit parameters
translates to a $\sim 3\sigma$ improvement adopting a maximum likelihood ratio test. Thus, the double power-law model
provided a significant improvement in describing the same spectral data.
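The significance of the $\Delta\chi^2$ improvement can be estimated as sketched below (Python); note that the exact $\sigma$-equivalent depends on the tail-probability convention adopted:
\begin{verbatim}
from scipy.stats import chi2, norm

chi2_single, dof_single = 26.24, 14 - 2   # absorbed power law
chi2_double, dof_double = 16.70, 14 - 4   # absorbed double power law

delta = chi2_single - chi2_double         # ~9.5 for 2 extra parameters
p = chi2.sf(delta, dof_single - dof_double)
print(f"Delta chi^2 = {delta:.2f}, p = {p:.4f}, "
      f"~{norm.isf(p):.1f} sigma (one-sided)")
\end{verbatim}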
Adopting this double-power-law model as underlying photon model spectrum, the pulsed flux values are shown in Fig.
\ref{HE_SPECTRUM_1E} as aqua colored filled circles, and the best fit double-power-law model
as a dashed aqua colored line.
The soft and hard power-law indices are $\Gamma_1=4.26\pm 0.01$ and $\Gamma_2=-1.02^{+0.24}_{-0.13}$,
respectively, and the power-law-model components become equally strong at $E_{int}=15.8\pm 2.3$ keV.
Thus, also for this AXP we witness the onset of a dramatic hardening of the pulsed spectrum beyond
$\sim 10$ keV, although confirmation for energies beyond 20 keV is required.
For HEXTE the source is too weak to be detected; $2\sigma$ upper limits (dark blue) are shown in Fig.
\ref{HE_SPECTRUM_1E}.
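The intersection energy quoted above follows from equating the two components, $K_1E^{-\Gamma_1} = K_2E^{-\Gamma_2}$, giving $E_{int} = (K_1/K_2)^{1/(\Gamma_1 - \Gamma_2)}$; a minimal check (Python), where the normalizations are illustrative values chosen to reproduce $E_{int}$, not the fitted ones:
\begin{verbatim}
def e_intersect(k1, gamma1, k2, gamma2):
    """Energy (keV) where K1*E**-gamma1 equals K2*E**-gamma2."""
    return (k1 / k2)**(1.0 / (gamma1 - gamma2))

gamma1, gamma2 = 4.26, -1.02
k2 = 1.0e-5
k1 = k2 * 15.8**(gamma1 - gamma2)   # placed so the components cross at 15.8
print(e_intersect(k1, gamma1, k2, gamma2))   # ~15.8 keV
\end{verbatim}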
\subsubsection{Discussion of spectra and pulsed fractions of 1E\, 2259\,$+$\,586}
The ISGRI flux upper limits to the total emission from 1E\, 2259\,$+$\,586\ for energies above
20 keV \citep{hartog06} can be directly compared with the Chandra ACIS spectrum for the 2-10 keV range.
The black-body plus power-law model fit to this spectrum \citep{patel01} is shown in Fig. \ref{HE_SPECTRUM_1E}
(dark-orange solid line). The double-power-law spectral fit suggests that again a minimum
in luminosity is reached around 20 keV, where the extrapolation of the
Chandra model fit crosses the pulsed PCA spectrum. Given the hard pulsed component
suggested in these PCA data up to $\sim 24$ keV, we estimate that the source
will show up at higher energies in an IBIS ISGRI mosaic totalling $\gtrsim 4$ Ms in effective
on-axis exposure, which will be reached when already performed and/or scheduled observations
are added in follow-up work. To complete the high-energy picture, the COMPTEL
0.75-30 MeV upper limits to the total emission \citep{hartog06}
are also included in Fig. \ref{HE_SPECTRUM_1E}.
Comparing the PCA time-averaged pulsed fluxes with the Chandra total emission model in the
overlapping energy range below 10 keV, and assuming no time variability, indicates a rather high
pulsed fraction (pulsed/total emission) of $\gtrsim 43\%$, in agreement with earlier
estimates \citep{patel01}.
\begin{figure}[t]
\centerline{\includegraphics[height=8cm,width=8cm,angle=0]{f10.ps}}
\caption{\label{HE_SPECTRUM_1E}A $\nu F_{\nu}$ spectral representation of the total (2 keV - 30 MeV) and pulsed (2-30 keV)
high-energy emission from 1E\, 2259\,$+$\,586. The dashed aqua coloured line represents the double power-law fit to the PCA 2-30 keV pulsed
spectrum. This fit suggests the onset of a hard spectral tail in the pulsed spectrum near $\sim 15$ keV.}
\end{figure}
\section{1E\, 1048.1\,-\,5937}
1E\, 1048.1\,-\,5937\ was discovered with {\it Einstein\/} at an angular separation of $\sim 40\arcmin$ from $\eta$ Car
\citep{seward82}. During an observation on July 13, 1979 a sinusoidally shaped signal with a high pulsed
fraction of $68\pm 7 \%$ was detected at a period of $\sim 6.44$ s \citep{seward84,seward86}. The pulsations
at a period of 6.4407(9) s were confirmed in EXOSAT ME observations \citep{smale85,seward86}, and the positional
accuracy of 1E\, 1048.1\,-\,5937\ was further tightened to a radius of about $10\arcsec$.
Ginga observations combined with earlier measurements revealed a steady increase in spin period
at a mean rate of $\dot{P} = (4.64\pm 1.1) \times 10^{-4}$ s yr$^{-1}$ \citep{corbet90}, implying
a rotational energy loss much smaller than the lower limit on the X-ray luminosity.
X-ray observations in the early nineties by ROSAT and ASCA \citep{mereghetti95,corbet97} indicated that the
spin-down rate almost doubled after 1988, likely anticorrelated with the X-ray flux.
\begin{figure}[b]
\centerline{\includegraphics[height=12cm,width=7cm,angle=0,bb=190 195 405 610]{f11.ps}}
\caption{\label{pca_ee_stack}RXTE PCA pulse profiles of 1E\, 1048.1\,-\,5937\
for energies in the range 2.2-11.9 keV combining data collected between
July 27, 1996 and Feb. 24, 2004 (see Table \ref{obs_table}).
Two cycles are shown for clarity.}
\end{figure}
BeppoSAX LECS / MECS data from long exposures in May 1997 indicated the necessity of a two component
spectral model, a power-law plus black body, to describe properly the measured X-ray (0.5-10 keV) spectrum
\citep{oosterbroek98}. This was confirmed by \citet{paul00} using ASCA data. In the
meantime, detailed timing studies using RXTE monitoring observations performed in 1997-2000 \citep{kaspi01} showed
that significant deviations from simple spin-down exist, making phase-coherent timing impossible over time stretches
longer than a few months. In spite of these rotational irregularities, neither pulse profile changes nor
large pulsed flux variations were found. 1E\, 1048.1\,-\,5937\ exhibited three X-ray bursts, on 2001 Oct. 29, 2001 Nov. 14, and 2004 June 29,
all caught during RXTE monitoring observations \citep{gavriil02b,gavriil06}.
Precise X-ray imaging of 1E\, 1048.1\,-\,5937\ with Chandra \citep{wang02,israel02} provided a source position with sub-arcsecond
accuracy. Within the $0\farcs7$ error circle only one faint near-IR source was found, which clearly showed variability.
A detailed variability study in the infrared and optical was undertaken by
\citet{durant05}. They established the variable nature in the infrared and optical and found a possible anticorrelation
with the X-ray pulsed flux \citep[see][for an extensive study of the X-ray pulsed flux variability and spin-down rate variations]{gavriil04b}.
At radio frequencies (21 cm) an expanding hydrogen shell centered on 1E\, 1048.1\,-\,5937\ was detected \citep{gaensler05}, which
can be interpreted as a wind bubble blown by a 30-40 M$_\sun$ star, likely the massive progenitor of 1E\, 1048.1\,-\,5937.
Finally, the large effective area of XMM Newton, combined with accurate imaging, made it possible to perform in-depth
(phase-resolved) spectral analyses not polluted by effects caused by the presence of strong nearby X-ray sources.
XMM observed 1E\, 1048.1\,-\,5937\ on three occasions and results are presented by \citet{tiengo02,mereghetti04} and \citet{tiengo05}.
Comparing the three XMM observations revealed long-term flux and pulsed fraction variations in anti-correlation. The
featureless spectral shape, however, remained more or less the same, and could be described by a combination of a power-law
with photon index $\Gamma \sim 2.7-3.5$ and a blackbody with temperature $kT \sim 0.63$ keV. Phase-resolved spectroscopy
clearly indicated that the spectrum is softer at pulse minimum and harder at pulse maximum with respect to the phase averaged spectrum.
In the next section we present for the first time the hard X-ray/soft $\gamma$-ray timing and spectral characteristics
of 1E\, 1048.1\,-\,5937\ above $\sim 8$ keV using a) all available public RXTE PCA and HEXTE data (see Table \ref{obs_table}), b) INTEGRAL
IBIS ISGRI data (see Table \ref{obsint_table}) and c) CGRO COMPTEL 0.75-30 MeV data.
\subsection{1E\, 1048.1\,-\,5937\ timing characteristics}
In the timing analysis of the full set of PCA observations of 1E\, 1048.1\,-\,5937\ listed in Table \ref{obs_table} according to the guidelines
given in Sect. \ref{rxte_timing}, we produced {\em time-averaged} pulse-phase distributions for energies between $\sim 2$ and 35 keV.
In the folding/correlation process (see Sect. \ref{rxte_timing}) we used the ephemerides given by \citet{kaspi01}.
Significant pulsed emission has been detected up to $\sim 12$ keV. The PCA pulse profiles of 1E\, 1048.1\,-\,5937\ are shown in Fig. \ref{pca_ee_stack}
for the following three differential energy bands: 2.2-4.0, 4.0-7.9 and 7.9-11.9 keV. The detection of pulsed emission, with a non-uniformity
significance of $6.8\sigma$, between $\sim$ 7.9 and 11.9 keV is reported for the first time.
The single peaked pulse profiles, with a slightly steeper rise than fall, do not show morphology changes as a function
of energy. HEXTE data do not show pulsed emission at energies above $\sim$ 15 keV.
\begin{figure}[t]
\centerline{\includegraphics[height=8cm,width=8cm,angle=0]{f12.ps}}
\caption{\label{HE_SPECTRUM_1E1048}A $\nu F_{\nu}$ representation of the spectrum
of 1E\, 1048.1\,-\,5937. Above $\sim 12$ keV the source has so far been too weak to be detected. The IBIS ISGRI (purple) and CGRO
COMPTEL (red) $2\sigma$ upper-limits are for the total emission. Below 10 keV we also added spectral information
for the pulsed (dotted) and total (solid) emission from two XMM Newton observations performed at different
epochs \citep{tiengo05}, illustrating the variable nature both in normalization and shape of both components.}
\end{figure}
\subsection{1E\, 1048.1\,-\,5937\ spectral characteristics}
\subsubsection{RXTE PCA/HEXTE pulsed spectrum}
The derived pulsed excess counts have been converted to flux values assuming an absorbing column density of
N$_{\hbox{\scriptsize H}}=1.0 \times 10^{22}$ cm$^{-2}$ \citep[see][]{tiengo05}. It turned out that a single power-law model
for the PCA band did not give an acceptable fit. Therefore we used for this AXP a black body plus power-law input spectrum. The
optimized model ($\chi^2_r=0.55$ for 16 - 4 d.o.f.; $kT = 0.717(4)$ keV and $\Gamma=2.93(7)$) is shown in Fig. \ref{HE_SPECTRUM_1E1048}
together with the flux measurements, both aqua colored. It is interesting to note that, while the photon index is in the expected range \citep{tiengo05}, the (time-averaged) black body temperature of the pulsed component is slightly higher than the values obtained
from the other X-ray instruments.
This is very likely caused by the lack of sensitivity of the PCA for energies less than $\sim 2.5$ keV, thus poorly constraining the
black body model at energies below its maximum.
For HEXTE we could only derive upper limits for the pulsed emission, also shown in Fig. \ref{HE_SPECTRUM_1E1048} as blue
symbols.
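A schematic form of such a black body plus power-law photon model is sketched below (Python). The Planck-like $E^2/(\exp(E/kT)-1)$ shape follows the usual X-ray convention; the normalizations are placeholders, not the fitted values:
\begin{verbatim}
import numpy as np

def bb_plus_pl(e_kev, k_bb, kt_kev, k_pl, gamma):
    """Black body plus power law, photons/keV (arbitrary norm)."""
    bb = k_bb * e_kev**2 / np.expm1(e_kev / kt_kev)
    pl = k_pl * e_kev**(-gamma)
    return bb + pl

e = np.logspace(np.log10(2.5), np.log10(12.0), 50)   # PCA band used here
model = bb_plus_pl(e, k_bb=1.0, kt_kev=0.717, k_pl=1.0, gamma=2.93)
\end{verbatim}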
\subsubsection{INTEGRAL IBIS ISGRI and CGRO COMPTEL total spectrum}
IBIS ISGRI data (Core program -- GPS or public) from observations executed between revolution 30 and 217, satisfying
our off-source pointing constraint of $14\fdg5$ have been used to obtain spectral information on the total emission from
1E\, 1048.1\,-\,5937\ in the 20 -- 300 keV energy range. The effective on-axis exposure time is, however, only $\sim$ 250.8 ks, resulting
in rather high upper-limits given the non-detections in any of the chosen broad energy bands.
These $2\sigma$ upper-limits have been included in Fig.\ref{HE_SPECTRUM_1E1048} as purple symbols. Also, the analysis of full
mission COMPTEL data (4.8 Ms exposure in total) did not result in significant detections of 1E\, 1048.1\,-\,5937.
The $2\sigma$ COMPTEL upper-limits are shown as red symbols in Fig.\ref{HE_SPECTRUM_1E1048}. It is clear from the full
high-energy spectral picture presented in Fig. \ref{HE_SPECTRUM_1E1048} that current spectral information above $\sim 10$ keV
does not exclude the presence of a hard spectral component. Additional IBIS ISGRI data e.g. from a deep 2 Ms observation of the
Carina region, performed between INTEGRAL revolutions 192 and 203, with 1E\, 1048.1\,-\,5937\ always in the FCFOV, will be very useful to
constrain the spectral properties of 1E\, 1048.1\,-\,5937\ further in a future study.
\section{1E\, 1841-045}
1E\, 1841-045\ is the first AXP for which surprisingly non-thermal pulsed X-ray/soft $\gamma$-ray emission
was discovered \citep{kuiper04}; see that paper also for a summary of earlier
observational results on 1E\, 1841-045. The pulsed spectrum above 10 keV could be described,
fitting RXTE PCA and HEXTE data, by a power-law model with photon index
0.94 $\pm$ 0.16 up to $\sim$ 150 keV. The total spectrum of 1E\, 1841-045\ presented in
\citet{kuiper04} was derived from HEXTE data using source ON/OFF rocking-mode
observations. We realized later that the HEXTE flux values were contaminated
by a contribution from the Galactic diffuse X-ray emission (see discussion in section 3.2).
Better spectral information can be obtained with an imaging instrument like INTEGRAL IBIS ISGRI.
Preliminary total flux measurements with ISGRI were so far only reported by \citet{molkov04}.
For the present work significantly more ISGRI data are available. Therefore, we present in this
section a) a new spectrum of the total emission and b) a timing analysis of
1E\, 1841-045\ using ISGRI data.
Furthermore, c) we repeat the timing/spectral RXTE PCA analysis using additional data
compared to \citet{kuiper04} as well as improved event selection criteria
(see Sect. \ref{rxte_timing}). Finally, d) also for this source we analyze COMPTEL skymaps.
\subsection{1E\, 1841-045\ pulse profiles for RXTE PCA and INTEGRAL IBIS ISGRI}
\label{sect_pca_tm}
\begin{figure}[t]
\centerline{\includegraphics[height=11cm,width=6.5cm,angle=0,bb=165 165 420 640]{f13.ps}}
\caption{\label{ISGRI_PCA_1E1841}RXTE PCA pulse profiles of 1E\, 1841-045\ for the 2.1-10.3 and 10.3-20.1 keV energy ranges (panels a
and b, respectively), compared with the IBIS ISGRI 20-100 keV pulse profile (panel c).
The dotted lines at phases 0.7 and 1.05 serve as a guide to the eye for alignment comparison purposes.
The 20-100 keV IBIS ISGRI profile mimics the 10.3-20.1 keV RXTE PCA profile, suggesting a smooth prolongation
above 20 keV of the 10.3-20.1 keV profile shape.}
\end{figure}
In our RXTE reanalysis for 1E\, 1841-045, data from observation 80098 (spread over 28-3-2003 -- 17-2-2004; screened PCU-2 exposure
$\sim 69.1$ ks) were added to the set used in \citet{kuiper04} (now, total screened PCU-2 exposure: $\sim 340.2$ ks).
We used in the folding/correlation process (see Sect. \ref{rxte_timing}) the ephemerides given by \citet{gotthelf02} and derived in this work
(see Table \ref{eph_table}).
The results are pulse profiles with improved statistics, confirming the variation of pulse shape with energy as presented by \citet{kuiper04}. As an example, Fig. \ref{ISGRI_PCA_1E1841} (panels a and b) shows RXTE PCA profiles for the 2.1-10.3 and 10.3-20.1 keV energy bands.
IBIS ISGRI data from observations with source off-axis angles $\le 14\fdg5$ and taken during INTEGRAL revolutions 49 - 123 have been
processed to yield pulse phase distributions for differential energy bands between 20 and 300 keV. The total screened effective on-axis exposure was $\sim 808.8$ ks. We derived a very accurate (RMS residual is 0.015 in phase; see Table \ref{eph_table} for the details) phase coherent ephemeris using RXTE PCA monitoring observations performed during March - December 2003.
This has been applied in the folding process of the selected barycentered ISGRI events.
For the 20-100 keV energy range we obtained a $Z_2^2$ significance of $3.6\sigma$. This ISGRI pulse profile is also shown in Fig. \ref{ISGRI_PCA_1E1841}
(panel c). Its shape mimics that of the PCA 10.3-20.1 keV profile, suggesting a stable pulse shape in the hard X-ray window. The statistics
are still low, but additional ISGRI data from already performed and future scheduled observations will allow more detailed studies
of the pulse shape in this energy range.
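The folding step itself is straightforward once a phase-coherent ephemeris is available; a minimal sketch (Python), where the frequency, its derivatives and the reference epoch would be taken from Table \ref{eph_table}:
\begin{verbatim}
import numpy as np

def fold_events(t_mjd, t0_mjd, nu, nudot=0.0, nuddot=0.0):
    """Phase of barycentered event times (MJD, TDB) for the ephemeris
    phi(t) = nu*dt + nudot*dt^2/2 + nuddot*dt^3/6, dt in seconds."""
    dt = (np.asarray(t_mjd) - t0_mjd) * 86400.0
    phase = nu * dt + 0.5 * nudot * dt**2 + nuddot * dt**3 / 6.0
    return phase % 1.0

def pulse_profile(phases, nbins=20):
    """Bin folded phases into a pulse profile."""
    counts, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    return counts
\end{verbatim}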
\subsection{1E\, 1841-045\ spectral characteristics}
\begin{figure}[t]
\centerline{\includegraphics[height=8cm,width=8cm,angle=0]{f14.ps}}
\caption{\label{HE_SPECTRUM_1E1841}A $\nu F_{\nu}$ spectral representation of the high-energy emission
from 1E\, 1841-045\ and SNR Kes 73. The aqua, blue and magenta data points/curves represent the (time averaged) pulsed emission component of
the spectrum (2-300 keV) based on RXTE PCA/HEXTE and INTEGRAL IBIS ISGRI observations. The black dashed line
shows the best power-law model fit to the PCA, HEXTE and ISGRI pulsed flux values for energies above $\sim 15$ keV.
The purple data points/curves reflect the IBIS ISGRI total flux measurements and fits (power-law, dashed; curved power-law,
dashed-dotted and extrapolated to 750 keV). The red $2\sigma$ upper limits from CGRO COMPTEL require a spectral break, somewhere
between 140 and 750 keV. For completeness, below 10 keV also the total flux from 1E\, 1841-045\ plus SNR Kes 73
(yellow symbols; XMM Newton) and the total flux from 1E\, 1841-045\ (dark orange solid line; Chandra ACIS) are shown \citep[see][for more details]{kuiper04}.}
\end{figure}
\subsubsection{RXTE and INTEGRAL pulsed emission}
Starting with the new RXTE PCA phase distributions, introduced above, the updated 1E\, 1841-045\ pulsed spectrum, now
with significant flux values up to $\sim 28$ keV, is shown in Fig. \ref{HE_SPECTRUM_1E1841} using aqua colored symbols.
Also given are the earlier-reported pulsed-flux values from HEXTE \citep[dark blue data points; see][]{kuiper04}, which clearly revealed
the hard pulsed component up to $\sim$ 150 keV.
Now we can also derive information on the pulsed spectrum from IBIS ISGRI. Phase distributions have been generated, followed
by extraction of pulsed counts, for three logarithmically spaced energy bands (20-50, 50-120 and 120-300 keV), yielding
only for the 50-120 keV band a significant pulsed-flux measurement. This ISGRI pulsed-flux value and two upper limits, shown
in Fig. \ref{HE_SPECTRUM_1E1841} as magenta symbols, are fully consistent with the hard spectrum measured by HEXTE.
We fitted the new PCA pulsed spectrum for energies above $\sim 15$ keV together with the HEXTE and INTEGRAL pulsed flux data points
with a simple power-law model, yielding as best fit a photon index $\Gamma$ of $0.72 \pm 0.15$, slightly harder than derived by
\citet{kuiper04}. This best fit model is indicated as a dashed black line in Fig. \ref{HE_SPECTRUM_1E1841}.
\subsubsection{INTEGRAL total emission}
In this work we derived the total emission spectrum of 1E\, 1841-045\ in the 20-300 keV
energy range from IBIS ISGRI mosaic maps generated for 7 differential energy
bands over the 20 - 300 keV band, thus significantly improving upon the
spectral information given by \citet{molkov04}.
We used a combination of public and Core program observations, all with source
off-pointing angles less than $14\fdg5$, and for which the
details\footnote{Data from Revs. 131,133-134 were ignored because these have
been taken in a staring mode configuration giving rise to substantial systematic
effects in the image deconvolution procedures.} are listed in Table
\ref{obsint_table}. The total screened effective on-axis exposure on 1E\, 1841-045\ in
this combination was $\sim 1,111$ ks. Significant emission has been detected up
to $\sim 140$ keV (95-140 keV; $5.7\pm 0.9$ mCrab).
The derived flux values are shown in Fig. \ref{HE_SPECTRUM_1E1841} as purple
colored symbols. Over this 20-300 keV band we can {\em not} discriminate
between a power-law or power-law model with energy dependent index to be the best
model describing the measurements.
However, including in Fig. \ref{HE_SPECTRUM_1E1841} the CGRO COMPTEL flux
upper limits (red symbols) derived in this work, we notice that the
best fitting power-law model with photon index $\Gamma= 1.32 \pm 0.11$ does not extend
into the MeV domain. The spectrum must break somewhere between 140 and 750 keV.
The curved power-law model is consistent with the COMPTEL $2\sigma$ upper limits.
Both models are shown in Fig. \ref{HE_SPECTRUM_1E1841} as purple dashed (power-law) and
purple dashed-dotted (curved power-law) lines with the latter model extrapolated up to 750 keV.
\subsubsection{Discussion of spectra and pulsed fractions of 1E\, 1841-045}
The new and better total spectrum of 1E\, 1841-045, now derived using ISGRI data, is indeed
lower ($\sim 40\%$ between 20 and 100 keV) than that published earlier using HEXTE data
\citep{kuiper04}, and is also slightly harder. It connects better to the total spectra
measured at different epochs by XMM Newton and Chandra ACIS below 10 keV, shown in
Fig. \ref{HE_SPECTRUM_1E1841} and adopted from \citet[][and citations therein]{kuiper04}.
For the pulsed spectrum, the power-law fit to the new PCA spectrum above $\sim 15$ keV combined with
the new ISGRI measurements and the earlier published HEXTE flux values, confirms the drastic upturn/hardening
of the pulsed spectrum.
Also, the extrapolation of this very hard spectrum has to remain under our COMPTEL
upper limits, requiring an even more drastic bend/break compared to what is
required for the total spectrum.
The compilation of Chandra, XMM-Newton, RXTE and INTEGRAL spectra in Fig. \ref{HE_SPECTRUM_1E1841}, taken at
very different epochs over many years, suggests that the hard X-ray emission of 1E\, 1841-045\ is stable, i.e. this
magnetar is most of the time in the same state of activity for energies above $\sim$ 10 keV.
The pulsed fraction is confirmed to be $\sim$ 25\% at 20 keV and $\sim$ 100\% for energies beyond $\sim 100$ keV.
\section{Summary}
Exploiting the availability of archival data from RXTE monitoring observations
and new, deep INTEGRAL exposures of AXPs, we were able to show that AXPs
exhibit exceptionally hard spectra for energies above 10 keV. Of the sample of five AXPs
studied in this work, three (the brightest at energies below 10 keV: 1E\, 1841-045, 1RXS\, J1708\,-\,4009, 4U\, 0142\,$+$\,61)
are shown to emit up to energies of $\sim$ 150 keV.
Of the two weaker sources, for one (1E\, 2259\,$+$\,586) an upturn of pulsed emission is suggested
up to $\sim$ 25 keV, and the other (1E\, 1048.1\,-\,5937) is too weak to detect a possible similar upturn
in the hard-X-ray range with presently available exposures.
We regard this as sufficient evidence to conclude that a persistent non-thermal component
at energies above 10 keV is a common property of AXPs.
\begin{table*}[t]
\caption{Pulsar characteristics and high-energy spectral properties (pulsed/total) of the AXPs studied in this work.\label{summary_table}}
{\scriptsize
\begin{center}
\begin{tabular}{|l|ccccc|cccc|cc|}
\hline
AXP & $P$ & $\dot{P}$ & $B_{s}$ & $L_{sd}^{32}$ & $d$ & $L_{1-10}^{32,p}$ & $\Gamma_l^p$ & $L_{10-100}^{32,p}$ & $\Gamma_h^p$ & $L_{10-100}^{32,T}$ & $\Gamma_h^T$ \\
& s & $10^{-11}$ s/s & $10^{14}$ G & erg/s & kpc & erg/s & & erg/s & & erg/s & \\
\hline
1E\, 1841-045 & 11.78 & 4.44 & 7.32 & 10.73 & 6.7 & 391 & $1.98(2)$ & 1312 & $0.72(15)$ & 2975 & $1.32(11)$ \\
1RXS\, J1708\,-\,4009 & 11.00 & 1.92 & 4.64 & 5.68 & 5 & 640 & $2.60(1)$ & 719 & $1.01(12)$ & 869 & $1.44(45)$ \\
4U\, 0142\,$+$\,61 & 8.69 & 0.20 & 1.34 & 1.21 & 3 & 347 & $4.09(2)$ & 686$^{\dagger}$ & $-0.80(9)^{\dagger}$ & 638 & $1.05(11)$ \\
1E\, 2259\,$+$\,586 & 6.98 & 0.048 & 0.59 & 0.56 & 3 & 485 & $4.26(1)$ & 188$^{\dagger}$ & $-1.02(19)^{\dagger}$ & $\ldots$ & $\ldots$ \\
1E\, 1048.1\,-\,5937 & 6.45 & 2.31 & 3.90 & 33.90 & 2.7 & 76$^{\ast}$ & $^{\ast}$ & $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ \\
\hline
\multicolumn{12}{l}{$^{\dagger}$ Based on double power-law fit to PCA data ($\sim$ 2-30 keV); 30-100 keV luminosity extrapolated from this model} \\
\multicolumn{12}{l}{$^{\ast}$ The 1-10 keV luminosity has been derived from a black body plus power-law model fit} \\
\multicolumn{12}{l}{$L^{32}$ means that the luminosity is expressed in units $10^{32}$ erg/s} \\
\end{tabular}
\end{center}}
\end{table*}
In Table \ref{summary_table} we summarize for the five AXPs the spectral parameters presented in this work,
together with estimates for their distances and pulsar characteristics (period, period derivative, deduced
surface magnetic field strength at the pole and spin-down power).
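The deduced quantities in Table \ref{summary_table} follow from the standard vacuum-dipole relations $B_s \simeq 3.2\times 10^{19}\sqrt{P\dot{P}}$ G and $L_{sd} = 4\pi^2 I \dot{P}/P^3$, with an assumed neutron-star moment of inertia $I = 10^{45}$ g cm$^2$; a minimal sketch (Python):
\begin{verbatim}
import numpy as np

def magnetar_params(p_s, pdot, moi=1.0e45):
    """Vacuum-dipole surface field (G) and spin-down luminosity
    (erg/s) from the period (s) and period derivative (s/s)."""
    b_s = 3.2e19 * np.sqrt(p_s * pdot)
    l_sd = 4.0 * np.pi**2 * moi * pdot / p_s**3
    return b_s, l_sd

# 1E 1841-045: P = 11.78 s, Pdot = 4.44e-11 s/s
b, l = magnetar_params(11.78, 4.44e-11)
print(f"B_s ~ {b:.2e} G, L_sd ~ {l:.2e} erg/s")  # ~7.3e14 G, ~1.1e33 erg/s
\end{verbatim}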
Most interesting and new are the results on the total and pulsed spectra, and luminosities above 10 keV.
As noted above, the exceptionally hard pulsed spectra seem to extend up to at least 150 keV; therefore we have
given, for all four AXPs for which the spectral upturn around 10 keV has been measured, the luminosity of the
pulsed component integrating the best-fit power-law model over the full decade in energy from 10 to 100 keV.
For the luminosity of the pulsed component between 1 and 10 keV, we list the power in the power-law component
fitted to the pulsed spectra before the break, except for 1E\, 1048.1\,-\,5937, where the power has been derived from a
black body plus power-law model.
Thus, for four AXPs we ignore the presence of a black-body component, which is known to be present at lower energies as well.
However, for 1RXS\, J1708\,-\,4009, 4U\, 0142\,$+$\,61, 1E\, 2259\,$+$\,586\ and 1E\, 1841-045\ we could fit the pulsed PCA spectra down to about 2 keV well with
a single power-law model.
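For reference, the 10--100 keV luminosities follow from integrating $E\,N(E)$ over the band for a power-law photon spectrum $N(E) = K E^{-\Gamma}$; a minimal sketch (Python), where the normalization $K$ is a placeholder rather than a fitted value:
\begin{verbatim}
import numpy as np

KEV_TO_ERG = 1.602e-9
KPC_TO_CM = 3.086e21

def pl_band_luminosity(k_ph, gamma, d_kpc, e1=10.0, e2=100.0):
    """L = 4 pi d^2 * K * (e2^(2-Gamma) - e1^(2-Gamma)) / (2-Gamma),
    with K in photons/cm^2/s/keV at 1 keV and Gamma != 2."""
    eflux = k_ph * (e2**(2.0 - gamma) - e1**(2.0 - gamma)) / (2.0 - gamma)
    return 4.0 * np.pi * (d_kpc * KPC_TO_CM)**2 * eflux * KEV_TO_ERG

# e.g. Gamma = 0.72 (pulsed fit for 1E 1841-045) at d = 6.7 kpc:
print(pl_band_luminosity(k_ph=1.0e-6, gamma=0.72, d_kpc=6.7))
\end{verbatim}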
The total spectra above 20 keV appear also to be hard, but somewhat softer than the pulsed spectra.
For 1E\, 1841-045\ and 4U\, 0142\,$+$\,61\ the total emission becomes consistent with 100\% pulsed around $\sim 100$ keV, starting from a pulsed
fraction at 10 keV of $\sim$ 25\% and $\sim$ 10\% for 1E\, 1841-045\ and 4U\, 0142\,$+$\,61, respectively.
The exception is 1RXS\, J1708\,-\,4009, for which the hard X-ray emission above 10 keV is consistent with being 100\% pulsed.
Note in this context, however, the strong intensity variability reported for this AXP for energies below 10 keV, which
might be coupled to variations in pulsed fraction.
Table \ref{summary_table} immediately shows that the luminosities of the hard X-ray components (pulsed and total)
exceed the available total spin-down powers by a few orders of magnitude, a conclusion drawn earlier for the total luminosities (black body
plus power-law components) in the 1 -- 10 keV band.
Our compilations of total and pulsed flux values for energies above 10 keV for the three
strongest AXPs, using data from different instruments and collected at different epochs,
suggest that the high-energy component is stable, contrary to reports for the emissions
below 10 keV. However, additional observations are required (and will become available with INTEGRAL)
to constrain this further.
The time-averaged light curves for the different AXPs suggest that the pulse shapes for energies above 10 keV exhibit
less structure, and vary less with energy than seen below 10 keV. However, the statistics are still insufficient to draw
stringent conclusions. Phase-resolved spectroscopy on time-averaged profiles will be performed when more INTEGRAL data
can be added.
\section{Discussion}
The discussion of AXPs in the context of magnetically powered rotating neutron stars
was until recently focussed on understanding the observational characteristics of the soft-spectrum
component for energies below 10 keV. Indeed, evidence for hard X-ray or soft gamma-ray emission from
AXPs was lacking, in contrast to the very luminous outbursts at soft gamma-ray energies characterizing SGRs.
Our discovery of pulsed emission from AXPs up to at least $\sim$ 150 keV adds a completely new
non-thermal component requiring a steady mechanism for accelerating particles in magnetospheres of magnetars.
This new quiescent emission component is also far too luminous to be powered by rotational energy loss, as is evident
from Table \ref{summary_table}. Recently, INTEGRAL also found for the first time quiescent hard X-ray emission from an SGR
\citep[SGR 1806-20;][]{molkov05,mereghetti05}. As for AXPs, its spectrum above 10 keV could be fitted with a hard
power-law model, photon index 1.6 $\pm$ 0.1, an index comparable to, but somewhat softer than that measured for the
total AXP spectra in that energy window, but significantly softer than the hard spectra reported in this work
for the quiescent pulsed spectra of AXPs. In addition, the quiescent SGR spectrum does not exhibit a black body component
at energies below 10 keV. This underlines again that AXPs and SGRs are (very) different manifestations of the magnetar scenario.
In Table \ref{summary_table}, we listed the high-energy characteristics for five AXPs studied in this work, out of a total of
six persistent AXPs known to date. Although this sample is still very small, we verified whether there are already any apparent
correlations between the measured parameters. There is a hint for a correlation between the magnetic field strength $B_s$
and the luminosity $L_{10-100}^p$ of the pulsed emission of the hard tail (10-100 keV) for four AXPs, as well as the total emission
of the hard tail (now for three AXPs). However, 1E\, 1048.1\,-\,5937\ has a $B_s$ three times higher than 4U\, 0142\,$+$\,61, and is located at a similar
distance, but no hard X-ray emission has been detected, and its luminosity at $\sim$ 10 keV is about three times lower than that
of 4U\, 0142\,$+$\,61. It is interesting to note that the spin down luminosity of 1E\, 1048.1\,-\,5937\ is the highest of this sample, mainly due to its large $\dot P$,
and reaches almost 50\% of the luminosity of the pulsed emission between 1 and 10 keV, a factor 10 - 100 higher than for the other four AXPs.
\begin{figure}[t]
\centerline{\includegraphics[height=8cm,width=8cm,angle=0]{f15.ps}}
\caption{\label{HE_SPECTRUM_AXP_HE} A $\nu F_{\nu}$ spectral representation of the pulsed emission for the 4 AXPs showing
hard spectral tails. For comparison purposes, also the pulsed high-energy ($\sim$ 1 keV - 10 GeV) spectra of the young Crab (PSR B0531+21)
and PSR B1509-58 pulsars, and the middle-aged Vela pulsar (PSR B0833-45) are shown. Furthermore, the $3\sigma$ GLAST sensitivity
(green dashed-dotted line) assuming an $E^{-2}$ source for an all-sky survey duration of one year ($>100$ MeV) is shown.}
\end{figure}
Surprisingly, the high-energy AXP spectra are very similar to those of ``middle-aged'' radio pulsars in the $\sim$ 10--300 keV
energy band. This can be seen in Fig. \ref{HE_SPECTRUM_AXP_HE} in which the AXP spectra are shown together with the spectra
of two young radio pulsars, PSR B0531+21 (Crab) and PSR B1509-58, and the middle-aged Vela pulsar (PSR B0833-45). These are
the only radio pulsars which have been detected from soft X-rays up to the gamma-ray domain. The other middle-aged high-energy
gamma-ray pulsars detected by CGRO EGRET \citep[see e.g. the review by][]{thompson97} are detected at high-energy gamma-rays
($>$100 MeV), and soft X-rays, but not in the intermediate hard X-ray range, in which they are weaker than the Vela pulsar.
The overall high-energy spectral shape of these middle-aged pulsars is expected to be similar to that of Vela. Like the AXPs,
the Vela pulsar has a (pulsed) thermal black-body component (only visible below 2 keV, and not shown in Fig.\ref{HE_SPECTRUM_AXP_HE}),
and an extrapolation of its power-law shape spectrum above 10 keV to the IR, optical and radio domains is approximately in agreement
with the measured flux values at these longer wavelengths. This is rather similar to the case of e.g. AXP 4U\, 0142\,$+$\,61,
with the difference that none of the AXPs have so far been seen in the radio, and the X-ray and IR emissions
exhibit a time variable behaviour, not seen for Vela. This similarity of the multiwavelength spectra could suggest
some similarity in production processes in the two different scenarios of rotation-powered pulsars and magnetars
starting from very different sources of power.
A first attempt, before our discovery of hard X-ray emission from AXPs, to model possible production of high-energy (X-ray and gamma-ray)
emission in the magnetospheres of AXPs, applying a scenario proposed for high-energy production in the magnetospheres of radio pulsars, was made by
\citet{cheng01}. They modeled the production of high-energy (above 100 MeV) gamma-radiation in outer magnetospheric gaps of AXPs.
They argued that due to the strong field of a magnetar, the gamma-ray emission rooted at the polar caps will be quenched.
However, far away from the pulsar surface, i.e. in outer vacuum gaps, gamma radiation could be emitted because the local field will
drop below the critical quantum limit. They predict that the NASA Gamma-ray Large Area Space Telescope GLAST, due for launch in 2007,
might be able to detect this very-high-energy emission from AXPs (see Fig. \ref{HE_SPECTRUM_AXP_HE} for the $3\sigma$ GLAST sensitivity
assuming an $E^{-2}$ source spectrum for an all-sky survey duration of one year). However, their model calculations do not reproduce
the hard spectra and high X-ray luminosities now discovered above 10 keV.
The publication of the discovery of a hard spectral tail of 1E\, 1841-045\ by \citet{kuiper04}
stimulated \citet{thompson05} to reconsider high-energy emission from magnetars due to an
ultra strong magnetic field. A gradual release of energy in the stellar magnetosphere is expected
if it is twisted and a strong electric current is induced on the closed field lines. They
considered two mechanisms of gamma-ray emission:
(1) A thin surface layer of the star is heated by the downward beam of current carrying
charges, which excite Langmuir turbulence in the layer. Thus, a temperature kT of $\sim$
100 keV can be reached, emitting bremsstrahlung photons up to this characteristic temperature.
Interestingly, three of the AXPs, 1RXS\, J1708\,-\,4009, 4U\, 0142\,$+$\,61\ and 1E\, 1841-045, were detected up to these energies of $\sim$ 150 keV,
the shapes of their total and/or pulsed spectra being consistent with this mechanism with peak
luminosities around the predicted energy. However, the $100\%$ pulsed fraction around 20 keV for 1RXS\, J1708\,-\,4009\ is difficult to
reconcile with the surface emission, given the effects of general relativistic light bending.
Future observations have to reveal whether the cutoff in the spectra is also consistent with this interpretation.
(2) Soft $\gamma$-rays are produced at a distance of $\sim$ 100 km
from the star surface in the magnetosphere, where the electron cyclotron energy
is in the keV range. A large electric field develops in this region in response to
the outward drag force felt by the current-carrying electrons from the flux of keV
photons leaving the star. Thompson \& Beloborodov show that a seed photon
injected in this region undergoes a runaway acceleration and upscatters keV
photons above the threshold for pair creation. The resulting synchrotron spectrum can
reach a peak at $\sim$ 1 MeV. Our hard-X-ray spectra together with
the upper limits from COMPTEL in the MeV domain, allow spectra with
maximum luminosities close to the MeV range.
But we should note that the flux values reported around $\sim$ 100 keV are already
at about the same level as the $2\sigma$ upper limits for the 0.75--3 MeV interval,
suggesting a different shape, thus challenging this interpretation.
An alternative quantum-electrodynamics model for the bursts and the quiescent
non-thermal emission from AXPs as well as SGRs was proposed by \citet{heyl05a,heyl05b}.
Their model is based on fast-mode breakdown, in which wave energy is dissipated into
electron-positron pairs when the scale of these discontinuities becomes comparable to an
electron Compton wavelength. They showed that under appropriate conditions, an electron-positron
fireball would result, producing primarily the X-ray and soft-gamma-ray bursts.
They also investigated \citep{heyl05b} the properties of non-thermal emission in the absence
of a fireball. This quiescent, non-thermal emission associated with fast-mode breakdown
may account for the observed non-thermal emission presented in this work for AXPs, as well as for the quiescent
emission reported for SGR 1806-20 \citep{molkov05,mereghetti05}.
Indeed, they succeeded in fitting ISGRI AXP spectra as well as unabsorbed
optical data for 4U\, 0142\,$+$\,61\ from \citet{hulleman00}, and the SGR 1806-20
non-thermal spectrum above 10 keV. Interestingly, they predict that the
emission should extend beyond the observed INTEGRAL spectra without a break below
1 MeV. However, the combination of ISGRI spectra and COMPTEL upper limits presented
in this work seems to contradict this claim. They further state that, if the magnetars
have significant optical excesses, such as 4U\, 0142\,$+$\,61, then the quiescent emission
from most of the AXPs discussed in this work and of SGR 1806-20 should be detectable
with GLAST (see again Fig. \ref{HE_SPECTRUM_AXP_HE} for the GLAST sensitivity).
Further INTEGRAL observations, and ultimately GLAST observations can support or reject these claims.
Over the last few years, two radio pulsars with periods and period derivatives
similar to those found for AXPs, and thus also with high magnetic-field strengths, have been
found: PSR J1847-0130, which has by far the highest inferred surface dipolar magnetic
field (B = $9.4 \times 10^{13}$ G) among all known radio pulsars \citep{mclaughlin03}, and
PSR J1718-3718 \citep{hobbs04,kaspimcl05}.
The X-ray luminosities (or upper-limits to these) in the 2-10 keV range derived for these
two radio pulsars are much lower than those of the standard AXP group. Above 10 keV, detection of
hard X-rays could connect these two different types of neutron stars.
PSR J1847-0130 and PSR J1718-3718 both happened to be located near the centers of our
deep IBIS ISGRI exposures on 1E\, 1841-045\ and 1RXS\, J1708\,-\,4009, respectively. Therefore, we searched above 20 keV
for hard X-ray signatures from these pulsars, but nothing was detected in any of the energy bands
we have investigated. The $2\sigma$ upper limit on the flux in the 20-30 keV band for PSR J1847-0130 is
0.4 mCrab\footnote{1E\, 1841-045\ is detected in the same image with a flux of $\sim 1.5\pm 0.2$ mCrab}
and that for PSR J1718-3718 is also 0.4 mCrab, but now in the 20-35 keV band.
Therefore, why solitary rotating neutron stars with AXP-like timing parameters in some cases
behave as ``dull'' radio pulsars and in other cases as ``enigmatic'' magnetars is still unclear.
Different suggestions have been made \citep[see e.g.][]{mclaughlin03}, one of the possibilities being
that high-field pulsars and AXPs have similar dipole magnetic fields, but
AXPs also have quadrupole (or higher) components.
\citet{kaspimcl05} suggest, among other possibilities, that
the high-field radio pulsars may one day emit transient magnetar-like emission,
and conversely that the transient AXPs might be more likely to exhibit
radio pulsations. More theoretical work is required, but certainly also more
detailed observational results at all wavelengths to verify the model predictions.
Ongoing observations of AXPs with the wide-field-of-view INTEGRAL IBIS imager will allow us to
contribute with deeper studies in the hard X-ray/soft $\gamma$-ray range.
\acknowledgments
This research has made use of data obtained from the High Energy Astrophysics
Science Archive Research Center (HEASARC), provided by NASA's Goddard Space Flight Center,
and of data obtained through the INTEGRAL Science Data Centre (ISDC), Versoix, Switzerland.
INTEGRAL is an ESA project with instruments and science data centre funded by ESA member
states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain),
Czech Republic and Poland, and with the participation of Russia and the USA.
We have extensively used NASA's Astrophysics Data System (ADS).
We thank Anton Klumper and Jaap Schuurmans for the maintenance of the OSA software at SRON and
the INTEGRAL data import from the ISDC site. Nanda Rea is acknowledged for useful discussions
on AXPs in general. Finally, we appreciate the willingness of Jacco Vink and Maurizio Falanga to
make their INTEGRAL open time data of the Cassiopeia region (Revs. 261-269) directly available for us.
\makeatletter
\def\section{\@startsection{section}{1}\z@{.9\linespacing\@plus\linespacing}%
{.7\linespacing} {\fontsize{13}{14}\selectfont\bfseries\centering}}
\def\paragraph{\@startsection{paragraph}{4}%
\z@{0.3em}{-.5em}%
{$\bullet$ \ \normalfont\itshape}}
\renewcommand\theequation{\thesection.\arabic{equation}}
\@addtoreset{equation}{section}
\makeatother
\def\mathbbm{1}{\mathbbm{1}}
\newcommand\mathbbm{1}{\mathbbm{1}}
\newcommand\dl{\delta}
\newcommand\DI{{\mathsf D}} \def\sfE{{\mathsf E}} \def\sfF{{\mathsf F}^{(1)}_V}
\newcommand\DII{{\mathsf D}} \def\sfE{{\mathsf E}} \def\sfF{{\mathsf F}^{(2)}_{\beta,V}}
\newcommand\EI{\sfE^{(1)}_V}
\newcommand\EII{\sfE^{(2)}_{\beta,V}}
\newcommand\Qb{\sfQ_\beta}
\newcommand\Db{{\mathsf D}} \def\sfE{{\mathsf E}} \def\sfF{{\mathsf F}_\beta}
\newcommand\wDb{{\mathsf D}} \def\sfE{{\mathsf E}} \def\sfF{{\mathsf F}_{\beta,V}}
\newcommand\wB{\sfB_{\alpha,V}}
\newcommand\Gb{{\mathsf G}} \def\mathsf{H}{{\mathsf H}} \def\sfI{{\mathsf I}_{\alpha,\beta}}
\newcommand\Pb{{\mathsf P}} \def\sfQ{{\mathsf Q}} \def\sfR{{\mathsf R}_{\alpha,\beta}}
\newcommand{{\it i.e.}\,}{\emph{i.e.}}
\newcommand{{\it e.g.}\,}{\emph{e.g.}}
\newcommand{{\it cf.}\,}{\emph{cf.}}
\newcommand{\mathop{\mathrm{ess\;\!sup}}}{\mathop{\mathrm{ess\;\!sup}}}
\newcommand{\mathop{\mathrm{ess\;\!inf}}}{\mathop{\mathrm{ess\;\!inf}}}
\newcommand{\mathrm{const}}{\mathrm{const}}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\mathrm{d}}{\mathrm{d}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathrm{e}}{\mathrm{e}}
\renewcommand{\alpha}{\alpha}
\newcommand{\lambda}{\lambda}
\newcommand{\upharpoonright_{\Sigma}}{\upharpoonright_{\Sigma}}
\newcommand{\upharpoonright_{\Gamma}}{\upharpoonright_{\Gamma}}
\newcommand{\mathsf{H}_{\alpha ,\beta}}{\mathsf{H}_{\alpha ,\beta}}
\newcommand{\begin{equation} \begin{split}}{\begin{equation} \begin{split}}
\newcommand{\end{split} \end{equation}}{\end{split} \end{equation}}
\newcommand{\\ & =}{\\ & =}
\newcommand{\comm}[1]{}
\newcommand\lbra{\left}
\newcommand\rbra{\right}
\def\prime{{\prime}}
\def{\prime\prime}{{\prime\prime}}
\def\ddt#1{{\buildrel {\hbox{\LARGE ..}} \over {#1}}}
\def{\mathsf G}} \def\mathsf{H}{{\mathsf H}} \def\sfI{{\mathsf I}{\mathsf{G}}
\def\mathsf{\Sigma}{\mathsf{\Sigma}}
\def\mathsf{H}{\mathsf{H}}\def{\mathsf S}} \def\sfT{{\mathsf T}} \def\mathsf{U}{{\mathsf U}{\mathsf{S}}
\def{\mathsf S}} \def\sfT{{\mathsf T}} \def\mathsf{U}{{\mathsf U}{\mathsf{S}}
\def\mathsf{C}{\mathsf{C}}
\def\mathbbm{1}{\mathbbm{1}}
\def\Gamma{\Gamma}
\def\cC{\cC}
\def\gamma{\gamma}
\def\sigma{\sigma}
\def\mathsf{h}{\mathsf{h}}
\def\mathsf{k}{\mathsf{k}}
\def\partial{\partial}
\def\mathsf{w}{\mathsf{w}}
\def\mathsf{W}{\mathsf{W}}
\def\omega{\omega}
\newcommand{\mathsf{d}}{\mathrm{d}}
\renewcommand{\iff}{\textit{if, and only if,}\,}
\newcommand{\mathrm{span}\,}{\mathrm{span}\,}
\newcommand\myemph[1]{\textbf{\emph{#1}}}
\def{\rm Re}\,{{\rm Re}\,}
\def{\rm Im}\,{{\rm Im}\,}
\def\rightarrow{\rightarrow}
\def{\textstyle\sum}{{\textstyle\sum}}
\def{\Upsilon}{{\Upsilon}}
\def{\rm diag}\,{{\rm diag}\,}
\newcommand{\mathsf{e}}{\mathsf{e}}
\newcommand{\mathsf{z}}{\mathsf{z}}
\newcommand{{\rm Ext}}{{\rm Ext}}
\newcommand{{\mathsf J}} \def\sfK{{\mathsf K}} \def\sfL{{\mathsf L}}{\mathsf{J}}
\newcommand{{\Theta}}{{\Theta}}
\renewcommand{\gg}{{\gamma}}
\def\mathfrak{m}{\mathfrak{m}}
\def\theta{\theta}
\def\lambda{\lambda}
\newcommand\sap{\sigma_{\rm ap}}
\def\sigma{\sigma}
\def\sigma_{\rm d}{\sigma_{\rm d}}
\def\sigma_{\rm ess}{\sigma_{\rm ess}}
\def{\mathsf{i}}{{\mathsf{i}}}
\def\prime{\prime}
\def{\prime\prime}{{\prime\prime}}
\def{\prime\prime\prime}{{\prime\prime\prime}}
\def\kappa{\kappa}
\def\mathsf{H}{\mathsf{H}}
\def\mathsf{z}{\mathsf{z}}
\def{\mathsf P}} \def\sfQ{{\mathsf Q}} \def\sfR{{\mathsf R}{\mathsf{P}}
\def\upharpoonright{\upharpoonright}
\def{\mathfrak f}{{\mathfrak f}}
\def{\mathfrak j}{{\mathfrak j}}
\def\mathsf{Z}{\mathsf{Z}}
\def\mathsf{U}{\mathsf{U}}
\def{\mathsf V}} \def\sfW{{\mathsf W}} \def\sfX{{\mathsf X}{\mathsf{V}}
\newcommand*{\medcap}{\mathbin{\scalebox{1.5}{\ensuremath{\cap}}}}%
\newcommand*{\medcup}{\mathbin{\scalebox{1.5}{\ensuremath{\cup}}}}%
\newcommand*{\medoplus}{\mathbin{\scalebox{1.5}{\ensuremath{\oplus}}}}%
\def3cm{3cm}
\def0.4{0.4}
\newcounter{counter_a}
\newenvironment{myenum}{\begin{list}{{\rm(\roman{counter_a})}}%
{\usecounter{counter_a}
\setlength{\itemsep}{1.ex}\setlength{\topsep}{0.8ex}
\setlength{\leftmargin}{5ex}\setlength{\labelwidth}{5ex}}}{\end{list}}
\newcommand\ds{\displaystyle}
\usepackage[latin1]{inputenc}
\usepackage[T1]{fontenc}
\numberwithin{figure}{section}
\numberwithin{equation}{section}
\theoremstyle{plain
\newtheorem*{thm*}{Theorem}
\newtheorem{thm}{Theorem}[section]
\newtheorem{hyp}{Hypothesis}[section]
\newtheorem{tm}{Theorem
\newtheorem{lem}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{example}[thm]{Example}
\newtheorem*{problem}{Open problem}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{conj}[thm]{Conjecture}
\newtheorem{notn}[thm]{Notation}
\newtheorem{calc}[thm]{Calculation}
\newtheorem*{thmA}{Theorem A}
\newtheorem*{thmB}{Theorem B}
\newtheorem{assumption}[thm]{Assumption}
\newtheorem{dfn}[thm]{Definition}
\theoremstyle{remark}
\newtheorem{remark}[thm]{Remark}
\theoremstyle{plain}
\newtheorem{hypothesis}{Hypothesis}
\newcommand{\displaystyle}{\displaystyle}
\newcommand{{\mathrm{spec}}}{{\mathrm{spec}}}
\newcommand{\mathrm{i}}{\mathrm{i}}
\newcommand{\begin{equation*}}{\begin{equation*}}
\newcommand{\end{equation*}}{\end{equation*}}
\newcommand{\begin{equation*}{\begin{equation*}
\begin{aligned}}
\newcommand{\end{aligned}{\end{aligned}
\end{equation*}}
\newcommand{\begin{equation}{\begin{equation}
\begin{aligned}}
\newcommand{\end{aligned}{\end{aligned}
\end{equation}}
\newcommand{\mathcal A}} \def\cB{{\mathcal B}} \def\cC{{\mathcal C}{\mathcal A}
\newcommand\cB{\mathcal B}
\newcommand{\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}{\mathcal D}
\newcommand\cF{\mathcal F}
\newcommand{\mathcal G}} \def\cH{{\mathcal H}} \def\cI{{\mathcal I}{\mathcal G}
\newcommand\cH{\mathcal H}
\newcommand\cK{\mathcal K}
\newcommand{\mathcal J}} \def\cK{{\mathcal K}} \def\cL{{\mathcal L}{\mathcal J}
\newcommand\cL{\mathcal L}
\newcommand{\mathcal M}} \def\cN{{\mathcal N}} \def\cO{{\mathcal O}{\mathcal M}
\newcommand\cN{\mathcal N}
\newcommand{\mathcal P}} \def\cQ{{\mathcal Q}} \def\cR{{\mathcal R}{\mathcal P}
\newcommand\cR{\mathcal R}
\newcommand{\mathcal S}} \def\cT{{\mathcal T}} \def\cU{{\mathcal U}{\mathcal S}
\newcommand\CC{\mathbb C}
\newcommand\NN{\mathbb N}
\newcommand\RR{\mathbb R}
\newcommand\ZZ{\mathbb Z}
\newcommand\frA{\mathfrak A}
\newcommand\frB{\mathfrak B}
\newcommand\frS{\mathfrak S}
\newcommand\fra{\mathfrak a}
\newcommand\frq{\mathfrak q}
\newcommand{\mathfrak s}{\mathfrak s}
\newcommand\frp{\mathfrak p}
\newcommand\frh{\mathfrak h}
\newcommand\fraD{\mathfrak{a}_{\rm D}}
\newcommand\fraN{\mathfrak{a}}
\newcommand\dis{\displaystyle}
\newcommand\overline{\overline}
\newcommand\wt{\widetilde}
\newcommand\wh{\widehat}
\newcommand{\mathrel{\mathop:}=}{\mathrel{\mathop:}=}
\newcommand{=\mathrel{\mathop:}}{=\mathrel{\mathop:}}
\newcommand{\mathrel{\mathop:}\hspace*{-0.72ex}&=}{\mathrel{\mathop:}\hspace*{-0.72ex}&=}
\newcommand\vectn[2]{\begin{pmatrix} #1 \\ #2 \end{pmatrix}}
\newcommand\vect[2]{\begin{pmatrix} #1 \\[1ex] #2 \end{pmatrix}}
\newcommand\restr[1]{\!\bigm|_{#1}}
\newcommand\RE{\text{\rm Re}}
\newcommand\sign{{\rm sign\,}}
\newcommand\void[1]{}
\def\overline{\overline}
\def{\rm ran\,}{{\rm ran\,}}
\def\sigma_{\rm p}{\sigma_{\rm p}}
\DeclareMathOperator\tr{tr}
\def{\mathfrak A}} \def\sB{{\mathfrak B}} \def\sC{{\mathfrak C}{{\mathfrak A}} \def\sB{{\mathfrak B}} \def\sC{{\mathfrak C}}
\def{\mathfrak D}} \def\sE{{\mathfrak E}} \def\sF{{\mathfrak F}{{\mathfrak D}} \def\sE{{\mathfrak E}} \def\sF{{\mathfrak F}}
\def{\mathfrak G}} \def\sH{{\mathfrak H}} \def\sI{{\mathfrak I}{{\mathfrak G}} \def\sH{{\mathfrak H}} \def\sI{{\mathfrak I}}
\def{\mathfrak J}} \def\sK{{\mathfrak K}} \def\sL{{\mathfrak L}{{\mathfrak J}} \def\sK{{\mathfrak K}} \def\sL{{\mathfrak L}}
\def{\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}{{\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}}
\def{\mathfrak P}} \def\sQ{{\mathfrak Q}} \def\sR{{\mathfrak R}{{\mathfrak P}} \def\sQ{{\mathfrak Q}} \def\sR{{\mathfrak R}}
\def{\mathfrak S}} \def\sT{{\mathfrak T}} \def\sU{{\mathfrak U}{{\mathfrak S}} \def\sT{{\mathfrak T}} \def\sU{{\mathfrak U}}
\def{\mathfrak V}} \def\sW{{\mathfrak W}} \def\sX{{\mathfrak X}{{\mathfrak V}} \def\sW{{\mathfrak W}} \def\sX{{\mathfrak X}}
\def{\mathfrak Y}} \def\sZ{{\mathfrak Z}{{\mathfrak Y}} \def\sZ{{\mathfrak Z}}
\def{\mathfrak b}{{\mathfrak b}}
\def{\mathfrak l}{{\mathfrak l}}
\def{\mathfrak s}{{\mathfrak s}}
\def{\mathfrak t}{{\mathfrak t}}
\def{\mathbb A}} \def\dB{{\mathbb B}} \def\dC{{\mathbb C}{{\mathbb A}} \def\dB{{\mathbb B}} \def\dC{{\mathbb C}}
\def{\mathbb D}} \def\dE{{\mathbb E}} \def\dF{{\mathbb F}{{\mathbb D}} \def\dE{{\mathbb E}} \def\dF{{\mathbb F}}
\def{\mathbb G}} \def\dH{{\mathbb H}} \def\dI{{\mathbb I}{{\mathbb G}} \def\dH{{\mathbb H}} \def\dI{{\mathbb I}}
\def{\mathbb J}} \def\dK{{\mathbb K}} \def\dL{{\mathbb L}{{\mathbb J}} \def\dK{{\mathbb K}} \def\dL{{\mathbb L}}
\def{\mathbb M}} \def\dN{{\mathbb N}} \def\dO{{\mathbb O}{{\mathbb M}} \def\dN{{\mathbb N}} \def\dO{{\mathbb O}}
\def{\mathbb P}} \def\dQ{{\mathbb Q}} \def\dR{{\mathbb R}{{\mathbb P}} \def\dQ{{\mathbb Q}} \def\dR{{\mathbb R}}
\def{\mathbb S}} \def\dT{{\mathbb T}} \def\dU{{\mathbb U}{{\mathbb S}} \def\dT{{\mathbb T}} \def\dU{{\mathbb U}}
\def{\mathbb V}} \def\dW{{\mathbb W}} \def\dX{{\mathbb X}{{\mathbb V}} \def\dW{{\mathbb W}} \def\dX{{\mathbb X}}
\def{\mathbb Y}} \def\dZ{{\mathbb Z}{{\mathbb Y}} \def\dZ{{\mathbb Z}}
\def{\mathsf A}} \def\sfB{{\mathsf B}} \def\sfC{{\mathsf C}{{\mathsf A}} \def\sfB{{\mathsf B}} \def\mathsf{C}{{\mathsf C}}
\def{\mathsf D}} \def\sfE{{\mathsf E}} \def\sfF{{\mathsf F}{{\mathsf D}} \def\sfE{{\mathsf E}} \def\sfF{{\mathsf F}}
\def{\mathsf G}} \def\mathsf{H}{{\mathsf H}} \def\sfI{{\mathsf I}{{\mathsf G}} \def\mathsf{H}{{\mathsf H}} \def\sfI{{\mathsf I}}
\def{\mathsf J}} \def\sfK{{\mathsf K}} \def\sfL{{\mathsf L}{{\mathsf J}} \def\sfK{{\mathsf K}} \def\sfL{{\mathsf L}}
\def{\mathsf M}} \def\sfN{{\mathsf N}} \def\sfO{{\mathsf O}{{\mathsf M}} \def\sfN{{\mathsf N}} \def\sfO{{\mathsf O}}
\def{\mathsf P}} \def\sfQ{{\mathsf Q}} \def\sfR{{\mathsf R}{{\mathsf P}} \def\sfQ{{\mathsf Q}} \def\sfR{{\mathsf R}}
\def{\mathsf S}} \def\sfT{{\mathsf T}} \def\mathsf{U}{{\mathsf U}{{\mathsf S}} \def\sfT{{\mathsf T}} \def\mathsf{U}{{\mathsf U}}
\def{\mathsf V}} \def\sfW{{\mathsf W}} \def\sfX{{\mathsf X}{{\mathsf V}} \def\mathsf{W}{{\mathsf W}} \def\sfX{{\mathsf X}}
\def{\mathsf Y}} \def\sfZ{{\mathsf Z}{{\mathsf Y}} \def\mathsf{Z}{{\mathsf Z}}
\def{\mathcal A}} \def\cB{{\mathcal B}} \def\cC{{\mathcal C}{{\mathcal A}} \def\cB{{\mathcal B}} \def\cC{{\mathcal C}}
\def{\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}{{\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}}
\def{\mathcal G}} \def\cH{{\mathcal H}} \def\cI{{\mathcal I}{{\mathcal G}} \def\cH{{\mathcal H}} \def\cI{{\mathcal I}}
\def{\mathcal J}} \def\cK{{\mathcal K}} \def\cL{{\mathcal L}{{\mathcal J}} \def\cK{{\mathcal K}} \def\cL{{\mathcal L}}
\def{\mathcal M}} \def\cN{{\mathcal N}} \def\cO{{\mathcal O}{{\mathcal M}} \def\cN{{\mathcal N}} \def\cO{{\mathcal O}}
\def{\mathcal P}} \def\cQ{{\mathcal Q}} \def\cR{{\mathcal R}{{\mathcal P}} \def\cQ{{\mathcal Q}} \def\cR{{\mathcal R}}
\def{\mathcal S}} \def\cT{{\mathcal T}} \def\cU{{\mathcal U}{{\mathcal S}} \def\cT{{\mathcal T}} \def\cU{{\mathcal U}}
\def{\mathcal V}} \def\cW{{\mathcal W}} \def\cX{{\mathcal X}{{\mathcal V}} \def\cW{{\mathcal W}} \def\cX{{\mathcal X}}
\def{\mathcal Y}} \def\cZ{{\mathcal Z}{{\mathcal Y}} \def\cZ{{\mathcal Z}}
\def\mathbb{N}{\mathbb{N}}
\renewcommand{\div}{\mathrm{div}\,}
\newcommand{\mathrm{grad}\,}{\mathrm{grad}\,}
\newcommand{\mathrm{Tr}\,}{\mathrm{Tr}\,}
\newcommand{\mathrm{dom}\,}{\mathrm{dom}\,}
\newcommand{\mathrm{mes}\,}{\mathrm{mes}\,}
\section{Introduction}\label{Sec.intro}
\subsection{Motivation}
Various physical systems can be effectively described by
Schr\"{o}dinger operators with $\delta$-interactions
supported on sets of zero Lebesgue measure.
To mention just a few, these operators are used:
\begin{myenum}
\item [--] in mesoscopic physics in the model of leaky quantum graphs~\cite[Chap. 10]{EK};
\item [--] for the description of atoms in strong magnetic fields~\cite{BD06};
\item [--] in the theory of semiconductors as a model for excitons~\cite{HKPC17};
\item [--] for the analysis of high contrast photonic crystals~\cite{FK96, HL17}.
\end{myenum}
One can expect this list to keep expanding, in particular in view
of the simplicity and versatility of the model. This is
certainly a motivation to investigate its properties by rigorous
mathematical means.
One of the most traditional problems concerns the relation between
the geometry of the support of the $\delta$-interaction and the
spectrum of the corresponding Schr\"odinger operator; see the
review~\cite{E08}, the monograph~\cite{EK}, and the references
therein. A prominent particular question, addressed in numerous
papers (see {\it e.g.}\,~\cite{BEL14a, EI01, EK03, EL17, OP16, P15}), is to
analyze whether bound states below the threshold of the essential
spectrum are induced by an attractive $\delta$-interaction supported
on an unbounded, asymptotically flat hypersurface.
In the two-dimensional setting, this question is answered
affirmatively in~\cite{EI01}, provided that the asymptotically
straight curve is not a straight line. In the space dimension $d
\ge 4$, a circular conical surface is a non-trivial
example~\cite{LO16} of an asymptotically flat hypersurface such that
an attractive $\delta$-interaction of any strength, supported on it,
induces no bound states. The three-dimensional case
appears to be the most subtle. In this space dimension, the existence of
bound states (in fact, infinitely many of them) is shown
in~\cite{BEL14a, LO16, OP16} for all interaction strengths in the
geometric setting of conical surfaces, which is a special class of
asymptotically flat surfaces. On the other hand, for the most
natural geometric setting of locally deformed planes, existence of
at least one bound state below the threshold is proven
in~\cite{EK03} only in the strong-coupling regime. For the same
geometry, the question of existence of bound states below the
threshold for an arbitrary strength of an attractive
$\delta$-interaction still remains open and challenging.
The aim of this paper is to make one more step towards the complete
answer to this open question. Specifically, we prove the existence
of bound states induced by $\delta$-interactions supported on
locally deformed planes, in the small deformation limit. As a
by-product of the proof we obtain that for a sufficiently small
deformation the discrete spectrum consists of a unique simple
eigenvalue. Moreover, we derive an asymptotic expansion of this
eigenvalue in terms of the profile of the deformation.
\subsection*{Notations}
Throughout the paper $g(\beta,\delta) = o_{\rm u}(h(\beta))$ and
$g(\beta,\delta) = \cO_{\rm u}(h(\beta))$ denote the standard asymptotic
notations in the limit $\beta \rightarrow 0+$, which are additionally
uniform in $\delta \in [0,1]$. For a Hilbert space $\cH$ we denote
by $\cB(\cH)$ the space of bounded, everywhere defined linear
operators in $\cH$. We denote by
$(L^2(\mathbb{R}^d),(\cdot,\cdot)_{L^2(\mathbb{R}^d)})$ (respectively, by
$(L^2(\mathbb{R}^d;\dC^d),(\cdot,\cdot)_{L^2(\mathbb{R}^d;\dC^d)})$) the usual
$L^2$-spaces over $\mathbb{R}^d$, $d \in\dN$, of scalar-valued
(respectively, vector-valued) functions. By $\cF\colon
L^2(\dR^2)\rightarrow L^2(\dR^2)$ we abbreviate the unitary
Fourier-Plancherel operator; with a slight abuse of terminology we
will refer to it as the Fourier transformation in $\dR^2$. In the
same vein, for any $\psi\in L^2(\dR^2)$ its Fourier transform
$\cF\psi$ will be denoted by $\wh \psi\in L^2(\dR^2)$. By
$H^1(\mathbb{R}^d)$ we denote the first order $L^2$-based Sobolev space over
$\mathbb{R}^d$, $d \in \dN$. For a $C^2$-smooth surface $\Gamma\subset\dR^3$,
$(L^2(\Gamma),(\cdot,\cdot)_{L^2(\Gamma)})$ is the usual $L^2$-space
over $\Gamma$, where the inner product $(\cdot,\cdot)_{L^2(\Gamma)}$
is introduced via the canonical Hausdorff measure $\sigma(\cdot)$ on
$\Gamma$; {\it cf.}\,~\cite[App.~C.8]{Le}. For an open interval $I\subset\dR$,
the operator-valued function $I\ni \dl\mapsto \sfB(\dl)\in\cB(\cH)$ is real analytic
if for any $\phi,\psi\in\cH$ the scalar-valued function
$I\ni \dl\mapsto (\sfB(\dl)\phi,\psi)_\cH$ is real analytic in the usual sense.
\subsection{The spectral problem for \boldmath{$\delta$}-interaction supported on a locally deformed plane}\label{sec:pre}
Let $\Gamma = \Gamma_\beta(f)\subset\mathbb{R}^3$, with $\beta \in
[0,\infty)$, be an unbounded surface given by
\begin{equation}\label{eq:Gamma}
\Gamma
:=
\big\{
(x_1,x_2,x_3)\in\mathbb{R}^3\colon x_3 = \beta f (x_1, x_2)\big\}\subset \mathbb{R}^3\,,
\end{equation}
where $f\colon \dR^2\rightarrow \dR$ ($f\not\equiv 0$) is a $C^2$-smooth,
compactly supported function. The surface $\Gamma$ can be viewed as
a local deformation of the plane $\dR^2\times\{0\}$.
We also point out that in view of the identity $\Gamma_{-\beta}(f) =
\Gamma_\beta(-f)$ it is enough to consider non-negative values of
$\beta$ only. In what follows we set ${\mathcal S}} \def\cT{{\mathcal T}} \def\cU{{\mathcal U} := \mathop{\mathrm{supp}}\nolimits f$ and denote by
$L_f > 0$ the Lipschitz constant of $f$; {\it i.e.}\,~the minimal positive
number such that $|f(x) - f(y)|\le L_f|x-y|$ holds for all
$x,y\in\dR^2$. By the mean-value theorem we infer that the
inequality $|\nabla f| \le L_f$ holds pointwise. Taking the
smoothness of $\Gamma$ into account, it is not difficult to check
that the mapping $\Omega \mapsto \sigma(\Omega \cap \Gamma)$ defines a
measure on $\mathbb{R}^3$, which belongs to the \emph{generalized Kato
class}; {\it cf.}\,~\cite[Sec. 2]{BEKS}.
Let a constant $\alpha > 0$ be fixed.
According to~\cite[Sec. 2]{BEKS} and also to~\cite[Prop. 3.1]{BEL14b}, the symmetric quadratic form
\begin{equation}\label{eq:form}
H^1(\mathbb{R}^3) \ni u\mapsto \frh_{\alpha,\beta}[u]
:=
\|\nabla u\|^2_{L^2(\mathbb{R}^3;\dC^3)}
-\alpha\| u|_\Gamma\|^2_{L^2(\Gamma)}\,,
\end{equation}
is closed, densely defined, and semi-bounded from below in $L^2(\mathbb{R}^3)$;
here $u|_{\Gamma}$ denotes the trace of $u$ onto $\Gamma$.
Recall that the trace map $H^1(\dR^3)\ni u\mapsto u|_{\Gamma} \in L^2(\Gamma)$ is well defined and continuous; {\it cf.}\,~\cite[Thm. 3.38]{McL}. Now we are in a position to define the Hamiltonian
with $\delta$-interaction supported on $\Gamma$, the main object of the present paper.
\begin{definition}\label{def:Op}
The self-adjoint Schr\"odinger operator $\mathsf{H}_{\alpha,\beta}$
in $L^2(\mathbb{R}^3)$ corresponding to the formal differential expression $-\Delta-\alpha\, \delta(x-\Gamma)$, $\alpha > 0$, is defined
via the first representation theorem~\cite[Thm. VI 2.1]{Kato} as associated with the quadratic form $\frh_{\alpha,\beta}$ in~\eqref{eq:form}.
\end{definition}
The surface $\Gamma$ is referred to as the support of the
$\delta$-interaction and the constant $\alpha > 0$ is usually called
the strength of this interaction. Schr\"odinger operators with
$\delta$-interactions supported on locally deformed planes were
first investigated in~\cite{EK03} and then subsequently
in~\cite{BEHL17, BEL14b, E17}. In the following proposition we collect
some previously known fundamental spectral properties of $\mathsf{H}_{\alpha ,\beta}$.
\begin{prop}\label{thm:known}
The spectrum of the self-adjoint operator $\mathsf{H}_{\alpha ,\beta}$
introduced in Definition~\ref{def:Op} is characterised as follows.
%
\begin{myenum}
\item $\sigma_{\rm ess}(\mathsf{H}_{\alpha ,\beta}) = \left [-\frac14\alpha^2,+\infty\right )$.
\item $\sigma_{\rm d}(\mathsf{H}_{\alpha ,\beta}) \ne \varnothing$ for all $\beta > 0$ and
all $\alpha > 0$ large enough.
\item $\sigma_{\rm d}(\mathsf{H}_{\alpha,0}) = \varnothing$ for $\beta = 0$
and all $\alpha > 0$.
\end{myenum}
\end{prop}
For a proof of item~(i) see~\cite[Thm. 4.1, Rem. 4.2]{EK03} and
\cite[Thm. 4.10]{BEL14b}. A~proof of item~(ii) can be found
in~\cite[Thm. 4.3]{EK03}. The claim of~(iii) easily follows via
separation of variables. Our considerations are inspired by the open
question, whether $\sigma_{\rm d}(\mathsf{H}_{\alpha ,\beta}) \ne \varnothing$ holds for all
$\alpha,\beta > 0$; {\it cf.}\,~\cite[Problem 7.5]{E08}.
\subsection{Main result}
Informally speaking, the main result of this paper says that the
discrete spectrum of $\mathsf{H}_{\alpha ,\beta}$ consists of exactly one simple eigenvalue
for all sufficiently small $\beta > 0$. Moreover, an asymptotic
expansion of this eigenvalue in terms of $\alpha$, $\beta$, and the
function $f$ is found. In order to formulate this result precisely,
we denote by $\lambda_1^\alpha(\beta)$ the lowest spectral point of $\mathsf{H}_{\alpha ,\beta}$.
\begin{thm}\label{thm:main}
Let $\alpha > 0$ be fixed and let the self-adjoint operator $\mathsf{H}_{\alpha ,\beta}$ be as in Definition~\ref{def:Op}. Set
%
\begin{equation*}
\cD_{\alpha,f} := \int_{\dR^2}
|p|^2\left (\alpha^2 - \frac{2\alpha^3}{\sqrt{4|p|^2 + \alpha^2} + \alpha}\right )|\wh f(p)|^2 {\mathsf{d}} p > 0,
\end{equation*}
%
where $\wh f$ is the Fourier transform of $f$.
Then $\#\sigma_{\rm d}(\mathsf{H}_{\alpha ,\beta}) = 1$ holds for all sufficiently small $\beta > 0$
and, moreover, the simple eigenvalue $\lambda_1^\alpha(\beta)$ admits the
asymptotic expansion
%
\begin{equation}\label{eq:main_expansion}
\lambda_1^\alpha(\beta) =
-\frac{\alpha^2}{4} -
\exp\left (-\frac{16\pi}{\cD_{\alpha,f}\beta^2}\right ) \big (1+o(1)\big),\qquad \beta\rightarrow 0+.
\end{equation}
%
%
\end{thm}
The proof of this result relies on the Birman-Schwinger
principle~\cite{BEKS} for $\mathsf{H}_{\alpha ,\beta}$. Inspired by the technique developed
in~\cite{BFKLR17, CK11, EK02, EK08, EK15}, we take advantage of
rewriting the Birman-Schwinger condition in a perturbative form,
in which the resolvent of the two-dimensional free Laplacian
appears. A technically demanding step is to expand this new
condition with respect to the small parameter~$\beta$. Following a
strategy similar in spirit to the one used in~\cite{S76}, we derive
from this condition an implicit scalar equation on the principal
eigenvalue of $\mathsf{H}_{\alpha ,\beta}$. Careful inspection of this equation yields the
existence and uniqueness of its solution for all sufficiently small
$\beta > 0$, as well as the expansion of this unique solution in the
asymptotic regime $\beta\rightarrow 0+$. Surprisingly, an integral
representation of the relativistic Schr\"odinger
operator~\cite{IT93} arises in this asymptotic analysis. The
obtained implicit equation seems to be of independent interest,
because it allows one to extract more terms in the asymptotic expansion
for $\lambda_1^\alpha(\beta)$. However, we will not elaborate on this point
here.
\subsection*{Organisation of the paper}
In Section~\ref{sec:BS} we recall the standard formulation of the
Birman-Schwin\-ger principle for the Hamiltonian $\mathsf{H}_{\alpha ,\beta}$ and employ it
to obtain a useful lower bound on $\lambda_1^\alpha(\beta)$. Furthermore,
we derive a perturbative reformulation of the Birman-Schwinger
principle and expand the new Birman-Schwinger condition with
respect to the small parameter $\beta$. In Section~\ref{sec:proof}
we prove our main result, formulated in Theorem~\ref{thm:main}. We
conclude the paper by Section~\ref{sec:dis} containing a discussion
on possible generalizations of the obtained results.
\section{Birman-Schwinger principle}\label{sec:BS}
\subsection{Standard formulation}
The Birman-Schwinger principle (BS-principle in what follows) is a
powerful tool for the spectral analysis of Schr\"odinger operators.
Its generalization, which covers $\delta$-interactions supported on
hypersurfaces, is derived in~\cite{BEKS}; see also~\cite{BLL13, B95,
Po01} for some refinements.
In what follows, let $\lambda < 0$ and set $\kappa := \sqrt{-\lambda}$. Green's
function corresponding to the differential expression $-\Delta +
\kappa^2$ in $\mathbb{R}^3$ takes the following well-known form
\begin{equation*} \label{freeG3}
G_\kappa(x\! - \!y) = \frac{\mathrm{e}^{-\kappa|x-y|}}{4\pi | x \!- \! y|}.
\end{equation*}
Let the surface $\Gamma = \Gamma_\beta(f)\subset\mathbb{R}^3$ be as
in~\eqref{eq:Gamma}. Parametrizing $\Gamma$ by the mapping
\begin{equation}\label{eq:rbeta}
r_\beta\colon \mathbb{R}^2\rightarrow \mathbb{R}^3,\qquad
r_\beta(x) := \left(x, \beta f(x) \right),
\end{equation}
we can naturally express the surface measure on $\Gamma$
through the Lebesgue measure on $\dR^2$
via the relation ${\mathsf{d}}\sigma(x) = g_\beta(x) {\mathsf{d}} x$, where
the Jacobian $g_\beta$ is explicitly given by
\begin{equation*}\label{eq:Jacobian}
g_\beta(x)
=
\left (1 + \beta^2 |\nabla f(x)|^2\right )^{1/2}.
\end{equation*}
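For completeness, this expression for the Jacobian follows from the elementary computation
\[
\partial_1 r_\beta(x) \times \partial_2 r_\beta(x)
=
(1,0,\beta \partial_1 f(x)) \times (0,1,\beta \partial_2 f(x))
=
(-\beta \partial_1 f(x), -\beta \partial_2 f(x), 1),
\]
whence $g_\beta(x) = |\partial_1 r_\beta(x) \times \partial_2 r_\beta(x)| = \left (1 + \beta^2|\nabla f(x)|^2\right )^{1/2}$.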
Next we introduce the \emph{weakly singular integral operator}
$\Qb(\kappa)\colon L^2(\mathbb{R}^2 )\to L^2(\mathbb{R}^2)$, $\kappa > 0$, acting as
\begin{equation}\label{def:SL}
\left(\Qb(\kappa) \psi\right ) (x)
:=
\int_{\mathbb{R}^2}
g_\beta(x)^{1/2}
G_\kappa\left (r_\beta(x)- r_\beta(y)\right )
g_\beta(y)^{1/2} \psi(y) {\mathsf{d}} y.
\end{equation}
Note that the linear mapping $\sfJ_\beta \colon L^2(\Gamma) \rightarrow
L^2(\dR^2)$, $(\sfJ_\beta \psi)(x) =
g_\beta(x)^{1/2}\psi(r_\beta(x))$, is an isometric isomorphism and
it is not difficult to check that $\Qb(\kappa) = \sfJ_\beta
R_{mm}({\mathsf{i}}\kappa)\sfJ_\beta^{-1}$, where the operator
$R_{mm}({\mathsf{i}}\kappa)\colon L^2(\Gamma)\rightarrow L^2(\Gamma)$ is defined as
\begin{equation*}\label{key}
(R_{mm}({\mathsf{i}}\kappa) \psi)(x)
:= \int_{\Gamma} G_\kappa(x-y) \psi(y){\mathsf{d}} \sigma(y).
\end{equation*}
In fact, $R_{mm}({\mathsf{i}}\kappa)$ is the Birman-Schwinger operator
introduced in~\cite[Sec. 2]{BEKS}, see also~\cite{B95}. In view of
this identification, we get from~\cite{B95} that $\Qb(\kappa)$ is a
bounded, self-adjoint, non-negative operator in $L^2(\dR^2)$.
The next theorem contains a BS-principle for the Schr\"o\-dinger
operator $\mathsf{H}_{\alpha ,\beta}$ in Definition~\ref{def:Op}. We remark that while this
formulation of the BS-principle is not the same as in \cite[Lem.
2.3\,(iv)]{BEKS} and~\cite[Lem. 1]{B95}, it can be easily derived
from those claims using the identity $\Qb(\kappa) = \sfJ_\beta
R_{mm}({\mathsf{i}} \kappa)\sfJ_\beta^{-1}$.
\begin{thm}\label{thm:BS}
Let the self-adjoint operator $\mathsf{H}_{\alpha ,\beta}$
be as in Definition~\ref{def:Op}
and the operator-valued function $\mathbb{R}_+ \ni \kappa \mapsto
\Qb(\kappa)$ be as in~\eqref{def:SL}.
Then it holds that
%
\[
\forall\, \kappa > 0,
\qquad
\dim\ker\big(\mathsf{H}_{\alpha ,\beta} + \kappa^2\big)
=
\dim\ker\big(\sfI -\alpha \Qb(\kappa)\big).
\]
\end{thm}%
In the following lemma we recall the properties of $\sfQ_0(\kappa)$
({\it i.e.}\,~for $\beta = 0$). Since these properties are easy
to prove and difficult to find in the literature,
we provide a short argument.
\begin{lem} \label{lem:fourier}
The operator $\sfQ_0(\kappa)$ is unitarily equivalent via the Fourier
transformation to the multiplication operator in $\mathbb{R}^2$ with the
function
%
\[
\mathbb{R}^2\ni p \mapsto \frac{1}{2\sqrt{|p|^2 + \kappa^2}}.
\]
%
In particular, $\sigma(\sfQ_0(\kappa)) = [0,\frac{1}{2\kappa}]$
and the operator-valued function $\dR_+\ni\kappa \mapsto
\sfQ_0(\kappa)$ is real analytic.
\end{lem}
\begin{proof}
Recall that for the Fourier transform of the convolution of $\psi_1,\psi_2\in L^2(\dR^2)$ we have
%
$\cF(\psi_1\star \psi_2) = \wh \psi_1\wh \psi_2$.
%
Using this formula and the fact that $p\mapsto\frac{1}{2\sqrt{|p|^2 + \kappa^2}}$
is the Fourier transform of $\dR^2\ni x\mapsto \frac{e^{-\kappa|x|}}{4\pi|x|}$
we get for any $\psi \in L^2(\dR^2)$
%
\begin{equation*}\label{key}
\begin{split}
\sfQ_0(\kappa)\psi
& =
\cF^{-1}\cF\int_{\dR^2} G_\kappa( \cdot -y) \psi(y) {\mathsf{d}} y\\
& =
\cF^{-1}
\left (\cF
\left (\frac{e^{-\kappa|\cdot|}}{4\pi|\cdot|}\right ) \wh\psi \right )
=
\cF^{-1}
\left ( \frac{\wh\psi}{2\sqrt{|\cdot|^2 + \kappa^2}}\right ),
\end{split}
\end{equation*}
%
and the main claim of the lemma immediately follows. The analyticity of
$\dR_+\ni\kappa\mapsto \sfQ_0(\kappa)$ is a consequence
of the same property of
the operator-valued function
of multiplication by
$\dR_+\ni\kappa\mapsto\frac{1}{2\sqrt{|p|^2 + \kappa^2}}$.
Moreover, we have
%
\[
\sigma(\sfQ_0(\kappa)) = \overline{\left \{\lambda \in\dR \colon
\lambda = \tfrac{1}{2\sqrt{|p|^2 + \kappa^2}}~\text{for }\,
p\in\dR^2\right \}} = \left [0,\tfrac{1}{2\kappa}\right ].
\qedhere
\]
%
\end{proof}
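In particular, Lemma~\ref{lem:fourier} yields $\|\sfQ_0(\kappa)\| = \frac{1}{2\kappa}$ and, by the spectral theorem, the operator $\sfI - \alpha\sfQ_0(\kappa)$ is boundedly invertible if and only if $\kappa > \frac12\alpha$; this observation will be used repeatedly below.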
By means of the BS-principle in Theorem~\ref{thm:BS} we obtain a
useful lower bound on the lowest spectral
point $\lambda_1^\alpha(\beta)$ of $\mathsf{H}_{\alpha ,\beta}$.
\begin{prop}\label{prop:lower_bound}
Let $\lambda_1^\alpha(\beta)$ be the lowest spectral point
of the self-adjoint operator $\mathsf{H}_{\alpha ,\beta}$ introduced in Definition~\ref{def:Op}.
Then the following lower bound
%
\begin{equation*}\label{key}
\lambda_1^\alpha(\beta) \ge -\frac{\alpha^2}{4}
\left (1+\beta^2 L_f^2\right )
\end{equation*}
%
holds for all $\alpha,\beta > 0$. In particular,
$\lambda_1^\alpha(\beta)\rightarrow -\frac14\alpha^2$ from below as $\beta \rightarrow 0+$.
\end{prop}
\begin{proof}
In view of Proposition~\ref{thm:known}\,(i) we clearly have
$\lambda_1^\alpha(\beta) \le -\frac14\alpha^2$ and if $\lambda_1^\alpha(\beta) < -\frac14\alpha^2$, then necessarily $\lambda_1^\alpha(\beta)\in\sigma_{\rm d}(\mathsf{H}_{\alpha ,\beta})$
holds. Applying the Schur test~\cite[Lem. 0.32]{Te} to the operator $\alpha\Qb(\kappa)$
we get, using monotonicity of $G_\kappa(\cdot)$
in combination with the inequalities $|r_\beta(x) - r_\beta(y)| \ge |x-y|$ and $|\nabla f| \le L_f$,
the following bound
%
\begin{equation*}\label{key}
\begin{split}
\|\alpha\Qb(\kappa)\|
& \le \alpha(1+\beta^2L_f^2)^{1/2}
\sup_{x\in\dR^2}
\int_{\dR^2} G_{\kappa}(r_\beta(x) - r_{\beta}(y)) {\mathsf{d}} y\\
& \le
\alpha
(1+\beta^2 L_f^2)^{1/2}
\sup_{x\in\dR^2}
\int_{\dR^2} \frac{\mathrm{e}^{-\kappa|x - y|}}{4\pi|x-y|} {\mathsf{d}} y\\
&
=
\alpha
(1+\beta^2 L_f^2)^{1/2}
\int_{\dR^2} \frac{\mathrm{e}^{-\kappa|y|}}{4\pi|y|} {\mathsf{d}} y \\
& =
\alpha(1+\beta^2 L_f^2)^{1/2}\frac12
\int_0^\infty e^{-\kappa r} {\mathsf{d}} r =
\frac{\alpha}{2\kappa}(1+\beta^2 L_f^2)^{1/2}.
\end{split}
\end{equation*}
%
Consequently, $\|\alpha\Qb(\kappa)\| < 1$ holds for $\kappa > \frac{\alpha}{2}(1+\beta^2 L_f^2)^{1/2}$,
and by the BS-principle in
Theorem~\ref{thm:BS} we get $-\kappa^2\notin\sigma_{\rm d}(\mathsf{H}_{\alpha ,\beta})$.
Finally, we conclude that
%
\[
\lambda_1^\alpha(\beta) \ge -\frac{\alpha^2}{4}(1+\beta^2 L_f^2).
\qedhere
\]
\end{proof}
\subsection{Perturbative reformulation}
In our considerations it is convenient to deal with a perturbative
reformulation of the BS-principle. This technique has already been
successfully applied in~\cite{BFKLR17, CK11, EK02, EK08, EK15} for
the case of interactions supported on curves. To this aim, for $\kappa
\ge \frac12\alpha$ we set $\delta := \sqrt{\kappa^2 -\frac14\alpha^2}$ and
define the operator-valued function
\begin{equation*}\label{eq:D}
\Db(\delta) := \Qb(\kappa) - \sfQ_0(\kappa),
\end{equation*}
which is real analytic in $\delta\in(0,\infty)$ and in $\beta \in (0,\infty)$;
{\it cf.}\,~\cite[\S VII.1.1]{Kato} and the explicit expression for the integral kernel in~\eqref{def:SL}.
Next, for $\kappa > \frac12\alpha$ we define
\begin{equation}\label{eq:B}
\sfB_\alpha (\delta) := \big(\sfI - \alpha\sfQ_0(\kappa)\big)^{-1},
\end{equation}
where existence and boundedness of the inverse of
$\sfI - \alpha\sfQ_0(\kappa)$ are guaranteed by Lemma~\ref{lem:fourier}.
The above auxiliary operators satisfy
\[
\dim\ker(\sfI - \alpha\Qb(\kappa) )
=
\dim\ker\big(\sfI - \alpha\sfQ_0(\kappa) -
\alpha\Db(\delta)\big)
=
\dim\ker\big(\sfI - \alpha\sfB_\alpha(\delta)\Db(\delta)\big).
\]
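For the reader's convenience we justify the second equality: since $\sfI - \alpha\sfQ_0(\kappa)$ is boundedly invertible for $\kappa > \frac12\alpha$, for any $u\in L^2(\dR^2)$ we have
\[
\big(\sfI - \alpha\sfQ_0(\kappa)\big) u = \alpha\Db(\delta) u
\quad\Longleftrightarrow\quad
u = \alpha\sfB_\alpha(\delta)\Db(\delta) u,
\]
so the two kernels coincide as subspaces of $L^2(\dR^2)$.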
Thus, the BS-principle formulated in Theorem~\ref{thm:BS} yields
\begin{equation}\label{eq:reformulation}
\forall\, \kappa > \frac{\alpha}{2},
\qquad
\dim\ker\big(\mathsf{H}_{\alpha ,\beta} + \kappa^2\big)
=
\dim\ker\big(\sfI - \alpha\sfB_\alpha(\delta)\Db(\delta)\big).
\end{equation}
In the next lemma we collect the properties of the operator family $\sfB_\alpha(\delta)$.
In the following, we denote by $-\Delta_{\mathbb{R}^2}$ the usual self-adjoint free Laplacian in $L^2(\mathbb{R}^2)$, whose resolvent
is abbreviated by $\sfR(z) := (-\Delta_{\dR^2} + z)^{-1}$ for $z >0$.
\begin{lem} \label{le-decompoB}
The operator $\sfB_\alpha(\delta)$, $\delta >0$, in~\eqref{eq:B} admits the representation:
%
\begin{equation} \label{eq-decompoBa}
\sfB_\alpha(\delta) = \frac{\alpha^2}{2} \sfR(\delta^2) + \sfN_\alpha(\delta)
\end{equation}
%
with
%
\begin{equation*}\label{eq:N}
\sfN_\alpha(\delta) := 1 +
\alpha \sfR\big(\delta^2 + \tfrac14\alpha^2\big)^{1/2}\left(
\alpha \sfR\big(\delta^2 + \tfrac14\alpha^2\big)^{1/2} + 2
\right )^{-1},\qquad \delta\ge 0.
\end{equation*}
%
Moreover, the operator-valued function $\sfN_\alpha(\delta)$
satisfies the following properties.
%
\begin{myenum}
\item
The estimate
$\|\sfN_\alpha(\delta)\| \le \frac32$ is valid for all $\delta \ge 0$.
\item
The convergence $\sfN_\alpha(\delta)\rightarrow\sfN_\alpha(0)$ holds
in the operator norm as $\delta\rightarrow 0+$.
\item
$(0,\infty)\ni\delta\mapsto \sfN_\alpha(\delta)$ is real analytic.
\item
The estimate\footnote{Here and in the following we define
the derivative of an operator-valued function
$\dR_+\ni\delta\mapsto \sfA(\delta)$ as
the operator-norm limit
of the difference quotient
$\frac{\sfA(\dl') -\sfA(\dl)}
{\dl' - \dl}$ as $\dl'\rightarrow\dl$.}
$\|\partial_\delta\sfN_\alpha(\delta)\| \le \frac{\delta}{\alpha^2}$
is valid for all $\delta \ge 0$.
\end{myenum}
In particular, representation~\eqref{eq-decompoBa}
yields real analyticity of $\sfB_\alpha(\dl)$
with respect to $\delta \in (0,\infty)$.
\end{lem}
\begin{proof}
By Lemma~\ref{lem:fourier}, the operator $\sfB_\alpha(\delta)$ is unitarily equivalent (via the Fourier transformation) to the operator of multiplication with the function
%
\[
f_{\alpha,\dl}(p)
:=
\bigg(1 - \frac{\alpha}{2\tau_{\alpha,\dl} (p)}\bigg)^{-1},
\]
%
where
$\tau_{\alpha,\dl} (p)
:=
\sqrt{|p|^2 +\dl^2 +\frac14\alpha^2}$.
%
Note that the function $f_{\alpha,\delta}$ can be decomposed as
$f_{\alpha,\dl}(p) = m_{\alpha,\dl}(p) + n_{\alpha,\dl}(p)$ with
%
\[
m_{\alpha,\dl}(p) := \frac{\alpha^2}{2(|p|^2 + \delta^2)}
\qquad\text{and}\qquad
n_{\alpha,\dl}(p) := 1 + \frac{\alpha}{2\tau_{\alpha,\dl} (p) + \alpha}\,.
\]
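This decomposition can be verified directly: since $|p|^2 + \dl^2 = \tau_{\alpha,\dl}(p)^2 - \frac14\alpha^2$, abbreviating $\tau := \tau_{\alpha,\dl}(p)$ we find
\[
m_{\alpha,\dl}(p) + n_{\alpha,\dl}(p)
=
\frac{2\alpha^2}{(2\tau - \alpha)(2\tau + \alpha)} + \frac{2\tau + 2\alpha}{2\tau + \alpha}
=
\frac{2\tau(2\tau + \alpha)}{(2\tau - \alpha)(2\tau + \alpha)}
=
\frac{2\tau}{2\tau - \alpha} = f_{\alpha,\dl}(p).
\]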
%
Observe that we have
%
\begin{equation}\label{eq:f2}
n_{\alpha,\dl}(p) \le n_{\alpha,\dl}(0)
=1 + \frac{\alpha}{2\tau_{\alpha,\dl}(0)+\alpha}
\le
1 + \frac{\alpha}{2\tau_{\alpha,0}(0)+\alpha}
= 1 + \frac{1}{2} = \frac{3}{2}.
\end{equation}
%
Clearly, the operators of multiplication
with $m_{\alpha,\dl}$ and with
$n_{\alpha,\dl}$ are unitarily equivalent
via the inverse Fourier transformation to
$\frac{\alpha^2}{2}\sfR(\delta^2)$ and to $\sfN_\alpha(\delta)$,
respectively. Hence, the decomposition~\eqref{eq-decompoBa} is valid. In particular,
the upper bound stated in (i) holds, thanks to~\eqref{eq:f2}.
The estimate
%
\[
\begin{split}
\left \|\sfN_\alpha(\delta) - \sfN_\alpha(0)\right \|
& =
\sup_{p\in\dR^2} \left |
\frac{\alpha}{2\tau_{\alpha,\dl} (p) + \alpha}
-
\frac{\alpha}{2\tau_{\alpha,0} (p) + \alpha}
\right |\\
& \le
\frac{1}{2\alpha}
\sup_{p\in\dR^2} \left |
\tau_{\alpha,0} (p) - \tau_{\alpha,\dl} (p)\right | \le
\frac{\delta^2}{2\alpha}
\sup_{p\in\dR^2}
\frac{1}{\tau_{\alpha,0} (p) + \tau_{\alpha,\dl} (p)}
\le
\frac{\delta^2}{2\alpha^2},
\end{split}
\]
%
implies the convergence in (ii).
Analyticity of $(0,\infty)\ni \delta\mapsto \sfR(\frac14\alpha^2 + \delta^2)$ yields the claim of~(iii).
Define $\partial_\dl \sfN_\alpha(\delta)\colon L^2(\dR^2)\rightarrow L^2(\dR^2)$
as the operator being unitarily equivalent via the Fourier transformation to the multiplication with
the function
%
\begin{equation*}\label{key}
\partial_\dl n_{\alpha,\dl}(p) =
-\frac{2\alpha\dl}{\left (2\tau_{\alpha,\dl}(p)+ \alpha\right )^2\tau_{\alpha,\dl}(p)}.
\end{equation*}
%
Next, we show that the operator
$\partial_\dl \sfN_\alpha(\dl )$ defined as above
satisfies
%
\begin{equation}\label{eq-1N}
\lim _{\dl'\to \dl}
\left\|
\frac{\sfN_\alpha (\dl ')- \sfN_\alpha (\dl)}{\dl '-\dl } - \partial_\dl \sfN_\alpha (\dl )
\right\| = 0\,.
\end{equation}
%
Applying the mean-value theorem we obtain
%
\[
\left|
\frac{n_{\alpha,\dl'} (p)-n_{\alpha,\dl} (p)}
{\dl'-\dl } -
\partial_\dl n_{\alpha,\dl} (p) \right|
\leq
\left|
\partial_{\dl\dl}^2 n_{\alpha,\dl_\star} (p)(\dl '-\dl)\right|,
\]
%
where $\delta_\star\in (0,\infty)$ lies between $\delta$ and $\delta'$.
A straightforward calculation shows
%
\[
\partial_{\dl\dl}^2 n_{\alpha,\dl} (p)
=
-\frac{2\alpha }{ (2\tau_{\alpha,\dl} (p)+\alpha )^2 \tau_{\alpha,\dl} (p)}
+
\frac{8\alpha \dl^2 }{(2\tau_{\alpha,\dl} (p) +\alpha )^3 \tau_{\alpha,\dl} (p)^2}
+
\frac{2\alpha \dl^2}{(2\tau_{\alpha,\dl} (p)+\alpha )^2 \tau_{\alpha,\dl} (p)^3},
\]
%
which implies, in view of the elementary bounds $\tau_{\alpha,\dl} (p) \ge \frac{\alpha}{2}$, $2\tau_{\alpha,\dl} (p) + \alpha \ge 2\alpha$, and $\dl \le \tau_{\alpha,\dl} (p)$ applied to each of the three terms,
%
\[
\left| \partial_{\dl\dl}^2 n_{\alpha,\dl} (p)\right|
\leq \frac{3}{\alpha^2},
\]
%
and hence
%
\[
\sup_{p\in \mathbb{R}^2}
\left|
\frac{n_{\alpha,\dl'} (p)- n_{\alpha,\dl} (p)}{\dl'-\dl } - \partial_\dl n_{\alpha,\dl} (p ) \right|
\leq
\frac{3}{\alpha^2}\left| \dl '-\dl \right|.
\]
%
This completes the verification of~\eqref{eq-1N}.
Finally, we get
%
\begin{equation*}\label{key}
\begin{split}
\|\partial_\dl \sfN_\alpha(\delta)\|
& =
\sup_{p\in\dR^2}
\frac{2\alpha\dl}{\left (2\tau_{\alpha,\dl}(p)+ \alpha\right )^2\tau_{\alpha,\dl}(p)} \\
& =
\frac{2\alpha\dl}{\left (2\tau_{\alpha,\dl}(0)+ \alpha\right )^2\tau_{\alpha,\dl}(0)}
\le
\frac{2\alpha\dl}{\left (2\tau_{\alpha,0}(0)+ \alpha\right )^2\tau_{\alpha,0}(0)} = \frac{\delta}{\alpha^2},
\end{split}
\end{equation*}
%
which settles the claim of (iv).
\end{proof}
In what follows, we identify $x\in\dR^2$ with $(x,0)\in\dR^3$. For a
given measurable $V\colon \dR^2\rightarrow \dR_+$ we introduce the integral kernels
\begin{subequations}\label{eq:kernels}
\begin{align}
\wDb(\delta)(x,y) & := V(x)\Db(\delta)(x,y) V(y),
\label{eq:kernel1}\\
\sfD^{(1)}_V(\delta)(x,y) & := V(x)G_\kappa(x-y)V(y)E(x,y),
\label{eq:kernel2}
\end{align}
\end{subequations}
where
\begin{equation*}
E(x,y) :=
\frac{|\nabla f(x)|^2 + |\nabla f(y)|^2}{4}
-
\frac{|f(x) - f(y)|^2(\kappa|x-y|+ 1)}{2|x-y|^2}.
\end{equation*}
Furthermore, we work out a representation for
the operator-valued function $\wDb(\delta)$
associated with the kernel in~\eqref{eq:kernel1}
under certain limitation on the growth of $V$.
\begin{prop}\label{prop:expansion}
Let a measurable $V\colon\dR^2\rightarrow \dR_+$ satisfy $V(x) \le c\exp\left (\frac{\alpha}{4}|x|\right )$ for all $x\in\dR^2$ with some constant $c > 0$.
Let the integral kernels $\wDb(\delta)$ and $\sfD_V^{(1)}(\delta)$, $\delta\in [0,1]$, $\beta \in (0,1]$, be as in~\eqref{eq:kernels}.
Then there exist constants $C_j = C_j(\alpha,f,c) > 0$, $j = 1,2,3$,
such that the following claims hold.
%
\begin{myenum}
\item
For all $x,y\in\dR^2$, the pointwise bound
%
\begin{equation}\label{eq:pntws_bnd}
|\sfD^{(1)}_V(\delta)(x,y)|\le
C_1 G_{\frac{\alpha}{4}}(x-y)\left [1 + \frac12 \kappa |x-y|\right ],
\end{equation}
%
holds, the kernel $\sfD^{(1)}_V(\delta)(x,y)$
defines the self-adjoint operator $\sfD^{(1)}_V(\delta)\in\cB(L^2(\dR^2))$,
and, in addition, $\|\sfD^{(1)}_V(\delta)\| \le C_2$.
\item
For all $x,y\in\dR^2$, the decomposition
%
\begin{equation}\label{eq:representation_kernel}
\wDb(\delta)(x,y)
=
\sfD_V^{(1)}(\delta)(x,y)\beta^2 + \wDb^{(2)}(\delta)(x,y)\beta^4
\end{equation}
%
holds, the kernel $\wDb^{(2)}(\delta)(x,y)$ defines the self-adjoint operator $\wDb^{(2)}(\delta)\in\cB(L^2(\dR^2))$,
and, in addition,
$\|\wDb^{(2)}(\delta )\| \le C_3$.
%
\end{myenum}
%
In particular, the kernel $\wDb(\delta)(x,y)$ induces the self-adjoint operator
%
\begin{equation*} \label{eq:representation}
\wDb(\delta)
=
\sfD^{(1)}_V(\delta )\beta^2 + \wDb^{(2)}(\delta )\beta^4\in\cB(L^2(\dR^2))\,.
\end{equation*}
%
\end{prop}
\begin{proof}
\noindent{\rm (i)}\,
Recall that $\cS = \mathop{\mathrm{supp}}\nolimits f$ and let $\cB_R\subset\dR^2$ be an open ball of radius $R > 0$ centred at the origin such
that the inclusion $\cS\subset\cB_R$ holds.
The subset
of $\dR^2\times\dR^2$ on which the factor $E(x,y)$ in the expression~\eqref{eq:kernel2} for $\sfD^{(1)}_V(\delta)(x,y)$
does not vanish can be covered by two (intersecting) sets
%
\begin{equation}\label{eq:UV}
\cU := \cB_R\times\cB_R\quad\text{and}\quad
\cV := (\cS\times\cB_R^{\rm c}) \cup (\cB_R^{\rm c}\times\cS),
\end{equation}
%
where $\cB_R^{\rm c} := \dR^2\setminus \overline{\cB_R}$. Applying the bound
$\frac14|x-y| \ge \frac14(|x| + |y|) - \frac12 R$ (valid for all $(x,y)\in \cU\cup \cV$) we get
%
\begin{equation}\label{eq:VGV2}
V(x)G_\kappa(x-y)V(y)
\le
c^2\exp\left (\frac{\alpha R}{2}\right ) G_{\frac{\alpha}{4}}(x-y),
\qquad \forall\, (x,y)\in \cU\cup\cV,
\end{equation}
%
where we also used monotonicity of Green's function with respect to $\kappa$.
Employing the inequality
$|\nabla f| \le L_f$ we can pointwise estimate the factor
$E$ by
%
\begin{equation}\label{eq:E}
|E(x,y)| \le
L_f^2\left[ 1 + \frac12 \kappa |x-y|\right ].
\end{equation}
%
Combining~\eqref{eq:VGV2} and \eqref{eq:E} we get
the bound~\eqref{eq:pntws_bnd} with
%
\begin{equation}\label{eq:C1}
C_1 := c^2L^2_f\exp\left (\frac{\alpha R}{2}
\right ).
\end{equation}
%
Taking into account that the integral
kernel of $\sfD^{(1)}_V(\delta)$ is symmetric,
we obtain from~\eqref{eq:pntws_bnd} using the Schur test that
%
\begin{equation*}\label{eq:wtD1}
\begin{split}
\|\sfD^{(1)}_V(\delta)\| &
\le
C_1
\int_{\dR^2} G_{\frac{\alpha}{4}}(x) {\mathsf{d}} x
+
C_1\frac{\kappa}{2}
\int_{\dR^2} G_{\frac{\alpha}{4}}(x)|x| {\mathsf{d}} x
\\
& =
C_1\frac{1}{2}
\int_0^\infty e^{-\frac{\alpha}{4} r} {\mathsf{d}} r +
C_1\frac{\kappa}{4}
\int_0^\infty e^{-\frac{\alpha}{4} r} r {\mathsf{d}} r\\
& =
C_1
\left (\frac{2}{\alpha} + \frac{4\kappa}{\alpha^2}\right)
\le C_1
\left ( \frac{4}{\alpha} + \frac{4}{\alpha^2}\right ) =: C_2,
\end{split}
\end{equation*}
%
where in the last step we employed that $\kappa = \sqrt{\frac14\alpha^2 + \dl^2}
\le \frac12\alpha + \dl \le \frac12\alpha + 1$ for all $\dl \in [0,1]$.
Thus, the kernel $\sfD_V^{(1)}(\delta)(x,y)$ defines
the operator $\sfD_V^{(1)}(\delta)\in\cB(L^2(\dR^2))$.
Self-adjointness of $\sfD_V^{(1)}(\delta)$
is a consequence of the identity
$\sfD_V^{(1)}(\delta)(x,y) = \sfD_V^{(1)}(\delta)(y,x)$.
\smallskip
\noindent {\rm (ii)}\,
For $x,y\in\mathbb{R}^2$, we introduce
$\rho_\beta(x,y) :=
\left | r_\beta (x)- r _\beta (y)\right |^2$,
%
where the mapping $r_\beta\colon \dR^2\rightarrow\dR^3$ is as
in~\eqref{eq:rbeta}.
A simple computation yields
%
\begin{equation*}
\rho_\beta(x,y)
=
|x-y|^2 + |f(x) - f(y)|^2 \beta^2.
\end{equation*}
%
Furthermore, we define the function $F\colon\mathbb{R}_+\rightarrow \mathbb{R}_+$
by $F(s) := \frac{\mathrm{e}^{-\kappa\sqrt{s}}}{4\pi \sqrt{s}}$
and compute its first and second derivatives
%
\[
\begin{split}
F'(s) & = -\frac{\mathrm{e}^{-\kappa\sqrt{s}} \left(\kappa s^{1/2} + 1\right)}{8\pi s^{3/2}} = -F(s)\frac{\kappa s^{1/2}+1}{2s},\\
F''(s)& =
\frac{\mathrm{e}^{-\kappa \sqrt{s}}\big[\kappa^2 s + 3 \kappa s^{1/2} + 3\big]}{16\pi s^{5/2}}= F(s) \frac{\kappa^2 s + 3 \kappa s^{1/2} + 3}{4s^2} .
\end{split}
\]
%
Taylor expansion of $F(\cdot)$ in the vicinity of $s \in (0,\infty)$
with the remainder in the Lagrange form reads as
follows
%
\[
F(t) = F(s) + F'(s)(t-s) +
F''(s + \theta\cdot(t-s))\frac{(t-s)^2}{2}\,,
\qquad
\theta= \theta(s,t) \in (0,1).
\]
%
For $x,y\in\dR^2$ we define an auxiliary function
$\mu\colon\dR^2\times\dR^2\rightarrow \dR_+$ by
%
\[
\mu(x,y) :=
\rho_0(x,y) +
\theta(\rho_0(x,y),\rho_\beta(x,y))
|f(x)-f(y)|^2\beta^2.
\]
%
For the sake of brevity we denote
\[
H(x,y) :=
\frac{|f(x) - f(y)|^2(\kappa|x-y| + 1)}{2|x-y|^2},
\quad
K(x,y) := \frac{|\nabla f(x)|^2 + |\nabla f(y)|^2 }{4},
\]
%
and
%
\begin{subequations}
\begin{align}
K_1(x,y) &
:=
\left(K(x,y) - H(x,y)\right )\beta^2 = E(x,y)\beta^2,\label{eq:K1} \\
K_2(x,y) &
:=
\left (g_\beta(x)g_\beta(y)\right )^{1/2} - 1 - \beta^2 K(x,y),
\label{eq:K2}\\
K_3(x,y) &
:=
H(x,y)
\left (1 -
\left (g_\beta(x)g_\beta(y)\right )^{1/2} \right )\beta^2,\label{eq:K3}\\
K_4(x,y) &
:=
g_\beta(x)^{1/2} F''(\mu(x,y)) g_\beta(y)^{1/2}
\frac{|f(x) - f(y)|^4}{2}\beta^4.\label{eq:K4}
\end{align}
\end{subequations}
%
Dependence of the above kernels on $\beta$
and $f$ is not indicated in the notation, as no confusion can arise.
Thus, the integral kernel $\wDb(\delta)(x,y)$ can be decomposed as
%
\[
\begin{split}
\wDb(\delta)(x,y)
& =
V(x)\left [\Qb(\kappa)(x,y) - \sfQ_0(\kappa)(x,y) \right ] V(y)\\
& =
V(x)\left (G_\kappa(x-y)\sum_{j=1}^3 K_j(x,y) +
K_4(x,y)\right )V(y).
\end{split}
\]
%
Hence, the expansion~\eqref{eq:representation_kernel} holds with the integral kernel of $\wDb^{(2)}(\delta)$ given by
%
\begin{equation}\label{eq:wDb}
\wDb^{(2)}(\delta)(x,y)\! =
\frac{V(x)V(y)}{\beta^4}
\big [G_\kappa(x-y)
\left (K_2(x,y) + K_3(x,y)\right )
+ K_4(x,y)\big ].
\end{equation}
%
With the aid of the definitions~\eqref{eq:K2},~\eqref{eq:K3}
for the kernels
$K_j(\cdot,\cdot)$, $j=2,3$,
one obtains using $\beta \in (0,1]$
and $|\nabla f| \le L_f$ that
%
\begin{equation}\label{eq:estimates_K23}
|K_2(x,y)|
\le
C_{2,f}\beta^4
\qquad\text{and}\qquad
|K_3(x,y)| \le
C_{3,f}\left (\kappa|x-y| +1\right)\beta^4,
\end{equation}
%
with some constants $C_{2,f}, C_{3,f} > 0$.
Taking into account that $F''$ is a decreasing positive function and using that $\beta \in (0, 1]$
we estimate $K_4$
in~\eqref{eq:K4} as
%
\begin{equation}\label{eq:estimate_K4}
|K_4(x,y)|
\le
C_{4,f}
G_{\kappa}(x-y) \left (\kappa^2|x-y|^2 + 3\kappa|x-y| + 3\right )\beta^4,
\end{equation}
%
with some constant $C_{4,f} > 0$.
Finally, combining the estimates~\eqref{eq:VGV2},~\eqref{eq:estimates_K23},
~\eqref{eq:estimate_K4}, and the expression
for $\wDb^{(2)}(\delta)(\cdot,\cdot)$ in~\eqref{eq:wDb} we end up with
%
\[
|\wDb^{(2)}(\delta)(x,y)|
\le
C_1C_3' G_{\frac{\alpha}{4}}(x-y)
\left [5+ 4\kappa|x-y| + \kappa^2|x-y|^2\right ],
\]
%
where $C_3' := \max\{C_{2,f},
C_{3,f},C_{4,f}\}$
and $C_1$ is as in~\eqref{eq:C1}.
%
Applying the Schur test once again we get
%
\begin{equation*}\label{eq:wtD2}
\begin{split}
\|\wDb^{(2)}(\delta)\| &
\le
C_1C_3'
\int_{\dR^2} G_{\frac{\alpha}{4}}(x)
\left [5 + 4\kappa|x| + \kappa^2|x|^2\right ] {\mathsf{d}} x\\
& =
\frac12
C_1 C_3'
\int_0^\infty e^{-\frac{\alpha}{4} r}\left [5 + 4\kappa r + \kappa^2r^2\right ] {\mathsf{d}} r
=
C_1 C_3'
\left [ \frac{10}{\alpha} +
\frac{32}{\alpha^2}\kappa + \frac{64}{\alpha^3}\kappa^2\right ]\\
&
\le
C_1 C_3'
\left [ \frac{10}{\alpha} +
\frac{32}{\alpha^2}\left (\frac{\alpha}{2}+1\right ) + \frac{64}{\alpha^3}\left (\frac{\alpha^2}{4}+1\right)\right ] =: C_3,
\end{split}
\end{equation*}
%
where we used the bounds $\kappa^2 \le \frac14\alpha^2 + 1$ and $\kappa \le \frac12\alpha + 1$.
Thus, we have shown $\wDb^{(2)}(\dl)\in \cB(L^2(\dR^2))$. Self-adjointness of $\wDb^{(2)}(\dl)$ follows from
$\wDb^{(2)}(\dl)(x,y) = \wDb^{(2)}(\dl)(y,x)$.
\end{proof}
In the next proposition we show real analyticity
of $\wDb(\delta)$ with respect to $\delta$ and $\beta$. Furthermore, we estimate the norm of
$\partial_\delta \wDb(\delta)$.
\begin{prop}\label{prop:analytic}
Let the assumptions be as in Proposition~\ref{prop:expansion}.
Then the following claims hold.
%
\begin{myenum}
\item The operator-valued function $(0,1)^2\ni(\delta,\beta)\mapsto \wDb(\dl)$ is real analytic in both arguments.
\item $\|\partial_\dl \wDb(\dl)\| = \cO_{\rm u}(1)$
as $\beta \rightarrow 0+$.
\end{myenum}
\end{prop}
\begin{proof}
\noindent (i) Combining~\cite[Thm. III 3.12]{Kato} (and the discussion in~\cite{Kato} following it) with the claims of Proposition~\ref{prop:expansion} we conclude that it suffices to check real analyticity with respect to $\delta,\beta \in (0,1)$ of
the scalar-valued functions
%
\begin{equation*}\label{key}
(0,1)^2\ni(\delta,\beta)\mapsto \left (\wDb(\dl)h_1,h_2\right )_{L^2(\dR^2)},
\end{equation*}
%
where $h_1,h_2\in C^\infty_0(\dR^2)$.
The latter follows from real analyticity of $(0,1)^2\ni(\delta,\beta)\mapsto\Db(\dl)$ in $\delta$ and $\beta$, because the function $V$ is locally bounded.
\noindent (ii)
Differentiating the integral kernel $\wDb(\delta)(x,y)$ with respect
to $\dl$ we find
%
\begin{equation*}\label{key}
\begin{split}
\partial_\dl\wDb(\delta)(x,y)
& =
V(x)\partial_\dl(\Db(\delta)(x,y)) V(y)\\
& =
\frac{\delta}{\kappa}
V(x)
\left [g_\beta(x)^{1/2}
\partial_\kappa \big(G_\kappa(r_\beta(x) - r_\beta(y))\big)
g_\beta(y)^{1/2} - \partial_\kappa \big(G_\kappa(x - y)\big)\right ] V(y)\\
& =
\frac{\delta}{4\pi\kappa}
V(x)
\left [e^{-\kappa|x - y|} - g_\beta(x)^{1/2}
e^{-\kappa|r_\beta(x) - r_\beta(y)|}
g_\beta(y)^{1/2} \right ] V(y).
\end{split}
\end{equation*}
%
Next, we show that the integral operator
$\partial_\dl \wDb(\dl)\colon L^2(\dR^2)\rightarrow L^2(\dR^2)$ associated with the above
kernel satisfies
%
\begin{equation}\label{eq-2D}
\lim _{\dl'\to \dl }
\left\|
\frac{\wDb (\dl')- \wDb (\dl )}{\dl '-\dl } - \partial_ \dl \wDb (\dl ) \right\| = 0.
\end{equation}
%
Applying the mean-value theorem for the
integral kernels, we get
%
\[
\left|
\frac{\wDb(\dl ')(x,y)-
\wDb(\dl )(x,y)
}{\dl'-\dl }-
\partial_\dl\wDb(\dl)(x,y)
\right|
\leq
|\partial_{\dl\dl}^2\wDb(\dl_\star )(x,y) (\dl'-\dl )|\,,
\]
%
where $\delta_\star$ lies between $\delta '$ and $\delta$.
Standard calculations yield
%
\[
\begin{split}
\partial^2_{\dl\dl}\wDb(\delta )(x,y)
& =
\frac{1}{4\pi} V(x) \left(
\big(g_\beta(x)g_\beta(y)\big)^{1/2}
\left(\frac{\dl ^2}{\kappa^3}-\frac{1}{\kappa }+\frac{\dl^2|r_\beta (x)-r_\beta (y)|}{\kappa^2 }\right) e ^{-\kappa |r_\beta (x)-r_\beta (y)|}
\right.
\\
&\qquad\qquad\qquad\qquad\left. - \left( \frac{\dl^2}{\kappa^3} -\frac{1}{\kappa}+\frac{\dl^2|x-y| }{\kappa^2} \right) e ^{-\kappa |x-y|}
\right)V(y).
\end{split}
\]
Using the inequality $|r_\beta (x)-r_\beta (y)|\leq 2|x-y|$ which holds for $L_f \beta \leq \sqrt{3}$ together
with the estimates $\|g_\beta\|_\infty \le 1+ L_f$ and $\kappa >\alpha /2$ we get
\begin{equation*}
|\partial^2_{\dl\dl}\wDb(\dl )(x,y) |
\leq
\frac{1}{4\pi}
\big(1+L_f\big)
V(x)
\left(
\frac{4}{\alpha }
+
\frac{16\dl^2}{\alpha^3}
+
\frac{12\dl^2}{\alpha^2}|x-y|
\right)
e^{-\frac{\alpha}{2} |x-y|}V(y).
\end{equation*}
%
Making use of the fact that $\wDb(\dl)(x,y) = 0$ for $(x,y)\notin\cU\cup\cV$
with $\cU,\cV$ as in~\eqref{eq:UV} and performing the analysis as in part~(i) of the proof of Proposition~\ref{prop:expansion} we get
%
\begin{equation*}
|\partial^2_{\dl\dl}\wDb(\dl)(x,y) |
\leq
\frac{C}{4\pi}
\big(1+|x-y|\big)
e^{-\frac{\alpha }{4} |x-y|},
\end{equation*}
%
with some constant $C = C(\alpha,f) > 0$.
By the Schur test we obtain
%
\begin{equation*}
\begin{split}
\left\|\frac{\wDb(\dl ')- \wDb(\dl )}{\dl '-\dl } - \partial_ \dl \wDb (\delta ) \right\|
& \leq
|\dl - \dl'|
\frac{C}{4\pi}
\int_{\mathbb{R}^2}(1+|x|)e^{-\frac{\alpha }{4} |x|}{\mathsf{d}} x \\
& =
|\dl - \dl'|
C
\left(\frac{2}{\alpha} + \frac{8}{\alpha^2}\right ).
\end{split}
\end{equation*}
%
Therefore, the convergence~\eqref{eq-2D} is verified.
Furthermore, the subset of $\dR^2\times\dR^2$ on which $\partial_\dl\wDb(\delta)(x,y) \ne 0$
can be covered
by the two (intersecting) sets $\cU$ and $\cV$ defined in~\eqref{eq:UV}.
%
Using the inequalities
$|r_\beta(x) - r_\beta(y)| \ge |x-y|$ and
$\,\frac14|x-y| \ge \frac{|x| + |y|}{4} - \frac12R$ for $(x,y)\in\cU\cup\cV$
we get for all $\beta \in (0,1]$ the estimate
%
%
\begin{equation*}
|\partial_\dl\wDb(\delta)(x,y)| \le
\frac{e^{\alpha R} \dl}{2\pi\alpha}\left[2+ L_f\right ] e^{-\frac{\alpha}{4}(|x|+|y|)}.
\end{equation*}
%
Hence, by the Schur test we find
%
\begin{equation*}\label{key}
\begin{split}
\|\partial_\dl\wDb(\dl)\|&
\le
\frac{e^{\alpha R} \dl}{2\pi\alpha}
\left[2+ L_f\right ]
\sup_{x\in\dR^2} \int_{\dR^2} e^{-\frac{\alpha}{4}(|x|+|y|)}{\mathsf{d}} y\\
& =
\frac{e^{\alpha R} \dl}{\alpha}
\left[2+ L_f\right ]
\int_0^\infty e^{-\frac{\alpha}{4} r}r{\mathsf{d}} r
=
\frac{16e^{\alpha R} \dl}{\alpha^3}
\left[2+ L_f\right ],
\end{split}
\end{equation*}
%
and the claim of (ii) follows.
\end{proof}
In what follows we employ for $V \equiv 1$ the shorthand notation $\sfD^{(1)}(\delta) := \sfD^{(1)}_1(\delta)$.
\begin{cor}\label{cor:cD}
The integral kernel $\sfD^{(1)}(\delta)(\cdot,\cdot)$
in~\eqref{eq:kernel2} with $\delta\in[0,1]$
and $V\equiv 1$ satisfies
%
\begin{equation}\label{eq:integrability}
\cD_{\alpha,f}(\delta) :=
2\alpha^3
\int_{\dR^2}\int_{\dR^2}\sfD^{(1)}(\delta)(x,y){\mathsf{d}} x {\mathsf{d}} y < \infty.
\end{equation}
%
In addition, the function $[0,1]\ni\delta\mapsto\cD_{\alpha,f}(\delta)$
is continuous.
\end{cor}
\begin{proof}
Note that there exists an integrable majorant for the integrand in~\eqref{eq:integrability}
with $\dl \in [0,1]$ given by
%
\[
\dR^2\times\dR^2\ni (x,y) \mapsto
2\alpha^3 C_1 G_{\frac{\alpha}{4}}(x-y)
\left [1+ \left (\frac{\alpha}{4} + \frac12\right ) |x-y|\right ]\chi_{\cT}(x,y),
\]
%
where $C_1$ is as in~\eqref{eq:pntws_bnd}, $\cT = (\cS\times\dR^2)\cup(\dR^2\times\cS)$,
and $\chi_\cT\colon \dR^2\times\dR^2\rightarrow \{0,1\}$ is the characteristic function of $\cT$.
Hence, finiteness of $\cD_{\alpha,f}(\delta)$
follows directly from the asymptotic behaviour of $G_{\frac{\alpha}{4}}(\cdot)$.
Furthermore, taking into account the pointwise continuity of the integrand
in~\eqref{eq:integrability} with respect to $\delta$, continuity of $[0,1]\ni\delta\mapsto\cD_{\alpha,f}(\delta)$
is a consequence of the dominated convergence theorem.
\end{proof}
Finally, we obtain
an alternative formula for $\cD_{\alpha,f}(0)$
in terms of the Fourier transform of $f$. In the proof
of this proposition we use an integral representation of the
relativistic Schr\"odinger operator; see~\cite{IT93} and also~\cite[\S 7.12]{LL01}.
\begin{prop}\label{prop:fractional}
The value $\cD_{\alpha,f} := \cD_{\alpha,f}(0)$ in~\eqref{eq:integrability} with $\delta = 0$
can be represented as
%
\[
\cD_{\alpha,f} =
\int_{\dR^2} |p|^2\left (\alpha^2 - \frac{2\alpha^3}{\sqrt{4|p|^2 + \alpha^2} + \alpha}\right) |\wh f(p)|^2{\mathsf{d}} p > 0.
\]
\end{prop}
\begin{proof}
First, we decompose $\cD_{\alpha,f}$
as $\cD_{\alpha,f} = \cD_{\alpha,f}^{(1)} - \cD_{\alpha,f}^{(2)}$ with
%
\begin{equation*}
\begin{split}
\cD_{\alpha,f}^{(1)} & := \frac{\alpha^3}{2} \int_{\dR^2}\int_{\dR^2}
\frac{e^{-\frac{\alpha}{2}|x-y|}}{4\pi|x-y|}
\left (|\nabla f(x)|^2 + |\nabla f(y)|^2 \right ) {\mathsf{d}} x {\mathsf{d}} y,\\[0.4ex]
\cD_{\alpha,f}^{(2)} & := \frac{\alpha^3}{2}\int_{\dR^2}\int_{\dR^2}
\frac{e^{-\frac{\alpha}{2}|x-y|}}{4\pi|x-y|}
\frac{|f(x) - f(y)|^2}{|x-y|^2} \left (\alpha |x-y|+2\right ) {\mathsf{d}} x {\mathsf{d}} y.
\end{split}
\end{equation*}
%
Then, we find by elementary computations
%
\begin{equation}\label{eq:D1}
\begin{split}
\cD_{\alpha,f}^{(1)} & =
\alpha^3\left ( \int_{\dR^2} |\nabla f(x)|^2{\mathsf{d}} x\right )
\left (
\int_{\dR^2}
\frac{e^{-\frac{\alpha}{2}|y|}}{4\pi|y|} {\mathsf{d}} y\right )\\
& =
\frac{\alpha^3}{2}\left ( \int_{\dR^2} |p|^2 |\wh f(p)|^2{\mathsf{d}} p\right )
\left (
\int_{\dR_+}
e^{-\frac{\alpha}{2} r} {\mathsf{d}} r\right )
=
\alpha^2\int_{\dR^2} |p|^2 |\wh f(p)|^2{\mathsf{d}} p.
\end{split}
\end{equation}
%
Next, using the identities~\cite[Eq. (2.2) and (2.4) for $d =2$]{IT93}
we get
%
\begin{equation*}\label{key}
\begin{split}
\int_{\dR^2}
\left (\sqrt{|p|^2 + \tfrac14\alpha^2} - \tfrac12\alpha\right )|\wh f(p)|^2 {\mathsf{d}} p
& =
-\int_{\dR^2}\int_{\dR^2}
\left (f(x)f(y) - f(x)^2\right )n(x-y){\mathsf{d}} x{\mathsf{d}} y\\
& = \frac12\int_{\dR^2}\int_{\dR^2}
\left| f(x) - f(y)\right|^2 n(x-y){\mathsf{d}} x{\mathsf{d}} y,
\end{split}
\end{equation*}
%
for $n(\cdot)$ given by
%
\begin{equation*}\label{key}
\begin{split}
n(x) & =
2(2\pi)^{-3/2} \left (\frac{\alpha}{2}\right )^{3/2}
|x|^{-3/2}K_{3/2}\left (\frac{\alpha}{2}|x|\right )\\
& =
2(2\pi)^{-3/2} \left (\frac{\alpha}{2}\right )^{3/2}
|x|^{-3/2} \frac{\left (\frac{\pi}{2}\right )^{1/2}
\exp\left (-\frac{\alpha}{2}|x|\right )
\left (\frac{2}{\alpha|x|}+1\right )}
{\left (\frac{\alpha}{2}|x|\right )^{1/2}}\\
& =
\frac{\exp\left (-\frac{\alpha}{2}|x|\right )}{4\pi|x|}\frac{1}{|x|^2}\left (2+\alpha|x|\right ),
\end{split}
\end{equation*}
%
where in the second step we used the representation
%
\begin{equation*}\label{key}
K_{3/2}(x) =
\frac{\left (\frac{\pi}{2}\right )^{1/2}
\exp\left (-x\right )
\left (\frac{1}{x}+1\right )}{x^{1/2}}
\end{equation*}
%
for the modified Bessel function $K_{3/2}(\cdot)$ of order $\nu = \frac32$.
Hence, we get
%
\begin{equation}\label{eq:D2}
\cD^{(2)}_{\alpha,f}
=
\alpha^3
\int_{\dR^2}
\left (\sqrt{|p|^2 + \tfrac14\alpha^2} - \tfrac12\alpha\right )|\wh f(p)|^2{\mathsf{d}} p
=
2\alpha^3
\int_{\dR^2}
\frac{|p|^2}{\sqrt{4|p|^2 + \alpha^2} + \alpha} |\wh f(p)|^2{\mathsf{d}} p.
\end{equation}
%
Finally, combining~\eqref{eq:D1} and~\eqref{eq:D2} we obtain
%
\begin{equation*}
\cD_{\alpha,f} = \cD^{(1)}_{\alpha,f} - \cD^{(2)}_{\alpha,f}
= \int_{\dR^2}|p|^2\left (\alpha^2 -
\frac{2\alpha^3}{\sqrt{4|p|^2 + \alpha^2} + \alpha}\right )|\wh f(p)|^2
{\mathsf{d}} p.
\end{equation*}
%
In particular, $\cD_{\alpha,f} > 0$: the expression in the round
brackets in the integrand is positive if and only if $\sqrt{4|p|^2 + \alpha^2} + \alpha > 2\alpha$,
which holds for all $p \ne 0$, and $\wh f\not\equiv 0$ since $f\not\equiv 0$.
\end{proof}
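As a side observation, not needed in the sequel, this formula makes the dependence of $\cD_{\alpha,f}$ on the coupling constant transparent: one can check that the expression in the round brackets is bounded above by $|p|^2$ and converges to $|p|^2$ pointwise as $\alpha\rightarrow\infty$, so that, by dominated convergence and the unitarity of $\cF$,
\[
\lim_{\alpha\rightarrow\infty}\cD_{\alpha,f}
=
\int_{\dR^2}|p|^4 |\wh f(p)|^2 {\mathsf{d}} p
=
\|\Delta f\|^2_{L^2(\dR^2)}.
\]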
\section{Proof of Theorem~\ref{thm:main}}
\label{sec:proof}
We split the proof of the main result into three steps.
\noindent {\it\underline{Step 1: Spectral equation.}}
In order to derive the spectral equation, we introduce an auxiliary
function\footnote{Introducing $V$ is a purely technical step, needed
for a regularization. The final result does not depend on the
particular choice of $V$.} $V(x) := \mathrm{e}^{\frac{\alpha}{4} |x|}$, $\alpha >
0$. Note that $V$ satisfies the growth condition specified in
Proposition~\ref{prop:expansion} with $c = 1$ and moreover $V^{-1}
\in L^2(\dR^2)\cap L^\infty(\dR^2)$. Furthermore, we associate with the kernel
in~\eqref{eq:kernel1} the operator $\wDb(\delta)\in\cB(L^2(\dR^2))$
as in Proposition~\ref{prop:expansion}. Recall that the operator
$\wDb(\delta)$ admits the representation
\begin{equation}\label{eq:expansion_recall}
\wDb(\delta) =
\sfD^{(1)}_V(\delta )\beta^2 + \wDb^{(2)}(\delta )\beta^4\,,
\end{equation}
where $\sfD^{(1)}_V(\delta), \wDb^{(2)}(\delta) \in \cB(L^2(\dR^2))$
and we also have $\|\wDb(\delta)\| = \cO_{\rm u}(\beta^2)$ as $\beta
\rightarrow 0+$. Next, we define the product
\begin{equation*}
\wB (\delta ):= V^{-1} \sfB_\alpha (\delta ) V^{-1},\qquad
\delta > 0,
\end{equation*}
where $\sfB_\alpha(\dl)$ is as in~\eqref{eq:B}.
The spectral condition~\eqref{eq:reformulation} can be rewritten as
\begin{equation*}\label{eq-spectral2}
\forall\, \kappa > \frac{\alpha}{2},\qquad
\dim\ker\left (\mathsf{H}_{\alpha ,\beta} + \kappa^2\right )
=
\dim\ker\left (\sfI - \alpha\wB(\dl)\wDb(\dl)\right ).
\end{equation*}
To compute the dimension of $\ker(\sfI -\alpha\wB(\dl)\wDb(\delta))$ we investigate the asymptotic behaviour of $\wB(\dl)$ as $\dl\rightarrow 0+$.
First, we observe that the decomposition in Lemma~\ref{le-decompoB} yields
\begin{equation*}
\wB(\delta)
=
\frac{\alpha^2}{2} V^{-1} \sfR(\delta^2) V^{-1}+\sfN_{\alpha,V}(\delta ) \,,
\end{equation*}
where $\sfN_{\alpha,V}(\delta ) := V^{-1}\sfN_\alpha(\delta )V^{-1}$. Lemma~\ref{le-decompoB}\,(iii) implies that
$\dR_+\ni\delta \mapsto\sfN_{\alpha,V}(\delta )$ is real analytic.
Observe that $\sfR(\delta^2)$ is an integral operator with the kernel $\frac{1}{2\pi }K_0 (\delta |x-y|)$,
where $K_0(\cdot)$ is the modified Bessel function
of the second kind and order zero; {\it cf.}\,~\cite[\S 9.6]{AS64}.
The function $K_0$ admits an asymptotic expansion
(see \cite[Eq. 9.6.13]{AS64})
\begin{equation}\label{eq:K0asymp}
K_0 (z)=
-\ln \frac{z}{2} - \gamma +\cO(z^2\ln z), \qquad z\rightarrow 0+\,,
\end{equation}
where $\gamma \approx 0.577\dots$ is the Euler-Mascheroni constant.
In accordance with the asymptotics~\eqref{eq:K0asymp}, the
operator-valued function $\delta \mapsto \frac{1}{2} V^{-1}
\sfR(\delta^2) V^{-1}$ can be decomposed as follows
\begin{equation*}\label{eq-decomG2}
\frac{1}{2} V^{-1} \sfR(\delta^2) V^{-1}
=
\sfL(\delta ) + \sfM (\delta ) \,,
\end{equation*}
where
\begin{equation*}\label{eq:sfL}
\sfL(\delta ):=
-\frac{\ln \dl}{4\pi }
\left (\cdot,V^{-1} \right )_{L^2(\dR^2)} V^{-1}
\end{equation*}
and $\sfM (\delta )\colon L^2(\dR^2)\rightarrow L^2(\dR^2)$, $\delta > 0$, is a bounded integral operator with the kernel
\begin{equation*}
\sfM (\delta) (x,y)
:=
\frac{1}{4\pi}V^{-1}(x)\left[ K_0 (\delta |x-y|) + \ln \delta \right] V^{-1}(y)\,.
\end{equation*}
Define also the bounded integral operator $\sfM(0)\colon L^2(\dR^2)\rightarrow L^2(\dR^2)$ with the kernel
\begin{equation*}
\sfM(0)(x,y)
:= -\frac{1}{4\pi}V^{-1}(x)\left[\gamma +\ln \frac{|x-y|}{2} \right] V^{-1}(y)\,.
\end{equation*}
Mimicking the arguments from~\cite[Prop. 3.2]{S76}
we conclude that the operator-valued function $(0,\infty)\ni \delta\mapsto \sfM (\delta)$ is real analytic and that
\begin{equation*}\label{eq:M_cont}
\left \|\sfM (\delta) - \sfM(0)\right \|\rightarrow 0,\qquad \delta\rightarrow 0+.
\end{equation*}
We define the integral operator $\sfM'(\dl)\colon L^2(\dR^2)\rightarrow L^2(\dR^2)$ via the kernel
\begin{equation*}
\sfM'(\dl)(x,y)
:= \frac{1}{4\pi\dl} V^{-1}(x)\big (
1 -\dl K_1(\dl|x-y|) |x-y|\big ) V^{-1}(y),
\end{equation*}
where $K_1(\cdot)$ is the modified Bessel function
of the second kind and order $\nu = 1$; {\it cf.}\,~\cite[\S 9.6]{AS64}. Analogously, for $\sfM(\dl)$ one checks the following convergence
\begin{equation*}\label{eq-M}
\lim _{\dl'\to \dl }\left\|\frac{\sfM (\dl ')- \sfM (\dl )}{\dl '-\dl } - \sfM'(\dl ) \right\| = 0.
\end{equation*}
Consequently, $\sfM' (\dl )$ can be identified with $\partial_{\delta} \sfM (\dl )$.
Furthermore, using the inequality $1 - x K_1(x) < x$
we get by the Schur test
\begin{equation}\label{eq:Mprime}
\begin{split}
\|\partial_{\delta} \sfM (\dl )\|
&
\le
\frac{1}{4\pi\dl}
\sup_{x\in\dR^2}
\int_{\dR^2} e^{-\frac{\alpha}{4}|x|}e^{-\frac{\alpha}{4}|y|}
\big| 1 -\dl|x-y| K_1(\dl|x-y|)\big |{\mathsf{d}} y \\
& \le
\frac{1}{4\pi}
\sup_{x\in\dR^2}\left (
e^{-\frac{\alpha}{4}|x|}
\int_{\dR^2} e^{-\frac{\alpha}{4}|y|}(|x| +|y|) {\mathsf{d}} y\right )\\
&
=
\frac{1}{2}
\left [
\left (\sup_{x\in\dR^2} |x|e^{-\frac{\alpha}{4}|x|}\right )
\int_0^\infty e^{-\frac{\alpha}{4} r} r {\mathsf{d}} r +
\int_0^\infty e^{-\frac{\alpha}{4} r} r^2{\mathsf{d}} r\right ]
=
\frac{32}{\alpha^3}\big(e^{-1} + 2\big).
\end{split}
\end{equation}
Next, denote
\begin{equation*}
\Gb (\delta )
:=
\left (\alpha^2\sfM (\delta) + \sfN_{\alpha,V}(\delta)\right )\wDb (\delta).
\end{equation*}
Real analyticity of $\wDb (\delta)$, $\sfN_{\alpha,V}(\delta)$, and
$\sfM(\dl)$ with respect to $\dl,\beta \in (0,1)$ implies that
$\Gb (\delta )$ is also real analytic in $\delta,\beta \in
(0,1)$. It follows from the
expansion~\eqref{eq:expansion_recall} and the above estimates that
$\Gb (\delta )$ is a bounded operator, whose norm behaves as $\|
\Gb(\delta)\| = \cO_{\rm u}(\beta^2)$ as $\beta \rightarrow 0+$. Using
Lemma~\ref{le-decompoB}\,(iv), Propositions~\ref{prop:expansion}
and~\ref{prop:analytic}, and the estimate~\eqref{eq:Mprime}, we get,
applying the triangle inequality for the operator norm,
\begin{equation}\label{eq:Gprime}
\begin{split}
\|\partial_\dl\Gb (\dl )\|
&\le
\big[\alpha^2 \|\partial_{\delta} \sfM (\dl )\| + \|\partial_\dl\sfN_{\alpha,V}(\dl)\|\big ]
\|\wDb (\dl)\|\\
& \qquad\quad + \big[\alpha^2 \|\sfM (\dl)\|
+ \|\sfN_{\alpha,V}(\dl)\|\big]
\|\partial_\dl\wDb (\dl)\|
= \cO_{\rm u}(1), \qquad \beta \rightarrow 0+.
\end{split}
\end{equation}
Next, for all sufficiently small $\beta > 0$, the operator $\sfI -
\alpha\Gb (\delta )$ is invertible and $\sfI -
\alpha\wB(\delta)\wDb(\delta) $ can be factorized as
\begin{equation*}\label{eq-spectral4}
\sfI - \alpha\wB(\delta)\wDb(\delta)
=
(\sfI- \alpha\Gb (\delta ))
\left(\sfI- \Pb (\delta ) \right),
\end{equation*}
where $\Pb (\delta )$ is the rank-one operator given by
\begin{equation*}\label{key}
\begin{split}
\Pb (\delta )
& := (\sfI - \alpha\Gb (\delta ))^{-1}
\sfL(\delta ) \alpha^3\wDb (\delta ) \\
& = - \alpha^3\frac{\ln \delta }{4\pi }
\left ( \cdot,\wDb (\delta ) V^{-1} \right )_{L^2(\dR^2)} (\sfI - \alpha\Gb (\delta ))^{-1} V^{-1}.
\end{split}
\end{equation*}
Thus, we get for all sufficiently small $\beta > 0$
\begin{equation*}\label{key}
\forall \delta > 0, \qquad
\dim\ker\left (\sfI -\alpha\wB(\delta)\wDb(\delta)\right )
=
\dim\ker\left (\sfI - \Pb(\delta)\right ).
\end{equation*}
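Indeed, since $\sfI - \alpha\Gb(\delta)$ is boundedly invertible, the factorization above shows that $u\in\ker\big(\sfI - \alpha\wB(\delta)\wDb(\delta)\big)$ if and only if $u\in\ker\big(\sfI - \Pb(\delta)\big)$.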
Observe that $\dim\ker\left (\sfI - \Pb(\delta)\right ) \in \{0,1\}$. Using the relation $\dim\ker(\sfI - {\mathsf P}) = 1$
\iff~$\mathrm{Tr}\,{\mathsf P} = 1$ (true for any rank-one
operator ${\mathsf P}$), we find that $\dim\ker (\sfI - \Pb (\delta )) = 1$
\iff
\begin{equation}\label{eq-spectralfinal}
\boxed{4\pi +\alpha^3\ln \delta
\left ( \wDb (\delta )V^{-1},
(\sfI- \alpha\Gb (\delta ))^{-1}V^{-1}\right )_{L^2(\dR^2)} =0 \,.}
\end{equation}
In view of this reduction,
for all sufficiently small $\beta > 0$, each solution $\delta > 0$ of the equation~\eqref{eq-spectralfinal} corresponds to a simple eigenvalue $-\frac{\alpha^2}{4} - \delta^2$
of $\mathsf{H}_{\alpha ,\beta}$.
\vspace{0.6ex}
\noindent\underline{\emph{Step 2: Existence and uniqueness of solution for~\eqref{eq-spectralfinal}.}}\,
Define the function
\begin{equation*}\label{key}
\eta_\alpha(\beta,\delta)
:=
2\alpha^3
\left ( \wDb (\delta ) V^{-1} ,
(\sfI- \alpha\Gb (\delta ))^{-1} V^{-1} \right )_{L^2(\dR^2)}.
\end{equation*}
We remark that the function $\eta_\alpha(\cdot,\cdot)$ is real analytic
in $\dl, \beta > 0$ lying in a sufficiently small right neighbourhood of
the origin, thanks to real analyticity with respect to the same
parameters of the operator-valued functions $\wDb(\delta)$ and
$\Gb(\delta)$; see Proposition~\ref{prop:analytic} and the
discussion in {\it Step 1}. The spectral
condition~\eqref{eq-spectralfinal} can be equivalently written as
\begin{equation}\label{eq:spectral_easy}
\eta_\alpha(\beta,\delta) = -\frac{8\pi}{\ln \delta}.
\end{equation}
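Indeed, the inner product in~\eqref{eq-spectralfinal} equals $\frac{1}{2\alpha^3}\,\eta_\alpha(\beta,\delta)$, so the left-hand side of~\eqref{eq-spectralfinal} reads $4\pi + \frac{\ln\delta}{2}\,\eta_\alpha(\beta,\delta)$, which vanishes precisely when~\eqref{eq:spectral_easy} holds.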
Applying the Neumann series argument and
using that $\|\Gb(\delta)\| = \cO_{\rm u}(\beta^2)$ ($\beta \rightarrow 0+$) we find
\begin{equation*}\label{key}
\|\left (\sfI - \alpha\Gb (\delta )\right )^{-1} - \sfI\| = o_{\rm u}(1),
\qquad \beta\rightarrow 0+.
\end{equation*}
Hence, we conclude from $\|\wDb(\delta)\| = \cO_{\rm u}(\beta^2)$ as
$\beta\rightarrow 0+$ that $\eta_\alpha(\beta,\delta) = \cO_{\rm u}(\beta^2)$ as $\beta\rightarrow 0+$. Combining the expansion~\eqref{eq:expansion_recall} and Corollary~\ref{cor:cD} we arrive at
\begin{equation}\label{eq:expansion_eta}
\begin{split}
\eta_\alpha(\beta,\delta)
& =
2\alpha^3
\beta^2\int_{\dR^2}\int_{\dR^2}
{\mathsf D}^{(1)}_V(\delta)(x,y) V^{-1}(x)V^{-1}(y) {\mathsf{d}} x {\mathsf{d}} y + \cO_{\rm u}(\beta^4) \\
& =
2\alpha^3
\beta^2\int_{\dR^2}\int_{\dR^2}
{\mathsf D}^{(1)}(\dl)(x,y) {\mathsf{d}} x {\mathsf{d}} y + \cO_{\rm u}(\beta^4)\\
& =
{\mathcal D}_{\alpha,f}(\delta)\beta^2 + \cO_{\rm u}(\beta^4),\qquad \beta\rightarrow 0+.
\end{split}
\end{equation}
Since ${\mathcal D}_{\alpha,f} = {\mathcal D}_{\alpha,f}(0) > 0$ by Proposition~\ref{prop:fractional},
Corollary~\ref{cor:cD} yields
that $\eta_\alpha(\beta,\delta) > 0$ for all sufficiently small $\delta, \beta > 0$. The continuous function $(0,1)\ni \delta\mapsto -\frac{8\pi}{\ln \delta}$
vanishes as $\delta\rightarrow 0+$ and its range coincides with $(0,\infty)$. Hence, for all sufficiently small $\beta > 0$ the equation~\eqref{eq:spectral_easy} has at least one solution $\delta(\beta) > 0$, which satisfies $\delta(\beta)\rightarrow 0+$ as $\beta\rightarrow 0+$,
taking the lower bound in Proposition~\ref{prop:lower_bound} into account. In particular, we have proved that $\#\sigma_{\rm d}(\mathsf{H}_{\alpha ,\beta}) \ge 1$
holds for all sufficiently small $\beta > 0$.
It remains to show that, in fact, $\#\sigma_{\rm d}(\mathsf{H}_{\alpha ,\beta}) = 1$ holds for all sufficiently small $\beta >0$. Indeed, the equation~\eqref{eq:spectral_easy} can be rewritten as
\begin{equation*}\label{eq:spectral_easy2}
\wt\eta_{\alpha}(\beta,\delta) = 0 \qquad
\text{with}~~~
\wt\eta_{\alpha}(\beta,\delta) := \exp\left (-\frac{8\pi}{\eta_\alpha(\beta,\delta)}\right) - \delta.
\end{equation*}
Suppose that $\beta > 0$ is small enough and that $\wt\eta_{\alpha}(\beta,\delta) = 0$ has two solutions
$\delta_1,\delta_2 \in (0,1)$ such that $\delta_1 < \delta_2$.
By Rolle's theorem there exists a point $\delta_\star \in (\delta_1,\delta_2)$ such that
\begin{equation}\label{eq:Rolle}
(\partial_\delta \wt\eta_{\alpha})(\beta,\delta_\star) = 0.
\end{equation}
Computing the partial
derivative of $\wt\eta_\alpha$ with respect to $\delta$ we get
\begin{equation}\label{eq:deriv_wt_eta}
\partial_\delta\wt\eta_{\alpha}(\beta,\delta)
=
\frac{8\pi\partial_{\delta}\eta_\alpha(\beta,\delta)}{\eta_\alpha(\beta,\delta)^2}\exp\left (-\frac{8\pi}{\eta_\alpha(\beta,\delta)}\right) - 1.
\end{equation}
Differentiating the operator-valued function
$(\sfI - \alpha\Gb(\dl))^{-1}$ with respect to $\dl$ we find
\begin{equation*}\label{der:G}
\begin{split}
& \lim_{\dl'\to \dl } \frac{( \sfI-\alpha \Gb (\dl '))^{-1}- (\sfI-\alpha \Gb (\dl ))^{-1}}{\dl'-\dl }\\
%
&\qquad = \alpha\lim_{\dl'\to \dl }
(\sfI-\alpha \Gb (\dl'))^{-1}
\frac{\Gb (\dl ')-\Gb (\dl) }{\dl'-\dl }(\sfI -\alpha \Gb (\dl))^{-1} \\
%
&\qquad\qquad =
\alpha(\sfI-\alpha \Gb (\dl ))^{-1} \partial_ \dl\Gb (\dl )
(\sfI-\alpha \Gb (\dl ))^{-1}.
\end{split}
\end{equation*}
Hence, differentiating the scalar function $\eta_\alpha$ with respect to $\delta$ and applying Propositions~\ref{prop:expansion},~\ref{prop:analytic} and the estimate~\eqref{eq:Gprime}, we end up with
\[
\begin{split}
\partial_{\delta}\eta_\alpha(\beta,\delta)
& =
2\alpha^3
\left (\partial_\delta \wDb (\delta ) V^{-1} ,
(\sfI- \alpha\Gb (\delta ))^{-1} V^{-1} \right )_{L^2(\dR^2)}\\
& \qquad +
2\alpha^4
\left ( \wDb (\delta ) V^{-1} ,
(\sfI-\alpha \Gb (\dl ))^{-1}
(\partial_\delta\Gb (\delta )) (\sfI- \alpha\Gb (\delta ))^{-1} V^{-1} \right )_{L^2(\dR^2)}\\
& = \cO_{\rm u}(1),
\quad\beta\rightarrow 0+.
\end{split}
\]
Eventually, we derive from~\eqref{eq:deriv_wt_eta} that $\partial_\delta\wt\eta_{\alpha}(\beta,\delta) = -1 + o_{\rm u}(1)$ as $\beta\rightarrow 0+$, which contradicts~\eqref{eq:Rolle} for all sufficiently small
$\beta > 0$.
\vspace{0.6ex}
\noindent\underline{\emph{Step 3: Asymptotic expansion.}}\,
Let $\dl(\beta) > 0$ be the unique solution of~\eqref{eq:spectral_easy} for sufficiently small
$\beta > 0$.
Substituting the expansion~\eqref{eq:expansion_eta} into
the spectral condition~\eqref{eq:spectral_easy}
and making an additional use of $\delta(\beta) = o(1)$
(as $\beta\rightarrow 0+$) we get
\begin{equation*}\label{key}
8\pi + \ln \delta(\beta)\beta^2
{\mathcal D}_{\alpha,f}(\delta(\beta))
+ o(\ln \delta(\beta)\beta^2) = 0,\qquad
\beta \rightarrow 0 +.
\end{equation*}
Applying Corollary~\ref{cor:cD} we obtain
\[
8\pi + \ln \delta(\beta)\beta^2
{\mathcal D}_{\alpha,f} + o(\ln \delta(\beta)\beta^2) = 0,\qquad\beta \rightarrow 0 +.
\]
Hence, we deduce
\begin{equation}\label{eq-asymdelta}
\delta(\beta) =
\exp \left( - \frac{8\pi}{{\mathcal D}_{\alpha,f}\beta^2}\right) \left (1+o(1) \right ),\qquad\beta \rightarrow 0 +.
\end{equation}
Finally, the asymptotic expansion of $\lambda_1^\alpha(\beta)$ in~\eqref{eq:main_expansion}
follows from~\eqref{eq-asymdelta} and the identity
$\lambda_1^\alpha(\beta) = -\frac14\alpha^2 - \delta^2(\beta)$.\qed
\section{Discussion}\label{sec:dis}
It might be possible to extend the main result of this paper
to less regular $f$ with non-compact support. A natural limitation of admissible generalizations is the finiteness of the constant ${\mathcal D}_{\alpha,f}$
in Theorem~\ref{thm:main}.
Apparently, a similar asymptotic analysis can be performed in space
dimensions $d \ge 4$, where not much is known apart from the result
in~\cite{LO16} mentioned above. We note that a convincing physical
motivation is so far missing in this case, and moreover one can
expect that here, for all sufficiently small $\beta>0$, the
discrete spectrum would be empty.
It is also worth noting that an analogous spectral problem can be
considered for the Robin Laplacian in a locally perturbed
half-space. In view of~\cite{EM14} one may expect that the existence
of the unique bound state for all sufficiently small $\beta > 0$
will depend on the function $f$, defining the profile of the
deformation. However, the technique to deal with the asymptotic
analysis should be different for the Robin spectral problem, because
a Birman-Schwinger-type principle with an explicitly given integral
operator is not available in this setting.
Finally, let us point out that in the present paper we have not
touched the case where the interaction support is a topologically
non-trivial surface, which could be regarded as a certain analogue of
spectral analysis in infinite, topologically nontrivial
layers~\cite{CEK04}. It is not so clear to what extent the main
result and the technique of the present paper can be generalized to
include such more involved geometries.
\subsection*{Acknowledgements}
P.E. and V.L. acknowledge the support by the grant No.~17-01706S of
the Czech Science Foundation (GA\v{C}R). S.K. and V.L. acknowledge
the support by the grant DEC-2013/11/B/ST1/03067 of the Polish
National Science Centre. Moreover, V.L. is grateful to the University of Zielona G\'{o}ra for the hospitality during a visit in February 2016, where a part of this paper was written.
S.K. thanks the Department of Theoretical Physics, NPI CAS in \v{R}e\v{z}, for the hospitality
during a visit in November 2016.
\section{Introduction}
Let ${\mathbb{N}}$ be the set of natural numbers. The relation $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$, an extension of the divisibility relation $\mid$ on ${\mathbb{N}}$ to the set $\beta {\mathbb{N}}$ of ultrafilters on ${\mathbb{N}}$, was introduced in \cite{So1} and further investigated in \cite{So2, So3, So4, So5}. The main idea was to understand the impact of various properties of $\mid$ on $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$ and possibly, by learning about the $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$-hierarchy, to acquire a better understanding of $\mid$. In this paper we will make another step in that direction, considering possible extensions of the congruence relations to $\beta {\mathbb{N}}$ and their relation to $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$, as well as to the operations of addition and multiplication on $\beta {\mathbb{N}}$.\\
When working with the set of ultrafilters $\beta S$ on a set $S$ it is common to identify each element $s\in S$ with the principal ultrafilter $\{A\subseteq S:s\in A\}$. Having that in mind, any binary operation $\star$ on $S$ can be extended to $\beta S$ as follows: for $A\subseteq S$,
\begin{equation}\label{eqn1}
A\in{\cal F}\star{\cal G}\Leftrightarrow\{s\in S:s^{-1}A\in{\cal G}\}\in{\cal F},
\end{equation}
where $s^{-1}A=\{t\in S:s\star t\in A\}$. If $(S,\star)$ is a semigroup equipped with the discrete topology, $(\beta S,\star)$ becomes a compact Hausdorff right-topological semigroup. The base sets for the topology are (clopen) sets $\overline{A}=\{{\cal F}\in\beta S:A\in{\cal F}\}$. Many aspects of structures obtained in this way were examined in \cite{HS}.\\
Every function $f:{\mathbb{N}}\rightarrow {\mathbb{N}}$ can be extended uniquely to a continuous $\widetilde{f}:\beta {\mathbb{N}}\rightarrow\beta {\mathbb{N}}$: the ultrafilter $\widetilde{f}({\cal F})$ is generated by $\{f[A]:A\in{\cal F}\}$. This was used in \cite{So1} to define analogously an extension of a binary relation $\rho$ on ${\mathbb{N}}$ to a relation $\widetilde{\rho}$ on $\beta {\mathbb{N}}$: ${\cal F}\widetilde{\rho}{\cal G}$ if and only if for every $A\in{\cal F}$ the set $\rho[A]:=\{n\in {\mathbb{N}}:(\exists a\in A)a\rho n\}$ is in ${\cal G}$. This coincides with the so-called canonical way of extending relations from ${\mathbb{N}}$ to $\beta {\mathbb{N}}$ described in \cite{Gor}. It turned out that the extension $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$ of the divisibility relation $\mid$ has a simple equivalent definition, more convenient for practical use:
$${\cal F}\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal G}\Leftrightarrow{\cal F}\cap{\cal U}\subseteq{\cal G},$$
where ${\cal U}=\{A\in P({\mathbb{N}})\setminus\{\emptyset\}:A\hspace{-0.1cm}\uparrow=A\}$ is the family of sets upward closed for $\mid$. $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$ is a quasiorder, so we think of it as an order on the set of $=_\sim$-equivalence classes, where ${\cal F}=_\sim{\cal G}$ if and only if ${\cal F}\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal G}$ and ${\cal G}\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal F}$. We say that $C\subseteq {\mathbb{N}}$ is {\it convex} if for all $x,y\in C$ and all $z$ such that $x\mid z$ and $z\mid y$ holds $z\in C$. All ultrafilters from the same $=_\sim$-equivalence class ${\cal C}$ have the same convex sets. Clearly, each equivalence class ${\cal C}$ is determined by ${\cal F}\cap{\cal U}$ (for any ${\cal F}\in{\cal C}$), or by the family of convex sets belonging to any ${\cal F}\in{\cal C}$.
An ultrafilter ${\cal F}$ is divisible by some $n\in {\mathbb{N}}$ if and only if $n{\mathbb{N}}:=\{nk:k\in{\mathbb{N}}\}\in{\cal F}$. If ${\cal F}\in{\mathbb{N}}$ as well, $n\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal F}$ holds if and only if $n\mid{\cal F}$. Hence, we can write just $n\mid{\cal F}$ in case $n\in{\mathbb{N}}$.
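For instance, $2\mid{\cal F}$ if and only if $2{\mathbb{N}}\in{\cal F}$; in particular, every nonprincipal ultrafilter ${\cal F}$ containing $\{2^k:k\in{\mathbb{N}}\}$ satisfies $2^n\mid{\cal F}$ for all $n\in{\mathbb{N}}$, since ${\cal F}$ then also contains $\{2^k:k\geq n\}\subseteq 2^n{\mathbb{N}}$.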
Especially useful are prime ultrafilters ${\cal P}$: those $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$-divisible only by 1 and themselves. This is equivalent to $P\in{\cal P}$, where $P$ is the set of prime numbers.
The $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$-hierarchy can be naturally divided into two parts. The ``lower" part, $L$, can be divided into levels: $L=\bigcup_{l<\omega}\overline{L_l}$, where $L_l=\{p_1p_2\dots p_l:p_1,p_2,\dots,p_l\mbox{ are prime}\}$ is the set of natural numbers having exactly $l$ (not necessarily distinct) prime factors. Some nice properties of $L$ were established in \cite{So3}; for example every ultrafilter in $\overline{L_l}$ has exactly $l$ prime ingredients (but being divisible by the $n$-th power of a prime ${\cal P}$ is not the same as being divisible by ${\cal P}$ $n$ times). The ``upper" part, however, is much more complicated. It contains the maximal $=_\sim$-class, $MAX$, consisting of ultrafilters divisible by all $n\in {\mathbb{N}}$, and consequently by all ${\cal F}\in\beta {\mathbb{N}}$ (\cite{So4}, Lemma 4.6). Another interesting class is $NMAX$, maximal among ${\mathbb{N}}$-{\it free} ultrafilters (those that are not divisible by any $n\in {\mathbb{N}}$), see \cite{So5}, Theorem 5.4. A set belonging to an ${\mathbb{N}}$-free ultrafilter is called an ${\mathbb{N}}$-{\it free} set.\\
The paper is organized as follows. In Section 2 several well-known results of elementary number theory are employed to obtain results about the congruence of ultrafilters modulo an integer in connection with the divisibility relation $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$. In Section 3 we recapitulate basic definitions about $\omega$-hyperextensions, obtained by iterating nonstandard extensions of the set ${\mathbb{Z}}$. Tensor pairs play an important role here. They were first considered by Puritz in \cite{P2}; Di Nasso proved several useful characterizations and coined the term (see \cite{DN}). Most of the results in Section 3 are taken from Luperi Baglini's thesis \cite{L1}, where the concept of a tensor pair is implemented in the surrounding of $\omega$-hyperextensions. In Section 4 we define congruence modulo an ultrafilter and find several conditions equivalent to this definition. The next section deals with a stronger relation, and we prove some results connecting it to $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$ and operations of addition and multiplication of ultrafilters. In Section 6 we define another version of divisibility, obtained in a natural way from the strong congruence relation, and get some basic results about it. The last section contains several remarks and open questions.\\
{\bf Notation.} ${\mathbb{N}}$ is the set of natural numbers (without zero), $\omega={\mathbb{N}}\cup\{0\}$, $P$ is the set of prime numbers and ${\mathbb{Z}}$ the set of integers. The calligraphic letters ${\cal F},{\cal G},{\cal H},\dots$ are reserved for ultrafilters, and small letters $x,y,z,\dots$ for integers (both standard and nonstandard). For $A\subseteq {\mathbb{N}}$, $A\hspace{-0.1cm}\uparrow=\{n\in {\mathbb{N}}:\exists a\in A\;a\mid n\}$ and $A\hspace{-0.1cm}\downarrow=\{n\in {\mathbb{N}}:\exists a\in A\;n\mid a\}$. If $m,r\in {\mathbb{N}}$, then ${\mathbb{Z}}_m=\{0,1,\dots,m-1\}$ and $mA+r=\{mn+r:n\in A\}$. Finally, ${\cal U}=\{A\in P({\mathbb{N}})\setminus\{\emptyset\}:A\hspace{-0.1cm}\uparrow=A\}$ and ${\cal V}=\{A\in P({\mathbb{N}})\setminus\{{\mathbb{N}}\}:A\hspace{-0.1cm}\downarrow=A\}$.
Because we use $\zve{\mathbb{N}}$ for a nonstandard extension of ${\mathbb{N}}$, to avoid confusion we will not denote $\beta{\mathbb{N}}\setminus{\mathbb{N}}$ with ${\mathbb{N}}^*$. Likewise, we will avoid writing $A^2$ for $A\times A$, since this notation had another meaning in papers preceding this one.
\section{Congruence modulo integer}\label{kongm}
Let $m\in {\mathbb{N}}$ and let ${\mathbb{Z}}_m$ be given the discrete topology. The homomorphism $h_m:{\mathbb{N}}\rightarrow {\mathbb{Z}}_m$ is defined as follows: $h_m(n)$ is the residue of $n$ modulo $m$. $h_m$ extends uniquely to a continuous function $\widetilde{h_m}:\beta {\mathbb{N}}\rightarrow {\mathbb{Z}}_m$. The next results follows directly from \cite{HS}, Corollary 4.22.
\begin{pp}\label{hom}
$h_m$ is a homomorphism, both for addition and multiplication of ultrafilters.
\end{pp}
As described in the Introduction, the relation $\equiv_m$ of congruence modulo $m$ can be extended to a relation $\widetilde{\equiv_m}$ on $\beta {\mathbb{N}}$: ${\cal F}\widetilde{\equiv_m}{\cal G}$ if and only if, for every $A\in{\cal F}$, $\{n\in{\mathbb{N}}:(\exists a\in A)n\equiv_ma\}\in{\cal G}$. Recall that the kernel of a function $h:{\mathbb{N}}\rightarrow {\mathbb{N}}$ is the relation ${\rm ker}h=\{(x,y)\in {\mathbb{N}}\times{\mathbb{N}}:h(x)=h(y)\}$.
\begin{pp}\label{kernel}
(\cite{So1}, Theorem 2.13) If $h:{\mathbb{N}}\rightarrow {\mathbb{N}}$ and $\rho=\ker h$, then $\widetilde{\rho}=\ker\widetilde{h}$.
\end{pp}
Thus, for $m\in {\mathbb{N}}$ the extension of $\equiv_m$ to $\beta {\mathbb{N}}$ coincides with the definition found in \cite{HS}: ${\cal F}\widetilde{\equiv_m}{\cal G}$ if and only if $h_m({\cal F})=h_m({\cal G})$. In particular, $r<m$ is the residue of ${\cal F}\in\beta {\mathbb{N}}$ modulo $m$ (${\cal F}\widetilde{\equiv_m}r$) if and only if $m{\mathbb{N}}+r\in{\cal F}$. For practical reasons, we will denote the extension of $\equiv_m$ to $\beta {\mathbb{N}}$ also by $\equiv_m$ from now on.
The congruence of ultrafilters modulo integer is not new, but it was mostly marginally mentioned; for example the following interesting result has only the status of a comment in \cite{HS}.
\begin{pp}
(\cite{HS}, Comment 11.20) For every ${\cal F}\in\beta {\mathbb{N}}$ and every $U\in{\cal F}$ there is a neighborhood $\bar A$ of ${\cal F}$ such that $A\subseteq U$ and for all ${\cal G}\in{\bar A}\setminus A$ and all $m\in {\mathbb{N}}$ holds ${\cal G}\equiv_m{\cal F}$.
\end{pp}
We begin with a simple result about the solvability of a system of congruences in $\beta {\mathbb{N}}$. A system such that its every finite subsystem has a solution in $\beta {\mathbb{N}}$ will be called {\it feasible}.
\begin{lm}\label{feasys}
(a) Let $x\equiv_{m_i}a_i$ (for $i=0,1,\dots,k$, $a_i\in{\mathbb{Z}}$ and $m_i\in{\mathbb{N}}$) be a finite system of congruences. It has a solution in $\beta {\mathbb{N}}\setminus {\mathbb{N}}$ if and only if it has a solution in ${\mathbb{N}}$.
(b) The system $x\equiv_{m_i}a_i$ (for $i\in\omega$, $a_i\in{\mathbb{Z}}$ and $m_i\in{\mathbb{N}}$) of congruences has a solution in $\beta {\mathbb{N}}$ if and only if it is feasible.
\end{lm}
\noindent{\bf Proof. } (a) Let ${\cal F}\in\beta {\mathbb{N}}\setminus {\mathbb{N}}$ be a solution of the given system. Then $A_i:=\{x\in {\mathbb{N}}:x\equiv_{m_i}a_i\}\in{\cal F}$ for each $i=0,1,\dots,k$. Hence $A:=\bigcap_{i=0}^kA_i\in{\cal F}$, and any $x\in A$ is a solution of the given system.
On the other hand, if $s\in {\mathbb{N}}$ is a solution and $u=lcm(m_0,m_1,\dots,m_k)$ (the least common multiple of $m_0,m_1,\dots,m_k$), then all the elements of the set $B=\{x\in {\mathbb{N}}:x\equiv_us\}$ are also solutions. Thus every ${\cal F}\in\overline{B}\setminus B$ is a solution of the system in $\beta {\mathbb{N}}\setminus {\mathbb{N}}$.\\
(b) One direction is trivial, so assume the given system to be feasible. Let $A_i=\{x\in {\mathbb{N}}:x\equiv_{m_i}a_i\}$. By the assumption, every finite subsystem of the given system has a solution, so the family $\{\overline{A_i}:i<\omega\}$ has the finite intersection property. Since all the sets $\overline{A_i}$ are closed, it follows that $A=\bigcap_{i<\omega}\overline{A_i}$ is nonempty, and any ${\cal F}\in A$ is a solution of the given system.\hfill $\Box$ \par \vspace*{2mm}
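As a simple illustration of (b), consider the system $x\equiv_{2^n}0$ for all $n\in\omega$: every finite subsystem is solved by a sufficiently high power of $2$, while no $x\in{\mathbb{N}}$ satisfies all of these congruences simultaneously; nevertheless, any ultrafilter containing all the sets $2^n{\mathbb{N}}$ (such ultrafilters exist, since $\{2^n{\mathbb{N}}:n\in\omega\}$ has the finite intersection property) solves the whole system in $\beta{\mathbb{N}}$.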
Since $=_\sim$-equivalence classes within $L$ are singletons (\cite{So3}, Corollary 5.10), each class in $L$ trivially contains ultrafilters congruent only to one residue modulo $m$. We want to investigate for which systems of congruences there is a $=_\sim$-class such that all its ultrafilters satisfy it. Clearly, such a system must be feasible. On the other hand, by Lemma \ref{feasys} a feasible system $S$ has a solution ${\cal G}\in\beta{\mathbb{N}}$ so we can assume that it is a system of all congruences satisfied by ${\cal G}$ (we will call such a system {\it maximal}). Also, every congruence $x\equiv_{m_i}r_i$ is equivalent to a system of congruences modulo mutually prime factors of $m_i$, so we can assume that all $m_i$ are powers of primes themselves. Let $Q_S=\{p\in P:{\cal G}\equiv_{p^n}0\mbox{ for all }n\in {\mathbb{N}}\}$ and $T_S=P\setminus Q_S$. As a special case, if $T_S=\emptyset$, all ultrafilters from the class $MAX$ satisfy $S$.
$A\subset {\mathbb{N}}$ is an {\it antichain} if there are no distinct $a,b\in A$ such that $a\mid b$.
\begin{te}\label{jedanost}
For every maximal feasible system $S$ of congruences $x\equiv_{p^n}r_{p,n}$ (for $n\in\omega$, $p\in P$ and $r_{p,n}<p^n$) such that $T_S$ is infinite there is an $=_\sim$-equivalence class ${\cal C}\not\subseteq L$ such that ${\cal F}\equiv_{p^n}r_{p,n}$ for all ${\cal F}\in{\cal C}$.
\end{te}
\noindent{\bf Proof. } We consider two cases.\\
$1^\circ$ $Q_S$ is infinite. Let $\{q_i:i\in\omega\}$ and $\{t_i:i\in\omega\}$ be enumerations of $Q_S$ and $T_S$ respectively. For $i\in\omega$ let $s_i=\min\{n\in{\mathbb{N}}:{\cal G}\not\equiv_{t_i^n}0\}$. We construct, by recursion on $n$, a set $A=\{a_n:n\in\omega\}$ such that $a_n<a_{n+1}$ and:
(1) $a_n\in t_i^{s_i+n}{\mathbb{N}}+r_{t_i,s_i+n}$ for $i<n$;
(2) $t_n^{s_n}\mid a_n$;
(3) $q_j^n\mid a_n$ for every $j<n$.
Start with choosing any $a_0\in t_0^{s_0}{\mathbb{N}}$. Assume that $a_n$ is constructed. We want to choose $a_{n+1}$ satisfying the system $x\equiv_{t_i^{s_i+n+1}}r_{t_i,s_i+n+1}$ for $i\leq n$, $x\equiv_{t_{n+1}^{s_{n+1}}}0$ and $x\equiv_{q_j^n}0$ for $j\leq n$. By the Chinese remainder theorem this system has a solution in ${\mathbb{N}}$ such that $a_{n+1}>a_n$. Clearly, obtained $a_{n+1}$ satisfies conditions (1)-(3).
$A$ is an antichain: for all $m<n$, $a_m<a_n$ implies that $a_n\nmid a_m$, and $t_m^{s_m}\mid a_m$ and (1) imply that $a_m\nmid a_n$. Let ${\cal C}$ be the $=_\sim$-equivalence class of any ultrafilter containing $A$. Every ultrafilter ${\cal F}\in{\cal C}$ contains $A\hspace{-0.1cm}\uparrow$ and $A\hspace{-0.1cm}\downarrow$, so it contains $A=A\hspace{-0.1cm}\uparrow\cap A\hspace{-0.1cm}\downarrow$ as well. Condition (3) clearly implies that $A$ intersects each level $L_l$ only in finitely many elements, so ${\cal F}\notin L$, and in particular ${\cal F}$ is nonprincipal. By (1), $A\setminus(t_i^{s_i+n}{\mathbb{N}}+r_{t_i,s_i+n})$ is finite for all $i$ and all $n$, hence ${\cal F}\equiv_{t_i^{s_i+n}}r_{t_i,s_i+n}$. By (3), ${\cal F}\equiv_{q_i^n}0$ for all $i\in\omega$ and $n\in{\mathbb{N}}$. Thus ${\cal F}$ satisfies all congruences of the given system.\\
$2^\circ$ $Q_S$ is finite. We repeat the construction from case $1^\circ$, but for $j\geq |Q_S|$ (when we ``run out" of elements from $Q_S$) instead of $q_j$ in condition (3) we use some elements $t_i\in T_S$ for $i>n$. (This condition is needed here only to ensure that ${\cal F}\notin L$.)\hfill $\Box$ \par \vspace*{2mm}
\begin{pp}\label{free}
(\cite{So5}, Lemma 5.2) If $A$ is an ${\mathbb{N}}$-free set, then $A\not\subseteq n_1{\mathbb{N}}\cup n_2{\mathbb{N}}\cup\dots\cup n_k{\mathbb{N}}$ for any $n_1,n_2,\dots,n_k\in{\mathbb{N}}\setminus\{1\}$.
\end{pp}
\begin{ex}
(1) Let us show that the condition of $T_S$ being infinite in the theorem above is necessary. Consider a system $S$ consisting of $x\equiv_{t_i}r_i$ (for some primes $t_0,t_1,\dots,t_{l-1}$ and some nonzero $r_i<t_i$) and $x\equiv_{p^n}0$ for all $p\in P\setminus\{t_0,t_1,\dots,t_{l-1}\}$ and all $n\in{\mathbb{N}}$. Let us show that there can be no $=_\sim$-class ${\cal C}$ such that all ${\cal F}\in{\cal C}$ satisfy $S$. Assume the opposite. Then every such ${\cal F}$ contains all sets in ${\cal U}_N:=\{A\in{\cal U}:A\mbox{ is }{\mathbb{N}}\mbox{-free}\}$: by Proposition \ref{free} each $A\in{\cal U}_N$ must contain an element $a$ mutually prime to all $t_0,t_1,\dots,t_{l-1}$. Hence $a\mid{\cal F}$ implies $a{\mathbb{N}}\in{\cal F}$, and therefore $A\in{\cal F}$. This means that ${\cal F}\cap{\cal U}={\cal U}_N\cup\{n{\mathbb{N}}:n\in{\mathbb{N}}\land t_i\nmid n\mbox{ for all }i=0,1,\dots,l-1\}$. But now, if we change any of the $r_i$'s into another nonzero value we stay inside the same class ${\cal C}$.
(2) In the class $NMAX$ of $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$-maximal ${\mathbb{N}}$-free ultrafilters one can find an ultrafilter congruent to $r$ modulo $m$ for any $0<r<m$ such that $gcd(m,r)=1$. Namely, the family ${\cal U}_N\cup\{{\mathbb{N}}\setminus n{\mathbb{N}}:n>1\}\cup\{m{\mathbb{N}}+r\}$ has the finite intersection property: for any given $A\in{\cal U}_N$ and $n_0,n_1,\dots,n_k\in{\mathbb{N}}\setminus\{1\}$, since $A$ is ${\mathbb{N}}$-free, Proposition \ref{free} says that there is $a\in A$ mutually prime to all of $m,n_0,\dots,n_k$. By the Chinese remainder theorem the system $x\equiv_mr$, $x\not\equiv_{n_i}0$, $x\equiv_a0$ has a solution, and it belongs to $A\cap(m{\mathbb{N}}+r)\cap\bigcap_{0\leq i\leq k}({\mathbb{N}}\setminus n_i{\mathbb{N}})$.
\end{ex}
Now we will prove a result describing which residues modulo a given prime can appear in the same $=_\sim$-class; first we need the following definition. A set $S$ of residues modulo $p\in P$ is {\it a geometric set of residues} if there are $s$ and $r$ such that $0\leq s<p$, $0<r<p$ and $S=\{rest(sr^k,p):k\in {\mathbb{N}}\}$, where $rest(x,p)$ is the residue of $x$ modulo $p$.
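For example, modulo $p=7$ the set $\{1,2,4\}$ is a geometric set of residues (take $s=1$ and $r=2$), while $\{1,2,3\}$ is not: a geometric set with $s\neq 0$ is a coset $s\langle r\rangle$ of the cyclic subgroup $\langle r\rangle$ of the multiplicative group of nonzero residues modulo $p$, and the only three-element such cosets modulo $7$ are $\{1,2,4\}$ and $\{3,5,6\}$.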
\begin{te}\label{geomset}
Let $p\in P$ and let $S\subseteq\{0,1,\dots,p-1\}$. There is an $=_\sim$-equivalence class ${\cal C}$ such that the set of residues of ultrafilters ${\cal F}\in{\cal C}$ is exactly $S$ if and only if $S$ is a geometric set of residues.
\end{te}
\noindent{\bf Proof. } ($\Leftarrow$) First assume that $S=\{s_0,\dots,s_{l-1}\}$ is a geometric set of residues, where $s_i=rest(s_0r^i,p)$ (for $i=0,1,\dots,l-1$) are exactly all distinct residues of numbers $s_0r^k$ modulo $p$. If $S=\{0\}$, which happens for $s_0=0$, any $=_\sim$-class of ultrafilters divisible by $p$ (i.e.\ containg the set $p{\mathbb{N}}$) will do. Otherwise, by Dirichlet's prime number theorem, there are primes $s\equiv_ps_0$ and $b\equiv_pr$. Let $B=\{sb^k:k\in\omega\}$, ${\cal U}'=\{U\in{\cal U}:U\cap B\neq\emptyset\}$ and ${\cal V}'=\{V\in{\cal V}:{\mathbb{N}}\setminus V\notin{\cal U}'\}$. Then the family ${\cal U}''={\cal U}'\cup{\cal V}'$ has the finite intersection property: ${\cal U}'$ is closed for finite intersections, and every $V\in{\cal V}'$ contains $B$. Let ${\cal C}$ be the $=_\sim$-equivalence class determined by ${\cal U}''$. For every ${\cal F}\in{\cal C}$ we have $B\in{\cal F}$ (since $B\cup\{b^k:k\in\omega\}\in{\cal V}'$ and ${\mathbb{N}}\setminus\{b^k:k\in\omega\}\in{\cal U}'$) and $B\subseteq\bigcup_{i=0}^{l-1}(p{\mathbb{N}}+s_i)$, so every such ${\cal F}$ is congruent to some $s_i$ modulo $p$. On the other hand, for each $i\in\{0,1,\dots,l-1\}$ the family ${\cal U}''\cup\{p{\mathbb{N}}+s_i\}$ has the finite intersection property: $B$ contains infinitely many elements from each of the sets $p{\mathbb{N}}+s_i$, and finite intersections of sets from ${\cal U}''$ contain all but finitely many elements from $B$, so they also intersect $p{\mathbb{N}}+s_i$. Hence there is an ultrafilter ${\cal F}\in{\cal C}$ such that ${\cal F}\equiv_p s_i$.\\
($\Rightarrow$) Now assume $S$ is the set of residues modulo $p$ of ultrafilters ${\cal F}\in{\cal C}$ for some $=_\sim$-equivalence class ${\cal C}$. Every singleton is clearly a geometric set of residues (obtained by choosing the quotient $r=1$), so we will assume $|S|>1$. Let ${\cal W}$ be the family of all convex sets belonging to all ${\cal F}\in{\cal C}$. Since the elements of $S$ are all possible residues of ultrafilters ${\cal F}\in{\cal C}$, there is $C\in{\cal W}$ (a finite intersection of sets from $({\cal U}\cup{\cal V})\cap{\cal F}$) such that $C\subseteq\bigcup_{k=0}^{l-1}(p{\mathbb{N}}+s_k)$ (otherwise ${\cal W}\cup\{{\mathbb{N}}\setminus\bigcup_{k=0}^{l-1}(p{\mathbb{N}}+s_k)\}$ would have the finite intersection property).
Let $q$ be a primitive root modulo $p$ (this means that for every $0<r<p$ there is $k\in {\mathbb{N}}$ such that $q^k\equiv_pr$; see \cite{Bur} for more details). Let $S=\{s_0,\dots,s_{l-1}\}$, where $s_i=rest(q^{k_i},p)$, $k_0<k_1<\dots<k_{l-1}$ and for each $s_i$ the smallest $k_i$ is chosen. If we denote $r_i=k_i-k_0$ for $0<i<l$, then $s_i=rest(s_0q^{r_i},p)$.\\
\underline{Claim 1.} The set $R:=\{r_i:0<i<l\}$ is closed for the $gcd$ (greatest common divisor) operation.
{\it Proof of Claim 1.} Let $0<i<j<l$. Take $A_0$ to be the set of $\mid$-minimal elements of $C\cap(p{\mathbb{N}}+s_0)$. By recursion on $k$, let $A_{3k+1}$ be the set of $\mid$-minimal elements of $C\cap A_{3k}\hspace{-0.1cm}\uparrow\cap(p{\mathbb{N}}+s_i)$, $A_{3k+2}$ the set of $\mid$-minimal elements of $C\cap A_{3k+1}\hspace{-0.1cm}\uparrow\cap(p{\mathbb{N}}+s_j)$ and $A_{3k+3}$ the set of $\mid$-minimal elements of $C\cap A_{3k+2}\hspace{-0.1cm}\uparrow\cap(p{\mathbb{N}}+s_0)$. Each of the sets $A_m$ (for $m\in\omega$) must be nonempty, since otherwise $C\subseteq (C\setminus A_0\hspace{-0.1cm}\uparrow)\cup(C\cap A_0\hspace{-0.1cm}\uparrow\setminus A_1\hspace{-0.1cm}\uparrow)\cup\dots\cup(C\cap A_{m-1}\hspace{-0.1cm}\uparrow)$, and each of the (convex) sets on the right would miss one of the sets $p{\mathbb{N}}+s_0$, $p{\mathbb{N}}+s_i$ or $p{\mathbb{N}}+s_j$, so it could not belong to all ultrafilters in ${\cal C}$.
Now let $d=gcd(r_i,r_j)$. By B\' ezout's lemma there are $a',b'\in {\mathbb{Z}}$ such that $a'r_i+b'r_j=d$. By replacing $a',b'$ with their residues modulo $p-1$ we get $a,b\in {\mathbb{Z}}_{p-1}$ such that $ar_i+br_j\equiv_{p-1}d$. Let $m=3(a+b)$ and let $\langle c_i:0\leq i<m\rangle$ be a $\mid$-chain in $C$ of length $m$ such that $c_i\in A_i$ (it exists since $A_{m-1}\neq\emptyset$). Let $c_{i+1}=c_id_i$; then $d_{3k}\equiv_p q^{r_i}$ and $d_{3k}d_{3k+1}\equiv_p q^{r_j}$ for all $k$. Hence
\begin{eqnarray*}
e &:=& d_0d_3\dots d_{3(a-1)}d_{3a}d_{3a+1}d_{3(a+1)}d_{3(a+1)+1}\dots d_{3(a+b-1)}d_{3(a+b-1)+1}\\
&\equiv_p& (q^{r_i})^a(q^{r_j})^b=q^{ar_i+br_j}\equiv_p q^d
\end{eqnarray*}
(in the last equality we used Fermat's little theorem). But $c_0e$ is divisible by $c_0$ and divides $c_m$; since $C$ is convex, $c_0e\in C$ and hence $d\in R$.\\
\underline{Claim 2.} $rest(tr_1,p-1)\in R$ for all $t\in {\mathbb{N}}$.
{\it Proof of Claim 2} is similar to (though simpler than) the one from Claim 1. We construct a $\mid$-chain $\langle c_i:0\leq i\leq 2t-2\rangle$ such that $c_i\in p{\mathbb{N}}+s_0$ for odd $i$ and $c_i\in p{\mathbb{N}}+s_1$ for even $i$. If $c_{i+1}=c_id_i$, we get $c_0d_1d_3\dots d_{2t-3}\equiv_p q^{tr_1}$, so $tr_1\equiv_{p-1}r_j$ for some $r_j\in R$.\\
Now, since $r_1<r_2<\dots<r_{l-1}$, the two Claims show that $R$ must have the form $R=\{ir_1:0<i<l\}$. But then $s_i\equiv s_0(q^{r_1})^i$, which is what we wanted to prove.\hfill $\Box$ \par \vspace*{2mm}
\section{$\omega$-hyperextensions of ${\mathbb{Z}}$}
In the previous two papers, \cite{So4} and \cite{So5}, we employed nonstandard methods (more precisely, the superstructure approach) to get more information on the relation $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$. We will continue that practice here. However, now we turn to extensions of the set ${\mathbb{Z}}$ of all integers instead of ${\mathbb{N}}$. The reason is, of course, that we want to use the operation of subtraction. Let $X$ be a set containing a copy of ${\mathbb{Z}}$ consisting of atoms: none of the elements of $X$ contains as an element any of the other relevant sets. Let $V_0(X)=X$, $V_{n+1}(X)=V_n(X)\cup P(V_n(X))$ for $n\in\omega$ and $V(X)=\bigcup_{n<\omega}V_n(X)$. $V(X)$ is then called a {\it superstructure}. The rank of an element $x\in V(X)$ is the smallest $n\in\omega$ such that $x\in V_n(X)$.
If $V(X)$ is a superstructure, its {\it nonstandard extension} is a pair $(V(Y),*)$, where $V(Y)$ is a superstructure with the set of atoms $Y$ and $*:V(X)\rightarrow V(Y)$ is a rank-preserving function such that $A\subseteq\zve A$ for $A\subseteq X$, ${\mathbb{Z}}\subset\zve {\mathbb{Z}}$, $\zve X=Y$ and satisfying the Transfer principle (we delay the formulation of Transfer until later, since we will need a more general version).
A nonstandard extension $(V(Y),*)$ of $V(X)$ is a $\kappa$-{\it enlargement} if for every family $F$ of subsets of some set in $V(X)$ with the finite intersection property such that $|F|<\kappa$ there is an element in $\bigcap_{A\in F}\zve A$. $\kappa$-enlargements are known to exist in ZFC.
For an excellent introduction to nonstandard methods we refer the reader to \cite{G}.
The connection between a nonstandard extension and $\beta {\mathbb{Z}}$ is given by the function $v:\zve {\mathbb{Z}}\rightarrow\beta {\mathbb{Z}}$, defined by $v(x)=\{A\subseteq {\mathbb{Z}}:x\in\zve A\}$. $v$ is onto whenever $(V(Y),*)$ is a ${\goth c}^+$-enlargement.
\begin{pp}\label{slaganjev}
(\cite{NR}, Lemma 1) For every $x\in\zve {\mathbb{Z}}$ and every $f:{\mathbb{Z}}\rightarrow{\mathbb{Z}}$, $v(\zve f(x))=\widetilde{f}(v(x))$.
\end{pp}
More information about $v$ can be found in \cite{NR} and \cite{L1}. The following proposition is Theorem 3.1 of \cite{So4}, adjusted for extensions of ${\mathbb{Z}}$ (instead of ${\mathbb{N}}$).
\begin{pp}\label{ekviv}
The following conditions are equivalent for every two ultrafilters ${\cal F},{\cal G}\in\beta {\mathbb{Z}}$:
(i) ${\cal F}\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal G}$;
(ii) in every ${\goth c}^+$-enlargement $V(Y)$, there are $x,y\in\zve {\mathbb{Z}}$ such that $v(x)={\cal F}$, $v(y)={\cal G}$ and $x\zvez\mid y$;
(iii) in some ${\goth c}^+$-enlargement $V(Y)$, there are $x,y\in\zve {\mathbb{Z}}$ such that $v(x)={\cal F}$, $v(y)={\cal G}$ and $x\zvez\mid y$.
\end{pp}
First, let us establish that we can use all previously obtained results about $\zve {\mathbb{N}}$ while working with $\zve {\mathbb{Z}}$. In every extension $V(Y)$ the nonstandard set $\zve {\mathbb{Z}}$ consists of $\zve {\mathbb{N}}$, another (``inverted") copy of $\zve {\mathbb{N}}$ (containing negative nonstandard numbers) and zero. For $x,y\in\zve {\mathbb{Z}}$, $x\zvez\mid y$ holds if and only if $|x|\;\zvez\mid\;|y|$.
The situation with $\beta {\mathbb{Z}}$ is similar. Let, for $A\subseteq {\mathbb{Z}}$, $-A:=\{-a:a\in A\}$; likewise, for ${\cal F}\in\beta {\mathbb{N}}$ let $-{\cal F}:=\{-A:A\in{\cal F}\}$. Then every ultrafilter in $\beta {\mathbb{Z}}$ (except the principal ultrafilter identified with zero) contains either ${\mathbb{N}}$ or $-{\mathbb{N}}$, so $\beta {\mathbb{Z}}=\beta {\mathbb{N}}\cup\{-{\cal F}:{\cal F}\in\beta {\mathbb{N}}\}\cup\{0\}$. The family ${\cal U}_Z:=\{U\in P({\mathbb{Z}})\setminus\{\emptyset\}:U\hspace{-0.1cm}\uparrow=U\}$ of upward closed subsets of ${\mathbb{Z}}$ consists of sets $V\cup -V\cup\{0\}$ for $V\in{\cal U}$, and divisibility in $\beta {\mathbb{Z}}$ is naturally defined as: ${\cal F}\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal G}$ if and only if ${\cal F}\cap{\cal U}_Z\subseteq{\cal G}$. Thus, ${\cal F}\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal G}$ if and only if $|{\cal F}|\hspace{1mm}\widetilde{\mid}\hspace{1mm}|{\cal G}|$ (for absolute values of ultrafilters defined in the obvious way).
We will write ${\cal F}-{\cal G}$ instead of ${\cal F}+(-{\cal G})$. So $A\in{\cal F}-{\cal G}$ if and only if $\{n\in {\mathbb{Z}}:n-A\in{\cal G}\}\in{\cal F}$, where $n-A=\{n-a:a\in A\}$. Note that there can be no confusion with this notation, since ${\cal F}-{\cal G}$ is exactly the ultrafilter obtained by extending the subtraction operation from ${\mathbb{Z}}$ to $\beta {\mathbb{Z}}$, as defined in (\ref{eqn1}).\\
A nonstandard extension $(V(Y),*)$ of $V(X)$ is called a {\it single superstructure model} if $Y=X$. The existence of such model was proved in \cite{Ben}. In a single superstructure model it is possible to iterate the star-function, since it is defined for all elements in the range of $*$.
\begin{de}
Let $(V(X),*)$ be a single superstructure model with ${\mathbb{Z}}\subseteq X$. Define recursively, for $x\in V(X)$, $S_0(x)=x$ and $S_{k+1}(x)=\zve(S_k(x))$ for all $k\in\omega$. For $A\subseteq X$ the set $\tac A=\bigcup_{k<\omega}S_k(A)$ is called an $\omega$-hyperextension of $A$.
\end{de}
Now, any $(V(X),S_k)$ is a nonstandard extension, and $(V(X),\bullet)$ is also a nonstandard extension. Moreover, we have the following.
\begin{pp}\label{prenosnatac}
(\cite{L1}, Proposition 2.5.7) If $(V(X),*)$ is a single superstructure model which is a ${\goth c}^+$-enlargement, then $(V(X),S_k)$ for every $k\in\omega$ and $(V(X),\bullet)$ are also ${\goth c}^+$-enlargements.
\end{pp}
We will call a single superstructure model $(V(X),*)$ which is a ${\goth c}^+$-enlargement an $\omega$-{\it hyperenlargement}.
Now we can use the Transfer principle within any of the mentioned extensions. Recall that a first-order formula $\varphi(x_1,x_2,\dots,x_n)$ is bounded if all its quantifiers are bounded, i.e.\ of the form $(\forall x\in y)$ or $(\exists x\in y)$. In the Transfer principle the free variables $x_1,x_2,\dots,x_n$ that appear in $\varphi(x_1,x_2,\dots,x_n)$ can take values of elements $a_1,a_2,\dots,a_n\in V(X)$ and in $\varphi(\zve a_1,\zve a_2,\dots,\zve a_n)$ they are replaced with their star-counterparts. Any $k$-ary relation $A\in V(X)$ appearing as an atomic subformula in $\varphi$ is also considered like a free variable and gets replaced with $\zve A$.\\
{\it The Transfer principle.} For every bounded formula $\varphi$ and every $a_1,a_2,\dots$, $a_n\in V(X)$, in $V(X)$ $\varphi(a_1,a_2,\dots,a_n)$ holds if and only if $\varphi(S_k(a_1),S_k(a_2),\dots$, $S_k(a_n))$ holds (for any $k\in {\mathbb{N}}$) if and only if $\varphi(\tac a_1,\tac a_2,\dots,\tac a_n)$ holds.
As a simple application of Transfer let us show that $\zve(x+y)=\zve x+\zve y$ for $x,y\in\tac {\mathbb{Z}}$, a fact that we will need later. If $z=x+y$, Transfer implies that $\zve z=\zve x+\zve y$. Likewise, $\zve(x\cdot y)=\zve x\cdot\zve y$.
\begin{pp}\label{nivoi}
(\cite{L1}, Proposition 2.5.3)
(a) For $k\leq l$ and $A\subseteq {\mathbb{Z}}$, $S_k(A)=S_l(A)\cap S_k({\mathbb{Z}})$. Consequently, $S_k(A)=\tac A\cap S_k({\mathbb{Z}})$.
(b) For $h:{\mathbb{Z}}\rightarrow {\mathbb{Z}}$ and $x\in S_k({\mathbb{Z}})$, $\tac h(x)=S_k(h)(x)$.
\end{pp}
Let us comment on the iterated version of the divisibility relation. It is common to omit $*$ (or, more generally, $S_k$) in formulas in front of the relations $=$ and $\in$ and arithmetical operations, in order to simplify notation. Let us show that it is justified to do the same with the divisibility relation, even when working in an $\omega$-hyperextension. Firstly, $(x,y)\in S_k(\mid)$ can hold only if $x,y\in S_k({\mathbb{Z}})$. On the other hand, for $x\in S_k({\mathbb{N}})$, $y\in S_k({\mathbb{Z}})$ and $l>k$, we will show that $(x,y)\in S_k(\mid)$ if and only if $(x,y)\in S_l(\mid)$.
$(x,y)\in S_k(\mid)$ means that there is $z\in S_k({\mathbb{Z}})$ such that $y=xz$. But $S_k({\mathbb{Z}})\subseteq S_l({\mathbb{Z}})$, so $(x,y)\in S_l(\mid)$ follows. In the other direction, if $(x,y)\in S_l(\mid)$ for some $l>k$, and $y=xz$, then $z\in S_k({\mathbb{Z}})$ so $(x,y)\in S_k(\mid)$ as well. Thus, there will be no ambiguity if we drop the stars and write simply $x\mid y$ instead of $(x,y)\in S_k(\mid)$.
\begin{de}
For ${\cal F}\in\beta{\mathbb{Z}}$, $\mu_n({\cal F})=\{x\in S_n({\mathbb{Z}}):(\forall A\in{\cal F})x\in S_n(A)\}$.
The monad of ${\cal F}$ is $\mu({\cal F})=\bigcup_{n<\omega}\mu_n({\cal F})=\{x\in\tac {\mathbb{Z}}:(\forall A\in{\cal F})x\in\tac A\}$.
For $x\in\tac {\mathbb{Z}}$, $v(x)$ is the unique ${\cal F}\in\beta{\mathbb{Z}}$ such that $x\in\mu({\cal F})$.
\end{de}
Note that this definition of $v(x)$ agrees with the previous one (for $x\in\zve {\mathbb{Z}}$).
\begin{pp}\label{monadi}
(\cite{L1}, Proposition 2.5.11) For every $x\in\tac {\mathbb{Z}}$ and every $n\in\omega$, $v(S_n(x))=v(x)$.
\end{pp}
Let us recall the tensor (or Fubini) product of ultrafilters: for ${\cal F},{\cal G}\in\beta {\mathbb{Z}}$, ${\cal F}\otimes{\cal G}$ is the ultrafilter on ${\mathbb{Z}}\times {\mathbb{Z}}$ defined by
$$S\in{\cal F}\otimes{\cal G}\Leftrightarrow\{x\in {\mathbb{Z}}:\{y\in {\mathbb{Z}}:(x,y)\in S\}\in{\cal G}\}\in{\cal F}.$$
The definitions of monads of ultrafilters of the form ${\cal F}\otimes{\cal G}$ and of the corresponding function $v$ are analogous to those above. For ultrafilters ${\cal F}$ and ${\cal G}$ and nonstandard numbers $x\in\mu({\cal F})$ and $y\in\mu({\cal G})$, $(x,y)$ is a {\it tensor pair} if $(x,y)\in\mu({\cal F}\otimes{\cal G})$.
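For instance, if ${\cal F}\in\beta{\mathbb{N}}$ is nonprincipal, then $\{(x,y)\in{\mathbb{N}}\times{\mathbb{N}}:x<y\}\in{\cal F}\otimes{\cal F}$: for each $x\in{\mathbb{N}}$ the set $\{y\in{\mathbb{N}}:x<y\}$ is cofinite and therefore belongs to ${\cal F}$.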
\begin{lm}\label{minustensor}
If $(x,y)\in\zve{\mathbb{Z}}\times\zve{\mathbb{Z}}$ is a tensor pair, then so are $(x,-y)$ and $(-x,y)$.
\end{lm}
\noindent{\bf Proof. } Let ${\cal F}=v(x)$ and ${\cal G}=v(y)$; then $v(-y)=-{\cal G}$ and $v((x,y))={\cal F}\otimes{\cal G}$. We need to prove that $v((x,-y))={\cal F}\otimes(-{\cal G})$. But whenever $(x,-y)\in\zve S$ for some $S\subseteq{\mathbb{Z}}\times{\mathbb{Z}}$, we have $(x,y)\in\zve S'$, where $S':=\{(m,-n):(m,n)\in S\}$. By the assumptions $S'\in{\cal F}\otimes{\cal G}$, so $\{x\in{\mathbb{Z}}:\{y\in{\mathbb{Z}}:(x,y)\in S\}\in(-{\cal G})\}=\{x\in{\mathbb{Z}}:-\{y\in{\mathbb{Z}}:(x,y)\in S\}\in{\cal G}\}=\{x\in{\mathbb{Z}}:\{y\in{\mathbb{Z}}:(x,y)\in S'\}\in{\cal G}\}\in{\cal F}$, so $S\in{\cal F}\otimes(-{\cal G})$.
The proof for $(-x,y)$ is analogous.\hfill $\Box$ \par \vspace*{2mm}
By \cite{DN}, Proposition 11.7.2, for any tensor pair $(x,y)$ we have $x+y\in\mu({\cal F}+{\cal G})$ and $x\cdot y\in\mu({\cal F}\cdot{\cal G})$. An important feature of $\omega$-hyperextensions is that they provide a canonical way to obtain tensor pairs.
\begin{pp}\label{zbirpro}
(\cite{L1}, Theorem 2.5.27) If $x\in\mu({\cal F})$ and $y\in\mu({\cal G})$, then the pair $(x,\zve y)$ is a tensor pair. Hence, $x+\zve y\in\mu({\cal F}+{\cal G})$ and $x\cdot\zve y\in\mu({\cal F}\cdot{\cal G})$.
\end{pp}
\section{Congruence modulo ultrafilter}\label{flawed}
A natural way to define the congruence relation modulo an ultrafilter would be to imitate again the construction of an extension $\widetilde{\rho}$, as described in Section \ref{kongm}.
\begin{de}\label{defcong}
For ${\cal M}\in\beta {\mathbb{N}}$ and ${\cal F},{\cal G}\in\beta {\mathbb{Z}}$, ${\cal F}\equiv_{\cal M}{\cal G}$ if and only if for every $A\in{\cal M}$ the set $\{(x,y)\in {\mathbb{Z}}\times {\mathbb{Z}}:(\exists m\in A)x\equiv_my\}$ belongs to the ultrafilter ${\cal F}\otimes{\cal G}$.
\end{de}
This definition has a nice equivalent formulation via divisibility of ultrafilters.
\begin{lm}\label{equivdelj}
For ${\cal M}\in\beta {\mathbb{N}}$ and ${\cal F},{\cal G}\in\beta {\mathbb{Z}}$, ${\cal F}\equiv_{\cal M}{\cal G}$ if and only if ${\cal M}\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal F}-{\cal G}$.
\end{lm}
\noindent{\bf Proof. } \begin{eqnarray*}
{\cal F}\equiv_{\cal M}{\cal G} &\Leftrightarrow & (\forall A\in{\cal M})\{x\in {\mathbb{Z}}:\{y\in {\mathbb{Z}}:(\exists m\in A)x\equiv_my\}\in{\cal G}\}\in{\cal F}\\
&\Leftrightarrow & (\forall A\in{\cal M})\{x\in {\mathbb{Z}}:\{y\in {\mathbb{Z}}:x-y\in A\hspace{-0.1cm}\uparrow\}\in{\cal G}\}\in{\cal F}\\
&\Leftrightarrow & (\forall A\in{\cal M}\cap{\cal U})\{x\in {\mathbb{Z}}:\{y\in {\mathbb{Z}}:x-y\in A\}\in{\cal G}\}\in{\cal F}\\
&\Leftrightarrow & (\forall A\in{\cal M}\cap{\cal U})\{x\in {\mathbb{Z}}:x-A\in{\cal G}\}\in{\cal F}\\
&\Leftrightarrow & (\forall A\in{\cal M}\cap{\cal U})A\in{\cal F}-{\cal G},
\end{eqnarray*}
which is equivalent to ${\cal M}\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal F}-{\cal G}$.\hfill $\Box$ \par \vspace*{2mm}
The following lemma justifies our using the same notation as for the relation from Section \ref{kongm}.
\begin{lm}\label{restr1}
If $m\in {\mathbb{N}}$ and ${\cal F},{\cal G}\in\beta {\mathbb{Z}}$, ${\cal F}\equiv_m{\cal G}$ as defined in Section \ref{kongm} is equivalent to ${\cal F}\equiv_m{\cal G}$ from Definition \ref{defcong}.
\end{lm}
\noindent{\bf Proof. } Since $h_m$ is a homomorphism, $\widetilde{h_m}({\cal F}-{\cal G})=\widetilde{h_m}({\cal F})-\widetilde{h_m}({\cal G})$. It follows that $m\mid{\cal F}-{\cal G}$ if and only if $\widetilde{h_m}({\cal F}-{\cal G})=0$, if and only if $\widetilde{h_m}({\cal F})=\widetilde{h_m}({\cal G})$.\hfill $\Box$ \par \vspace*{2mm}
$\equiv_{\cal M}$ also has a nonstandard characterization. First we recall Puritz's result that $(x,y)\in\zve{\mathbb{N}}\times\zve{\mathbb{N}}$ is a tensor pair if and only if $x<\zve f(y)$ for every $f:{\mathbb{N}}\rightarrow {\mathbb{N}}$ such that $\zve f(y)\in\zve {\mathbb{N}}\setminus {\mathbb{N}}$ (\cite{P2}, Theorem 3.4). Taking into account Lemma \ref{minustensor}, we get the following version of this result.
\begin{pp}
$(x,y)\in\zve{\mathbb{Z}}\times\zve{\mathbb{Z}}$ is a tensor pair if and only if $|x|<|\zve f(y)|$ for every $f:{\mathbb{Z}}\rightarrow{\mathbb{Z}}$ such that $\zve f(y)\in\zve{\mathbb{Z}}\setminus{\mathbb{Z}}$.
\end{pp}
If we denote ${\cal G}=v(y)$, the condition $\zve f(y)\notin{\mathbb{Z}}$ is equivalent to $f\upharpoonright B$ not being constant for any $B\in{\cal G}$. Let us call $f:{\mathbb{Z}}\rightarrow {\mathbb{Z}}$ non-${\cal G}$-constant in that case.
Note that we are still working in any ${\goth c}^+$-enlargement (we do not need an $\omega$-hyperextension), so $\mu({\cal F})$ here actually means $\mu_1({\cal F})$.
\begin{te}
Let ${\cal M}\in\beta{\mathbb{N}}$ and ${\cal F},{\cal G}\in\beta{\mathbb{Z}}$. The following conditions are equivalent:
(i) ${\cal F}\equiv_{\cal M}{\cal G}$;
(ii) in some ${\goth c}^+$-enlargement holds
\begin{equation}\label{eqpair}
(\forall m\in\mu({\cal M}))(\exists x\in\mu({\cal F}))(\exists y\in\mu({\cal G}))((x,y)\mbox{ is a tensor pair }\land m\mid x-y)
\end{equation}
(iii) in every ${\goth c}^+$-enlargement holds (\ref{eqpair}).
\end{te}
\noindent{\bf Proof. } (ii)$\Rightarrow$(i) Let (\ref{eqpair}) hold in some ${\goth c}^+$-enlargement. If $y\in\mu({\cal G})$ then $-y\in\mu(-{\cal G})$. Since for a tensor pair $(x,y)$ we have, by Lemma \ref{minustensor}, $x-y=x+(-y)\in\mu({\cal F}-{\cal G})$, the ``if" part follows directly from Proposition \ref{ekviv}.\\
(i)$\Rightarrow$(iii) Assume ${\cal M}\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal F}-{\cal G}$; we work in arbitrary ${\goth c}^+$-enlargement. We define, for $A,B\subseteq{\mathbb{Z}}$, $M\subseteq {\mathbb{N}}$ and $f:{\mathbb{Z}}\rightarrow{\mathbb{Z}}$:
\begin{eqnarray*}
& & E_{A,B,M}=\{(m,a,b)\in{\mathbb{N}}\times{\mathbb{Z}}\times{\mathbb{Z}}:a\in A\land b\in B\land m\in M\land m\mid a-b\}\\
& & F_f=\{(m,a,b)\in{\mathbb{N}}\times{\mathbb{Z}}\times{\mathbb{Z}}:|a|<|f(b)|\}.
\end{eqnarray*}
We prove that the family $\{E_{A,B,M}:A\in{\cal F},B\in{\cal G},M\in{\cal M}\}\cup\{F_f:f:{\mathbb{Z}}\rightarrow{\mathbb{Z}}\mbox{ is non-}{\cal G}\mbox{-constant}\}$ has the finite intersection property. $\{E_{A,B,M}:A\in{\cal F},B\in{\cal G},M\in{\cal M}\}$ is closed for finite intersections. So let $A\in{\cal F}$, $B\in{\cal G}$, $M\in{\cal M}$ and let $f_1,f_2,\dots,f_k:{\mathbb{Z}}\rightarrow{\mathbb{Z}}$ be non-${\cal G}$-constant. Since $M\hspace{-0.1cm}\uparrow\in{\cal M}\cap{\cal U}$, ${\cal M}\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal F}-{\cal G}$ implies $M\hspace{-0.1cm}\uparrow\in{\cal F}-{\cal G}$. Hence $\{n\in{\mathbb{Z}}:n-M\hspace{-0.1cm}\uparrow\in{\cal G}\}\in{\cal F}$. Let $a\in A\cap\{n\in{\mathbb{Z}}:n-M\hspace{-0.1cm}\uparrow\in{\cal G}\}$. This means that $B_1:=B\cap(a-M\hspace{-0.1cm}\uparrow)\in{\cal G}$. Hence there is $b\in B_1$ such that $|f_i(b)|>|a|$ for all $i\leq k$ (otherwise $\{b\in B_1:f_i(b)=j\}\in{\cal G}$ for some $i\leq k$ and some $-a\leq j\leq a$, a contradiction with the assumption that $f_i$ is non-${\cal G}$-constant). Since $b\in a-M\hspace{-0.1cm}\uparrow$, there is $m\in M$ such that $m\mid a-b$, so $(m,a,b)\in E_{A,B,M}\cap F_{f_1}\cap F_{f_2}\cap\dots\cap F_{f_k}$.
Now, since we are working with a ${\goth c}^+$-enlargement, there is
$$(m,x,y)\in\bigcap_{A\in{\cal F},B\in{\cal G},M\in{\cal M}}\zve E_{A,B,M}\;\;\cap\bigcap_{f\mbox{ non-}{\cal G}\mbox{-constant}}\zve F_f.$$
This means that $m\in\mu({\cal M})$, $x\in\mu({\cal F})$, $y\in\mu({\cal G})$ and $m\mid x-y$. Also, for every non-${\cal G}$-constant $f:{\mathbb{Z}}\rightarrow{\mathbb{Z}}$, $|\zve f(y)|>|x|$, so $(x,y)$ is a tensor pair.\hfill $\Box$ \par \vspace*{2mm}
Unfortunately, we do not even know whether $\equiv_{\cal M}$ is an equivalence relation on $\beta {\mathbb{Z}}$, which makes it inconvenient to work with. Therefore in the next section we introduce a stronger relation with much nicer properties.
\section{Strong congruence}\label{congult}
To better explain the forthcoming definition of congruence, we begin with a few simple lemmas. Recall that $MAX$ is the class of ultrafilters $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$-divisible by all others.
\begin{lm}\label{razlika1}
Let $x,y\in\tac {\mathbb{Z}}$ and $v(x)=v(y)$. Then $m\mid x-y$ for all $m\in {\mathbb{N}}$ and $x-y\in\mu(MAX)$.
\end{lm}
\noindent{\bf Proof. } For each $m\in {\mathbb{N}}$, let $h_m$ be the function defined in Section \ref{kongm}. Then $\tac h_m(x)\in {\mathbb{Z}}_m$ for all $x\in\tac {\mathbb{Z}}$. By Proposition \ref{slaganjev}, $v(\tac h_m(x))=\widetilde{h_m}(v(x))=\widetilde{h_m}(v(y))=v(\tac h_m(y))$, so $x$ and $y$ have the same residue modulo $m$.
Ultrafilters from $MAX$ are those divisible by all $m\in{\mathbb{N}}$. Hence $\mu(MAX)$ consists exactly of nonstandard numbers divisible by all $m\in{\mathbb{N}}$, so the second statement follows directly from the first.\hfill $\Box$ \par \vspace*{2mm}
By Theorem \ref{geomset}, the assumption of Lemma \ref{razlika1} can not be relaxed to $v(x)=_\sim v(y)$: there are $=_\sim$-equivalent ultrafilters giving different residues modulo some $m\in {\mathbb{N}}$.
\begin{lm}\label{razlika2}
Let $x,y\in\tac {\mathbb{Z}}$, $v(x)=v(y)$ and $m\in S_k({\mathbb{N}})$. Then $m\mid S_k(x)-S_k(y)$.
\end{lm}
\noindent{\bf Proof. } By Lemma \ref{razlika1}, $(\forall m\in {\mathbb{N}})m\mid x-y$. By Transfer, $(\forall m\in S_k({\mathbb{N}}))m\mid S_k(x)-S_k(y)$.\hfill $\Box$ \par \vspace*{2mm}
Thus, for every $m\in S_k({\mathbb{N}})$, all the numbers from $\mu({\cal F})\cap S_k[\tac {\mathbb{Z}}]$ have the same residue modulo $m$. We will use this to establish a strengthening of congruence modulo ${\cal M}\in\beta {\mathbb{N}}$.
\begin{de}\label{defkong}
Ultrafilters ${\cal F},{\cal G}\in\beta {\mathbb{Z}}$ are strongly congruent modulo ${\cal M}\in\beta {\mathbb{N}}$ if, in every $\omega$-hyperenlargement,
\begin{equation}\label{eq3}
(\forall m\in\mu_1({\cal M}))(\exists x\in\mu({\cal F}))(\exists y\in\mu({\cal G}))m\mid\zve x-\zve y.
\end{equation}
We write ${\cal F}\equiv_{\cal M}^s{\cal G}$.
\end{de}
We easily get the following equivalent condition.
\begin{lm}\label{strongcong}
${\cal F}\equiv_{\cal M}^s{\cal G}$ implies that in every $\omega$-hyperenlargement
$$(\forall m\in\mu_1({\cal M}))(\forall x\in\mu({\cal F}))(\forall y\in\mu({\cal G}))m\mid\zve x-\zve y.$$
\end{lm}
\noindent{\bf Proof. } Let $x_0\in\mu({\cal F})$ and $y_0\in\mu({\cal G})$ be such that $m\mid\zve x_0-\zve y_0$, and let $x\in\mu({\cal F})$ and $y\in\mu({\cal G})$ be arbitrary. By Lemma \ref{razlika2}, $m\mid\zve x-\zve x_0$ and $m\mid\zve y-\zve y_0$, so $m\mid\zve x-\zve y$ as well.\hfill $\Box$ \par \vspace*{2mm}
To avoid constant repetition, in each of the proofs in the rest of the paper it will be understood that we are working in an $\omega$-hyperenlargement (a single superstructure model which is a ${\goth c}^+$-enlargement).
It will follow from Lemmas \ref{strongovi}, \ref{deljivosti} and \ref{equivdelj} that ${\cal F}\equiv_{\cal M}^s{\cal G}$ implies ${\cal F}\equiv_{\cal M}{\cal G}$. For now we prove that $\equiv_m^s$ for $m\in{\mathbb{N}}$ also coincides with the congruence relation modulo integer (from Section \ref{kongm}).
\begin{lm}\label{restr}
If $m\in{\mathbb{N}}$ and ${\cal F},{\cal G}\in\beta {\mathbb{Z}}$, ${\cal F}\equiv_m^s{\cal G}$ holds if and only if ${\cal F}\equiv_m{\cal G}$.
\end{lm}
\noindent{\bf Proof. } The only element of $\mu_1(m)$ is $m$ itself. Let $x\in\mu({\cal F})$ and $y\in\mu({\cal G})$ be such that $m\mid\zve x-\zve y$; then $\zve x$ and $\zve y$ have the same residue modulo $m$: $\tac h_m(\zve x)=\tac h_m(\zve y)$. Then, by Propositions \ref{slaganjev} and \ref{monadi}, $\widetilde{h_m}({\cal F})=v(\tac h_m(\zve x))=v(\tac h_m(\zve y))=\widetilde{h_m}({\cal G})$, so ${\cal F}\equiv_m{\cal G}$. The other implication is proved similarly, using Lemma \ref{strongcong}.\hfill $\Box$ \par \vspace*{2mm}
\begin{lm}
$\equiv_{\cal M}^s$ is an equivalence relation on the set $\beta {\mathbb{Z}}$.
\end{lm}
\noindent{\bf Proof. } Reflexivity and symmetry are obvious from the definition. So let ${\cal F}\equiv_{\cal M}^s{\cal G}$ and ${\cal G}\equiv_{\cal M}^s{\cal H}$. By Lemma \ref{strongcong}, for any $m\in\mu_1({\cal M})$, $x\in\mu({\cal F})$, $y\in\mu({\cal G})$ and $z\in\mu({\cal H})$ holds $m\mid\zve x-\zve y$ and $m\mid\zve y-\zve z$. Then $m\mid\zve x-\zve z$, so ${\cal F}\equiv_{\cal M}^s{\cal H}$.\hfill $\Box$ \par \vspace*{2mm}
\begin{te}\label{compat}
Let ${\cal M}\in\beta {\mathbb{N}}$. $\equiv_{\cal M}^s$ is compatible with operations $+$ and $\cdot$ in $\beta {\mathbb{Z}}$:
(a) ${\cal F}_1\equiv_{\cal M}^s{\cal F}_2$ and ${\cal G}_1\equiv_{\cal M}^s{\cal G}_2$ imply ${\cal F}_1+{\cal G}_1\equiv_{\cal M}^s{\cal F}_2+{\cal G}_2$;
(b) ${\cal F}_1\equiv_{\cal M}^s{\cal F}_2$ and ${\cal G}_1\equiv_{\cal M}^s{\cal G}_2$ imply ${\cal F}_1\cdot{\cal G}_1\equiv_{\cal M}^s{\cal F}_2\cdot{\cal G}_2$.
\end{te}
\noindent{\bf Proof. } Let $m\in\mu_1({\cal M})$, $x_1\in\mu_1({\cal F}_1)$, $x_2\in\mu_1({\cal F}_2)$, $y_1\in\mu_1({\cal G}_1)$ and $y_2\in\mu_1({\cal G}_2)$. It follows from Proposition \ref{monadi} that $\zve y_1\in\mu({\cal G}_1)$ and $\zve y_2\in\mu({\cal G}_2)$. By the assumptions we have $m\mid\zve x_1-\zve x_2$ and $m\mid\zve{\zve y_1}-\zve{\zve y_2}$.
(a) By Proposition \ref{zbirpro} $x_1+\zve y_1\in\mu({\cal F}_1+{\cal G}_1)$ and $x_2+\zve y_2\in\mu({\cal F}_2+{\cal G}_2)$. From the above conclusions follows $m\mid(\zve x_1+\zve{\zve y_1})-(\zve x_2+\zve{\zve y_2})$, i.e.\ $m\mid\zve(x_1+\zve y_1)-\zve(x_2+\zve y_2)$. Since we started with arbitrary $m\in\mu_1({\cal M})$, this means that ${\cal F}_1+{\cal G}_1\equiv_{\cal M}^s{\cal F}_2+{\cal G}_2$.
(b) By Proposition \ref{zbirpro} $x_1\cdot\zve y_1\in\mu({\cal F}_1\cdot{\cal G}_1)$ and $x_2\cdot\zve y_2\in\mu({\cal F}_2\cdot{\cal G}_2)$. We have $m\mid(\zve x_1-\zve x_2)\zve{\zve y_1}$ and $m\mid\zve x_2(\zve{\zve y_1}-\zve{\zve y_2})$. Hence $m\mid\zve x_1\zve{\zve y_1}-\zve x_2\zve{\zve y_2}$, i.e.\ $m\mid\zve(x_1\zve y_1)-\zve(x_2\zve y_2)$, so ${\cal F}_1\cdot{\cal G}_1\equiv_{\cal M}^s{\cal F}_2\cdot{\cal G}_2$.\hfill $\Box$ \par \vspace*{2mm}
The following simple result is a version of a well-known fact (\cite{P1}, Corollary 8.3).
\begin{lm}\label{FminusF}
(a) Every ${\cal F}\in MAX$ is strongly congruent to zero modulo any ultrafilter;
(b) for every ${\cal F}\in\beta {\mathbb{Z}}\setminus {\mathbb{Z}}$, ${\cal F}-{\cal F}\in MAX$.
\end{lm}
\noindent{\bf Proof. } (a) For any ${\cal F}\in MAX$ and any $x\in\mu({\cal F})$ we have $(\forall m\in {\mathbb{N}})m\mid x$, which implies by Transfer $(\forall m\in\zve {\mathbb{N}})m\mid\zve x$; this gives us ${\cal F}\equiv_{\cal M}^s 0$ for any ${\cal M}$.\\
(b) We will show that $A\in{\cal F}-{\cal F}$ for all $A\in{\cal U}_Z$. Let $m\in A$ be arbitrary. Then there is $r\in {\mathbb{Z}}_m$ such that $m{\mathbb{Z}}+r\in{\cal F}$, so since $m{\mathbb{Z}}\subseteq -A$, it follows that $n-A\in{\cal F}$ for all $n\in m{\mathbb{Z}}+r$. Thus $m{\mathbb{Z}}+r\subseteq\{n\in {\mathbb{Z}}:n-A\in{\cal F}\}$, so $\{n\in {\mathbb{Z}}:n-A\in{\cal F}\}\in{\cal F}$, which means that $A\in {\cal F}-{\cal F}$.\hfill $\Box$ \par \vspace*{2mm}
Let us also note, regarding the lemma above, that ${\cal F}=_\sim{\cal G}$ is not enough to conclude that ${\cal F}-{\cal G}\in MAX$. By Theorem \ref{geomset} there are ${\cal F},{\cal G}\in\beta {\mathbb{N}}$ and $m\in {\mathbb{N}}$ such that ${\cal F}=_\sim{\cal G}$ but ${\cal F}\not\equiv_m{\cal G}$, say ${\cal F}\equiv_mr_1$ and ${\cal G}\equiv_mr_2$ for some distinct $r_1<m$ and $r_2<m$. From Proposition \ref{hom} we get ${\cal F}-{\cal G}\equiv_m r_1-r_2\neq 0$, so $m\nmid{\cal F}-{\cal G}$.
\begin{de}
A family $\{{\cal F}_i:i\in I\}$ of ultrafilters is a complete residue system modulo ${\cal M}\in\beta {\mathbb{N}}$ if it contains exactly one element of every equivalence class of strong congruence modulo ${\cal M}$.
\end{de}
As an application of the above results, we have an ultrafilter version of a well-known theorem on complete residue systems in ${\mathbb{Z}}$.
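Namely, recall that in ${\mathbb{Z}}$, if $\{r_1,\ldots,r_m\}$ is a complete residue system modulo $m\in{\mathbb{N}}$ (for example $\{0,1,\ldots,m-1\}$) and $g\in{\mathbb{Z}}$, then $\{r_1+g,\ldots,r_m+g\}$ is again a complete residue system modulo $m$.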
\begin{te}\label{crs}
If $\{{\cal F}_i:i\in I\}$ is a complete residue system modulo ${\cal M}\in\beta {\mathbb{N}}$ then, for every ${\cal G}\in\beta {\mathbb{N}}$, $\{{\cal F}_i+{\cal G}:i\in I\}$ and $\{{\cal G}+{\cal F}_i:i\in I\}$ are complete residue systems modulo ${\cal M}$.
\end{te}
\noindent{\bf Proof. } We need to show that in ${\cal R}=\{{\cal F}_i+{\cal G}:i\in I\}$ no two ultrafilters are strongly congruent modulo ${\cal M}$, and that each class of strong congruence modulo ${\cal M}$ has a representative in ${\cal R}$.
First assume ${\cal F}_i+{\cal G}\equiv_{\cal M}^s{\cal F}_j+{\cal G}$ for some $i,j\in I$, $i\neq j$. By Theorem \ref{compat} ${\cal F}_i+{\cal G}-{\cal G}\equiv_{\cal M}^s{\cal F}_j+{\cal G}-{\cal G}$. By Lemma \ref{FminusF} ${\cal F}_i={\cal F}_i+0\equiv_{\cal M}^s{\cal F}_i+{\cal G}-{\cal G}\equiv_{\cal M}^s{\cal F}_j+{\cal G}-{\cal G}\equiv_{\cal M}^s{\cal F}_j$, a contradiction.
Now let ${\cal H}\in\beta {\mathbb{N}}$ be arbitrary. There is $i\in I$ such that ${\cal F}_i\equiv_{\cal M}^s{\cal H}-{\cal G}$. Using Theorem \ref{compat} and Lemma \ref{FminusF} again we get ${\cal F}_i+{\cal G}\equiv_{\cal M}^s{\cal H}-{\cal G}+{\cal G}\equiv_{\cal M}^s{\cal H}$.
The proof that $\{{\cal G}+{\cal F}_i:i\in I\}$ is a complete residue system modulo ${\cal M}\in\beta {\mathbb{N}}$ is analogous.\hfill $\Box$ \par \vspace*{2mm}
\section{Strong divisibility}
It is natural to ask: which ultrafilters are strongly congruent to zero modulo some ${\cal M}\in\beta {\mathbb{N}}$? Are those exactly the ultrafilters divisible by ${\cal M}$? For example, we saw in Lemma \ref{FminusF} that $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$-maximal ultrafilters are always strongly congruent to zero. In general, the above question leads us to the following definition.
\begin{de}\label{defstrong}
Let ${\cal M}\in\beta {\mathbb{N}}$ and ${\cal F}\in\beta {\mathbb{Z}}$. ${\cal F}$ is strongly divisible by ${\cal M}$ if, in every $\omega$-hyperenlargement,
$$(\forall m\in\mu_1({\cal M}))(\exists x\in\mu({\cal F}))m\mid\zve x.$$
We write ${\cal M}\mid^s{\cal F}$.
\end{de}
In the same way as Lemma \ref{strongcong}, we get a seemingly stronger condition.
\begin{lm}\label{strongstrong}
${\cal M}\mid^s{\cal F}$ implies that in every $\omega$-hyperenlargement
$$(\forall m\in\mu_1({\cal M}))(\forall x\in\mu({\cal F}))m\mid\zve x.$$
\end{lm}
Proposition \ref{ekviv} easily implies the following.
\begin{lm}\label{deljivosti}
For all ${\cal M}\in\beta {\mathbb{N}}$ and ${\cal F}\in\beta {\mathbb{Z}}$, ${\cal M}\mid^s{\cal F}$ implies ${\cal M}\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal F}$.
\end{lm}
It is tempting to try to prove the reverse implication; unfortunately this is not true, as we will now see.
\begin{lm}\label{freeprost}
No ${\mathbb{N}}$-free ultrafilter has any $\mid^s$-divisors.
\end{lm}
\noindent{\bf Proof. } Assume the opposite, that an ${\mathbb{N}}$-free ultrafilter ${\cal F}$ is $\mid^s$-divisible by some ${\cal G}$. Then ${\cal G}$ is also ${\mathbb{N}}$-free, and for any $x\in\mu({\cal F})$ we have $(\forall m\in {\mathbb{N}})m\nmid x$. By Transfer, $(\forall m\in\zve {\mathbb{N}})m\nmid\zve x$, a contradiction with the assumption ${\cal G}\mid^s{\cal F}$.\hfill $\Box$ \par \vspace*{2mm}
Thus, this notion of divisibility is too strong to be our main divisibility relation, but it has some properties that are in good accordance with the strong congruence relation and operations on $\beta {\mathbb{N}}$.
However, Lemma \ref{freeprost} also says that $\mid^s$ is not reflexive: ${\mathbb{N}}$-free ultrafilters are not divisible by themselves. It is, however, transitive: let ${\cal F}\mid^s{\cal G}$ and ${\cal G}\mid^s{\cal H}$. Let $x\in\mu_1({\cal F})$, $y\in\mu_1({\cal G})$ and $z\in\mu_1({\cal H})$ be arbitrary. Then $x\mid\zve y$ and $y\mid\zve z$. Hence $\zve y\mid\zve{\zve z}$, so $x\mid\zve{\zve z}$, which suffices for ${\cal F}\mid^s{\cal H}$.
\begin{lm}\label{strongovi}
${\cal F}\equiv_{\cal M}^s{\cal G}$ if and only if ${\cal M}\mid^s{\cal F}-{\cal G}$.
\end{lm}
\noindent{\bf Proof. } ($\Rightarrow$) Let $m\in\mu_1({\cal M})$ be arbitrary and let $x\in\mu_1({\cal F})$ and $y\in\mu_1({\cal G})$ be such that $m\mid\zve x-\zve y$. By Proposition \ref{monadi}, $v(y)=v(\zve y)$ so, by Lemma \ref{razlika2}, $m\mid\zve y-\zve{\zve y}$. It follows that $m\mid\zve x-\zve{\zve y}$, i.e.\ $m\mid\zve(x-\zve y)$. On the other hand, since $-y\in\mu(-{\cal G})$, by Lemma \ref{minustensor} and Proposition \ref{zbirpro}, $x-\zve y=x+\zve(-y)\in\mu({\cal F}-{\cal G})$, so ${\cal M}\mid^s{\cal F}-{\cal G}$.
($\Leftarrow$) Let $m\in\mu_1({\cal M})$, $x\in\mu_1({\cal F})$ and $y\in\mu_1({\cal G})$ be arbitrary. Then $x-\zve y\in\mu({\cal F}-{\cal G})$ so, by Lemma \ref{strongstrong}, $m\mid\zve(x-\zve y)$. By Lemma \ref{razlika2} again we have $m\mid\zve y-\zve{\zve y}$, so $m\mid\zve x-\zve y$, meaning that ${\cal F}\equiv_{\cal M}^s{\cal G}$.\hfill $\Box$ \par \vspace*{2mm}
\begin{te}
Let ${\cal M}\in\beta {\mathbb{N}}$ and ${\cal F},{\cal G}\in\beta {\mathbb{Z}}$.
(a) ${\cal M}\mid^s{\cal F}$ and ${\cal M}\mid^s{\cal G}$ imply ${\cal M}\mid^s{\cal F}+{\cal G}$;
(b) ${\cal M}\mid^s{\cal F}$ implies ${\cal M}\mid^s{\cal F}\cdot{\cal G}$;
(c) ${\cal M}\mid^s{\cal G}$ implies ${\cal M}\mid^s{\cal F}\cdot{\cal G}$.
\end{te}
\noindent{\bf Proof. } Let $m\in\mu_1({\cal M})$, $x\in\mu_1({\cal F})$ and $y\in\mu_1({\cal G})$.
(a) By the assumptions, $m\mid\zve x$ and $m\mid\zve{\zve y}$. Hence $m\mid\zve(x+\zve y)$, and therefore ${\cal M}\mid^s{\cal F}+{\cal G}$.
(b) Now we have $m\mid\zve x$, which suffices for $m\mid\zve x\zve{\zve y}$ i.e.\ $m\mid\zve(x\zve y)$, so ${\cal M}\mid^s{\cal F}\cdot{\cal G}$.
(c) By Lemma \ref{strongstrong} ${\cal M}\mid^s{\cal G}$ implies $m\mid\zve{\zve y}$, so again $m\mid\zve x\zve{\zve y}$ and ${\cal M}\mid^s{\cal F}\cdot{\cal G}$.\hfill $\Box$ \par \vspace*{2mm}
Let us remind ourselves of the definitions of the other three divisibility relations from \cite{So1}:
\begin{eqnarray*}
{\cal G}\mid_L{\cal F} & \mbox{iff} & (\exists{\cal H}\in\beta {\mathbb{N}}){\cal F}={\cal H}\cdot{\cal G}\\
{\cal G}\mid_R{\cal F} & \mbox{iff} & (\exists{\cal H}\in\beta {\mathbb{N}}){\cal F}={\cal G}\cdot{\cal H}\\
{\cal G}\mid_M{\cal F} & \mbox{iff} & (\exists{\cal H}_1,{\cal H}_2\in\beta {\mathbb{N}}){\cal F}={\cal H}_1\cdot{\cal G}\cdot{\cal H}_2.
\end{eqnarray*}
What is the place of $\mid^s$ (restricted to $\beta{\mathbb{N}}\times\beta{\mathbb{N}}$) among these relations? Like all the others, its restriction to ${\mathbb{N}}\times{\mathbb{N}}$ is just the usual divisibility relation (Lemma \ref{restr}). We already saw that $\mid^s\subset\hspace{1mm}\widetilde{\mid}\hspace{1mm}$. We will show that this is the only inclusion that can be established:
\begin{center}
\includegraphics[scale=0.15]{06slika.png}
\end{center}
First, why $\mid_L\not\subseteq\mid^s$? Let ${\cal P},{\cal Q}\in\beta {\mathbb{N}}\setminus {\mathbb{N}}$ be $\hspace{1mm}\widetilde{\mid}\hspace{1mm}$-prime and let ${\cal F}={\cal P}\cdot{\cal Q}$. Then ${\cal Q}\mid_L{\cal F}$ but, by Lemma \ref{freeprost}, ${\cal Q}\nmid^s{\cal F}$. Analogously we conclude that $\mid_R\not\subseteq\mid^s$.
That $\mid^s\subseteq\mid_M$ does not hold either can be seen by considering maximal classes of these two orders. By \cite{So2}, Theorem 4.1, the $\mid_M$-maximal ultrafilters are exactly those in the smallest ideal $K(\beta {\mathbb{N}},\cdot)$. On the other hand, the class of $\mid^s$-maximal ultrafilters is exactly $MAX$ by Lemmas \ref{FminusF} and \ref{deljivosti}. But $MAX$ is a proper superset of $K(\beta {\mathbb{N}},\cdot)$; we postpone the detailed examination of this and other aspects of maximal ultrafilters until a projected sequel to this paper.
\section{Final remarks and questions}
Even after finding, in Section \ref{flawed}, several equivalent conditions for $\equiv_{\cal M}$, we were not able to answer the following.
\begin{qu}
Is $\equiv_{\cal M}$ an equivalence relation?
\end{qu}
Our inability to prove that it is presents a significant drawback for the use of this relation, which seems to be the most natural extension of the congruence relation to $\beta {\mathbb{N}}$.
Some more properties of our relations could be proved if we worked with ${\goth c}^+$-saturated nonstandard extensions. This is a stronger condition than being a ${\goth c}^+$-enlargement: $(V(Y),*)$ is $\kappa$-{\it saturated} if every family $F$ of internal sets in $V(Y)$ with the finite intersection property such that $|F|<\kappa$ has nonempty intersection. To Proposition \ref{ekviv} one can add two more equivalent conditions (see \cite{So5}, Theorem 3.4):\\
(iv) in every ${\goth c}^+$-saturated extension $V(Y)$, for every $x\in\mu({\cal F})$ there is $y\in\mu({\cal G})$ such that $x\zvez\mid y$;
(v) in every ${\goth c}^+$-saturated extension $V(Y)$, for every $y\in\mu({\cal G})$ there is $x\in\mu({\cal F})$ such that $x\zvez\mid y$.\\
However, Proposition \ref{prenosnatac} does not hold for ${\goth c}^+$-saturation in place of ${\goth c}^+$-enlargement: see \cite{L1}, page 74. So to use the equivalents (iv) and (v) we would have to answer the following question.
\begin{qu}
Is it possible to construct a ${\goth c}^+$-saturated $\omega$-hyperextension of ${\mathbb{Z}}$?
\end{qu}
\section{Declarations}
The author acknowledges financial support of the Science Fund of the Republic of Serbia (call PROMIS, project CLOUDS, grant no.\ 6062228) and Ministry of Education, Science and Technological Development of the Republic of Serbia (grant no.\ 451-03-68/2020-14/200125).
\section{Introduction} \label{sec:introduction}
\IEEEPARstart{I}{n} conventional wireless systems, the limited battery capacity of mobile devices typically affects the overall network lifetime. Increasing the size of the battery might not be a feasible solution to address this problem, due to a consequent reduction of the portability and increase in the cost of the equipment. For these reasons, the study of novel techniques to prolong the lifetime of the battery has triggered an increased interest in the wireless communications community. In this context, the study of the so-called wireless power transfer (WPT) has recently gained prominence as a means to implement a cable-less power transfer between devices \cite{WPTOverview}, either by resonant inductive coupling \cite{Kurs06072007} or by far-field power transfer \cite{art:brown84}. The latter approach has seen increasing momentum in recent years, due to its promising potential for longer range transfers. A fundamental breakthrough in this context has been the design of rectifying antennas (rectennas) for microwave power transfer (MPT), a key component to achieve an efficient radio frequency to direct current (RF-to-DC) conversion. This has led to several technological advances, e.g., the design of flying vehicles powered solely by microwave \cite{conf:Schlesak88}, which confirmed that RF-to-DC conversion can not only be performed but can also achieve remarkable efficiency. In this regard, commercial products exhibiting an efficiency larger than $50\%$ are already available on the market \cite{prod:P2110}. Remarkably, the performance of state-of-the-art RF-to-DC converters can be even higher, i.e., larger than $80\%$, resulting in an impressive potential DC-to-DC efficiency of $45\%$ \cite{art:le2008, art:mcspadden02}.
\subsection{Related Works}
Recent advances in signal processing and microwave technology have shown that far-field power transfer can offer interesting perspectives also in the context of traditional wireless communication systems. For instance, the same electromagnetic field could be used as a carrier for both energy and information, realizing the so-called simultaneous wireless information and power transfer (SWIPT). The potential of this approach was first highlighted by the studies proposed in \cite{Varshney} and \cite{ShannonTeslar}. Therein, the trade-off between the two transfers within SWIPT was investigated, in the case of both flat fading and frequency-selective channels. After these pioneering works, several studies have been performed to assess the implementability of receivers that can make use of RF signals to both harvest energy and decode information \cite{art:hui13}, and to analyze the performance of SWIPT in many scenarios, e.g., multi-antenna systems \cite{art:zhang13}, opportunistic networks \cite{art:kaibin13}, wireless sensor networks \cite{art:visser13}. In particular, the aforementioned trade-off is investigated in multiple-input multiple-output (MIMO) systems for two information and power transfer architectures, i.e., time-switching and power-splitting, for both perfect \cite{art:zhang13} and imperfect \cite{imperfectCSI} channel state information (CSI). A different approach is considered in \cite{ReviewerRef2}, where the trade-off between the information and the energy transfer is investigated in a multi-user network under two different constraints, i.e., a constraint expressed in terms of secrecy rate for the former and amount of harvested energy at the receiver for the latter.
A line of work considering orthogonal frequency division multiplexing (OFDM) systems is presented in \cite{NgOFDMSingleSplitting, NgOFDMMultiSeparate, NgOFDMMultiSplitting}, where the resource allocation policy at the transmitter is studied for both single and multi-user scenarios, in which the receiver adopts a power-splitting strategy and considers undesired interference as an additional energy resource. Finally, ad-hoc scenarios departing from the standard SWIPT paradigm have been analyzed in works such as \cite{ChenEnergyBeamforming, ReviewerRef1}, where it is assumed that the energy and information transfers are performed by two different devices, operating in frequency-division duplexing (FDD) mode. More precisely, in \cite{ChenEnergyBeamforming} the time allocation policy for the two transmitters is studied, under the assumption that the efficiency of the energy transfer is maximized by means of an energy beamformer that exploits a quantized version of the CSI received in the uplink by the energy transmitter. Along similar lines, \cite{ReviewerRef1} investigates the optimal time and power allocation strategies such that the amount of harvested energy is maximized, taking into account the impact of the CSI accuracy on the latter quantity.
\subsection{Summary of Our Contribution}
In general, most existing works for SWIPT rely on ideal assumptions: a) availability of perfect CSI at the transmitter, b) no penalty for CSI acquisition, c) no power consumption for signal decoding operations at the receiver. If these assumptions are relaxed and the resulting penalties and issues are considered, then the performance of wireless systems can decrease significantly. Studies taking into account these aspects have been proposed for conventional information transfers, and the effect of imperfect CSI acquisition, training, and feedback, as well as the resource allocation problem, have been thoroughly analyzed \cite{CaireFeedback}. Departing from these observations, in this work we aim at investigating the efficacy of SWIPT for practically relevant scenarios, by relaxing the three aforementioned ideal assumptions. In particular, we target a multiple-input single-output (MISO) system consisting of a multi-antenna access point (AP) which transfers both information symbols and energy to a single user terminal (UT) that does not have access to any external power source. Accordingly, in contrast to the previous contributions on this topic, in this work the power harvested by means of the WPT is used by the UT to perform all the necessary signal processing operations for both information decoding and uplink communications. Additionally, we adopt a systematic approach and consider the three main possible scenarios for an AP that engages in a downlink transmission in modern networks, to provide a more complete characterization of the considered system. Consequently, in this work, the performance and feasibility of the SWIPT in a MISO system is studied for the following three cases:
\begin{itemize}
\item No CSI available at the AP.
\item Time-division duplexing (TDD) communications and CSI acquisition at the AP by means of training symbols.
\item FDD communications and CSI acquisition at the AP by means of analog symbols feedback.
\end{itemize}
We compare these three scenarios for three performance metrics of interest, namely, ergodic downlink rate, energy shortage probability, and data outage probability. Our contributions in this work are as follows:
\begin{itemize}
\item We derive closed-form representations for the three performance metrics of interest in all three scenarios and match them to the numerical results.
\item We derive approximations of the ergodically optimal duration of the WPT phase in all three scenarios as a portion of the channel coherence time.
\item Additionally, for the TDD and the FDD scheme, we derive closed-form approximations for the ergodically optimal duration of the channel training/feedback phases, to maximize the downlink rate.
\item We show that the TDD scheme can outperform the FDD scheme in SWIPT systems in terms of both downlink rate and data outage probability.
\end{itemize}
Our numerical findings verify the correctness of our derivations. More specifically, concerning the downlink rate, we show that the performance gap between the numerically optimal solutions and the results obtained by means of our approximations is very small for low to mid signal-to-noise ratio (SNR) values and negligible for high SNR values. Moreover, we show that both TDD and FDD outperform the non-CSI case at any SNR value in terms of downlink rate. This confirms that CSI knowledge at the AP is always beneficial for the information transfer in SWIPT systems, despite both its imperfection and the resources devoted to the channel estimation/feedback procedures. The correctness of our derivations is further verified when numerically evaluating both the energy shortage and data outage probability of the considered MISO system adopting SWIPT, for which a perfect match of analytic and numerical results is achieved. Finally, it is worth noting that throughout our study TDD consistently outperforms FDD in terms of both downlink rate and data outage probability, confirming the potential of this duplexing scheme for future advancements in modern networks.
The rest of the paper is organized as follows. In Sec.~\ref{Sec: System Setup}, we specify the system model. In Sec.~\ref{Sec: Non-CSI scheme}, \ref{Sec: TDD scheme}, and \ref{Sec: FDD scheme}, we derive the downlink rate for the non-CSI, the TDD, and the FDD scheme, respectively. In Sec.~\ref{Sec: Outage probability}, we derive the energy shortage and data outage probability for each scheme. In Sec.~\ref{Sec: Numerical results}, we show and discuss the numerical results. Finally, we conclude in Sec.~\ref{Sec: Conclusions}.
{\it Notations:}
In this paper, we denote matrices as boldface upper-case letters and vectors as boldface lower-case letters. Additionally, we let $[\cdot]^\dag$ be the conjugate transpose of a vector. All vectors are columns, unless otherwise stated. Furthermore, for a scalar $c \in \mathbb{C}$ we denote by $c^*$ its complex conjugate. We use $\mathbf{x} \perp \mathbf{y}$ to express the orthogonality between vectors $\mathbf{x}$ and $\mathbf{y}$. We denote a circularly symmetric Gaussian random vector with mean $\boldsymbol{\mu}$ and covariance matrix $\mathbf{\Sigma}$ as $\mathcal{C}\mathcal{N}\left(\boldsymbol{\mu},\mathbf{\Sigma}\right)$. The chi-squared distribution with $K$ degrees of freedom is denoted by $\chi_K^2$ and its probability density function (PDF) is given by $f_{X}\left(x\right)=\frac{1}{\Gamma\left(\frac{K}{2}\right)}2^{-K/2}x^{\frac{K}{2}-1}e^{-x/2}$, where $\Gamma(q)=\int_{0}^{\infty}u^{q-1}e^{-u}\mathrm{d}u$ is the Gamma function. The non-central chi-squared distribution with $K$ degrees of freedom and non-centrality parameter $\nu$ is denoted by $\chi_K^{'2}\left(\nu\right)$ and its PDF is given by
$f_{X}\left(x \right)=\frac{1}{2}e^{-\left(x+\nu\right)/2}\left(\frac{x}{\nu}\right)^{K/4-1/2}I_{\frac{K}{2}-1}\left(\sqrt{\nu x}\right)$,
where $I_n\left(\cdot\right)$ denotes the modified Bessel function of the first kind of order $n$.
Finally, $\Gamma(q,r)=\int_{r}^{\infty}u^{q-1}e^{-u}\mathrm{d}u$ is the upper incomplete Gamma function, and
$Q_M\left(q,r\right)=\int_{r}^{\infty}\frac{\xi^M}{q^{M-1}}\exp\left(-\frac{\xi^2+q^2}{2}\right)I_{M-1}\left(q\xi\right)\mathrm{d}\xi$ is the generalized Marcum Q-function \cite{art:marcum50}.
\section{System Model}\label{Sec: System Setup}
We consider a point-to-point communication system consisting of an AP with $L$ antennas and a single-antenna UT. We denote the downlink channel (from the AP to the UT) as $\mathbf{h}=[h_1,\cdots,h_L]^\top$. The channel is assumed to be block fading, with independent fading from block to block. The entries of the channel vector are complex Gaussian (Rayleigh fading), hence $\mathbf{h} \sim \mathcal{CN}\left(\mathbf{0},\mathbf{I}_L\right)$. Let $T_C$ be the coherence time length. For simplicity in the notation, we assume that the total number of symbols that can be transmitted within the coherence time is $T_C$. The AP transmits the symbol $\mathbf{x} \in \mathbb{C}^{L \times 1}$ with a transmit power $P$, i.e., $\mathbb{E}\left[\lVert\mathbf{x}\rVert^2\right]=P$. The received signal at the UT is given by
$y = \mathbf{h}^{\dag}\mathbf{x}+n,$
where $n\sim \mathcal{CN}\left(0,N_0\right)$ is the thermal noise, modeled as a complex additive white Gaussian noise (AWGN). We assume the UT does not have any external power source (such as a battery) and all the power required for the operations to be performed at the UT is provided by the AP through the WPT component of the SWIPT. Accordingly, the UT is equipped with a circuit that can perform two different functions: a) harvesting energy from the received RF signal, b) decoding information. As considered in previous literature \cite{art:zhang13}, we assume that the UT cannot harvest energy and decode information from the same signal at the same time. Hence, a time switching strategy is adopted under which the AP transmits the signals in two phases: the signal sent during the first phase has WPT purposes and is used by the UT to harvest energy, whereas the signal sent in the second phase has information transfer purposes. Note that, throughout this work, we assume that the energy harvested in the first phase (power transfer phase) is the sole source of power for all the subsequent operations performed by the UT (the exact details of these operations will be specified in the following sections).\footnote{Note that the words energy and power are used interchangeably in this paper, for the sake of simplicity, in spite of their conceptual difference.}
\subsubsection{Details of the Power Transfer Phase}
During each coherence time interval, the AP first transmits the power wirelessly to the UT for $\epsilon< T_C$\footnote{The exact value of $\epsilon$ will be specified later depending upon the mode of operation.} time slots. First, the AP divides its power $P$ equally between its $L$ transmit antennas to perform the WPT. Hence the $L$-sized transmit symbol during this phase, denoted by ${\bf x}_{\text{EH}}$, is given by
${\bf x}_{\text{EH}} = \sqrt{\frac{P}{L}} {\bf s},$
where ${\bf s}$ is a random vector with zero mean and covariance matrix $\mathbb{E}\left[{\bf s}{\bf s^{\dag}}\right]=\mathbf{I}_L$. Thus, the power harvested at the UT is given by
\begin{equation} \label{Eq: System-Harvested power}
P_H=\frac{\beta P\lVert\mathbf{h}\rVert^2}{L},
\end{equation}
where $\beta \in [0,1]$ is a coefficient that measures the efficiency of the RF to direct current (RF-to-DC) power conversion \cite{art:zhang13, art:visser13}.
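Observe that, since $\mathbb{E}\left[\lVert\mathbf{h}\rVert^2\right]=L$ under the adopted Rayleigh fading model, \eqref{Eq: System-Harvested power} yields an average harvested power $\mathbb{E}\left[P_H\right]=\beta P$, regardless of the number of transmit antennas $L$.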
\subsubsection{Details of the Information Transmission Phase}
For the second phase, namely the information transmission phase, we adopt a systematic approach and consider three scenarios. In the first one, the AP transmits the information symbols without the knowledge of CSI (we will refer to this approach as the non-CSI scheme). In the second scheme, we consider a TDD communication, in which the downlink and uplink communications are performed over the same bandwidth. Accordingly, first the AP acquires the CSI by evaluating a pilot sequence transmitted by the UT in the uplink, and then engages in the downlink transmission. In the last case, we consider an FDD communication, in which the downlink and uplink communications are performed over two separate bandwidths. Consequently, in this case, the UT sends an analog feedback signal in the uplink, carrying the downlink channel estimate, to allow the AP to acquire the CSI and subsequently transmit the information symbols.
Under the aforementioned settings, we analyze the performance of the system for two different metrics of interest, namely the downlink rate and outage probability. We provide a detailed analysis of these two metrics in the rest of the paper.
\section{Analysis of the Downlink Rate}\label{sec:downlink}
In this section, we analyze the downlink rate for the three considered schemes.
\subsection{Non-CSI Scheme}\label{Sec: Non-CSI scheme}
We first consider the case where the AP transmits the information symbols without the knowledge of CSI. The schematic diagram of this scenario is shown in Fig.~\ref{Fig: Non CSI - System model}.
\begin{figure}[!h]
\centering
\subfigure[Operations of the AP.]
{\def.67\columnwidth{.67\columnwidth}
\import{}{fig1a.eps_tex}}
\label{Fig: Non CSI - Operation of AP}
\subfigure[Operations of the UT.]
{\def.67\columnwidth{.67\columnwidth}
\import{}{fig1b.eps_tex}}
\label{Fig: Non CSI - Operation of UT}
\caption{Operations of the AP and the UT during the coherence time in the non-CSI scheme.}
\label{Fig: Non CSI - System model}
\end{figure}
Under this scheme, the system utilizes $\epsilon = \alpha_NT_C$ symbols to transfer power
and the remaining symbols to transmit information symbols, where $0<\alpha_N< 1$.
The received signal during the information transmission phase
is given by
$y = \mathbf{h}^\dagger \mathbf{x} + n,$
where the $\mathbf{x} = \sqrt{\frac{P}{L}} \mathbf{s}$ and
$\mathbf{s} \sim \mathcal{CN}(\mathbf{0},\mathbf{I}_L).$
Note that in the absence of CSI, the AP performs equal power allocation over all its antennas to transmit the information
symbol.
For the information decoding at the UT, we consider that the power consumption of the circuit components devoted to the decoding is proportional to the number of received symbols (as typically considered in previous works on the subject \cite{conf:heinzelman00}). Accordingly, we denote the power consumption per decoded symbol at the UT as $P_D$.
Since the power harvested in the first phase must be sufficient to decode all the information symbols, we have that
$\alpha_NT_C P_H=\left(1-\alpha_N\right)T_C P_D.$
Now, if we plug \eqref{Eq: System-Harvested power} into this equation then, after some manipulations, we have that the minimum fraction of time that should be devoted to the power transfer, i.e., $\alpha_N$, is given by
\begin{equation}\label{Eq: Non-Time portion alpha}
\alpha_N=\frac{LP_D}{\beta P\lVert\mathbf{h}\rVert^2+LP_D}.
\end{equation}
We now analyze the downlink rate for this scheme. We recall that the AP can transmit $\left(1-\alpha_N\right)T_C$ symbols for the information transfer. Accordingly, using \eqref{Eq: Non-Time portion alpha}, the downlink rate obtained for the non-CSI scheme is given by
\begin{align}
R_{NC}& =R_{NC}(\alpha_N)=\left(1-\alpha_N\right)\log_2\left(1+\frac{P\lVert\mathbf{h}\rVert^2}{N_0L}\right) \notag
\\&=\frac{\beta P\lVert\mathbf{h}\rVert^2}{\beta P\lVert\mathbf{h}\rVert^2+LP_D}\log_2\left(1+\frac{P\lVert\mathbf{h}\rVert^2}{N_0L}\right).\label{Eq: Non-Rate}
\end{align}
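For concreteness, the following Python sketch estimates the ergodic value of \eqref{Eq: Non-Rate} by Monte Carlo, computing $\alpha_N$ from \eqref{Eq: Non-Time portion alpha} for each channel draw. It is a minimal illustration rather than the simulator used in Sec.~\ref{Sec: Numerical results}, and all parameter values ($L$, $\beta$, $P$, $P_D$, $N_0$) are placeholders chosen for illustration only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# Placeholder parameters, chosen for illustration only.
L, beta, P, P_D, N0 = 4, 0.5, 1.0, 1e-3, 0.1

def non_csi_rate(h):
    # alpha_N from Eq. (Non-Time portion alpha),
    # rate from Eq. (Non-Rate).
    g = np.linalg.norm(h) ** 2
    alpha_N = L * P_D / (beta * P * g + L * P_D)
    return (1 - alpha_N) * np.log2(1 + P * g / (N0 * L))

# h ~ CN(0, I_L): real and imaginary parts i.i.d. N(0, 1/2).
H = (rng.standard_normal((100000, L))
     + 1j * rng.standard_normal((100000, L))) / np.sqrt(2)
print("ergodic R_NC:", np.mean([non_csi_rate(h) for h in H]))
\end{verbatim}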
\subsection{TDD Scheme}\label{Sec: TDD scheme}
We switch our focus to the TDD scheme, whose schematic diagram is shown in Fig.~\ref{Fig: TDD-System model}.
\begin{figure}[!h]
\centering
\subfigure[Operations of the AP.]{\def.67\columnwidth{\columnwidth}
\import{}{fig2a.eps_tex}}
\label{Fig: TDD-Operation of AP}
\subfigure[Operations of the UT.]{\def.67\columnwidth{\columnwidth}
\import{}{fig2b.eps_tex}}
\label{Fig: TDD-Operation of UT}
\caption{Operations of the AP and the UT during the coherence time in the TDD scheme.}
\label{Fig: TDD-System model}
\end{figure}
We recall that, in this case, the AP should provide the UT with sufficient energy for the latter to be able not only to decode the received data but also to perform all the operations related to the uplink signaling inherent to the TDD scheme. Accordingly, the system utilizes $\epsilon = \alpha_TT_C$ symbols for power transfer, where $0<\alpha_T<1$. This is followed by the CSI acquisition phase. Since we assume that all the operations are performed within the coherence time, the AP can exploit the reciprocity of the downlink and uplink channels, an inherent feature of the TDD scheme. This way, the channel estimated in the uplink can be used to design the beamformer for the downlink transmission. Accordingly, the UT transmits uplink pilots with power $P_E$ for the next $\eta_TT_C\in\mathbb{Z}^{+}$ symbol periods, with $0<\eta_T<1$ and $0<\alpha_T+\eta_T\leq 1$. The signal received by the AP during the $i$th symbol period (in the uplink pilot transmission phase) is given by
$\mathbf{y}^{p}_{T}[i]=\sqrt{P_E}\mathbf{h}^{*}+\mathbf{w}[i],$
where $\mathbf{w}[i] \sim \mathcal{C}\mathcal{N}(\mathbf{0},N_0\mathbf{I}_L)$ is the Gaussian noise at the AP. The AP estimates the channel by a minimum variance unbiased (MVU) based estimator \cite{KayEstimation}. Thus, the channel estimate at the AP is given by
\begin{align}
\mathbf{\hat{h}} & =\frac{1}{\sqrt{P_E}\eta_TT_C}\sum\limits_{i=1}^{\eta_TT_C}\left(\sqrt{P_E}\mathbf{h}+\mathbf{w}^*[i]\right) =\mathbf{h}+\bar{\mathbf{w}},\label{Eq: TDD-Estimated channel at AP}
\end{align}
where $\mathbf{\bar{w}}\sim\mathcal{C}\mathcal{N}\left(\mathbf{0},\frac{N_0}{\eta_TT_CP_E}\mathbf{I}_L\right)$ denotes the estimation error.
This is followed by the information transmission phase. The focus of this work is on the performance of the SWIPT under non-ideal system assumptions and practical transmit schemes. Therefore, for simplicity we will assume that the AP beamforms the signal carrying the information symbols with a matched filter precoder (MFP) \cite{conf:demig2008}, the optimal linear filter for maximizing the SNR. Accordingly, the AP exploits the CSI estimate to design the desired beamforming vector, obtained as
$\mathbf{m}_T = \mathbf{\hat{h}}/\lVert\mathbf{\hat{h}}\rVert.$
Then, the received signal at the UT is given by
\begin{equation}
y_{T}^{s}=\frac{\mathbf{h}^{\dag}\mathbf{\hat{h}}}{\lVert\mathbf{\hat{h}}\rVert}s+n,\label{Eq: TDD-Received information}
\end{equation}
where $s$ is the information symbol, with $\mathbb{E}\left[\lvert s\rvert^2\right]=P$. As in the previous case, the UT consumes a power $P_D$ to decode every received information symbol. Thus, since the power harvested by the UT must be sufficient to send the pilot symbols and decode the information, the condition
$\alpha_TT_C P_H=\eta_TT_C P_E+\left(1-\alpha_T-\eta_T\right)T_CP_D$
must be satisfied for the TDD scheme. Now, if we plug \eqref{Eq: System-Harvested power} into this condition then, after some manipulations, we have that the minimum fraction of time that should be devoted to the power transfer, i.e., $\alpha_T$, is given by
\begin{equation} \label{Eq: TDD-Time portion alpha}
\alpha_T=\frac{\eta_TLP_E-\eta_TLP_D+LP_D}{\beta P\lVert\mathbf{h}\rVert^2+LP_D}.
\end{equation}
Accordingly, we can use \eqref{Eq: TDD-Time portion alpha} to compute the downlink rate for the TDD scheme as
\begin{align}
&R_T =R_T (\alpha_T,\eta_T)=\left(1-\alpha_T-\eta_T\right)\log_2\left(1+\frac{P\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}\rvert^2}{N_0\lVert\mathbf{\hat{h}}\rVert^2}\right)\notag
\\&=\frac{\left(1-\eta_T\right)\beta P\lVert\mathbf{h}\rVert^2-\eta_TLP_E}{\beta P\lVert\mathbf{h}\rVert^2+LP_D}\log_2\left(1+\frac{P\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}\rvert^2}{N_0\lVert\mathbf{\hat{h}}\rVert^2}\right). \label{Eq: TDD-Rate}
\end{align}
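Analogously to the non-CSI case, the sketch below (again with placeholder parameter values, for illustration only) simulates one realization of the TDD chain: it draws the MVU estimation error of \eqref{Eq: TDD-Estimated channel at AP}, forms the MFP, and evaluates \eqref{Eq: TDD-Rate}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
# Placeholder parameters, chosen for illustration only.
L, beta, P, P_E, P_D, N0, T_C, eta_T = 4, 0.5, 1.0, 0.1, 1e-3, 0.1, 200, 0.05

def tdd_rate(h):
    # MVU estimate: h_hat = h + w_bar,
    # w_bar ~ CN(0, N0/(eta_T*T_C*P_E) I_L).
    var = N0 / (eta_T * T_C * P_E)
    w = np.sqrt(var / 2) * (rng.standard_normal(L)
                            + 1j * rng.standard_normal(L))
    h_hat = h + w
    g = np.linalg.norm(h) ** 2
    alpha_T = (eta_T * L * P_E - eta_T * L * P_D + L * P_D) \
        / (beta * P * g + L * P_D)
    # Received SNR under the matched filter precoder h_hat/||h_hat||.
    snr = P * np.abs(np.vdot(h, h_hat)) ** 2 \
        / (N0 * np.linalg.norm(h_hat) ** 2)
    return (1 - alpha_T - eta_T) * np.log2(1 + snr)

h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
print("R_T for one channel draw:", tdd_rate(h))
\end{verbatim}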
Note that the channel training time $\eta_T T_C$ impacts both the accuracy of the estimated channel vector and the remaining available time for information transfer. As a consequence, let $\eta^{\star}_T$ be the duration of the portion of coherence time devoted to the channel training/estimation that maximizes the ergodic downlink rate, defined as
\begin{align}
\eta^{\star}_T = \operatorname{argmax}_{\eta_T}
\mathbb{E}_{\mathbf{h},\mathbf{\bar{\mathbf{w}}}}\Bigg[\frac{\left(1-\eta_T\right)\beta P\lVert\mathbf{h}\rVert^2-\eta_TLP_E}{\beta P\lVert\mathbf{h}\rVert^2+LP_D}&\notag
\\\times\log_2\left(1+\frac{P\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}\rvert^2}{N_0\lVert\mathbf{\hat{h}}\rVert^2}\right)&\Bigg].\label{Eq: SecIII-TDD-rate optimal problem}
\end{align}
The derivation of the exact value of $\eta^{\star}_T$ is very complicated. However, two approximations of this value, valid for high and low SNR, respectively, can be derived as stated in the following result.
\begin{lemma}\label{Lem: TDD eta over all channels}
At high SNR, $\eta^{\star}_T$ can be approximated as
\begin{equation}
\eta_T^{\star} \approx\sqrt{\frac{N_0L\log_2e}{B_1T_CP_E\left(L-1\right)}},\label{Eq: TDD_opt_high_SNR}
\end{equation}
where
$
B_1=\mathbb{E}_{\mathbf{h}}\left[\left(1+\frac{LP_E}{\beta P\lVert\mathbf{h}\rVert^2}
\right)\log_2\left(\frac{P\lVert\mathbf{h}\rVert^2}
{N_0}\right)\right].
$
At low SNR, it can be approximated as
\begin{align}
\eta_T^{\star} &\approx\frac{N_0}{T_CP_E}
\left(-1+\sqrt{1+\frac{\left(L-1\right)\beta PT_CP_E}{LN_0\left(\beta P+P_E\right)}-\frac{1}{L}}\right).\label{Eq: TDD_opt_low_SNR}
\end{align}
\end{lemma}
\begin{proof}
See Appendix-\ref{Apx: TDD eta over all channels}.
\end{proof}
Lemma \ref{Lem: TDD eta over all channels} provides a result whose interpretation is not trivial. In fact, several parameters are present in \eqref{Eq: TDD_opt_high_SNR} and \eqref{Eq: TDD_opt_low_SNR}, thus understanding their impact on the accuracy of the proposed approximations is rather complex. However, some interesting insights can be drawn from Lemma \ref{Lem: TDD eta over all channels}, if we focus on the approximations that are introduced in order to derive the final results. First, we note that the impact of $P_D$ on the accuracy of the results is likely negligible, due to the fact that $P \gg P_D$ by construction. Subsequently, let us focus on the quantity $\lambda=\frac{2\eta_TT_CP_E\lVert\mathbf{h}\rVert^2}{N_0}$, introduced in \eqref{Eq: TDD eta high SNR Taylor series-1}. If we fix $N_0$ at the denominator of $\lambda$ then it is straightforward to see that the latter increases with an increase in $P_E$ and $L$. Now, consider the low SNR case. In this case, $N_0$ is very large, thus the approximation $\lambda \approx 0$ is adopted. In practice, the accuracy of this approximation depends on the values of $L$ and $P_E$, i.e., the lower those values are, the more accurate the approximation is. Switching our focus to the high SNR analysis, we observe an opposite behavior. In fact, in this case $N_0$ is very small, hence the approximation $\lambda \gg 0$ is introduced. Thus, the accuracy of the approximation is greater when $P_E$ and $L$ are large. Concerning the latter parameter, i.e., the number $L$ of antennas at the AP, we note that an analysis of its impact on the accuracy of the results in Lemma \ref{Lem: TDD eta over all channels} is extremely interesting, given its relevance in MISO systems. Accordingly, a detailed discussion on this aspect will be provided in Sec.~\ref{Sec: Numerical results}.
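To illustrate how the two approximations of Lemma \ref{Lem: TDD eta over all channels} can be evaluated in practice, the following sketch computes \eqref{Eq: TDD_opt_high_SNR}, with $B_1$ estimated by Monte Carlo, and \eqref{Eq: TDD_opt_low_SNR}, each in its intended SNR regime. All numerical values are illustrative placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
# Placeholder parameters, chosen for illustration only.
L, beta, P, P_E, T_C = 4, 0.5, 1.0, 0.1, 200

# Monte Carlo samples of ||h||^2 for h ~ CN(0, I_L).
H = (rng.standard_normal((100000, L))
     + 1j * rng.standard_normal((100000, L))) / np.sqrt(2)
G = np.sum(np.abs(H) ** 2, axis=1)

def eta_star_high(N0):
    # Eq. (TDD_opt_high_SNR); B_1 estimated as a sample mean.
    B1 = np.mean((1 + L * P_E / (beta * P * G)) * np.log2(P * G / N0))
    return np.sqrt(N0 * L * np.log2(np.e) / (B1 * T_C * P_E * (L - 1)))

def eta_star_low(N0):
    # Eq. (TDD_opt_low_SNR).
    return (N0 / (T_C * P_E)) * (-1 + np.sqrt(
        1 + (L - 1) * beta * P * T_C * P_E
        / (L * N0 * (beta * P + P_E)) - 1 / L))

print(eta_star_high(1e-3))  # high-SNR regime (small N0)
print(eta_star_low(1.0))    # low-SNR regime (large N0)
\end{verbatim}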
\subsection{FDD Scheme}\label{Sec: FDD scheme}
We now consider the FDD scheme, whose schematic diagram is illustrated in Fig.~\ref{Fig: FDD-System model}.
\begin{figure}[!h]
\centering
\subfigure[Operations of the AP.]{\def.67\columnwidth{.67\columnwidth}
\import{}{fig3a.eps_tex}}
\vspace{1mm}
\subfigure[Operations of the UT.]{\def.67\columnwidth{.67\columnwidth}
\import{}{fig3b.eps_tex}}
\caption{Operations of the AP and the UT during the coherence time in the FDD scheme.}
\label{Fig: FDD-System model}
\end{figure}
Differently from the TDD case, the downlink and uplink channels are in general uncorrelated in the FDD scheme. Therefore, two separate channel estimation procedures have to be performed at the UT and the AP, to provide the latter with the CSI of the downlink channel. Accordingly, the operations in the FDD scheme are as follows. First, the AP transfers power to the UT for a period of $\epsilon = \alpha_FT_C$, with $0<\alpha_F<1$. As before, we recall that the AP should provide the UT with sufficient energy for the latter to be able not only to decode the received data but also to perform all the operations related to the uplink signaling inherent to the FDD scheme. Afterwards, a downlink channel training phase takes place, in which the AP sends pilot sequences of $\eta_FT_C\in\mathbb{Z}^{+}$ symbols with power $P$ to the UT for estimating the downlink channel, with $0<\eta_F<1$. Finally, the UT feeds back in the uplink the estimated CSI in analog form over the subsequent $\tau_FT_C\in\mathbb{Z}^{+}$ symbols, where $0<\tau_F<1$ and $0<\alpha_F+\eta_F+\tau_F\leq 1$. Note that in this work we adopt a simplified model for the uplink communication, for the sake of simplicity of the analysis and matters of space economy. Specifically, we assume that the feedback signal sent by the UT to the AP experiences an AWGN channel. We note that this follows the typical approach proposed in the literature for first studies on CSI acquisition schemes based on analog feedback signals \cite{CaireFeedback, art:samardzija06}.
Now, let us analyze the aforementioned steps in detail. Consider the $l$th antenna. We denote the pilot sequence sent over it as $\mathbf{e}_l=[e_l[1],\cdots,e_l[\eta_FT_C]]^\top\in\mathbb{C}^{\eta_F T_C}$, $l \in [1,L]$. Naturally, the sequences adopted in this phase are known at both ends of the communications. In particular, without loss of generality, we assume orthogonality between pilot sequences sent over different antennas, i.e., $\mathbf{e}_i \perp \mathbf{e}_j,$ for $i \neq j$. Thus, in order to guarantee their orthogonality, and estimate $L$ independent channel coefficients, a lower bound on the minimum sequence size must be satisfied, i.e., $\eta_FT_C\geq L$. Moreover, the AP equally divides the power $P$ among its $L$ antennas, yielding $\lvert e_l[i]\rvert^2 = \frac{P}{L}$ per pilot symbol and thus $\lVert\mathbf{e}_{1}\rVert^2=\cdots=\lVert\mathbf{e}_{L}\rVert^2=\frac{\eta_FT_CP}{L}$. Then, the signal received by the UT during the downlink channel training phase is given by
$\mathbf{y}_{UT,F}^{p} = \mathbf{e}_{1}h_{1}^{*}+\cdots+\mathbf{e}_{L}h_{L}^{*}+\mathbf{w}_{UT},$
where $\mathbf{w}_{UT}\sim\mathcal{C}\mathcal{N}\left(\mathbf{0},N_0\mathbf{I}_{\eta_FT_C}\right)$ is the thermal noise at the UT.
The UT in turn multiplies the received signal $\mathbf{y}_{UT,F}^{p}$ by $\mathbf{e}_{l}^{\dag}/\lVert\mathbf{e}_{l}\rVert^2$ to estimate the $l$th channel coefficient, $h_l$. Similar to the previous section, the $L$ downlink channel coefficients are estimated by an MVU based estimator. The estimated channel vector at the UT can be written as
$\mathbf{\hat{h}}_{UT}=[\hat{h}_{UT,1}, \dots, \hat{h}_{UT,L}]^\top=\mathbf{h}+\mathbf{\hat{w}}_{UT},$
with $\mathbf{\hat{w}}_{UT}\sim\mathcal{C}\mathcal{N}\left(\mathbf{0},\frac{N_0L}{\eta_FT_CP}\mathbf{I}_L\right)$ being the estimation error vector at the UT. The power consumption at the UT to decode a pilot sequence sent from one of the $L$ transmit antennas is modeled similarly to the previous case, i.e., proportional to $P_D$. Accordingly, the total power consumed in decoding the pilot symbols is given by $\eta_F T_C P_D.$
At this stage, the UT encodes each coefficient by means of a sequence $\mathbf{f}_l=[f_l[1],\cdots,f_l[\tau_FT_C]]^\top$ $\in\mathbb{C}^{\tau_FT_C}$, $\forall l \in [1,L]$, such that the $L$ sequences form an orthogonal set, i.e., $\mathbf{f}_i \perp \mathbf{f}_j$, for $i \neq j$, and $\lVert\mathbf{f}_{1}\rVert^2=\cdots=\lVert\mathbf{f}_{L}\rVert^2=\frac{\tau_FT_CP_F}{L}$. As before, the adopted sequences are known at both ends of the communications. In particular, in order to guarantee their orthogonality and encode $L$ independent channel coefficients, a lower bound on the minimum sequence size must be satisfied, i.e., $\tau_FT_C\geq L$.
After the encoding, the signal to be fed back by the UT to the AP is obtained as the sum of all the obtained sequences at the previous step, i.e., $\mathbf{x}^{f}_{F} = \mathbf{f}_{1}\hat{h}_{UT,1}+\cdots+\mathbf{f}_{L}\hat{h}_{UT,L}$. Consequently, its transmission requires a power given by
\begin{equation}
\frac{P_F}{L}\left(\lvert\hat{h}_{UT,1}\rvert^2+\cdots+\lvert\hat{h}_{UT,L}\rvert^2\right)=\frac{P_F\lVert\mathbf{\hat{h}}_{UT}\rVert^2}{L}.\label{Eq: FDD-Feedbacked power}
\end{equation}
Then, the received signal by the AP is given by
$\mathbf{y}^{f}_{AP,F} =\mathbf{f}_{1}\hat{h}_{UT,1}+\cdots+\mathbf{f}_{L}\hat{h}_{UT,L}+\mathbf{w}_{AP},$
where $\mathbf{w}_{AP}\sim\mathcal{C}\mathcal{N}\left(\mathbf{0},N_0\mathbf{I}_{\tau_FT_C}\right)$ is the thermal noise at the AP. Now, the latter multiplies the received sequence by $\mathbf{f}_l^\dag/\lVert\mathbf{f}_l\rVert^2$ to estimate $h_l$. Thus, the estimated channel vector at the AP is obtained as
$\mathbf{\hat{h}}_{AP}=\mathbf{h}+\mathbf{\hat{w}}_{UT}+\mathbf{\hat{w}}_{AP},$
where $\mathbf{\hat{w}}_{AP}\sim\mathcal{C}\mathcal{N}\left(\mathbf{0},\frac{N_0L}{\tau_F T_CP_F}\mathbf{I}_L\right)$. In particular, we note that $\mathbf{\hat{w}}_{UT}$ and $\mathbf{\hat{w}}_{AP}$ are independent by definition.
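In particular, since the two error terms are independent and Gaussian, the overall estimation error at the AP satisfies
$$\mathbf{\hat{w}}_{UT}+\mathbf{\hat{w}}_{AP}\sim\mathcal{C}\mathcal{N}\left(\mathbf{0},\left(\frac{N_0L}{\eta_FT_CP}+\frac{N_0L}{\tau_FT_CP_F}\right)\mathbf{I}_L\right),$$
i.e., differently from the TDD case, the CSI available at the AP accumulates the inaccuracies of both estimation stages.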
Finally, the AP can exploit the knowledge of $\mathbf{\hat{h}}_{AP}$ to derive the desired MFP as before, given by $\mathbf{\hat{h}}_{AP}/\lVert\mathbf{\hat{h}}_{AP}\rVert$, and use it as beamforming vector while transmitting the information symbols for the remaining $(1-\alpha_F-\eta_F-\tau_F)T_C$ symbols.
The received information symbol at the UT is given by
\begin{equation}
y_{UT,F}^{s}=\frac{\mathbf{h}^{\dag}\mathbf{\hat{h}}_{AP}}{\lVert\mathbf{\hat{h}}_{AP}\rVert}s+n,\label{Eq: FDD-Received information}
\end{equation}
where $s$ is the information symbol, with $\mathbb{E}\left[\lvert s\rvert^2\right]=P$. Concerning the energy required to perform all the operations at the UT, since the harvested energy must be sufficient to decode the received pilot sequences, feed back the estimated CSI, and decode the subsequent information, the condition
$\alpha_FT_C P_H=\eta_FT_C P_D+\tau_FT_CP_F\lVert\mathbf{\hat{h}}_{UT}\rVert^2/L+\left(1-\alpha_F-\eta_F-\tau_F\right)T_CP_D$
must be satisfied. Therefore, if we plug \eqref{Eq: System-Harvested power} into this condition then, after some manipulations, we have that the minimum duration of the energy transfer/harvesting phase for this case, i.e., $\alpha_F$, should be
\begin{equation}\label{Eq: FDD-Time portion alpha}
\alpha_F=\frac{\tau_FP_F\lVert\mathbf{\hat{h}}_{UT}\rVert^2-\tau_FLP_D+LP_D}{\beta P\lVert\mathbf{h}\rVert^2+LP_D}.
\end{equation}
Now, we can use \eqref{Eq: FDD-Time portion alpha} to compute the downlink rate for the FDD scheme as
\begin{align}
R_F &=R_F(\alpha_F,\eta_F,\tau_F)\notag\\
&=\left(1-\alpha_F-\eta_F-\tau_F\right)\log_2\left(1+\frac{P\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}_{AP}\rvert^2}{N_0\lVert\mathbf{\hat{h}}_{AP}\rVert^2}\right)\notag
\\&=\frac{\left(1-\eta_F-\tau_F\right)\beta P\lVert\mathbf{h}\rVert^2-\tau_FP_F\lVert\mathbf{\hat{h}}_{UT}\rVert^2-\eta_FLP_D}{\beta P\lVert\mathbf{h}\rVert^2+LP_D}\notag
\\&\quad\times\log_2\left(1+\frac{P\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}_{AP}\rvert^2}{N_0\lVert\mathbf{\hat{h}}_{AP}\rVert^2}\right).\label{Eq: FDD-Rate}
\end{align}
In this case, two parameters describe the duration of the channel estimation phase, i.e., $\eta_F$ and $\tau_F$, related to the channel estimation procedures at the UT and the AP, respectively. In practice, these parameters impact both the accuracy of the estimated channel vectors and the remaining available time for information transfer at the AP. As a consequence, let $(\eta^{\star}_F, \tau^{\star}_F)$ be the optimal pair of parameters that maximizes the ergodic downlink rate, defined as
\begin{align}
&\hspace{-0.6em}(\eta^{\star}_F,\tau^{\star}_F) = \operatorname{argmax}_{\eta_F,\tau_F}\mathbb{E}
_{\mathbf{\mathbf{h},\hat{w}}}\Bigg[\log_2\left(1+\frac{P\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}_{AP}\rvert^2}{N_0\lVert\mathbf{\hat{h}}_{AP}\rVert^2}\right)\notag
\\&\hspace{-0.83em}\times \frac{\left(1-\eta_F-\tau_F\right)\beta P\lVert\mathbf{h}\rVert^2-\tau_FP_F\lVert\mathbf{\hat{h}}_{UT}\rVert^2-\eta_FLP_D}{\beta P\lVert\mathbf{h}\rVert^2+LP_D}\Bigg],\label{Eq: SecIII-FDD-rate optimal problem}
\end{align}
where $\mathbf{\hat{w}}=\left(\mathbf{\hat{w}}_{AP},\mathbf{\hat{w}}_{UT}\right)$. As before, the derivation of the exact values of $\eta^{\star}_F$ and $\tau^{\star}_F$ is very complicated. Nevertheless, approximations of these values, valid for high and low SNR, respectively, can be derived as stated in the following result.
\begin{lemma}\label{Lem: FDD eta over all channels}
At high SNR, $\eta^{\star}_F$ and $\tau_F^{\star}$ can be approximated as
\begin{align}
\eta_F^{\star}&\approx\sqrt{\left(1+\frac{P_F}{\beta P}\right)\frac{P_F}{P}}\times\tau_F^{\star},\label{Eq: FDD_opt_high_SNR-1}
\\\tau_F^{\star}&\approx\sqrt{\frac{N_0L^2\log_2e}{B_5T_C\left(L-1\right)P_F\left(1+\frac{P_F}{\beta P}\right)}},\label{Eq: FDD_opt_high_SNR-2}
\end{align}
where $B_5=\mathbb{E}_\mathbf{h}\left[\log_2\left(\frac{P\lVert\mathbf{h}\rVert^2}{N_0}\right)\right]$.
At low SNR, they can be approximated as
\begin{align}
\tau^{\star}_F&\approx\frac{N_0L\left(-1+\sqrt{1+\frac{4\beta P^2\left(\frac{\beta P}{P_F}+1+\frac{N_0L}{\eta_F^{\star}T_CP}\right)}{\left(\frac{N_0L}{\eta_F^{\star}T_C}\right)^2}}\right)}{2T_C\left(\beta P+P_F+\frac{P_FN_0L}{\eta_F^{\star}T_CP}\right)},\label{Eq: FDD_opt_low_SNR-1}
\\\eta_F^{\star}&\approx\frac{P_FN_0L}{PT_C\left(\beta P+P_F\right)}
\left(-1+\sqrt{1+\frac{T_CP\left(\beta P+P_F\right)}{P_FN_0L}}\right).\label{Eq: FDD_opt_low_SNR-2}
\end{align}
\end{lemma}
\begin{proof}
See Appendix-\ref{Apx: FDD eta over all channels}.
\end{proof}
Despite the complexity of \eqref{Eq: FDD_opt_high_SNR-1}, \eqref{Eq: FDD_opt_high_SNR-2}, \eqref{Eq: FDD_opt_low_SNR-1} and \eqref{Eq: FDD_opt_low_SNR-2}, some interesting insights can be drawn from the approximations adopted in the derivation in Appendix-\ref{Apx: FDD eta over all channels}, following an approach similar to what has been done for Lemma \ref{Lem: TDD eta over all channels}. As before, the impact of $P_D$ on the accuracy of the results is likely negligible, due to the fact that $P \gg P_D$ by construction. Now, consider the quantities $\lambda_1=\frac{2T_C\lVert\mathbf{h}\rVert^2}{N_0L\left(\frac{1}{\eta_F P}+\frac{1}{\tau_F P_F}\right)}$ and $\lambda_2=\frac{2\eta_F T_CP\lVert\mathbf{h}\rVert^2}{N_0L}$, introduced in \eqref{Eq: FDD eta high SNR neglect fraction-2} and \eqref{Eq: FDD eta low SNR Taylor series-2}, respectively. We first focus on the low SNR case. Therein, the approximations $\lambda_1, \lambda_2 \approx 0$ are adopted. In this case, a smaller $P_F$ improves the accuracy of these approximations, whereas no clear insight can be drawn for $L$. Conversely, in the high SNR case, the approximation $\lambda_1 \gg 0$ is adopted. Differently from the previous case, the accuracy of this approximation increases with $P_F$. A further approximation is introduced in this part of the study, i.e., $\frac{N_0L^2}{\eta_FPT_C} \approx 0$ in \eqref{Eq: FDD eta high SNR neglect fraction-1}. Accordingly, an additional insight on the impact of the number of antennas on the accuracy of the result in Lemma \ref{Lem: FDD eta over all channels} can be drawn, i.e., the smaller $L$, the larger the accuracy. Interestingly, this is in contrast with the impact of the same parameter in the TDD case and highlights the expected larger penalty for CSI acquisition that FDD pays w.r.t. TDD as the number of antennas grows. A more detailed discussion on its impact on the accuracy of the results in Lemma \ref{Lem: FDD eta over all channels} is deferred to Sec.~\ref{Sec: Numerical results}, where a comparative study of the downlink rate of the three considered schemes is provided.
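As for the TDD case, the following sketch (with placeholder parameter values, for illustration only) evaluates the approximations of Lemma \ref{Lem: FDD eta over all channels}: \eqref{Eq: FDD_opt_high_SNR-1}--\eqref{Eq: FDD_opt_high_SNR-2} with $B_5$ estimated by Monte Carlo, and \eqref{Eq: FDD_opt_low_SNR-1}--\eqref{Eq: FDD_opt_low_SNR-2}, where $\tau^{\star}_F$ is computed after $\eta^{\star}_F$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
# Placeholder parameters, chosen for illustration only.
L, beta, P, P_F, T_C = 4, 0.5, 1.0, 0.1, 200

H = (rng.standard_normal((100000, L))
     + 1j * rng.standard_normal((100000, L))) / np.sqrt(2)
G = np.sum(np.abs(H) ** 2, axis=1)

def fdd_high(N0):
    # Eqs. (FDD_opt_high_SNR-1)-(FDD_opt_high_SNR-2):
    # tau first, then eta.
    B5 = np.mean(np.log2(P * G / N0))
    tau = np.sqrt(N0 * L ** 2 * np.log2(np.e)
                  / (B5 * T_C * (L - 1) * P_F * (1 + P_F / (beta * P))))
    return np.sqrt((1 + P_F / (beta * P)) * P_F / P) * tau, tau

def fdd_low(N0):
    # Eqs. (FDD_opt_low_SNR-1)-(FDD_opt_low_SNR-2):
    # eta has a closed form; tau then depends on eta.
    eta = (P_F * N0 * L / (P * T_C * (beta * P + P_F))) * (
        -1 + np.sqrt(1 + T_C * P * (beta * P + P_F) / (P_F * N0 * L)))
    a = N0 * L / (eta * T_C)
    tau = N0 * L * (-1 + np.sqrt(
        1 + 4 * beta * P ** 2 * (beta * P / P_F + 1 + a / P) / a ** 2)) \
        / (2 * T_C * (beta * P + P_F + P_F * a / P))
    return eta, tau

print(fdd_high(1e-3))  # high-SNR regime (small N0)
print(fdd_low(1.0))    # low-SNR regime (large N0)
\end{verbatim}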
\section{Analysis of the Outage Probability}\label{Sec: Outage probability}
In this section, we will study the outage probability for the considered system as a function of the parameters introduced so far, and the downlink rate. In the considered practical SWIPT implementation two possible outage events can occur:
\begin{itemize}
\item The harvested energy is not sufficient for all the operations at the UT (channel estimation, pilot transmission/CSI feedback and information decoding), i.e., the UT experiences an \textit{energy shortage}.
\item The harvested energy is sufficient to perform all the operations at the UT, but the achieved downlink rate is smaller than a target value, i.e., the UT experiences a \textit{data outage}.
\end{itemize}
We first focus on the case for which energy shortage occurs. Subsequently, we analyze the case for which the harvested energy is sufficient for all the operations at the UT, and compute the data outage probabilities for the three transmit schemes considered in this work. Before we proceed, we remark that the analytic expressions derived in this section for the outage probabilities as functions of the system parameters are very complicated, and straightforward inferences on their behavior are difficult to draw. Consequently, as before, we defer the discussion on the outage as a function of the system parameters for all the cases considered in this work to Sec.~\ref{Sec: Numerical results}.
\subsection{Energy Shortage Probability} \label{Sec: Energy shortage}
\subsubsection{Non-CSI Scheme}
Referring to \eqref{Eq: Non-Time portion alpha}, for any given value of $\alpha_N$, the energy shortage probability for the non-CSI case can be expressed mathematically as
\begin{align}
\mathcal{P}_{N}^{E, out}\left(\alpha_N\right)&=\Pr\left\{\frac{\alpha_N\beta P\lVert\mathbf{h}\rVert^2}{L}<\left(1-\alpha_N\right)P_D\right\}\notag
\\&=\frac{\gamma\left(L,\frac{\left(1-\alpha_N\right)LP_D}{\alpha_N\beta P}\right)}{\Gamma\left(L\right)},\label{Eq: Non-Energy outage closed-form}
\end{align}
where $\gamma(q,r)=\int_0^{r}u^{q-1}e^{-u}\mathrm{d}u$ is the lower incomplete Gamma function. The closed-form expression of this probability follows from the cumulative distribution function (CDF) of the $\chi^2_{2L}$ distribution, once we note that $2\lVert\mathbf{h}\rVert^2\sim\chi^2_{2L}$.
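The closed form \eqref{Eq: Non-Energy outage closed-form} is straightforward to check numerically, as in the following sketch (placeholder parameter values, for illustration only), which exploits the fact that the regularized lower incomplete Gamma function is available as \texttt{scipy.special.gammainc}; the same approach applies verbatim to \eqref{Eq: TDD power outage probability} below.
\begin{verbatim}
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete Gamma

rng = np.random.default_rng(4)
# Placeholder parameters, chosen for illustration only.
L, beta, P, P_D, alpha_N = 4, 0.5, 1.0, 1e-2, 0.02

# Closed form: gamma(L, c)/Gamma(L) = gammainc(L, c).
c = (1 - alpha_N) * L * P_D / (alpha_N * beta * P)
p_closed = gammainc(L, c)

# Monte Carlo of the energy-shortage event over h ~ CN(0, I_L).
H = (rng.standard_normal((200000, L))
     + 1j * rng.standard_normal((200000, L))) / np.sqrt(2)
G = np.sum(np.abs(H) ** 2, axis=1)
p_mc = np.mean(alpha_N * beta * P * G / L < (1 - alpha_N) * P_D)

print(p_closed, p_mc)
\end{verbatim}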
\subsubsection{TDD Scheme}
Referring to \eqref{Eq: TDD-Time portion alpha}, for any given values of $\alpha_T$ and $\eta_T$, the energy shortage probability for the TDD case, denoted by $\mathcal{P}_{T}^{E, out}\left(\alpha_T,\eta_T\right)$, can be expressed mathematically as
\begin{align}
&\mathcal{P}_{T}^{E, out}\left(\alpha_T,\eta_T\right)\notag
\\&\quad=\Pr\bigg\{\frac{\alpha_T\beta P\lVert\mathbf{h}\rVert^2}{L}<\left(1-\alpha_T-\eta_T\right)P_D+\eta_TP_E\bigg\}\notag
\\&\quad=\frac{\gamma\left(L,\frac{\eta_T LP_E+\left(1-\alpha_T-\eta_T\right)LP_D}{\alpha_T\beta P}\right)}{\Gamma\left(L\right)}.\label{Eq: TDD power outage probability}
\end{align}
Using the same approach as for \eqref{Eq: Non-Energy outage closed-form}, the closed-form expression of the probability in \eqref{Eq: TDD power outage probability} is computed.
\subsubsection{FDD Scheme}
Consider \eqref{Eq: FDD-Time portion alpha}. For any given values of $\alpha_F$, $\eta_F$ and $\tau_F$, the energy shortage probability for the FDD case can be stated mathematically as
\begin{align}
\mathcal{P}_{F}^{E, out}\left(\alpha_F,\eta_F,\tau_F\right)=\Pr\bigg\{
&\frac{\alpha_F\beta P\lVert\mathbf{h}\rVert^2}{L}<\frac{\tau_FP_F\lVert\mathbf{\hat{h}}_{UT}\rVert^2}{L}
\notag
\\&+\left(1-\alpha_F-\tau_F\right)P_D\bigg\}.\label{Eq: FDD-Energy outage-1}
\end{align}
The following result provides a closed-form expression of \eqref{Eq: FDD-Energy outage-1}. However, for the sake of the simplicity of the representation of the result, let us denote $\sigma_1=\sqrt{\frac{N_0L}{2\eta_F T_CP}}$, $\rho_1=\frac{\sqrt{2}\tau_F P_F}{\alpha_F\beta P-\tau_F P_F}$,
$\rho_2=\sqrt{\frac{2\tau_F P_F}{\alpha_F\beta P-\tau_F P_F}+\frac{2\tau_F^2P_F^2}{\left(\alpha_F\beta P-\tau_F P_F\right)^2}}$, and
$\rho_3=\sqrt{\frac{2\left(1-\alpha_F-\tau_F\right)LP_D}{\alpha_F\beta P-\tau_F P_F}}$.
\begin{lemma}\label{Lem: FDD power outage probability}
The energy shortage probability for the FDD scheme, as in \eqref{Eq: FDD-Energy outage-1}, can be computed as
\begin{align}
&\mathcal{P}_{F}^{E, out}\left(\alpha_F,\eta_F,\tau_F\right)\notag
\\&\quad=1-\int_{\theta_4 = 0}^{\infty}\frac{Q_{L}\left(\sqrt{\rho_1^2\sigma_1^2\theta_4},\sqrt{\rho_2^2\sigma_1^2\theta_4+\rho_3^2}\right)}{e^{\frac{\theta_4}{2}}\theta_4^{1-L}2^{L}\Gamma\left(L\right)}
\mathrm{d}\theta_4.\label{Eq: FDD power outage closed form}
\end{align}
\end{lemma}
\begin{proof}
The outage probability can be evaluated as follows. First, applying the law of total probability, i.e., given a random variable $A$,
$\Pr\left(\cdot\right)=\mathbb{E}_{A}\left[\Pr\left(\cdot|A\right)\right]$, we have
\begin{align}
&\eqref{Eq: FDD-Energy outage-1}=\mathbb{E}_{\mathbf{\hat{w}}_{UT}}\Big[\Pr\Big\{
\lVert\sqrt{2}\mathbf{h}-\rho_1\mathbf{\hat{w}}_{UT}\rVert^2\notag
\\&\hspace{10em}<
\rho_2^2\lVert\mathbf{\hat{w}}_{UT}\rVert^2
+\rho_3^2\big|\mathbf{\hat{w}}_{UT}
\Big\}\Big].\label{Eq: App4-Outage probability-1}
\end{align}
From \eqref{Eq: App4-Outage probability-1}, it can be easily deduced that $\lVert\sqrt{2}\mathbf{h}-\rho_1\mathbf{\hat{w}}_{UT}\rVert^2\big|_{\mathbf{\hat{w}}_{UT}}\sim\chi^{'2}_{2L}\left(\rho_1^2\lVert\mathbf{\hat{w}}_{UT}\rVert^2\right)$. Therefore, substituting the PDF of $\lVert\sqrt{2}\mathbf{h}-\rho_1\mathbf{\hat{w}}_{UT}\rVert^2\big|_{\mathbf{\hat{w}}_{UT}}$ into \eqref{Eq: App4-Outage probability-1}, we can rewrite
\begin{equation}
\eqref{Eq: App4-Outage probability-1} = 1-\mathbb{E}_{\mathbf{\hat{w}}_{UT}}\left[Q_{L}\left(\rho_1\lVert\mathbf{\hat{w}}_{UT}\rVert,\sqrt{\rho_2^2\lVert\mathbf{\hat{w}}_{UT}\rVert^2+\rho_3^2}\right)\right].\label{Eq: App4-Outage probability-2}
\end{equation}
Since $\mathbf{\hat{w}}_{UT}\sim\mathcal{C}\mathcal{N}\left(\mathbf{0},2\sigma_1^2\mathbf{I}_L\right)$, we have $\lVert\mathbf{\hat{w}}_{UT}\rVert^2=\sigma_1^2\Theta_4$, where $\Theta_4\sim\chi^2_{2L}$. Substituting the PDF of $\lVert\mathbf{\hat{w}}_{UT}\rVert^2$ into \eqref{Eq: App4-Outage probability-2}, we derive the RHS of \eqref{Eq: FDD power outage closed form}, and this concludes the proof.
\end{proof}
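The integral in \eqref{Eq: FDD power outage closed form} can be evaluated numerically by observing that the weight $\theta_4^{L-1}e^{-\theta_4/2}2^{-L}/\Gamma\left(L\right)$ is the PDF of $\chi^2_{2L}$, and that, by a standard identity, $Q_M(q,r)$ equals the survival function of $\chi^{'2}_{2M}\left(q^2\right)$ evaluated at $r^2$. The following sketch (placeholder parameter values, for illustration only) uses these observations and cross-checks the result against a direct Monte Carlo simulation of the event in \eqref{Eq: FDD-Energy outage-1}.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Placeholder parameters, chosen for illustration only.
L, beta, P, P_F, P_D, N0, T_C = 4, 0.5, 1.0, 0.1, 1e-2, 0.1, 200
alpha_F, eta_F, tau_F = 0.05, 0.05, 0.05

sigma1_sq = N0 * L / (2 * eta_F * T_C * P)
d = alpha_F * beta * P - tau_F * P_F          # assumed positive
rho1_sq = 2 * (tau_F * P_F / d) ** 2
rho2_sq = 2 * tau_F * P_F / d + rho1_sq
rho3_sq = 2 * (1 - alpha_F - tau_F) * L * P_D / d

def marcum_q(M, a, b):
    # Q_M(a, b) = P(X > b^2) for X ~ noncentral chi^2,
    # 2M degrees of freedom, noncentrality a^2.
    return stats.ncx2.sf(b ** 2, 2 * M, a ** 2)

# Eq. (FDD power outage closed form) as 1 - E[Q_L(.)],
# with Theta_4 ~ chi^2_{2L}.
theta = stats.chi2.rvs(2 * L, size=200000, random_state=rng)
p_closed = 1 - np.mean(marcum_q(L, np.sqrt(rho1_sq * sigma1_sq * theta),
                                np.sqrt(rho2_sq * sigma1_sq * theta
                                        + rho3_sq)))

# Direct Monte Carlo of the event in Eq. (FDD-Energy outage-1).
n = 200000
h = (rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L))) \
    / np.sqrt(2)
w = np.sqrt(sigma1_sq) * (rng.standard_normal((n, L))
                          + 1j * rng.standard_normal((n, L)))
g, g_hat = np.sum(np.abs(h) ** 2, 1), np.sum(np.abs(h + w) ** 2, 1)
p_mc = np.mean(alpha_F * beta * P * g / L
               < tau_F * P_F * g_hat / L + (1 - alpha_F - tau_F) * P_D)
print(p_closed, p_mc)
\end{verbatim}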
At this stage, if we focus on \eqref{Eq: Non-Energy outage closed-form}, \eqref{Eq: TDD power outage probability}, and \eqref{Eq: FDD power outage closed form}, we note that the energy shortage probability in the three considered cases clearly depends on the values of $\alpha_N$, $(\alpha_T,\eta_T)$, and $(\alpha_F,\eta_F,\tau_F)$ respectively. However, drawing meaningful insights from these results is extremely difficult, due to their complexity. Accordingly, we will investigate this aspect in Sec.~\ref{Sec: Numerical results}, by means of suitable numerical analyses.
\subsection{Data Outage Probability for the Non-CSI Scheme}\label{sec:data outafe for non-CSI}
We now compute the data outage probability for the non-CSI scheme. Given $\alpha_N$ and a specific target downlink rate $R_{NC}$, the data outage probability can be stated mathematically as
\begin{align}
&\mathcal{P}_{N}^{D, out}\left(\alpha_N,R_{NC}\right)=\Pr\bigg\{\frac{\alpha_N\beta P\lVert\mathbf{h}\rVert^2}{L}\geq\left(1-\alpha_N\right)P_D,\notag
\\&\hspace{4em}\left(1-\alpha_N\right)\log_2\left(1+\frac{P\lVert\mathbf{h}\rVert^2}{N_0L}\right)<R_{NC}\bigg\},\notag
\end{align}
that is the probability that the harvested energy is sufficient for the decoding operations at the UT, but the achieved downlink rate is smaller than $R_{NC}$. Now, let us rewrite $\mathcal{P}_{N}^{D, out}$ as
\begin{align}
&\hspace{-0.7em}\Pr\left\{\frac{\left(1-\alpha_N\right)LP_D}{\alpha_N\beta P}\leq
\lVert\mathbf{h}\rVert^2<\frac{N_0L}{P}\left(2^{\frac{R_{NC}}{1-\alpha_N}}-1\right)
\right\}.\label{Eq: Non-Data outage closed-form-1}
\end{align}
The intersection between the two events in \eqref{Eq: Non-Data outage closed-form-1} is non-empty when
\begin{equation} \label{eq:condition_non_CSI}
\frac{\left(1-\alpha_N\right)LP_D}{\alpha_N\beta P}<\frac{N_0L}{P}\left(2^{\frac{R_{NC}}{1-\alpha_N}}-1\right).
\end{equation}
If this condition is not satisfied, then \eqref{Eq: Non-Data outage closed-form-1} is equal to 0. It is worth noting that, assuming $R_{NC}\neq0$, the data outage probability would be 0 only for extremely low values of $N_0$, given that typically $P \gg P_D$, as previously discussed. This is in line with what could be expected in a wireless communication system, in which the data outage probability tends to 0 as the SNR at the receiver increases. If this is not the case, and \eqref{eq:condition_non_CSI} is satisfied, then \eqref{Eq: Non-Data outage closed-form-1} can be computed as
\begin{align}
&\mathcal{P}_{N}^{D, out}\left(\alpha_N,R_{NC}\right)\notag
\\&=\frac{\gamma\left(L,\frac{N_0L}{P}\left(2^{\frac{R_{NC}}{1-\alpha_N}}-1\right)\right)}{\Gamma\left(L\right)}-\frac{\gamma\left(L,\frac{\left(1-\alpha_N\right)LP_D}{\alpha_N\beta P}\right)}{\Gamma\left(L\right)},\label{Eq: Non-Data outage closed-form-2}
\end{align}
where we made use of the CDF of the $\chi^2_{2L}$ distribution.
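For concreteness, \eqref{Eq: Non-Data outage closed-form-2} reduces to two evaluations of the regularized lower incomplete gamma function, together with the feasibility check \eqref{eq:condition_non_CSI}. A minimal Python sketch, under illustrative parameter values, reads:
\begin{verbatim}
# Non-CSI data outage probability.
# Parameter values are illustrative assumptions.
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma

L, alpha_N, R_NC = 3, 0.3, 6.0
beta, P, P_D, N0 = 0.5, 1.0, 1e-3, 1e-2

x_rate = (N0 * L / P) * (2 ** (R_NC / (1 - alpha_N)) - 1)
x_energy = (1 - alpha_N) * L * P_D / (alpha_N * beta * P)

# The outage probability is zero unless the interval is non-empty.
p_out = gammainc(L, x_rate) - gammainc(L, x_energy) \
        if x_energy < x_rate else 0.0
print("P_N^{D,out} ~", p_out)
\end{verbatim}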
\subsection{Data Outage Probability for the TDD Scheme}\label{sec:data outafe for TDD}
We switch our focus back to the TDD scheme. For given values of $\alpha_T$, $\eta_T$, and a target downlink rate $R_T$, the data outage probability is expressed as
\begin{align}
&\hspace{-0.7em}\mathcal{P}_{T}^{D, out}\left(\alpha_T,\eta_T,R_{T}\right)=\Pr\bigg\{\frac{\alpha_T\beta P\lVert\mathbf{h}\rVert^2}{L}\geq\left(1-\alpha_T-\eta_T\right)P_D\notag
\\&\hspace{-0.7em}+\eta_TP_E,\left(1-\alpha_T-\eta_T\right)\log_2\left(1+\frac{P\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}\rvert^2}{N_0\lVert\mathbf{\hat{h}}\rVert^2}\right)<R_T
\bigg\},\label{Eq: TDD-Data outage}
\end{align}
that is the probability that the harvested energy is sufficient to engage in the pilots transmission and decode the received data, but the achieved downlink rate is smaller than $R_T$.
The following result provides a closed-form expression for \eqref{Eq: TDD-Data outage} and concludes the study of the TDD case. Before proceeding, let us denote $b_3=\frac{N_0}{P}\left(2^{\frac{R_T}{1-\alpha_T-\eta_T}}-1\right)$, $b_4=\frac{\eta_T LP_E+\left(1-\alpha_T-\eta_T\right)LP_D}{\alpha_T\beta P}$, $b_5=\frac{N_0+\eta_T T_CP_E}{N_0}$, and $b_6=\frac{N_0}{\eta_T T_CP_E}$, to simplify the presentation of the result.
\begin{lemma}\label{Lem: TDD data outage probability}
When $\frac{N_0}{P}\left(2^{\frac{R_T}{1-\alpha_T-\eta_T}}-1\right)<\frac{\eta_T LP_E+\left(1-\alpha_T-\eta_T\right)LP_D}{\alpha_T\beta P}$,
then the data outage probability for the TDD scheme, as in \eqref{Eq: TDD-Data outage}, can be computed as
\begin{align}
&\mathcal{P}_{T}^{D, out}=\int_{\theta_3 = 0}^{\infty}\int_{\theta_1 = 0}^{2b_5 b_3}\frac{\Gamma\left(L-1,b_5 b_4-\frac{\theta_1}{2}\right)
\theta_3^{L-1}}{2^{L+1}\Gamma\left(L-1\right)\Gamma\left(L\right)}\notag
\\&\qquad\times I_0\left(\sqrt{\frac{\theta_1 \theta_3}{b_6}}\right)e^{-\left(\frac{\theta_1}{2}+\frac{\theta_3}{2b_6}+\frac{\theta_3}{2}\right)}\mathrm{d}\theta_1\mathrm{d}\theta_3.\label{Eq: TDD data outage probability closed form 1}
\end{align}
Conversely, when $\frac{N_0}{P}\left(2^{\frac{R_T}{1-\alpha_T-\eta_T}}-1\right)\geq\frac{\eta_T LP_E+\left(1-\alpha_T-\eta_T\right)LP_D}{\alpha_T\beta P}$, it can be computed as
\begin{align}
&\mathcal{P}_{T}^{D, out}=\int_{\theta_3 = 0}^{\infty}\int_{\theta_1 = 0}^{2b_5 b_4}\frac{\Gamma\left(L-1,b_5 b_4-\frac{\theta_1}{2}\right)
I_0\left(\sqrt{\frac{\theta_1 \theta_3}{b_6}}\right)}{2^{L+1}\Gamma\left(L-1\right)\Gamma\left(L\right)}\notag
\\&\times \theta_3^{L-1}e^{-\left(\frac{\theta_1}{2}+\frac{\theta_3}{2b_6}+\frac{\theta_3}{2}\right)}\mathrm{d}\theta_1\mathrm{d}\theta_3+\int_{\theta_3 = 0}^{\infty}e^{-\frac{\theta_3}{2}}\theta_3^{L-1}\notag
\\&\times
\frac{\left(Q_{1}\left(\sqrt{\frac{\theta_3}{b_6}},\sqrt{2b_5 b_4}\right)
-Q_{1}\left(\sqrt{\frac{\theta_3}{b_6}},\sqrt{2b_5 b_3}\right)\right)}{2^L\Gamma\left(L\right)}\mathrm{d}\theta_3.\label{Eq: TDD data outage probability closed form 2}
\end{align}
\end{lemma}
\begin{proof}
See Appendix-\ref{Apx: TDD data outage probability}.
\end{proof}
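Although \eqref{Eq: TDD data outage probability closed form 1} involves a double integral, it can be evaluated numerically in a straightforward way. The Python/SciPy sketch below, with illustrative parameters chosen so that $b_3<b_4$, uses the regularized upper incomplete gamma function and the exponentially scaled Bessel function $I_0(z)e^{-z}$ to keep the integrand numerically stable:
\begin{verbatim}
# TDD data outage probability, first case (b3 < b4).
# Parameter values are illustrative assumptions.
import numpy as np
from scipy import integrate
from scipy.special import gammaincc, gamma, i0e

L, alpha_T, eta_T, R_T = 3, 0.3, 0.05, 1.0
beta, P, P_E, P_D = 0.5, 1.0, 0.01, 1e-3
N0, T_C = 1e-2, 1000.0

b3 = (N0 / P) * (2 ** (R_T / (1 - alpha_T - eta_T)) - 1)
b4 = (eta_T * L * P_E
      + (1 - alpha_T - eta_T) * L * P_D) / (alpha_T * beta * P)
b5 = (N0 + eta_T * T_C * P_E) / N0
b6 = N0 / (eta_T * T_C * P_E)
assert b3 < b4  # first case of the lemma

def integrand(t1, t3):
    z = np.sqrt(t1 * t3 / b6)
    # i0e(z) = I_0(z) exp(-z); by AM-GM the total exponent stays <= 0
    expo = z - (t1 / 2 + t3 / (2 * b6) + t3 / 2)
    return (gammaincc(L - 1, b5 * b4 - t1 / 2)
            * t3 ** (L - 1) / (2 ** (L + 1) * gamma(L))
            * i0e(z) * np.exp(expo))

p_out, _ = integrate.dblquad(integrand, 0, np.inf,
                             lambda t3: 0.0, lambda t3: 2 * b5 * b3)
print("P_T^{D,out} ~", p_out)
\end{verbatim}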
\subsection{Data Outage Probability for the FDD Scheme}\label{sec:data outafe for FDD}
We conclude our study on the data outage probability by considering the FDD case. For given $\alpha_F$, $\eta_F$, $\tau_F$, and a specific target downlink rate $R_F$, the data outage probability can be stated mathematically as
\begin{align}
&\hspace{-0.5em}\mathcal{P}_{F}^{D, out}\left(\alpha_F,\eta_F,\tau_F,R_{F}\right)\notag
\\&\hspace{-0.5em}=\Pr\Bigg\{\frac{\alpha_F\beta P\lVert\mathbf{h}\rVert^2}{L}\geq\frac{\tau_FP_F\lVert\mathbf{\hat{h}}_{UT}\rVert^2}{L}
+\left(1-\alpha_F-\tau_F\right)P_D,\notag
\\&\hspace{-0.5em}\left(1-\alpha_F-\eta_F-\tau_F\right)\log_2\left(1+\frac{P\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}_{AP}\rvert^2}{N_0\lVert\mathbf{\hat{h}}_{AP}\rVert^2}\right)<R_F\Bigg\},\label{Eq: FDD-Data outage-1}
\end{align}
that is the probability that the harvested energy is sufficient to estimate the downlink channel, feed back its estimated version in the uplink, and decode the received data, but the achieved downlink rate is smaller than $R_F$.
The following result provides a closed-form expression for \eqref{Eq: FDD-Data outage-1} and concludes the study of the FDD case. As before, let us introduce some new notation to further simplify the presentation of the results. Accordingly, we let $\sigma_2=\frac{N_0L+\eta_F PT_C}{N_0L}$, $\sigma_3=\frac{N_0L}{\eta_F PT_C}$, $\sigma_4=\frac{N_0L}{\tau_F P_FT_C}$, $\sigma_5=\frac{\left(1+\sigma_3\right)\sigma_4}{1+\sigma_3+\sigma_4}$, $b_7=\frac{N_0}{P}\left(2^{\frac{R_F}{1-\alpha_F-\eta_F-\tau_F}}-1\right)$, $b_8=\frac{\left(1-\alpha_F-\tau_F\right)LP_D}{\alpha_F\beta P}$, and $b_9=\frac{\tau_F P_F}{\alpha_F\beta P}$.
\begin{lemma}\label{Lem: FDD data outage probability}
The data outage probability for the FDD scheme, as in \eqref{Eq: FDD-Data outage-1}, can be computed as
\begin{align}
&\mathcal{P}_{F}^{D, out}=\int_{\theta_9 = 0}^{\infty}\int_{\theta_7+\theta_8>\frac{2\left(b_7-b_8\right)}{b_9\sigma_5}}\int_{\theta_5 = 0}^{2\sigma_2b_7}\frac{\theta_8^{L-2}\theta_9^{L-1}}{\Gamma\left(L\right)\Gamma\left(L-1\right)}\notag
\\&\times Q_{L-1}\left(\sqrt{\frac{\theta_8\sigma_5}{\sigma_2\sigma_3^2}},
\sqrt{2\sigma_2\left(b_8+\frac{b_9\left(\theta_7+\theta_8\right)\sigma_5}{2}\right)-\theta_5}\right)\notag
\\&\times\frac{I_0\left(\sqrt{\frac{\theta_5\theta_7\sigma_5}{\sigma_2\sigma_3^2}}\right)I_0\left(\sqrt{\frac{\theta_7\theta_9\left(1+\sigma_3\right)}{\sigma_4}}\right)}{2^{2L+1}\times e^{\left(\frac{\theta_7\sigma_5}{2\sigma_2\sigma_3^2}+\frac{\theta_5+\theta_7+\theta_8+\theta_9}{2}+\frac{\left(1+\sigma_3\right)\theta_9}{2\sigma_4}\right)}}\mathrm{d}\theta_5\mathrm{d}\theta_7\mathrm{d}\theta_8\mathrm{d}\theta_9\label{Eq: FDD-Data outage closed form-1}
\\&+\int_{\theta_9 = 0}^{\infty}\int_{\theta_7+\theta_8\leq\frac{2\left(b_7-b_8\right)}{b_9\sigma_5}}\int_{\theta_5 = 0}^{2\sigma_2 \left(b_8+\frac{b_9\left(\theta_7+\theta_8\right)\sigma_5}{2}\right)}\notag
\\&\quad Q_{L-1}\left(\sqrt{\frac{\theta_8\sigma_5}{\sigma_2\sigma_3^2}},
\sqrt{2\sigma_2\left(b_8+\frac{b_9\left(\theta_7+\theta_8\right)\sigma_5}{2}\right)-\theta_5}\right)\notag
\\&\times I_0\left(\sqrt{\frac{\theta_5\theta_7\sigma_5}{\sigma_2\sigma_3^2}}\right)
I_0\left(\sqrt{\frac{\theta_7\theta_9\left(1+\sigma_3\right)}{\sigma_4}}\right)\theta_8^{L-2}\theta_9^{L-1}\notag
\\&\times\frac{e^{-\left(\frac{\theta_7\sigma_5}{2\sigma_2\sigma_3^2}+\frac{\theta_5+\theta_7+\theta_8+\theta_9}{2}+\frac{\left(1+\sigma_3\right)\theta_9}{2\sigma_4}\right)}}{\Gamma\left(L-1\right)\Gamma\left(L\right)2^{2L+1}}\mathrm{d}\theta_5\mathrm{d}\theta_7\mathrm{d}\theta_8\mathrm{d}\theta_9\label{Eq: FDD-Data outage closed form-2}
\\&+\int_{\theta_9 = 0}^{\infty}\int_{\theta_7+\theta_8\leq\frac{2\left(b_7-b_8\right)}{b_9\sigma_5}}
\frac{I_0\left(\sqrt{\frac{\theta_7\theta_9\left(1+\sigma_3\right)}{\sigma_4}}\right)}{2^{2L}\times e^{\left(\frac{\left(1+\sigma_3\right)\theta_9}{2\sigma_4}+\frac{\theta_7+\theta_8+\theta_9}{2}\right)}}\notag
\\&\times\Bigg[Q_1\left(\sqrt{\frac{\theta_7\sigma_5}{\sigma_2\sigma_3^2}},\sqrt{2\sigma_2 \left(b_8+\frac{b_9\left(\theta_7+\theta_8\right)\sigma_5}{2}\right)}\right)\notag
\end{align}
\begin{align}&-Q_1\left(\sqrt{\frac{\theta_7\sigma_5}{\sigma_2\sigma_3^2}},\sqrt{2\sigma_2 b_7}\right)\Bigg]
\times \frac{\theta_8^{L-2}\theta_9^{L-1}}{\Gamma\left(L-1\right)\Gamma\left(L\right)}\mathrm{d}\theta_7\mathrm{d}\theta_8\mathrm{d}\theta_9.\label{Eq: FDD-Data outage closed form-3}
\end{align}
\end{lemma}
\begin{proof}
See Appendix-\ref{Apx: FDD data outage probability}.
\end{proof}
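Given the dimensionality of \eqref{Eq: FDD-Data outage closed form-1}--\eqref{Eq: FDD-Data outage closed form-3}, a Monte-Carlo cross-check is often more convenient than nested quadrature. The Python sketch below samples the channel and its estimates according to the conditional distributions used in Appendix-\ref{Apx: FDD data outage probability} and evaluates the two events in \eqref{Eq: FDD-Data outage-1} directly; all parameter values are illustrative assumptions:
\begin{verbatim}
# Monte-Carlo cross-check of the FDD data outage probability.
# Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
L, alpha_F, eta_F, tau_F, R_F = 3, 0.3, 0.05, 0.05, 1.0
beta, P, P_F, P_D = 0.5, 1.0, 0.01, 1e-3
N0, T_C = 1e-2, 1000.0

sigma3 = N0 * L / (eta_F * P * T_C)
sigma4 = N0 * L / (tau_F * P_F * T_C)
sigma2 = (N0 * L + eta_F * P * T_C) / (N0 * L)
sigma5 = (1 + sigma3) * sigma4 / (1 + sigma3 + sigma4)

def cn(shape, var):  # complex Gaussian, per-component variance `var`
    return (rng.standard_normal(shape)
            + 1j * rng.standard_normal(shape)) * np.sqrt(var / 2)

n = 200_000
h_AP = cn((n, L), 1 + sigma3 + sigma4)
h_UT = (sigma5 / sigma4) * h_AP + cn((n, L), sigma5)
h = h_UT / (1 + sigma3) + cn((n, L), 1 / sigma2)

energy_ok = (alpha_F * beta * P * np.sum(abs(h) ** 2, 1) / L
             >= tau_F * P_F * np.sum(abs(h_UT) ** 2, 1) / L
             + (1 - alpha_F - tau_F) * P_D)
snr = (P * abs(np.sum(h.conj() * h_AP, 1)) ** 2
       / (N0 * np.sum(abs(h_AP) ** 2, 1)))
rate = (1 - alpha_F - eta_F - tau_F) * np.log2(1 + snr)
print("P_F^{D,out} ~", np.mean(energy_ok & (rate < R_F)))
\end{verbatim}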
\section{Numerical Results}\label{Sec: Numerical results}
In this section we evaluate the performance of SWIPT for MISO systems, to assess its merit under the transmit schemes considered in this work. The parameters used in our numerical results are as follows. We consider $\beta=0.5$, which is a good approximation of the performance delivered by state-of-the-art commercial products \cite{prod:P2110}, and a value typically adopted in the literature on this subject \cite{art:zhang13, art:kaibin13, art:liu13}. We consider $P=1$ and $T_C=1000$ for simplicity, and $L\in\{3,6\}$. We assume that the system operates in the industrial, scientific and medical (ISM) band, i.e., at a carrier frequency of 2.4~GHz. Accordingly, we set the distance between the AP and the UT on the order of meters, ensuring that the latter is situated in the far-field region of the radiating AP. Furthermore, we assume that the signals transmitted by both the AP and the UT experience a generic path loss attenuation, with a path loss exponent equal to 3. As a consequence, we can safely let $\frac{P}{P_D}=1000$. The rationale for this is that by incorporating the propagation losses in $\frac{P}{P_D}$, we frame a more realistic scenario. Finally, we model the ratio between the power budgets available at the AP and the UT following the same logic. We let $\frac{P}{P_E}=\frac{P}{P_F}=100$, in accordance with the typical ratio between the available power budgets at both sides of the communication in modern networks, which is roughly $20$~dB \cite{rpt:3gpp.36.814}.
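For reproducibility, these choices can be collected in a single configuration structure; the sketch below is in Python, with the SNR sweep realized by varying $N_0$ separately:
\begin{verbatim}
# Simulation parameters used throughout this section.
PARAMS = dict(
    beta=0.5,          # RF-to-DC conversion efficiency
    P=1.0,             # AP transmit power (normalized)
    T_C=1000,          # coherence time
    L_values=(3, 6),   # antenna configurations at the AP
    P_D=1.0 / 1000,    # P/P_D = 1000 (propagation losses included)
    P_E=1.0 / 100,     # P/P_E = 100, i.e., ~20 dB budget gap
    P_F=1.0 / 100,     # P/P_F = 100
)
\end{verbatim}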
\subsection{Downlink Rate}
Now, we focus on the ergodic downlink rate. First, we compute the optimal numerical performance of the system by numerically solving the problems in \eqref{Eq: SecIII-TDD-rate optimal problem} and \eqref{Eq: SecIII-FDD-rate optimal problem}, by means of an exhaustive search whose complexity and time requirements are not suitable for realistic implementations. Subsequently, we evaluate the accuracy of our theoretical results by comparing them to the numerical performance results. Throughout this section, for the sake of clarity, we will refer to the approximated parameters derived in Lemma \ref{Lem: TDD eta over all channels} and Lemma \ref{Lem: FDD eta over all channels} as analytic results. Now, for the TDD scheme, let $R^\star_T$ and $\eta^\star_T$ be the optimal downlink rate and the optimal duration of the portion of the coherence time devoted to the channel training/estimation, computed by extensive Monte-Carlo simulations. With a slight abuse of notation, we denote by $\hat{\eta}^\star_T$ the optimal parameter of interest for the TDD scheme, computed according to Lemma \ref{Lem: TDD eta over all channels}. A similar notation is defined for the FDD scheme. Now, we define
$\zeta_T = \frac{R_T (\alpha_T, \hat{\eta}^\star_T)}{R^\star_T} \in [0,1]$, for TDD, and
$\zeta_F = \frac{R_F (\alpha_F,\hat{\eta}^\star_F,\hat{\tau}^\star_F)}{R^\star_F} \in [0,1]$, for FDD,
as the ratio between the downlink rate obtained with the analytic and the optimal numerical results.\footnote{Note that $\alpha_T$ and $\alpha_F$
are computed according to \eqref{Eq: TDD-Time portion alpha} and \eqref{Eq: FDD-Time portion alpha}, respectively.} We let SNR $\in [0,30]$~dB and compute $\zeta_T$ and $\zeta_F$ for both $L=3$ and $L=6$ in Fig.~\ref{fig:TDD_L3}, Fig.~\ref{fig:TDD_L6}, Fig.~\ref{fig:FDD_L3}, and Fig.~\ref{fig:FDD_L6}.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{fig4.eps}
\caption{$\zeta_T$ for analytic and numerical parameters, TDD and $L=3$ antennas.}
\label{fig:TDD_L3}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{fig5.eps}
\caption{$\zeta_T$ for analytic and numerical parameters, TDD and $L=6$ antennas.}
\label{fig:TDD_L6}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{fig6.eps}
\caption{$\zeta_F$ for analytic and numerical parameters, FDD and $L=3$ antennas.}
\label{fig:FDD_L3}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{fig7.eps}
\caption{$\zeta_F$ for analytic and numerical parameters, FDD and $L=6$ antennas.}
\label{fig:FDD_L6}
\end{figure}
Quantitatively, if we focus on the best performer for each of the considered SNR values, the gap between $\zeta_T$ ($\zeta_F$ in the FDD case) and 1 is remarkably small. Thus, the accuracy of our derivations is confirmed.
If we focus on the impact of $L$ on the two analytic results, we note that they confirm the intuitions provided in Sec.~\ref{Sec: TDD scheme} and Sec.~\ref{Sec: FDD scheme}. However, the difference in terms of the best $\zeta_T$ (and $\zeta_F$) between the two antenna configurations is rather small. This shows that the impact of the number of antennas at the AP on the accuracy of the analytic results is not very significant.
Furthermore, we see that $\zeta_F\leq\zeta_T$, $\forall$ SNR$\in [0,30]$~dB and $\forall\,L \in\{3,6\}$. This is due to the two-step channel estimation process that is needed in the FDD scheme for the CSI acquisition at the AP. As a consequence, a greater number of approximations is necessary. This reduces the accuracy of our closed-form representation of $\eta^\star_F$ and $\tau^\star_F$.\footnote{The interested reader may refer to Appendix-\ref{Apx: FDD eta over all channels} for further details.}
Focusing on the practical implementation, we note that the presence of the analytic results provides a twofold alternative for the AP, depending on the system intrinsic constraints. When the time available for the optimization of the transmit parameters is small, the analytic results could be used to achieve a performance which is reasonably close to the optimal, without resorting to an exhaustive search. Conversely, if more time is available for the AP, the analytic results can be used to improve the efficiency of the search for the optimal parameters. In this regard, we note that the downlink rate is a concave function of $(\eta,\tau)$. Accordingly, in this case, the local optimum coincides with the global optimum. Now, assume that the results of Lemma \ref{Lem: TDD eta over all channels} and Lemma \ref{Lem: FDD eta over all channels} were adopted as a starting point for finding the numerically optimal parameters, by means of an exhaustive search inside a smaller set. Then, the necessary time to identify the global optimum could be significantly reduced w.r.t. a ``blind'' exhaustive search, due to the proximity of the analytic results and the actual global optimum.
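A sketch of this refinement strategy follows, assuming a generic, user-supplied routine \texttt{ergodic\_rate} (e.g., a Monte-Carlo average of the FDD downlink rate); by concavity, the local optimum returned by the solver is also the global one:
\begin{verbatim}
# Refinement of the analytic point (eta_hat, tau_hat).
# `ergodic_rate` is a placeholder for a rate-evaluation routine.
from scipy.optimize import minimize

def refine(ergodic_rate, eta_hat, tau_hat):
    obj = lambda x: -ergodic_rate(eta=x[0], tau=x[1])
    # Derivative-free search seeded at the analytic point
    # (bounds for Nelder-Mead require SciPy >= 1.7).
    res = minimize(obj, x0=[eta_hat, tau_hat],
                   method="Nelder-Mead",
                   bounds=[(1e-4, 0.5), (1e-4, 0.5)])
    return res.x  # refined (eta*, tau*)
\end{verbatim}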
To conclude our analysis on the downlink rate, we investigate the advantages, if any, that the two duplexing schemes discussed so far can bring w.r.t. the non-CSI case in terms of the downlink rate. We remark that our goal is to characterize the performance of the system under the realistic assumptions made in Sec.~\ref{sec:introduction}. Thus, in the following study, all the system parameters discussed so far are set according to the analytic results derived in Sec.~\ref{sec:downlink}. Moreover, for simplicity in the representation, we let
$R=R_T (\alpha_T,\hat{\eta}^\star_T)$ and
$R=R_F (\alpha_F,\hat{\eta}^\star_F,\hat{\tau}^\star_F)$
be the downlink rate for TDD and FDD, respectively, when the analytic results are adopted. The ratio between these rates and their counterpart for the non-CSI case (i.e., $\frac{R}{R_{NC}}$, with $R_{NC}$ as in \eqref{Eq: Non-Rate}) is represented in Fig.~\ref{fig:ergodic}, for SNR$\in [0,30]$~dB.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{fig8.eps}
\caption{Ratio between the ergodic downlink rate for the CSI acquisition schemes and the non-CSI case.}
\label{fig:ergodic}
\end{figure}
Remarkably, both duplexing schemes clearly outperform the non-CSI approach in terms of downlink rate. This shows that, despite the penalties incurred to acquire the CSI, evident downlink rate enhancements are experienced by the AP, thanks to the presence of the CSI, however imperfect the latter might be.
The result in Fig.~\ref{fig:ergodic} is even more remarkable, considering that therein the two duplexing schemes always outperform the non-CSI approach, regardless of the antenna configuration and the SNR value. Furthermore, the largest advantage over the non-CSI performance is obtained in the low-to-mid SNR regime. In this regard, we first focus on the TDD case. In both cases, i.e., $L=3$ and $L=6$, $\frac{R}{R_{NC}}$ is a monotonically decreasing function of the SNR, confirming that the MFP performs better for low than for high SNR values \cite{conf:demig2008, TseWireless}. In particular, this shows that the availability of the CSI at the AP, albeit imperfect, is sufficient to achieve a much larger downlink rate as compared to the non-CSI approach. Furthermore, the performance for $L=6$ is strictly larger than for $L=3$, showing that, as in the case of traditional wireless communications, SWIPT can effectively exploit the transmit diversity gain delivered by a MISO system as $L$ grows. Interestingly, the same is true for the FDD scheme. The CSI acquisition procedure in this case is more complex and prone to a higher uncertainty, especially at low SNR. This impacts the behavior of $\frac{R}{R_{NC}}$, which presents a maximum at SNR$=10$~dB for both the considered antenna configurations. On one hand, for SNR$\leq10$~dB, the gain brought by the FDD scheme over the non-CSI approach is dominated by the power gain at the UT, brought by a more accurate beamformer design at the AP. On the other hand, the reduction of the multiplexing gain, due to the increasing impact that both the channel estimation and feedback phases have on the time available for information transfer as the quality of the CSI increases, determines the decreasing behavior of $\frac{R}{R_{NC}}$ for SNR$>10$~dB. Finally, we note that the difference at high SNR between the values of $\frac{R}{R_{NC}}$ for TDD and FDD is very small, but increases with $L$. In fact, when the SNR is high, the channel estimation/feedback phases are very short, thus the difference in the amount of time available for the information transfer in both cases is small. Nevertheless, a larger $L$ entails a larger $\tau_F$ (thus $\alpha_F$) and, in turn, increases the difference between the values of $\frac{R}{R_{NC}}$ for TDD and FDD at high SNR as well.
\subsection{Outage Probability}
We switch our focus to the analysis of the energy shortage and the data outage probability. A set of Monte-Carlo simulations is performed to obtain the numerically computed probabilities. Subsequently, we set the values of $\eta_T$, $\eta_F$, and $\tau_F$ according to Lemma \ref{Lem: TDD eta over all channels} and Lemma \ref{Lem: FDD eta over all channels} and compute the exact value of both metrics by means of the analytic results in Sec.~\ref{Sec: Outage probability}. At this stage, we only consider the case $L=3$, for reasons of space. In the previous subsection, we verified that the impact of a change in the number of antennas on the accuracy of the analytic results on the downlink rate is rather small. Accordingly, we conjecture that the accuracy of our results is similarly robust to a change in the number of antennas. For the sake of clarity, we let $p^{E,out}$ and $p^{D,out}$ be the energy shortage probability and the data outage probability when no energy shortage occurs, respectively. Furthermore, we let $R_{NC}=R_{T}=R_{F}=6$ (bit/s/Hz)\footnote{Referring to Sec.~\ref{sec:data outafe for non-CSI}, \ref{sec:data outafe for TDD}, and \ref{sec:data outafe for FDD}, we note that $R_{NC}$, $R_{T}$, and $R_{F}$ are specified values.} be the target rate for the considered system. Finally, we depict $p^{E,out}$ for SNR$\in[0,30]$~dB in Fig.~\ref{fig:non_energy_outage_L3}, Fig.~\ref{fig:TDD_energy_outage_L3}, and Fig.~\ref{fig:FDD_energy_outage_L3} and $p^{D,out}$ for SNR$\in[0,15]$~dB in Fig.~\ref{fig:all_data_outage_L3}. As shown in these figures, the numerical results perfectly match the analytic results derived in Sec.~\ref{Sec: Outage probability} for all three schemes, verifying the correctness of our derivations.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{fig9.eps}
\caption{Energy shortage probability, non-CSI and $L=3$ antennas.}
\label{fig:non_energy_outage_L3}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{fig10.eps}
\caption{Energy shortage probability, TDD and $L=3$ antennas.}
\label{fig:TDD_energy_outage_L3}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{fig11.eps}
\caption{Energy shortage probability, FDD and $L=3$ antennas.}
\label{fig:FDD_energy_outage_L3}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{fig12.eps}
\caption{Data outage probability when no energy shortage occurs, $L=3$ antennas.}
\label{fig:all_data_outage_L3}
\end{figure}
We start by noting that the energy shortage probability strongly depends on the considered parameters. Thus, a comparison between schemes would be of limited interest w.r.t. a comparison between the results obtained for each scheme, as the duration of the energy transfer phase varies. Accordingly, we restrict our focus to the latter aspect. As expected, the energy shortage probability is independent of the SNR, regardless of the value of $\alpha_N$. However, for both the TDD and the FDD scheme, the energy shortage probability decreases with the SNR, regardless of the value of $\alpha_T$ and $\alpha_F$. In these cases, a larger SNR reduces the optimal time for both devices to perform the operations intrinsic to the CSI acquisition and achieve accurate channel estimations. In other words, the channel estimation accuracy increases with the SNR value, thus the CSI acquisition requires less time. Therefore, the energy consumption at the UT is lower when the SNR is large. Now, if the duration of the energy transfer phase is doubled or increased tenfold, a reduction of the energy shortage probability by almost one to three orders of magnitude is observed, depending on the considered scheme. In practice, if the coherence time is long enough, even a rather small increase of the duration of the energy transfer phase can positively impact the energy shortage probability.
We now switch our focus to the data outage probability illustrated in Fig.~\ref{fig:all_data_outage_L3}. We start by noting that, to compute the numerical data outage probability in this case, the duration of the energy transfer phase for the three considered schemes, i.e., $\alpha_N$, $\alpha_T$, and $\alpha_F$, is chosen at each iteration of the simulations such that the harvested energy at the UT is sufficient to perform the receiver operations intrinsic to each scheme. The quantitative results of a study of this kind are of limited relevance per se, since they clearly depend on the selected target rate; their qualitative behavior is definitely more interesting. In this regard, the lowest data outage probability is experienced by the considered system in the case of the TDD scheme. This could have been expected in light of our findings on the downlink rate in the previous section, in which the TDD scheme resulted as the best performer out of the three considered cases.
\section{Conclusion}\label{Sec: Conclusions}
In this work, we have examined the efficacy of SWIPT in a MISO system consisting of an AP and a single UT. In particular, the latter is not equipped with any local power source, but instead harvests the necessary energy for its operations from the received RF signals. The performance of the considered system has been analyzed under realistic and practically relevant system assumptions. Three practical cases have been considered: a) absence of CSI at the AP, b) imperfect CSI at the AP acquired by means of pilot-based estimation (TDD), c) imperfect CSI at both the UT and the AP acquired by means of analog CSI feedback in the uplink (FDD). We have compared the considered scenarios by means of three performance metrics of interest, i.e., the ergodic downlink rate, the energy shortage probability, and the data outage probability. Accordingly, we have derived closed-form expressions for each metric, and for the ergodically optimal duration of both the WPT and the channel training/feedback phases, to maximize the downlink rate in all three scenarios. The accuracy of our derivations has been verified by an extensive numerical analysis. First, it is worth noting that TDD has consistently been the best performer for each considered metric, confirming the potential of this duplexing scheme for future advancements in modern networks. More specifically, concerning the downlink rate, our findings show that CSI knowledge at the AP is always beneficial for the information transfer in SWIPT systems, despite the resources devoted to the channel estimation/feedback procedures and the presence of estimation errors. In a follow-up of this work, we will study strategies to maximize the efficiency of the WPT when CSI is available at the AP prior to the WPT phase (or part of it), together with their impact on the energy shortage probability. An additional subject of future investigation will be the extension of the considered set-up to a multi-user scenario.
\appendices
\section{Proof of Lemma \ref{Lem: TDD eta over all channels}}
\label{Apx: TDD eta over all channels}
To evaluate \eqref{Eq: SecIII-TDD-rate optimal problem}, we use the law of iterated expectations, i.e.,
$\mathbb{E}_{\mathbf{h},\mathbf{\bar{w}}}\left[\cdot\right]=\mathbb{E}_{\mathbf{h}}\left[\mathbb{E}_{\mathbf{\bar{w}}}\left[\cdot|\mathbf{h}\right]\right]$. Furthermore, we neglect $LP_D$ in \eqref{Eq: SecIII-TDD-rate optimal problem} since, in practice, $P_D\ll P$ generally \cite{rpt:3gpp.36.814}. We proceed by first computing the following expression:
\begin{align}
\left(1-\eta_T-\frac{\eta_T LP_E}{\beta P\lVert\mathbf{h}\rVert^2}\right)\mathbb{E}_{\mathbf{\bar{w}}}\left[\log_2\left(1+\frac{P\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}\rvert^2}{N_0\lVert\mathbf{\hat{h}}\rVert^2}\right)\right],\label{Eq: App1-Optimize rate}
\end{align}
for a given channel realization $\mathbf{h}$.
In order to compute \eqref{Eq: App1-Optimize rate}, the following straightforward results can be derived:
\begin{align}
\lvert\mathbf{h}^\dag\hat{\mathbf{h}}\rvert^2=\frac{N_0\lVert\mathbf{h}\rVert^2}{2\eta_T T_CP_E}\Psi_1 \text{ and }
\lVert\mathbf{\hat{h}}\rVert^2=\frac{N_0}{2\eta_T T_CP_E}\Psi_2 \label{eqn:papa101},
\end{align}
where $\Psi_1\sim\chi_2^{'2}\left(\frac{2\eta_T T_CP_E\lVert\mathbf{h}\rVert^2}{N_0}\right)$ and $\Psi_2\sim\chi_{2L}^{'2}\left(\frac{2\eta_T T_CP_E\lVert\mathbf{h}\rVert^2}{N_0}\right)$. We break up the subsequent analysis into two cases, namely, the high SNR and low SNR cases.
First, we consider the analysis at high SNR. In this case, applying the approximation $\log_2(1+SNR)\approx\log_2 SNR$ when $SNR\gg 1$, and \eqref{eqn:papa101} to \eqref{Eq: App1-Optimize rate}, we can derive
\begin{align}
\hspace{-1.3em}\mathbb{E}_{\mathbf{\bar{w}}}\left[\log_2\left(1+\frac{P\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}\rvert^2}{N_0\lVert\mathbf{\hat{h}}\rVert^2}\right)\right]\approx\mathbb{E}_{\mathbf{\bar{w}}}\left[\log_2\left(\frac{P\lVert\mathbf{h}\rVert^2\Psi_1}{N_0\Psi_2}\right)\right].\label{Eq: App1-High SNR rate approximation-2}
\end{align}
Subsequently, using the Taylor series expansion of $\log_2\Psi_1$ and $\log_2\Psi_2$
at their respective mean values (i.e., $2+\lambda$ and $2L+\lambda$, where $\lambda=\frac{2\eta_T T_CP_E\lVert\mathbf{h}\rVert^2}{N_0}$), we have
\begin{align}
&\eqref{Eq: App1-High SNR rate approximation-2}= \log_2\frac{P\lVert\mathbf{h}\rVert^2}{N_0}+\log_2e \notag
\\ &\times\mathbb{E}_{\Psi_1,\Psi_2}\Bigg[\ln\left(2+\lambda\right)+\frac{\Psi_1-2-\lambda}{2+\lambda}-\frac{\left(\Psi_1-2-\lambda\right)^2}{2\left(2+\lambda\right)^2}+\cdots\notag
\\&-\ln\left(2L+\lambda\right)-\frac{\Psi_2-2L-\lambda}{2L+\lambda}+\frac{\left(\Psi_2-2L-\lambda\right)^2}{2\left(2L+\lambda\right)^2}-\cdots\Bigg]\label{Eq: TDD eta high SNR Taylor series-1}
\\&=\log_2\frac{P\lVert\mathbf{h}\rVert^2}{N_0}+\log_2e\Bigg(\ln\left(2+\lambda\right)-\frac{\left(2+2\lambda\right)}{\left(2+\lambda\right)^2}+\cdots\notag
\\&-\ln\left(2L+\lambda\right)+\frac{\left(2L+2\lambda\right)}{\left(2L+\lambda\right)^2}-\cdots\Bigg)\label{Eq: TDD eta high SNR Taylor series-1-1}
\\&\stackrel{(a)}{\approx}\log_2\frac{P\lVert\mathbf{h}\rVert^2\lambda}{N_0\left(2L+\lambda\right)}
=\log_2\frac{P\lVert\mathbf{h}\rVert^2}{N_0}-\log_2\left(1+\frac{2L}{\lambda}\right),\label{Eq: Apx2-Approximated rate at high SNR-2}
\end{align}
where $(a)$ in \eqref{Eq: Apx2-Approximated rate at high SNR-2} is derived by noting that $\lambda$ is large (at high SNR), and hence we can neglect the higher order fractional terms in \eqref{Eq: TDD eta high SNR Taylor series-1-1}.
Moreover, the $2$ in the $\ln(2+\lambda)$ term is neglected owing to $\lambda \gg 2$.
Further, we use \eqref{Eq: Apx2-Approximated rate at high SNR-2} in \eqref{Eq: App1-Optimize rate} and take the expectation over $\mathbf{h}$.
Moreover, since $L/\lambda$ is small at high SNR, we use the approximation, $\log_2\left(1+x \right)\approx x\log_2e$ when $x\approx0$, in the derivation. By some straightforward computations, we can rewrite \eqref{Eq: SecIII-TDD-rate optimal problem} as
\begin{align}
\eta_T^{\star}\approx\operatorname{argmax}_{\eta_T}
B_2-\eta_TB_1-\frac{N_0L\log_2e}{\eta_TT_CP_E\left(L-1\right)},\label{Eq: Apx2-Approximated rate at high SNR-4}
\end{align}
where
$B_1=\mathbb{E}_{\mathbf{h}}\left[\left(1+\frac{LP_E}{\beta P\lVert\mathbf{h}\rVert^2}
\right)\log_2\left(\frac{P\lVert\mathbf{h}\rVert^2}
{N_0}\right)\right]$, and $B_2$ is a constant value.
In order to derive $\eta_T^{\star}$,
we differentiate \eqref{Eq: Apx2-Approximated rate at high SNR-4} with respect to $\eta_T$ and set it equal to $0$. By some straightforward computations, we can derive the result \eqref{Eq: TDD_opt_high_SNR}.
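A quick numerical sanity check of the approximation in \eqref{Eq: Apx2-Approximated rate at high SNR-2} can be obtained by simulating the estimation noise directly for a fixed channel realization, since the model $\mathbf{\hat{h}}=\mathbf{h}+\mathbf{\bar{w}}$ with per-component noise variance $\frac{N_0}{\eta_T T_CP_E}$ reproduces the distributional facts in \eqref{eqn:papa101}. The Python sketch below (with illustrative values) compares the Monte-Carlo average with the closed-form approximation:
\begin{verbatim}
# Monte-Carlo check of the high-SNR rate approximation.
# Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
L, P, N0 = 3, 1.0, 1e-3
P_E, T_C, eta_T = 1e-2, 1000.0, 0.05
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)

var_w = N0 / (eta_T * T_C * P_E)  # estimation-noise variance
w = (rng.standard_normal((200_000, L))
     + 1j * rng.standard_normal((200_000, L))) * np.sqrt(var_w / 2)
hhat = h + w
snr = (P * abs(hhat @ h.conj()) ** 2
       / (N0 * np.sum(abs(hhat) ** 2, 1)))
lam = 2 * eta_T * T_C * P_E * np.sum(abs(h) ** 2) / N0
mc = np.mean(np.log2(1 + snr))
approx = (np.log2(P * np.sum(abs(h) ** 2) / N0)
          - np.log2(1 + 2 * L / lam))
print(mc, approx)  # the two values should be close
\end{verbatim}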
We now move to the analysis at low SNR. Before proceeding, we first note that $\mathbb{E}[X]=\mathbb{E}[\exp(\ln X)]$. Subsequently,
using the approximation, $\log_2\left(1+SNR \right)\approx SNR\log_2e$ when $SNR\approx0$, and Jensen's inequality, i.e., $\mathbb{E}[\exp(\ln X)]\geq\exp(\mathbb{E}[\ln X])$, we have
\begin{align}
\eqref{Eq: App1-Optimize rate}\geq\left(1-\eta_T-\frac{\eta_T LP_E}{\beta P\lVert\mathbf{h}\rVert^2}\right)\frac{P\log_2e}{N_0}&\notag
\\\times \exp\left(\mathbb{E}_{\mathbf{\bar{w}}}\left[\ln\left(\frac{\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}\rvert^2}{\lVert\mathbf{\hat{h}}\rVert^2}\right)\right]\right)&.\label{Eq: TDD eta low SNR Jensen's inequality}
\end{align}
Once again, we apply Taylor series expansion to $\mathbb{E}_{\mathbf{\bar{w}}}\left[\ln\left(\frac{\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}\rvert^2}{\lVert\mathbf{\hat{h}}\rVert^2}\right)\right]$, i.e., steps \eqref{Eq: App1-High SNR rate approximation-2}, \eqref{Eq: TDD eta high SNR Taylor series-1}, \eqref{Eq: TDD eta high SNR Taylor series-1-1}, and \eqref{Eq: Apx2-Approximated rate at high SNR-2}. In this case, since $\lambda$ is small at low SNR, we approximate the higher order terms, such as $\frac{2+2\lambda}{(2+\lambda)^2}$ in \eqref{Eq: TDD eta high SNR Taylor series-1-1}, by
a constant value $\kappa_1$. Using this, we can rewrite \eqref{Eq: TDD eta low SNR Jensen's inequality} as
\begin{align}
\frac{\kappa_1\left(1-\eta_T-\frac{\eta_T LP_E}{\beta P\lVert\mathbf{h}\rVert^2}\right)P\lVert\mathbf{h}\rVert^2\log_2e}{N_0}\times\frac{2+\lambda}{2L+\lambda}.\label{Eq: TDD eta low SNR constant fraction}
\end{align}
Finally, we take the expectation over $\mathbf{h}$. By using Jensen's inequality (similar approaches in \eqref{Eq: TDD eta low SNR Jensen's inequality}) and applying Taylor series expansion
for the logarithm term
at the mean value $\lVert\mathbf{h}\rVert^2$ (similar approaches in \eqref{Eq: TDD eta high SNR Taylor series-1}, \eqref{Eq: TDD eta high SNR Taylor series-1-1}, and \eqref{Eq: Apx2-Approximated rate at high SNR-2}), we rewrite \eqref{Eq: SecIII-TDD-rate optimal problem} as
\begin{align}
\eta_T^{\star}\approx\operatorname{argmax}_{\eta_T}
\frac{\kappa_2\left(1+\frac{\eta_TT_CP_EL}{N_0}\right)\left(L-\eta_TL-\frac{\eta_T LP_E}{\beta P}\right)}{1+\frac{\eta_TT_CP_E}{N_0}}\notag
\end{align}
where $\kappa_2$ is a constant.
In order to derive $\eta_T^{\star}$,
we differentiate the above formula
with respect to $\eta_T$ and set it equal to $0$. By some straightforward computations, we can derive the result \eqref{Eq: TDD_opt_low_SNR}.
\section{Proof of Lemma \ref{Lem: FDD eta over all channels}}\label{Apx: FDD eta over all channels}
To evaluate \eqref{Eq: SecIII-FDD-rate optimal problem}, we use approaches similar to those in \eqref{Eq: App1-Optimize rate}. Hence, we first compute
\begin{align}
\mathbb{E}_{\mathbf{\hat{w}}}\Bigg[\left(1-\tau_F-\frac{\tau_FP_F\lVert\mathbf{\hat{h}}_{UT}\rVert^2}{\beta P\lVert\mathbf{h}\rVert^2}-\eta_F\right)&\notag
\\\times\log_2\left(1+\frac{P\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}_{AP}\rvert^2}{N_0\lVert\mathbf{\hat{h}}_{AP}\rVert^2}\right)&\Bigg].\label{Eq: App2-Optimize rate}
\end{align}
To compute \eqref{Eq: App2-Optimize rate},
the following results can be derived (details omitted for lack of space):
\begin{align}
\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}_{AP}\rvert^2&=\frac{N_0L\lVert\mathbf{h}\rVert^2}{2T_C}\left(\frac{1}{\eta_F P}+\frac{1}{\tau_F P_F}\right)\Phi_1,\label{Eq: App2-FDD distribution-1}
\\\lVert\mathbf{\hat{h}}_{AP}\rVert^2&=\frac{N_0L}{2T_C}\left(\frac{1}{\eta_F P}+\frac{1}{\tau_F P_F}\right)\Phi_2,\label{Eq: App2-FDD distribution-2}
\\\lVert\mathbf{\hat{h}}_{UT}\rVert^2&=\frac{N_0L}{2\eta_FT_C P}\Phi_3,\label{Eq: App2-FDD distribution-3}
\end{align}
where $\Phi_1\sim\chi_{2}^{'2}\left(\frac{2T_C\lVert\mathbf{h}\rVert^2}{N_0L\left(\frac{1}{\eta_F P}+\frac{1}{\tau_F P_F}\right)}\right)$, $\Phi_2\sim\chi_{2L}^{'2}\left(\frac{2T_C\lVert\mathbf{h}\rVert^2}{N_0L\left(\frac{1}{\eta_F P}+\frac{1}{\tau_F P_F}\right)}\right)$, and $\Phi_3\sim\chi^{'2}_{2L}\left(\frac{2\eta_FT_C P\lVert\mathbf{h}\rVert^2}{N_0L}\right)$.
Before proceeding, we note that the pre-log term and the term inside the logarithm in \eqref{Eq: App2-Optimize rate} are correlated, due to the presence of $\mathbf{\hat{w}}_{UT}$ in both terms. However, at high SNR, the variance of $\mathbf{\hat{w}}_{UT}$ will be small. Thus, we assume that the pre-log term and the term inside the logarithm are approximately independent (and hence we can take the expectations of these two terms in \eqref{Eq: App2-Optimize rate} separately). For the term inside the logarithm, using \eqref{Eq: App2-FDD distribution-1}, \eqref{Eq: App2-FDD distribution-2}, and \eqref{Eq: App2-FDD distribution-3} and applying a Taylor series expansion (following steps similar to \eqref{Eq: TDD eta high SNR Taylor series-1}, \eqref{Eq: TDD eta high SNR Taylor series-1-1}, and \eqref{Eq: Apx2-Approximated rate at high SNR-2}), we obtain
\begin{align}
\mathbb{E}_{\mathbf{\hat{w}}}\left[\log_2\left(1+\frac{P\lvert\mathbf{h}^{\dag}\mathbf{\hat{h}}_{AP}\rvert^2}{N_0\lVert\mathbf{\hat{h}}_{AP}\rVert^2}\right)\right]
\approx
\log_2\frac{P\lVert\mathbf{h}\rVert^2\lambda_1}{N_0\left(2L+\lambda_1\right)},\label{Eq: FDD eta high SNR neglect fraction-2}
\end{align}
where $\lambda_1=\frac{2T_C\lVert\mathbf{h}\rVert^2}{N_0L\left(\frac{1}{\eta_F P}+\frac{1}{\tau_F P_F}\right)}$.
For the pre-log term, we have
\begin{align}
&\mathbb{E}_{\mathbf{\hat{w}}}\left[1-\tau_F-\frac{\tau_F P_F\lVert\mathbf{\hat{h}}_{UT}\rVert^2}{\beta P\lVert\mathbf{h}\rVert^2}-\eta_F\right]\approx1-\tau_F-\frac{\tau_F P_F}{\beta P}-\eta_F.
\label{Eq: FDD eta high SNR neglect fraction-1}
\end{align}
The approximation in \eqref{Eq: FDD eta high SNR neglect fraction-1} is derived using $\mathbb{E}_{\mathbf{\hat{w}}}[\lVert\mathbf{\hat{h}}_{UT}\rVert^2]
=\lVert\mathbf{h}\rVert^2+\frac{N_0L^2}{\eta_FT_C P}\approx\lVert\mathbf{h}\rVert^2$
(since at high SNR, $\frac{N_0L^2}{\eta_F PT_C}$ is small enough to be neglected).
Finally, we substitute \eqref{Eq: FDD eta high SNR neglect fraction-2} and \eqref{Eq: FDD eta high SNR neglect fraction-1} into \eqref{Eq: App2-Optimize rate} and take the expectation over $\mathbf{h}$.
Using the approximation $\log_2\left(1+x \right)\approx x\log_2e$ when $x\approx0$ and by some straightforward computations, we can rewrite \eqref{Eq: SecIII-FDD-rate optimal problem} as
\begin{align}
\left(\eta_F^{\star},\tau_F^{\star}\right)\approx\operatorname{argmax}_{\eta_F,\tau_F}\left(1-\tau_F-\frac{\tau_FP_F}{\beta P}-\eta_F\right)&\notag
\\\times\left(B_5-\frac{N_0L^2\log_2e}{T_C\left(L-1\right)}\left(\frac{1}{\eta_F P}+\frac{1}{\tau_F P_F}\right)\right)&,\label{Eq: Apx4-Objective function at high SNR-3}
\end{align}
where $B_5=\mathbb{E}_\mathbf{h}\left[\log_2\left(\frac{P\lVert\mathbf{h}\rVert^2}{N_0}\right)\right]$.
In order to find $\eta_F^{\star}$ and $\tau_F^{\star}$, we differentiate \eqref{Eq: Apx4-Objective function at high SNR-3} with respect to $\eta_F$ and $\tau_F$ and equate them to $0$. By some straightforward computations, we can derive the results \eqref{Eq: FDD_opt_high_SNR-1} and \eqref{Eq: FDD_opt_high_SNR-2}.
We now turn to the analysis at low SNR. Herein, we follow derivations similar to those in Appendix-\ref{Apx: TDD eta over all channels}. First, we use the same approach as in \eqref{Eq: TDD eta low SNR Jensen's inequality}.
Further, we apply \eqref{Eq: App2-FDD distribution-1}, \eqref{Eq: App2-FDD distribution-2}, and \eqref{Eq: App2-FDD distribution-3} and use the Taylor series expansion (similar to the approaches in \eqref{Eq: TDD eta low SNR constant fraction}).
Thus, we can derive
\begin{align}
\eqref{Eq: App2-Optimize rate} &\geq
\left(\left(1-\eta_F-\tau_F\right)\beta P\lVert\mathbf{h}\rVert^2-\frac{\tau_F P_FN_0L}{2\eta_FT_CP}\left(2L+\lambda_2\right)\right)\notag
\\&\hspace{11em}\times\frac{\kappa_3 \log_2e\left(2+\lambda_1\right)}{N_0\beta \left(2L+\lambda_1\right)},
\label{Eq: FDD eta low SNR Taylor series-2}
\end{align}
where $\lambda_2=\frac{2\eta_F T_CP\lVert\mathbf{h}\rVert^2}{N_0L}$, and $\kappa_3$ is a constant value.
Finally, as before we take the expectation over $\mathbf{h}$. By using Jensen's inequality (similar approaches in \eqref{Eq: TDD eta low SNR Jensen's inequality}) and applying Taylor series expansion of the logarithm term at the mean value $\lVert\mathbf{h}\rVert^2$ (similar approaches in \eqref{Eq: TDD eta high SNR Taylor series-1}, \eqref{Eq: TDD eta high SNR Taylor series-1-1}, and \eqref{Eq: Apx2-Approximated rate at high SNR-2}), we can rewrite the expected value of \eqref{Eq: FDD eta low SNR Taylor series-2} over $\mathbf{h}$ as
\begin{align}
&\frac{\kappa_4\left(1-\eta_F-\tau_F-\frac{\tau_F P_F}{\beta P}
-\frac{\tau_F P_FN_0L}{\eta_FT_C\beta P^2}\right)}{\left(L+\frac{T_C}{N_0\left(\frac{1}{\eta_F P}+\frac{1}{\tau_F P_F}\right)}\right)\left(1+\frac{T_C}{N_0\left(\frac{1}{\eta_F P}+\frac{1}{\tau_F P_F}\right)}\right)^{-1}},\label{Eq: Apx4-Low SNR approximation-4}
\end{align}
where $\kappa_4$ is a constant value. Now, we differentiate \eqref{Eq: Apx4-Low SNR approximation-4} with respect to $\eta_F$ and $\tau_F$ and equate them to $0$. Using some straightforward computations, we obtain the result \eqref{Eq: FDD_opt_low_SNR-1}.
When $x\approx 0$, $\sqrt{1+x}\approx 1+\frac{x}{2}$. By utilizing this approximation in \eqref{Eq: FDD_opt_low_SNR-1} since $N_0$ is large at low SNR, we can derive $\tau^{\star}_F\approx\frac{\left(\eta^{\star}_F\right)^2T_C\beta P^2}{P_FN_0L}$.
Substituting the latter approximated $\tau^{\star}_F$ into the differentiated equation $\frac{\partial\eqref{Eq: Apx4-Low SNR approximation-4}}{\partial \eta_F}=0$, we can also derive the result \eqref{Eq: FDD_opt_low_SNR-2}.
\section{Proof of Lemma \ref{Lem: TDD data outage probability}}\label{Apx: TDD data outage probability}
We now proceed with the proof. In equation \eqref{Eq: TDD-Data outage}, we have two random variables, i.e., $\left|\frac{\mathbf{h}^{\dag}\mathbf{\hat{h}}}{\lVert\mathbf{\hat{h}}\rVert}\right|^2$ and $\lVert\mathbf{h}\rVert^2$.
To evaluate \eqref{Eq: TDD-Data outage}, we first express
$\lVert\mathbf{h}\rVert^2$ as the sum of $\left|\frac{\mathbf{h}^{\dag}\mathbf{\hat{h}}}{\lVert\mathbf{\hat{h}}\rVert}\right|^2$ and another independent random variable (see the steps below). This step simplifies the evaluation as shown in the following.
In order to do so, first we project the row vector $\mathbf{h}^{\dag}$ onto an orthonormal set of vectors $\Omega = \left\{\frac{\mathbf{\hat{h}}^{\dag}}{\lVert\mathbf{\hat{h}}\rVert},\mathbf{g}_2^{\dag},\cdots,\mathbf{g}_L^{\dag}\right\}$,
where $\mathbf{g}_2^{\dag},\cdots,\mathbf{g}_L^{\dag}$ are chosen arbitrarily
such that the vectors in $\Omega$ span the $L$-dimensional complex space.
Recall that $\mathbf{h}^{\dag}|_{\mathbf{\hat{h}}}\sim\mathcal{C}\mathcal{N}\left(\frac{1}{1+b_6}\mathbf{\hat{h}}^{\dag},\frac{1}{b_5}\mathbf{I}_L\right)$.
Since the distribution of a Complex Gaussian random vector (with distribution $\mathcal{C}\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)$) projected onto an orthonormal set remains
unchanged \cite{TseWireless}, we can conclude that
$ \frac{\mathbf{h}^{\dag}\mathbf{\hat{h}}}{\lVert\mathbf{\hat{h}}\rVert}\Big|_{\mathbf{\hat{h}}}\sim\mathcal{C}\mathcal{N}\left(\frac{\lVert\mathbf{\hat{h}}\rVert}{\left(1+b_6\right)},b_5^{-1}\right)$ and
$\mathbf{h}^{\dag}\mathbf{g}_l\big|_{\mathbf{\hat{h}}}\sim\mathcal{C}\mathcal{N}\left(0,b_5^{-1}\right),
\ l = 2,\dots,L.$
Additionally, the $\mathbf{h}^{\dag}\mathbf{g}_l\big|_{\mathbf{\hat{h}}}$, $l = 2,\dots,L$, are independent random variables.
Thus we can conclude the following:
\begin{align}
\left|\frac{\mathbf{h}^{\dag}\mathbf{\hat{h}}}{\lVert\mathbf{\hat{h}}\rVert}\right|^2\Bigg|_{\mathbf{\hat{h}}}&=\frac{1}{2b_5}\Theta_1,\label{Eq: App3-chi-square-1}
\\\left(\left|\mathbf{h}^{\dag}\mathbf{g}_2\right|^2+\cdots+\left|\mathbf{h}^{\dag}\mathbf{g}_L\right|^2\right)\Big|_{\mathbf{\hat{h}}}&=\frac{1}{2b_5}\Theta_2,\label{Eq: App3-chi-square-2}
\end{align}
where $\Theta_1\sim\chi^{'2}_2\left(\frac{2}{b_5b_6^2}\lVert\mathbf{\hat{h}}\rVert^2\right)$, and $\Theta_2\sim\chi^{2}_{2L-2}$. Furthermore, $\Theta_1$ and $\Theta_2$ are independent and $\Theta_1+\Theta_2=2b_5\lVert\mathbf{h}\rVert^2$.
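The decomposition above admits a direct numerical sanity check: building an orthonormal basis whose first element is $\mathbf{\hat{h}}/\lVert\mathbf{\hat{h}}\rVert$ (e.g., via a QR factorization) and projecting $\mathbf{h}$ onto it must reproduce $\Theta_1+\Theta_2=2b_5\lVert\mathbf{h}\rVert^2$ exactly, since the basis is unitary. A minimal Python sketch, with illustrative values of $b_5$ and $b_6$:
\begin{verbatim}
# Sanity check of the projection step.
# b5, b6 values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
L, b5, b6 = 3, 51.0, 0.02
hhat = ((rng.standard_normal(L) + 1j * rng.standard_normal(L))
        * np.sqrt((1 + b6) / 2))
# h | hhat ~ CN(hhat / (1 + b6), (1 / b5) I_L)
h = (hhat / (1 + b6)
     + (rng.standard_normal(L) + 1j * rng.standard_normal(L))
     * np.sqrt(1 / (2 * b5)))

# First basis vector is hhat/||hhat|| (up to a phase); the rest
# complete it to a unitary basis of C^L.
M = np.column_stack([hhat, rng.standard_normal((L, L - 1))])
Q, _ = np.linalg.qr(M)
proj = Q.conj().T @ h
theta1 = 2 * b5 * abs(proj[0]) ** 2
theta2 = 2 * b5 * np.sum(abs(proj[1:]) ** 2)
assert np.isclose(theta1 + theta2, 2 * b5 * np.sum(abs(h) ** 2))
\end{verbatim}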
Now, applying the same approach as in \eqref{Eq: App4-Outage probability-1} to \eqref{Eq: TDD-Data outage} and using \eqref{Eq: App3-chi-square-1} and \eqref{Eq: App3-chi-square-2}, we can derive the following:
$\mathcal{P}_{T}^{D, out}=\mathbb{E}_\mathbf{\hat{h}}\left[\Pr\left\{
\Theta_1<2b_5b_3,\Theta_1+\Theta_2\geq 2b_5b_4
\right\}\right].$
Let us focus on the relationship between $b_3$ and $b_4$ and consider two possible cases, i.e., $b_3<b_4$ and $b_3\geq b_4.$ We start from the former. In this case, denoting the PDF of $\Theta_i$ as $f_{\Theta_i}\left(\theta_i \right)$ for $i\in \{1,2\}$,
we have,
\begin{align}
&\mathcal{P}_{T}^{D, out}\notag
\\&=\mathbb{E}_\mathbf{\hat{h}}\left[\int_{\theta_1 = 0}^{2b_5b_3}\frac{\Gamma\left(L-1,b_5 b_4-\frac{\theta_1}{2}\right)I_0\left(\sqrt{\frac{2\lVert\mathbf{\hat{h}}\rVert^2\theta_1}{b_5b_6^2}}\right)}{2\Gamma\left(L-1\right)e^{\left(\frac{\theta_1}{2}+\frac{\lVert\mathbf{\hat{h}}\rVert^2}{b_5b_6^2}\right)}}
\mathrm{d}\theta_1\right].\label{Eq: App3-TDD data outage a<b-1}
\end{align}
Since $\mathbf{\hat{h}}\sim\mathcal{C}\mathcal{N}\left(\mathbf{0},\left(1+b_6\right)\mathbf{I}_L\right)$, it follows that $\lVert\mathbf{\hat{h}}\rVert^2=\frac{\left(1+b_6\right)}{2}\Theta_3$, where $\Theta_3\sim\chi^2_{2L}$. Substituting the PDF of $\lVert\mathbf{\hat{h}}\rVert^2$ into \eqref{Eq: App3-TDD data outage a<b-1}, we obtain our result \eqref{Eq: TDD data outage probability closed form 1}.
In the second case $b_3\geq b_4$,
following the same approaches as in \eqref{Eq: App3-TDD data outage a<b-1}, we obtain our result \eqref{Eq: TDD data outage probability closed form 2} and conclude the proof.
\section{Proof of Lemma \ref{Lem: FDD data outage probability}}\label{Apx: FDD data outage probability}
Adopting an approach similar to that in \eqref{Eq: App4-Outage probability-1} and denoting the conditional distribution of $\mathbf{\hat{h}}_{UT}$ given $\mathbf{\hat{h}}_{AP}$ as
$f\left(\mathbf{\hat{h}}_{UT}\big|\mathbf{\hat{h}}_{AP}\right)$, the analytic expression for the outage probability, i.e., $\mathcal{P}_{F}^{D, out}$ in \eqref{Eq: FDD-Data outage-1}, can be written as
\begin{align}
&\mathbb{E}_{\mathbf{\hat{h}}_{AP}}\bigg[\int\Pr\Bigg\{
\left|\frac{\mathbf{h}^{\dag}\mathbf{\hat{h}}_{AP}}{\lVert\mathbf{\hat{h}}_{AP}\rVert}\right|^2<b_7,\lVert\mathbf{h}\rVert^2\geq b_8\notag
\\&\quad\quad+b_9\lVert\mathbf{\hat{h}}_{UT}\rVert^2\Big|\mathbf{\hat{h}}_{AP},\mathbf{\hat{h}}_{UT}\Big\}
f\left(\mathbf{\hat{h}}_{UT}\big|\mathbf{\hat{h}}_{AP}\right)\mathrm{d}\mathbf{\hat{h}}_{UT}\Big].\label{Eq: Proof of data outage in FDD-1}
\end{align}
To compute \eqref{Eq: Proof of data outage in FDD-1}, we use the projection approach adopted in Appendix-\ref{Apx: TDD data outage probability}. First, we project the row vector $\mathbf{h}^{\dag}$ onto an orthonormal set of vectors $\left\{\frac{\mathbf{\hat{h}}^{\dag}_{AP}}{\lVert\mathbf{\hat{h}}_{AP}\rVert},\mathbf{g}^{\dag}_2,\cdots,\mathbf{g}_L^{\dag}\right\}$, chosen such that
these vectors span the $L$-dimensional complex space.
Since $\mathbf{h}|_{\mathbf{\hat{h}}_{AP},\mathbf{\hat{h}}_{UT}}\sim\mathcal{C}\mathcal{N}\left(\frac{1}{1+\sigma_3}\mathbf{\hat{h}}_{UT},\frac{1}{\sigma_2}\mathbf{I}_L\right)$, following the same approach as in Appendix-\ref{Apx: TDD data outage probability}, we obtain
\begin{align}
\left|\frac{\mathbf{h}^{\dag}\mathbf{\hat{h}}_{AP}}{\lVert\mathbf{\hat{h}}_{AP}\rVert}\right|^2\Bigg|_{\mathbf{\hat{h}}_{AP},\mathbf{\hat{h}}_{UT}}&=\frac{1}{2\sigma_2}\Theta_5,\label{Eq: App-5-chi square-1}
\\\left(\left|\mathbf{h}^{\dag}\mathbf{g}_2\right|^2+\cdots+\left|\mathbf{h}^{\dag}\mathbf{g}_L\right|^2\right)\Big|_{\mathbf{\hat{h}}_{AP},\mathbf{\hat{h}}_{UT}}&=\frac{1}{2\sigma_2}\Theta_6,\label{Eq: App-5-chi square-2}
\end{align}
where $\Theta_5\sim\chi^{'2}_2\left(\frac{2}{\sigma_2\sigma_3^2}\left|\frac{\mathbf{\hat{h}}_{UT}^{\dag}\mathbf{\hat{h}}_{AP}}{\lVert\mathbf{\hat{h}}_{AP}\rVert}\right|^2\right)$ and $\Theta_6\sim\chi^{'2}_{2L-2}\left(\frac{2}{\sigma_2\sigma_3^2}\left(\lVert\mathbf{\hat{h}}_{UT}\rVert^2-\left|\frac{\mathbf{\hat{h}}_{UT}^{\dag}\mathbf{\hat{h}}_{AP}}{\lVert\mathbf{\hat{h}}_{AP}\rVert}\right|^2\right)\right)$. Moreover, $\Theta_5$ and $\Theta_6$ are independent and $\Theta_5+\Theta_6=2\sigma_2\lVert\mathbf{h}\rVert^2$. Using \eqref{Eq: App-5-chi square-1} and \eqref{Eq: App-5-chi square-2}, we rewrite \eqref{Eq: Proof of data outage in FDD-1} as
\begin{align}
&\mathbb{E}_{\mathbf{\hat{h}}_{AP}}\bigg[\int\Pr\Big\{
\Theta_5+\Theta_6\geq 2\sigma_2 \left(b_8+b_9\lVert\mathbf{\hat{h}}_{UT}\rVert^2\right),\notag
\\&\hspace{7em}\Theta_5<2\sigma_2 b_7
\Big\}
f\left(\mathbf{\hat{h}}_{UT}\big|\mathbf{\hat{h}}_{AP}\right)\mathrm{d}\mathbf{\hat{h}}_{UT}\Big] \label{Eq: Proof of data outage in FDD-2}.
\end{align}
To compute the double integration over $\theta_5$ and $\theta_6$ in \eqref{Eq: Proof of data outage in FDD-2}, we have two cases, i.e., $b_7<b_8+b_9\lVert\mathbf{\hat{h}}_{UT}\rVert^2$ and $b_7\geq b_8+b_9\lVert\mathbf{\hat{h}}_{UT}\rVert^2$. In the first case, the probability term in \eqref{Eq: Proof of data outage in FDD-2} can be computed as
\begin{align}
&\int_{\theta_5=0}^{2\sigma_2 b_7}\frac{1}{2}Q_{L-1}\left(\sqrt{\frac{2}{\sigma_2\sigma_3^2}\left(\lVert\mathbf{\hat{h}}_{UT}\rVert^2-\left|\frac{\mathbf{\hat{h}}_{UT}^{\dag}\mathbf{\hat{h}}_{AP}}{\lVert\mathbf{\hat{h}}_{AP}\rVert}\right|^2\right)},\right.\notag
\\&\hspace{0.25em}\left.\sqrt{2\sigma_2\left(b_8+b_9\lVert\mathbf{\hat{h}}_{UT}\rVert^2\right)-\theta_5}\right)
I_0\left(\sqrt{\frac{2\theta_5}{\sigma_2\sigma_3^2}\left|\frac{\mathbf{\hat{h}}_{UT}^{\dag}\mathbf{\hat{h}}_{AP}}{\lVert\mathbf{\hat{h}}_{AP}\rVert}\right|^2}\right)\notag
\\&\quad\times e^{-\left(\frac{\theta_5}{2}+\frac{1}{\sigma_2\sigma_3^2}\left|\frac{\mathbf{\hat{h}}_{UT}^{\dag}\mathbf{\hat{h}}_{AP}}{\lVert\mathbf{\hat{h}}_{AP}\rVert}\right|^2\right)}\mathrm{d}\theta_5.\label{Eq: Proof of data outage in FDD-3}
\end{align}
Since $\mathbf{\hat{h}}_{UT}|_{\mathbf{\hat{h}}_{AP}}\sim\mathcal{C}\mathcal{N}\left(\frac{\sigma_5}{\sigma_4}\mathbf{\hat{h}}_{AP},\sigma_5\mathbf{I}_L\right)$, using the projection approach once again, we can derive
\begin{align}
\left|\frac{\mathbf{\hat{h}}_{UT}^{\dag}\mathbf{\hat{h}}_{AP}}{\lVert\mathbf{\hat{h}}_{AP}\rVert}\right|^2\Bigg|_{\mathbf{\hat{h}}_{AP}}&=\frac{\sigma_5}{2}\Theta_7,\label{Eq: App-5-chi square-3}
\\\left(\lVert\mathbf{\hat{h}}_{UT}\rVert^2-\left|\frac{\mathbf{\hat{h}}_{UT}^{\dag}\mathbf{\hat{h}}_{AP}}{\lVert\mathbf{\hat{h}}_{AP}\rVert}\right|^2\right)\Bigg|_{\mathbf{\hat{h}}_{AP}}&=\frac{\sigma_5}{2}\Theta_8,\label{Eq: App-5-chi square-4}
\end{align}
where $\Theta_7\sim\chi^{'2}_2\left(\frac{2\sigma_5}{\sigma_4^2}\lVert\mathbf{\hat{h}}_{AP}\rVert^2\right)$ and $\Theta_8\sim\chi^2_{2L-2}$. Furthermore, $\Theta_7$ and $\Theta_8$ are independent and $\Theta_7+\Theta_8=\frac{2}{\sigma_5}\lVert\mathbf{\hat{h}}_{UT}\rVert^2$.
Lastly, we note that since $b_7<b_8+b_9\lVert\mathbf{\hat{h}}_{UT}\rVert^2$, we have
$b_7<b_8+b_9 \frac{\left(\Theta_7+\Theta_8\right)\sigma_5}{2}$
and hence $\Theta_7+\Theta_8>\frac{2\left(b_7-b_8\right)}{b_9\sigma_5}.$
Now, applying \eqref{Eq: Proof of data outage in FDD-3}, \eqref{Eq: App-5-chi square-3}, and \eqref{Eq: App-5-chi square-4} to \eqref{Eq: Proof of data outage in FDD-2}, the integral of \eqref{Eq: Proof of data outage in FDD-2} can be computed as
\begin{align}
&\hspace{-0.5em}\int_{\theta_7+\theta_8>\frac{2\left(b_7-b_8\right)}{b_9\sigma_5}}\int_{\theta_5 = 0}^{2\sigma_2 b_7}
I_0\left(\sqrt{\frac{2\sigma_5\theta_7\lVert\mathbf{\hat{h}}_{AP}\rVert^2}{\sigma_4^2}}\right)\frac{\theta_8^{L-2}}{2^{L+1}}\notag
\\&\hspace{-0.5em}\times Q_{L-1}\left(\sqrt{\frac{\theta_8\sigma_5}{\sigma_2\sigma_3^2}},
\sqrt{2\sigma_2\left(b_8+\frac{b_9\left(\theta_7+\theta_8\right)\sigma_5}{2}\right)-\theta_5}\right)\notag
\\&\hspace{-0.5em}\times\frac{I_0\left(\sqrt{\frac{\theta_5\theta_7\sigma_5}{\sigma_2\sigma_3^2}}\right)}{\Gamma\left(L-1\right)e^{\left(\frac{\theta_5+\theta_7+\theta_8}{2}+\frac{\theta_7\sigma_5}{2\sigma_2\sigma_3^2}+\frac{\sigma_5\lVert\mathbf{\hat{h}}_{AP}\rVert^2}{\sigma_4^2}\right)}}\mathrm{d}\theta_5\mathrm{d}\theta_7\mathrm{d}\theta_8.\label{Eq: Proof of data outage in FDD-4}
\end{align}
We note that, since $\mathbf{\hat{h}}_{AP}\sim\mathcal{C}\mathcal{N}\left(\mathbf{0},\left(1+\sigma_3+\sigma_4\right)\mathbf{I}_L\right)$, we have that $\lVert\mathbf{\hat{h}}_{AP}\rVert^2=\frac{1+\sigma_3+\sigma_4}{2}\Theta_9$, where $\Theta_9\sim\chi^2_{2L}$. Using this fact and \eqref{Eq: Proof of data outage in FDD-4} in \eqref{Eq: Proof of data outage in FDD-2}, we obtain \eqref{Eq: FDD-Data outage closed form-1} in Lemma \ref{Lem: FDD data outage probability}.
We now consider the second aforementioned case, i.e., $b_7\geq b_8+b_9\lVert\mathbf{\hat{h}}_{UT}\rVert^2$. Following similar steps as in the first case, we obtain \eqref{Eq: FDD-Data outage closed form-2} and \eqref{Eq: FDD-Data outage closed form-3} in Lemma \ref{Lem: FDD data outage probability} (the detailed steps are omitted for reasons of space). At this stage, the outage probability in \eqref{Eq: FDD-Data outage-1} is obtained as the sum of \eqref{Eq: FDD-Data outage closed form-1}, \eqref{Eq: FDD-Data outage closed form-2}, and \eqref{Eq: FDD-Data outage closed form-3}, and this concludes the proof.
\bibliographystyle{IEEEtran}
Solid surfaces coated with thin lubricating films are very common and desirable in many commercial and industrial applications, such as semiconductor device fabrication, friction reduction, heat transfer, and protection, to name a few.\cite{erdemir2005review} In most of these applications, the stability of the lubricating films determines the performance of the device. For thin lubricating films (thickness $\ll$ capillary length), thin film stability can be defined in terms of the total free energy (per unit area) of the system due to the interfacial (short-range) and intermolecular (long-range) interactions. Under unstable conditions, thin liquid films may rupture or dewet via different mechanisms depending on the properties of the system. Many research groups have investigated the dewetting of thin liquid films to learn about interfacial boundary conditions, the properties of liquids and solids, and their interactions.\cite{higgins2002timescale, reiter2005residual, damman2007relaxation, peschka2019signatures} Different combinations of short-range (spreading coefficient) and long-range (Hamaker constant) interactions are used to predict the stable or unstable behavior of thin liquid films.\cite{sharma1993relationship,sharma1993equilibrium} It has also been shown that stable thin liquid films can be destabilized using different external stimuli, viz. temperature-induced Marangoni flow, mechanical vibrations, and electric or magnetic forces \cite{mitov1998convection, warner2002dewetting, alvarez2008surface, sterman2017rayleigh, kataoka1999patterning, schaeffer2000electrically, surenjav2009manipulation}. However, if the stability condition can be controlled reversibly, one can further investigate the same properties in a reversible manner.\cite{severin2012reversible}
It has been shown that an external electric field can be used very effectively to manipulate the effective interfacial energy.\cite{quilliet2002investigation} Due to the accumulation of electric charges at a dielectric interface, the interfacial free energy changes, which subsequently modifies the wetting properties of the interface. Following this idea, many researchers have studied the electric field induced dewetting of thin liquid films, theoretically and experimentally.\cite{herminghaus1999dynamical, schaffer2001electrohydrodynamic, verma2005electric, staicu2006electrowetting, priest2006controlled, sahoo2019reversible} The advantage of studying thin film instability using an electric field is the ability to further investigate the behavior of the instability pattern upon removing the electric field. This would help us understand the dynamic changes in the interfacial and material properties during the full cycles (voltage ON and voltage OFF). A few recent studies have demonstrated reversible wetting and dewetting in confined geometries (nanopores, microfluidics) using electrowetting.\cite{powell2011electric, li2019ionic} John et al. demonstrated wettability-ratchet-driven fluid transport by switching ON and OFF the applied voltage.\cite{john2007liquid, john2008ratchet} It has also been found that the processes of dewetting and spreading do not follow exactly the same dynamics. During dewetting, the rim of a liquid film recedes at a constant speed with a fixed dynamic contact angle, and quickly relaxes to a spherical drop towards the end. On the other hand, during spreading, the contact angle of a drop decreases continuously while the drop maintains its spherical shape.\cite{edwards2016not, edwards2020viscous}
Thin lubricating films of \textit{Nepenthes} pitcher plant inspired slippery surfaces are among the most suitable candidates for studying the stability of thin liquid films.\cite{lafuma2011slippery, wong2011bioinspired, rao2021highly, epstein2012liquid, wang2016bioinspired, liu2020robust, stamatopoulos2017exceptional} It has been shown that on hydrophilic solid surfaces, thin lubricating oil films dewet under aqueous drops, which subsequently brings the drops into direct contact with the solid surface, where they get pinned.\cite{lafuma2011slippery, carlson2013short} The dewetting mechanism and dynamics dominantly depend on the surface and interfacial energies of the various components of the system, i.e. the solid surface, the lubricating fluid, and the test liquid.\cite{daniel2017oleoplaning, sharma2019sink, bhatt2022dewetting} In this article, we demonstrate the electric field controlled reversible dewetting of thin lubricating films underneath aqueous drops on stable slippery surfaces over multiple cycles.
\section{Results and discussion}
Figure \ref{Fig.1}(A) shows the schematic of the experimental system, where an aqueous drop is deposited on a thin lubricating film (PDMS) coated hydrophobic solid surface, and a potential difference is applied between the drop and the underlying conducting silicon substrate. The experimental system corresponds to a stable slippery surface, where the lubricating PDMS film provides a frictionless slippery interface to the top aqueous drops. The stable lubricating film can be made unstable by applying an external electric field across the dielectric lubricating film.
\begin{figure*}[t!]
\centering
\includegraphics[width=1\textwidth]{Fig.1.pdf}
\caption{(A) Schematic of the experimental setup. (B) Second derivative of the excess free energy $\Delta G^{\prime \prime}(h)$ showing the unstable behavior of thin PDMS films underneath a drop at different applied voltages. Inset shows the same plot for 0 V and 0.5 V, confirming that thin PDMS films become unstable even at 0.5 V. (C) Fluorescence micrographs of dewetting of thin PDMS films for the forward (after applying 40 V) and reverse (after switching off the voltage) cycles. The scale bar for all the micrographs is 100$\,\upmu$m. (D) The intensity profiles of dewetting dynamics showing the change in the amplitude of the surface waves for the forward (voltage ON) and reverse (voltage OFF) cycles.}
\label{Fig.1}
\end{figure*}
Theoretically, the stability of a thin liquid film can be defined in terms of the total excess free energy of the film, which is the sum of the van der Waals interaction, the acid-base interaction and the Born repulsion.\cite{verma2005electric} In the presence of an external electric field, an additional term due to the electrostatic interaction is added to the total free energy. The electrostatic term is destabilizing in nature and decreases the total free energy of the system. For the present experimental system, Si/SiO$_{2}$/OTS/PDMS/drop, the total excess free energy can be defined as,
\begin{equation}
\begin{split}
\label{eq:1}
\Delta G(h) = -\frac{A_{\mathrm{OTS/PDMS/drop}}}{12 \pi h^{2}}+\frac{A_{\mathrm{OTS/PDMS/drop}}-A_{\mathrm{SiO_2/PDMS/drop}}}{12 \pi (h+d_{\mathrm{OTS}})^{2}}\\
+\frac{A_{\mathrm{SiO_2/PDMS/drop}}-A_{\mathrm{Si/PDMS/drop}}}{12 \pi (h+d_{\mathrm{OTS}}+d_{\mathrm{SiO_2}})^{2}} + S_{\mathrm{P}}\,\exp\!\Big(\frac{d_{\mathrm{min}}-h}{l}\Big)\\
-\frac{1}{2}\frac{\epsilon_\mathrm{0}\epsilon_\mathrm{SiO_2}}{d_\mathrm{SiO_2}}\frac{1}{\Big(1+\frac{d_\mathrm{OTS}\epsilon_\mathrm{SiO_2}}{d_\mathrm{SiO_2}\epsilon_\mathrm{OTS}}+\frac{h\epsilon_\mathrm{SiO_2}}{d_\mathrm{SiO_2}\epsilon_\mathrm{PDMS}}\Big)}V^2 + \frac{c_\mathrm{OTS}}{h^8}
\end{split}
\end{equation}
where $h$, $d_\mathrm{OTS}$, $d_\mathrm{SiO_{2}}$, $d_\mathrm{min}$ and $l$ represent the thickness of the PDMS film, the OTS monolayer, the dielectric SiO$_2$ layer, the atomic cut-off distance and the polymer correlation length, respectively. $A$ is the effective Hamaker constant for a three-layer system, $S_{\mathrm{P}}$ is the polar component of the spreading coefficient (which determines the acid/base interaction), $c$ is the strength of the short-range interaction and $V$ is the applied $\mathit{rms}$ voltage. Figure \ref{Fig.1}(B) shows the plot of the second derivative of the total excess free energy $(\Delta G^{\prime \prime}(h))$, confirming that thin PDMS films become unstable with applied voltage. With an increase in the applied voltage, the magnitude of $\Delta G^{\prime \prime}(h)$ also increases, resulting in faster dewetting of the thin PDMS films. The inset of Figure \ref{Fig.1}(B) shows the $\Delta G^{\prime \prime}(h)$ curve for 0 V and 0.5 V, which indicates that even such a small voltage can make PDMS films unstable. The total excess free energy (Eq. \ref{eq:1}) also suggests that upon removing the applied voltage, a thin uniform film will be the stable configuration.
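As a rough numerical illustration of this stability criterion, $\Delta G^{\prime\prime}(h)$ can be evaluated directly from Eq. \ref{eq:1} by finite differences. The following sketch (Python) does this; note that all material parameters below are assumed, order-of-magnitude placeholders, not the values fitted to our actual system.
\begin{verbatim}
import numpy as np

# Representative (assumed) parameters, SI units -- placeholders only
A_OPD, A_SPD, A_SiPD = -1e-20, -2e-20, -5e-20   # Hamaker constants (J)
d_OTS, d_SiO2 = 2e-9, 1e-6                      # layer thicknesses (m)
S_P, d_min, l = -1e-3, 1.58e-10, 0.6e-9         # J/m^2, m, m
c_OTS = 1e-81                                   # short-range strength
eps0 = 8.854e-12
eps_SiO2, eps_OTS, eps_PDMS = 3.9, 2.5, 2.7

def dG(h, V):
    """Total excess free energy per unit area, Eq. (1)."""
    vdw = (-A_OPD/(12*np.pi*h**2)
           + (A_OPD - A_SPD)/(12*np.pi*(h + d_OTS)**2)
           + (A_SPD - A_SiPD)/(12*np.pi*(h + d_OTS + d_SiO2)**2))
    ab = S_P*np.exp((d_min - h)/l)
    es = -0.5*eps0*eps_SiO2/d_SiO2*V**2/(
          1 + d_OTS*eps_SiO2/(d_SiO2*eps_OTS)
            + h*eps_SiO2/(d_SiO2*eps_PDMS))
    return vdw + ab + es + c_OTS/h**8

def d2G(h, V, dh=1e-11):
    """Second derivative w.r.t. h by central differences."""
    return (dG(h+dh, V) - 2*dG(h, V) + dG(h-dh, V))/dh**2

h = np.linspace(50e-9, 600e-9, 400)
for V in (0.0, 0.5, 10.0, 40.0):
    print(V, "V  min dG'':", d2G(h, V).min())  # negative => unstable
\end{verbatim}
With such placeholder values, $\Delta G^{\prime\prime}$ stays positive over the whole thickness range at 0 V and develops a negative region already at 0.5 V, illustrating the trend of Fig. \ref{Fig.1}(B).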
Figure \ref{Fig.1}(C), top row, shows fluorescence images of the dewetting of a 500 nm thin PDMS film underneath an aqueous drop at 40 V (forward cycle). The uniform intensity of the first image (at 0 s) confirms the stable nature of PDMS films at 0 V. After applying the voltage, surface capillary waves appear within milliseconds in random directions, similar to the perturbations in spinodal dewetting due to the disjoining pressure \cite{verma2005electric}. The amplitude of these surface waves grows in time, leading to the final dewetting pattern of smaller droplets after about 16 s. The dewetted droplets are found to be stable as long as the applied voltage is present. Upon reducing the applied voltage to 0 V (reverse cycle), the contact angle of the dewetted droplets starts decreasing and they start coalescing with neighboring droplets within a few seconds. The contact angle of the coalesced droplets continues to decrease as they rewet to form a uniform continuous film, similar to the starting one. Here one should note that at 40 V, the total time corresponding to the dewetting in the forward cycle is about 16 s, which increases to about 25 min for complete rewetting in the reverse cycle. Figure \ref{Fig.1}(D) shows the scanline profiles of the surface of a PDMS film at the same times as in the fluorescence images for the forward and reverse voltage cycles. It is clear from the figure that in the forward cycle, the perturbations on the smooth surface of a PDMS film grow, leading to hole nucleation after about 2 s, and the final dewetting is complete after about 16 s. On the other hand, in the reverse cycle, the final dewetted droplets spread, coalesce and rewet to form a continuous film within 16 s (from the beginning of the reverse cycle), but the film surface still contains perturbations. Subsequently, it takes about 25 min for the film to become completely smooth with no residual surface perturbations. Due to the high viscosity of the PDMS, surface relaxation takes such a long time \cite{edwards2016not, edwards2020viscous}. This is also important because we would like to have identical initial conditions, i.e. a completely smooth PDMS film surface, at the start of different forward cycles. It is also observed that the total time of the reverse cycle (rewetting time) decreases with increasing applied potential. For example, at 10 V the dewetting time is 40 min and the corresponding rewetting time is about 3 h, whereas at 60 V the film dewets within milliseconds and the rewetting time reduces to 6 min.
After reducing the voltage to 0 V and the formation of a uniform PDMS film at the end of the first reverse cycle, another dewetting-rewetting cycle was performed to test the repeatability of the system. It was observed that in the second dewetting-rewetting cycle, the PDMS film dewets and rewets in a manner identical (with respect to the morphology and timescales) to the first cycle. Complete reversibility was also observed in the third dewetting-rewetting cycle. This tells us that thin PDMS films underneath aqueous drops on stable slippery surfaces can reversibly undergo multiple dewetting-rewetting cycles without losing any of their characteristic features, as the entire process is solely controlled by the applied electric potential.
Since the dewetting is investigated using optical fluorescence microscopy, the temporal evolution of the dewetting PDMS films is analyzed in terms of intensity profiles, rather than the height profiles obtained by atomic force microscopy of dewetting films \cite{seemann2001gaining, khare2007dewetting}. During dewetting, the capillary waves present on the surface of a PDMS film under an aqueous drop can be analyzed using linear stability analysis. Considering the lubricating PDMS fluid as a Newtonian liquid, the Navier-Stokes equation can be simplified using the long-wave approximation. It has been shown that the electric field induced instability is of the long-wave type, where the wavelength of the instability is assumed large compared to the thickness of the film \cite{schaeffer2000electrically}. Since the slip length of PDMS on an OTS grafted surface is about 25 nm, the system can be considered to be in the weak or moderate slip regime, and the linear stability analysis is similar to that for the no-slip regime \cite{scarratt2019slippery, rauscher2008spinodal}. The (spatiotemporal) thin-film equation for the lateral liquid flow in the no-slip regime is given by
\begin{equation}
\label{eq:2}
3 \eta \left(\frac{\partial h(x,t)}{\partial t}\right) -\nabla \cdot[h^{3}\nabla P]=0
\end{equation}
where $\eta$ is the viscosity of PDMS, $h$ is the local film thickness and $P$ is the total pressure across the film due to curvature, disjoining pressure and Maxwell's stress tensor\cite{verma2005electric}. Using linear stability analysis, Eq. \ref{eq:2} can be linearized with the ansatz $h = h_\mathrm{0} + \epsilon e^{(\omega t-ikx)}$, where $k$ is the wavenumber, $\omega$ is the growth rate, and $\epsilon$ ($\ll h_\mathrm{0}$) is the amplitude of the surface perturbations. The dispersion relation for a capillary wave under the long-wave approximation then simplifies to
\begin{equation}
\label{eq:3}
\omega = -\frac{h_\mathrm{0}^{3} k^{2}}{3 \eta }\left(\Delta G^{\prime\prime}(h=h_\mathrm{0})+k^{2}\gamma_\mathrm{PD}\right)
\end{equation}
where $\gamma_\mathrm{PD}$ represents the interfacial tension between PDMS and the aqueous drop. Since the dispersion relation lives in Fourier space, the Fourier spectrum (power spectral density) of the intensity profile of a dewetting PDMS film at 40 V is shown in Fig. \ref{Fig.2}(A). Since the linear stability analysis is performed in one dimension, a smaller area of 100$\times$100 $\upmu \mathrm{m}^2$ was chosen for the analysis to avoid the overlapping of multiple capillary waves oriented in different directions. In the beginning, at 0 s, the black line in Fig. \ref{Fig.2}(A) confirms that there is no preferred wavelength present in the system.
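For orientation, the dispersion relation of Eq. \ref{eq:3} can also be scanned numerically to locate the fastest growing mode. In the sketch below, the values of $h_0$, $\eta$, $\gamma_\mathrm{PD}$ and $\Delta G^{\prime\prime}(h_0)$ are assumed, order-of-magnitude placeholders (chosen so that $k_{\mathrm{max}}$ comes out near the observed $0.53\;\upmu\mathrm{m}^{-1}$), not fitted quantities.
\begin{verbatim}
import numpy as np

eta = 5.0         # PDMS viscosity (Pa s), ~5000 cSt at ~1 g/cc (assumed)
h0 = 400e-9       # squeezed film thickness (m)
gamma_PD = 0.04   # PDMS/aqueous-drop interfacial tension (N/m), assumed
d2G0 = -2e10      # Delta G''(h0) at the working voltage (J/m^4), assumed

k = np.linspace(1e3, 2e6, 20001)                       # wavenumber (1/m)
omega = -(h0**3*k**2)/(3*eta)*(d2G0 + gamma_PD*k**2)   # Eq. (3)

k_max = k[np.argmax(omega)]
print("numeric  k_max:", k_max*1e-6, "1/um")
print("analytic k_max:", np.sqrt(-d2G0/(2*gamma_PD))*1e-6, "1/um")
print("growth time   :", 1/omega.max(), "s (order of magnitude only)")
\end{verbatim}
The numerically located maximum coincides with the analytic expression $k_{\mathrm{max}}=\sqrt{-\Delta G^{\prime\prime}/2\gamma_\mathrm{PD}}$ that follows from Eq. \ref{eq:3}; the growth time comes out within one to two orders of magnitude of the measured $\uptau$ for these placeholder parameters.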
\begin{figure*}[t]
\centering
\includegraphics[width=1.00\textwidth]{Fig.2.pdf}
\caption{(A) Power spectral density for forward cycle 1 at 40 V showing the temporal evolution and mode selection, resulting in the dominant mode $k_{\mathrm{max}}=0.53\;\upmu\mathrm{m}^{-1}$ along with a few other modes at $k$ values of $0.06\;\upmu\mathrm{m}^{-1}, 0.33\;\upmu\mathrm{m}^{-1}\;\&\;0.73\;\upmu\mathrm{m}^{-1}$. (B) Power spectral density at the initial and final stages for three consecutive cycles of the dewetting process showing the presence of $k_{\mathrm{max}}$ and the other modes. (C) Semi-logarithmic plot of the amplitude of the fastest growing mode $k_{\mathrm{max}}$ for the three cycles of the dewetting process.}
\label{Fig.2}
\end{figure*}
After about 800 ms, surface capillary waves with all wavelengths greater than the critical wavelength (corresponding to $\omega$ = 0) appear and start growing, as shown from the magenta to the red lines in Fig. \ref{Fig.2}(A). Out of these waves, one grows at the fastest rate (corresponding to the fastest growing mode) and tends to suppress the other waves; in the present case this corresponds to $k_{\mathrm{max}}=0.53\;\upmu\mathrm{m}^{-1}$. In addition to $k_{\mathrm{max}}$, a few other waves at $k$ values of $0.06\;\upmu\mathrm{m}^{-1}, 0.33\;\upmu\mathrm{m}^{-1}\;\&\;0.73\;\upmu\mathrm{m}^{-1}$ also grow, at relatively slower rates, due to the overlapping of multiple surface waves in different directions (cf. Fig. \ref{Fig.1}(C)). The final red line corresponds to the time when the waves touch the substrate and holes start nucleating. After this, the dewetted holes grow, resulting in the final dewetted droplets. At the end of the dewetting, the applied voltage was reduced to 0 by switching off the power supply. As a result, the dewetted droplets rewet and coalesce, resulting in a homogeneous film at a later stage. Upon applying the same potential of 40 V again, a similar temporal evolution of the surface waves and mode selection of the dominant wave was observed, as shown in Fig. \ref{Fig.2}(B) for three different cycles. The power spectral density in Fig. \ref{Fig.2}(B) shows the red, blue and magenta curves for cycles 1, 2 and 3, respectively, at the initial and final stages. All characteristic modes, i.e. $k_{\mathrm{max}}$ and the other modes, were found to be present in all three cycles. This further confirms the fully reversible character of the dewetting of thin PDMS films underneath aqueous drops. The mode corresponding to $k_{\mathrm{max}}= 0.53\;\upmu\mathrm{m}^{-1}$ grows at the fastest rate in all three cycles, and its amplitude is plotted on a semi-logarithmic scale in Fig. \ref{Fig.2}(C). It is clear that the amplitude of the fastest growing mode $k_{\mathrm{max}}$ grows exponentially in all three cycles with the time constant $\uptau=3(\pm 0.02)\;\mathrm{s}$. Since the wavenumber of the fastest growing mode remains the same over the three dewetting cycles, the mode selection is an inherent property of the system and does not depend on the number of cycles.
To verify that the dewetting process follows the linear stability analysis, we can also calculate the wavelength of the dominant or fastest growing mode from the dispersion relation (Eq. \ref{eq:3}), using $\mathrm{d} \omega/\mathrm{d} k=0$,
\begin{equation}
\label{eq:4}
\lambda_\mathrm{m} = 2 \pi \left(-\frac{\Delta G^{\prime\prime}(h=h_\mathrm{0})}{2\gamma_\mathrm{PD}}\right)^{-1/2}
\end{equation}
which depends on $\Delta G^{\prime\prime}(h=h_\mathrm{0})$, which in turn depends on the applied voltage (cf. Eq. \ref{eq:1}). Figure \ref{Fig.3}(A) shows the wavelength of the fastest growing mode ($\lambda_\mathrm{m}$) as a function of the PDMS film thickness for different applied voltages. The plot shows that for a fixed PDMS film thickness, $\lambda_\mathrm{m}$ decreases with increasing voltage. Figure \ref{Fig.3}(A) also shows that PDMS films with thickness smaller than $h_{\mathrm{min}}$ are stable, and hence a film of such thickness will remain between the dewetted droplets after the dewetting completes.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.76\textwidth]{Fig.3.pdf}
\caption{(A) Plot of the wavelength of the fastest growing mode with film thickness using Eq. \ref{eq:4} for voltages from 0.5 V to 60 V. (B) Variation of the wavelength of the fastest growing mode with the applied voltage. Black data points represent experimental values and the solid red line represents the theoretical curve from Eq. \ref{eq:4}. Inset shows the rescaled wavelength ($\lambda_\mathrm{m}/\lambda_\mathrm{0}$) versus the electric field ($E_\mathrm{P}/E_\mathrm{0}$) fitted with the theoretical curve from Eq. \ref{eq:5}. (C) Rescaled plot of the dewetting time constant $\uptau/\uptau_\mathrm{0}$ versus the applied electric field ($E_\mathrm{P}/E_\mathrm{0}$) fitted with the theoretical curve from Eq. \ref{eq:7}. (D) The nearest neighbor distance of the dewetted PDMS droplets measured from the fluorescence micrographs for different voltages.}
\label{Fig.3}
\end{figure*}
Experimental values of $\lambda_\mathrm{m}$ are obtained from the corresponding Fourier spectra and plotted together with the corresponding theoretical curve (for $h = 400\;\mathrm{nm}$) from Eq. \ref{eq:4} in Fig. \ref{Fig.3}(B). The excellent agreement between the two indicates that the thickness of the PDMS films is reduced from 500 nm (as prepared after spin coating) to 400($\pm 100$) nm due to hydrodynamic squeezing after depositing the aqueous drops \cite{bhatt2022dewetting, daniel2017oleoplaning}, and these squeezed films finally undergo dewetting. At voltages smaller than 10 V, the dewetting time becomes very large and the value of $\lambda_\mathrm{m}$ diverges, so performing dewetting dynamics experiments becomes challenging. Also, the thickness of the stable PDMS film between the dewetted droplets increases at smaller voltages, which adds to the difficulty in analyzing the dewetting dynamics. To generalize the dependence of the fastest growing wavelength ($\lambda_\mathrm{m}$) on the applied electric field, Eq. \ref{eq:4} can be written in non-dimensional form as \cite{schaffer2001electrohydrodynamic},
\begin{equation}
\label{eq:5}
\frac{\lambda_\mathrm{m}}{\lambda_\mathrm{0}}=2\pi\left(\frac{E_\mathrm{p}}{E_\mathrm{0}}\right)^{-3/2}
\end{equation}
where $\lambda_\mathrm{0} = \varepsilon_\mathrm{0}\varepsilon_\mathrm{SiO_2}^3\varepsilon_\mathrm{OTS}^3\varepsilon_\mathrm{PDMS}\mathrm{V}^2/2\gamma_\mathrm{PD}$ is a characteristic wavelength and $E_0 = \mathrm{V}/\lambda_\mathrm{0}$. $E_{p}$ is the net electric field across the entire dielectric stack and can be written as,
\begin{equation}
\label{eq:6}
E_{p}=\frac{\mathrm{V}}{d_\mathrm{PDMS}\varepsilon_\mathrm{OTS}\varepsilon_\mathrm{SiO_2}+d_\mathrm{OTS}\varepsilon_\mathrm{PDMS}\varepsilon_\mathrm{SiO_2}+d_\mathrm{SiO_2}\varepsilon_\mathrm{PDMS}\varepsilon_\mathrm{OTS}}
\end{equation}
For the present experimental system, $E_{p} \sim 10^6$ V/m. The inset in Fig. \ref{Fig.3}(B) shows the rescaled plot of Eq. \ref{eq:5}, with the solid red line corresponding to the theoretical fit with slope -1.5. This again confirms the excellent agreement between experiment and theory.
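The order of magnitude quoted above is easy to reproduce from Eq. \ref{eq:6}; the following minimal sketch uses assumed, representative layer thicknesses and relative permittivities.
\begin{verbatim}
eps_PDMS, eps_OTS, eps_SiO2 = 2.7, 2.5, 3.9   # rel. permittivities (assumed)
d_PDMS, d_OTS, d_SiO2 = 400e-9, 2e-9, 1e-6    # thicknesses (m)

def E_p(V):
    """Net field across the dielectric stack, Eq. (6)."""
    return V/(d_PDMS*eps_OTS*eps_SiO2 + d_OTS*eps_PDMS*eps_SiO2
              + d_SiO2*eps_PDMS*eps_OTS)

for V in (10, 20, 40, 60):
    print(V, "V ->", round(E_p(V)/1e6, 2), "MV/m")
# Eq. (5): lambda_m/lambda_0 = 2*pi*(E_p/E_0)**(-3/2), slope -3/2 on log-log
\end{verbatim}
For these values, $E_p$ ranges from about 1 to 6 MV/m between 10 V and 60 V, consistent with the $E_{p} \sim 10^6$ V/m quoted above.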
We also calculated the time constant ($\uptau$) of the instability from the exponential growth of the amplitude of the surface waves.\cite{khare2007dewetting} Figure \ref{Fig.3}(C) shows the logarithmic plot of the rescaled time constant ($\uptau / \uptau_0$) against the rescaled electric field ($E_P / E_0$). The solid red line represents the theoretical curve obtained from the dispersion relation
\begin{equation}
\label{eq:7}
\frac{\uptau}{\uptau_\mathrm{0}}=\pi^{4}\left(\frac{E_{p}}{E_\mathrm{0}}\right)^{-6}
\end{equation}
The linear stability analysis is thus able to capture and explain the entire dewetting process of thin PDMS films underneath aqueous drops at different voltages over different cycles. Therefore, as mentioned earlier, the electric field controlled dewetting of 400 nm thick PDMS films is qualitatively similar to the spinodal dewetting of few-nanometer-thick films controlled only by the van der Waals interaction.
The nearest neighbor distance ($\langle d \rangle$) of the PDMS droplets after complete dewetting is plotted for different voltages in Fig. \ref{Fig.3}(D). It is clear that the nearest neighbor distance decreases with increasing voltage, as also predicted by Eq. \ref{eq:4}. Qualitatively, $\langle d \rangle$ follows the same behavior as $\lambda_\mathrm{m}$ shown in Fig. \ref{Fig.3}(B), but quantitatively $\langle d \rangle$ is always slightly larger than $\lambda_\mathrm{m}$. This is primarily due to the presence of other waves in addition to $k_\mathrm{max}$. As a result, after complete dewetting, separations corresponding to $k_\mathrm{max}$ and to the other modes are both present in the system, hence $\langle d \rangle$ differs slightly from $\lambda_\mathrm{m}$. Also, with increasing voltage, the dewetted droplets become smaller and hence more numerous.
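The nearest neighbor distances of Fig. \ref{Fig.3}(D) can be extracted from the thresholded fluorescence micrographs along the following lines (a sketch; the threshold and pixel size below are placeholders, not our actual calibration).
\begin{verbatim}
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def mean_nn_distance(binary_img, pixel_size_um):
    """Mean nearest-neighbor distance of droplet centroids
    in a binarized fluorescence micrograph."""
    labels, n = ndimage.label(binary_img)      # connected droplets
    if n < 2:
        return np.nan
    cents = np.array(ndimage.center_of_mass(binary_img, labels,
                                            range(1, n + 1)))
    d, _ = cKDTree(cents).query(cents, k=2)    # d[:, 0] = 0 (self)
    return d[:, 1].mean() * pixel_size_um

# usage (placeholder threshold and pixel size):
# d_mean = mean_nn_distance(img > 0.5 * img.max(), 0.65)
\end{verbatim}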
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.5\textwidth]{Fig.4.pdf}
\caption{(A) Binary images of the final dewetted droplets for cycles 1, 2 and 3 at 40 V, used to analyze the correlation of the droplet positions over different cycles. Different colors are used only to differentiate between different cycles (scale bar 100 $\upmu$m). (B) Overlap area fraction ($\phi$) of the dewetted droplets between cycle 1, cycle 2 and cycle 3 for different voltages. The red line corresponds to the mean overlap area fraction of about 16$\%$.}
\label{Fig.4}
\end{figure*}
Although the dewetting of thin PDMS films underneath aqueous drops with an external electric field is fully reversible, it is interesting to analyze the memory effect in the dewetting process over multiple dewetting cycles. Here the memory effect refers to the formation of dewetted PDMS droplets at the same positions over different dewetting cycles. As discussed earlier, before the beginning of a new dewetting cycle, it was made sure that the dewetted PDMS film of the previous cycle became completely uniform. The memory effect in the dewetting process was analyzed by comparing the positions of the final dewetted droplets over multiple dewetting cycles. During image analysis, we overlapped the binary images of the dewetted PDMS droplets for different cycles to calculate the area fraction of the overlap region of the dewetted droplets ($\phi$). Fig. \ref{Fig.4}(A) shows the overlap of the binary images of the dewetted PDMS droplets at 40 V for cycles 1, 2 \& 3. It is clear from the figure that the dewetted droplets are not located at identical positions over different dewetting cycles, hence the overlap area fraction is quite small. Figure \ref{Fig.4}(B) shows the percentage overlap area fraction at different voltages for the three consecutive cycles. We see that the overlap area fraction between two cycles for different applied voltages is always in the range of 7$\%$ to 26$\%$, with a mean around 16$\%$. Since the dewetting of the thin PDMS films proceeds via surface capillary waves oriented in random directions, the final dewetted droplets are not expected to show any positional correlation over different cycles, hence the minimal memory effect. If a dewetting process were dominated by heterogeneous nucleation due to surface defects, the dewetted droplets would show a large overlap area fraction, indicating a considerable memory effect in the system.
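A minimal sketch of the overlap analysis is given below; normalizing the intersection area by the mean covered area of the two cycles is one plausible convention for $\phi$, assumed here for illustration.
\begin{verbatim}
import numpy as np

def overlap_fraction(mask_a, mask_b):
    """Overlap area fraction of dewetted droplets between two cycles.
    mask_a, mask_b: boolean droplet masks of the same field of view.
    Intersection normalized by the mean covered area (assumed convention)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return inter / (0.5 * (mask_a.sum() + mask_b.sum()))

# e.g. phi = overlap_fraction(img1 > t1, img2 > t2)  # ~0.07-0.26, Fig. 4(B)
\end{verbatim}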
\section{Conclusion}
To summarize, the present approach of reversibly controlling the dewetting of thin lubricating films underneath aqueous drops with an external electric field can be used to investigate dewetting and rewetting processes in detail over multiple cycles. The study shows that stable thin lubricating films underneath aqueous drops on a hydrophobic solid surface dewet upon applying an AC voltage across the films. The dewetting process is identical in nature to spinodal dewetting: surface capillary waves with wavelengths greater than the critical wavelength grow exponentially with time, leading to a final pattern of multiple dewetted droplets. The wavelength of the fastest growing mode corresponds to the final droplet separation. We observed that even a small voltage of 0.5 V is able to destabilize the lubricating films, resulting in a smaller number of larger dewetted droplets. At larger voltages, the magnitude of the destabilizing electrostatic contribution to the free energy increases, hence the films dewet faster into a larger number of smaller droplets. The experimentally obtained droplet separation and dewetting time follow a universal scaling behavior obtained from the linear stability analysis. Upon reducing the applied voltage to 0, the dewetted droplets coalesce and form a uniform film again. The total dewetting time in a forward cycle is found to be much smaller than the rewetting time in the reverse cycle. This asymmetry is due to the slow surface relaxation of the high viscosity lubricating fluid during the rewetting process. The final dewetting pattern, i.e. the distribution of dewetted droplets, does not show any memory effect over multiple dewetting cycles.
\section{Materials and methods}
\textbf{Materials}
p-type silicon (Si) wafers ($<$100$>$, resistivity 0.001-0.005 ohm-cm) with a 1 $\upmu$m thermal oxide layer (University Wafers Inc.) were used as the substrate, which also serves as the bottom electrode. To prepare stable slippery surfaces, the surface energy of the Si wafers was modified by grafting a self-assembled monolayer (SAM) of octadecyltrichlorosilane (OTS) (Sigma-Aldrich) molecules. Polydimethylsiloxane (PDMS) (viscosity 5000 cSt, surface tension 21.2 mN/m, Dow Corning) was used as the lubricating fluid for all experiments. Nile red dye ($\mathrm{C}_{20}\mathrm{H}_{18}\mathrm{N}_{2}\mathrm{O}_{2}$, Sigma-Aldrich) was added to the PDMS for fluorescence imaging. A mixture of 80$\%$ glycerol (Fisher Scientific) and 20$\%$ DI water (with 0.1 M NaCl) was used as the aqueous liquid (conductivity 0.11 S/m). NaCl was added to enhance the conductivity of the aqueous liquid. The aqueous liquid is hygroscopically stable under our experimental conditions. A copper wire (diameter 70 $\upmu\mathrm{m}$) was connected to the bottom substrate using silver paste (Fisher Scientific), and a platinum wire (diameter 250 $\upmu\mathrm{m}$) was used as the top electrode.
\textbf{Method}
The Si wafers were cleaned using ethanol, acetone and toluene in an ultrasonic bath, followed by plasma cleaning (Harrick Plasma) with O$_2$ plasma for 5 min. The cleaned substrates were then immersed in a 0.2 V/V$\%$ OTS solution in toluene. After taking them out, the substrates were rinsed thoroughly with toluene to remove excess non-grafted OTS molecules. Subsequently, the substrates were heated at 90$^\circ$C for 30 min. To prepare the lubricating films, PDMS was diluted in n-heptane (containing Nile red dye at 0.0015 W/V$\%$) at a 4 W/V$\%$ ratio and spin-coated for 100 s at 2000 RPM with a 10 s acceleration ramp. This resulted in a lubricating film thickness of $500\;(\pm\;10)$ nm.
\textbf{Experimental setup}
The thickness of the lubricating films was measured using an optical profiler (F-20, KLA USA). An aqueous drop (glycerol-water mixture) of 0.5 ml volume was deposited on the lubricated substrate and left for 15 min. An AC voltage of 1 kHz frequency was taken from a function generator (SG1610C, Aplab India) and amplified using a high voltage amplifier (T-50, Elbatech Italy). A digital oscilloscope (GDS-1062, Gwinstek India) was used to measure the frequency and voltage of the input AC signal. A fluorescence microscope (BX-51, Olympus Japan) equipped with a color CMOS camera (10 fps, $1024\;\mathrm{pixel}\;\times\;798\;\mathrm{pixel}$) was used to observe the dewetting dynamics of the lubricating films under the aqueous drops.
\textbf{Image analysis}
The open-source software packages Gwyddion and ImageJ were used to analyze the fluorescence images of the dewetting lubricating films. Intensity profiles, droplet counts, area fractions and power spectral densities were calculated using these packages.
\section{Introduction}
Cosmological observations indicate that the very early universe was approximately homogeneous and isotropic. However, the only primordial observables we have had access to so far are the background cosmology and the two-point function of scalar perturbations. Focusing on the observed isotropy, it is interesting to ponder whether the universe could secretly be anisotropic---in terms of other observables, such as higher-point correlation functions---but featuring an accidental isotropy for the observables we have detected so far. By `accidental' we mean something precise: that isotropy of those observables is a so-called accidental symmetry, akin to baryon number conservation in the standard model of electroweak interactions. That is, an approximate symmetry that is enforced by the fundamental symmetries of the theory to some low order in a perturbative expansion.
Icosahedral inflation is a concrete implementation of this idea \cite{icosahedralinflation}.
Icosahedral inflation can be thought of as inflation driven by a peculiar solid with icosahedral symmetry, which is a discrete subgroup of 3D rotations ($SO(3)$). In principle, the background cosmological evolution and all correlation functions for perturbations must be invariant under such discrete rotations, but not necessarily under generic continuous rotations. However, icosahedral rotations are so `dense' (in a colloquial sense) in $SO(3)$, that the background evolution and the scalar two-point function at long distances happen to be accidentally isotropic \cite{icosahedralinflation}. Beyond these two observables, full isotropy is lost, and one can check explicitly that already the scalar three-point function and the tensor two-point function are generically anisotropic. In particular, the scalar three-point function can be maximally anisotropic \cite{icosahedralinflation}, i.e., can have vanishing overlap with all isotropic templates used in data analyses, and the tensor spectrum can have nonzero mixed correlators between the two helicities
\cite{tensortensor}.
Here, we show that the mixed scalar-tensor two-point function is also expected to be nonzero in icosahedral inflation. Such a mixed correlator vanishes to lowest order in the derivative expansion. However, it is generically there once higher derivative corrections are taken into account. This makes it suppressed within the regime of validity of the derivative expansion, which is the relevant perturbative expansion for an effective field theory like ours. As a result, it is much smaller than the scalar spectrum. Still, since the tensor spectrum is also suppressed compared to the scalar one, there is a consistent choice of parameters that makes the scalar-tensor mixing more important than the tensor spectrum itself. Schematically,
\begin{equation}
\frac{\langle \zeta \gamma \rangle}{\langle \zeta \zeta \rangle} \sim \Delta c^2_{\zeta \gamma} \; ,
\qquad
\frac{\langle \gamma \gamma \rangle}{\langle \zeta \zeta \rangle} \sim \epsilon c_L^5 \; ,
\end{equation}
where $\Delta c^2_{\zeta \gamma}$ is a small dimensionless mixing parameter, $\epsilon = -\dot H/H^2$ is the usual slow-roll parameter, and $c_L$ is the propagation speed of scalar perturbations---which at short distances just reduce to longitudinal phonons, hence the `$L$'. One sees immediately that for $\epsilon c_L^5 \ll \Delta c^2_{\zeta \gamma}$, the mixed scalar-tensor correlator is bigger than the tensor spectrum.
\section{Icosahedral inflation}
Icosahedral inflation \cite{icosahedralinflation} is a variant of solid inflation \cite{solidinflation}. Apart from gravity, the degrees of freedom are
a triplet of scalar fields $\phi^I(x), I=1,2,3$, obeying shift symmetries and internal icosahedral rotation symmetries,
\begin{equation} \label{internal}
\phi^I \rightarrow \phi^I+a^I, \;\;\; \phi^I \rightarrow D^I {}_J\phi^J
\end{equation}
where the $a^I$'s are constant shifts and $D$ is any element of the icosahedral group. To lowest order in derivatives, the basic building block for the action is the matrix
\begin{equation}
B^{IJ} = \partial_{\mu}\phi^I\partial^{\mu}\phi^J \; ,
\end{equation}
and, upon including gravity, the action reads
\begin{equation}
S_0 = \int d^4x \sqrt{-g} \Big[ \sfrac{1}{2}M_P^2 R + F \big( B^{IJ} \big) \Big] \; ,
\end{equation}
where $F$ is a generic function invariant under icosahedral rotations acting on the $I,J$ indices.
It can be checked \cite{icosahedralinflation, solidinflation} that such an action admits FRW solutions for the metric, with the scalar fields taking background values
\begin{equation} \label{background}
\langle \phi^I \rangle = x^I \; ,
\end{equation}
where the $x^I$'s are the usual FRW comoving coordinates. Moreover, such a solution describes an inflationary universe with near exponential expansion if one demands that the action above further enjoy an approximate internal dilation symmetry, $\phi^I (x) \to \lambda \, \phi^I(x)$ \cite{solidinflation}. (The slow-roll parameter $\epsilon$ can be thought of as a small breaking parameter for such an approximate symmetry.)
Considering all the spacetime and internal symmetries at our disposal, the FRW metric is invariant under spatial translations and rotations, and, trivially, under all transformations in \eqref{internal}. On the other hand, the scalar background configurations \eqref{background} are invariant under {\em (i)} the combined action of spatial translations and the internal shifts of \eqref{internal}, and {\em (ii)} the combined action of spatial icosahedral rotations and the internal icosahedral rotations of \eqref{internal}. So, overall, all background fields are invariant under {\em (i)} and {\em (ii)}, which, following standard spontaneous symmetry breaking (SSB) nomenclature, make up the unbroken subgroup, which has the algebra of 3D translations and icosahedral rotations.
When we introduce gravitational and matter perturbations,
\begin{equation}
g_{\mu\nu} = g^{\rm FRW}_{\mu\nu} +h_{\mu\nu}\; , \qquad \phi^I = \langle \phi^I \rangle + \pi^I \; ,
\end{equation}
their action will be manifestly invariant only under the unbroken subgroup. In particular, we can stop differentiating between spatial and internal indices, since they transform in the same way under the unbroken subgroup.
As usual, the broken symmetries are not lost---they are non-linearly realized on the perturbations---but we will not need them for the computations in this paper.
We refer the reader to the original papers \cite{icosahedralinflation, tensortensor, solidinflation} for more details about the general framework and the explicit construction of the model.
\section{The mixed scalar-tensor two-point function} \label{perturbative}
In solid inflation, cosmological perturbations can be classified in terms of tensors ($\gamma_{ij}$), vectors/transverse phonons ($\vec \pi_T$), and scalars/longitudinal phonons ($\pi_L$) \cite{solidinflation}.
At the two-derivative level, after solving the constraints one finds the quadratic action
\begin{equation}
S_{(2)} = S_{\gamma}+S_{L}+S_{T} \; ,
\end{equation}
with \cite{solidinflation}
\begin{align}
S_\gamma &= \sfrac14 {M_{\rm Pl}^2} \int dt\,d^3x \,a^3\Big[\sfrac12 \dot{\gamma}_{ij}^2 -\sfrac{1}{2 a^2} \big(\partial_m \gamma_{ij}\big)^2 +2\dot{H}c_T^2 \, \gamma_{ij}^2 \Big] \label{tensors}\\
S_{T} &=M_{\rm Pl}^2 \int dt \int_{\vec k} \,a^3 \bigg[\frac{ k^2/4}{1-k^2/4a^2\dot{H}} \, \big| \dot{\pi}_T^i \big|^2 +\dot{H}c_T^2 \, k^2 \big|\pi_T^i \big|^2 \bigg] \label{vectors}\\
S_{L} &= M_{\rm Pl}^2 \int dt \int_{\vec k} \, a^3 \bigg[ \frac{ k^2/3}{1-k^2/3a^2\dot{H}}\big|\dot{\pi}_L -({\dot{H}}/{H})\pi_L\big|^2+\dot{H}c_L^2 \, k^2 \big| \pi_L \big|^2 \bigg] \; . \label{scalars}
\end{align}
For icosahedral inflation, since the background does not have full $SO(3)$ symmetry, one expects quadratic mixings among these different polarizations---neither spin nor helicity is a good quantum number. However, as pointed out already in \cite{icosahedralinflation, tensortensor}, such an effect is invisible to lowest order in the derivative expansion. On the other hand, if one takes into account higher derivative corrections, it is easy to write down mixing terms that are consistent with icosahedral symmetry. Ref.~\cite{tensortensor} considered the leading anisotropy effects for the tensor spectrum, which include a mixed correlator for helicities $+2$ and $-2$. Here we do the same for the scalar-tensor two-point function.
In a derivative expansion, the first icosahedral-invariant bilinear term we can write down that mixes scalars (and vectors) with tensors is
\begin{equation} \label{Smix}
S_{\rm mix} = -M_{\rm Pl}^{2}\int dt d^3x \, a \, {\Delta c_{\zeta\gamma}^{2}} \, T^{ijklmn}_{6} \partial_{i}\pi_{j}\partial_{k}\partial_{l}\gamma_{mn} \; .
\end{equation}
Here, $\Delta c^2_{\zeta\gamma}$ is a free dimensionless parameter---which we expect to depend slowly on time, but which we can take as constant to zeroth order in the slow-roll expansion---and $T_6$ is the unique (up to normalization) spin-6 icosahedral invariant tensor \cite{tensortensor}.
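As an aside, the icosahedral invariance of this structure is straightforward to check numerically: the sum of the sixth outer powers of the (unit-normalized) icosahedron vertices is a symmetric rank-6 tensor invariant under the icosahedral group, and $T_6$ is proportional to its traceless part. A minimal sketch (Python; the traceless projection is omitted and the normalization is arbitrary):
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

g = (1 + np.sqrt(5)) / 2                    # golden ratio
# 12 icosahedron vertices: cyclic permutations of (0, +-1, +-g)
V = []
for s1 in (1, -1):
    for s2 in (1, -1):
        for shift in range(3):
            V.append(np.roll([0.0, s1, s2 * g], shift))
V = np.array(V)
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Invariant symmetric rank-6 tensor (T_6 is its traceless part)
T = np.einsum('ai,aj,ak,al,am,an->ijklmn', V, V, V, V, V, V)

# A 72-degree rotation about a vertex axis is an icosahedral symmetry
R = Rotation.from_rotvec(2 * np.pi / 5 * V[0]).as_matrix()
T_rot = np.einsum('ia,jb,kc,ld,me,nf,abcdef->ijklmn',
                  R, R, R, R, R, R, T)
print(np.abs(T_rot - T).max())              # ~1e-15: invariant
\end{verbatim}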
As we show in Appendix \ref{interactionterm}, the single power of $a(t)$ is consistent with the near scale-invariance of the solid driving inflation, which is ultimately related to the slow-roll expansion \cite{solidinflation}. There, we also show that, to this order in derivatives, associated with \eqref{Smix} there are no extra scalar-tensor mixings involving $N$ or $N^i$. Finally, in the spirit of the derivative expansion and according to standard EFT logic, we need higher-derivative corrections to yield small effects at the scales of interest, that is, for typical frequencies of order $H$. This requires ${\Delta c_{\zeta\gamma}^{2}}$ to be generically `small'; how small will be made clear in sect.~\ref{nonperturbative}.
The mixing term above can come from non-minimal couplings between our solid and the Riemann tensor, e.g.~of the form $(R^{\mu\nu\rho\sigma}\partial_{\mu}\phi^I\partial_{\nu}\phi^J\partial_{\rho}\phi^K\partial_{\sigma}\phi^L)^3$ with suitable index contractions. We show this in the Appendix.
The fact that, within the regime of validity of the EFT, the term above can only yield small effects allows us to treat it in perturbation theory.
Decomposing the phonon field into its longitudinal and transverse parts,
\begin{equation}
\pi_{j} = \frac{\partial_{j}}{\sqrt{-\nabla^{2}}} \pi_{L} + \pi^{j}_{T} \; , \qquad \vec \nabla \cdot \vec \pi_T = 0 \; ,
\end{equation}
and keeping only the longitudinal one, the mixing term above becomes
\begin{equation} \label{mixing}
S_{\rm mix} = - M_{\rm Pl}^{2}\int \frac{d\tau d^{3}k}{(2\pi)^{3}} \, a^{2} \Delta c_{\zeta\gamma}^{2} \, T^{ijklmn}_{6} \, k_{i} \hat{k}_{j} k_{k}k_{l} \, \pi_{L}(\vec{k}, \tau) \gamma_{mn}(-\vec{k},\tau) \; ,
\end{equation}
where we switched to Fourier space and conformal time.
It is customary to parametrize scalar perturbations in terms of the variable $\zeta$, which for solid inflation is related to $\pi_L$ by $\zeta = -k \, \pi_{L}/3$ \cite{solidinflation}. Following standard cosmological perturbation theory \cite{maldacena}, to first order in $\Delta c_{\zeta\gamma}^{2}$ the mixed two-point function we are after is thus
\begin{equation}
\langle\zeta(\vec k , \tau) \, \gamma^{s}(\vec q , \tau)\rangle = -i\int_{-\infty}^{\tau} d\tau' \langle\Omega(-\infty)|[\zeta(\vec k , \tau)\gamma^{s}(\vec q , \tau), H_{\rm int}(\tau')]|\Omega(-\infty)\rangle \;,
\end{equation}
where $s=\pm$ is either of the two tensor polarizations,
\begin{equation} \label{gamma ij}
\gamma_{ij}(\vec k,\tau) = \sum_{s=\pm} \gamma^s (\vec{k},\tau) \, \epsilon_{ij}^s(\vec k) \; , \quad \qquad \big( \epsilon^s_{ii} = k_i \epsilon^s_{ij} = 0 \, , \; \epsilon^s_{ij} \epsilon^{s'*}_{ij} = 2 \delta^{ss'} \big) \; ,
\end{equation}
and the interaction Hamiltonian is
\begin{equation}
H_{\rm int}(\tau ') = -3M_{\rm Pl}^{2} \int \frac{d^{3}k' }{(2\pi)^{3}} a^{2} \, \Delta c_{\zeta\gamma}^{2} \, T^{ijklmn}_{6} \, \hat{k}_{i}' \hat{k}_{j}' \hat{k}_{k}' \hat{k}_{l}' \, k'{} ^2\,\zeta(\vec{k}',\tau ')\gamma_{mn}(-\vec{k}',\tau ') \; .
\end{equation}
Writing our fields as usual as
\begin{align}
& \gamma^s (\vec{k},\tau) = \gamma_{cl}(k,\tau) \, a^s (\vec{k}) + \gamma^*_{cl}(k,\tau) \, a^{s\dagger}(-\vec{k}) \\
& \zeta(\vec{k},t) = \zeta_{cl}(\vec{k},t) \, b(\vec{k}) + \zeta_{cl}^*(\vec{k},t) \, b^{\dagger}(-\vec{k}) \; ,
\end{align}
and using the relevant mode functions to lowest order in slow roll \cite{solidinflation},
\begin{align}
& \gamma_{cl} (k,\tau) = \frac{1}{M_{\rm Pl} a}\, \frac{e^{-ik\tau}}{\sqrt{k}} \, \Big( 1 - \frac{i}{k\tau} \Big) \\
& \zeta_{cl}(\vec{k},\tau) = -\frac{1}{M_{\rm Pl}a}\sqrt{\frac{c_{L}}{4\epsilon k}} {e^{-ikc_{L}\tau}} \Big(\frac{i}{c_{L}^{2}} + \frac{1}{c_{L}^{3}k\tau} + \frac{k}{3aHc_{L}} \Big) \; ,
\end{align}
after some straightforward (but tedious) algebra we get
\begin{equation} \label{semifinal}
\langle\zeta \gamma^{s} \rangle' \equiv \frac{\langle\zeta ({\vec k , \tau}) \, \gamma^{s} (\vec q, \tau) \rangle }{(2\pi)^{3} \delta^{3}(\vec{k}+\vec{q}) }= \frac{\Delta c^{2}_{\zeta\gamma}}{\epsilon M_{\rm Pl}^{2}} \, T^{ijklmn}_{6} \, \hat{k}_{i} \hat{k}_{j} \hat{k}_{k} \hat{k}_{l} \, \epsilon^{s}_{mn}(\vec{k}) \times I(\tau) \; ,
\end{equation}
where
\begin{align}
I(\tau) \equiv \; & \frac32 \, \frac{c_{L}}{a^{2}} \Big[\Big(\frac{1}{c_{L}^{2}} - \frac{1}{k^{2}c_{L}^{3}\tau^{2}} + \frac{1}{3c_{L}} \Big) \frac{c_{L}^{2}+5c_{L}+3}{3kc_{L}^{2}(1+c_{L})^{2}} \nonumber \\
& + \Big(\frac{1}{kc_{L}^{2}\tau} + \frac{1}{kc_{L}^{3}\tau} + \frac{k}{3aHc_{L}} \Big)\Big(\frac{1}{k^{2}c_{L}^{3}\tau} - \frac{\tau}{3c_{L}(1+c_{L})}\Big)\Big] \; .
\end{align}
For late times, $k\tau \rightarrow 0^-$, our two-point function becomes time independent and scale invariant, and reduces to
\begin{equation}
\langle\zeta \gamma^{s} \rangle' = \frac32 \, \frac{2c_{L}^{3}+4c_{L}^{2}+6c_{L}+3}{(1+c_{L})^{2}} \cdot T^{ijklmn}_{6} \, \hat{k}_{i} \hat{k}_{j} \hat{k}_{k} \hat{k}_{l} \, \epsilon^{s}_{mn}(\vec{k}) \cdot \frac{\Delta c^{2}_{\zeta\gamma}}{\epsilon c_L^5}\frac{H^{2} \, }{ M^{2}_{\rm Pl}k^{3}} \label{final}
\end{equation}
Dropping order-one numerical factors,
\begin{equation}
\langle \zeta \gamma \rangle' \sim \frac{\Delta c^{2}_{\zeta\gamma}}{\epsilon c_L^5}\frac{H^{2} \, }{ M^{2}_{\rm Pl}k^{3}} \sim \frac{\Delta c^{2}_{\zeta\gamma}}{\epsilon c_L^5} \langle \gamma \gamma \rangle' \; ,
\end{equation}
consistently with the estimate in \cite{tensortensor}, which was derived for $c_L \sim 1$. Recalling that for solid inflation models the
tensor-to-scalar ratio is roughly $r \sim \epsilon c_L^5$ \cite{solidinflation}, we see that for $\Delta c^{2}_{\zeta\gamma} \gg r$
the mixed correlator we computed is much bigger than the tensor spectrum itself. As we will see in sect.~\ref{nonperturbative}, such a possibility is still within the regime of validity of the effective theory and of a perturbative expansion in $\Delta c^{2}_{\zeta\gamma}$.
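As an illustrative (assumed) benchmark, take $\epsilon = 10^{-2}$, $c_L = 10^{-1}$ and $\Delta c^{2}_{\zeta\gamma} = 10^{-4}$; the quick check below shows that the mixing then dominates the tensor spectrum by three orders of magnitude while respecting the perturbativity bound derived in sect.~\ref{nonperturbative}.
\begin{verbatim}
# Illustrative benchmark (assumed values, not fits)
eps, cL, dc2 = 1e-2, 1e-1, 1e-4   # slow roll, scalar sound speed, mixing

r = eps * cL**5                   # tensor-to-scalar ratio ~ 1e-7
print("mixing/scalar:", dc2)      # ~ 1e-4
print("mixing/tensor:", dc2 / r)  # ~ 1e+3
print("perturbative :", dc2 < cL * eps**0.5)  # Delta c^2 << c_L sqrt(eps)
\end{verbatim}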
\section{Visualizing the two-point function}
The two-point function \eqref{final} depends on the orientation of $\vec k $ relative to the underlying icosahedral geometry, through the factor
\begin{equation} \label{M}
M^{\zeta s}(\vec k) \equiv T^{ijklmn}_{6} \, \hat{k}_{i} \hat{k}_{j} \hat{k}_{k} \hat{k}_{l} \, \epsilon^{s}_{mn}(\vec{k})
\end{equation}
(we are using a notation consistent with that of \cite{tensortensor}, to facilitate comparison.)
As discussed at length in \cite{tensortensor}, the phase of the polarization tensor $ \epsilon^{s}_{mn}(\vec k)$ is arbitrary, and, as a function of the direction of $\vec k$, necessarily involves singularities. This makes decomposing $M^{\zeta s}(\vec k)$ in spherical harmonics or plotting its angular dependence not particularly informative.
One possible way out is to consider the squared absolute value of $M^{\zeta s}(\vec k)$, so that the ambiguous and singular phases cancel. Using the results of \cite{tensortensor},
\begin{align*}
|M^{\zeta +}|^{2} = |M^{\zeta -}|^{2} & = \sfrac12 \sum_{s=\pm1}|M^{\zeta s}|^{2} \\
&= \sfrac12 \, T^{ijklmn}_{6}T^{opqrst}_{6}\hat{k}_{i} \hat{k}_{j} \hat{k}_{k} \hat{k}_{l} \hat{k}_{o} \hat{k}_{p} \hat{k}_{q} \hat{k}_{r}\sum_{s=\pm1} \epsilon^{s}_{mn}(\vec{k})\epsilon^{s\;*}_{st}(\vec{k}) \\
&= \sfrac12 \, T^{ijklmn}_{6}T^{opqrst}_{6}\hat{k}_{i} \hat{k}_{j} \hat{k}_{k} \hat{k}_{l} \hat{k}_{o} \hat{k}_{p} \hat{k}_{q} \hat{k}_{r} (P_{ms}P_{nt}+P_{mt}P_{ns}-P_{mn}P_{st}) \numberthis \; ,
\end{align*}
where $P_{ij}$ is the transverse projector,
\begin{equation}
P_{ij}(\hat{k}) \equiv \delta_{ij}-\hat{k}_i \hat{k}_j \; .
\end{equation}
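The polarization sum used in the last step can be verified numerically; the sketch below builds helicity-2 polarization tensors from an arbitrary transverse orthonormal pair (the arbitrary overall phases discussed above drop out of the sum):
\begin{verbatim}
import numpy as np

def pol_tensors(khat):
    """Helicity-2 polarization tensors for direction khat, built from an
    (arbitrary-phase) orthonormal transverse pair e1, e2."""
    a = np.array([0.0, 0.0, 1.0])
    if abs(khat @ a) > 0.9:                    # avoid parallel reference
        a = np.array([1.0, 0.0, 0.0])
    e1 = np.cross(khat, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(khat, e1)
    eps = {}
    for s in (+1, -1):
        e = (e1 + 1j * s * e2) / np.sqrt(2)    # helicity-1 vector
        eps[s] = np.sqrt(2) * np.outer(e, e)   # eps_ij eps*_ij = 2
    return eps

khat = np.array([0.3, -0.5, 0.8]); khat /= np.linalg.norm(khat)
eps = pol_tensors(khat)
P = np.eye(3) - np.outer(khat, khat)           # transverse projector

lhs = sum(np.einsum('mn,st->mnst', eps[s], eps[s].conj()) for s in (1, -1))
rhs = (np.einsum('ms,nt->mnst', P, P) + np.einsum('mt,ns->mnst', P, P)
       - np.einsum('mn,st->mnst', P, P))
print(np.abs(lhs - rhs).max())                 # ~1e-16
\end{verbatim}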
Following \cite{tensortensor}, we expect $|M^{\zeta s}|^{2} $ to contain spherical harmonics with $\ell = 0, 6, 10, 12$ only. Indeed, with the help of Mathematica we find
\begin{equation}
|M^{\zeta s}|^{2} = \sfrac12 \, \sum_{\ell m} C_{\ell m}Y^{m}_{\ell}(\theta,\phi) \; ,
\end{equation}
with the only nonzero $C_{\ell m}$ being
\begin{align*}
\mbox{$\ell = 0$:} \quad & C_{0,0}=\sfrac{1024\sqrt{\pi}}{3003}(\gamma+1) \numberthis \\
\mbox{$\ell = 6$:} \quad & C_{6,\pm6} = -\sqrt{\sfrac{5}{11}} \, C_{6,\pm2} = \sfrac{32}{323} \sqrt{\sfrac{11\pi}{273}}\,(\gamma+2) \\
& C_{6,\pm4} = -\sqrt{\sfrac{7}{2}} \, C_{6,0} = -\sfrac{352}{969} \sqrt{\sfrac{2\pi}{91}} \, (\gamma+1) \numberthis \\
\mbox{$\ell = 10$:} \quad & C_{10,\pm10} = -\sqrt{\sfrac{255}{19}} \, C_{10,\pm6} = -\sqrt{\sfrac{255}{494}} \, C_{10,\pm2} = -\sfrac{20}{23} \sqrt{\sfrac{21\pi}{46189}} \, (3\gamma+1)\\
&C_{10,\pm8} = \sfrac{1}{2}\sqrt{\sfrac{17}{3}} \, C_{10,\pm4} = -\sqrt{\sfrac{187}{130}} \, C_{10,0} = -\sfrac{20}{23}\sqrt{\sfrac{70\pi}{7293}} \,(\gamma+1) \numberthis \\
\mbox{$\ell = 12$:} \quad & C_{12,\pm12} = 5\sqrt{\sfrac{69}{154}} \, C_{12,\pm8} = \sfrac{15}{17}\sqrt{\sfrac{437}{187}} \, C_{12,\pm4} = \sfrac{5}{58}\sqrt{\sfrac{5681}{119}} \, C_{12,0} = \sfrac{45}{2}\sqrt{\sfrac{\pi}{676039}} \, (\gamma+1) \\
&C_{12,\pm10} = -\sfrac{1}{5}\sqrt{\sfrac{209}{21}} \, C_{12,\pm6} = \sqrt{\sfrac{209}{34}} \, C_{12,\pm2} = \sfrac{33}{23}\sqrt{\sfrac{3\pi}{29393}} \, (3\gamma+1) \; , \numberthis
\end{align*}
where $\gamma$ is the golden ratio.
In Figure \ref{fig:aniso} we plot the angular dependence of $|M_{\zeta\gamma}|$, alongside the underlying icosahedral structure. Clearly, the signal is concentrated around directions pointing towards the edges of the icosahedron.
\begin{figure}[h]
\centering
\subfloat[$|M_{\zeta\gamma}|$ overlapping with our icosahedron]{{\includegraphics[scale=.61]{mzg_graph} }}%
\qquad
\subfloat[$|M_{\zeta\gamma}|$ standing alone]{{\includegraphics[scale=.61]{no_icosahedron} }}%
\caption{Angular plot of $|M_{\zeta\gamma}|$.\label{fig:aniso}}
\end{figure}
Another way to get rid of the ambiguous phases is to consider directly the two-point function $ \langle \zeta \gamma_{ij}\rangle$, because the full $\gamma_{ij}$ field---eq.~\eqref{gamma ij}---is unambiguous. Using the results above and the tracelessness of $T_6$, we get
\begin{align*}
\langle \zeta \gamma_{ij}\rangle & \propto \sum_{s=\pm1} M^{\zeta s }\epsilon^{s}_{ij}(-\vec{k})\\
& =T^{klmnop}_{6}\hat{k}_{k}\hat{k}_{l}\hat{k}_{m}\hat{k}_{n} \sum_{s=\pm1}\epsilon^{s}_{op}(\vec{k})\epsilon^{s\;*}_{ij}(\vec{k})\\
& = (P_{oi}P_{pj}+P_{oj}P_{pi}-P_{op}P_{ij}) \, T^{klmnop}_{6}\hat{k}_{k}\hat{k}_{l}\hat{k}_{m}\hat{k}_{n}\\
& = \big[ 2T_{6}^{klmnij}-2(T_{6}^{klmnip}\hat{k}_{p}\hat{k}_{j}+T_{6}^{klmnjp}\hat{k}_{i}\hat{k}_{p} ) +T_{6}^{klmnop}\hat{k}_{o}\hat{k}_{p}(\delta_{ij}+\hat{k}_{i}\hat{k}_{j}) \big] \hat{k}_{k}\hat{k}_{l}\hat{k}_{m}\hat{k}_{n} \numberthis \; .
\end{align*}
This however is a transverse traceless two-index tensor (because $\gamma_{ij}$ is), and so it is difficult to visualize its angular dependence: we cannot trace it or contract it with $\hat k$'s to construct a scalar angular function.
\section{Non-perturbative check}\label{nonperturbative}
As a check of our results of sect.~\ref{perturbative}, we now try to calculate the same two-point function in a non-perturbative way. We will be able to do so only in a specific kinematical regime, which however will still allow us to perform a nontrivial check.
Eventually we will still expand our result to linear order in $\Delta c^2_{\zeta \gamma}$, so, we can focus from the start on a two-field system made up of the scalar perturbations and either polarization of the tensor ones, because at linear order there cannot be interference among different sources of mixing. In particular, we can safely neglect the tensor-tensor mixing of ref.~\cite{tensortensor}.
Moreover, it turns out that, because of the time-dependence of $a(\tau)$, even this simple two-field system cannot be diagonalized for generic momenta (or generic times). We show this in Appendix \ref{diagonalappendix}. So, here we focus on modes well inside the sound horizon, $c_L k/aH \gg 1$, for which the time-dependence of $a(\tau)$ can be neglected.
With these qualifications in mind, to lowest order in slow-roll the quadratic action we need is (see eqs.~\eqref{tensors}, \eqref{scalars}, \eqref{mixing})
\begin{align}
S_{\gamma}+S_{L} +S_{\rm mix} & \to \sfrac{1}{2}M^{2}_{\rm Pl} a^2 \int \frac{d\tau d^{3}k}{(2\pi)^{3}} \Big[\sfrac{1}{2} \big(|\gamma_s'|^{2} - k^{2}|\gamma_s|^{2} \big) +2\epsilon a^{2}H^{2}(|\pi_{L}'|^{2}-c_{L}^{2}k^{2}|\pi_{L}|^{2}) \nonumber \\
& - \Delta c_{\zeta\gamma}^{2} k^3 \big( M^{\zeta s} \, \pi_L^* \gamma^s + {\rm c.c.} \big)\Big ] \qquad \qquad \qquad (c_L k/aH \gg 1) \; ,
\end{align}
where $s$ is either $+$ or $-$, $M^{\zeta s}$ is defined in \eqref{M}, and all the fields and coefficients are evaluated at $(\vec k, \tau)$.
Neglecting the time-dependence of all the coefficients---including $a$---we can go to frequency space and rewrite this conveniently in a compact form as
\begin{equation}
\int \frac{d \omega d^3 k}{(2\pi)^4} \, \psi^{\dagger} \cdot K \cdot \psi \; ,
\end{equation}
where
\begin{equation}
\psi \equiv
\begin{pmatrix}
\gamma_{s}\\
\pi_L
\end{pmatrix} , \qquad
K \equiv
\sfrac{1}{2}M^{2}_{\rm Pl} a^2
\begin{pmatrix}
\sfrac{1}{2}(\omega^2-k^2) & -\sfrac12 \Delta c_{\zeta\gamma}^{2} k^3 {M^{\zeta s}}^* \\
-\sfrac12 \Delta c_{\zeta\gamma}^{2} k^3 M^{\zeta s} & 2\epsilon a^{2}H^{2} (\omega^2 - c_L^2 k^2)
\end{pmatrix},
\end{equation}
$M^{\zeta s}$ is defined in \eqref{M}, and all the fields are now evaluated at $(\vec k, \omega)$.
To compute the equal-time two-point function we are interested in well inside the sound horizon, we can now simply invert the matrix $K$, insert the $i \epsilon$'s appropriate for the Feynman prescription for the poles, and take the integral over $\omega$ through standard residue methods. The reason this procedure is correct in our limit is that in general it gives the ground state's $T$-ordered correlation functions for a quantum system with a time-independent Hamiltonian; in our case, $T$-ordering does not matter, because our fields commute at equal time; moreover, in our inside-the-sound-horizon limit the time-dependence of the perturbations' Hamiltonian is negligible, and the Bunch-Davies ground state is equivalent to the flat-space one.
Then, the Fourier-space Feynman propagator of $\psi$ is schematically
\begin{equation}
\langle \psi \psi^\dagger \rangle_{\omega, \vec k} = i (K + i \epsilon) ^{-1} \, (2\pi)^4 \delta^{4} \; ,
\end{equation}
and so the equal-time two-point function we are interested in is
\begin{align}
\langle \pi_L \gamma^s \rangle'_{\vec k, \tau} & = \int \frac{d \omega}{(2\pi)} \, i (K + i \epsilon)_{12}^{-1} \nonumber \\
& = \frac{\Delta c^{2}_{\zeta\gamma}}{ a^2 M_{\rm Pl}^{2} } k^3 M^{\zeta s} \int \frac{d \omega}{(2\pi)} \frac{i}{\epsilon a^2 H^2 (\omega^2-k^2 + i \epsilon)(\omega^2-c_L^2 k^2 + i\epsilon) - \sfrac14 \big(\Delta c^{2}_{\zeta\gamma}\big)^2 |M^{\zeta s}|^2 k^6 } \nonumber \\
& \simeq - \frac{\Delta c^{2}_{\zeta\gamma}}{2 \epsilon H^2 a^4 M_{\rm Pl}^{2} } M^{\zeta s} \frac{1}{c_L(1+c_L)} \; ,
\end{align}
where in the last step we restricted to the first order in $\Delta c^{2}_{\zeta\gamma}$. Recalling that $\zeta$ is related to $\pi_L$ by $\zeta = - k\pi_L/3$, we see that this result matches precisely our previous one, eq.~\eqref{semifinal}, in the high $k$/early times limit, $c_L k |\tau| \gg 1$.
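The core frequency integral in the last step can be cross-checked by brute-force integration along the real axis with a small but finite $i\epsilon$; the sketch below (with arbitrary test values of $k$ and $c_L$) reproduces the residue result $-1/\big(2c_L(1+c_L)k^3\big)$ up to corrections of order the regulator:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

k, cL, ie = 1.0, 0.3, 1e-3                     # test values; ie = "i epsilon"
poles = (-k, -cL * k, cL * k, k)

def f(w, part):
    val = 1j / ((w**2 - k**2 + 1j*ie) * (w**2 - (cL*k)**2 + 1j*ie))
    return val.real if part == 're' else val.imag

re = quad(f, -100, 100, args=('re',), points=poles, limit=400)[0]
im = quad(f, -100, 100, args=('im',), points=poles, limit=400)[0]
numeric = (re + 1j * im) / (2 * np.pi)
analytic = -1 / (2 * cL * (1 + cL) * k**3)     # residue result in the text
print(numeric.real, analytic)                  # agree up to O(ie)
\end{verbatim}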
This computation also makes it clear how small $\Delta c^2_{\zeta\gamma}$ should be for a perturbative analysis to be applicable: the $\omega$ integral above is dominated by poles with $\omega \simeq \pm k$ and $\omega \simeq \pm c_L k$. The scalar-tensor mixing shifts these, respectively, by
\begin{equation}
\frac{\Delta \omega}{\omega} \simeq \frac{(\Delta c^{2}_{\zeta\gamma})^2 |M^{\zeta s}|^2}{8 (1-c_L^2)} \frac{k^2}{\epsilon a^2 H^2 } \; , \qquad
\frac{\Delta \omega}{\omega} \simeq - \frac{(\Delta c^{2}_{\zeta\gamma})^2 |M^{\zeta s}|^2}{8 c_L^2 (1-c_L^2)} \frac{k^2}{\epsilon a^2 H^2 } \; .
\end{equation}
For these relative shifts to be small up to physical momenta $k/a$ much bigger than $H$, we need
\begin{equation}
\Delta c^{2}_{\zeta\gamma} \ll c_L \sqrt{\epsilon} \; ,
\end{equation}
where we used that $M^{\zeta s}$ and $(1-c_L^2)$ are both of order one (in solid inflation models, $c_L^2$ has to be smaller than $1/3$ \cite{solidinflation}).
\section{Imprints on CMB Anisotropies}
We now turn our attention to the effects of a scalar-tensor mixing on CMB anisotropies. In more standard cases, where the inflationary theory has both rotational and parity symmetry, a mixing between the so-called $E$ and $B$ modes is forbidden due to symmetry arguments. More precisely, following the convention in \cite{Weinberg:2008zzc},
\begin{align*}
\langle a^*_{T,lm} a_{T,l'm'} \rangle = &C_{TT,l} \, \delta_{l,l'} \, \delta_{m,m'}\\
\langle a^*_{T,lm} a_{E,l'm'} \rangle = &C_{TE,l} \, \delta_{l,l'} \, \delta_{m,m'}\\
\langle a^*_{E,lm} a_{E,l'm'} \rangle = &C_{EE,l} \, \delta_{l,l'} \, \delta_{m,m'}\\
\langle a^*_{B,lm} a_{B,l'm'} \rangle = &C_{BB,l} \, \delta_{l,l'} \, \delta_{m,m'}\\
\langle a^*_{T,lm} a_{B,l'm'} \rangle = &0\\
\langle a^*_{E,lm} a_{B,l'm'} \rangle = &0 \; .
\end{align*}
In particular, the $T$-$B$ and $E$-$B$ correlators vanish because under a parity transformation one has
\begin{equation}
a_{T,lm} \rightarrow (-1)^l \, a_{T,lm} \;, \qquad a_{E,lm} \rightarrow (-1)^l \, a_{E,lm} \;, \qquad a_{B,lm} \rightarrow -(-1)^l \, a_{B,lm} \; .
\end{equation}
Therefore, when $l = l'$, such correlators are forbidden because of parity, while for $l \neq l'$, they are forbidden because of rotations.
However, in our case there is no full rotational symmetry, hence modes of different $l$'s can mix. As a result, one can generically expect nonzero $\langle a^*_{T,lm} a_{B,l'm'} \rangle$ and $\langle a^*_{E,lm} a_{B,l'm'} \rangle$ correlators when $l = l' \pm n$ for odd $n$ (even $n$'s are still forbidden by parity, which is a symmetry of our theory). A similar argument has been presented in \cite{Bartolo:2014hwa} for pseudoscalar inflation. Adapting the notation of \cite{Bartolo:2014hwa} and \cite{Shiraishi:2010sm},
\begin{align}
a^{(s)}_{T/E, lm} =& 4\pi (-i)^l \int \frac{d^3 k}{(2\pi)^3} \mathcal{T}^{(s)}_{T/E,l}(k) \, \zeta_{\vec{k}} \, Y^{*}_{lm} (\hat{k})\\
a^{(t)}_{T/E, lm} =& 4\pi (-i)^l \int \frac{d^3 k}{(2\pi)^3} \mathcal{T}^{(t)}_{T/E,l}(k) \left[ \gamma_{\vec{k}}^{(+2)} \, {}_{-2}Y^*_{lm}(\hat{k}) + \gamma_{\vec{k}}^{(-2)} \, {}_{2}Y^*_{lm}(\hat{k})\right]\\
a^{(t)}_{B, lm} =& 4\pi (-i)^l \int \frac{d^3 k}{(2\pi)^3} \mathcal{T}^{(t)}_{B,l}(k) \left[ \gamma_{\vec{k}}^{(+2)} \, {}_{-2}Y^*_{lm}(\hat{k}) - \gamma_{\vec{k}}^{(-2)} \, {}_{2}Y^*_{lm}(\hat{k})\right]
\end{align}
Here we use $s$ and $t$ to label contributions from scalar and tensor modes, $\mathcal{T}^{(s/t)}_{T/E/B, l} (k)$ is the corresponding radiation transfer function (see, e.g., \cite{Shiraishi:2010sm} for their explicit forms), and $ {}_{\pm2}Y_{lm}$ is the spin-weighted spherical harmonics of spin $\pm 2$. The transfer functions depend only on the cosmology after inflation, and are thus independent of our inflationary model. Inflation enters the correlation functions above only through $\zeta$ and $\gamma$, evaluated at the end of inflation. (See \cite{Weinberg:2008zzc, Bartolo:2014hwa, Shiraishi:2010sm, Shiraishi:2010kd, Zaldarriaga:1996xe} for details). We thus have
\begin{align}
&\langle a^{(s)*}_{T/E,lm} a^{(t)}_{B,l'm'} \rangle \nonumber \\
=&(4\pi)^2 i^{l-l'} (-1)^{l'} \times C \times A_{(l,m),(l',m')} \times \int \frac{dk}{k} \mathcal{T}^{(s)}_{T/E,l}(k) \mathcal{T}^{(t)}_{B,l'}(k)
\end{align}
where $C$ is a $k$-independent factor, given by our previous calculation as
$$
C = \frac{3}{2} \, \frac{2c^3_L+4c_L^2+6c_L+3}{(1+c_L)^2 } \, \frac{\Delta c^2_{\zeta\gamma} H^2}{\epsilon c_L^5 M^2_{Pl}} \; ,
$$
and $A_{(l,m),(l',m')}$ is a purely geometric factor, defined as
\begin{equation}
A_{(l,m),(l',m')} \equiv \int d\Omega_{\hat{k}} \, Y_{lm}(\hat{k}) \left[M^{\zeta +} \, {}_{-2}Y^*_{l'm'}(\hat{k}) - M^{\zeta -} \, {}_{2}Y^*_{l'm'}(\hat{k})\right] \;.
\end{equation}
For icosahedral inflation, the parity selection rules spelled out above allow non-vanishing $A_{(l,m),(l',m')}$ only for $ l = l' \pm n$, with $n$ odd.
In fact, beyond this parity constraint, further investigation with Mathematica shows no additional selection rule based on the value of $l - l'$.
Notice that the arbitrary and singular phase introduced in $M^{\zeta\pm}$ by the polarization tensors is still there---it does not cancel out in the combination entering $A_{(l,m),(l',m')}$. So, in order to evaluate these expressions, one should make an explicit choice of polarization tensors. For instance, the choice of ref.~\cite{Shiraishi:2010kd} is,
\begin{equation}
\epsilon_{mn}^{\pm 2}(\hat{k}) = \sqrt{2} \, \epsilon_m^{\pm 1}(\hat{k}) \, \epsilon_n^{\pm 1}(\hat{k}) \; , \qquad \epsilon_m^{\pm 1}(\hat{k}) = \frac{1}{\sqrt{2}} \big( \hat{\theta}(\hat{k}) \pm i \hat{\phi}(\hat{k}) \big) \; ,
\end{equation}
where $\theta$ and $\phi$ are the polar and azimuthal angles of $\hat k$, and $\hat \theta$ and $\hat \phi$ are the corresponding unit vectors.
With this choice, as an example, for $l=3$, $l'=2$, $m'=2$, and arbitrary $m$, we find
\begin{align}
A_{(3,-2),(2,2)} =& \frac{2+\gamma}{3\sqrt{21}}\;,\nonumber\\
A_{(3,0),(2,2)} =& -\frac{\gamma}{6\sqrt{70}}\;,\nonumber\\
A_{(3,2),(2,2)} =& A_{(3,1),(2,2)} = A_{(3,-1),(2,2)} =0 \; ,
\end{align}
where $\gamma$ is, as before, the golden ratio (we computed the relevant integrals with Mathematica).
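For readers who wish to reproduce such geometric factors outside Mathematica, the following minimal Python sketch (ours; it is not the code used for the results above) implements the main building block, the spin-weighted spherical harmonics, through the closed-form Goldberg et al. sum, and validates the implementation via orthonormality. The model-specific kernels $M^{\zeta\pm}$ would still have to be supplied from the earlier sections before assembling $A_{(l,m),(l',m')}$.
\begin{verbatim}
import numpy as np
from math import factorial, comb

def sYlm(s, l, m, theta, phi):
    # Spin-weighted spherical harmonic _sY_lm (Goldberg et al. sum),
    # with cot powers folded into sin/cos powers for numerical safety.
    pref = (-1)**m * np.sqrt(
        factorial(l + m) * factorial(l - m) * (2*l + 1)
        / (4*np.pi * factorial(l + s) * factorial(l - s)))
    half = theta / 2.0
    total = np.zeros_like(theta, dtype=complex)
    for r in range(max(0, m - s), min(l - s, l + m) + 1):
        e = 2*r + s - m                  # exponent of cot(theta/2), >= 0
        total += (comb(l - s, r) * comb(l + s, r + s - m)
                  * (-1)**(l - r - s)
                  * np.cos(half)**e * np.sin(half)**(2*l - e))
    return pref * total * np.exp(1j * m * phi)

# Quadrature: Gauss-Legendre in cos(theta) x uniform grid in phi.
x, w = np.polynomial.legendre.leggauss(64)
theta = np.arccos(x)
phi = np.linspace(0.0, 2*np.pi, 128, endpoint=False)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
W = w[:, None] * (2*np.pi / phi.size)

f = sYlm(-2, 3, 1, TH, PH)
g = sYlm(-2, 2, 1, TH, PH)
print(np.sum(f*np.conj(f)*W).real)   # ~ 1.0 (normalization)
print(abs(np.sum(f*np.conj(g)*W)))   # ~ 0.0 (orthogonality)
\end{verbatim}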
Notice that icosahedral inflation also gives rise to nonzero tensor-tensor $T$-$B$ and $E$-$B$ correlators, $\langle a^{(t)*}_{T/E,lm} a^{(t)}_{B,l'm'} \rangle$, in addition to the scalar-tensor ones, $\langle a^{(s)*}_{T/E,lm} a^{(t)}_{B,l'm'} \rangle$. However, $T$-$B$ and $E$-$B$ correlators are dominated by the latter contributions, since the anisotropies in the tensor-tensor spectrum are of order $\epsilon c_L^5 \ll 1$ compared to the tensor-scalar mixing \cite{tensortensor} \footnote{As shown in Appendix \ref{interactionterm}, the parameter $\Delta c^2_\gamma$ that corrects the tensor modes' propagation speed in an anisotropic fashion in ref.~\cite{tensortensor} is generically of the same order as our mixing parameter $\Delta c^2_{\zeta\gamma}$, since the two effects can arise from the same non-linear combinations of matter fields and curvature tensors.}.
Current CMB observations are able to put constraints on $T$-$B$ and $E$-$B$ correlations. In the CMB literature, such correlations are usually attributed to cosmic birefringence. For example, recent constraints on cosmic birefringence from ACT can be found in \cite{Namikawa:2020ffr} and \cite{Choi:2020ccd}, and similar constraints from Planck can be found in \cite{Aghanim:2016fhp} and \cite{Gruppuso:2020kfy}. However, it is not straightforward to translate constraints on cosmic birefringence into constraints on the parameters of our model. We leave this analysis for future work.
\section{Concluding remarks}
We have computed the scalar-tensor correlation function in icosahedral inflation \cite{icosahedralinflation}, and discussed its possible imprints on CMB anisotropies, in the form of non-vanishing $T$-$E$ and $T$-$B$ spectra. Such correlations are allowed because the inflationary model at hand breaks (spontaneously) rotational invariance. Within the regime of validity of the effective field theory, the mixed scalar-tensor correlator can be parametrically larger than the tensor spectrum itself.
It is useful to compare our results and framework to other models of inflation featuring anisotropic effects, such as the model studied in \cite{Watanabe:2009ct, Gumrukcuoglu:2010yc, Watanabe:2010bu}. There, the intrinsic anisotropy of the background evolution enters all correlators of perturbations, including the scalar spectrum. In our case instead, the model is {\em designed} in such a way as to guarantee that the scalar spectrum is automatically isotropic, while leaving open the possibility of detectable anisotropies in other correlation functions, such as the scalar three-point function \cite{icosahedralinflation}, the tensor spectrum \cite{tensortensor}, and the scalar-tensor two-point function (the case considered here). The reason behind this choice is spelled out in the Introduction: the scalar spectrum is the only primordial correlation function we have detected, and it appears to be consistent with statistical isotropy.
\section*{Acknowledgements}
We thank Colin Hill and Lam Hui for useful discussions and comments.
Our work is partially supported by the US DOE (award number DE-SC011941) and by the Simons Foundation (award number 658906).
\section*{Appendix}
\section{Introduction}
Many astronomical experiments and recent cosmological observations
\cite{1} indicate accelerated expansion of our universe. This
expansion is attributed to dark energy (DE), an exotic form of
matter with large negative pressure that violates the strong energy
condition. In the radiation-dominated era, the nucleosynthesis
scenario requires decelerated expansion of the universe in its
early phase. To understand the nature of DE, many cosmological
models like Chaplygin gas, phantom, quintessence and cosmological
constant etc. have been proposed \cite{2}. The modified theories
of gravity like $f(R)$ gravity, Gauss-Bonnet theory, higher
dimensional theories of gravity, scalar tensor theories etc. have
also been suggested \cite{3}. Brans-Dicke (BD) theory of gravity
is one of the most attractive scalar tensor theories due to its
vast cosmological implications \cite{4}. The varying gravitational
constant ($\frac{1}{\phi}$ acts as gravitational constant), the
non-minimal coupling between the scalar field and geometry,
compatibility with the weak equivalence principle, Mach's principle
and Dirac's large number hypothesis are some salient features of
this theory \cite{5, 6}. The BD parameter must satisfy
$\omega\geq40,000$ for consistency with solar-system
bounds \cite{7}.
Spatially homogeneous and anisotropic Bianchi type I (BI) model is
used to study the possible effects of anisotropy in the early
universe \cite{8}. Some people \cite{9} have constructed
cosmological models by using anisotropic fluid and BI universe.
Recently, this model has been studied in the presence of binary
mixture of the perfect fluid and the DE \cite{10}. Sharif and
Kausar \cite{11} have discussed dynamics of the universe with
anisotropic fluid and Bianchi models in $f(R)$ gravity. Some exact
BI solutions have also been investigated in this modified theory
\cite{12}.
In this paper, we construct solutions of the field equations for
BI universe model in the presence of different fluids. The paper
is organized as follows. In the next section, we formulate the
field equations of BD theory for BI universe and some general
parameters. Section \textbf{3} provides solution to the field
equations in the presence of perfect fluid and then anisotropic
fluid. The BI cosmological model with magnetized anisotropic fluid
is investigated in section \textbf{4}. A special case, $m=1$, of
the magnetized anisotropic fluid is also discussed. Finally, we
summarize the results in the last section.
\section{Bianchi Type I Field Equations and Some General Parameters}
The BD theory with self-interacting potential is described by the
action \cite{13}
\begin{equation}\label{1}
S=\int d^{4}x\sqrt{-g}[\phi
R-\frac{\omega_{0}}{\phi}\phi^{,\alpha}\phi_{,\alpha}-U(\phi)+L_{m}],\quad\alpha=0,1,2,3,
\end{equation}
where $\omega_{0}$ and $L_{m}$ represent the constant BD parameter
and the matter part of the Lagrangian respectively. Here we have
taken $8\pi G_{0}=c=1$. Using the principle of least action, we
obtain the field equations
\begin{eqnarray}\label{2}
G_{\mu\nu}&=&\frac{\omega_{0}}{\phi^{2}}[\phi_{,\mu}\phi_{,\nu}
-\frac{1}{2}g_{\mu\nu}\phi_{,\alpha}\phi^{,\alpha}]+\frac{1}{\phi}[\phi_{,\mu;\nu}
-g_{\mu\nu}\Box\phi]+\frac{T_{\mu\nu}}{\phi}-g_{\mu\nu}\frac{U(\phi)}{2\phi},\\\label{3}
\Box\phi&=&\frac{T}{3+2\omega_{0}}-\frac{2U(\phi)-\phi\frac{dU(\phi)}{d\phi}}{3+2\omega_{0}}.
\end{eqnarray}
Here $T_{\mu\nu},~T, ~\Box, ~\Delta^{\mu},~U(\phi)$ represent
energy-momentum tensor, its trace, box or d'Alembertian operator
$(\Box=\Delta^{\mu}\Delta_{\mu})$, covariant derivative and the
self-interacting potential respectively. Equation (\ref{3})
represents the Klein Gordon equation or the wave equation for the
scalar field. This theory reduces to general relativity (GR) when
the scalar field is constant and the BD parameter is very large,
i.e., $\omega \rightarrow \infty$ \cite{14}. However, this is not
true in general, e.g., in the case of exact solutions. It is argued
that this theory goes over to GR only for the non-vanishing trace
of the energy-momentum tensor \cite{15}. For different values of
$\omega$, this theory corresponds to other alternative theories of
gravity. For example, it corresponds to Palatini metric $f(R)$
gravity, the metric $f(R)$ gravity and low energy string theory
action for $\omega=-3/2,~\omega=0$ \cite{16} and $\omega=-1$
\cite{17} respectively.
The BI universe model is given by \cite{18}
\begin{equation}\label{4}
ds^{2}=dt^{2}-A^{2}(t)dx^{2}-B^{2}(t)(dy^{2}+dz^{2}),
\end{equation}
where $A$ and $B$ are the scale factors. This model has one
transverse direction $x$ and two equivalent longitudinal
directions $y$ and $z$. The field equations (\ref{2}) and
(\ref{3}) for the model (\ref{4}) can be written as
\begin{eqnarray}\label{5}
\frac{2\dot{A}\dot{B}}{AB}+\frac{\dot{B}^{2}}{B^{2}}
&=&\frac{T_{00}}{\phi}+\frac{\omega_{0}}{2}
\frac{\dot{\phi}^{2}}{\phi^{2}}-(\frac{\dot{A}}{A}
+2\frac{\dot{B}}{B})\frac{\dot{\phi}}{\phi}+\frac{U(\phi)}{2\phi},\\\label{6}
2\frac{\ddot{B}}{B}+\frac{\dot{B}^{2}}{B^{2}}&=&-\frac{T_{11}}{\phi}
-\frac{\omega_{0}}{2}\frac{\dot{\phi}^{2}}{\phi^{2}}-2\frac{\dot{B}}{B}\frac{\dot{\phi}}{\phi}
-\frac{\ddot{\phi}}{\phi}+\frac{U(\phi)}{2\phi},
\\\label{7}\frac{\ddot{B}}{B}+\frac{\ddot{A}}{A}+\frac{\dot{A}\dot{B}}{AB}
&=&-\frac{T_{22}}{\phi}-\frac{\omega_{0}}{2}
\frac{\dot{\phi}^{2}}{\phi^{2}}-\frac{\ddot{\phi}}{\phi}-(\frac{\dot{A}}{A}
+\frac{\dot{B}}{B})\frac{\dot{\phi}}{\phi}+\frac{U(\phi)}{2\phi}
\end{eqnarray}
and the wave equation is
\begin{equation}\label{8}
\ddot{\phi}+(\frac{\dot{A}}{A}+2\frac{\dot{B}}{B})\dot{\phi}
=\frac{T}{(2\omega_{0}+3)}
-\frac{2U(\phi)-\phi\frac{dU}{d\phi}}{(2\omega_{0}+3)}.
\end{equation}
The corresponding average scale factor $a(t)$, volume $V$ and the
mean Hubble parameter $H$ are
\begin{equation*}
a(t)=(AB^{2})^{1/3},\quad V=a^3(t)=AB^{2},\quad
H(t)=\frac{1}{3}(\frac{\dot{A}}{A}+2\frac{\dot{B}}{B}).
\end{equation*}
The directional Hubble parameters in $x,~y$ and $z$ directions are
given by
\begin{equation}\label{9}
H_{x}=\frac{\dot{A}}{A},\quad H_{y}=H_{z}=\frac{\dot{B}}{B}.
\end{equation}
The anisotropy parameter of expansion $\Delta$ and the deceleration
parameter $q$ are
\begin{equation}\label{10}
\Delta=\frac{1}{3}\sum^{3}_{i=1}(\frac{H_{i}-H}{H})^{2},\quad
q=\frac{d}{dt}(\frac{1}{H})-1.
\end{equation}
The isotropic expansion of the universe can be obtained for
$\Delta=0$. The expansion and shear scalar turn out to be
\begin{eqnarray}\label{11}
\Theta=u^{a}_{;a}=\frac{\dot{A}}{A}+2\frac{\dot{B}}{B},\quad
\sigma=\frac{1}{\sqrt{3}}(\frac{\dot{A}}{A}-\frac{\dot{B}}{B}).
\end{eqnarray}
Since the field equations are highly non-linear, we assume a power
law for the scalar field $\phi(t)=\phi_{0}B^{\alpha},~\alpha>0$
for the expanding universe. For a spatially homogeneous metric,
the normal congruence to homogeneous expansion implies that
$\frac{\sigma}{\Theta}$ is constant, i.e., "the expansion scalar
$\Theta$ is proportional to shear scalar $\sigma$" \cite{19}. This
leads to $A=B^{m},~m\neq 1$ for BI model \cite{18}, \cite{20}. It
is worthwhile to mention here that any universe model becomes
isotropic when $t\rightarrow +\infty, ~\Delta\rightarrow 0,~
V\rightarrow +\infty,~\rho>0$ for the diagonal energy-momentum
tensor.
\section{Anisotropic Fluid Model}
In this section, we first explore the BI model with the
energy-momentum tensor of perfect fluid given by
\begin{equation}\label{12}
T^{\mu}_{\nu}=diag[\rho, -\omega\rho, -\omega\rho,-\omega\rho],
\end{equation}
where $\rho$ and $\omega$ represent the energy density and equation
of state (EoS) parameter respectively. Using this energy-momentum
tensor in the field equations (\ref{2}) and (\ref{3}), it follows
that
\begin{eqnarray}\label{13}
(2m+1)\frac{\dot{B}^{2}}{B^{2}}&=&\frac{\rho}{\phi}+\frac{\omega_{0}}{2}
\frac{\dot{\phi}^{2}}{\phi^{2}}-(m+2)\frac{\dot{B}}{B}\frac{\dot{\phi}}{\phi}
+\frac{U(\phi)}{2\phi},\\\label{14}
2\frac{\ddot{B}}{B}+\frac{\dot{B}^{2}}{B^{2}}
&=&-\frac{\omega\rho}{\phi} -\frac{\omega_{0}}{2}
\frac{\dot{\phi}^{2}}{\phi^{2}}-2\frac{\dot{B}}{B}\frac{\dot{\phi}}{\phi}
-\frac{\ddot{\phi}}{\phi}+\frac{U(\phi)}{2\phi},\\\nonumber
(m+1)\frac{\ddot{B}}{B}+m^{2}\frac{\dot{B}^{2}}{B^{2}}
&=&-\frac{\omega\rho}{\phi}-\frac{\omega_{0}}{2}
\frac{\dot{\phi}^{2}}{\phi^{2}}-\frac{\ddot{\phi}}{\phi}
-(m+1)\frac{\dot{B}}{B}\frac{\dot{\phi}}{\phi}\\\label{15}
&+&\frac{U(\phi)}{2\phi},\\\label{16}
\ddot{\phi}+(m+2)\frac{\dot{B}}{B}\dot{\phi}
&=&\frac{\rho(1-3\omega)}{(2\omega_{0}+3)}
-\frac{2U(\phi)-\phi\frac{dU}{d\phi}}{(2\omega_{0}+3)},
\end{eqnarray}
where we have used $A=B^{m}$.
The energy conservation equation for such a fluid is
\begin{equation}\label{17}
\dot{\rho}+3H(1+\omega)\rho=0.
\end{equation}
For $0\leq\omega\leq1$, this equation yields
\begin{equation}\label{18}
\rho=\rho_{0}B^{-(m+2)(1+\omega)}.
\end{equation}
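As a quick symbolic cross-check (a sketch we add here, not part of the original derivation), one can verify that this density indeed solves Eq.(\ref{17}) with $3H=(m+2)\dot{B}/B$:
\begin{verbatim}
import sympy as sp

t, rho0, m, w = sp.symbols('t rho_0 m omega', positive=True)
B = sp.Function('B')(t)
rho = rho0*B**(-(m + 2)*(1 + w))            # Eq. (18)
H = sp.Rational(1, 3)*(m + 2)*B.diff(t)/B   # mean Hubble parameter, A = B**m
print(sp.simplify(rho.diff(t) + 3*H*(1 + w)*rho))   # -> 0
\end{verbatim}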
Subtracting Eq.(\ref{14}) from (\ref{15}) and using
$\phi=\phi_{0}B^{\alpha}~(\alpha>0)$, we obtain
\begin{equation}\label{19}
B(t)=[(\alpha+m+2)(k_{1}t+k_{2})]^{1/(\alpha+m+2)};\quad m\neq1
\end{equation}
where $k_{1}$ and $k_{2}$ are constants of integration.
Consequently, we have
\begin{equation}\label{20}
A(t)=[(\alpha+m+2)(k_{1}t+k_{2})]^{m/(\alpha+m+2)}.
\end{equation}
Thus the model turns out to be
\begin{eqnarray}\nonumber
ds^{2}&=&dt^{2}-[(\alpha+m+2)(k_{1}t+k_{2})]^{2m/(\alpha+m+2)}dx^{2}\\\label{21}
&-&[(\alpha+m+2)(k_{1}t+k_{2})]^{2/(\alpha+m+2)}(dy^{2}+dz^{2}).
\end{eqnarray}
The corresponding parameters become
\begin{eqnarray}\nonumber
H_{x}&=&mH_{y}=mH_{z}=\frac{\dot{B}}{B}=\frac{mk_{1}}{(\alpha+m+2)(k_{1}t+k_{2})},\\\nonumber
H&=&(\frac{m+2}{3})[\frac{k_{1}}{(\alpha+m+2)(k_{1}t+k_{2})}],\\\nonumber
\Theta&=&\frac{(m+2)k_{1}}{(\alpha+m+2)(k_{1}t+k_{2})},\\\nonumber
\sigma^{2}&=&\frac{(m-1)^{2}}{3}[\frac{k_{1}^{2}}{(\alpha+m+2)^{2}(k_{1}t+k_{2})^{2}}],\\\nonumber
V&=&B^{(m+2)}=[(\alpha+m+2)(k_{1}t+k_{2})]^{\frac{(m+2)}{(\alpha+m+2)}},\\\nonumber
q&=&\frac{d}{dt}(\frac{1}{H})-1=\frac{3\alpha+2m+4}{(m+2)}.
\end{eqnarray}
Since $\alpha,~m>0~(m\neq1)$, we have $q>0$, which corresponds to
decelerated expansion of the universe. The mean anisotropy
parameter of expansion, $\Delta=\frac{2(m-1)^{2}}{(m+2)^2}$, is
constant.
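These kinematic expressions follow mechanically from the power-law solution, so they are straightforward to verify with a computer algebra system; the following sketch (our addition) reproduces $q$ and $\Delta$:
\begin{verbatim}
import sympy as sp

t, k1, k2, alpha, m = sp.symbols('t k_1 k_2 alpha m', positive=True)
B = ((alpha + m + 2)*(k1*t + k2))**(1/(alpha + m + 2))  # Eq. (19)
A = B**m                                                # Eq. (20)

Hx = sp.diff(A, t)/A
Hy = sp.diff(B, t)/B
H = (Hx + 2*Hy)/3

q = sp.simplify(sp.diff(1/H, t) - 1)
Delta = sp.simplify((((Hx - H)/H)**2 + 2*((Hy - H)/H)**2)/3)
print(q)      # expect (3*alpha + 2*m + 4)/(m + 2)
print(Delta)  # expect 2*(m - 1)**2/(m + 2)**2
\end{verbatim}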
In order to investigate an accelerated expansion model of the
universe, we consider a generalization of the perfect fluid, i.e.,
an anisotropic fluid given by
\begin{equation}\label{22}
T^{\nu}_{\mu}=diag[\rho, -p_{x}, -p_{y}, -p_{z}],
\end{equation}
where $\rho$ represents the energy density of the fluid while
$p_{x},~p_{y}$ and $p_{z}$ denote pressures in $x,~y$ and $z$
directions respectively. Equation of state for this fluid is taken
as $p=\omega\rho$, where EoS parameter $\omega$ may not be constant.
By taking the directional EoS parameters
$\omega_{x}=\omega+\delta,~\omega_{y}=\omega+\gamma$ and
$\omega_{z}=\omega+\gamma$ on $x,~y$ and $z$ axes respectively,
Eq.(\ref{22}) can be written as
\begin{equation}\label{23}
T^{\nu}_{\mu}=diag[1, -(\omega+\delta), -(\omega+\gamma),
-(\omega+\gamma)]\rho,
\end{equation}
where $\delta$ denotes the deviation from $\omega$ along the $x$ axis
while $\gamma$ denotes the deviation along the $y$ and $z$ axes. Equation
(\ref{23}) with $\delta=0=\gamma$ corresponds to the
energy-momentum tensor for isotropic fluid. The energy
conservation equation for the anisotropic fluid yields
\begin{eqnarray}\label{24}
\dot{\rho}+(1+\omega)(\frac{\dot{A}}{A}+2\frac{\dot{B}}{B})\rho(t)
+(\delta\frac{\dot{A}}{A}+2\gamma\frac{\dot{B}}{B})\rho(t)=0.
\end{eqnarray}
Decomposing the anisotropic fluid into deviation-free and
anisotropy parts, we set the anisotropy part equal to zero \cite{18,
21}
\begin{equation}\label{25}
(\delta\frac{\dot{A}}{A}+2\gamma\frac{\dot{B}}{B})\rho(t)=0.
\end{equation}
Since $\rho\neq 0$, this implies that either both the deviation
parameters $\delta(t)$ and $\gamma(t)$ vanish or
$\frac{H_{x}}{H_{y}}=-\frac{2\gamma}{\delta}$. For a more general
solution, we take dimensionless deviation parameters as follows
\cite{21}
\begin{eqnarray}\label{26}
\delta(t)=\frac{2n}{3}\frac{\dot{B}}{B}(\frac{\dot{A}}{A}
+2\frac{\dot{B}}{B})\frac{1}{\rho},\quad
\gamma(t)=-\frac{n}{3}\frac{\dot{A}}{A}(\frac{\dot{A}}{A}
+2\frac{\dot{B}}{B})\frac{1}{\rho},
\end{eqnarray}
where $n$ is a real dimensionless constant which describes the
deviation from EoS parameter.
The field equations for such fluid will be
\begin{eqnarray}\label{27}
(2m+1)\frac{\dot{B}^{2}}{B^{2}}&=&\frac{\rho}{\phi}+\frac{\omega_{0}}{2}
\frac{\dot{\phi}^{2}}{\phi^{2}}-(m+2)\frac{\dot{B}}{B}\frac{\dot{\phi}}{\phi}
+\frac{U(\phi)}{2\phi},\\\nonumber
2\frac{\ddot{B}}{B}+\frac{\dot{B}^{2}}{B^{2}}&=&-\frac{(\omega+\delta)\rho}{\phi}
-\frac{\omega_{0}}{2}\frac{\dot{\phi}^{2}}{\phi^{2}}-2\frac{\dot{B}}{B}\frac{\dot{\phi}}{\phi}
-\frac{\ddot{\phi}}{\phi}+\frac{U(\phi)}{2\phi},\\\label{28}\\\nonumber
(m+1)\frac{\ddot{B}}{B}+m^{2}\frac{\dot{B}^{2}}{B^{2}}
&=&-\frac{(\omega+\gamma)\rho}{\phi}-\frac{\omega_{0}}{2}
\frac{\dot{\phi}^{2}}{\phi^{2}}-\frac{\ddot{\phi}}{\phi}
-(m+1)\frac{\dot{B}}{B}\frac{\dot{\phi}}{\phi}\\\label{29}&+&\frac{U(\phi)}{2\phi},\\\label{30}
\ddot{\phi}+(m+2)\frac{\dot{B}}{B}\dot{\phi}&=&\frac{\rho(1-3\omega)
-\rho(2\gamma+\delta)}{(2\omega_{0}+3)}-\frac{2U(\phi)-\phi\frac{dU}{d\phi}}{(2\omega_{0}+3)}.
\end{eqnarray}
Using Eqs.(\ref{26}), (\ref{28}) and (\ref{29}) along with
$\phi=\phi_{0}B^{\alpha}$, it follows that
\begin{equation*}
\frac{\ddot{B}}{B}+(m+1+\alpha)\frac{\dot{B}^{2}}{B^{2}}
-\frac{n(m+2)^{2}\dot{B}^{2}}{3B^{(\alpha+2)}(m-1)\phi_{0}}=0.
\end{equation*}
Integrating twice, we obtain
\begin{equation*}
t+k_{4}=\int
B^{(m+1+\alpha)}e^{-(k_{3}-\frac{n(m+2)^{2}B^{-\alpha}}{3\phi_{0}\alpha(m-1)})}dB,
\end{equation*}
where $k_{3}$ and $k_4$ are integration constants. For $B=T,~
x=X,~y=Y$ and $z=Z$, the BI model turns out to be
\begin{equation}\label{31}
ds^{2}=T^{2(m+1+\alpha)}e^{-2(k_{3}-\frac{n(m+2)^{2}T^{-\alpha}}{3\phi_{0}\alpha(m-1)})}dT^{2}
-T^{2m}dX^{2}-T^{2}(dY^{2}+dZ^{2}).
\end{equation}
Some physical parameters are
\begin{eqnarray}\nonumber
V&=&T^{m+2},\quad \Delta=\frac{2(m-1)^{2}}{(m+2)^2},\\\nonumber
H_{x}&=&mH_{y}=m[2-\frac{nlT^{-\alpha}}{3\alpha\phi_{0}(m-1)}]T^{-(m+1+\alpha)},\\\nonumber
H&=&\frac{(m+2)}{3}[2-\frac{nlT^{-\alpha}}{3\alpha\phi_{0}(m-1)}]T^{-(m+1+\alpha)}\\\nonumber
\Theta&=&3H=(m+2)[2-\frac{nlT^{-\alpha}}{3\alpha\phi_{0}(m-1)}]T^{-(m+1+\alpha)},\\\nonumber
\sigma^2&=&\frac{(m-1)^2}{3}[2-\frac{2nlT^{-\alpha}}{3\alpha\phi_{0}(m-1)}]T^{-2(m+1+\alpha)},\\\nonumber
q&=&-(1-\frac{3}{(m+2)})-\frac{3}{(m+2)}(\frac{nl(m+1+2\alpha)T^{-(m+2+2\alpha)}}{3\alpha\phi_{0}(m-1)}\\\nonumber
&-&2(m+1+\alpha)T^{-(m+2+\alpha)})(2-\frac{2nlT^{-\alpha}}{3\alpha\phi_{0}(m-1)})^{-1}T^{(m+2+\alpha)}.
\end{eqnarray}
Since $\alpha,~m>0~(m\neq1)$, these parameters, except the
deceleration parameter, increase with decreasing $T$ and
approach zero as $T\rightarrow\infty$. At early times,
the volume of the universe is zero while the expansion and shear
scalars are infinite. At later times, the volume becomes
infinite while the expansion and shear scalars decrease to
zero. This indicates that the universe expands from zero volume at
an infinite rate of expansion. Since the anisotropy parameter of
expansion is constant (it vanishes for $m=1$), the model
does not isotropize at later times. In this case, the
deceleration parameter $q$ is a dynamical quantity and
can be negative for appropriate values of the constant
parameters; in particular, it becomes negative at later times for
$m>1$.
The self-interacting potential $U$ can be written from Eq.(\ref{27})
as follows
\begin{eqnarray}\nonumber
U(\phi)\approx
U(T)&=&2\phi_{0}((\alpha+2)m-\frac{\omega_{0}\alpha^{2}}{2}+1+2m)e^{(2k_{3}
-\frac{2n(m+2)^{2}}{3\phi_{0}(m-1)\alpha T^{\alpha}})}\\\label{32}
&\times&T^{-(2m+4+\alpha)}-2\rho.
\end{eqnarray}
Equations (\ref{24}) and (\ref{25}) lead to
\begin{equation}\label{33}
\omega=-1-\frac{\frac{d\rho}{dt}}{(m+2)\rho\frac{\dot{B}}{B}}.
\end{equation}
Substituting Eqs.(\ref{32}) and (\ref{33}) in (\ref{30}), we
obtain
\begin{eqnarray}\nonumber
\rho(T)&=&[\phi_{0}\alpha(m+2)((3+2\omega_{0})\alpha(\alpha+m+1)+4(1+2m)
+4\alpha(m+2)\\\nonumber
&-&2\omega_{0}\alpha^{2})-\frac{\phi_{0}\alpha^{2}(m+2)(3+2\omega_{0})}{2}
(\frac{8\alpha(m+2)}{(3\alpha-2(m+2))}+\alpha-2)]\\\nonumber
&\times&[2T^{-(\alpha+2m+4)}(8\alpha(m+2)-(\alpha+2m
+4)(3\alpha-2(m+2)))^{-1}\\\nonumber
&-&2nlT^{-(2\alpha+2m+4)}(3\alpha\phi_{0}(m-1)(8\alpha(m+2)-(\alpha+2m
+4)(3\alpha\\\nonumber
&-&2(m+2))))^{-1}]+\frac{4\alpha(m+2)^{2}n(m-1)}{3}[T^{-(4+2\alpha+2m)}(8\alpha(m+2)\\\nonumber
&-&(2\alpha+2m+4)(3\alpha-2(m+2)))^{-1}-nlT^{-(4+3\alpha+2m)}(3\alpha\phi_{0}(m-1)\\\nonumber
&\times&(8\alpha(m+2)-(3\alpha+2m
+4)(3\alpha-2(m+2))))^{-1}]+\alpha^{2}\phi_{0}\\\nonumber
&\times&\frac{(3+2\omega_{0})(m+2)}{(3\alpha-2(m+2))}
[T^{-(\alpha+2m+4)}-\frac{nlT^{-(2\alpha+2m+4)}}{3\alpha\phi_{0}(m-1)}]\\\label{34}
&+&c_{1}T^{\frac{-8\alpha}{(3\alpha-2(m+2))}},
\end{eqnarray}
where $c_{1}$ is an integration constant. Inserting this value in
Eq.(\ref{32}), one can obtain the corresponding self-interacting
potential. The skewness parameters are given by
\begin{eqnarray}\label{35}
\delta(T)&=&\frac{2n(m+2)(2-\frac{2n(m+2)^{2}T^{-\alpha}}
{3\phi_{0}\alpha(m-1)})T^{-2(m+2+\alpha)}}{3\rho},\\\label{36}
\gamma(T)&=&\frac{-nm(m+2)(2-\frac{2n(m+2)^{2}T^{-\alpha}}{
3\phi_{0}\alpha(m-1)})T^{-2(m+2+\alpha)}}{3\rho}.
\end{eqnarray}
The deviation-free EoS parameter (\ref{33}) can be written as
\begin{eqnarray}\nonumber
\omega(T)&=&-1-\frac{1}{\rho(m+2)}[[\phi_{0}\alpha(m+2)((3+
2\omega_{0})\alpha(\alpha+m+1)\\\nonumber
&+&4(1+2m)+4\alpha(m+2)-2\omega_{0}\alpha^{2})-(\alpha-2
+\frac{8\alpha(m+2)}{(3\alpha-2(m+2))}) \\\nonumber
&\times&\frac{\phi_{0}\alpha^{2}(m+2)(3+2\omega_{0})}{2}]
[-2(\alpha+2m+4)T^{-(\alpha+2m+5)}(8\alpha(m+2)\\\nonumber
&-&(\alpha+2m+4)(3\alpha-2(m+2)))^{-1}+2nl(2\alpha+2m+4)
\end{eqnarray}
\begin{eqnarray}\nonumber
&\times&T^{-(2\alpha+2m+5)}(3\alpha\phi_{0}(m-1)(8\alpha(m+2)-(\alpha+2m+4)\\\nonumber
&\times&(3\alpha-2(m+2))))^{-1}]+\frac{4\alpha(m+2)^{2}n(m-1)}{3}[-(4+2\alpha+2m)\\\nonumber
&\times&T^{-(5+2\alpha+2m)}(8\alpha(m+2)-(2\alpha+2m
+4)(3\alpha-2(m+2)))^{-1}+nl\\\nonumber
&\times&(4+3\alpha+2m)T^{-(5+3\alpha+2m)}
(3\alpha\phi_{0}(m-1)(8\alpha(m+2)-(3\alpha+2m\\\nonumber
&+&4)(3\alpha-2(m+2))))^{-1}]+\alpha^{2}\phi_{0}\frac{(3+2\omega_{0})(m+2)}{(3\alpha-2(m+2))}
[-(\alpha+2m+4)\\\nonumber
&\times&T^{-(\alpha+2m+5)}+\frac{nl(2\alpha+2m+4)T^{-(2\alpha+2m+5)}}
{3\alpha\phi_{0}(m-1)}]\\\label{37}
&-&c_{1}(\frac{8\alpha}{(3\alpha-2(m+2))})T^{-1-\frac{8\alpha}{(3\alpha-2(m+2))}}],
\end{eqnarray}
where $\rho$ is given by Eq.(\ref{34}). The anisotropy measure of
expansion of the anisotropic fluid is
\begin{equation}\label{38}
\frac{\delta-\gamma}{\omega}=\frac{n(m+2)^{2}(2-\frac{2n(m+2)^{2}T^{-\alpha}}
{3\phi_{0}\alpha(m-1)})T^{-2(m+2+\alpha)}}{3\omega(T)}.
\end{equation}
\begin{figure}
\centering \epsfig{file=fig11.eps,width=.45\linewidth}
\epsfig{file=fig12.eps,width=.45\linewidth} \caption{Plots
represent energy density $\rho$ versus time T for
$\alpha<\frac{2(m+2)}{3}$ and $\alpha>\frac{2(m+2)}{3}$
respectively. Here $\omega_{0}=1.9,~ n=2$ and $\alpha=1$.}
\end{figure}
\begin{figure}
\centering \epsfig{file=fig13.eps,width=.45\linewidth}
\epsfig{file=fig14.eps,width=.45\linewidth} \caption{The
self-interacting potential $U(T)$ versus time T for
$\alpha<\frac{2(m+2)}{3}$ and $\alpha>\frac{2(m+2)}{3}$. Here
$\omega_{0}=-1.9,~ \beta=2, n=-2,~ \alpha=1$.}
\end{figure}
\begin{figure}
\centering \epsfig{file=fig15.eps,width=.45\linewidth}
\epsfig{file=fig16.eps,width=.45\linewidth} \caption{The skewness
parameter $\delta(T)$ for $\alpha<\frac{2(m+2)}{3}$ and
$\alpha>\frac{2(m+2)}{3}$ respectively. Here $\omega_{0}=-1.9,~
\alpha=1,~n=2$.}
\end{figure}
\begin{figure}
\centering \epsfig{file=fig17.eps,width=.45\linewidth}
\epsfig{file=fig18.eps,width=.45\linewidth} \caption{The skewness
parameter $\gamma(T)$ for $\alpha<\frac{2(m+2)}{3}$ and
$\alpha>\frac{2(m+2)}{3}$ respectively.}
\end{figure}
\begin{figure}
\centering \epsfig{file=fig19.eps,width=.45\linewidth}
\epsfig{file=fig20.eps,width=.45\linewidth} \caption{The
anisotropic measure of expansion parameter
$(\delta-\gamma)/\omega$ for $\alpha<\frac{2(m+2)}{3}$ and
$\alpha>\frac{2(m+2)}{3}$ respectively.}
\end{figure}
Now we discuss the results for $\alpha<\frac{2(m+2)}{3}$ and
$\alpha>\frac{2(m+2)}{3}$. Figure \textbf{1} indicates that the
energy density is positive. At later times, it decreases and goes
to zero for $\alpha>\frac{2(m+2)}{3}$, while it increases and
approaches infinity after the big bang for $\alpha<\frac{2(m+2)}{3}$.
The self-interacting potential is positive only for
$\alpha>\frac{2(m+2)}{3}$, as shown in Figure \textbf{2}, and goes to
zero at later times. Figures \textbf{3} and \textbf{4} show that
the skewness parameters $\delta(T)$ and $\gamma(T)$ are
finite at $T=0$ and approach zero in the future evolution of the
universe in both cases. The anisotropy measure of expansion of the
anisotropic fluid goes to zero as $T\rightarrow\infty$, which shows
that the anisotropic fluid approaches isotropy in the future
evolution of the universe, as shown in Figure \textbf{5}. Notice that
all these parameters decrease more rapidly with increasing values of
the parameter $m$.
At the initial epoch with $\alpha<\frac{2(m+2)}{3}$ and
$\alpha>\frac{2(m+2)}{3}$, we obtain
$\omega=-1-\frac{(4+2m+3\alpha)}{(m+2)}$ and
$\omega=-1+\frac{8\alpha}{(m+2)(3\alpha-2(m+2))}$ respectively.
These indicate that for $m>1$ and $\alpha>0$, the universe may be
in the phantom or quintessence region. At later times with
$\alpha<\frac{2(m+2)}{3}$, we have
$\omega=-1+\frac{8\alpha}{(m+2)(3\alpha-2(m+2))}$ and for
$\alpha>\frac{2(m+2)}{3}$, it follows that
\begin{eqnarray}\nonumber
\omega&=&-1-\frac{1}{(m+2)}[[\phi_{0}\alpha(m+2)((3+
2\omega_{0})\alpha(\alpha+m+1)+4(1+2m)\\\nonumber
&+&4\alpha(m+2)-2\omega_{0}\alpha^{2})-(\alpha-2
+\frac{8\alpha(m+2)}{(3\alpha-2(m+2))})\frac{\phi_{0}\alpha^{2}(m+2)}{2}\\\nonumber
&\times&(3+2\omega_{0})]
[-2(\alpha+2m+4)]+\alpha^{2}\phi_{0}\frac{(3+2\omega_{0})(m+2)}{(3\alpha-2(m+2))}
[-(\alpha+2m\\\nonumber
&+&4)][[\phi_{0}\alpha(m+2)((3+2\omega_{0})\alpha(\alpha+m+1)+4(1+2m)
+4\alpha(m+2)\\\nonumber
&-&2\omega_{0}\alpha^{2})-\frac{\phi_{0}\alpha^{2}(m+2)(3+2\omega_{0})}{2}
(\frac{8\alpha(m+2)}{(3\alpha-2(m+2))}+\alpha-2)][2\\\nonumber
&\times&(8\alpha(m+2)-(\alpha+2m
+4)(3\alpha-2(m+2)))^{-1}]\\\nonumber
&+&\alpha^{2}\phi_{0}\frac{(3+2\omega_{0})(m+2)}{(3\alpha-2(m+2))}]^{-1}.
\end{eqnarray}
This also shows that the universe will be in the quintessence or
phantom region at later times, depending on the value of the BD
parameter. Thus the model represents accelerated expansion of the
universe.
\section{Magnetized Anisotropic Fluid Model}
In this section, we explore solutions of the field equations for a
magnetized anisotropic fluid. We take an anisotropic fluid with a
magnetic field along the $z$ axis and assume that there is no electric
field. In this case, the scale factor $A(t)$ is perpendicular to the
magnetic field while $B(t)$ is along the field lines. The
energy-momentum tensor of the magnetized anisotropic fluid is
\begin{equation}\label{40}
T^{\nu}_{\mu}=diag[\rho+\rho_{B}, -p_{x}+\rho_{B},
-p_{y}-\rho_{B}, -p_{z}-\rho_{B}],
\end{equation}
where $\rho_{B}$ represents energy density of the magnetic field.
Using the EoS for the pressures in the $x,~y$ and $z$ directions as in
the anisotropic fluid case, Eq.(\ref{40}) can be written as
\begin{equation}\label{41}
T^{\nu}_{\mu}=diag[\rho+\rho_{B}, -(\omega+\delta)\rho+\rho_{B},
-(\omega+\gamma)\rho-\rho_{B}, -(\omega+\gamma)\rho-\rho_{B}],
\end{equation}
where $\delta$ and $\gamma$ are given by Eq.(\ref{26}). For
$\delta=0=\gamma$, Eq.(\ref{41}) corresponds to the
energy-momentum tensor for the magnetized isotropic fluid while it
reduces to the anisotropic fluid for $\rho_{B}=0$. For
$\delta=0=\gamma$ and $\rho_{B}=0$, it represents the isotropic
fluid.
The field equations (\ref{2}) and (\ref{3}) for the model
(\ref{4}) and the energy-momentum tensor (\ref{41}) become
\begin{eqnarray}\label{42}
(2m+1)\frac{\dot{B}^{2}}{B^{2}}&=&\frac{\rho+\rho_{B}}{\phi}+\frac{\omega_{0}}{2}
\frac{\dot{\phi}^{2}}{\phi^{2}}-(m+2)\frac{\dot{B}}{B}\frac{\dot{\phi}}{\phi}
+\frac{U(\phi)}{2\phi},\\\nonumber
2\frac{\ddot{B}}{B}+\frac{\dot{B}^{2}}{B^{2}}&=&-\frac{(\omega+\delta)\rho-\rho_{B}}{\phi}
-\frac{\omega_{0}}{2}
\frac{\dot{\phi}^{2}}{\phi^{2}}-2\frac{\dot{B}}{B}\frac{\dot{\phi}}{\phi}
-\frac{\ddot{\phi}}{\phi}\\\label{43}
&+&\frac{U(\phi)}{2\phi},\\\nonumber
(m+1)\frac{\ddot{B}}{B}+m^{2}\frac{\dot{B}^{2}}{B^{2}}
&=&-\frac{(\omega+\gamma)\rho+\rho_{B}}{\phi}-\frac{\omega_{0}}{2}
\frac{\dot{\phi}^{2}}{\phi^{2}}-\frac{\ddot{\phi}}{\phi}\\\label{44}
&-&(m+1)\frac{\dot{B}}{B}\frac{\dot{\phi}}{\phi}+\frac{U(\phi)}{2\phi},\\\label{45}
\ddot{\phi}+(m+2)\frac{\dot{B}}{B}\dot{\phi}&=&\frac{\rho(1-3\omega)
-\rho(\delta+2\gamma)}{(2\omega_{0}+3)}
-\frac{2U(\phi)-\phi\frac{dU}{d\phi}}{(2\omega_{0}+3)},
\end{eqnarray}
where we have used the condition $A=B^{m}$.
The energy conservation equation for the magnetized anisotropic
fluid yields $\rho_{B}=\frac{\beta}{B^{4}}$ along with
Eq.(\ref{24}). Here $\beta>0$ is an integration constant.
Subtraction of Eq.(\ref{43}) from (\ref{44}) leads to
\begin{equation}\label{46}
2\frac{\ddot{B}}{B}+2(m+1+\alpha)(\frac{\dot{B}^{2}}{B^{2}})
-\frac{2n(m+2)^{2}}{3\phi_{0}(m-1)B^{\alpha}}(\frac{\dot{B}^{2}}{B^{2}})
=-\frac{4\beta}{\phi_{0}(m-1)B^{\alpha+4}}.
\end{equation}
Taking $\dot{B}=f(B)$, this turns out to be
\begin{equation}\label{47}
\frac{df^{*}}{dB}+\frac{2}{B}[(m+1+\alpha)-\frac{nl}{3\phi_{0}(m-1)B^{\alpha}}]f^{*}
=\frac{-4\beta}{\phi_{0}(m-1)B^{\alpha+3}},
\end{equation}
where $f^{*}=f^{2}$ and $l=(m+2)^{2}$ is a positive constant. This
is the first-order linear non-homogeneous differential equation
with variable coefficients whose integrating factor is
$B^{2(m+1+\alpha)}e^{\frac{2nlB^{-\alpha}}{3\phi_{0}\alpha(m-1)}}$.
After some manipulation, the solution becomes
\begin{eqnarray}\nonumber
f^{2}=\dot{B}^{2}&=&\frac{-4\beta
B^{-(\alpha+2)}}{\phi_{0}(\alpha+2m)(m-1)}-\frac{4nl\beta
B^{-2(\alpha+1)}}{3m(m-1)^{2}\phi_{0}^{2}(\alpha+2m)}\\\label{48}
&\times&(1-\frac{2nlB^{-\alpha}}{3\phi_{0}(m-1)\alpha})+c_{2}B^{-2(m+1+\alpha)}
(1-\frac{2nlB^{-\alpha}}{3\phi_{0}(m-1)\alpha}),
\end{eqnarray}
where $c_{2}$ is an integration constant. This can also be written
as
\begin{eqnarray}\nonumber
dt&=&\int[\frac{-4\beta
B^{-(\alpha+2)}}{\phi_{0}(\alpha+2m)(m-1)}-\frac{4nl\beta
B^{-2(\alpha+1)}}{3m(m-1)^{2}\phi_{0}^{2}(\alpha+2m)}\\\nonumber
&\times&(1-\frac{2nlB^{-\alpha}}{3\phi_{0}(m-1)\alpha})+c_{2}B^{-2(m+1+\alpha)}
(1-\frac{2nlB^{-\alpha}}{3\phi_{0}(m-1)\alpha})]^{-1/2}dB.
\end{eqnarray}
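The integrating factor quoted above is easy to confirm symbolically; the following sketch (our addition) checks that its logarithmic derivative equals the coefficient of $f^{*}$ in Eq.(\ref{47}):
\begin{verbatim}
import sympy as sp

B, m, alpha, n, phi0 = sp.symbols('B m alpha n phi_0', positive=True)
l = (m + 2)**2
mu = B**(2*(m + 1 + alpha)) \
     * sp.exp(2*n*l*B**(-alpha)/(3*phi0*alpha*(m - 1)))
coeff = (2/B)*((m + 1 + alpha) - n*l/(3*phi0*(m - 1)*B**alpha))
print(sp.simplify(sp.diff(mu, B)/mu - coeff))   # -> 0
\end{verbatim}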
By taking $B=T,~x=X,~y=Y$ and $z=Z$ and using Eq.(\ref{48}), the BI
spacetime turns out to be
\begin{eqnarray}\nonumber
ds^{2}&=&[\frac{-4\beta
T^{-(\alpha+2)}}{\phi_{0}(\alpha+2m)(m-1)}-\frac{4nl\beta
T^{-2(\alpha+1)}}{3m(m-1)^{2}\phi_{0}^{2}(\alpha+2m)}\\\nonumber
&\times&(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})+c_{2}T^{-2(m+1+\alpha)}
(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})]^{-1}dT^{2}\\\label{49}
&-&T^{2m}dX^{2}-T^{2}(dY^{2}+dZ^{2}).
\end{eqnarray}
Now we discuss some physical features of this model. Since the scale
factors vanish at $T=0$, the model exhibits a point-type
singularity \cite{18, 22}. The corresponding mean and directional
Hubble parameters are
\begin{eqnarray}\nonumber
H_{x}=mH_{y}&=&m[\frac{-4\beta
T^{-(\alpha+4)}}{\phi_{0}(\alpha+2m)(m-1)}-\frac{4nl\beta
T^{-2(\alpha+2)}}{3m(m-1)^{2}\phi_{0}^{2}(\alpha+2m)}\\\nonumber
&\times&(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})+c_{2}T^{-2(m+2+\alpha)}
(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})]^{1/2},\\\label{50}
\\\nonumber
H&=&\frac{(m+2)}{3}[\frac{-4\beta
T^{-(\alpha+4)}}{\phi_{0}(\alpha+2m)(m-1)}-\frac{4nl\beta
T^{-2(\alpha+2)}}{3m(m-1)^{2}\phi_{0}^{2}(\alpha+2m)}\\\nonumber
&\times&(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})+c_{2}T^{-2(m+2+\alpha)}
(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})]^{1/2}.\\\label{51}
\end{eqnarray}
Since $\alpha,~m>0~(m\neq1)$, these parameters increase with
decreasing $T$ and approach zero as $T\rightarrow\infty$. Moreover,
they take infinitely large values at $T=0$. The
remaining parameters are given by
\begin{eqnarray}\nonumber
\Theta&=&3H=(m+2)[\frac{-4\beta
T^{-(\alpha+4)}}{\phi_{0}(\alpha+2m)(m-1)}-\frac{4nl\beta
T^{-2(\alpha+2)}}{3m(m-1)^{2}\phi_{0}^{2}(\alpha+2m)}\\\label{53}
&\times&(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})+c_{2}T^{-2(m+2+\alpha)}
(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})]^{1/2},\\\nonumber
\sigma^2&=&\frac{(m-1)^2}{3}[\frac{-4\beta
T^{-(\alpha+4)}}{\phi_{0}(\alpha+2m)(m-1)}-\frac{4nl\beta
T^{-2(\alpha+2)}}{3m(m-1)^{2}\phi_{0}^{2}(\alpha+2m)}\\\label{54}
&\times&(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})+c_{2}T^{-2(m+2+\alpha)}
(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})],\\\nonumber
q&=&-(1-\frac{3}{(m+2)})-\frac{3}{2(m+2)}[\frac{-4\beta
T^{-(\alpha+4)}}{\phi_{0}(\alpha+2m)(m-1)}-\frac{4nl\beta
}{3m(m-1)^{2}}\\\nonumber
&\times&\frac{T^{-2(\alpha+2)}}{\phi_{0}^{2}(\alpha+2m)}(1-\frac{2nlT^{-\alpha}}
{3\phi_{0}(m-1)\alpha})+c_{2}T^{-2(m+2+\alpha)}
(1-\frac{2nl\phi_{0}^{-1}}{3(m-1)\alpha}\\\nonumber&\times&T^{-\alpha})]^{-1/2}(\frac{-4\beta
B^{-(\alpha+2)}}{\phi_{0}(\alpha+2m)(m-1)}-\frac{4nl\beta
B^{-2(\alpha+1)}}{3m(m-1)^{2}\phi_{0}^{2}(\alpha+2m)}\\\nonumber
&\times&(1-\frac{2nlB^{-\alpha}}{3\phi_{0}(m-1)\alpha})+c_{2}B^{-2(m+1+\alpha)}
(1-\frac{2nlB^{-\alpha}}{3\phi_{0}(m-1)\alpha}))^{-1/2}
\end{eqnarray}
\begin{eqnarray}\nonumber
&\times&(\frac{4(\alpha+2)\beta
B^{-(\alpha+3)}}{\phi_{0}(\alpha+2m)(m-1)}+\frac{8(\alpha+1)nl\beta
B^{-(2\alpha+3)}}{3m(m-1)^{2}\phi_{0}^{2}(\alpha+2m)}\\\nonumber
&\times&(1-\frac{2nlB^{-\alpha}}{3\phi_{0}(m-1)\alpha})-\frac{4nl\beta
B^{-2(\alpha+1)}}{3m(m-1)^{2}\phi_{0}^{2}(\alpha+2m)}(\frac{2nl\alpha
B^{-(\alpha+1)}}{3\phi_{0}(m-1)\alpha})\\\nonumber
&-&2c_{2}(m+1+\alpha)B^{-(2m+3+2\alpha)}
(1-\frac{2nlB^{-\alpha}}{3\phi_{0}(m-1)\alpha})\\\nonumber
&+&c_{2}B^{-2(m+1+\alpha)}
(\frac{2nl\alpha B^{-(\alpha+1)}}{3\phi_{0}(m-1)\alpha})).
\end{eqnarray}
In this case, the volume of the universe and the anisotropy parameter
of expansion turn out to be the same as in the anisotropic case. At
the initial time, the expansion and shear scalars become infinite,
while at later times they decrease to zero. The deceleration parameter
turns out to be a dynamical quantity. It can be negative for
appropriate values of the constant parameters, e.g., it becomes
negative at later times with $m>1$. Notice that the expansion
scalar, shear scalar and Hubble parameters are decreased by the
contribution of the magnetic field.
We solve Eqs.(\ref{42}) and (\ref{45}) simultaneously to obtain
density $\rho$ and the self-interacting potential $U(\phi)\approx
U(T)$. The density is
\begin{eqnarray}\nonumber
\rho(T)&=&\phi_{0}(1+2m+\alpha(m+2)
-\frac{\omega_{0}\alpha^{2}}{2})[\frac{-4\beta
T^{-4}}{(\alpha+2m)(m-1)\phi_{0}}\\\nonumber&-&\frac{4nl\beta
}{3m(m-1)^{2}}\frac{T^{-(\alpha+4)}}{(\alpha+2m)\phi_{0}^{2}}(1
-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})\\\label{55}&+&c_{2}T^{-(4+2m+\alpha)}(1
-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})]-\frac{\beta}{T^{4}}-\frac{U(T)}{2},
\end{eqnarray}
where $U(T)$ is
\begin{eqnarray}\nonumber
U(T)&=&[((2\omega_{0}+3)(\alpha+m+1)\alpha^{2}-4\alpha(1+2m+\alpha(m+2)
-\frac{\omega_{0}\alpha^{2}}{2}))2(m\\\nonumber
&+&2)+\alpha^{2}(m+2)(3+2\omega_{0})(\frac{8\alpha(m+2)}
{2(m+2)-3\alpha}-(\alpha-2))-6\alpha(1+2m \\\nonumber
&+&\alpha(m+2)
-\frac{\omega_{0}\alpha^{2}}{2})\frac{8\alpha(m+2)}{2(m+2)-3\alpha}][4\beta
T^{-4}[(\alpha+2m)(m-1)\phi_{0}(8\alpha\\\nonumber
&\times&(m+2)+4(2(m+2)-3\alpha))]^{-1}-\frac{4\beta
nl}{3m(\alpha+2m)(m-1)^{2}\phi_{0}^{2}}
\end{eqnarray}
\begin{eqnarray}\nonumber
&\times&(-T^{-(\alpha+4)}[(8\alpha(m+2)+(\alpha+
4)(2(m+2)-3\alpha))]^{-1}+\frac{2}{3}nl\phi_{0}^{-1}(m\\\nonumber
&-&1)^{-1} T^{-(2\alpha+4)}(8\alpha^{2}(m+2)+2\alpha(\alpha+
2)(2(m+2)-3\alpha))^{-1}-c_{2}(8\alpha\\\nonumber
&\times&(m+2)-(\alpha+
2m+4)(2(m+2)-3\alpha))^{-1}T^{-(4+2m+\alpha)}+\frac{2}{3
}nl\phi_{0}^{-1}\\\nonumber
&\times&(m-1)^{-1}\alpha
T^{-(2\alpha+2m+4)}(8\alpha(m+2)+2(\alpha+m+2)(2(m+2)\\\nonumber
&-&3\alpha))^{-1}+\frac{\alpha^{2}(2\omega_{0}+3)}{(2(m+2)-3\alpha)}[\frac{-4(m+2)\beta
T^{-4}}{(\alpha+2m)(m-1)\phi_{0}}-\frac{4(m+2)}{3m(m-1)^{2}}
\\\nonumber
&\times&\frac{\beta
nl}{(\alpha+2m)}(T^{-(\alpha+4)}-\frac{2nlT^{-2(\alpha
+2)\phi_{0}^{-1}}}{3\alpha(m-1)})
+c_{2}(m+2)(T^{-(\alpha+2m+4)}\\\nonumber
&-&\frac{2nl\phi_{0}^{-1}}{3\alpha(m-1)}T^{-(2\alpha+2m+4)}]
+\frac{4n(m+2)^{2}(1-m)\alpha}{3} [\frac{4\beta
T^{-(\alpha+4)}\phi_{0}^{-1}}{(\alpha+2m)(m-1)}\\\nonumber
&\times&(8\alpha(m+2)+(\alpha+4) (2(m+2)-3\alpha))^{-1}
-\frac{4\beta nl\phi_{0}^{-2}}{3m(\alpha+2m)(m-1)^{2}}\\\nonumber
&\times&(-T^{-(2\alpha+4)}(8\alpha(m+2)+(2\alpha
+4)(2(m+2)-3\alpha))^{-1}\\\nonumber
&+&\frac{2nlT^{-(3\alpha+4)}}{3\phi_{0}(m-1)\alpha}(8\alpha(m+2)
+(3\alpha+4)(2(m+2)-3\alpha))^{-1}\\\nonumber
&+&c_{2}(-T^{-(4+2m+2\alpha)}(8\alpha(m+2)+(2\alpha
+2m+4)(2(m+2)-3\alpha))^{-1}\\\nonumber
&+&\frac{2nlT^{-(3\alpha+2m+4)}}
{3\phi_{0}(m-1)\alpha}(8\alpha(m+2)+(3\alpha+2m+
4)(2(m+2)-3\alpha))^{-1})\\\nonumber
&-&8\alpha\beta(m+2)T^{-4}(8\alpha(m+2)+4(2(m+2)-3\alpha))^{-1}
+6\alpha(2(m+2)\\\nonumber
&-&3\alpha)^{-1}[\beta
T^{-4}-8\beta\alpha(m+2)T^{-4}(8\alpha(m+2)+4(2(m+2)-3\alpha))^{-1}]\\\nonumber
&-&\frac{6\alpha}{(2(m+2)-3\alpha)}(1+2m+\alpha(m+2)
-\frac{\omega_{0}\alpha^{2}}{2})[\frac{-4\beta
T^{-(\alpha+3)}}{(\alpha+2m)(m-1)}\\\nonumber
&-&\frac{4\beta
nl}{3m(m-1)^{2}(\alpha+2m)}(T^{-(2\alpha+3)}-\frac{2nlT^{-3(\alpha+1)}}{3\phi_{0}(m-1)\alpha})
+c_{2}(T^{-(2\alpha+2m+3)}\\\label{56}
&-&\frac{2nlT^{-(3\alpha+2m+3)}}{3\alpha(m-1)\phi_{0}})]
+c_{3}T^{\frac{8\alpha(m+2)}{2(m+2)-3\alpha}},
\end{eqnarray}
where $c_{3}$ is an integration constant. Figure \textbf{6}
indicates that the energy density is positive in both cases and
infinite at the initial epoch. For $\alpha<\frac{2(m+2)}{3}$, it
decreases after the big bang but increases and approaches infinity
at later times, while for any positive value of the parameters
satisfying $\alpha>\frac{2(m+2)}{3}$, it decreases to zero. Figure
\textbf{7} indicates that the self-interacting potential remains
positive in both cases ($\alpha<\frac{2(m+2)}{3}$ and
$\alpha>\frac{2(m+2)}{3}$). The corresponding skewness parameters
turn out to be
\begin{eqnarray}\nonumber
\delta(T)&=&\frac{2n(m+2)}{3\rho}[\frac{-4\beta
T^{-(\alpha+4)}}{(\alpha+2m)(m-1)\phi_{0}}-\frac{4nl\beta
T^{-2(\alpha+2)}}{3m(m-1)^{2}(\alpha+2m)\phi_{0}^{2}}\\\label{57}
&\times&(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})+c_{2}T^{-2(2+m+\alpha)}(1
-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})], \\\nonumber
\gamma(T)&=&\frac{-nm(m+2)}{3\rho}[\frac{-4\beta
T^{-(\alpha+4)}}{(\alpha+2m)(m-1)\phi_{0}}-\frac{4nl\beta
T^{-2(\alpha+2)}}{3m(m-1)^{2}(\alpha+2m)\phi_{0}^{2}}
\\\label{58}&\times&(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})+c_{2}T^{-2(2+m+\alpha)}(1
-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})],
\end{eqnarray}
where $\rho$ is given by Eq.(\ref{55}). Figures \textbf{8} and
\textbf{9} indicate that the deviation parameters are finite at
$T=0$. At later times, these parameters converge to zero in both
cases. From Eqs.(\ref{24}) and (\ref{25}), the deviation-free EoS
parameter $\omega$ can be written as
\begin{equation}\label{59}
\omega(T)=-1-\frac{B}{\rho(m+2)}\frac{d\rho}{dB}.
\end{equation}
The anisotropy measure of anisotropic fluid,
$\frac{\delta-\gamma}{\omega}$, for the model (\ref{49}) takes the
form
\begin{eqnarray}\nonumber
\frac{\delta-\gamma}{\omega}&=&\frac{n(m+2)^{2}}{3\omega(T)}[\frac{-4\beta
T^{-(\alpha+4)}}{(\alpha+2m)(m-1)\phi_{0}}-\frac{4nl\beta
T^{-2(\alpha+2)}}{3m(m-1)^{2}(\alpha+2m)\phi_{0}^{2}}\\\label{60}
&\times&(1-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})+c_{2}T^{-2(2+m+\alpha)}(1
-\frac{2nlT^{-\alpha}}{3\phi_{0}(m-1)\alpha})].
\end{eqnarray}
Its behavior is shown in Figure \textbf{10}.
\begin{figure}
\centering \epsfig{file=fig1.eps,width=.45\linewidth}
\epsfig{file=fig2.eps,width=.45\linewidth} \caption{Plots
represent the energy density $\rho(T)$ versus time T for
$\alpha<\frac{2(m+2)}{3}$ and $\alpha>\frac{2(m+2)}{3}$
respectively. Here $\omega_{0}=-1.9,~ \beta=2,~n=-2$. Green, red
and blue lines show the graphs for $m=2,3,4$ respectively.}
\end{figure}
\begin{figure}
\centering \epsfig{file=fig3.eps,width=.45\linewidth}
\epsfig{file=fig4.eps,width=.45\linewidth} \caption{Plots show the
self-interacting potential $U(T)$ versus time T for
$\alpha<\frac{2(m+2)}{3}$ and $\alpha>\frac{2(m+2)}{3}$
respectively.}
\end{figure}
\begin{figure}
\centering \epsfig{file=fig5.eps,width=.45\linewidth}
\epsfig{file=fig6.eps,width=.45\linewidth} \caption{The deviation
parameter $\delta(T)$ versus time T for $\alpha<\frac{2(m+2)}{3}$
and $\alpha>\frac{2(m+2)}{3}$ respectively. Here
$\omega_{0}=-1.9,~ \beta=2$ and $n=-2$.}
\end{figure}
At the initial epoch, this quantity is finite, while it goes to zero
in the future evolution of the universe in both cases. This indicates
that the anisotropic fluid approaches isotropy at later times.
When $T\longrightarrow 0$ and $\alpha<\frac{2(m+2)}{3}$, we obtain
$\omega=-1-\frac{(4+2m+3\alpha)}{(m+2)}$, which shows that the
universe model will be in the phantom region at the initial epoch. For
$\alpha>\frac{2(m+2)}{3}$, we obtain
$\omega=-1+\frac{8\alpha}{(m+2)(3\alpha-2(m+2))}$, which shows that
the universe model will be in the quintessence region at the initial
epoch. At later times with $\alpha<\frac{2(m+2)}{3}$, it follows
that $\omega=-1+\frac{8\alpha}{(m+2)(3\alpha-2(m+2))}$, showing
that the universe may be in the quintessence region. When
$\alpha>\frac{2(m+2)}{3}$, the EoS parameter depends on the
magnetic field contribution and the BD parameter, indicating that the
universe will be in the phantom or quintessence region for appropriate
values of the constant parameters. Thus, in each case, the model
shows accelerated expansion of the universe.
\begin{figure}
\centering \epsfig{file=fig7.eps,width=.45\linewidth}
\epsfig{file=fig8.eps,width=.45\linewidth} \caption{The deviation
parameter $\gamma(T)$ versus time T for $\alpha<\frac{2(m+2)}{3}$
and $\alpha>\frac{2(m+2)}{3}$ respectively.}
\end{figure}
\begin{figure} \centering
\epsfig{file=fig9.eps,width=.45\linewidth}
\epsfig{file=fig10.eps,width=.45\linewidth} \caption{Anisotropic
measure of expansion $\frac{\delta-\gamma}{\omega}$ versus time T
is shown for $\alpha<\frac{2(m+2)}{3}$ and
$\alpha>\frac{2(m+2)}{3}$ respectively.}
\end{figure}
Now we investigate a special case when $m=1$. The scale factors
become $A(t)=B(t)=a(t)$ and the model turns out to be the FRW
universe model
\begin{equation*}
ds^{2}=dt^{2}-a(t)^{2}(dx^{2}+dy^{2}+dz^{2}).
\end{equation*}
Equation (\ref{46}), after multiplying through by $(m-1)$ and then setting $m=1$, yields
\begin{equation}\label{61}
a(t)=\sqrt{2}(\sqrt{\frac{2\beta}{3n}}t+c_{4})^{1/2},
\end{equation}
where $c_{4}$ is an integration constant. The expansion scalar
$\Theta$ turns out to be
\begin{equation}\nonumber
\Theta=3H=3\left(\frac{\dot{a}}{a}\right)
=\frac{3}{2}\sqrt{\frac{2\beta}{3n}}\left(\sqrt{\frac{2\beta}{3n}}t+c_{4}\right)^{-1}.
\end{equation}
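Since such prefactors are easy to get wrong, the factor $3/2$ can be confirmed with a one-line symbolic computation (our addition), writing $k\equiv\sqrt{2\beta/3n}$:
\begin{verbatim}
import sympy as sp

t, k, c4 = sp.symbols('t k c_4', positive=True)  # k = sqrt(2*beta/(3*n))
a = sp.sqrt(2)*sp.sqrt(k*t + c4)                 # Eq. (61)
print(sp.simplify(3*a.diff(t)/a))                # -> 3*k/(2*(c_4 + k*t))
\end{verbatim}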
This shows that the Hubble parameter and the expansion scalar are
finite at the initial epoch. As time increases, both of these
parameters decrease, indicating that the universe expands most
rapidly at early times. From Eqs.(\ref{42}) and (\ref{45}), the energy density
$\rho(t)$ and the self-interacting potential $U(t)$ are
\begin{eqnarray}\nonumber
\rho(t)&=&S_{1}[\sqrt{\frac{2\beta}{3n}}t+c_{4}]^{\frac{\alpha-2}{2}}
+S_{2}[\sqrt{\frac{2\beta}{3n}}t+c_{4}]^{\frac{\alpha-4}{2}}
+c_{5}[2(\sqrt{\frac{2\beta}{3n}}t\\\nonumber
&+&c_{4})]^{\frac{4\alpha}{(2-\alpha)}}
-\frac{\beta}{4}[\sqrt{\frac{2\beta}{3n}}t+c_{4}]^{-2},\\\nonumber
U(t)&=&-[2\phi_{0}\alpha((3+2\omega_{0})\alpha(\alpha+2)
-12(1+\alpha)+2\omega_{0}\alpha^{2})+\frac{\alpha^{2}
(3+2\omega_{0})}{(2-\alpha)}\\\nonumber
&\times&\phi_{0}(\alpha+2)^{2}-\frac{16\phi_{0}\alpha^{2}}{(2-\alpha)}(3(1+\alpha)
-\frac{\omega_{0}\alpha^{2}}{2})]
[\sqrt{\frac{2\beta}{3n}}t+c_{4}]^{\frac{\alpha-2}{2}}(\frac{2\beta}{3n})^{3/2}\\\nonumber
&\times&(\alpha+2)^{-2}+[\frac{2^{(\alpha-2)/2}\phi_{0}\alpha^{2}(3+2\omega_{0})\beta}{3n(2-\alpha)}
-\frac{4\alpha\phi_{0}\beta
2^{\alpha/2}}{3n(2-\alpha)}(3(1+\alpha)\\\nonumber
&-&\frac{\omega_{0}\alpha^{2}}{2})]
[\sqrt{\frac{2\beta}{3n}}t+c_{4}]^{\frac{\alpha-4}{2}}+c_{5}[2(\sqrt{\frac{2\beta}{3n}}t
+c_{4})]^{\frac{4\alpha}{(2-\alpha)}}.
\end{eqnarray}
Here $c_{5}$ is an integration constant and the constants $S_{1}$
and $S_{2}$ are given by
\begin{eqnarray}\nonumber
S_{1}&=&-(1/2)[2\phi_{0}\alpha((3+2\omega_{0})\alpha(\alpha+2)-
12(1+\alpha)+2\omega_{0}\alpha^{2})\\\nonumber &+&\frac{\alpha^{2}
(3+2\omega_{0})\phi_{0}}{(2-\alpha)}(\alpha+2)^{2}
-\frac{16\phi_{0}\alpha^{2}}{(2-\alpha)}(3(1+\alpha)-\frac{\omega_{0}\alpha^{2}}{2})]\\\nonumber
&\times&(\frac{2\beta}{3n})^{3/2}(\alpha+2)^{-2},\\\nonumber
S_{2}&=&[\frac{2^{(\alpha-2)/2}\phi_{0}\alpha^{2}(3+2\omega_{0})\beta}{6n(2-\alpha)}
-\frac{4\alpha\phi_{0}\beta2^{\alpha/2}}{6n(2-\alpha)}
(3(1+\alpha)-\frac{\omega_{0}\alpha^{2}}{2})\\\nonumber
&+&\phi_{0}(3(1+\alpha)-\frac{\omega_{0}\alpha^{2}}{2})\frac{2^{(\alpha-2)/2}\beta}{3n}].
\end{eqnarray}
Some other parameters are
\begin{eqnarray}\nonumber
\delta(t)&=&\frac{\beta}{3\rho}(\sqrt{\frac{2\beta}{3n}}t+c_{4})^{-2},\quad
\gamma(t)=\frac{-\beta}{6\rho}(\sqrt{\frac{2\beta}{3n}}t+c_{4})^{-2},
\end{eqnarray}
\begin{eqnarray}\nonumber
\omega(t)&=&-1-[\frac{S_{1}(\alpha-2)}{2}(\sqrt{\frac{2\beta}{3n}}t+c_{4})^{(\alpha-4)/2}
+\frac{S_{2}(\alpha-4)}{2}(\sqrt{\frac{2\beta}{3n}}t\\\nonumber
&+&c_{4})^{(\alpha-6)/2}+c_{5}2^{4\alpha/(2-\alpha)}(\frac{4\alpha}{(2-\alpha)}
\sqrt{\frac{2\beta}{3n}}t+c_{4})^{(5\alpha-2)/(2-\alpha)}\\\nonumber
&+&\frac{\beta}{2}(\sqrt{\frac{2\beta}{3n}}t+c_{4})^{-3}]
[3S_{1}[\sqrt{\frac{2\beta}{3n}}t+c_{4}]^{\frac{\alpha-4}{2}}+3S_{2}[\sqrt{\frac{2\beta}{3n}}
t+c_{4}]^{\frac{\alpha-6}{2}}\\\nonumber
&+&3c_{5}2^{4\alpha/(2-\alpha)}[2(\sqrt{\frac{2\beta}{3n}}t
+c_{4})]^{\frac{(5\alpha-2)}{(2-\alpha)}}
-\frac{\beta}{4}[\sqrt{\frac{2\beta}{3n}}t+c_{4}]^{-3}]^{-1},\\\nonumber
\frac{\delta-\gamma}{\omega}&=&\frac{\beta}{2\rho\,\omega(t)}(\sqrt{\frac{2\beta}{3n}}t+c_{4})^{-2},
\end{eqnarray}
where $\alpha\neq2$. We discuss two cases: $0<\alpha<2$ and
$\alpha>2$. Clearly, the energy density is constant at the initial
epoch and approaches infinity at later times in both cases.
The anisotropy parameters are constant at the initial epoch and go to
zero at later times. Likewise, the anisotropy measure of expansion
of the anisotropic fluid, $\frac{\delta-\gamma}{\omega}$, goes to
zero at later times, so the fluid approaches isotropy. The anisotropy
parameter of expansion vanishes since $m=1$. At the initial epoch,
the deviation-free EoS parameter shows that the universe may be in
the quintessence region for appropriate values of the constants in
both cases. At later times with $0<\alpha<2$, we obtain
$\omega(t)=-1-\frac{4\alpha}{3(2-\alpha)}$, which indicates that
the universe will be in the phantom region. For $\alpha>2$, it follows
that $\omega(t)=-1-\frac{(\alpha-2)}{6}$, which also shows that
the universe will be in the phantom region in the future evolution.
\section{Summary and Discussion}
In this paper, we have constructed the BI universe models in BD
theory of gravity with perfect, anisotropic and magnetized
anisotropic fluids. We have constructed exact solutions in each
case. For the anisotropic and magnetized anisotropic fluid models, the
physical behavior of the energy density, self-interacting
potential, skewness parameters and anisotropy measure of
expansion of the anisotropic fluid has been plotted for non-zero
values of $n$ with $\alpha<\frac{2(m+2)}{3}$ and
$\alpha>\frac{2(m+2)}{3}$. The results are summarized as follows.
\begin{itemize}
\item In the case of anisotropic as well as magnetized anisotropic fluids,
the skewness parameters and the anisotropy measure of expansion of
the anisotropic fluid go to zero, indicating isotropic behavior of
the fluid in the future evolution of the universe. This result
coincides with those already available in the literature for the Bianchi
type III model in $f(R)$ theory \cite{11} and the Bianchi type
$(VI)_{0}$ model in GR \cite{23}.
\item In each case, the energy density remains positive. All the figures
indicate that the energy density increases after the big bang and
approaches infinity at later times with $\alpha<\frac{2(m+2)}{3}$ for
both the anisotropic and magnetized anisotropic fluids. For
$\alpha>\frac{2(m+2)}{3}$, it decreases and goes to zero in both
cases.
\item For the anisotropic fluid, the self-interacting potential
is positive only for $\alpha>\frac{2(m+2)}{3}$ and decreases to zero
at later times, while for the magnetized anisotropic fluid, it remains
positive in both cases.
\item All the physical parameters $H,~H_{x},~H_{y},~\Theta$ and
$\sigma$ increase with decreasing $T$ and go to zero as
$T\rightarrow\infty$. These parameters take infinitely large values
at $T=0$. In contrast to the perfect fluid, the deceleration
parameter for anisotropic fluids is a dynamical quantity and can be
negative for an appropriate choice of the constant parameters, in
particular at later times with $m>1$. This corresponds to
accelerated expansion of the universe.
\item In the magnetized anisotropic fluid, all the physical parameters
are reduced by the contribution of the magnetic field with $n>0$.
\item The deviation-free EoS parameters indicate that the universe may
be in the quintessence or phantom region at the initial epoch as well
as at later times for appropriate values of the constant
parameters in all cases. Thus the models represent accelerated
expansion of the universe.
\item The anisotropy parameter of expansion is constant (it vanishes for
$m=1$), indicating that the model does not isotropize at later times in
all cases.
\item A special case, $m=1$, of the magnetized
anisotropic fluid has also been discussed, which yields the FRW
universe model. In this case, the deviation-free EoS parameter
indicates that at the initial epoch, the universe may be in the
quintessence region, while in the future evolution, it will be in the
phantom region.
\end{itemize}
It would be interesting to construct exact solutions in the
presence of anisotropic fluid for other Bianchi models in BD
theory.
\vspace{0.25cm}
\subsection{Evolution of team performances}
\label{sec:results-team-performances}
Figure~\ref{fig:teamgrowth} shows the evolution of the game ratings for Manchester City, Real Madrid, and Barcelona computed as a 15-game moving average since the start of the 2016/2017 season. We compute a team's game rating by summing the values for all the team's actions, which corresponds to summing the ratings for all the team's players in a particular game. The average game rating for Manchester City has been steadily increasing since the end of the 2016/2017 season, which was their first under the management of Pep Guardiola. Manchester City seem unbeatable and topped the Premier League table with 43 points from a possible 45 in their opening 15 games of the 2017/2018 season.
In contrast, Real Madrid had a poor start to the 2017/2018 season and ranked only fourth in the Primera Division after 14 games with 28 points from a possible 42. Their Portuguese star player Cristiano Ronaldo seems to be completely out of shape and does not appear near the top of our rankings. Rivals Barcelona finished their 2016/2017 season on a high with seven consecutive victories in their final league games of the season. The \textit{Blaugrana} also had an excellent start to their 2017/2018 season but have been struggling to convincingly win games more recently. The evolution of their game ratings suggests Barcelona might have been overperforming and are now regressing towards their regular level.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{teamgrowth-tomdv2.pdf}
\caption{The evolution of the game ratings for Manchester City, Real Madrid, and Barcelona computed as a 15-game moving average since the start of the 2016/2017 season. A team's game rating is computed by summing the values for all its actions.}
\label{fig:teamgrowth}
\end{figure}
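The moving averages in Figure~\ref{fig:teamgrowth} are simple to recompute from the rated actions. The following Python sketch shows one way to do it; the data layout and column names (\texttt{game\_id}, \texttt{team}, \texttt{action\_value}) are illustrative assumptions, not the format of our internal data.
\begin{verbatim}
import pandas as pd

# One row per on-the-ball action with its estimated value
# (assumed layout; games assumed sorted chronologically).
actions = pd.read_csv("actions_with_values.csv")

# A team's game rating is the sum of the values of all its actions.
game_ratings = (actions
                .groupby(["team", "game_id"], sort=False)["action_value"]
                .sum()
                .reset_index(name="game_rating"))

# 15-game moving average per team.
game_ratings["moving_avg"] = (game_ratings
                              .groupby("team")["game_rating"]
                              .transform(lambda s: s.rolling(15).mean()))
\end{verbatim}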
Figure~\ref{fig:teamdiff1617} shows the average contribution per game for the goalkeepers, defenders, midfielders, and strikers of Barcelona, Real Madrid, and Manchester City during the 2016/2017 season. Barcelona's front line, which consisted of Neymar, Luis Su\'arez, and Lionel Messi in most games, was responsible for the largest share of their average contribution per game. In contrast, Real Madrid's midfielders contributed more than their strikers, while Manchester City's midfielders and strikers contributed roughly equally.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{teamdiff1617-lotteb.pdf}
\caption{The average contribution per game for the goalkeepers, defenders, midfielders, and strikers of Barcelona, Real Madrid, and Manchester City during the 2016/2017 season.}
\label{fig:teamdiff1617}
\end{figure}
Similarly, Figure~\ref{fig:teamdiff1718} shows the average contribution per game for each line of Barcelona, Real Madrid, and Manchester City during the 2017/2018 season. Despite their loss of Neymar to Paris Saint-Germain, Barcelona still have the strongest attack by far. Real Madrid have seen their average contribution per game go down in midfield and offense, while Manchester City have seen notable increases in both those lines.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{teamdiff1718-lotteb.pdf}
\caption{The average contribution per game for the goalkeepers, defenders, midfielders, and strikers of Barcelona, Real Madrid, and Manchester City during the 2017/2018 season.}
\label{fig:teamdiff1718}
\end{figure}
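The per-line breakdowns in Figures~\ref{fig:teamdiff1617} and~\ref{fig:teamdiff1718} can be computed analogously, assuming each action row also carries the acting player's position; again, the column names and position labels below are assumptions for illustration.
\begin{verbatim}
import pandas as pd

actions = pd.read_csv("actions_with_values.csv")

# Map each acting player's position to a line (labels are illustrative).
line_of = {"GK": "goalkeepers", "DF": "defenders",
           "MF": "midfielders", "FW": "strikers"}
actions["line"] = actions["position"].map(line_of)

# Sum action values per line per game, then average over a team's games.
per_line = (actions
            .groupby(["team", "line", "game_id"])["action_value"].sum()
            .groupby(["team", "line"]).mean()
            .rename("avg_contribution_per_game"))
print(per_line.loc[["Barcelona", "Real Madrid", "Manchester City"]])
\end{verbatim}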
\section*{Acknowledgements}
Tom Decroos is supported by the Research Foundation-Flanders (FWO-Vlaanderen). Jesse Davis is partially supported by the KU Leuven Research Fund (C22/15/015) and FWO-Vlaanderen (G.0356.12, SBO-150033).
\section{Related work}
\label{sec:related-work}
Although the valuation of player actions is an important task with respect to player recruitment and valuation, this subject has remained virtually unexplored in the soccer analytics community due to the challenges resulting from the dynamic and low-scoring nature of soccer. The approaches from \citet{norstebo2016valuing} for soccer, \citet{routley2015markov} for ice hockey, and \citet{cervone2014pointwise} for basketball come closest to our framework. They address the task of valuing individual actions by modeling each game as a Markov game~\citep{littman1994markov}. In contrast to \citet{norstebo2016valuing} and \citet{routley2015markov}, which divide the pitch into a fixed number of zones, our approach models the precise spatial locations of each action. Unlike \citet{cervone2014pointwise}, which is restricted to valuing only three types of on-the-ball actions, our approach considers any relevant on-the-ball action during a game. However, our definitions of player actions, action sets and games are similar to those used by these works as well as earlier research for soccer~\citep{rudd2011framework, hirotsu2002using}, American football~\citep{goldner2012markov}, and baseball~\citep{tango2007book}.
Most of the related work on soccer either focuses on a limited number of player-action types like passes and shots or fails to account for the circumstances under which the actions occurred. \citet{decroos2017starss}, \citet{knutson2017introducing}, and \citet{gregory2017how} address the task of valuing the actions leading up to a goal attempt, whereas \citet{bransen2017valuing} addresses the task of valuing individual passes. The former approaches naively assign credit to the individual actions by accounting for a limited amount of contextual information only, while the latter approach is limited to a single type of action only.
Furthermore, this work is also related to the work on expected-goals models, which estimate the probability of a goal attempt resulting in a goal \citep{lucey2014quality,caley2015premier,altman2015beyond,mackay2016introducing,aalbers2016expected,mackay2017predicting}. In our framework, computing the expected-goals value of a goal attempt boils down to estimating the value of the game state prior to the goal attempt.
\section{Conclusion}
\label{sec:conclusion}
This paper introduced an advanced soccer metric named \algoname that quantifies the performances of players during games. Our metric values any individual player action on the pitch based on its expected influence on the scoreline. In contrast to most existing metrics, our metric offers the benefits that it (1) values all types of actions (e.g., passes, crosses, dribbles, and shots), (2) bases its valuation on the game context, and (3) reasons about an action's possible effect on the subsequent actions. Intuitively, the player actions that increase a team's chance of scoring receive positive values while those actions that decrease a team's chance of scoring receive negative values.
We presented \algoname as a concrete instantiation of our more general action-valuing framework named \frameworkname for use with play-by-play event data. Several illustrative use cases based on an analysis of the data for the top five European leagues highlighted the inner workings of \algoname. Furthermore, we also proposed a language for representing play-by-play event data that is designed with the goal of facilitating data analysis.
A limitation of \algoname is its focus on valuing on-the-ball actions, whereas defensive skill often manifests itself through positioning and anticipation abilities that are used to deny certain action possibilities. Therefore, including full optical tracking data would be an interesting direction for future research.
\subsection{Identification of the players who stand out}
\label{sec:results-outperformers}
One talent pipeline often exploited by larger clubs is identifying the players on less successful top division clubs whose skills have the potential to flourish in a more competitive environment. Thus, a natural question to ask is: Can our player rating metric help identify promising talent toiling at lesser clubs that larger clubs could target in the transfer market? When scouting such players from an objective perspective, one challenge is that the value of a metric often will partially reflect the team context. In this case, that means being surrounded by less-talented players, which may adversely affect a player's rating. Therefore, to find players that stand out compared to their teammates' performances, we look at the highest-ranked players on teams who finished outside the top 5 in their respective league. Table~\ref{tbl:outliers} lists the players who stood out at smaller clubs during the 2016/2017 season.
\begin{table}[H]
\centering
\tabcolsep=0.05cm
\begin{tabular}{clllr}
\toprule
\textbf{Rank} & \textbf{Player} & \textbf{Team} & \textbf{Position} & \textbf{Rating}\\
\midrule
1 & Junior Stanislas & Bournemouth & Winger & 0.58\\
2 & Dimitri Payet & West Ham United & Winger & 0.55\\
3 & Iago Aspas & Celta de Vigo & Central striker & 0.52\\
4 & Max Kruse & SV Werder Bremen & Central striker & 0.50\\
5 & Ryad Boudebouz & Montpellier & Attacking midfielder & 0.47\\
6 & Fin Bartels & SV Werder Bremen & Central striker & 0.46\\
7 & Allan Saint-Maximin & Bastia & Winger & 0.46\\
8 & Ross Barkley & Everton & Winger & 0.44\\
9 & Romelu Lukaku & Everton & Central striker & 0.44\\
10 & Federico Viviani & Bologna & Central midfielder & 0.43\\
\bottomrule
\end{tabular}
\caption{The highest-ranked players on teams who finished outside the top 5 in their respective league during the 2016/2017 season according to our metric.}
\label{tbl:outliers}
\end{table}
Table~\ref{tbl:outliers} contains a number of interesting names. Junior Stanislas plays winger for Bournemouth in the English Premier League, and he is especially strong at shooting. Bournemouth performed exceptionally well in the 2016/2017 season, finishing 9th after finishing 16th the previous season. Another interesting player is Ryad Boudebouz, an attacking midfielder for Montpellier last season. He has since been transferred to Real Betis, but was on the wish list for a number of other clubs as well. The list also contains a number of recognized talents such as Dimitri Payet, who was a key performer for France at EURO 2016, Romelu Lukaku, who moved to Manchester United after the 2016/2017 season and is playing well there, and Ross Barkley, who moved to Chelsea in the previous winter transfer window.
\section{Action types}
\label{sec:action-types}
Table~\ref{tbl:action-types} provides an overview of the action types in the dataset alongside their descriptions.
\begin{table}[H]
\begin{tabular}{llll}
\toprule
\textbf{Action type} & \textbf{Description} & \textbf{Successful?} & \textbf{Special result} \tabularnewline\midrule
Pass & Normal pass in open play & Reaches teammate & Offside \tabularnewline\midrule
Cross & Cross into the box & Reaches teammate & Offside \tabularnewline\midrule
Throw-in & Throw-in & Reaches teammate & - \tabularnewline\midrule
Crossed corner & Corner crossed into the box & Reaches teammate & Offside \tabularnewline\midrule
Short corner & Short corner & Reaches teammate & Offside \tabularnewline\midrule
Crossed free-kick & Free kick crossed into the box & Reaches teammate & Offside \tabularnewline\midrule
Short free-kick & Short free-kick & Reaches teammate & Offside \tabularnewline\midrule
Take on & Dribble past opponent & Keeps possession & - \tabularnewline\midrule
Foul & Foul & Always fail & Red or yellow card \tabularnewline\midrule
Tackle & Tackle on the ball & Regains possession & Red or yellow card \tabularnewline\midrule
Interception & Interception of the ball & Always success & - \tabularnewline\midrule
Shot & Shot attempt not from penalty or free-kick & Goal & Own goal \tabularnewline\midrule
Shot from penalty & Penalty shot & Goal & Own goal \tabularnewline\midrule
Shot from free-kick & Direct free-kick on goal & Goal & Own goal \tabularnewline\midrule
Save by keeper & Keeper saves a shot on goal & Always success & - \tabularnewline\midrule
Claim by keeper & Keeper catches a cross & Does not drop the ball & - \tabularnewline\midrule
Punch by keeper & Keeper punches the ball clear & Always success & - \tabularnewline\midrule
Pick-up by keeper & Keeper picks up the ball & Always success & - \tabularnewline\midrule
Clearance & Player clearance & Always success & - \tabularnewline\midrule
Bad touch & Player makes a bad touch and loses the ball & Always fail & - \tabularnewline\midrule
Dribble & Player dribbles at least 3 meters with the ball & Always success & - \tabularnewline\midrule
Run without ball & Player runs without the ball & Always success & - \tabularnewline
\bottomrule
\end{tabular}
\caption{Overview of the action types in the dataset alongside their descriptions. The \textit{Successful?} column specifies the condition the action needs to fulfill to be considered successful, while the \textit{Special result} column lists additional possible result values.}
\label{tbl:action-types}
\end{table}
\section{Five best-ranked players per position for the 2016/2017 season}
\label{sec:best-players-2016-2017}
This section lists the five best-ranked players per position for the 2016/2017 season.
\includegraphics[width=.8\textwidth]{2016-2017_Central-Strikers_table.pdf}
\includegraphics[width=.8\textwidth]{2016-2017_Wingers_table.pdf}
\includegraphics[width=.8\textwidth]{2016-2017_Midfielders_table.pdf}
\includegraphics[width=.8\textwidth]{2016-2017_Wingbacks_table.pdf}
\includegraphics[width=.8\textwidth]{2016-2017_CentreBacks_table.pdf}
\includegraphics[width=.8\textwidth]{2016-2017_Goalkeepers_table.pdf}
\section{Five best-ranked players per position for the 2017/2018 season}
\label{sec:best-players-2017-2018}
This section lists the five best-ranked players per position for the 2017/2018 season.
\includegraphics[width=.8\textwidth]{2017-2018_Central-Strikers_table.pdf}
\includegraphics[width=.8\textwidth]{2017-2018_Wingers_table.pdf}
\includegraphics[width=.8\textwidth]{2017-2018_Midfielders_table.pdf}
\includegraphics[width=.8\textwidth]{2017-2018_Wingbacks_table.pdf}
\includegraphics[width=.8\textwidth]{2017-2018_CentreBacks_table.pdf}
\includegraphics[width=.8\textwidth]{2017-2018_Goalkeepers_table.pdf}
\subsection{Selection of 2016/2017 team of the season}
\label{sec:results-best-players}
Figure~\ref{fig:lineup20162017} shows the best possible line-up for the 2016/2017 season according to our metric. For each position, the line-up includes the highest-ranked player who played at least 900 minutes, which is the equivalent of ten full games, in that particular position. The offensive line features the likes of Eden Hazard (Chelsea), the inevitable Lionel Messi (Barcelona), and teenage star Kylian Mbapp\'e, who joined Paris Saint-Germain on loan from AS Monaco last summer. The French striker will join the Parisian club on a permanent basis next summer for a transfer fee rumoured to be around 90 million euros.\footnote{\url{https://www.transfermarkt.com/kylian-mbappe/profil/spieler/342229}} The midfield consists of Kevin De Bruyne (Manchester City), Isco (Real Madrid), and Cesc F\`abregas (Chelsea), who were all key figures for their respective teams during the previous campaign. However, the composition of the defensive line is somewhat more surprising. Serie A centre backs Vlad Chiriches (Napoli) and Leonardo Bonucci (Juventus) combine their strength with excellent passing abilities. Bundesliga wing-backs Markus Suttner (FC Ingolstadt 04) and Lukasz Piszczek (Borussia Dortmund) are known for overlapping and providing support in offense. Goalkeeper Jordan Pickford got relegated with Sunderland last season but nevertheless moved to Everton over the summer. These somewhat surprising names in the defensive line reveal one limitation of \algoname. That is, the algorithm only values on-the-ball actions, while defending is often more about preventing your opponent from gaining possession of the ball by clever positioning and anticipation. More specifically, goalkeepers are rewarded for their interventions but not punished for the goals they concede.
The inclusion of Eden Hazard in our \textit{Team of the 2016/2017 Season} shows the strength of our metric at identifying impactful players. The Belgian winger, who had a crucial role in Chelsea's Premier League title, is the seventh-highest-rated player on our metric but ranks only 133rd in terms of goals and assists per 90 minutes with 10 goals and 3 assists. Similarly, wing-back Lukasz Piszczek ranks 19th on our metric but only appears in 292nd position for goals and assists per 90 minutes with 5 goals and 1 assist. In contrast, notable omissions from the team are high-profile players like Robert Lewandowski (54th), \'Alvaro Morata (61st), Edinson Cavani (77th), and Edin Dzeko (265th), who were all directly involved in more than one goal or assist per 90 minutes in the 2016/2017 season.
Figure~\ref{fig:lineup20172018} shows the best possible line-up for the 2017/2018 season up through November 5th 2017 according to our metric. For each position, the line-up includes the highest-ranked player who played at least 450 minutes in that particular position. The average player rating for the 2017/2018 season (0.659) is significantly higher than that for the 2016/2017 season (0.551). However, we expect the average rating to regress towards last season's average as the season progresses.
Appendix~\ref{sec:best-players-2016-2017} lists the five highest-rated players in each position for the 2016/2017 season. Appendix~\ref{sec:best-players-2017-2018} lists the five highest-rated players in each position for the 2017/2018 season until November 5th 2017.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Lineup_2016_2017.png}
\caption{The best possible line-up for the 2016/2017 season according to our metric. For each position, the line-up includes the highest-ranked player who played at least 900 minutes in that particular position.}
\label{fig:lineup20162017}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Lineup_2017_2018.png}
\caption{The best possible line-up for the 2017/2018 season until November 5th 2017 according to our metric. For each position, the line-up includes the highest-ranked player who played at least 450 minutes in that particular position.}
\label{fig:lineup20172018}
\end{figure}
\section{\repname: A language for representing player actions}
\label{sec:representation}
Valuing player actions requires a dedicated language that is \textit{human-interpretable}, \textit{simple} and \textit{complete} to accurately define and describe these actions. Human-interpretability allows reasoning about what happens on the pitch and verifying whether the action values correspond to soccer experts' intuitions. Simplicity reduces the chance of making mistakes when automatically processing the language. Completeness makes it possible to express all the information required to value actions in their full context.
Based on domain knowledge and feedback from soccer experts, we introduce \repname (\repfull). \repname represents each action as a tuple of nine attributes:
\begin{description}
\item[StartTime:] the exact timestamp for when the action started;
\item[EndTime:] the exact timestamp for when the action ended;
\item[StartLocation:] the $(x,y)$ location where the action started;
\item[EndLocation:] the $(x,y)$ location where the action ended;
\item[Player:] the player who performed the action;
\item[Team:] the team of the player;
\item[Type:] the type of the action;
\item[BodyPart:] the body part used by the player for the action;
\item[Result:] the result of the action.
\end{description}
We distinguish between 22 possible types of actions including, among others, \textit{passes}, \textit{crossed corners}, \textit{dribbles}, \textit{runs without ball}, \textit{throw-ins}, \textit{tackles}, \textit{shots}, \textit{penalty shots}, \textit{clearances}, and \textit{keeper saves}. These action types are interpretable and specific enough to accurately describe what happens on the pitch yet general enough that similar actions have the same type.
Depending on the type of the action, we consider up to four different body parts and up to six possible results. The possible body parts are \textit{foot}, \textit{head}, \textit{other}, and \textit{none}. The two most common results are \textit{success} and \textit{fail}, which indicate whether the action had its intended result, such as a pass reaching a teammate or a tackle recovering the ball. The four other possible results are \textit{offside} for passes resulting in an offside call, \textit{own goal}, \textit{yellow card}, and \textit{red card}.
We represent a game as a sequence of action sets, where each action set describes the actions performed by the players in between two consecutive touches of the ball. More formally, each action set $A$ consists of one on-the-ball action and $n-1$ off-the-ball actions, where $n$ is the total number of players on the pitch. Each game is a sequence of action sets $<A_1,A_2,\ldots, A_m>$, where $m$ is the total number of touches of the ball.
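To make the representation concrete, the minimal Python sketch below shows one possible encoding of a \repname action tuple and of a game as a sequence of actions. The class, field names, and types are our own illustrative choices for exposition and are not part of the \repname specification itself.
\begin{verbatim}
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Action:
    # The nine attributes listed above.
    start_time: float                    # seconds since kick-off
    end_time: float
    start_location: Tuple[float, float]  # (x, y) on the pitch
    end_location: Tuple[float, float]
    player: str
    team: str
    type: str       # e.g. "pass", "dribble", "shot"
    body_part: str  # "foot", "head", "other", or "none"
    result: str     # "success", "fail", "offside", ...

# With play-by-play event data, each action set contains exactly one
# on-the-ball action, so a game reduces to a list of actions.
Game = List[Action]
\end{verbatim}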
In addition to being human-interpretable, simple and complete, \repname has the added advantage of being able to naturally unify both event data and tracking data collected by providers such as Wyscout, Opta, and STATS. The representations used by these companies have multiple different objectives (e.g., providing information to the media or informing clubs) and are not necessarily designed to facilitate data analysis. Furthermore, each representation uses a slightly different terminology when describing the events that occur during a game. \repname is an attempt to unify the existing description languages into a common vocabulary that enables subsequent data analysis. The following sections operate on data in the \repname format.
\section{Introduction}
How will a player's actions impact his or her team's performances in games? This question is among the most relevant questions that needs to be answered when a professional soccer club is considering whether to sign a player. Nevertheless, the task of objectively quantifying the impact of the individual actions performed by soccer players during games remains largely unexplored to date. What complicates the task is the low-scoring and dynamic nature of soccer games. While most actions do not impact the scoreline directly, they often do have important longer-term effects. For example, a long pass from one flank to the other may not immediately lead to a goal but can open up space to set up a goal chance several actions down the line.
To help fill the gap in objectively quantifying player performances, we propose a novel advanced soccer metric that assigns a value to any individual player action on the pitch, be it with or without the ball, based on its impact on the game outcome. Intuitively, our action values reflect the actions' expected influence on the scoreline. That is, an action valued at +0.05 is expected to contribute 0.05 goals in favor of the team performing the action, whereas an action valued at -0.05 is expected to yield 0.05 goals for their opponent. Unlike most existing advanced metrics, our proposed metric considers all types of actions (e.g., passes, crosses, dribbles, take-ons, and shots) and accounts for the circumstances under which each of these actions happened as well as their possible longer-term effects.
Our metric was designed to take a step towards addressing three important limitations of most existing advanced soccer metrics~\citep{routley2015markov}. The first limitation is that existing metrics largely ignore actions other than goals and shots. The soccer analytics community's focus has very much been on the concept of the expected value of a goal attempt in recent years \citep{lucey2014quality,caley2015premier,altman2015beyond,mackay2016introducing,aalbers2016expected,mackay2017predicting}. The second limitation is that existing approaches tend to assign a fixed value to each action, regardless of the circumstances under which the action was performed. For example, many pass-based metrics treat passes between defenders in the defensive third of the pitch without any pressure whatsoever and passes between attackers in the offensive third under heavy pressure from the opponents similarly. The third limitation is that most metrics only consider short-term effects and fail to account for an action's effects a bit further down the line. These limitations render many of the existing metrics virtually useless for player recruitment purposes.
Using our metric, we analyzed the 2016/2017 campaign to construct a \textit{Team of the 2016/2017 Season}. When applied to on-the-ball actions like passes, dribbles, and shots alone, Barcelona's Lionel Messi unsurprisingly headlines the team as the highest-ranked player. His average action value per game last season was 26\% higher than his nearest competitor's. Other members featuring on the team include forward Kylian Mbapp\'e then playing for AS Monaco, Real Madrid midfielder Isco, Manchester City playmaker Kevin De Bruyne as well as Chelsea teammates Eden Hazard and Cesc F\`abregas. To identify young talent, we also ranked the best players under 21 years old from the 2016/2017 season according to our metric. Teenage star Mbapp\'e, who moved to French giants Paris Saint-Germain last summer, tops this list. He appears ahead of his fellow countrymen Ousmane Demb\'el\'e, who moved to Barcelona from Borussia Dortmund over the summer, and midfielder Maxime Lopez of Olympique Marseille.
In summary, this paper presents the following four contributions:
\begin{enumerate}
\item \repname: A powerful but flexible language for representing player actions, which is described in Section~\ref{sec:representation}.
\item \frameworkname: A general framework for valuing player actions based on their contributions to the game outcome, which is introduced in Section~\ref{sec:framework}.
\item \algoname: An algorithm for valuing on-the-ball player actions as a concrete instance of the general framework, which is outlined in Section~\ref{sec:algorithm}.
\item A number of use cases showcasing our most interesting results and insights, which are presented in Section~\ref{sec:results}.
\end{enumerate}
\section{Use cases}
\label{sec:results}
In this section, we present a number of use cases to demonstrate the possible applications of our proposed metric. We focus our analysis on the English Premier League, Spanish Primera Division, German 1. Bundesliga, Italian Serie A, and French Ligue 1. We apply the \algoname algorithm to 9582 games played since the start of the 2012/2013 season. We only include league games and thus ignore all friendly, cup, and European games. We train the predictive models on the games in the 2012/2013 through 2015/2016 seasons and report results for the 2016/2017 season as well as the ongoing 2017/2018 season until Sunday November 5th 2017. We represent each game as a sequence of roughly 1750 on-the-ball actions. The most frequently occurring actions in our dataset are passes (53\%) and dribbles (24\%). In contrast, shots are much rarer and represent just 1.4\% of the actions with only 11\% of them resulting in a goal.
The remainder of this section is structured as follows. Section~\ref{sec:results-intuition} explains the intuition behind our metric by means of Kevin De Bruyne's goal for Manchester City against Arsenal on Sunday November 5th 2017.
Section~\ref{sec:results-distributions} provides insights into the distribution of the action values.
Section~\ref{sec:results-best-players} shows the best possible line-up for the 2016/2017 season based on our metric.
Section~\ref{sec:results-best-talents} discusses the five highest-rated players born after January 1st 1997 for the 2016/2017 season. Section~\ref{sec:results-outperformers} identifies a number of players who stood out at smaller clubs during the 2016/2017 season.
Section~\ref{sec:results-playing-styles} explains how our metric can be used to compare players in terms of their playing styles.
Section~\ref{sec:results-team-performances} shows how the performances of Manchester City, Real Madrid, and Barcelona have evolved since the start of the 2016/2017 season.
Section~\ref{sec:deployment} discusses how our metric is used by SciSports, a Dutch data analytics company providing expertise to soccer clubs.
\input{tex/results_intuition.tex}
\input{tex/results_distributions.tex}
\input{tex/results_best_players.tex}
\input{tex/results_best_talents.tex}
\input{tex/results_outperformers.tex}
\input{tex/results_playing_styles.tex}
\input{tex/results_team_performances.tex}
\input{tex/results_deployment.tex}
\subsection{Distribution of the action values}
\label{sec:results-distributions}
Figure~\ref{fig:nr_action_mean} shows the number of actions that players execute on average per 90 minutes and the average value of their actions for those players who played at least 900 minutes during the 2016/2017 season. Naturally, there is a tension between these two quantities. If a player performs a high number of actions, then it is harder for each action to have a high value. The 15 highest-rated players according to our metric are highlighted in red.
The grey dotted isoline shows the gap in total contribution between Messi and other players. This isoline is curved since a player's total contribution is computed as the average value per action (\emph{x-axis}) multiplied by the number of actions per 90 minutes (\emph{y-axis}).
The plot shows that strikers like Harry Kane (Tottenham Hotspur), Luis Su\'arez (Barcelona), Kylian Mbapp\'e (AS Monaco), and Pierre-Emerick Aubameyang (Borussia Dortmund) are less involved in the game as they perform a relatively low number of actions on average. However, the actions they do perform tend to be highly valued. In contrast, players like Arjen Robben (Bayern Munich), Eden Hazard (Chelsea), and Philippe Coutinho (Liverpool) perform more actions although the average value of their actions is considerably lower. Cesc F\`abregas (Chelsea), Isco (Real Madrid), and James Rodr\'iguez (Real Madrid) perform more actions per 90 minutes than them while maintaining a higher average value per action. Finally, as shown by the isoline and more traditional statistics,\footnote{\url{https://fivethirtyeight.com/features/lionel-messi-is-impossible/}} Lionel Messi is clearly in a class of his own.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{nr_value_actions-tomd.pdf}
\caption{Scatter plot that contrasts the average number of actions performed per 90 minutes with the average value of these actions for each player who played at least 900 minutes during the 2016/2017 season. The 15 highest-rated players according to our metric are highlighted in red.}
\label{fig:nr_action_mean}
\end{figure}
For nine positions on the pitch, Figure~\ref{fig:distr_pos} shows the distribution of the average ratings per game for those players who played at least 900 minutes during the 2016/2017 season. The highest-rated player for each position is highlighted in red.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{distr_pos-tomd.pdf}
\caption{Distribution of average per game rating for players who played at least 900 minutes in the 2016/2017 season.}
\label{fig:distr_pos}
\end{figure}
\subsection{Deployment in the soccer industry}
\label{sec:deployment}
The SciSports Datascouting department leverages our action values for providing data-driven advice to soccer clubs and soccer associations with respect to player recruitment and opponent analysis. Until recently, the SciSports datascouts almost exclusively relied upon more traditional metrics and statistics as well as the company's SciSkill Index, which ranks all professional soccer players in the world in terms of their actual and expected future contributions to their teams' performances. The SciSkill Index provides intuitions about the general level of a player, whereas our action values offer more insights into how each player contributes to his team's performances. While our action values are currently only available for internal use by the SciSports datascouts, they will also be made available in the SciSports Insight\footnote{\url{https://insight.scisports.com}} online scouting platform.
\subsection{Characterization of playing styles}
\label{sec:results-playing-styles}
Clubs are beginning to consider player types during the recruitment process in order to focus on identifying those players who best fit a team's preferred style of play (e.g., short passes and high defending vs. long balls and defensive play). Currently, scouts and experts are typically tasked with judging playing style. These experts' time is almost always the limiting resource in the player recruitment process, which makes it difficult to consider the entire pool of players. Therefore, advanced metrics offer the potential to help select a set of players that are worthy of additional attention. The metrics can be used to assess a player's ability at performing different types of actions. With our metric, this can be accomplished by computing a player's total value per 90 minutes for each type of action.
To showcase this use case, we analyze the playing styles of Lionel Messi, Harry Kane, and Kylian Mbapp\'e,
who are all counted among the best forward players in the world. Figure~\ref{fig:playercharacteristics} shows the total contributions per 90 minutes for the passes, crosses, dribbles, and shots performed by these three players. Messi rates excellently on all four aspects and is an \textit{allrounder}. In comparison to Messi, Kane rates poorly at passing, dribbling, and particularly crossing. However, he outperforms Messi in shooting and is clearly a \textit{finisher}, which is also reflected in the fact that he has scored 23 goals while providing only one assist in the ongoing season. In comparison to Messi, Mbapp\'e only rates poorly at passing and even outperforms him in crossing.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{playercharacteristics-lotteb.pdf}
\caption{Overview of the total contribution per 90 minutes for different types of actions for Lionel Messi, Harry Kane, and Kylian Mbappé.}
\label{fig:playercharacteristics}
\end{figure}
As another use case, consider FC Barcelona's attempts to offset the loss of Neymar by acquiring Borussia Dortmund's Ousmane Demb\'{e}l\'{e} and Liverpool's Philippe Coutinho. Figure~\ref{fig:neymar} compares Demb\'{e}l\'{e}, Coutinho and Neymar's total values per 90 minutes for four action types. According to our metric, both Demb\'{e}l\'{e} and Coutinho's passes receive a much higher value than Neymar's. Demb\'{e}l\'{e} is the best crosser, with Neymar and Coutinho receiving nearly identical values for this skill. Neymar is a superior dribbler and is ranked as the third-best dribbler out of all players we analyzed in the 2016/2017 season. However, Demb\'{e}l\'{e} is also exceptionally strong at dribbling and is ranked as the tenth-best dribbler, whereas Coutinho is ranked thirty-fourth. From a stylistic perspective, this breakdown suggests that Demb\'{e}l\'{e} was a reasonable target in that he comes close to replicating Neymar's signature skill of dribbling.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{playercharacteristics2-lotteb.pdf}
\caption{Overview of the total contribution per 90 minutes for different types of actions for Neymar, Ousmane Demb\'el\'e, and Philippe Coutinho.}
\label{fig:neymar}
\end{figure}
\subsection{Identification of young talents}
\label{sec:results-best-talents}
Table~\ref{tbl:talents2016} shows the five highest-rated players born after January 1st 1997 who played at least 900 minutes during the 2016/2017 season. Kylian Mbapp\'{e}, who is recognized as one of the biggest talents in the world, tops this list with a rating nearly twice as high as that of his nearest competitor. He has seamlessly transitioned from Monaco to Paris Saint-Germain this season, and has continued to gain acclaim for his play.
Allan Saint-Maximin, who played midfielder for Bastia in the French Ligue 1 last season, is second-ranked. His play earned him both a transfer to Nice after the season and plaudits from the soccer intelligentsia.\footnote{\href{http://www.squawka.com/news/allan-saint-maximin-the-monaco-wonderkid-you-havent-heard-of-yet-and-europes-take-on-king/919430}{http://www.squawka.com/news/allan-saint-maximin-the-monaco-wonderkid-\\you-havent-heard-of-yet-and-europes-take-on-king/919430}} Ousmane Demb\'{e}l\'{e} is also a huge talent who parlayed his outstanding season for Borussia Dortmund into a summer move to FC Barcelona, where he was injured early in the season. Maxime Lopez and Malcom play in the Ligue 1 and remained with their respective clubs where they continue to play well and are attracting significant interest from bigger clubs.
\begin{table}[H]
\centering
\begin{tabular}{cllclr}
\toprule
\textbf{Rank} & \textbf{Player} & \textbf{Team} & \textbf{Age} & \textbf{Position} & \textbf{Rating} \tabularnewline
\midrule
1 & Kylian Mbapp\'e & AS Monaco & 18 & Central striker & 0.82 \tabularnewline
2 & Allan Saint-Maximin & Bastia & 20 & Winger & 0.46 \tabularnewline
3 & Ousmane Demb\'el\'e & Borussia Dortmund & 20 & Winger & 0.38 \tabularnewline
4 & Maxime Lopez & Olympique Marseille & 19 & Attacking midfielder & 0.30 \tabularnewline
5 & Malcom & Girondins Bordeaux & 20& Winger & 0.26 \tabularnewline
\bottomrule
\end{tabular}
\caption{The highest-ranked players born after January 1st 1997 during the 2016/2017 season according to our metric.}
\label{tbl:talents2016}
\end{table}
Next, we widen the age range slightly and also consider players under 23 years old. Figure~\ref{fig:growth} shows the 15-game moving average for our metric for Leroy San\'{e}, Mikel Oyarzabal, and Karol Linetty. Leroy San\'{e} was a big signing for Pep Guardiola in the summer of 2016, and is widely recognized for his high level of play this season with Manchester City. Mikel Oyarzabal currently plays for mid-table Primera Division team Real Sociedad. However, the 20-year-old winger, who debuted for the Spanish national team last year, is being linked with big clubs throughout Europe.
Karol Linetty is a 22-year-old central midfielder playing for Sampdoria in Serie A. He is much less well known than the other two players, but our metric suggests he is playing at a level commensurate with these more highly touted youngsters, and hence the Pole may be one to watch.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{growth-player-tomd.pdf}
\caption{The 15-game moving average for our metric for Leroy San\'{e} (Manchester City), Mikel Oyarzabal (Real Sociedad), and Karol Linetty (Sampdoria) since the start of the 2016/2017 season.}
\label{fig:growth}
\end{figure}
\section{\algoname: An algorithm for valuing on-the-ball actions}
\label{sec:algorithm}
In this section, we describe the \algoname (\algofull) algorithm for valuing on-the-ball player actions as an instantiation of our general framework. As a data source, we consider play-by-play event data, which means that each action set contains exactly one on-the-ball action and no other actions. We employ machine learning to estimate the probabilities $P_{hg}$ and $P_{vg}$ from the stream of actions. Consequently, we frame this as a binary classification problem and train a probabilistic classifier to estimate the probabilities. Our implementation involves three key tasks: (1) transforming the stream of actions into a feature-vector format, (2) selecting and training a probabilistic classifier, and (3) aggregating the individual action values to arrive at a rating for a player.
\subsection{Constructing features}
Applying standard machine learning algorithms requires converting the sequence of action sets $<A_1,A_2,\ldots, A_m>$ describing an entire game into examples in the feature-vector format. Thus, one training example is constructed for each game state $S_i$. A game state $S_i$ is labeled positive if the team possessing the ball after action set $A_i$ scored a goal within the next ten actions. A goal in this time frame could arise from either a converted shot by the team possessing the ball after $A_i$ or an own goal by the opposing team.
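As a minimal sketch of this labeling rule, assuming the illustrative \texttt{Action} encoding from Section~\ref{sec:representation}, a label can be computed as follows; the action-type and result strings are hypothetical placeholders for the dataset's actual encoding.
\begin{verbatim}
def scores_soon(actions, i, k=10):
    """1 if the team in possession after action i scores within
    the next k actions, else 0. As a simplification, we take the
    acting team of action i as the team in possession."""
    team = actions[i].team
    for a in actions[i + 1 : i + 1 + k]:
        shot = a.type in ("shot", "shot_penalty", "shot_freekick")
        if shot and a.result == "success" and a.team == team:
            return 1  # converted shot
        if a.result == "own_goal" and a.team != team:
            return 1  # own goal by the opposing team
    return 0
\end{verbatim}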
For each example, instead of defining features based on the entire current game state $S_i = <A_1,...,A_i>$, we only consider the previous three action sets $<A_{i-2},A_{i-1},A_i>$. Approximating the game state in this manner offers several advantages. First, most machine learning techniques require examples to be described by a fixed number of features. Converting game states with varying numbers of actions, and hence different amounts of information, into this format would necessarily result in a loss of information. Second, considering a small window focuses attention on the most relevant aspects of the current context. The number of action sets to consider in the approximation is a parameter of the approach, and three sets were empirically found to work well as shown in Section~\ref{sec:estimating-probabilities}.
Since each action set $A_i$ only consists of one on-the-ball action $a_i$ in our data source, we denote the actions we consider as $<a_{i-2},a_{i-1},a_i>$.
From these actions, we define features that will impact the probability of a goal being scored in the near future. Based on the \repname representation, we consider three categories of features.
First, for each of the three actions, we define a number of categorical and real-valued features based on information explicitly included in the \repname representation. There are categorical features for an action's $Type$, $Result$, and $BodyPart$. Similarly, there are continuous features for the $(x,y)$-coordinates of its start location, the $(x,y)$-coordinates of its end location, and the time elapsed since the start of the game.
Second, we define a number of complex features that combine information within an action and across consecutive actions. Within each action, these include (1) the distance and angle to the goal for both the action's start and end locations, and (2) the distance covered during the action in both the $x$ and $y$ directions. Between two consecutive actions, we compute the distance and elapsed time between the start position and time of an action, and the end position and time of the next action. These features provide an intuition about the current speed of play in the game. Additionally, there is also a feature indicating whether the ball changed possession between these two actions.
Finally, to capture the game context, we add as features (1) the number of goals scored in the game by the team possessing the ball after action $a_i$, (2) the number of goals scored in the game by the defending team after action $a_i$, and (3) the goal difference in the game after action $a_i$.
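The sketch below illustrates a few of these features, again under the assumed \texttt{Action} encoding and an assumed $105 \times 68$-meter pitch with the attacked goal centered at $(105, 34)$; the coordinate conventions of the real dataset may differ.
\begin{verbatim}
import math

GOAL = (105.0, 34.0)  # assumed centre of the attacked goal

def location_features(x, y):
    """Distance and angle to the goal for a single location."""
    dx, dy = GOAL[0] - x, GOAL[1] - y
    return math.hypot(dx, dy), math.atan2(abs(dy), dx)

def between_features(prev, cur):
    """Speed-of-play features between two consecutive actions."""
    (x0, y0), (x1, y1) = prev.end_location, cur.start_location
    return {
        "dist_between": math.hypot(x1 - x0, y1 - y0),
        "time_between": cur.start_time - prev.end_time,
        "possession_change": prev.team != cur.team,
    }
\end{verbatim}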
\subsection{Estimating probabilities}
\label{sec:estimating-probabilities}
We investigated which learner to use as well as the number of actions prior to the action of interest to consider. To properly evaluate our classifiers, we used play-by-play event data for Europe's top five competitions. We trained models on all game states for the 2012/2013 through 2014/2015 seasons and predicted the goal probabilities for all game states for the 2015/2016 season.
First, we investigated which learner to use for this task. Logistic Regression is the prevalent method in the soccer analytics community, while Random Forest and Neural Network are popular choices for addressing machine-learning tasks. We compared the performance of these three learners as implemented in the H2O software package\footnote{\url{https://www.h2o.ai}} on three commonly-used evaluation metrics in probabilistic classification~\citep{ferri2009experimental}: (1) logarithmic loss, (2) area under the receiver operating characteristic curve (ROC AUC), and (3) Brier score. A Random Forest classifier with 1000 trees won on all metrics and achieved a ROC AUC of 79.7\%. Furthermore, it was the best calibrated classifier as shown in Figure~\ref{fig:calibration-classifier}. Our observation that Random Forest outperforms Logistic Regression on the task of probabilistically predicting goals is in line with earlier work~\citep{decroos2017predicting}.
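The experiments above rely on the H2O implementations. Purely as an illustrative stand-in, the scikit-learn sketch below trains a 1000-tree Random Forest and computes the same three metrics; the data here are synthetic placeholders for the real feature matrices described earlier.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss, log_loss, roc_auc_score

rng = np.random.default_rng(0)  # synthetic stand-in data
X_tr, y_tr = rng.normal(size=(2000, 20)), rng.integers(0, 2, 2000)
X_te, y_te = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)

clf = RandomForestClassifier(n_estimators=1000, n_jobs=-1,
                             random_state=0).fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]
print("log loss:", log_loss(y_te, p))
print("ROC AUC :", roc_auc_score(y_te, p))
print("Brier   :", brier_score_loss(y_te, p))
\end{verbatim}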
\begin{figure}[H]
\includegraphics[width=\textwidth]{calibration-tomd.pdf}
\caption{Calibration curves of the three classifiers under consideration. The probabilities produced by the Random Forest model are calibrated better than the probabilities produced by the other two models.}
\label{fig:calibration-classifier}
\end{figure}
Second, we investigated the number of previous actions to consider. Adding too few actions might leave valuable contextual information unused, while adding too many actions can make the feature set unnecessarily noisy. We trained five different Random Forest classifiers, varying the number of previous actions from one through five, as shown in Table~\ref{tbl:eval-actionnb}. We found that three actions is the best number, which is in line with earlier work by~\citet{mackay2017predicting}.
\begin{table}[H]
\centering
\begin{tabular}{crrr}
\toprule
\textbf{Actions} & \textbf{Logarithmic loss} & \textbf{ROC AUC} & \textbf{Brier score} \tabularnewline
\midrule
1 &0.0548 &0.7955 &0.0107 \tabularnewline
2 &0.0546 &0.7973 &0.0107 \tabularnewline
\textbf{3} &\textbf{0.0546} &\textbf{0.7977} &\textbf{0.0107} \tabularnewline
4 &0.0546 &0.7970 &0.0107 \tabularnewline
5 &0.0547 &0.7965 &0.0107 \tabularnewline
\bottomrule
\end{tabular}
\caption{Comparison of five Random Forest models taking into account a varying number of actions prior to the action of interest. For the logarithmic loss and the Brier score a lower value is better, while for the ROC AUC a higher value is better. The best results are in bold.}
\label{tbl:eval-actionnb}
\end{table}
\subsection{Rating players}
To this point, our method assigns a value to each individual action. However, our method also allows aggregating the individual action values into a player rating for multiple time granularities as well as along several different dimensions. A player rating could be derived for any given time frame, where the most natural ones would include a time window within a game, an entire game, or an entire season. Regardless of the given time frame, we compute a player rating in the same manner. Since spending more time on the pitch offers more opportunities to contribute, we compute the player ratings per 90 minutes of game time. For each player, we first sum the values of all the actions performed during the given time frame, then divide this sum by the total number of minutes he or she played, and finally multiply this ratio by 90.
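A minimal sketch of this aggregation, assuming a list of (player, action value) pairs for the chosen time frame and a mapping from players to minutes played; both inputs are placeholders for whatever bookkeeping the surrounding pipeline provides.
\begin{verbatim}
from collections import defaultdict

def ratings_per_90(action_values, minutes_played):
    """Sum each player's action values, normalized per 90 minutes."""
    totals = defaultdict(float)
    for player, value in action_values:
        totals[player] += value
    return {p: v / minutes_played[p] * 90.0
            for p, v in totals.items() if minutes_played[p] > 0}
\end{verbatim}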
Players can also be compared along several different axes. First, players have different positions, and the range of values for the rating may be position dependent. Therefore, comparisons could be done on a per-position basis. Similarly, some players are versatile and what position they play may vary depending on the game. Therefore, it may be interesting to examine a player's rating for each position he or she plays. Second, instead of summing over all actions, it is possible to compute a player's rating for each action type. This would allow constructing a player profile, which may enable identifying different playing styles.
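Analogously, a per-action-type profile for a single player can be obtained by grouping on the action type rather than summing over all actions; a sketch under the same assumptions:
\begin{verbatim}
from collections import defaultdict

def type_profile(actions_with_values, minutes_played):
    """Total value per 90 minutes per action type for one player."""
    totals = defaultdict(float)
    for action, value in actions_with_values:
        totals[action.type] += value
    return {t: v / minutes_played * 90.0 for t, v in totals.items()}
\end{verbatim}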
\section{\frameworkname: A framework for valuing player actions}
\label{sec:framework}
Broadly speaking, most actions in a soccer game are performed with the intention of (1) increasing the chance of scoring a goal, or (2) decreasing the chance of conceding a goal. Given that the influence of most actions is temporally limited, one way to assess an action's effect is by calculating how much it alters the chances of both scoring and conceding a goal in the near future. We treat the effect of an action on scoring and conceding separately as these effects may be asymmetric in nature and context dependent.
In this section, we introduce the \frameworkname (\frameworkfull) framework for valuing actions performed by players. In our framework, valuing an action boils down to estimating the probabilities that a team will score and concede a goal in the near future for both the game state before the action was performed and the game state after the action was performed.
Now, we will more formally define our metric. For ease of exposition, we will use $h$ to denote the home team and $v$ the visiting team, and will focus on the perspective of the home team. Given any game state $S_i=<A_1, \ldots, A_{i}>$, we need to estimate the short-term probability of a home goal ($hg$) and a visiting goal ($vg$), which we denote by:
\begin{eqnarray*}
P_{hg}(S_i) &=& P(hg \in F^k_i | S_i) \\
P_{vg}(S_i) &=& P(vg \in F^k_i | S_i)
\end{eqnarray*}
where $F^k_i = <A_{i+1}, \ldots, A_{i+k}>$ is the sequence of $k$ action sets that follow action set $A_i$, and $k$ is a user-defined parameter. These probabilities form the basis of our action-rating framework.
Valuing an action requires assessing the \emph{change} in probability for both $P_{hg}$ and $P_{vg}$ as a result of action set $A_i$ moving the game from state $S_{i-1}$ to state $S_i$.\footnote{The challenge of distributing the payoffs of the joint actions that a group takes across the individuals constituting the group goes beyond the scope of this paper but is a well-studied topic in the field of cooperative game theory~\citep{driessen2013cooperative}. The Shapley value is one possible solution to this challenge and has been successfully applied to soccer already~\citep{altman2016finding}.} The change in probability of the home team scoring can be computed as:
\begin{equation*}
\Delta P_{hg} = P_{hg}(S_i) - P_{hg}(S_{i-1}).
\end{equation*}
\noindent This change will be positive if the action increased the probability that the home team will score. The change can be computed in an analogous manner for $P_{vg}$ as:
\begin{equation*}
\Delta P_{vg} = P_{vg}(S_i) - P_{vg}(S_{i-1}).
\end{equation*}
Finally, before combining these two terms, we must contend with the subtlety that the ball may change possession as a result of $A_i$. To account for this, we always normalize the value to be computed from the perspective of the team that has possession after the $i^{th}$ action set. If the home team has possession after action set $A_i$, then the value is calculated as:
\begin{equation*}
V(A_i) = \Delta P_{hg} - \Delta P_{vg}
\end{equation*}
For this valuing scheme, higher scores represent more valuable actions, so the change in $P_{vg}$ is subtracted from the change in $P_{hg}$ because it is advantageous for the home team to decrease its chance of conceding. If the visiting team had possession after action set $A_i$, the two terms would be swapped.
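Once the four probability estimates are available, the value computation itself takes only a few lines. A minimal sketch, with the probabilities assumed to come from a predictive model such as the one described in Section~\ref{sec:algorithm}:
\begin{verbatim}
def action_value(p_hg_before, p_vg_before,
                 p_hg_after, p_vg_after, home_in_possession):
    """V(A_i) from the goal probabilities before and after A_i."""
    d_hg = p_hg_after - p_hg_before  # change in P(home scores soon)
    d_vg = p_vg_after - p_vg_before  # change in P(visitors score soon)
    # Normalize to the perspective of the team in possession after A_i.
    return d_hg - d_vg if home_in_possession else d_vg - d_hg
\end{verbatim}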
The \frameworkname framework provides a simple approach to valuing actions that is independent of the representation used to describe the actions. The strength of the framework lies in the fact that it transforms the subjective task of valuing an action into the objective task of predicting the likelihood of a future event in a natural way. One possible limitation is that game-state transitions correspond to on-the-ball actions, whereas some off-the-ball actions (e.g., a smart overlap from a wing-back) can span several consecutive on-the-ball actions. As a result, accurately valuing such off-the-ball actions would require the additional step of aggregating the values of the constituting subactions.
\subsection{Intuition behind the action values}
\label{sec:results-intuition}
Figure~\ref{fig:de-bruyne} visualizes the goal from Manchester City midfielder Kevin De Bruyne against Arsenal on Sunday November 5th 2017. The table at the top of the figure shows the action values assigned to the shot that resulted in the goal as well as the twelve prior actions.
\begin{figure*}[h!]
\centering
\includegraphics[width=.8\textwidth]{De_Bruyne.pdf}
\caption{Visualization of Kevin De Bruyne's 19th-minute goal for Manchester City against Arsenal on Sunday November 5th 2017. The table at the top shows the values assigned to each of the actions performed in the build-up to the shot.}
\label{fig:de-bruyne}
\end{figure*}
The attack starts with Argentine forward Sergio Ag\"uero, who first takes on an opponent (Action~1), then dribbles into the box (Action~2), and finally delivers a cross that fails to reach a teammate (Action~3), which gets a negative value of -0.045. The clearance from Arsenal defender Laurent Koscielny (Action~4) is collected by De Bruyne, who attempts a shot on target (Action~5). The Belgian midfielder sees his shot saved by Arsenal goalkeeper Petr Cech (Action~6), whose save gets a positive value of 0.014. However, Manchester City are able to recover the ball, which returns to De Bruyne following passes from Leroy San\'e (Action~7) and Fabian Delph (Action~8). De Bruyne first dribbles a bit towards the middle of the pitch (Action~9) and sets up a one-two pass with teammate Fernandinho (Actions~10~and~11), then dribbles into the box (Action~12), and finally sends the ball into the lower-right corner of the goal with a powerful driven shot (Action~13). The dribble into the box and the shot get positive values of 0.040 and 0.888, respectively.
The attack leading to De Bruyne's goal is a clear example of how our metric works. Actions increasing a team's chances of scoring (e.g., a dribble or pass to a more dangerous location on the pitch like Actions~11~and~12) or decreasing the opponent's chances of scoring (e.g., a clearance and a save by the goalkeeper like Actions~4~and~6) receive positive values, whereas actions decreasing a team's chances of scoring like the failed cross from Ag\"uero (Action~3) receive negative values. In this particular game, the 19th-minute goal from De Bruyne is the highest-valued action, while a 47th-minute foul from Arsenal's Nacho Monreal causing a penalty is the lowest-valued action.